With its dual single-coil pickups and smooth playing feel, the Player Jazz Bass is an inspiring instrument with classic, elevated style and authentic Fender bass tone. Its powerful, rumbling sound is punchy and tight; a growling voice matched with a fast, smooth playing feel for a shot of creative inspiration. Ready for action in the studio or on the stage, the Player Jazz Bass can take anything you can think of—and everything you haven’t yet.
Highlights:

- PLAYER SERIES PICKUPS: Designed for authentic Fender tone—with a bit of an edge—our Player Series pickups keep a foot in the past while looking to the future. (Image: https://www.fmicassets.com/demandware/assets/highlights/electrics/player/player-jazz-bass-pickups-v2.jpg)
- 4-SADDLE BRIDGE: Combining form and function perfectly, the 4-saddle bridge features modern slotted saddles for enhanced tuning stability and punchy attack. (Image: https://www.fmicassets.com/demandware/assets/highlights/electrics/player/player-jazz-bass-bridge-v2.jpg)
- “Modern C” NECK PROFILE: This neck is designed for comfort and performance, with a “Modern C”-shaped profile and a smooth back finish—ideal for almost any playing style. (Image: https://www.fmicassets.com/demandware/assets/highlights/electrics/player/player-jazz-bass-neck.jpg)
- OPEN-GEAR TUNING MACHINES: A classic design element from the earliest days of the electric bass, the open-gear tuning machines offer rock-solid tuning stability. (Image: https://www.fmicassets.com/demandware/assets/highlights/electrics/player/player-jazz-bass-tuning-machines-v2.jpg)
- MORE TRADITIONAL BODY RADII: The Player Jazz Bass body is hand-shaped to original specifications. (Image: https://www.fmicassets.com/demandware/assets/highlights/electrics/player/player-jazz-bass-radii.jpg)
- “F”-STAMPED NECK PLATE: Each Player Jazz Bass includes an “F”-stamped neck plate, leaving no doubt as to the instrument’s pedigree. (Image: https://www.fmicassets.com/demandware/assets/highlights/electrics/player/player-jazz-bass-neckplate-v2.jpg)
The importance of crossroads in faecal marking behaviour of wolves (Canis lupus).
For wolves (Canis lupus), scats play an important role in territorial marking behaviour. Depositing scats at strategic sites such as crossroads and on conspicuous substrates probably increases their effectiveness as visual and olfactory marks. It is therefore likely that scats will be deposited, and will accumulate, at particular crossroads where the probability of being detected by other wolves is greatest. To test this hypothesis, a wolf population in NW Spain was studied for two consecutive years, from May 1998 to March 2000, and the spatial distribution of 311 scats detected along roads (both at and away from crossroads) was analysed. This study was conducted over an area of 12,000 ha in Montes do Invernadeiro Natural Park. The results confirm that wolves preferentially deposit their scats at crossroads (60.1%) and on conspicuous substrates (72.1%). Significantly more scats were found at intersections with numerous, easily passable roads connecting distant territories. Thus, wolves preferentially deposit their faeces at crossroads with high accessibility and driveability. The larger the surface area of a crossroads, the more scats were found. Crossroads are therefore highly strategic points that facilitate the detection of scats.
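As a back-of-the-envelope check on the reported figures (not the authors' actual analysis), a one-sample z-test for proportions shows that 187 of 311 scats at crossroads (~60.1%) is well above what an even split between "at" and "away from" crossroads would predict. The 50/50 null hypothesis is an illustrative assumption, not taken from the paper:

```python
import math

# Reported figures from the study: 311 scats, 60.1% found at crossroads
n = 311
at_crossroads = 187            # ~60.1% of 311
p_hat = at_crossroads / n

# Illustrative null hypothesis: scats equally likely at/away from crossroads
p0 = 0.5
se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
z = (p_hat - p0) / se

print(f"observed proportion = {p_hat:.3f}, z = {z:.2f}")
# z is well above 1.96, so the crossroads preference is significant at the 5% level
```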
Chixoy Communities One Step Closer to Reparations Deal
27 years after the massacre, the Rio Negro community came together to commemorate the tragedy and mourn the victims at the top of Pak’oxom peak.
Photo by James Rodriguez
In an exciting development, Chixoy Dam-affected communities met with officials from Guatemala’s new administration under President Otto Perez Molina on November 22nd. This is the first time the communities have met with the new government to address the legalization and implementation of the Reparations Plan for affected communities. Over the three prior administrations, communities had been negotiating to find consensus on a Reparations Plan, which was finalized in April 2011. The implementation of the Plan has been delayed due to political and economic interests.
“We are hopeful, because government officials from the President’s office promised to sign the legal agreement that would allow all of us to proceed with the Reparations Plan by the end of this year,” said Carlos Chen, who has long worked to ensure that the damages caused by the construction of the dam are repaired. For years, Chixoy Dam-affected communities have been fighting to obtain reparations from the government of Guatemala and from the financiers of the project, the World Bank and the Inter-American Development Bank.
2012 might finally be a good year for Maya-Achi communities, who have suffered displacement from their ancient lands and the massacres of hundreds of children, women and men due to construction of the dam and Guatemala’s civil war. In October communities heard the long-awaited judgment from the Inter-American Court of Human Rights on the Rio Negro massacres case, which began as a petition to the Inter-American Human Rights Commission in 2005. Communities of Rio Negro are some of the 33 communities that were affected by the construction of the Chixoy Dam.
The Rio Negro
Photo by James Rodriguez
“The Inter-American Court of Human Rights ruling is mandatory for Guatemala since Guatemala recognizes the jurisdiction of the Court. The case will not be concluded before the Court until the judgment has been satisfied in full,” said Lewis Gordon, Executive Director of the Environmental Defender Law Center. “There should be no obstacles to the execution of the judgment, if we consider that Guatemala had already recognized, even before the judgment, its international responsibility for some of the violations in the Rio Negro case. In addition, all measures of satisfaction and guarantees of non-repetition, and allowances that were determined by the Court are perfectly executable within the deadlines that were set. However, it is recommended that civil society remain attentive and demand compliance with the judgment.”
“This Court’s ruling is transcendental and opens a way for other communities or individuals whose rights have been violated by the State, allowing recourse to international bodies when the internal justice system fails and States lean in favor of impunity,” said Juan de Dios Garcia, director of The Association for the Integral Development of the Victims of the Violence of the Verapaces, Maya Achí (ADIVIMA). “Perhaps one of the lessons we should learn from Guatemala is that you must ensure that such violations do not continue to be repeated. Unfortunately the attacks against people affected by dams and other human rights defenders are still too frequent in Guatemala.”
The cross reads: On the 13th of March 70 women and 107 children were massacred
Photo by James Rodriguez
The community is now waiting for a meeting with the government on December 4th to address the reparations deal, which they are ready to sign. And they are determined to ensure that the legal agreement gets published in the Central American Daily, the official newspaper.
Barbara Rose Johnston, an anthropologist with the Center for Political Ecology who has worked for years on reparations issues and wrote the Chixoy Dam Legacy Study, said: “While thousands of people, literally, have been moved by this history and have worked to uncover the evidence that the Rio Negro massacres occurred in ways that produced immense profit and power in Guatemala and beyond, it is the massacre survivors – many of whom were children, and their children – whose passion for justice and whose desire for a life and future with dignity must be celebrated. And, for the tens of millions of people in this world similarly displaced and abused in the name of hydrodevelopment, this news offers a glimmer of hope that historical injustices can be acknowledged and meaningful remedy can be secured.”
Tall Tails
The Adventures continue in our newest installment!
On the high seas, Crackitus, Thaddeus and Jasper exchange exaggerated tales of a mighty leviathan... the Kraken! Guest starring the lovely Lisa Foiles (All That, Shiver) as Shelly the Mermaid!
David Pears
David Pears, FBA (8 August 1921 – 1 July 2009) was a British philosopher renowned for his work on Ludwig Wittgenstein.
An Old Boy of Westminster School, he was in the Royal Artillery during World War II, and was seriously injured in a practice gas attack. After leaving the army he studied classics at Balliol College, Oxford, and was then for many years a Student (Fellow) of Christ Church.
Bibliography
Ludwig Wittgenstein. Viking Press 1970.
Motivated Irrationality. Oxford: Clarendon Press 1984.
The False Prison: A Study of the Development of Wittgenstein's Philosophy. 2 vols. Oxford: Oxford University Press 1987/1988.
Hume's System: An Examination of the First Book of His Treatise. Oxford: Oxford University Press 1991.
Paradox and Platitude in Wittgenstein's Philosophy. Oxford: Oxford University Press 2006.
Further reading
David Charles and William Child (Eds.). Wittgensteinian Themes: Essays in Honour of David Pears. Oxford: Oxford University Press 2002.
External links
David Francis Pears 1921–2009 - British Academy Memoir of Pears by Christopher Peacocke, FBA.
Politicians get tour of Duluth-area flooded neighborhoods
DULUTH, Minn. – The Northland’s congressional delegation toured flood-ravaged Duluth neighborhoods Friday and pledged to do all they can to obtain adequate federal assistance for the area.
Federal Emergency Management Agency officials will arrive in Minnesota on Monday to meet with state officials in St. Paul. They’ll begin working in the Duluth area Tuesday, Sen. Amy Klobuchar said.
“I know Duluth will get through this; Duluth has a great spirit,” she said during an afternoon news conference on West Skyline Parkway, feet from a huge washout that has closed the road and cut off several homeowners.
“You might break a few roads, but not the spirit,” she said.
Klobuchar, Sen. Al Franken and Rep. Chip Cravaack met with officials from the city, St. Louis County, the Minnesota Department of Transportation and the Red Cross during their visit.
“We talked dollars with them so they have an idea what Duluth and all of St. Louis County is facing,” city spokeswoman Amy Norris said.
On Thursday, local and state officials released preliminary damage estimates totaling $95 million to $115 million for public structures and roads in Duluth, St. Louis County and the region’s state highways alone. Those estimates are far from complete and do not take into account most damage in other counties or to private property.
In addition to meeting local officials, the three were bused to several sites around the city to see damage caused by Wednesday’s rainfall and flooding.
Cravaack said he had followed the news on the flooding and seen photos and footage of it, but “it looked worse” in person.
“Our thoughts and prayers go out to people affected,” he said.
Cravaack said recovery efforts should concentrate first on public safety and getting people back into their homes. After that, efforts can move on to repairing other infrastructure.
“We’re all here to help,” he said.
Klobuchar said the damage she saw was horrific.
“It felt like you were walking on the lunar surface,” she said of walking some of the damaged and destroyed Duluth streets with their buckled pavement and sinkholes. “It was an incredible sight.”
She said it was a tribute to Duluth and the surrounding areas that no one was killed in the floods. She, Franken and Cravaack all praised the efforts of local officials, public works and public safety workers.
“We will be supporting them and you,” Franken said.
Kuchera writes for the Duluth News Tribune
Why are women prone to vaginitis?
Vaginitis, an inflammation of the genital tract, is the most common disease in gynecology. It can occur at any age, but is especially common in women of childbearing age. In normal, healthy women a variety of pathogens may be present in the vagina, but the tissue anatomy and biochemical characteristics of the vagina provide a natural defense against them, known as its “self-purification function”. For example, vaginal bacteria decompose the glycogen in vaginal epithelial cells into lactic acid, maintaining the normally acidic environment of the vagina. Factors such as unhygienic sexual activity, childbirth, abortion, vaginal surgery, prolonged uterine bleeding and damage from corrosive drugs can change the vaginal pH, allowing pathogens to invade and cause vaginitis. It typically presents as increased vaginal discharge accompanied by vulvar itching.
Clinically, vaginitis is classified by pathogen into trichomonas vaginitis, candida vaginitis, nonspecific vaginitis and other types. Diagnosis is confirmed through gynecological examination combined with vaginal smears and PCR testing. Treatment is mainly local medication, supplemented with systemic anti-infective treatment. However, because vaginitis recurs easily, medication should be continued for two full courses of treatment, and recovery should only be declared once laboratory tests show no abnormality. In addition, since vaginitis can be sexually transmitted, both partners should be treated together to reduce recurrence.
Why do some women experience lower abdominal pain before and during menstruation?
According to statistics, 80% of women worldwide have some degree of dysmenorrhea, and in roughly three quarters of these cases daily life is affected. In China the incidence is about 33%, and 13.59% of women have severe dysmenorrhea that interferes with work and life. It is a common condition in clinical practice.
Dysmenorrhea is divided into two categories, primary and secondary. Primary dysmenorrhea occurs without pelvic organic disease and is mostly functional; secondary dysmenorrhea results from pelvic organic diseases such as endometriosis, pelvic inflammatory disease, cervical stenosis, uterine malposition or a foreign body in the uterus. The following discusses only primary dysmenorrhea. There are two main causes. One is excessive prostaglandin, which triggers painful uterine spasm. The other is psychological factors such as anxiety, fear, excessive tension and heavy pressure from work, study and daily life, which act through the central nervous system to stimulate the pelvic nerves and cause pain. Diagnosing dysmenorrhea from its symptoms is not difficult, but identifying the specific cause is less easy and may require corresponding examinations, such as a gynecological examination, B-mode ultrasound, pelvic blood flow measurement, basal body temperature measurement and blood prostaglandin determination. Blood prostaglandin determination is currently a main objective clinical indicator, and the level is abnormally high in dysmenorrhea patients.
Once the diagnosis is clear, the treatment principle is symptomatic relief during painful episodes: antispasmodic and analgesic drugs such as ibuprofen or indomethacin suppositories, oral contraceptives, or traditional Chinese medicine prescribed according to differentiation of symptoms and signs, such as formulas for clearing heat, detoxifying and relieving pain. Combined Chinese and Western medicine treatment of dysmenorrhea, together with psychotherapy, gives satisfactory results. However, the treatment effect is poor for congenital uterine malformation or an excessively flexed uterus, for which surgery can be considered when necessary.
Nightwing: A Knight in Blüdhaven
Nightwing: A Knight in Blüdhaven
Frequent Batman ally Nightwing takes matters into his own hands in a new city that is in desperate need of a hero. But Dick Grayson's efforts to shatter Blüdhaven's corruption and organized crime are obstructed at every turn by vicious killers, angry cops and the mysterious mastermind behind it all!
Introduction
============
Lesions of the long head of the biceps (LHB) tendon have been widely considered to be a notable trigger for anterior shoulder pain ([@b1-etm-0-0-8232]). For patients with mild symptoms of tendinopathy or partial LHB tears (\<50% of tendon width), non-surgical treatments, such as rest, physical therapy, non-steroidal anti-inflammatory drug treatment and intra-articular injection of corticosteroids, can be effective; however, for most cases, including partial-thickness LHB tears, LHB instability/subluxation, associated rotator cuff tears, biceps pulley lesions and superior labrum anterior-posterior (SLAP) lesions, surgical intervention remains the preferred method of treatment ([@b2-etm-0-0-8232]--[@b4-etm-0-0-8232]). Biceps tenotomy and tenodesis have become the two most commonly performed surgical procedures for lesions of the LHB tendon ([@b5-etm-0-0-8232]). Although tenotomy is a relatively simple and reproducible procedure that can significantly relieve shoulder pain without postoperative rehabilitation, it is only indicated for patients aged over 60 years who are not involved in heavy labor or high-demand activities ([@b6-etm-0-0-8232]). Moreover, tenotomy has a higher incidence of cosmetic deformity (Popeye sign) than tenodesis (43 vs. 8%) ([@b7-etm-0-0-8232]). Therefore, tenodesis is currently the preferred technique for treating LHB lesions, as it provides a better recovery of physical activity, fewer cosmetic deformities and a closer alignment with normal anatomy, despite a longer postoperative rehabilitation time and higher technical demand ([@b8-etm-0-0-8232]).
Numerous techniques have been applied with LHB tenodesis, including arthroscopic techniques and minimally open or open surgeries ([@b9-etm-0-0-8232]). Moreover, tenodesis sites can be positioned in the suprapectoral location just proximal to the pectoralis major tendon, the subpectoral location, or other positions such as the conjoint tendon or soft tissue sites ([@b10-etm-0-0-8232]). Although comparably preferable clinical outcomes have been reported in various studies investigating both open subpectoral biceps tenodesis (OSPBT) and arthroscopic suprapectoral biceps tenodesis (ASPBT), the results are still controversial and there is limited information regarding postoperative complications, such as re-tears, implant failure, nerve and vascular injuries, bicipital groove tenderness, deformities, and postoperative infection and stiffness ([@b11-etm-0-0-8232],[@b12-etm-0-0-8232]).
The present study retrospectively investigated 117 patients who underwent LHB tenodesis. OSPBT and ASPBT were compared in terms of pre- and post-surgery shoulder range of motion (ROM), visual analog scale (VAS) scores, American Shoulder and Elbow Surgeons (ASES) scores, Constant-Murley shoulder outcome scores and postoperative complications. The purpose of the present study was to identify the differences in clinical outcomes and related complications between OSPBT and ASPBT.
Materials and methods
=====================
### Study design and patients
This retrospective, single-center study was performed based on a protocol approved by the institutional review board at The First Affiliated Hospital of Anhui Medical University (Hefei, China), and was in accordance with the Good Clinical Practice guidelines ([@b13-etm-0-0-8232]) and the principles of the Declaration of Helsinki. Medical records of adult patients who had received LHB tenodesis surgeries at the Department of Orthopedics, The First Affiliated Hospital of Anhui Medical University between January 2015 and June 2016 were reviewed (n=259). The inclusion criteria were as follows: The diagnosis of SLAP tears; complete or partial tearing of the LHB; biceps lesions (tenosynovitis); and LHB instability/subluxation or associated rotator cuff tears (small- or medium-sized). The inclusion criteria also required the presence of LHB lesion symptoms and signs, such as anterior shoulder pain, bicipital groove tenderness and positive results from the Speed\'s, Yergason\'s and O\'Brien\'s tests; conservative treatment for at least 3 months; complete clinical evaluations and MRI scans; and a follow-up period of more than 12 months. The exclusion criteria were as follows: Patients \<18 years old; glenoid labrum lesions; glenohumeral instability; preoperative ROM deficit due to frozen shoulder or glenohumeral arthritis; contralateral shoulder injury or surgery; shoulder arthroplasty; massive rotator cuff tear; and neuromuscular disorder-related shoulder pain.
### Grouping and treatments
A total of 117 patients (60 women and 57 men) who met the inclusion and exclusion criteria were enrolled in this study and divided into two groups, the OSPBT group (n=62) and the ASPBT group (n=55). All tenodesis procedures (OSPBT and ASPBT) were performed by the same group of experienced orthopedic surgeons at The First Affiliated Hospital of Anhui Medical University. The choice of surgical technique was determined by surgeon preference.
### Surgical technique and rehabilitation
OSPBT was performed using the surgical technique described by Mazzocca *et al* ([@b14-etm-0-0-8232]). After positioning the upper arm in the external rotation position, the inferior margin of the pectoralis major was palpated and a 2--3 cm incision was made near the inferior margin of the pectoralis major in the axillary region. A Hohmann retractor was placed under the pectoralis major and a Chandler retractor was placed over the medial side of the humerus to enlarge the operative visual field. Subsequently, the LHB was isolated and extracted from the glenohumeral joint and LHB sheath by using a right-angle clamp ([Fig. 1](#f1-etm-0-0-8232){ref-type="fig"}). The end of the LHB (3--4 cm) was removed and the terminal 3 cm of the tendon was stitched using a no. 2 high-strength suture. An appropriately sized interference screw implant (7 mm interference bio-screw; Arthrex GmbH) was used to affix the tendon into the reamed tenodesis site.
ASPBT was performed according to previously reported surgical techniques ([@b15-etm-0-0-8232],[@b16-etm-0-0-8232]). After positioning the upper arm in the external rotation position, a probe was used to locate the major tubercle and medial side of the intertubercular groove. The arthroscope was repositioned into the lateral portal and the biceps tendon was identified in the sheath within the intertubercular groove. As shown in [Fig. 2](#f2-etm-0-0-8232){ref-type="fig"}, coblation was then used to release the biceps tendon from the sheath and an appropriate position for tenodesis was localized proximal to the pectoralis major tendon. Subsequently, a portal was established at this location and a guide wire was placed. A 7.5 mm reamer was drilled in the center of intertubercular groove to the appropriate depth. A polydioxanone suture was used to stabilize the proximal tendon and the Swivelock screw (Arthrex GmbH) was then used to affix the tendon into the reamed tenodesis site. A postoperative X-ray examination was performed to identify the position ([Fig. 3](#f3-etm-0-0-8232){ref-type="fig"}).
Both treatment groups received the same postoperative rehabilitation program. In general, only passive exercises were performed in the first 6 weeks. Thereafter, active-assisted ROM and active exercises were permitted for the subsequent 6 weeks. From the 13th week, patients could begin biceps strengthening exercises. Specifically, for patients with rotator cuff tears and LHB lesions, the wounded shoulder was fixed with an abduction brace for 4 weeks and only passive exercises of elbow joints could be performed for the first 2 of these 4 weeks. Thereafter, passive exercises of the shoulder joints were allowed. For patients without rotator cuff tears, the wounded shoulder was fixed with an abduction brace for 2 weeks and only passive exercises of the shoulder and elbow joints were performed during the first 6 weeks.
### Demographic characteristics and clinical examinations
The demographics of each patient were recorded in detail, including the age, sex, body mass index (BMI), smoking history, dominant shoulder, duration of pain, injury types, operation time and hospital stay. Moreover, clinical examinations of LHB lesions such as shoulder ROM, VAS scores (0, no pain, to 10, most severe pain), ASES scores and Constant-Murley shoulder outcome scores (Constant scores) were investigated preoperatively, as well as at 3, 6 and 12 months post-surgery. All patients received at least 12 months follow-up care after hospital discharge and the patients were advised to attend the associated outpatient clinic to complete these clinical assessments during this period. A total of 12 months following the surgery, the patients were contacted for follow-up using a telephone enquiry investigating abnormal signs of pain, instability or deformity, as had been mutually agreed. All patients were invited to the associated outpatient clinic if any abnormal signs appeared. Comprehensive evaluations and imaging examinations were performed to clarify the injury types and degrees. Furthermore, postoperative complications, including re-tears, implant failure, nerve and vascular injuries, bicipital groove tenderness, deformities (Popeye sign), postoperative infection and stiffness were comprehensively investigated.
### Statistical analysis
Statistical analysis was performed using SPSS software (version 19.0; IBM Corp.). The results are presented as the mean ± SD. Student\'s t-test and one/two-way ANOVAs were applied for continuous data, with Bonferroni post-hoc tests. χ^2^ tests were applied for the categorical data. P\<0.05 was considered to indicate a statistically significant difference.
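As an illustration of the first of these tests, a pooled two-sample Student\'s t statistic can be computed directly. This is a sketch using hypothetical VAS-like scores, not the study\'s data:

```python
import math

def students_t(a, b):
    """Pooled two-sample Student's t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance and standard error of the mean difference
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se

# Hypothetical 3-month VAS scores for two small groups (illustrative only)
group_a = [2, 3, 2, 4, 3]
group_b = [5, 6, 5, 7, 6]
t = students_t(group_a, group_b)
print(f"t = {t:.3f} on {len(group_a) + len(group_b) - 2} df")
# |t| exceeds 2.306, the 5% critical value for 8 df, so the group means differ
```

In practice one would use a statistical package (SPSS here) rather than hand-rolled formulas, but the statistic being reported is the same.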
Results
=======
### Demographic characteristics
A total of 117 adult patients (60 women and 57 men) with LHB lesions who met the inclusion and exclusion criteria were enrolled in the present study and divided into two groups, the OSPBT group (n=62) and the ASPBT group (n=55). The mean age of all 117 patients was 56.51±8.79 years (range, 32--78 years) and there were no significant differences in the mean ages between the OSPBT group (57.36±8.81 years old) and the ASPBT group (55.05±8.74 years old) (P\>0.05). As shown in [Table I](#tI-etm-0-0-8232){ref-type="table"}, there were no significant differences in gender, BMI, dominant shoulder, duration of pain, injury type and operation time between the two groups. The mean number of days of hospital stay in the ASPBT group was significantly lower than that in the OSPBT group (5.4±1.8 vs. 9.3±2.9 days; P\<0.05). All patients had completed at least 12 months of follow-up and the mean lengths of follow-up treatment in the OSPBT group and the ASPBT group were 20.11±7.10 and 20.51±7.47 months, respectively. A total of 34 patients abandoned the follow-up study after 12 months, including 18 patients from the OSPBT group and 16 patients from the ASPBT group.
### Clinical examinations
The clinical examinations, including VAS scores, Constant scores and ASES scores were taken preoperatively, as well as 3, 6 and 12 months post-surgery. VAS scores (0, no pain, to 10, most severe pain) were applied for evaluating shoulder pain. As shown in [Table II](#tII-etm-0-0-8232){ref-type="table"}, the VAS scores in both groups at 3, 6 and 12 months post-surgery were significantly lower than the VAS scores of both groups preoperatively (P\<0.05). At 3 months post-surgery, the VAS score in OSPBT group (2.41±0.76) was significantly lower than that in the ASPBT group (3.59±1.02; P\<0.05). Moreover, there were no significant differences in the VAS scores between the OSPBT group and the ASPBT group preoperatively, at 6 or 12 months post-surgery (P\>0.05). The average Constant scores and ASES scores between the two groups are also presented in [Table II](#tII-etm-0-0-8232){ref-type="table"}. The Constant scores and ASES scores of both groups at 3, 6 and 12 months post-surgery were significantly higher than the respective scores preoperatively in both groups (P\<0.05). However, there were no significant differences observed in the Constant scores and ASES scores between the OSPBT group and the ASPBT group at any stage of the study (P\>0.05).
### ROM
The active ROMs, including forward elevation, abduction and external rotation, were evaluated preoperatively and at 12 months post-surgery. As shown in [Fig. 4](#f4-etm-0-0-8232){ref-type="fig"}, the postoperative active ROMs were significantly higher than the preoperative active ROMs in both groups (P\<0.05). However, there were no significant differences in the preoperative or postoperative active ROMs between the two groups (P\>0.05).
### Postoperative complications
The postoperative complications, including re-tears, deformities (Popeye sign), implant failure, neurovascular injury, postoperative infection, stiffness and bicipital groove tenderness, were comprehensively investigated. As shown in [Table III](#tIII-etm-0-0-8232){ref-type="table"}, there were no incidences of re-tears, deformities (Popeye sign), implant failure, neurovascular injury or postoperative infection. Moreover, the incidence of postoperative stiffness in the OSPBT group (3, 5.5%) was significantly lower than that in the ASPBT group (11, 17.7%; P\<0.05). Furthermore, the incidences of bicipital groove tenderness in both groups at 3, 6 and 12 months post-surgery were significantly lower than those on the day of discharge (P\<0.05). At 3 months post-surgery, the incidence of bicipital groove tenderness in the OSPBT group (10, 16.1%) was significantly lower than that in the ASPBT group (23, 41.8%; P\<0.05). Similarly, at 6 months post-surgery, the incidence of bicipital groove tenderness in the OSPBT group (4, 6.4%) was significantly lower than that in the ASPBT group (12, 21.8%; P\<0.05). However, there was no significant difference in the incidence of bicipital groove tenderness between the OSPBT group and the ASPBT group at 12 months post-surgery (P\>0.05).
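The stiffness comparison can be reproduced with a hand-rolled 2x2 Pearson χ^2^ test using the stiffness counts as reported (3 of 62 OSPBT patients vs. 11 of 55 ASPBT patients). This is a sketch, assuming no continuity correction was applied:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # Expected counts under independence, from the row and column marginals
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Postoperative stiffness: 3/62 in the OSPBT group vs. 11/55 in the ASPBT group
chi2 = chi2_2x2(3, 62 - 3, 11, 55 - 11)
print(f"chi2 = {chi2:.2f}")
# chi2 exceeds 3.84, the 5% critical value for 1 df, consistent with P < 0.05
```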
Discussion
==========
In recent years, various techniques regarding LHB tenodesis have been reported, and among them, bony interference fixation tenodesis (BIFT) is the most widely used technique, exhibiting good clinical outcomes and a low rate of surgical complications ([@b9-etm-0-0-8232],[@b17-etm-0-0-8232]). Furthermore, soft tissue fixation (STT) is associated with excellent performance, without producing subscapular lesions or Popeye\'s deformity ([@b18-etm-0-0-8232]). Hwang *et al* ([@b19-etm-0-0-8232]) suggested that arthroscopic BIFT at the distal bicipital groove produced a greater improvement in the elbow flexion strength index and a lower failure rate than STT. Chiang *et al* ([@b20-etm-0-0-8232]) investigated the biomechanical characteristics of suture anchor and interference screw fixation in subpectoral tenodesis, and reported that both of the techniques led to an equivalent ultimate failure load and stiffness. However, the interference screw fixation technique was associated with significantly less displacement in response to cyclic and failure loading.
Regarding the safety of tenodesis, brachial plexopathy ([@b21-etm-0-0-8232]), musculocutaneous nerve injury and lateral antebrachial cutaneous nerve injury ([@b22-etm-0-0-8232],[@b23-etm-0-0-8232]) have been reported after OSPBT. Ma *et al* ([@b24-etm-0-0-8232]) reported a case of direct musculocutaneous nerve injury in subpectoral tenodesis, whereby the nerve was found wrapped around the LHB during revision surgery. Sethi *et al* ([@b25-etm-0-0-8232]) assessed the risk of neurological injury in open suprapectoral and subpectoral biceps tenodesis in cadavers, and suggested that penetration of the posterior humeral cortex at the suprapectoral location carries a high risk of damaging the axillary nerve due to its proximity, and should be avoided; subpectoral bicortical button fixation drilled uniformly perpendicular to the axis of the humerus is performed in a safe location with respect to the axillary nerve. In the present study, no postoperative complications such as re-tears, deformities (Popeye sign), implant failure, neurovascular injury or postoperative infection were observed. Therefore, both suprapectoral and subpectoral tenodesis were deemed safe.
The results showed that the clinical outcomes, including shoulder ROMs, VAS scores, ASES scores and Constant scores, were significantly improved after OSPBT or ASPBT, with no significant differences in the improvement of clinical outcomes between the two groups. However, Gilmer *et al* ([@b26-etm-0-0-8232]) suggested that only 17% of the length of the LHB tendon can be observed arthroscopically, and only 32% even when the tendon is pulled into the joint with an arthroscopic grasper. This indicates that OSPBT may be the optimal method of tenodesis for the complete removal of all hidden biceps lesions and for revision in cases of failed LHB tenodesis ([@b27-etm-0-0-8232]). Kolz *et al* ([@b28-etm-0-0-8232]) compared the mechanical properties of OSPBT and ASPBT, and indicated that the LHB in the suprapectoral region tended to have higher tensile strength than in the subpectoral region, and that LHB tenodesis in the suprapectoral region could withstand higher failure loads and is more arthroscopically accessible. Furthermore, the present study found that the incidences of postoperative stiffness and bicipital groove tenderness in the ASPBT group were significantly higher than those in the OSPBT group, and that the VAS score in the OSPBT group was significantly lower than that in the ASPBT group at 3 months post-surgery. Similarly, Yi *et al* ([@b29-etm-0-0-8232]) reported that VAS scores and tenderness at the bicipital groove were significantly decreased in the OSPBT group at the early stage post-surgery. However, there were no significant differences in ASES and Constant scores in the present study. This suggests that the early advantages of subpectoral tenodesis in VAS score (within 3 months post-surgery) and bicipital groove tenderness (within 6 months post-surgery) were related to the removal of the tendinitic portion of the biceps tendon.
There were several limitations to this study: an insufficient number of enrolled patients; the absence of extended follow-up; a lack of preoperative and postoperative MRI data from the enrolled patients, particularly MRI changes during follow-up; and the fact that the study was not a prospective, randomized controlled trial.
In conclusion, the clinical outcomes, including shoulder ROMs, VAS scores, ASES scores and Constant scores, were significantly improved after OSPBT or ASPBT. Specifically, the VAS score, and the incidences of postoperative stiffness and bicipital groove tenderness in the OSPBT group were significantly lower than those in the ASPBT group at 3 months post-surgery. Moreover, there were no significant differences in the improvement of other clinical outcomes and postoperative complications between the two groups.
Not applicable.
Funding
=======
No funding was received.
Availability of data and materials
==================================
The datasets generated and/or analyzed during the current study are not publicly available due to statutory provisions regarding data and privacy protection but are available from the corresponding author on reasonable request.
Authors\' contributions
=======================
JT, BX and RG were involved in the conception and design of the study; the collection, assembly, analysis and interpretation of the data; and in drafting of the article. They also provided statistical expertise and contributed to the final approval of the article, provision of study materials, technical and logistical support as well as critical revision of the article for important intellectual content. All authors contributed equally to this article.
Ethics approval and consent to participate
==========================================
This study was approved by the ethics committee of The First Affiliated Hospital of Anhui Medical University (protocol no. PJ2014-10-04). Participants provided their written informed consent to participate in this study.
Patient consent for publication
===============================
Not applicable.
Competing interests
===================
The authors declare that they have no competing interests.
{#f1-etm-0-0-8232}
{#f2-etm-0-0-8232}
{#f3-etm-0-0-8232}
{#f4-etm-0-0-8232}
######
Demographic characteristics of patients in the OSPBT group and the ASPBT group.
Variable OSPBT (n=62) ASPBT (n=55) P-value
-------------------------- -------------- -------------- ---------
Age, years 57.36±8.81 55.05±8.74 0.64
Female, n (%) 33 (53.2%) 29 (52.7%) 0.85
BMI, kg/m^2^ 28.38±2.69 28.77±2.41 0.39
Smoking, n (%) 9 (14.5%) 7 (12.7%) 0.72
Dominant shoulder
Right, n (%) 38 (61.3%) 34 (61.8%) 0.81
Duration of pain, months 16.16±7.77 15.74±7.79 0.65
Injury types, n (%)
SLAP tear 30 (48.4%) 20 (36.4%) 0.14
Biceps tear 37 (59.7%) 32 (58.2%) 0.74
Tenosynovitis 9 (14.5%) 5 (9.1%) 0.21
LHB subluxation 18 (29.0%) 15 (27.3%) 0.53
Rotator cuff tear 55 (88.7%) 46 (83.6%) 0.48
Small-sized 26 (41.9%) 21 (38.2%) 0.51
Medium-sized 29 (46.8%) 25 (45.5%) 0.24
Operation time, h 2.63±0.63 3.12±0.75 0.09
Hospital stay, days 9.3±2.9 5.4±1.8 0.03
Follow-up, months 20.11±7.10 20.51±7.47 0.78
Data are presented as the mean ± SD or n (%), and P\<0.05 was considered to indicate a statistically significant difference when comparing the OSPBT group with the ASPBT group. ASPBT, arthroscopic suprapectoral biceps tenodesis; BMI, body mass index; OSPBT, open subpectoral biceps tenodesis; SLAP, superior labrum anterior-posterior.
######
Clinical examinations of patients in the OSPBT group and the ASPBT group.
Variable OSPBT ASPBT
----------------------------- ------------------------------------------------------------------------------------------------------ ----------------------------------------------------------
VAS score
Preoperative 5.02±1.05 4.92±1.51
3 months postoperatively 2.41±0.76^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"},[b](#tfn4-etm-0-0-8232){ref-type="table-fn"}^ 3.59±1.02^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
6 months postoperatively 1.64±0.81^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 1.77±0.81^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
12 months postoperatively 0.95±0.65^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 1.18±1.36^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
Constant score
Preoperative 53.75±7.19 52.08±10.54
3 months postoperatively 63.25±7.01^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 60.61±6.39^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
6 months postoperatively 81.16±6.32^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 78.64±5.14^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
12 months postoperatively 90.71±4.29^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 90.38±3.14^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
ASES score
Preoperative 52.89±8.16 49.51±11.05
3 months postoperatively 68.39±3.98^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 64.84±4.07^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
6 months postoperatively 80.52±5.93^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 78.36±5.53^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
12 months postoperatively 89.05±4.02^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^ 88.51±3.42^[a](#tfn3-etm-0-0-8232){ref-type="table-fn"}^
Data are presented as the mean ± SD or n (%), and P\<0.05 was considered to indicate a statistically significant difference.
P\<0.05 vs. respective preoperative score
P\<0.05 vs. ASPBT. ASES, American Shoulder and Elbow Surgeons; ASPBT, arthroscopic suprapectoral biceps tenodesis; Constant score, Constant-Murley shoulder outcome scores; OSPBT, open subpectoral biceps tenodesis; VAS, visual analog scale.
######
Postoperative complications of patients in the OSPBT group and the ASPBT group.
Variable OSPBT (%) ASPBT(%)
------------------------------------ ------------------------------------------------------------------------------------------------------ ---------------------------------------------------------
Re-tears, n (%) 0 (0) 0 (0)
Popeye sign, n (%) 0 (0) 0 (0)
Implant failure, n (%) 0 (0) 0 (0)
Neurovascular injury, n (%) 0 (0) 0 (0)
Postoperative infection, n (%) 0 (0) 0 (0)
Stiffness, n (%) 3 (5.5)^[b](#tfn7-etm-0-0-8232){ref-type="table-fn"}^ 11 (17.7)
Bicipital groove tenderness, n (%)
Discharge day 39 (62.9) 37 (67.3)
3 months postoperatively 10 (16.1)^[a](#tfn6-etm-0-0-8232){ref-type="table-fn"},[b](#tfn7-etm-0-0-8232){ref-type="table-fn"}^ 23 (41.8)^[a](#tfn6-etm-0-0-8232){ref-type="table-fn"}^
6 months postoperatively 4 (6.4)^[a](#tfn6-etm-0-0-8232){ref-type="table-fn"},[b](#tfn7-etm-0-0-8232){ref-type="table-fn"}^ 12 (21.8)^[a](#tfn6-etm-0-0-8232){ref-type="table-fn"}^
12 months postoperatively 0 (0)^[a](#tfn6-etm-0-0-8232){ref-type="table-fn"}^ 3 (5.4)^[a](#tfn6-etm-0-0-8232){ref-type="table-fn"}^
Data are presented as the mean ± SD or n (%), and P\<0.05 was considered to indicate a statistically significant difference.
P\<0.05 vs. respective group on discharge day
P\<0.05 vs. ASPBT. ASPBT, arthroscopic suprapectoral biceps tenodesis; OSPBT, open subpectoral biceps tenodesis.
About Warner Media, LLC
Even among media titans, this company is a giant. Warner Media is one of the world's largest media conglomerates, with operations spanning television and film. Through subsidiary Turner Broadcasting, the company runs a portfolio of cable TV networks including CNN, TBS, and TNT. The company also operates pay-TV channels HBO and Cinemax. Its Warner Bros. Entertainment arm, meanwhile, includes film studios (Warner Bros. Pictures, New Line Cinema), TV production units (Warner Bros. Television Group), and comic book publisher DC Entertainment. In 2018, Time Warner was bought by AT&T Inc. for $85 billion after a federal judge greenlit the deal over government objections.
Change in Company Type
Time Warner Inc. was taken private in 2018 by AT&T, which acquired the media company for $85 billion; the company subsequently changed its name to Warner Media, LLC (trade style WarnerMedia).
Operations
Warner Media divides its business into three reportable segments: Turner, Home Box Office, and Warner Bros. Turner consists of cable networks and digital media properties. Its businesses and brands include Adult Swim, Boomerang, Cartoon Network, CNN, TBS, TNT, truTV, Turner Classic Movies, and Turner Sports. Its digital properties also include Bleacher Report and the CNN digital network. Digital properties Turner manages or operates for sports leagues include NBA.com, NBA Mobile, NCAA.com, and PGA.com. It earns revenue through subscriptions, advertising, and licensing certain of its owned original programming to international territories and to subscription video-on-demand services.
Home Box Office (known as HBO) consists of pay television and streaming services domestically and premium pay, basic tier television, and streaming services internationally. Its main revenue sources are subscriptions to its HBO, Cinemax, and HBO NOW services, as well as licensing of its original programming to third-party television networks and over-the-top (video-on-demand) services in over 150 countries, including the UK, Australia, France, Germany, and Canada. The segment has approximately 50 million subscribers in the US. The division is behind a whole library of well-loved TV shows, including Game of Thrones, True Detective, and Westworld.
The Warner Bros. segment consists of television, feature film, home video, and video game production and distribution. Its vast content library spans more than 100,000 hours of programming, including over 8,600 films and 5,000 TV shows.
Geographic Reach
Warner Media's content and brands have a truly global reach. However, in fiscal 2016, more than 70% of the company's revenue came from the US and Canada, with Europe accounting for 15%.
Warner Media has offices; studios; technical, production and warehouse spaces and communications facilities in the US, Hong Kong, Chile, Argentina, and the UK. Outside of the US, its portfolio of brands and digital businesses reaches consumers in more than 200 countries and territories.
Sales and Marketing
Warner Media sells and distributes its products through licensing to affiliates and digital distributors, apps, digital storefronts, traditional retailers, the Amazon Prime SVOD (Subscription Video-On-Demand) service, and other exhibitors such as airlines and hotels. In 2018, HBO had approximately 54 million domestic subscribers.
I think the MVP of this team is Herm. He has managed to keep the points off the board, and numerous times has snatched defeat from the jaws of victory. I mean, heck, if it wasn't for Herm the guys on the team might have to play in the playoffs.
MixMAP: An approach to gene level testing of association
*Andrea S Foulkes, Division of Biostatistics, UMass Amherst School of Public Health and Health Sciences; Gregory Matthews, Division of Biostatistics, UMass Amherst School of Public Health and Health Sciences; Muredach P Reilly, Cardiovascular Institute, Perelman School of Medicine at the University of Pennsylvania
Mixed modeling of Meta-Analysis P-values (MixMAP) is a recently described analytic framework for using publicly-available SNP-level summary data from candidate-gene association studies to characterize gene or locus-level associations with a measured trait. The underlying premise of this approach, similar to many clustered data methods, is that SNP-level effects are influenced by latent locus or gene level variables. In this presentation, we describe MixMAP, including a formal hypothesis testing framework with appropriate error control for genome-wide association studies (GWAS). We also describe a mixture model extension for further data exploration and characterization of gene-level associations that has the advantage of providing for more flexible underlying model assumptions. Application of MixMAP and its extensions to the Global Lipids Gene Consortium (GLGC) and the Meta-Analysis of Glucose and Insulin-related Traits Consortium (MAGIC) publicly-available GWAS metadata are presented for illustration. All statistical analysis is performed using R version 2.15.2 and the open-source, publicly-available MixMAP package (http://cran.r-project.org/web/packages/MixMAP/index.html).
Q:
printing a collection item to immediate window in excel vba
I was wondering how to print an item from my collection to the Immediate window in Excel VBA. I want to have either a collection for each collection item or an array for each collection item, whichever is easier to pull information from. Here is some example code of what I'm talking about:
Sub test()
Dim c As Collection
Dim a As Collection
Set a = New Collection
For i = 1 To 10
Set c = New Collection
c.Add Array("value1", "value2", "value3", "value4", "value5"), "key1"
c.Add "value2", "key2"
c.Add "value3", "key3"
c.Add "value4", "key4"
c.Add "value5", "key5"
a.Add c, c.Item(1)
'lets say I wanted to print value4 or value1 from the 1st item
Debug.Print a.Item(1(2))
Next i
End Sub
A:
To add to @Gary's Student's answer, you can't use integers as keys for a collection, so you either cast them to a string using the CStr function or use a dictionary instead. If you decide to use a dictionary, make sure to enable the Microsoft Scripting Runtime (under Tools -> References). I've added some examples below.
Sub collExample()
Dim i As Integer
Dim c As Collection
Set c = New Collection
For i = 1 To 10
c.Add 2 * i, CStr(i)
Next i
'keys can't be integers
'see https://msdn.microsoft.com/en-us/library/vstudio/f26wd2e5(v=vs.100).aspx
For i = 1 To 10
Debug.Print c.Item(CStr(i)) ' retrieve each value by its string key
Next i
End Sub
Sub dictExample()
Dim d As New Dictionary
Dim i As Integer
For i = 1 To 10
d(i) = 2 * i
Next i
Dim k As Variant
For Each k In d
Debug.Print k, d(k)
Next k
Dim coll As New Collection
coll.Add "value1"
coll.Add "value2"
coll.Add "value3"
Set d("list") = coll
Dim newCol As Collection
Set newCol = d("list")
Dim v As Variant
For Each v In newCol
Debug.Print v
Next v
End Sub
Q:
Lazy Loading with Doctrine2 and Symfony2 using DQL
I have a tree structure with a parent field. Currently I am trying to get all parent nodes to display the path to the current node.
Basically I am doing a while-loop to process all nodes.
$current = $node->getParent();
while($current) {
// do something
$current = $current->getParent();
}
Using the default findById method works. Because the entity has some aggregated fields, I am using a custom repository method, to load all basic fields with one query.
public function findNodeByIdWithMeta($id) {
return $this->getEntityManager()
->createQuery('
SELECT p, a, c, cc, ca, pp FROM
TestingNestedObjectBundle:NestedObject p
JOIN p.actions a
LEFT JOIN p.children c
LEFT JOIN c.children cc
LEFT JOIN c.actions ca
LEFT JOIN p.parent pp
WHERE p.id = :id
')
->setParameter('id', $id)
->setHint(
\Doctrine\ORM\Query::HINT_CUSTOM_OUTPUT_WALKER,
'Gedmo\\Translatable\\Query\\TreeWalker\\TranslationWalker'
)
->getOneOrNullResult();
}
With that code, loading the parents fails. I only get the immediate parent (addressed by LEFT JOIN p.parent pp) but not the parents above. E.g. $node->getParent()->getParent() returns null.
What's wrong with my code? Did I misunderstand the lazy loading thing?
Thanks a lot,
Hacksteak
A:
It looks like you are using the adjacency list model for storing trees in a relational database, which in turn means that you will need a join for every level to get all ancestors with a single query.
As you are already using the Doctrine Extension Library I recommend to have a look at the Tree component.
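To illustrate the join-per-level point with the question's own entity: each ancestor generation must be joined explicitly for it to be hydrated by the one query. A sketch (the pp/ppp aliases are illustrative, and one more join is needed per additional level):

```
SELECT p, pp, ppp
FROM TestingNestedObjectBundle:NestedObject p
LEFT JOIN p.parent pp
LEFT JOIN pp.parent ppp
WHERE p.id = :id
```

Any level beyond the deepest join is not fetched by this query and falls back to lazy loading.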
Evaluation of a rapid membrane-based assay (HIV-CHEK) for detection of antibodies to HIV in serum samples from Nairobi.
We evaluated a rapid membrane-based assay (HIV-CHEK) for detection of antibodies to HIV using 737 serum samples in Nairobi, Kenya. The rapid assay had a sensitivity of 96.3% and specificity of 99.8% when compared with enzyme-linked immunosorbent assay (ELISA) and Western blot assay. Results were similar using fresh or previously frozen serum samples, although the latter occasionally left debris on the assay device membrane yielding uninterpretable results. This rapid HIV assay may be of particular use in developing countries where laboratory resources are limited.
July 17, 2006
Dems Fold Over Web Caskets
Oh, now I understand why the DNC pulled its “controversial” web ad about America being on the wrong track. It was not just because of pressure over one controversial image — this shot of caskets coming back from Iraq. It’s because it contained two offensive frames — the second revealing a standing gun and helmet, that classic battlefield tribute to a fallen comrade.
So, once again the crazy liberals are up in arms. They’re screaming how the party is being run by a bunch of spineless wimps, and how, in contrast, Rove and Co. milked the visual hell out of smoldering WTC wreckage. |
The following typescript with handwritten annotations was found among R.J. Hunter’s unpublished notes and is now in PRONI (D4446/A/1/44).
Tobacco pipes in Ireland in the reign of James I
The study of Irish trade in the early seventeenth century is greatly hampered by the scarcity of relevant source materials. However, a unique group of port books for Ulster ports for the years 1612-15 [1] (which for the most part only specify goods in detail for the year Michaelmas 1614 to Michaelmas 1615) yields, when correlated with English port books, [2] detailed evidence of Ulster trade shortly after the British colony there had been established. For any more extended period, or for the rest of Ireland, non-Irish port books have to be used.
This brief note indicates the dimensions of recorded tobacco pipe imports into three Ulster ports. The evidence supplied for the other towns was noted from English port books being searched for another purpose. The table shows ports of arrival and departure with the dates of entry inwards and outwards where available and also the quantities involved. However, port books often use general terms such as ‘and other small necessaries’ which may make them unreliable for the statistical treatment of such commodities as tobacco pipes.
Port of arrival    Date inwards     Quantity    Port of departure, date outwards
Londonderry        10 July 1615     9 gross     London, Ap.-May 1615 [3]
Coleraine          29 July 1615     2 gross     London, Ap.-May 1615 [4]
Carrickfergus      6 Nov. 1614      4 dozen     Beaumaris, 20 Oct. 1614 [5]
Carrickfergus      2 June 1615      4 gross     Barnstable, 22 May 1615 [6]
Dublin             -                4 gross     London, 12 Sept. 1612 [7]
Dublin             -                ½ gross     Chester, 12 Dec. 1614 [8]
Cork               -                2 gross     Bristol, 1 Dec. 1612 [9]
Baltimore          -                1 gross     London, 12 Aug. 1615 [10]
The quantities of tobacco entering Ulster ports in these years were also small, by far the largest consignment being one of 50 lbs. which arrived in Londonderry on the Daniel of Leith in May 1615. [11] Nonetheless, the fact that William Temple, provost of Trinity College, Dublin, issued a statute, c. 1613, forbidding the use of tobacco there [12] suggests that the habit was becoming fashionable. An impost on tobacco pipes and tobacco imported into Ireland was established in 1614. [13] The impression left by the English port books, [14] however, is that exports of tobacco from England to Ireland had probably greatly increased by the 1630s. This may also be true of direct imports.
The only historical evidence for the manufacture of tobacco pipes in Ireland in this period appears to consist of a licence granted in 1617 to J. Coker of Dublin to manufacture and sell tobacco pipes for twenty-one years at a rent of £10. [15] It is possible that he did indeed engage in pipe making; if so, internal trade facilities were such that his pipes could have received a wide distribution.
2. Public Record Office, London (now The National Archives), E190, passim. The equivalent Scottish sources, customs books (Scottish Record Office, Edinburgh (now the National Archives of Scotland), E71), survive in smaller quantities for this period. I hope to examine later in greater detail the points tentatively approached here for all of Ireland in the first half of the seventeenth century.
Q:
Rails: Authlogic failed login URL?
On a vanilla Authlogic install set up a la Ryan Bates' Railscast #160, when a user goes to log in and the session FAILS, the url changes from
/login
to
/user_session
(Of course, it shows the validation errors and all that jazz.)
I want to keep the URL always at /login, even on failure (and still display the login errors). How would I accomplish this?
PS - You can see this in his Railscast; scrub to 9:33 and watch the URL change on a failure.
A:
Found the routing solution here: Use custom route upon model validation failure
Shame on me for not searching more thoroughly. Any other suggestions welcome, though, as I'm not crazy about the extra routes...
Q:
Comparing objects memory address, Java
Robot r1,r2,r3;
r1=new Robot("Huey",2,3);
r2=new Robot("Louie",5,4);
r3=new Robot("Louie",5,4);
r1=r2;
r2=r3;
r3=r1;
System.out.print(r1==r2);
So this program prints false, but I thought it would print true. It's asking if the memory address of r1 is the same as r2's. Well, r1 is set to equal r2, then r2 is changed to r3, but that shouldn't matter, right? It's still r2 we're comparing it to.
A:
Let's see the situation after each assignment
// r1 - Huey, r2 - Louie1, r3 - Louie2
r1=r2;
// r1 - Louie1, r2 - Louie1, r3 - Louie2
r2=r3;
// r1 - Louie1, r2 - Louie2, r3 - Louie2
r3=r1;
// r1 - Louie1, r2 - Louie2, r3 - Louie1
In the end, r1 is the first 'Louie' instance (former r2) and r2 is the second.
PS I assume I don't need to comment why new Robot("Huey",2,3) == new Robot("Huey",2,3) returns false.
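A minimal, runnable sketch of the walkthrough above (the Robot class here is a hypothetical stand-in, since the original class isn't shown):

```java
// Stripped-down Robot: only identity matters for this demonstration.
class Robot {
    final String name;
    Robot(String name, int x, int y) { this.name = name; }
}

public class Main {
    public static void main(String[] args) {
        Robot r1 = new Robot("Huey", 2, 3);
        Robot r2 = new Robot("Louie", 5, 4);  // Louie #1
        Robot r3 = new Robot("Louie", 5, 4);  // Louie #2, a distinct object

        r1 = r2;  // r1 -> Louie #1
        r2 = r3;  // r2 -> Louie #2
        r3 = r1;  // r3 -> Louie #1

        System.out.println(r1 == r2);  // false: two distinct objects
        System.out.println(r1 == r3);  // true: same object (Louie #1)
    }
}
```

`==` never looks at field values; it only compares which object each reference currently points to.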
{
"status_code": 200,
"data": {
"ReplicationInstance": {
"MultiAZ": false,
"AvailabilityZone": "us-east-1b",
"ReplicationInstancePrivateIpAddress": "172.31.27.105",
"ReplicationInstanceArn": "arn:aws:dms:us-east-1:644160558196:rep:NNH7KJHLE6R7HAL7BZWOVUQHZY",
"ReplicationInstancePrivateIpAddresses": [
"172.31.27.105"
],
"ReplicationInstanceClass": "dms.t2.medium",
"ReplicationSubnetGroup": {
"ReplicationSubnetGroupDescription": "default group created by console for vpc id vpc-d2d616b5",
"Subnets": [
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-3a334610",
"SubnetAvailabilityZone": {
"Name": "us-east-1d"
}
},
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-efbcccb7",
"SubnetAvailabilityZone": {
"Name": "us-east-1b"
}
},
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-e3b194de",
"SubnetAvailabilityZone": {
"Name": "us-east-1e"
}
},
{
"SubnetStatus": "Active",
"SubnetIdentifier": "subnet-914763e7",
"SubnetAvailabilityZone": {
"Name": "us-east-1a"
}
}
],
"VpcId": "vpc-d2d616b5",
"SubnetGroupStatus": "Complete",
"ReplicationSubnetGroupIdentifier": "default-vpc-d2d616b5"
},
"AutoMinorVersionUpgrade": true,
"ReplicationInstanceStatus": "deleting",
"VpcSecurityGroups": [
{
"Status": "active",
"VpcSecurityGroupId": "sg-c63712b4"
}
],
"KmsKeyId": "arn:aws:kms:us-east-1:644160558196:key/f1f33a6b-91aa-4b0a-904c-f2a0378277f0",
"InstanceCreateTime": {
"hour": 5,
"__class__": "datetime",
"month": 10,
"second": 18,
"microsecond": 283000,
"year": 2017,
"day": 31,
"minute": 6
},
"ReplicationInstancePublicIpAddress": "54.156.233.85",
"AllocatedStorage": 50,
"EngineVersion": "2.3.0",
"ReplicationInstancePublicIpAddresses": [
"54.156.233.85"
],
"ReplicationInstanceIdentifier": "replication-instance-1",
"PubliclyAccessible": true,
"PreferredMaintenanceWindow": "wed:23:08-wed:23:38",
"PendingModifiedValues": {}
},
"ResponseMetadata": {
"RetryAttempts": 0,
"HTTPStatusCode": 200,
"RequestId": "51a71ad6-be26-11e7-88fb-33b14e65bc19",
"HTTPHeaders": {
"x-amzn-requestid": "51a71ad6-be26-11e7-88fb-33b14e65bc19",
"date": "Tue, 31 Oct 2017 10:29:04 GMT",
"content-length": "1807",
"content-type": "application/x-amz-json-1.1"
}
}
}
}
Q:
Count the number of occurences of a pattern in a list in Python
Given a pattern [1,1,0,1,1], and a binary list of length 100, [0,1,1,0,0,...,0,1], I want to count the number of occurrences of this pattern in this list. Is there a simple way to do this without the need to track each item at every index with a variable?
Note something like this, [...,1, 1, 0, 1, 1, 1, 1, 0, 1, 1,...,0] can occur but this should be counted as 2 occurrences.
A:
Convert your list to a string using join. Then do:
text.count(pattern)
If you need to count overlapping matches then you will have to use regex matching or define your own function.
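For the overlapping case, one regex option is a zero-width lookahead, which matches at every starting position instead of consuming each match:

```python
import re

text = "1101111011"   # the question's example list, joined into a string
pattern = "11011"

# str.count finds non-overlapping occurrences only
print(text.count(pattern))                                 # 2

# a zero-width lookahead counts overlapping occurrences as well
print(len(re.findall(f"(?={re.escape(pattern)})", text)))  # 2

# the difference shows up when matches actually overlap:
overlapping = "11011011011"   # pattern starts at indices 0, 3 and 6
print(overlapping.count(pattern))                                 # 2
print(len(re.findall(f"(?={re.escape(pattern)})", overlapping)))  # 3
```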
Edit
Here is the full code:
def overlapping_occurences(string, sub):
count = start = 0
while True:
start = string.find(sub, start) + 1
if start > 0:
count+=1
else:
return count
given_list = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
pattern = [1,1,0,1,1]
text = ''.join(str(x) for x in given_list)
print(text)
pattern = ''.join(str(x) for x in pattern)
print(pattern)
print(text.count(pattern)) #for no overlapping
print(overlapping_occurences(text, pattern))
A:
You can always use the naive way: loop over slices of the list (the slice that starts at the i-th index and ends at i + [length of the pattern]).
You can improve it, too: notice that if you find an occurrence at index i, you can skip i+1 and i+2 and continue checking from i+3 onwards (meaning you can check whether a sub-pattern of the pattern lets you ease the search).
This costs O(n*m).
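A sketch of that naive sliding-window count, working directly on the list (no string conversion needed, and overlapping occurrences are counted naturally):

```python
def count_pattern(lst, pattern):
    """Count (possibly overlapping) occurrences of `pattern`
    as a contiguous sub-list of `lst`."""
    m = len(pattern)
    # compare the pattern against every length-m window of the list
    return sum(1 for i in range(len(lst) - m + 1) if lst[i:i + m] == pattern)

print(count_pattern([1, 1, 0, 1, 1, 1, 1, 0, 1, 1], [1, 1, 0, 1, 1]))  # 2
```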
You can also use backwards convolution (an FFT-based pattern-matching technique).
This costs O(n*log(n)), which is better.
Applying a propensity score-based weighting model to interrupted time series data: improving causal inference in programme evaluation.
Often, when conducting programme evaluations or studying the effects of policy changes, researchers may only have access to aggregated time series data, presented as observations spanning both the pre- and post-intervention periods. The most basic analytic model using these data requires only a single group and models the intervention effect using repeated measurements of the dependent variable. This model controls for regression to the mean and is likely to detect a treatment effect if it is sufficiently large. However, many potential sources of bias still remain. Adding one or more control groups to this model could strengthen causal inference if the groups are comparable on pre-intervention covariates and level and trend of the dependent variable. If this condition is not met, the validity of the study findings could be called into question. In this paper we describe a propensity score-based weighted regression model, which overcomes these limitations by weighting the control groups to represent the average outcome that the treatment group would have exhibited in the absence of the intervention. We illustrate this technique studying cigarette sales in California before and after the passage of Proposition 99 in 1989. While our results were similar to those of the Synthetic Control method, the weighting approach has the advantage of being technically less complicated, rooted in regression techniques familiar to most researchers, easy to implement using any basic statistical software, may accommodate any number of treatment units, and allows for greater flexibility in the choice of treatment effect estimators.
Q:
TFS 2010 Kerberos Falls Back to NTLM When Using FQDN
We have a Team Foundation Server 2010 set up using Kerberos. If we're accessing it via http://tfsserver:8080/tfs, everything's fine and users were never prompted for credentials. However if accessing it via http://tfsserver.domain.com:8080/tfs, then IE prompts for credentials. Has anyone experienced a similar issue? Thanks!
I tried IE, Firefox and Chrome and got the same result (Kerberos when using machine name, NTLM when using FQDN).
A:
The first thing to check on your client is that the FQDN is in the Trusted Sites zone, and that the Trusted Sites zone is configured for "Automatic logon with current username and password".
I would also be inclined to create an SPN for the url if it does not exist:
http/tfsserver.domain.com:8080
You can show the SPNs like this:
setspn.exe -L tfsserver.domain.com
setspn.exe -L tfsserver.domain.com:8080
setspn.exe -L tfsserver
setspn.exe -L tfsserver:8080
Speaker of the House Nancy Pelosi (D–Calif.) rejected a White House offer on Friday to cut $150 billion in federal spending over 10 years as a part of a possible deal to raise the debt ceiling.
Now, $150 billion might sound like a large amount of money. But relative to how much money the federal government is set to spend over the next 10 years, the White House's proposed cut is roughly equivalent to deciding you'll eat one fewer Chipotle burrito per month for the next decade. That's not going to pay off a maxed-out credit card.
The fact that Pelosi rejected such a comically small reduction without even giving her colleagues the chance to consider it tells you all you need to know about the state of fiscal responsibility in Washington right now.
Bloomberg reports that the White House provided House leaders with roughly $500 billion in possible budgetary offsets on Thursday night, asking that the Pelosi find $150 billion in cuts that her members would support. Both sides are continuing to negotiate in advance of a planned vote on raising budget caps and the debt limit next week. The Treasury has been using so-called "extraordinary measures" to deal with the debt limit since March, when the U.S. surpassed the current limit of $22 trillion.
It's possible that spending cuts will be part of whatever final deal is reached, but it's still worth stressing just how absurd a negotiating position Pelosi is taking here—if she does indeed stick to saying that $150 billion is too steep a cut.
The Congressional Budget Office (CBO) projects that the federal government will spend more than $57 trillion over the next decade. A $150 billion cut amounts to less than 0.3 percent of all spending during that time. In the context of a $50,000 annual household budget, that's like cutting about $150 per year—the cost of a single lunch each month.
That's hardly enough to get the federal government out from under $22 trillion in debt. The CBO projects that if current policies stay in place, the government will add another $11.6 trillion to the deficit over the next decade. By 2049, the national debt will be more than one and a half times the size of the entire U.S. economy, breaking a record set during World War II. If a recession hits, those numbers could be worse.
"It's hard to believe there is resistance to finding just $150 billion of offsets over the next decade," comments Maya MacGuineas, president of the Committee for a Responsible Federal Budget. "If Congressional leaders don't like the options suggested by the administration, they should propose alternatives and additions."
MacGuineas points out that $150 billion isn't enough to cover the expected cost of raising the budget caps—meaning that whatever Congress passes next week is almost guaranteed to add to the deficit.
Not that Congress seems to care. There's no political appetite for cutting spending or balancing the budget right now. That's true for both Democrats and Republicans. The latter have finally started admitting publicly that they don't care about deficits anymore, while the former are increasingly pushing for new entitlements that will only make existing budgetary problems worse.
But if Congress and the White House can't agree to cut a relative pittance, there's practically no hope that our elected officials will meaningfully address the debt crisis barrelling our way. |
And following our first report and images of the unexpected Studio Series SS-31 Battle Damaged Megatron, now we have our first in-hand images courtesy of Stanley Cheung from the Hobbymizer Hong Kong Discuss Group on Facebook.
We are sure most fans will be pleased with the new deco and extra details of this release. The new dark gray plastic looks very movie-accurate, plus there are some smart new paint applications on the rest of the body: gold traces and a weathering effect over the body and tank parts. The big long claw of the right hand is now painted gold. Of course, we can't forget the new battle damaged head, as seen in the Revenge Of The Fallen film after the battle with Optimus Prime. We can also confirm these are the only changes to this mold; the rest of the body is exactly the same as the original Studio Series SS-03 Megatron.
Don't wait any longer: check out the mirrored images after the jump, then join the discussion on the 2005 Boards! |
Outcome of 100 pregnancies initiated under treatment with cabergoline in hyperprolactinaemic women.
Data concerning the safety for pregnancy of cabergoline treatment in hyperprolactinaemic women are still scarce. The aim was to exclude a higher than normal risk for miscarriage and congenital malformation in pregnancies initiated under cabergoline treatment. A retrospective study of 100 pregnancies in 72 hyperprolactinaemic women treated with cabergoline at the time of conception and follow-up of the 88 newborn children. Cabergoline was interrupted in 99 pregnancies and continued in one case. Foetal exposure dose to cabergoline was calculated for each pregnancy. Complications of pregnancy and neonatal status were compared to those observed in an age- and delivery-time-matched control group of 163 women. The mean foetal exposure dose to cabergoline was 3.6 +/- 4.7 mg. The rate of spontaneous miscarriages was 10%. Three medical terminations of pregnancy were performed for a foetal malformation (3%). Minor to moderate complications were observed in 31% of the pregnancies, a figure similar to that found in the control group. An increase in tumour size (2-8 mm) was observed in 17/37 evaluated cases, needing reintroduction of cabergoline during pregnancy in five patients. The 84 deliveries resulted in 88 infants, three of them presenting with a malformation (3.4%). Neonatal status was comparable to the control group, where a malformation rate of 6.3% was observed. Postnatal development of the children was normal. Cabergoline treatment at the time of conception appears to be safe for both the pregnancy and the neonate, although more data are still needed on a larger number of pregnancies. |
1. Introduction
===============
The family Asteraceae is the largest Angiosperm group, consisting of approximately 23,000 species distributed in 1,535 genera. It has a cosmopolitan distribution and is found on all continents except Antarctica. South America is home to about 20% of the existing genera. In Brazil, there are approximately 180 genera and 3,000 species distributed throughout the country.
Plants from this family have been extensively studied for their chemical composition and biological activity and some have led to the development of new drugs and insecticides \[[@B1-molecules-16-04828],[@B2-molecules-16-04828],[@B3-molecules-16-04828],[@B4-molecules-16-04828],[@B5-molecules-16-04828],[@B6-molecules-16-04828],[@B7-molecules-16-04828],[@B8-molecules-16-04828],[@B9-molecules-16-04828],[@B10-molecules-16-04828],[@B11-molecules-16-04828],[@B12-molecules-16-04828],[@B13-molecules-16-04828]\].
*Praxelis clematidea* R.M. King & Robinson belongs to the Eupatorieae tribe of the family Asteraceae, and consists of 2,400 species distributed in 170 genera \[[@B14-molecules-16-04828]\]. The species has the following synonyms: *Eupatorium clematideum* Griseb. and *Eupatorium urtifolium* var. *clematideum* (Griseb.) Hieron ex. Kuntze.
It is a perennial weed native to South America and distributed throughout Bolivia, Peru and Argentina. In Brazil, it is found mainly in the states of Bahia, Alagoas, Pernambuco, Paraiba, Amazonas and Mato Grosso \[[@B15-molecules-16-04828]\]. In phytochemical studies, Bohlmann and coworkers \[[@B16-molecules-16-04828]\] isolated *N*-(acetoxy)-jasmonoylphenylalanine-methyl-ester. Gas chromatographic analysis showed the presence of sesquiterpenes and monoterpenes in the essential oil extracted from *P. clematidea*, which also showed a growth-inhibitory effect on two plant species, *Lactuca sativa* and *Brassica campestris*, and on fungal colonies of *Fusarium oxysporum* and *Phytophthora capsici* \[[@B17-molecules-16-04828]\]. A pharmacological study conducted with the aerial parts of this species demonstrated significant gastroprotective activity against gastric ulcers induced in animals with ethanol, stress, and a non-steroidal anti-inflammatory drug \[[@B18-molecules-16-04828]\].
Previous studies on *Praxelis clematidea* do not report the presence of flavonoids, although scientific studies conducted on the family Asteraceae have identified flavonoids as important chemotaxonomic markers of this family \[[@B19-molecules-16-04828]\]. Based on this information, we worked with the aerial parts of *Praxelis clematidea* to isolate compounds belonging to this class of secondary metabolites. This class is increasingly becoming an object of investigation, and many studies have isolated and identified flavonoids that possess antifungal, antiviral and antibacterial activities. In addition, various studies have demonstrated synergy between active flavonoids, and between flavonoids and conventional chemotherapeutic agents \[[@B20-molecules-16-04828],[@B21-molecules-16-04828]\].
The ever increasing bacterial resistance to antibiotics is a serious problem for public health that affects most current antibacterial agents. Efflux pumps are integral proteins of the bacterial membrane and are recognized as one of the major sources of bacterial resistance since they extrude antibiotics from the cell \[[@B22-molecules-16-04828],[@B23-molecules-16-04828]\].
Modulators of antibiotic drug resistance are compounds that potentiate antibiotic activity against resistant strains. Some of these agents act as efflux pump inhibitors (EPIs) \[[@B24-molecules-16-04828],[@B25-molecules-16-04828]\]. Plants provide a rich source of EPIs and several compounds have been identified as potent inhibitors \[[@B26-molecules-16-04828],[@B27-molecules-16-04828],[@B28-molecules-16-04828]\].
The aim of the present work was to isolate and characterize the structure of flavonoids from *Praxelis clematidea* and study their activity as modulators of drug resistance in *Staphylococcus aureus* SA-1199B.
Some methoxylated flavonoids that potentiate the activity of antimicrobial drugs have already been described \[[@B29-molecules-16-04828],[@B30-molecules-16-04828],[@B31-molecules-16-04828],[@B32-molecules-16-04828]\]. However, as far as we know, none of the flavonoids presented here has been previously evaluated. The results add new scientific evidence that flavonoids modulate antibiotic resistance, probably by efflux pump inhibition.
2. Results and Discussion
=========================
The structural identification of the compounds ([Figure 1](#molecules-16-04828-f001){ref-type="fig"}) was carried out based on the analysis of the spectral data and by comparison with the literature \[[@B33-molecules-16-04828],[@B34-molecules-16-04828]\]. The compounds were: (1) apigenin (4',5,7-trihydroxyflavone), (2) genkwanin (4',5-dihydroxy-7-methoxyflavone), (3) 7,4'-dimethylapigenin (5-hydroxy-4',7-dimethoxyflavone), (4) trimethylapigenin (4',5,7-trimethoxyflavone), (5) cirsimaritin (4',5-dihydroxy-6,7-dimethoxyflavone) and (6) tetramethylscutellarein (4',5,6,7-tetramethoxyflavone).
Figure 1. Structures of compounds **1**--**6** (image not reproduced). {#molecules-16-04828-f001}
Methoxylated flavones showed no antibacterial activity at 256 μg/mL against the tested strain of *S. aureus*. When the compounds were added to the growth medium at 64 µg/mL (1/4 MIC), a reduction in the MIC of at least two-fold (and up to 16-fold) was observed for norfloxacin and ethidium bromide ([Table 1](#molecules-16-04828-t001){ref-type="table"}). All experiments were carried out at least twice with consistent results.
molecules-16-04828-t001_Table 1
######
Minimum inhibitory concentrations (MICs) of antibiotics and ethidium bromide against *Staphylococcus aureus* strain SA-1199B, in the absence and presence of flavones.
Flavones   Norfloxacin MIC (µg/mL)   Ethidium bromide MIC (µg/mL)   Pefloxacin MIC (µg/mL)
---------- ------------------------- ------------------------------ ------------------------
None       128                       32                             16
**1**      128                       32                             16
**2**      64 (2×) ^a^               16 (2×)                        16
**3**      64 (2×)                   16 (2×)                        16
**4**      16 (8×)                   8 (4×)                         16
**5**      32 (4×)                   8 (4×)                         16
**6**      8 (16×)                   2 (16×)                        16
^a^ Fold reduction in MIC.
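The fold-reduction values in parentheses in Table 1 are simply the ratio of the baseline MIC (no flavone) to the MIC in the presence of the flavone; a minimal sketch of that arithmetic, with the norfloxacin MICs transcribed from Table 1:

```python
# Norfloxacin MICs (µg/mL) against SA-1199B in the presence of each flavone,
# transcribed from Table 1; the baseline MIC (no flavone) is 128 µg/mL.
baseline_mic = 128
norfloxacin_mic = {"1": 128, "2": 64, "3": 64, "4": 16, "5": 32, "6": 8}

# Fold reduction = MIC without modulator / MIC with modulator.
fold = {c: baseline_mic // mic for c, mic in norfloxacin_mic.items()}
print(fold)  # compound 6 gives the largest (16-fold) reduction
```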
Methoxylated flavones modulate drug activity by reducing the concentration needed to inhibit the growth of the drug-resistant (effluxing) bacteria. This activity may be related to flavonoid lipophilicity due to the presence of methoxyl groups. Lipophilicity is a common feature of several efflux pump inhibitors and may be a key factor for inhibition in Gram-positive bacteria \[[@B28-molecules-16-04828]\].
Ethidium bromide is a well-known substrate for the NorA efflux protein, and active efflux is the only known mechanism of resistance to this DNA-intercalating dye \[[@B35-molecules-16-04828]\]. Therefore, the use of ethidium bromide against the strain SA-1199B was used to demonstrate that the methoxylated flavones evaluated here modulated norfloxacin resistance by efflux pump inhibition.
Pefloxacin, a hydrophobic quinolone, is a poor substrate of the NorA efflux pump, and it was used as a negative control \[[@B25-molecules-16-04828]\]. Reductions in MICs of norfloxacin and ethidium bromide when combined with chlorpromazine or trifluoperazine were also observed (data not shown), and the results were consistent with those reported by Kaatz *et al.* \[[@B24-molecules-16-04828]\] and by Falcão-Silva *et al.* \[[@B32-molecules-16-04828]\]; both phenothiazines were used as positive (internal) control.
The results can be explained by the increasing lipophilicity in the compounds. An analysis of log P values for the compounds (calculated with ChemDraw Ultra 10.0, Cambridge Software) revealed the following order of lipophilicity: **1** (log P 1.9) \< **5** (log P 2.04) \< **2** (log P 2.17) \< **3** (log P 2.43) \< **6** (log P 2.57) \< **4** (log P 2.69). This order explains, in part, the following order of activity: Nor: **1** \< **2**/**3** \< **5** \< **4** \< **6** and EB: **1** \< **2**/**3** \< **5**/**4** \<**6**.
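The lipophilicity ranking quoted above can be reproduced directly from the calculated log P values; a small sketch (the values are the ChemDraw estimates quoted in the text):

```python
# Calculated log P values for compounds 1-6, as quoted in the text.
log_p = {"1": 1.90, "5": 2.04, "2": 2.17, "3": 2.43, "6": 2.57, "4": 2.69}

# Sorting by increasing log P reproduces the order given in the text:
# 1 < 5 < 2 < 3 < 6 < 4.
order = sorted(log_p, key=log_p.get)
print(order)  # ['1', '5', '2', '3', '6', '4']
```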
The importance of a methoxyl in the 4' position \[[@B29-molecules-16-04828]\] was also observed: compounds **3** and **4** were more active than compounds **1** and **2**, and compound **6** was more active than compound **5**. Another important factor was the total number of methoxyls in the flavonoid. In general, compounds **5** and **6** were more active than the other compounds; this is evident in that compound **5** is more lipophilic only than compound **1** and lacks a methoxyl in the 4' position, yet it is more active than compounds **1**, **2** and **3**.
3. Experimental
===============
3.1. General
------------
The NMR spectra were obtained with a Mercury-Varian spectrometer at 200 MHz (^1^H) and 50 MHz (^13^C) and a Varian System spectrometer operating at 500 MHz (^1^H) and 125 MHz (^13^C). The solvents used were CDCl~3~, CD~3~OD and DMSO-*d~6~*, whose characteristic peaks in the ^1^H and ^13^C-NMR spectra were used to calibrate the frequency scale. For column chromatography, silica gel 60 (70-230 mesh) from Merck was utilized as the stationary phase. PF~254~ silica gel from Merck was used for analytical (ATLC) and preparative (PTLC) thin-layer chromatography. The isolated substances were visualized using ultraviolet radiation at wavelengths of 254 and 366 nm and by impregnation of the plates in glass containers saturated with iodine vapor.
3.2. Plant Material
-------------------
The aerial parts of *Praxelis clematidea* R.M. King & Robinson were collected in Lagoa do Paturi, a municipality of Santa Rita, in the state of Paraiba (Brazil), in May 2008. The identification of the botanical material was performed by Prof. Dr. Maria de Fatima Agra, Botany Sector, Laboratory of Pharmaceutical Technology/UFPB "Professor Delby Fernandes de Medeiros". Exsiccates of the plant are deposited in the Prof. Lauro Pires Xavier (JPB) Herbarium, Paraiba Federal University, under the code M. F. Agra *et al*. 6894 (JPB).
3.3. Extraction and Isolation
-----------------------------
The dried and pulverized plant material (aerial parts, 10 kg) was submitted to exhaustive maceration utilizing ethanol as the extraction solvent (3 × 10 L, every 72 h). The ethanolic solution obtained was concentrated in a rotary evaporator under reduced pressure, resulting in a crude ethanolic extract (600 g). This was partitioned with hexane, chloroform and ethyl acetate. The chloroform phase (120.96 g) was submitted to adsorption column chromatography (CC) using silica gel as the stationary phase and chloroform and methanol as the mobile phase, either pure or as binary mixtures of increasing polarity, resulting in 188 fractions. These were analyzed by ATLC and, after examination under UV light and iodine vapor, were classified according to their Rf values into 25 groups. Sub-fractions 10--18 and 64--70 appeared as yellow solids and were identified as compounds **3** and **5**. The subgroup 23--24 was submitted to PTLC, utilizing chloroform and methanol (97:3), furnishing compounds **6**, **2** and **4**. The subgroup 77--80 was submitted to PTLC, using chloroform and methanol (95:5), supplying compound **1**.
3.4. Bacterial Strains
----------------------
The *S. aureus* strain used, SA-1199B, overexpresses the norA gene encoding the NorA efflux protein, which extrudes hydrophilic fluoroquinolones and other drugs such as DNA-intercalating dyes \[[@B24-molecules-16-04828],[@B36-molecules-16-04828]\]. The strain, kindly provided by Professor Simon Gibbons (University of London), was maintained on blood agar base slants (Laboratorios Difco Ltda., Brazil), and prior to use, the cells were grown overnight at 37 °C in brain heart infusion broth (BHI; Laboratorios Difco Ltda., Brazil).
3.5. Antibiotics and Chemicals
------------------------------
Norfloxacin, pefloxacin and ethidium bromide were obtained from Sigma Aldrich Co. Ltd. (USA). The stock solutions of the flavones were prepared in DMSO, and the highest concentration remaining after dilution in broth (4%) caused no inhibition of bacterial growth.
3.6. Drug Susceptibility Testing and Modulation Assay
-----------------------------------------------------
The minimum inhibitory concentrations (MICs) of the antibiotics and flavonoids were determined in BHI by micro-dilution assay, using a suspension of ca. 10^5^ CFU/mL and a drug concentration range from 256 to 0.5 μg/mL (twofold serial dilutions). The MIC is defined as the lowest concentration at which no growth is observed. A solution of resazurin (0.01% w/v in sterile distilled water) was used to detect bacterial growth by a color change from blue to pink. For the evaluation of the flavones as modulators of drug resistance, the "modulation assay" was used, a method that has been widely applied to identify potential EPIs \[[@B29-molecules-16-04828]\]; *i.e.*, the MICs of the antibiotics were determined in the presence of the flavones (in the BHI) at a sub-inhibitory concentration.
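The twofold dilution scheme and the MIC read-out described above can be sketched as follows (the `mic` helper and its growth map are illustrative names, not part of the paper's protocol):

```python
# Twofold serial dilution series from 256 down to 0.5 µg/mL (10 concentrations).
concentrations = [256 / 2**i for i in range(10)]
print(concentrations)  # [256.0, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0, 2.0, 1.0, 0.5]

def mic(growth_by_conc):
    """MIC = lowest concentration at which no growth is observed.
    growth_by_conc maps concentration -> True if the well showed growth."""
    inhibited = [c for c, grew in growth_by_conc.items() if not grew]
    return min(inhibited) if inhibited else None
```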
3.7. Log P Estimation
---------------------
The structures were drawn utilizing the ChemDraw Ultra® 10.0 program (CambridgeSoft, 1986--2005), which also estimates their log P values.
4. Conclusions
==============
Six flavones were isolated from *Praxelis clematidea* and identified through ^1^H and ^13^C-NMR data. Assays carried out with these compounds against the *Staphylococcus aureus* strain SA-1199B, which has NorA efflux pump activity, demonstrated that the most highly methoxylated flavones showed the strongest efflux pump inhibition, i.e., modulation of bacterial resistance. Inhibition of the bacterial transporter is related to the lipophilicity of the compound and might confer selectivity when used with antimicrobials.
J.P.S-J. and V.S.F-S. are very grateful to Simon Gibbons (University of London). The authors thank Maria de Fatima Agra for botanical identification of the species, and CNPq, CAPES, PRONEX/FAPESQ-PB Brazil for financial support.
*Sample Availability:* Samples of the compounds are available from the authors.
|
Q:
How to create an ArcMap Layer from a ArcGIS Map Service
I would like to add an ILayer created from an ArcGIS Server Map service to an IMap with ArcObjects, but don't see how to do it.
I am getting an IMapServer3 with the following code, where mapName = the map service:
serverContext = som.CreateServerContext(mapName, "MapServer");
IServerObject serverObject = serverContext.ServerObject;
IMapServer3 mapServer = (IMapServer3)serverObject;
It looks like I can get an ILayer from an IMapServerGroupLayer, but it looks like the IMapServerGroupLayer is looking for a different connection type than I am using.
If you have an example of getting an ILayer from a Map Service, your assistance is appreciated.
A:
This is what worked...
private static void GetLayerFromMapServerLayer()
{
    // Connection properties for the ArcGIS Server machine.
    IPropertySet props = new PropertySetClass();
    props.SetProperty("machine", "server");

    // Open a connection to the server and enumerate its server objects.
    IAGSServerConnectionFactory pAGSServerConnectionFactory = new AGSServerConnectionFactoryClass();
    IAGSServerConnection pAGSConnection = pAGSServerConnectionFactory.Open(props, 0);
    IAGSEnumServerObjectName pEnumSOName = pAGSConnection.ServerObjectNames;

    // Find the map service by name.
    IAGSServerObjectName pSOName = pEnumSOName.Next();
    while (pSOName != null)
    {
        if (pSOName.Name == "Base_Map")
            break;
        pSOName = pEnumSOName.Next();
    }

    // Note: pSOName will still be null here if no service named "Base_Map" exists.
    IName pName = (IName)pSOName;
    IMapServer mapServer = (IMapServer)pName.Open();

    // Create a map server layer and connect it to the service's default map.
    IMapServerLayer msLyr = new MapServerLayerClass();
    msLyr.ServerConnect(pSOName, mapServer.DefaultMapName);
    ILayer msLayer = (ILayer)msLyr;

    // Add the layer to a map document and save it.
    MapDocument mapDoc = new MapDocumentClass();
    mapDoc.Open(@"F:\~mkoneya~2011_82_13_58_30.mxd");
    IMap myMap = mapDoc.get_Map(0);
    myMap.AddLayer(msLayer);
    mapDoc.Save();
}
|
Simian immunodeficiency viruses (SIV) are a group of primate retroviruses that are morphologically and antigenically related to human immunodeficiency viruses (HIVs). HIV infection in humans is associated with the development of Acquired Immune Deficiency Syndrome (AIDS). The SIV group includes strains isolated from macaques (SIV.sub.mac) (see, e.g., Daniel et al., Science 228, 1201 (1985)); sooty mangabey monkeys (SIV.sub.smm) (see, e.g., Lowenstein et al., Int. J. Cancer 38, 563-574, (1986)); African Green Monkeys (SIV.sub.agm)(see, e.g., Otha et al., Int. J. Cancer 41, 115 (1988)); chimpanzees (SIV.sub.cpz-ant) (PCT application WO 91/19785 published 26 Dec. 1991) and mandrills (SIV.sub.mnd)(Tsujimoto et al., J. Virol. 62, 4044 (1988)). Macaques infected with cultured SIV develop opportunistic infections and other manifestations of immunodeficiency associated with a loss of CD4+ cells.
Both HIV and SIV replicate in vitro in a variety of CD.sub.4 + cell lines and in primary cell cultures. Cellular infection causes drastic cytopathic effects and cytolysis. The cytopathic effects include syncytia formation, which is produced by the interaction of viral envelope glycoproteins (expressed on the surface of the infected cells) and uninfected cells that express CD.sub.4.
HIV and SIV can also establish persistent infections in vitro. See, e.g., Benveniste et al., J. Med. Primatol., 19, 351 (1990); Lairmore et al., Arch. Virol. 121, 43 (1991). Persistently infected cells can produce infectious as well as defective virus particles. HIV mutants defective in the pol region have been obtained from cultures. Folks et al., Science, 231, 601 (1986). Products of the pol virus genome region, including the virus protease enzyme, are required for viral infectivity. See, e.g., Henderson et al., J. Virol., 66, 1856 (1992); Henderson et al., J. Med. Primatol. 19, 411 (1990). A noninfectious HIV mutant able to synthesize all major viral proteins except proteins p64 and p34 is disclosed in U.S. Pat. No. 4,752,565 to Folks et al. A non-infectious mutant HIV virus lacking a functional protease, and a cell line infected with the mutant virus, is described in Benveniste et al., J. Med. Primatol., 19, 351 (1990). Mutant SIV strains producing large amounts of either the envelope glycoprotein gp120 or the nucleic acid binding gag protein are described in Benveniste et al., J. Med. Primatol., 19, 351 (1990).
Additionally, a natural SIV isolated from chimpanzees has been reported as having antigenic properties closer to HIV-1 than to HIV-2, and has been proposed for use in preparing antibodies for diagnostic kits and for developing vaccines against HIV-1. Published application, WPI Acc No: 90-329700/44.
Various strategies are currently being investigated in attempts to develop effective vaccines against retroviruses such as SIV and HIV, including subunit vaccines and live recombinant virus vaccines. Synthetic peptides containing multiple epitopes of a given pathogen are also under investigation for use in vaccines. See, e.g., PCT patent application WO 91/05864, international publication date 2 May 1991.
Inactivated whole-virus vaccines consist of purified preparations of intact viral particles that have been rendered non-infectious by chemical or physical methods. Inactivated SIV viral vaccines have been tested in macaques, and have resulted in the development of high levels of neutralizing antibodies. Johnson et al., Proc. Natl. Acad. Sci. USA, 89, 2175 (1992). While such vaccines are comparatively easy to produce and contain most or all of the important immunological epitopes, production of these vaccines requires the propagation of large amounts of infectious virus. Additionally, the virus must be rendered completely non-infectious without altering various immunological epitopes.
Because both SIV and HIV are spread by contaminated body fluids, immunochemical testing of sera can be used to determine whether animals or humans are infected with SIV or HIV. Immunochemical techniques employ proteins isolated from purified virus particles or infected cell lysates as antigens to detect serum antibodies directed against the virus of interest. These antigens may also be used in competition studies designed to detect the presence of viral antigens. Preparation of the viral proteins requires manipulating large volumes of virus and tissue cultures; if the virus is infectious workers are exposed to a risk of accidental infection. |
Absence of weight loss during Cryptosporidium infection in susceptible mice deficient in Fas-mediated apoptosis.
Apoptosis plays a major role in the development of pathogenesis due to a number of microbial infections. Epithelial cells have been previously shown to die through apoptosis during in vitro infection by the Apicomplexan parasite Cryptosporidium parvum. We now test the possibility that Fas (APO-1/CD95)-dependent apoptosis of uninfected cells, due to enhanced expression of the Fas ligand (FasL) on infected cells, may contribute to the pathology of cryptosporidiosis. Expression of the FasL increased by a large amount on the surface of intestinal epithelial cells infected with C. parvum, and the increase was limited exclusively to infected cells. In addition, a significant increase in FasL expression was observed in epithelial cells from the small intestine of mice infected with C. parvum. Finally, whereas wild-type mice depleted of CD4(+) lymphocytes lost weight during C. parvum infection, CD4(+) cell-depleted lpr mice (deficient in Fas function) infected with C. parvum gained weight at the same rate as undepleted wild-type or lpr mice. These results suggest that bystander Fas-dependent apoptosis of uninfected epithelial cells may exacerbate the weight loss associated with cryptosporidiosis. |
"One of the most addictive role playing games I have ever played"
Note: this is my 350th review. Yay to me.
Introduction
The Final Fantasy series has always been one of my favorite video game series, ever since I first played the original Final Fantasy. While the original game in the series was never one of my favorites, I saw the potential in the series and had a lot of fun playing the game. Several months later, I finally got Final Fantasy 2 and Final Fantasy 3 and the rest, as they say, is history. The Final Fantasy series was thrust into my game playing department, and it has never left.
Out of all the Final Fantasy games, there was one I never really got the chance to play: Final Fantasy 5. The main reason for this is the fact that the game was never released here in America. Back then, I did not have my own computer, so I could not download a ROM and emulate the game. So, I never really had a chance to play the critically acclaimed ''best game in the Final Fantasy series'' until recently, when it was finally released here in America as part of Final Fantasy Anthology.
Final Fantasy Anthology was a game I anxiously awaited ever since I first heard of it. How could I not be excited? My favorite game of all time (Final Fantasy 6) was featured on it, and it also included the infamous Final Fantasy 5, which I had always wanted to play. So, when Final Fantasy Anthology was released, I purchased a copy and went home to play Final Fantasy 5. The rest is gaming history ^_^
The game was a lot of fun to play, and I had a lot of fun playing through it. I was a little disappointed though, but how could I not be? Everyone was calling this the best Final Fantasy game of all time, and I played it, and I did not feel that it was, is, or ever will be, the best Final Fantasy ever made. Final Fantasy 6 still tops the list, in my opinion. Regardless, Final Fantasy 5 is a fantastic game that suffered a bit on its way to Playstation, but is still a fantastic game nevertheless.
Storyline (9.3/10)
The storyline of Final Fantasy 5 never really got hyped as much as it should have, in my opinion. Why do I think this? Well, because I think this game had the best storyline of any of the Final Fantasy games. Well, FF6 still holds the top spot, but FF5 runs a close second. The coolest thing about this game is that Square Soft decided to try something a bit different this time than the normal ''save the princess and find all the crystals'' type things that we had been used to be seeing as a storyline in most of the role playing games.
But Final Fantasy 5 changed that. We did not get the total ''save the princess and get the crystals storyline'' we were so used to seeing. Instead, we were graced with a really unique storyline that dealt with meteorites. Yes, meteorites were featured prominently in this game as one of the main storyline pieces of the game. You actually start off the game in a meteor crater, and throughout the game visit a lot of meteorite spots. Of course, the crystals are still featured in the storyline of the game, they just aren’t the main focus anymore. Gaining the crystals opens up new jobs in the game.
Overall, I think that Final Fantasy 5 has the second best storyline of any Final Fantasy game. Yes, I think that Final Fantasy 6 has, and probably always will have, the best storyline to grace a Final Fantasy game, but the character development and storyline in this game are great as well. The main problem with the game is the fact that the translation is all screwed up. Otherwise, the storyline is fantastic, and completely worthy of the Final Fantasy name.
Graphics (9.2/10)
The first thing you have to remember before booting this game up is the fact that this game was originally a Super Famicom (Japanese version of Super Nintendo) game. Therefore, the graphics in the game are not like a Playstation game's; instead, they are like a Super Nintendo game's. About the only thing that even remotely comes into the Playstation's graphical power ballpark is the added full motion video scenes that Square Soft added to the game. Otherwise, this looks like a Super Nintendo game. And I am so very happy that it does.
Let me get the problems out of the way first. As some of you may know, the Playstation as a video game console is notorious for loading times and slowdown. These two things affected Final Fantasy 6 completely, and I think that totally destroyed the game. They affected Final Fantasy 5 also, but I don't think as badly. Okay, maybe I am just saying that because I have yet to play the Super Famicom version of Final Fantasy 5, but I think that the slowdown during battles is non-existent. On the other hand, the loading times still plague the game dearly, especially when entering the menu screen and when entering and exiting a battle.
Now onto the good things of the game. The character designs in the game are fantastic. A key thing to remember here is the fact that the characters don’t actually have their own personal design really. Instead, they have the design of whatever job they currently are assigned. There are about 22 different jobs in the game, and they all look different. The variety is definitely a good thing. The enemy designs in the game are fantastic, as well. I especially enjoy some of the boss designs in the game. And the final boss simply looks amazing.
Overall, the graphics in the game are awesome, for a Super Nintendo game. The thing you have to realize is the game is basically an ''enhanced graphically'' version of Final Fantasy 4 when it comes to graphics. So, the game does look a lot like Final Fantasy 4. But the graphics in Final Fantasy 4 were pretty good, so these graphics are even better. I like the added full motion video sequences that Square Soft added to the game. I also like how the loading times and slowdown that plagued Final Fantasy 6 for Playstation are not present as badly in this game.
Music and Sound Effects (9.2/10)
The Final Fantasy series has always been one of the best when it comes to music and sound effects, and Final Fantasy 5 is no exception. I hear the game sounds different than the Super Famicom version, but I cannot confirm that at this time. All I know is that the game sounds outstanding, especially the battle theme. The battle theme is one of the few battle themes in a role playing game that did not get annoying for me by the millionth battle. The boss theme in the game is also great.
Music in the game is fantastic. I love the soundtrack, and am glad Square Soft decided to release a separate music CD featuring all the classic tunes from the game with Final Fantasy Anthology. The music is moving, and each piece fits the mood of its scene perfectly. For instance, sad and mellow music plays during the sad moments, while happier, more cheerful music plays when something good happens to the group. The battle and boss themes are top notch, as well. Overall, the music in the game is fantastic.
Sound effects in the game are great, as well. The coolest part of the game’s sound effects, in my opinion, is that sounds actually occur in the overworld. Let’s say you are walking around in the overworld and a story scene occurs in which a volcano erupts. Suddenly, you hear the volcano erupting from a faraway distance. That’s pretty cool. The sound effects during battle are great as well, even if they are just the basic battle sounds featured in a lot of role playing games. Overall, the sound effects in the game are fantastic.
Gameplay and Control (9.8/10)
I used to hear a lot about this game before it was released here in America. The main thing I heard about the game, and the main thing that everyone seemed to love about it, was the job system. And yes, the job system in the game is great. There are other great gameplay features as well, including a top notch battle system and great control. Adding to the fun is the fact that there are a lot of battles to fight, which means you can raise your job levels quicker.
Control: Control in the game is great. The game controls a lot like any other 2D role playing game, and a lot like the Super Famicom version. A disappointment to me is that this game does not support analog control. Oh well, I guess Square Soft wanted to keep the game’s feel as intact as possible. Moving from menu to menu is effortless, and it is easy to control your characters during battle. I also like how they added a run button. Overall, the control in the game is great.
Battle System: One of the coolest parts of Final Fantasy 5, in my opinion, is the amazing battle system. Think of the basic setup as being a lot like the battle system in Final Fantasy 4. Then add to this a time meter, like the one featured in Final Fantasy 6. Then add the ability to do lots of amazing things during battle, ranging from summoning monsters to attacking an enemy three times in a row with your sword. You can do it all in this game. Overall, the battle system in the game is incredible.
Job System: Ahh yes, the famous job system that everyone has talked about so much. After all the hype, I expected an incredibly addicting thing. And that is what it turned out to be. Think of it a lot like the job system featured in Final Fantasy Tactics. During battle, you fight. After winning, you get a certain amount of AP. After getting enough AP, you raise a job level. When you raise a job level, you master a new command. What does this mean? It means that you can now use the mastered command with any job class you want. For instance, you can summon a monster with the samurai class if you raise the summoner’s job level. I spent many hours raising my levels and job levels. Overall, the job system in the game lives up to all the hype, as it is great and incredibly addicting.
Overall, Final Fantasy 5 is simply an incredible game. Besides the cool storyline, the game features outstanding gameplay as well. The job system lives up to all the hype, as it is great and incredibly addicting. The battle system in the game is incredible. The control of the game is awesome, as well, save the one minor flaw that I already pointed out (no analog control). Otherwise, the game is simply incredible, as it features everything needed to make a great role playing game. What a great game this is.

========================
Other Important Scores
========================
Replay Value: High
Final Fantasy 5 could quite possibly be the most addicting role playing game of all time. I have never felt so compelled to keep playing a role playing game in my life. The main reason for this is that the job system is so incredibly fun and addicting. I spent many hours walking around the various places, getting into fights, just so I could raise my levels and job levels. And I never really got bored while doing it. Overall, the replay value in the game is incredible, and some of the best ever in a role playing game.
Challenge: Medium
Let me set the record straight right here: I have never really had any problems completing any of the games in the Final Fantasy series, and Final Fantasy 5 is no exception. I had little trouble going through the first half of the game, mainly because I built up my levels and job levels a lot. But once I got to the second world, the game changed a lot. Suddenly, the battles got much tougher. Some of the bosses killed me with only a few hits. So, what did I do? I raised my levels and job levels some more! I came back and kicked some monster butt, and soon I completed the game (after taking a few tries to beat the final boss, Neo X-Death). Overall, the game is easy in the early going, then gets tough later on.
GOOD POINTS
---------------------
-The storyline in the game is great, as the characters develop a lot before your very eyes.
-The job system in the game is incredible and incredibly addicting.
-The battle system is one of the best I have ever seen in a role playing game.

BAD POINTS
------------------
-The sound effects could have been a bit better.
-There still is noticeable loading time, although it’s not as bad as in the PSX version of FF6.
-There should have been more full motion video scenes included.
Overall (9.8/10)
Overall, Final Fantasy 5 is simply an incredible game. Besides the cool storyline, the game featured outstanding gameplay, as well. The job system lives up to all the hype, as it is great and incredibly addicting. The battle system in the game is incredible. The control of the game is awesome, as well, save the one minor flaw that I already pointed out (no analog control). Otherwise, the game is simply incredible, as it features everything needed to make a great role playing game. What a great game this is.
ulakbus.views.ders.ders module
==============================
.. automodule:: ulakbus.views.ders.ders
:members:
:undoc-members:
:show-inheritance:
From the sounds of it, though, none of them sit squarely in the habitable zone, because of how close their orbits are to TRAPPIST-1. The inner two might still have habitable regions, and the outermost (with the unknown orbital period) might be habitable, considering that it "probably" gleans less radiation than Earth does from our sun.
Come May 4th, astronomers will be able to get a better look at TRAPPIST-1 and observe two of the planets as they transit the star via the Hubble telescope, analyzing their atmospheres and seeing if any water vapor is present. An extended campaign will give NASA a chance to use the James Webb Space Telescope's infrared capabilities to further study their atmospheres.
FSB detains another suspected accomplice in St. Petersburg metro blast
The suspect is believed to have been in contact with the suicide bomber Akbarjon Jalilov
MOSCOW, May 11. /TASS/. Russia’s Federal Security Service (FSB) has detained a citizen of a Central Asian country connected with suicide bomber Akbarjon Jalilov, who blew himself up on a St. Petersburg metro train early last month.
"Russia’s FSB, acting under a special instruction from the Investigative Committee, on May 11 tracked down and detained in Moscow a citizen of one of the Central Asian countries, M.B. Ermatov, involved in the illegal trafficking of explosives. He was a contact of the suicide bomber Akbarjon Jalilov, who on April 3 exploded a makeshift bomb on a St. Petersburg metro train," the FSB’s public relations center said.
"At the moment a set of detective measures and investigative actions is being taken to probe into M. Ermatov’s complicity in the terrorist attack that is being investigated," the FSB said.
Will Thor Die In 'Avengers 4'? The New 'Endgame' Trailer Hints He Might Not Survive This Fight — VIDEO
Marvel released a brand new trailer for its upcoming Avengers installment on Thursday, March 14, and boy, is there a lot to take in. With the end of the world seemingly near and many of the universe's most powerful heroes vanished from existence, fans have been wondering if Thor will die in Avengers 4. While it's unclear exactly what will take place when the film debuts on April 26, this new snippet offers a flashback into the hammer-wielding god's past that makes his future feel a tad bit grim.
The Endgame trailer is full of black and white moments, which appear as a reflection of Thor's time as an Avenger (Iron Man and Captain America's journeys also get this black and white treatment). During one scene, Thor somberly recalls the moment he watched many of his fellow superheroes die during their last quest to save the world. Perhaps it's simply a recap of his storyline, but only time will tell what the outcome will be.
As one of only a handful of survivors, which also includes Black Widow, Captain America, Bruce Banner, and Iron Man, Thor will likely be desperate to bring everyone back and defeat Thanos. And fans can only speculate what may happen next as the group fights to restore the universe. Although the clip manages to keep things very vague when it comes to his future survival, it appears that Thor is very much a strong presence in Endgame. The character notably appears at the end of the snippet alongside Captain Marvel in what appears to be their first meeting.
Marvel Entertainment on YouTube
The post-credits scene of Avengers: Infinity War showed Nick Fury sending a text summoning Captain Marvel's help, and it seems that she couldn't have arrived at a better time to help save the day. In the final moments of Infinity War, Thanos got the sixth and final infinity stone, giving him the power to wipe out half the Earth's population. With it, he took out a myriad of Marvel superheroes including Black Panther, Spider-Man, and Scarlet Witch. It was certainly a devastating moment, not only for the surviving Avengers, but also for MCU fans who had a hard time processing the deaths of so many major Marvel characters.
The new film seems to pick back up right where Avengers 3 left off. The feelings of survivor's remorse are heavy throughout the clip as the remaining Avengers attempt to regroup from the massive tragedy. Despite the traumatic experience, they manage to turn things around as they begin a training mission that will seemingly help them try to settle the score. By the end, their sorrows have turned into strength, resulting in the new resonating mantra: “Whatever it takes.”
Bound by their new agreement, Captain America, Black Widow, Iron Man, Ant-Man, Hawkeye, Nebula, and War Machine are seen donning red and white uniforms — a strong indication that the heroes may be heading into the quantum realm. The quantum realm, which was first introduced in Ant-Man, is an alternate dimension that can be reached by shrinking to a subatomic scale. The rules of space and time do not apply within the realm — a factor which should come in handy when trying to bring back half of Earth's population.
The present invention relates generally to the measuring art, and more specifically to a new and useful system for measuring coating thicknesses utilizing beta backscatter radiation techniques.
The beta backscatter radiation technique for measuring ultra thin coating thicknesses is well known, being disclosed in U.S. Pat. No. 3,132,248 among others, and various types of apparatus have been developed for use in conjunction therewith. U.S. Pat. No. 3,115,577, for example, discloses a measuring table having interchangeable apertured platens for supporting a workpiece to be measured in operative alignment with a radiation source and detector, together with an illuminating arrangement to facilitate positioning of the workpiece. U.S. Pat. No. 3,705,305 shows another table arrangement for supporting a workpiece to be measured, the table carrying a measuring head including a source and a detector, and a sighting device to align the workpiece for measurement. The measuring head is rotatable on the table for supporting the workpiece on the head itself.
For many purposes it is undesirable, if not impractical or impossible, to bring the workpiece to be measured to such a table, and portable probes have been developed which can be positioned either directly on the workpiece under test or on a surface on which such workpiece is placed. U.S. Pat. No. 3,720,833, for example, shows a housing adapted to be placed, for example, directly on a circuit board, the housing containing a locator and a probe including both a radiation source and a radiation detector. When the locator has been used to position the coating under test relative to the housing, it is retracted and the probe is simultaneously lowered into position against the coating area under test, to make the measurement.
Another portable probe is shown in U.S. Pat. No. 3,529,158 wherein a guide receptacle is adapted to be positioned relative to the coating area under test, the guide then being removed from the receptacle, and replaced by a portable probe which also can be installed on a table to receive and support a coated object under test.
East Side and UWM
Milwaukee's East Side, home to the University of Wisconsin-Milwaukee, offers art house cinema, boutique shopping, clusters of night life, dining and more. And the Olmsted-designed Lake Park is a Milwaukee gem.
Articles about East Side and UWM
Looking for a winter activity that's worth coming out of hibernation? Head to The Back Room at Colectivo this Thursday, Jan. 17, for a can't miss show with Rayland Baxter. Here's everything you need to know before the show.
One of the benefits of Milwaukee living is the cost of real estate. While a one-bedroom apartment in New York City may run you more than $1 million, you can - at this very moment - nab a gorgeous and historic three-story upper East Side mansion a block from Lake Park for under $800,000.
Patricia Van Alyea owns the yellow giraffe that's visible from Lincoln Memorial Drive. Recently, she shared its story with OnMilwaukee and, it turns out, the giraffe once shared the yard with live goats.
You can find just about anything at the newly opened Crossroads Collective: ribs, tacos, soup and ice cream. Beginning this weekend, you can also grab a tasty boozy beverage to wash down all that deliciousness - well, if you know where to look.
The building 2140-50 N. Prospect Ave. isn't large, but you can't miss it because of its striking exterior. Built in 1934 as headquarters for the Milwaukee-Western Fuel Company, the building was designed by hometown architect Herbert Tullgren, who drew a two-story Art Deco gem of a building.
Old Milwaukee Facebook group moderator Adam Levin recently posted a message to the group inquiring about the whereabouts of the signage that once adorned the exterior of the iconic Oriental Drugs, 2238 N. Farwell Ave. Soon after, Levin was standing in a West Allis basement next to the vertical blade sign that once hung on the corner of the building, announcing "DRUGS."
As we reach the holiday season, it seems like the perfect time to catch up with John Gurda. To us history-lovers, the Milwaukeean is something of a beardless, more svelte Santa Claus of our own, bringing both longed-for and unexpected treasures in the form of books, articles, talks and television appearances.
---
author:
- Mia Sato Tackney
- 'David C. Woods'
- Ilya Shpitser
bibliography:
- 'library.bib'
title: 'Nonmyopic and pseudo-nonmyopic approaches to optimal sequential design in the presence of covariates'
---
Abstract {#abstract .unnumbered}
========
In sequential experiments, subjects become available for the study over a period of time, and covariates are often measured at the time of arrival. We consider the setting where the sample size is fixed but covariate values are unknown until subjects enrol. Given a model for the outcome, a sequential optimal design approach can be used to allocate treatments to minimize the variance of the treatment effect. We extend existing optimal design methodology so it can be used within a nonmyopic framework, where treatment allocation for the current subject depends not only on the treatments and covariates of the subjects already enrolled in the study, but also the impact of possible future treatment assignments. The nonmyopic approach is computationally expensive as it requires recursive formulae. We propose a pseudo-nonmyopic approach which has a similar aim to the nonmyopic approach, but does not involve recursion and instead relies on simulations of future possible decisions. Our simulation studies show that the myopic approach is the most efficient for the logistic model case with a single binary covariate and binary treatment.\
Keywords: design of experiments, optimal design, dynamic programming, sequential design, coordinate exchange
Introduction
============
How treatments should be allocated in sequential experiments in the presence of covariates is a highly debated topic, particularly within the clinical trials community [@Senn2013; @Rosenberger2008]. We consider experiments where subjects become available sequentially, covariates are measured at the time of arrival, and treatment is assigned soon after. We assume that a response is measured before the next subject arrives, and we assume a fixed sample size. At any point in the experiment, the covariate values for the subjects yet to enrol in the experiment are unknown. Such a set-up is often characteristic of large Phase III trials, but is also common in experiments in the social sciences, such as political psychology lab experiments [@Moore2013]. Covariates should be included in the analysis; omitting them results in bias [@Senn2013], and further, from an optimal design point of view, the allocation of treatment should be done in a way that maintains as equal replication as possible of treatment within covariate groups, which improves precision of the parameter estimates [@Atkinson1982].\
Minimization is an approach aimed at keeping the numbers of treatments approximately equal for each group of subjects who have the same combination of covariate values, and is now used extensively in clinical trials [@Pocock1975; @Taves1974]. It has received some criticism for being based on measures of covariate imbalance which are not theoretically grounded [@Senn2010], and methods based on minimizing the variance of the parameter estimators in statistical models have been suggested instead, originally by @Atkinson1982. Atkinson’s optimal design approach aims to minimize the variance of the estimator of the treatment effect for a linear model which describes the relationship between the treatments, covariates and response. The $D_A$-optimal objective function is used to make decisions for treatment allocation. In Section \[myopic\], we generalize Atkinson’s approach to the logistic model case; the generalization can be applied to any information matrix-based optimality criterion.\
The sequential optimal design approach is myopic in the sense that decisions are made using information about the past subjects’ covariates, treatments and responses and the current subject’s covariates. The decision about the current subject is made assuming that the experiment will terminate after its response is recorded, ignoring the fact that there are further subjects which will enter the trial, and the estimates of interest are based on data from the entire experiment. Nonmyopic approaches are able to consider the potential impact of the current treatment decision on future possible decisions [@Huan2016]. In this paper, we assess whether there is a benefit, in terms of efficiency of the estimators, in taking into account the impact of future possible decisions. This relies on the method of dynamic programming to compute the expected value of the objective function, where the expectation is taken over unknown quantities of future subjects [@Bradley1977 p. 323]. Most applications of nonmyopic approaches in clinical trials aim to maximize some measure of benefit of the treatment to the subject. The Gittins index is an example of such a nonmyopic approach [@Gittens1979; @Smith2018; @Williamson2017; @Villar2018]. Nonmyopic approaches for a clinical-trials-based problem involving covariates, where the objective is related to the estimation of parameters, have not been explored explicitly in the literature. We address this in Section \[nonmyopic\] and compare the myopic and nonmyopic approaches in a simulation study.\
The nonmyopic approach is computationally expensive which limits its use in practical settings. We propose the pseudo-nonmyopic approach in Section \[pseudononmyopic\], which has a similar aim to the nonmyopic approach but does not require recursive formulae. We compare how it fares against the myopic approach in a simulation. We discuss our findings and potential extensions of our work in Section \[discussion\].
Myopic Sequential Design {#myopic}
========================
Optimal Design
--------------
Suppose there are $n$ subjects in total in an experiment, which is fixed from the start. For $i \in \left\{1, ..., n\right\}$, we observe the values of the $s$ covariates associated with unit $i$
$$\bm{z}_i= \begin{pmatrix} z_{i, 1}, ..., z_{i, s} \end{pmatrix}^T,$$
and we select a treatment $t_i$ from a set of possible treatments $\mathcal{T}$. We observe the response $y_i$, which we assume to be binary, with zero being the desirable response. We define the following:
$$\bm{Z}_i = \begin{pmatrix}
\bm{z}_1^T \\
\bm{z}_2^T \\
\vdots \\
\bm{z}_i^T
\end{pmatrix} ,$$
$$\bm{t}_i = \begin{pmatrix}
t_1, t_2, ... , t_i
\end{pmatrix}^T ,$$
$$\bm{y}_i = \begin{pmatrix}
y_1, y_2, ... , y_i
\end{pmatrix}^T ,$$
to be the $i \times s$ matrix of covariate values, the $i$-vector of treatments and $i$-vector of responses, respectively, for subjects 1 up to $i$.\
Given that $y_i$ has distribution
$$\label{dist}
y_i \sim \mbox{Bernoulli}(\pi_i),$$
we assume a logistic regression for the response, where the probability $\pi_i=\mathbb{P} \left( y_i=1 \right)$ is given by
$$\pi_i = \frac{ \exp{\eta_i}}{1+\exp{\eta_i}},$$
where $\eta_i$ is the linear predictor. We assume $\eta_i$ is a linear combination of the intercept, main effects for the covariates and treatment, and potentially interaction terms. We denote the number of terms in the linear predictor by $q$ and assume it takes the following form:
$$\label{predictor}
\eta_j = \bm{x}_j \bm{\beta},$$
where $\bm{x}_j$ is the $j$th row of the $i \times q$ design matrix $\bm{X}_i$ , for $j \in \left\{1, ..., i \right\}$, and we denote by $\bm{\beta}$ the associated $q$-vector of parameter values. We can write the information matrix as $\bm{I} = \bm{X}_i^T \bm{W}_i \bm{X}_i$, where $\bm{W}_i$ is a diagonal matrix with $(j, j)$th entry given by $\hat{\pi}_j(1-\hat{\pi}_j)$. There is a close link between the information matrix and the variance of the parameter values; the maximum likelihood estimator of ${\bm{\beta}}$ has asymptotic variance-covariance matrix given by the inverse of the information matrix:
$$\operatorname{\mathbb{V}ar}(\hat{\bm{\beta}}) =\left( \bm{X}_i^\top \bm{W}_i \bm{X}_i \right) ^{-1}.$$
In optimal design theory, decisions about treatments are made to minimize some function of $\left( \bm{X}_i^\top \bm{W}_i \bm{X}_i \right) ^{-1}$. A $D$-optimal design minimizes the determinant of the inverse of the information matrix, or equivalently, it minimizes the volume of the confidence ellipsoid of $\bm{\beta}$ [@Atkinson2007 p. 53]. The $D$-optimal objective function, assessing the choice of treatments of the subjects enrolled in the study $t_1, ..., t_i$, is given by:
$$\label{D_logit}
\Psi_{D}(\bm{X}_i, \bm{\beta})=\left| \left( \bm{X}_i^T \bm{W}_i \bm{X}_i\right) ^ {-1} \right|,$$
where $\left| \cdot \right|$ denotes the determinant. One may have interest only in a subset of the parameters, or in some linear combination of them. Supposing that interest lies in $m$ linear combinations of $\bm{\beta}$, the quantity of interest can be expressed as $\bm{A}^T \bm{\beta}$, where $\bm{A}$ is a $q \times m$ matrix with $m < q $ [@Atkinson2007 p.137]. The asymptotic variance-covariance matrix for $\bm{A}^T \bm{\beta}$ is given by:
$$\operatorname{\mathbb{V}ar}\left( \bm{A}^T \hat{\bm{\beta}} \right) = \bm{A}^T \left( \bm{X}_i^T \bm{W}_i \bm{X}_i\right) ^ {-1} \bm{A} .$$
In this case, the $D_A$-optimality criterion is more appropriate:
$$\label{D_A_logit}
\Psi_{D_A}(\bm{X}_i, \bm{\beta})= \left| \bm{A}^T \left( \bm{X}_i^T \bm{W}_i \bm{X}_i\right) ^ {-1} \bm{A} \right|.$$
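As a concrete illustration, the two criteria above can be evaluated numerically. The sketch below is not from the paper: the function names and the use of NumPy are our own, and the weights plug in a current parameter estimate for $\bm{\beta}$, as in the text.

```python
import numpy as np

def information_matrix(X, beta):
    """Logistic-model information matrix X^T W X, with W diagonal and
    (j, j)th entry pi_j (1 - pi_j) evaluated at the supplied beta."""
    eta = X @ beta                      # linear predictor
    pi = 1.0 / (1.0 + np.exp(-eta))
    w = pi * (1.0 - pi)
    return (X * w[:, None]).T @ X       # X^T W X without forming W explicitly

def d_objective(X, beta):
    """Psi_D: determinant of the inverse information matrix."""
    return 1.0 / np.linalg.det(information_matrix(X, beta))

def dA_objective(X, beta, A):
    """Psi_{D_A}: determinant of A^T (X^T W X)^{-1} A for a q x m matrix A."""
    M_inv = np.linalg.inv(information_matrix(X, beta))
    return np.linalg.det(A.T @ M_inv @ A)
```

For estimating a single treatment effect, $\bm{A}$ reduces to a $q$-vector with a one in the treatment coordinate, and the $D_A$ objective is then simply the asymptotic variance of that coefficient's estimator.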
In our case, where we are interested in estimating the treatment effect as precisely as possible, $\bm{A}$ is a $q$-vector with a one in the entry corresponding to the treatment effect and zeros elsewhere. If we wish to evaluate the decision for the $i$th treatment given the covariates of subjects 1 up to $i$, $\bm{Z}_{i}$, the treatments of the previous subjects, $\bm{t}_{i-1}$, and the responses of the previous subjects, $\bm{y}_{i-1}$, we can denote the value of a generic objective function evaluated when treatment $t_i$ is assigned to subject $i$ as
$$\label{seq_obj}
{\Psi} ( t_{i} \mid \bm{Z}_{{i}}, \bm{t}_{i-1}, \bm{y}_{i-1}),$$
where $\Psi$ could be the $D$-, the $D_A$-, or some other information matrix-based objective function. In a non-sequential setting, an optimal allocation of treatments $\bm{t}_i$ for a design $\bm{X}_i$ with $i$ subjects can be constructed using the exchange algorithm. See, for example, @Goos2011 [p.36] or @Atkinson2007 [p.172].\
In a sequential setting, extending the work of [@Atkinson1999] for binary treatments, we assign treatment $t \in \mathcal{T}$ to subject $i$ by the probability given by
$$\label{logit_tmt1}
\frac{ {\Psi} ( t_{i}=t \mid \bm{Z}_{{i}}, \bm{t}_{i-1}, \bm{y}_{i-1}) ^ {-1}} { \sum_{t \in \mathcal{T}} {\Psi} ( t_{i}=t \mid \bm{Z}_{{i}}, \bm{t}_{i-1}, \bm{y}_{i-1}) ^ {-1} }.$$
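A minimal sketch of this biased-coin rule follows; the names are illustrative, and the objective values $\Psi(t_i = t \mid \cdot)$ are assumed to have been computed already (for instance with a $D_A$-type criterion). Since smaller objective values are better, each treatment is chosen with probability proportional to the reciprocal of its objective value.

```python
import numpy as np

def allocation_probs(psi):
    """Biased-coin probabilities from objective values.

    psi : dict mapping each candidate treatment t to the value of
          Psi(t_i = t | Z_i, t_{i-1}, y_{i-1}); smaller is better, so
          each treatment gets probability proportional to 1 / Psi.
    """
    inv = {t: 1.0 / v for t, v in psi.items()}
    total = sum(inv.values())
    return {t: p / total for t, p in inv.items()}

def assign_treatment(psi, rng):
    """Draw one treatment according to the biased-coin probabilities."""
    probs = allocation_probs(psi)
    treatments = list(probs)
    idx = rng.choice(len(treatments), p=[probs[t] for t in treatments])
    return treatments[idx]
```

The better treatment is favoured but never certain, which is the point of the biased coin: it protects against selection bias without sacrificing much efficiency.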
This is a biased-coin type approach to treatment allocation, where the optimal decision according to the objective function $\Psi$ is likely to be selected, but there is random variation to avoid any suspicion of selection bias [@Atkinson1982].\
The evaluation of the objective function in \[seq\_obj\] can be problematic in the context of logistic regression for two reasons. Firstly, because we have binary treatments and potentially binary covariates, separation is more likely to occur, where a linear combination of covariates perfectly predicts the response [@Firth1993; @Gelman2008]. Separation can result in the likelihood function becoming monotonic, with maximum likelihood estimates of the regression coefficients tending to plus or minus infinity [@Rainey2016]. Common approaches to dealing with separation include penalizing maximum likelihood estimates to reduce bias, and introducing a prior distribution for the regression coefficients to shrink parameter estimates, particularly large ones, towards zero. The Jeffreys prior is a common choice, and @Gelman2008 recommended independent Cauchy distributions, where the probability density function given the location parameter $x_0$ and scale parameter $\gamma$ is given by:
$$f\left(x \mid x_0, \gamma \right) =\frac{1}{\pi \gamma \left( 1+\left(\frac{x-x_0}{\gamma}\right)^2 \right) }$$
where the intercept has $x_0= 0, \gamma= 10$, and the slope coefficients have $x_0= 0, \gamma= 2$. We use this recommendation by @Gelman2008.\
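The density above is easy to write down directly; the helper below is a hypothetical illustration, using the scale values quoted in the text (not a library routine from the paper).

```python
import numpy as np

def cauchy_pdf(x, x0, gamma):
    """Cauchy density f(x | x0, gamma) as given above."""
    return 1.0 / (np.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

# Scales quoted in the text: gamma = 10 for the intercept,
# gamma = 2 for the slope coefficients, both centred at x0 = 0.
def log_prior(beta):
    """Log of the independent-Cauchy prior for a coefficient vector
    whose first entry is the intercept."""
    return (np.log(cauchy_pdf(beta[0], 0.0, 10.0))
            + np.log(cauchy_pdf(np.asarray(beta[1:]), 0.0, 2.0)).sum())
```

In a penalized fit, this log prior would simply be added to the logistic log-likelihood before maximization.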
Secondly, the objective function in \[seq\_obj\] depends on the values of the model parameters. Therefore, estimates of the parameters are needed in order to design the very experiment aimed at estimating them [@Atkinson2015]. We overcome this by beginning with an initial design in which we use the exchange algorithm to allocate treatments to an initial $n_0$ units, under the assumption that $\bm{\beta}$ is a vector of zeros. Responses are then obtained or generated for the first $n_0$ subjects, and the model is fit to obtain the first estimate of the model parameters. Algorithm \[Algorithm1\] in the Appendix outlines the steps in constructing a sequential optimal design.
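A self-contained toy version of this myopic loop, for a single binary covariate and binary treatment, might look as follows. Everything here is illustrative rather than the authors' implementation: the data-generating parameters are invented, the initial design is randomized instead of using an exchange algorithm, and the fitting routine is passed in as a placeholder.

```python
import numpy as np

def simulate_response(z, t, rng, beta=(-0.5, 0.5, -1.0)):
    """Hypothetical logistic data-generating model for illustration."""
    eta = beta[0] + beta[1] * z + beta[2] * t
    return int(rng.random() < 1.0 / (1.0 + np.exp(-eta)))

def objective(Z, t, beta_hat):
    """D-style objective for design columns (1, z, t) at beta_hat."""
    X = np.column_stack([np.ones(len(t)), Z[:len(t)], t])
    eta = X @ np.asarray(beta_hat)
    pi = 1.0 / (1.0 + np.exp(-eta))
    M = (X * (pi * (1.0 - pi))[:, None]).T @ X
    return 1.0 / max(np.linalg.det(M), 1e-12)   # guard against singularity

def myopic_design(Z, n0, fit, rng):
    """Myopic sequential allocation: initial design of size n0, then a
    biased-coin draw for each arriving subject."""
    n = len(Z)
    t = [int(x) for x in rng.integers(0, 2, size=n0)]   # placeholder initial design
    y = [simulate_response(Z[j], t[j], rng) for j in range(n0)]
    for i in range(n0, n):
        beta_hat = fit(Z[:i], t, y)                     # refit after each response
        # evaluate the objective for each candidate treatment of subject i
        psi = {tt: objective(Z[: i + 1], t + [tt], beta_hat) for tt in (0, 1)}
        p0 = (1 / psi[0]) / (1 / psi[0] + 1 / psi[1])   # biased-coin probability
        t_i = 0 if rng.random() < p0 else 1
        t.append(t_i)
        y.append(simulate_response(Z[i], t_i, rng))
    return t, y
```

In practice `fit` would be a penalized logistic regression (for instance with the Cauchy priors discussed above); here any function returning a working estimate of $\bm{\beta}$ suffices to run the loop.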
Nonmyopic Approach {#nonmyopic}
==================
Having a nonmyopic approach to the treatment allocation problem means that the optimization involves multiple stages. Not only is it important to consider the impact of the decision at the time of subject $i$, but we consider future subjects, possibly up to subject $n$. The number of future subjects considered is called the horizon, denoted $N$. The state at stage $i$ comprises the information that is known at that stage, which in our example includes the values of the covariates of subjects 1 up to $i$, as well as the treatments and responses of subjects 1 up to $i-1$. The decision about the treatment $t_i$ at stage $i$ is made based on the state $S_{i-1} = (\bm{Z}_{i},\bm{t}_{i-1}, \bm{y}_{i-1}).$ Based on that decision, there is a transition function $f_i$ that outputs the state of the next stage, $S_{i}=f_i(S_{i-1}, t_i)$. In our case, this transition function is represented by a logistic model linking the responses to the treatments and covariates. There is a need to balance two conflicting aims in the decision making:
1. The aim to exploit: to choose the treatment that most precisely estimates $\bm{\beta}$ in the current state.
2. The aim to explore: to choose a treatment which may not be optimal given the current state, but may lead to a gain of information and hence more precise estimators in later states.
Dynamic programming is an approach for solving multistage optimization problems (see, for example, [@Powell2009]). The overall problem is broken into different stages, which often correspond to points in time, and each stage of the problem can be optimized conditionally on past states. The key idea is that the overall sequence of decisions for treatment selection will be optimal for the entire experiment [@Bradley1977 p. 320]. The optimal design can be obtained by forward or backward induction. We focus on backward induction since it is the approach that is usually most appropriate in problems involving uncertainties [@Bradley1977 p. 328]. In backward induction, we start by finding the optimal decision at the end of the sequence of decisions, taking into account all possible treatments and covariates that may have been observed up until that point. Then, one can work backwards and obtain the optimal design taking expectations of unknown quantities [@Bradley1977 p.330]. See [@Huan2016] for a recent overview of approximate dynamic programming in the context of Bayesian experimental design. Dynamic programming has been used in some clinical trials applications where one wishes to balance the aim of estimating the parameters (exploration) with the aim of giving subjects the best possible treatment or obtaining maximum total revenue (exploitation). See, for example, [@Cheng2007], @Ondra2019, [@Muller2007] or @Bartroff2010.\
We now describe the nonmyopic approach for the binary response. To keep notation simple, we assume that we have a single binary covariate and a single binary treatment, and we do not consider interactions. We begin by constructing an initial design $\bm{X}_{n_0}$ with $n_0$ subjects using the exchange algorithm. We assume $\bm{\beta}=\bm{0}$ as an initial guess for evaluating the objective function in the construction of $\bm{X}_{n_0}$. We then obtain responses for the first $n_0$ subjects, $\bm{y}_{n_0}$, and fit the model to obtain the initial maximum likelihood estimates of the model parameters, $\hat{\bm{\beta}}_0$.\
Now suppose that we have a design for $i-1$ subjects, and that we have obtained parameter estimates $\hat{\bm{\beta}}_{i-1}$ as a result of that design. We observe covariate value $z_{i}$ for the $i$th subject and wish to evaluate the impact of assigning treatment $t_i$ on decisions about future possible subjects. For example, for horizon $N=1$, we consider the expected value of the objective function after $i+1$ subjects. Suppose treatment $t_i$ is assigned to subject $i$. Since $\Psi$ depends on $y_i$, we need to consider the two possible responses that $y_i$ may take, and then consider the possible values that ${z}_{i+1}$ can take. For a given covariate value ${z}_{i+1}$ for subject $i+1$, we denote by $t^*_{i+1}({z}_{i+1}, t_{i}, y_i \mid \bm{z}_i, \bm{t}_{i-1}, \bm{y}_{i-1})$ the optimal choice of treatment for subject $i+1$ given ${z}_{i+1}$ and $t_i$:
$$t^*_{i+1}({z}_{i+1}, t_{i}, y_i \mid \bm{z}_i, \bm{t}_{i-1}, \bm{y}_{i-1}) =\displaystyle \operatorname*{arg\,min}_{t_{i+1}} \Psi (t_{i+1} \mid \bm{z}_{i+1}, \bm{t}_{i}, \bm{y}_{i}).$$
From here on, we suppress the conditioning and write $t^*_{i+1}({z}_{i+1}, t_{i}, y_i)$ for simplicity. Now, we take the expectation of the objective function over two possible responses which may be obtained to find an expected value of the objective function over the unknown response:
$$\begin{aligned}
\mathbb{E}_{{y}_{i}} \Psi (t_{i+1} \mid \bm{z}_{i+1}, \bm{t}_{i}, \bm{y}_{i})& = \mathbb{P} ({y}_{i}=0 \mid \bm{z}_i, \bm{t}_i, \bm{y}_{i-1}) \Psi (t_{i+1} \mid \bm{z}_{i+1}, \bm{t}_{i}, \bm{y}_{i-1}, y_i = 0 ) \\
&+ \mathbb{P} ({y}_{i}=1 \mid \bm{z}_i, \bm{t}_i, \bm{y}_{i-1}) \Psi (t_{i+1} \mid \bm{z}_{i+1}, \bm{t}_{i}, \bm{y}_{i-1}, y_i = 1 ) , \end{aligned}$$
where $y_{i} \sim$ Bernoulli$(\pi_{i})$ with $\pi_{i}$ given by:
$$\pi_{i} = \frac{\exp \left(\bm{x}_i \hat{\bm{\beta}}_{i-1} \right) }{1+\exp\left( \bm{x}_i \hat{\bm{\beta}}_{i-1} \right)},$$
where $\bm{x}_i = \begin{pmatrix} 1, z_i, t_i \end{pmatrix}$ is the $i$th row of the design matrix. Now, we consider the possible covariate values that we may observe for the next subject. We denote by $ \mathbb{P} ({z}_i = {z}) $ the probability that the $i$th subject has covariate value ${z}$. In some cases, the distribution of the covariates may be known; if not, the distribution can be estimated by the empirical distribution of the covariates of the first $i$ subjects. We denote by $\Psi_1(t_i\mid \bm{z}_i, \bm{t}_{i-1}, \bm{y}_{i-1} )$ the expected value of the objective function when treatment $t_i$ is assigned to subject $i$, taking into account the impact of the decision on one further decision in the future. We obtain an expectation over the possible covariate combinations of the optimality criterion:
$$\begin{aligned}
\Psi_1(t_i\mid \bm{z}_i, \bm{t}_{i-1}, \bm{y}_{i-1}) &= \mathbb{E}_{{z}_{i+1}} \mathbb{E}_{{y}_{i}} \Psi (t^*_{i+1}({z}_{i+1}, t_{i}, y_i) \mid \bm{z}_{i+1}, \bm{t}_{i}, \bm{y}_{i}) \\
&= \displaystyle \sum_{{z} } \mathbb{P} ({z}_{i+1} = {z}) \mathbb{E}_{{y}_{i}} \Psi (t^*_{i+1}({z}_{i+1}, t_{i}, y_i ) \mid \bm{z}_{i}, {z}_{i+1}, \bm{t}_{i}, \bm{y}_{i} ) .\end{aligned}$$
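To make the horizon-1 calculation concrete, the quantity inside these expectations can be evaluated by brute-force enumeration of the two possible responses and the two possible covariate values. The sketch below is illustrative only: the `loss` function is a positive $D$-optimality-style stand-in for $\Psi$ (reciprocal determinant of the Fisher information), all names are ours, and for brevity the parameter estimate is held fixed rather than refitted after each hypothetical response.

```python
import numpy as np

def pi_logit(x, beta):
    """Bernoulli success probability under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

def loss(X, beta):
    """A positive D-optimality-style stand-in for Psi: the reciprocal of
    the determinant of the Fisher information of the logistic model."""
    p = pi_logit(X, beta)
    M = X.T @ np.diag(p * (1 - p)) @ X
    d = np.linalg.det(M)
    return np.inf if d <= 1e-12 else 1.0 / d

def psi1(X, beta, z_i, t_i, zvals=(-1, 1), pz=(0.5, 0.5), tvals=(-1, 1)):
    """Horizon-1 expected loss when treatment t_i is given to subject i,
    averaging over y_i ~ Bernoulli(pi_i) and the covariate z_{i+1}."""
    x_i = np.array([1.0, z_i, t_i])
    p_i = pi_logit(x_i, beta)
    Xi = np.vstack([X, x_i])
    out = 0.0
    for z_next, p_z in zip(zvals, pz):
        for y_i, p_y in ((0, 1 - p_i), (1, p_i)):
            # t*_{i+1}: the best treatment for subject i+1 given z_{i+1};
            # beta-hat is held fixed here, whereas the full method would
            # refit it using the hypothetical response y_i
            best = min(loss(np.vstack([Xi, [1.0, z_next, t]]), beta)
                       for t in tvals)
            out += p_z * p_y * best
    return out
```

The double loop mirrors the two expectations: the outer sum is over the covariate distribution of subject $i+1$, the inner one over the unknown response of subject $i$.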
For a horizon greater than 1, we can use the following recursive relationship to find the optimal treatment for subject $i$. The expected value of the objective function after $i +N$ subjects, when treatment $t_i $ has been assigned, is given as follows:
For $N > 0$:
$$\begin{aligned}
\label{logisfuture}
\begin{split}
\Psi_N(t_i &\mid \bm{z}_i, \bm{t}_{i-1}, \bm{y}_{i-1}) = \mathbb{E}_{{z}_{i+1}} \mathbb{E}_{{y}_{i}} \Psi_{N-1}(t^*_{i+1}({z}_{i+1}, t_{i}, y_i) \mid \bm{z}_{i+1}, \bm{t}_{i}, \bm{y}_{i}) \\
&= \displaystyle \sum_{{z} } \mathbb{P} ({z}_{i+1}= {z}) \mathbb{E}_{{y}_{i}} \Psi_{N-1} (t^*_{i+1}({z}_{i+1}, t_{i}, y_i ) \mid \bm{z}_{i}, {z}_{i+1}, \bm{t}_{i},\bm{y}_{i} ) ,
\end{split}\end{aligned}$$
and for $N=0$, we have $$\begin{aligned}
\Psi_0(t_i \mid \bm{z}_i, \bm{t}_{i-1}, \bm{y}_{i-1}) =\Psi(t_i \mid \bm{z}_i, \bm{t}_{i-1}, \bm{y}_{i-1}) ,\end{aligned}$$
which is simply the myopic loss after $i$ subjects. We note that the nonmyopic approach for the logistic model case is considerably more computationally intensive than the myopic approach.
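This recursion maps directly onto a recursive function. The sketch below is ours, not the paper's code: `loss` is a positive $D$-optimality-style stand-in for $\Psi$, and $\hat{\bm{\beta}}$ is held fixed rather than refitted after each hypothetical response, so what the example is meant to show is the branching structure of the backward induction.

```python
import numpy as np
from itertools import product

def pi_logit(x, beta):
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

def loss(X, beta):
    """Positive D-optimality-style stand-in for Psi (reciprocal
    determinant of the Fisher information)."""
    p = pi_logit(X, beta)
    M = X.T @ np.diag(p * (1 - p)) @ X
    d = np.linalg.det(M)
    return np.inf if d <= 1e-12 else 1.0 / d

def psi_N(X, beta, z_i, t_i, N, zvals=(-1, 1), pz=(0.5, 0.5), tvals=(-1, 1)):
    """Psi_N(t_i | past): expected loss N steps ahead. The base case N = 0
    is the myopic loss after subject i."""
    Xi = np.vstack([X, [1.0, z_i, t_i]])
    if N == 0:
        return loss(Xi, beta)
    p_i = pi_logit(np.array([1.0, z_i, t_i]), beta)
    out = 0.0
    for (z_next, p_z), (y_i, p_y) in product(zip(zvals, pz),
                                             ((0, 1 - p_i), (1, p_i))):
        # inner minimization: the optimal next treatment t*_{i+1};
        # y_i enters only through its probability because beta-hat is
        # held fixed here instead of being refitted
        out += p_z * p_y * min(psi_N(Xi, beta, z_next, t, N - 1)
                               for t in tvals)
    return out
```

With two responses, two covariate values and two treatments, each unit of horizon multiplies the number of objective-function evaluations by eight, which is the exponential growth that limits the horizon in practice.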
Simulations {#nonmy_simulations}
-----------
Our simulation compares $D_A$-optimal designs that are constructed sequentially using myopic and nonmyopic methods. Further, we compare the nonmyopic approach where we assume the true distribution for the covariates, and the nonmyopic approach where we use the empirical distribution of the covariates obtained by finding the proportion of observed subjects with each covariate value.\
Since the information matrix and the objective function depend on the values of the model parameters in the logistic model case, estimates of the parameters are needed in order to design the very experiment that is intended to estimate them. We begin with an initial design where we use the exchange algorithm to allocate treatments to 10 units, under the assumption that $\bm{\beta}$ is a vector of zeros. In order to reduce sources of variability in our simulations, we make sure that the same initial design is used for the myopic and non-myopic cases. Another source of variability is the generation of the responses, which are needed to obtain the estimates of the model parameters and subsequently to evaluate the design under the objective function. When comparing the myopic and non-myopic designs, we generate the responses in the following way:
1. Generate a deviate $u_i$ from the Unif$(0,1)$ distribution.
2. Set $$\label{ueq}
y_i = \begin{cases}1 & \mbox{if } u_i < \pi_i \\ 0 & \mbox{if } u_i \geq \pi_i \end{cases},$$ so that $y_i$ is a draw from the Bernoulli$(\pi_i)$ distribution.
The deviates $u_i$ are kept the same for the myopic and non-myopic approaches to minimize sources of random variability in the simulation.\
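This common-random-numbers device is simple to implement; the sketch below (all names ours) uses the inverse-CDF rule $y_i = 1$ if $u_i < \pi_i$, so that $\mathbb{P}(y_i=1)=\pi_i$, and reuses one vector of deviates across methods.

```python
import numpy as np

def gen_responses(pis, u):
    """Inverse-CDF Bernoulli draws: y_i = 1 iff u_i < pi_i, so that
    P(y_i = 1) = pi_i. Reusing the same deviates u across methods
    (common random numbers) removes one source of between-method
    variability in the simulation."""
    return (np.asarray(u) < np.asarray(pis)).astype(int)

rng = np.random.default_rng(1)
u = rng.uniform(size=100)         # one shared set of deviates
pis = np.full(100, 0.7)           # illustrative success probabilities
y_first = gen_responses(pis, u)   # e.g. responses under one design
y_second = gen_responses(pis, u)  # same deviates give the same responses
```

In the simulations, the same `u` would be reused when generating responses under the myopic and non-myopic allocations, so any difference in results is attributable to the allocation rule rather than to the random number stream.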
In our simulation, 100 units of a covariate $z$ are generated. The covariate can take values in $\left\{-1,1\right\}$ and is generated such that $\mathbb{P}(z_i=1)=0.5$ and $\mathbb{P}(z_i=-1)=0.5$ for all $i$. We assume the true model for the response is $y_i \sim $ Bernoulli($\pi_i$) with $\mbox{logit}(\pi_i)= z_i + t_i$, and generate responses according to this model. Our simulation is constructed as follows:
1. 1. 100 subjects are assumed and their covariates are generated.
2. 100 deviates from a Unif$(0,1)$ distribution are generated for the response.
3. An initial design with 10 units is constructed using an exchange algorithm with $D_A$ optimality as the objective function.
4. Seven sequential designs are constructed using the covariates, random deviates for the responses, and initial design from parts (a)-(c):
- A myopic $D_A$-optimal design.
- A nonmyopic $D_A$-optimal design with horizon $N=1$, with the correct covariate distribution assumed.
- A nonmyopic $D_A$-optimal design with horizon $N=1$, with the empirical covariate distribution assumed.
- A nonmyopic $D_A$-optimal design with horizon $N=2$, with the correct covariate distribution assumed.
- A nonmyopic $D_A$-optimal design with horizon $N=2$, with the empirical covariate distribution assumed.
- A nonmyopic $D_A$-optimal design with horizon $N=3$, with the correct covariate distribution assumed.
- A nonmyopic $D_A$-optimal design with horizon $N=3$, with the empirical covariate distribution assumed.
5. Designs are evaluated using the performance measure $\Psi_{D_A}$, given by Equation , at each sample size between 10 and 100, inclusive. The true values of the parameters are used to calculate $\Psi_{D_A}$.
2. Steps (a)-(e) above are repeated 20 times to obtain a distribution of the performance measure for each sample size.
In addition to comparing the estimates of $\bm{\beta}$ and the values of $\Psi_{D_A}$ for the myopic and non-myopic designs, we also consider the efficiency of the nonmyopic design relative to the myopic design. We define the $D_A$-efficiency of a design $\bm{X}_i$ relative to another design $\bm{X}_i^*$, with parameter values $\bm{\beta}$, in the logistic model case as
$$\label{DAeff_logit}
\mathrm{Eff}_{D_A}= \left\{ \frac{\Psi_{D_A} \left( \bm{X}_i^*, \bm{\beta} \right) }{ \Psi_{D_A} \left( \bm{X}_i, \bm{\beta} \right) } \right\}^{1/m},$$
where $m$ is the number of non-zero rows in the matrix $\bm{A}$.\
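As a worked instance, the efficiency is a ratio of losses raised to the power $1/m$; the helper below is ours:

```python
def d_a_efficiency(psi_ref, psi_new, m):
    """Eff_{D_A} = (Psi(X*) / Psi(X))^(1/m); values above 1 indicate that
    the design in the denominator of the ratio achieves the lower
    (better) value of the loss."""
    return (psi_ref / psi_new) ** (1.0 / m)
```

For example, if the reference design has twice the loss of the new design and $m=2$, the efficiency is $\sqrt{2} \approx 1.414$.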
Figure \[cov1DAbeta\] displays the distributions of $\hat{\bm{\beta}}_i$ at each sample size between 11 and 100. The estimates appear to be centered around their true value, $\bm{\beta}=\left( 0, 1, 1\right) ^T$, and the plots appear to be very similar across the seven methods.
![Distributions of $\hat{\bm{\beta}}_i$ for designs for the logistic model for one covariate are plotted against sample size. We show the myopic approach ($N=0$), as well as the nonmyopic approach to constructing $D_A$-optimal designs with horizon $N=$ 1 and 3. For the nonmyopic approach, we consider both the case where the correct covariate distribution is known, and when it is unknown so the empirical covariate distribution is used. The black line indicates the median, the dark grey indicates the 40th to 60th percentile, and the light grey indicates the 10th to 90th percentile of the distribution.[]{data-label="cov1DAbeta"}](cov1DAbeta.pdf)
In Figure \[DAnonmy\], we plot the distribution of $\Psi_{D_A}$ for each sample size between 11 and 100. We observe that the value of this objective function decreases as the sample size increases, as expected. The plots look extremely similar across the seven methods, and there is no noticeable difference between a horizon of one and a horizon of three. In Figure \[DAnonmyeff\], we plot the relative efficiencies of the nonmyopic designs against the myopic design, which confirms that there is no observable difference across the methods in $\Psi_{D_A}$; Table \[sim1tab\] shows the efficiencies at the end of the experiment, and we see that the lower bounds of the $40\%-60\%$ intervals are above 1. We observe that the myopic approach is slightly more efficient when the sample size is below 30.
![Distributions of $\Psi_{D_A}$ for designs for the logistic model for one covariate are plotted against sample size. We show the myopic approach ($N=0$), as well as the nonmyopic approach to constructing $D_A$-optimal designs with horizon $N=$ 1 and 3. For the nonmyopic approach, we consider both the case where the correct covariate distribution is known, and when it is unknown so the empirical covariate distribution is used. []{data-label="DAnonmy"}](DAnonmy.pdf)
![Distributions of the relative efficiencies of the nonmyopic $D_A$-optimal designs against the myopic $D_A$-optimal designs for the logistic model for one covariate are plotted against sample size. We consider the efficiencies of the non-myopic approach with horizons 1 and 3, with the correct and empirical distributions, against the myopic approach as the baseline. []{data-label="DAnonmyeff"}](DAnonmyeff.pdf)
|                         | **median** | **40-60% interval**  |
|-------------------------|------------|----------------------|
| $N=1$, correct dist     | 1.008978   | (1.005972, 1.010633) |
| $N=1$, empirical dist   | 1.005368   | (1.004617, 1.005720) |
| $N=2$, correct dist     | 1.009279   | (1.005669, 1.012720) |
| $N=2$, empirical dist   | 1.012895   | (1.009806, 1.013174) |
| $N=3$, correct dist     | 1.005409   | (1.005232, 1.007396) |
| $N=3$, empirical dist   | 1.005409   | (1.002690, 1.009182) |
: Distribution of the efficiencies of the non-myopic approaches relative to the myopic approach at the end of the experiment (n=100)
\[sim1tab\]
We observe in this simulation that there appears to be no benefit to the nonmyopic approach in this setting where we have one binary treatment and one binary covariate, and the covariate is generated such that $\mathbb{P}\left( z_i = 1 \right) = 0.5$ for all $i$. We call this a static covariate, since its distribution does not change with $i$. In Section \[pseudo\_simluations\], we consider a dynamic covariate, where the distribution of the covariate changes over time.
Pseudo-nonmyopic approach {#pseudononmyopic}
=========================
One main limitation of the nonmyopic approach is that computing the nested expectations and minimizations over unknown quantities, such as in Equation , requires recursive formulae which are computationally expensive. The number of calculations increases exponentially with each additional future subject in the horizon and, as a result, our simulations considered examples with horizon no more than three. We now explore a *pseudo-nonmyopic* approach which involves evaluating a related objective function with a similar aim without the use of recursion. The computational burden is reduced as nested expectations and minimizations are not necessary, but we are still able to incorporate information about future possible decisions. We describe this novel approach for the logistic model case (it can easily be adapted to the linear model case) and provide a simulation to show how it compares to the myopic approach.\
In the pseudo-nonmyopic approach, in order to make a decision about the treatment of the $i$th subject, we generate $M$ possible *trajectories* of covariate values for subject $i+1$ until subject $n$. We assume, as for the non-myopic approach, that we have a distribution $f_{\bm{z}}$ for the covariate $\bm{z}$. This may be the true distribution in the population (if it is known), or an empirical approximation based on the subjects in the trial up until the $i$th subject. The covariate distribution may depend on time, in which case we refer to it as a dynamic covariate. For each of the $M$ trajectories, we construct a *pseudo-design* comprising the $i$ subjects observed so far and the $(n-i)$ subjects in the trajectory, with treatments allocated using an approach that we describe below. We compute the average loss of the $M$ pseudo-designs in which we assign $t_i=1$, and compare it to the average loss of the $M$ pseudo-designs in which $t_i=-1$; we select $t_i$ according to a probability that is weighted by these average losses.\
This approach takes averages over simulated values of the covariates for subjects $i+1$ up to $n$. Optimization based on Monte Carlo simulations of unknown quantities is typically conducted in a Bayesian setting for design of experiments [@Woods2017], where values of the unknown parameters may be simulated from a prior distribution. See @Gentle2002 for an overview of Monte Carlo methods and @Ryan2003 for an application to Bayesian design of experiments.\
In order to create a design using the pseudo-nonmyopic approach for the logistic model, just like in the sequential myopic and nonmyopic algorithms, we begin by constructing an initial design $\bm{X}_{n_0}$. This involves an exchange algorithm where we assume $\bm{\beta}=\bm{0}$ as an initial guess. We then generate responses $\bm{y}_{n_0}$, and fit the model to obtain the initial estimates $\hat{\bm{\beta}}_{n_0}$.\
Then, to select a treatment for subject $i$, for $i \in \left\{n_0+1, ... , n \right\}$, we observe $\bm{z}_i$. Based on the assumed covariate distribution $f_{\bm{z}}$, we generate $M$ possible trajectories for the covariates, $\bm{z}_{(i+1):n}^1, \bm{z}_{(i+1):n}^2, ..., \bm{z}_{(i+1):n}^M$, where
$$\bm{z}_{(i+1):n} ^m = \begin{pmatrix}
z_{i+1}^m, z_{i+2}^m, ..., z_{n}^m
\end{pmatrix}^T,$$
for $m \in \left\{ 1, 2, ..., M \right\}$. We then allocate treatments sequentially along each trajectory.\
Given the first subject in the trajectory, with covariates $\bm{z}_{i+1}^m$, we choose the treatment $t_{i+1}^{*^m}$ which minimizes the objective function $\Psi$ given $t_i$, the treatments and covariates of the previous subjects, and the estimates $\hat{\bm{\beta}}_{i-1}$ based on the responses of the previous subjects, $\bm{y}_{i-1}$:
$$\label{logit_pseu}
t_{i+1}^{*^m} \left( \bm{z}_{i+1}^m, t_{i} \mid \bm{z}_{i}, \bm{t}_{i-1}, \bm{y}_{i-1} \right)
= \operatorname*{arg\,min}_{t_{i+1}}
\Psi \left( t_{i+1} \mid \bm{z}_{i}, \bm{z}_{i+1}^m, \bm{t}_{i-1}, t_i , \bm{y}_{i-1}\right).$$
To allocate a treatment for the next subject in the trajectory, with covariate values $\bm{z}_{i+2}^m$, we assume that $t_{i+1}^{*^m}$ has been allocated to the subject with covariates $\bm{z}_{i+1}^m$ and choose the treatment $t_{i+2}^{*^m}$ which minimizes the objective function. We assume that future decisions are independent of the future responses; that is, we use the same estimate of $\bm{\beta}$ as in Equation and do not update it. We continue in this way until all subjects in the trajectory have been allocated a treatment:
For each $j$ in $\left\{ i+2, i+3, ..., n \right\}$, we define: $$\begin{aligned}
&t_{j}^{*^m} \left( \bm{z}_{j}^m, t_{j-1}^* \mid \bm{z}_{i}, \bm{z}_{(i+1):(j-1)}^m, \bm{t}_{i-1},t_i, t_{(i+1):(j-2)}^*, \bm{y}_{i-1}\right) \nonumber \\
&= \operatorname*{arg\,min}_{t_{j}}
\Psi \left( t_{j} \mid \bm{z}_{i}, \bm{z}_{(i+1):(j)}^m, \bm{t}_{i-1},t_i, \bm{t}_{(i+1):(j-1)}^* , \bm{y}_{i-1} \right). \end{aligned}$$
For the $m$th trajectory, we obtain a pseudo-design with $n$ subjects where the $i$th treatment is 1, as well as a pseudo-design where the $i$th subject receives treatment $-1$. We denote the objective function of the two designs as follows: $$\Psi \left( t_{n} \mid \bm{z}_{i}, \bm{z}_{(i+1):n}^m, \bm{t}_{i-1}, t_i =1, \bm{t}_{(i+1):(n-1)}^{*^m}, \bm{y}_{i-1} \right),$$ $$\Psi \left( t_{n} \mid \bm{z}_{i}, \bm{z}_{(i+1):n}^m, \bm{t}_{i-1}, t_i =-1, \bm{t}_{(i+1):(n-1)}^{*^m} , \bm{y}_{i-1}\right).$$
We define the average objective function for $i=n_0+1, ..., n-1$ across the $M$ designs, assuming, firstly, that $t_i=1$, and secondly, that $t_i=-1$, as:
$$\overline{\Psi}(t_i=1) = \frac{1}{M} \displaystyle \sum_{m=1}^M \Psi \left( t_{n} \mid \bm{z}_{i}, \bm{z}_{(i+1):n}^m, \bm{t}_{i-1}, t_i =1, \bm{t}_{(i+1):(n-1)}^{*^m} , \bm{y}_{i-1}\right),$$
$$\overline{\Psi}(t_i=-1) = \frac{1}{M} \displaystyle \sum_{m=1}^M \Psi \left( t_{n} \mid \bm{z}_{i}, \bm{z}_{(i+1):n}^m, \bm{t}_{i-1}, t_i =-1, \bm{t}_{(i+1):(n-1)}^{*^m} , \bm{y}_{i-1}\right).$$
For $i=n$, we do not generate any future covariates so we have:
$$\overline{\Psi}(t_i=t) = \Psi \left( t_{n}=t \mid \bm{z}_{n}, \bm{t}_{n-1} , \bm{y}_{n-1} \right),$$
for $t \in \left\{-1, 1\right\}$.\
We sample $t_i$ from the set $\left\{-1, 1\right\}$ where the probability of selecting $1$ is given by $$\frac{ \overline{\Psi}(t_i=1)^{-1} }{\overline{\Psi}(t_i=1)^{-1} + \overline{\Psi}(t_i=-1)^{-1}}.$$ We then observe the response $y_i$ and refit the model to obtain $\bm{\hat{\beta}}_i$.\
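The whole pseudo-nonmyopic step for one subject can be sketched end-to-end. Everything below is an illustrative reconstruction under our own assumptions: `loss` is a positive $D$-optimality-style stand-in for $\Psi$, a single binary covariate is simulated per trajectory, and, as in the text, $\hat{\bm{\beta}}$ is held fixed along each trajectory.

```python
import numpy as np

def pi_logit(x, beta):
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

def loss(X, beta):
    """Positive D-optimality-style stand-in for Psi: reciprocal of the
    determinant of the Fisher information (kept positive so that the
    inverse-loss weighting below is well-defined)."""
    p = pi_logit(X, beta)
    M = X.T @ np.diag(p * (1 - p)) @ X
    d = np.linalg.det(M)
    return np.inf if d <= 1e-12 else 1.0 / d

def greedy_fill(X, beta, traj, tvals=(-1, 1)):
    """Allocate treatments along one simulated covariate trajectory,
    minimizing the loss one subject at a time without refitting beta."""
    for z in traj:
        rows = [np.vstack([X, [1.0, z, t]]) for t in tvals]
        X = rows[int(np.argmin([loss(R, beta) for R in rows]))]
    return loss(X, beta)

def choose_treatment(X, beta, z_i, n, i, M_traj, rng, p_z1=0.5):
    """Sample t_i in {-1, 1} with probability weighted by the inverse of
    the average loss over M simulated trajectories for subjects i+1..n."""
    avg = {}
    for t_i in (-1, 1):
        Xi = np.vstack([X, [1.0, z_i, t_i]])
        losses = []
        for _ in range(M_traj):
            traj = rng.choice([-1, 1], size=n - i, p=[1 - p_z1, p_z1])
            losses.append(greedy_fill(Xi, beta, traj))
        avg[t_i] = np.mean(losses)
    p1 = (1 / avg[1]) / (1 / avg[1] + 1 / avg[-1])
    return 1 if rng.uniform() < p1 else -1
```

Because only greedy one-step minimizations are performed along each trajectory, the cost grows linearly in $M$ and in the trajectory length, rather than exponentially in the horizon.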
Simulations {#pseudo_simluations}
-----------
Similarly to Section \[nonmy\_simulations\], we need to make sure that sources of variability are controlled as much as possible so that differences between the results for the myopic and pseudo-nonmyopic approaches are likely to be attributable to the differences in the treatment allocation approach. We make sure that simulations have the same initial design; the initial design is constructed with an exchange algorithm to allocate treatments to 10 units, under the assumption that $\bm{\beta}$ is a vector of zeros. We fit the models using the `R` function `bayesglm` in the `arm` package [@Rarm], with a Cauchy prior distribution centered at zero with scale 2.5 for both the treatment and covariate parameters. We generate deviates $u_i$ as in Equation in order to generate responses $y_i$, ensuring that the data generating mechanism is the same across the simulations comparing the myopic and pseudo-nonmyopic designs.\
In this example, we have one binary covariate ${z}$. It is dynamic with a distribution given by $\mathbb{P}(z_i=1) =0.01i$. The model is given by ${y}_i \sim$ Bernoulli$(\pi_i)$ where
$$\mbox{logit}\left(\pi_i \right) = {z}_i + {t}_i.$$
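A dynamic covariate of this kind is straightforward to simulate; the sketch below (names ours) draws $z_i \in \{-1,1\}$ with $\mathbb{P}(z_i=1)=0.01i$ for $i=1,\dots,100$.

```python
import numpy as np

def dynamic_covariates(n, rng):
    """z_i in {-1, 1} with P(z_i = 1) = 0.01 * i for i = 1..n
    (a valid probability for n <= 100)."""
    i = np.arange(1, n + 1)
    p1 = 0.01 * i
    return np.where(rng.uniform(size=n) < p1, 1, -1)

rng = np.random.default_rng(7)
z = dynamic_covariates(100, rng)  # mostly -1 early on, mostly +1 late
```

Early subjects almost all have $z_i=-1$ and late subjects almost all have $z_i=1$, which is what makes this covariate dynamic rather than static.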
The structure of the simulation is as follows:
1. 1. 100 subjects are assumed and their covariates are generated from a specified distribution.
2. 100 deviates from a Unif$(0,1)$ distribution are generated for the response.
3. An initial design with 10 units is constructed using an exchange algorithm with $D_A$-optimality as the objective function.
4. The following three sequential designs are constructed using the covariates, random deviates for the responses, and initial design from parts (a)-(c):
- A myopic $D_A$-optimal design.
- A pseudo-nonmyopic $D_A$-optimal design with $M=10$, and the correct covariate distribution assumed.
- A pseudo-nonmyopic $D_A$-optimal design with $M=100$, and the correct covariate distribution assumed.
5. Designs are evaluated using the performance measure $\Psi_{D_A}$ at each sample size between 10 and 100, inclusive. The true values of the parameters are used to calculate $\Psi_{D_A}$.
2. Steps (a)-(e) above are repeated 20 times to obtain a distribution of the performance measure for each sample size.
In Figure \[011pseudo\_beta\], we see the estimates of $\bm{\beta}$ for the myopic approach and the pseudo-nonmyopic approach with $M=10$ and with $M=100$. The plots look very similar across the three methods. The variability of the estimates reduces with sample size for the intercept and the coefficient of treatment. The medians of the distributions converge to their true values after a sample size of approximately 40.
![Parameter estimates given by the myopic approach, pseudo-nonmyopic approach with $M=10$ and pseudo-nonmyopic approach with $M=100$ for a logistic model with one dynamic covariate. The black line indicates the median, the dark grey indicates the 40th to 60th percentile, and the light grey indicates the 10th to 90th percentile of the distribution[]{data-label="011pseudo_beta"}](011pseudo_beta.pdf)
In Figure \[011pseudo\_relopt\], the top row displays the values of $\Psi_{D_A}$ evaluated at each sample size. This appears to be similar across all methods, with slightly higher variation observed for the pseudo-nonmyopic approach with $M=10$. In all three cases, the value of the objective function drops after a few initial subjects and stabilizes after around 30 subjects. The bottom row shows the relative $D_A$-efficiencies (see Equation ) of the pseudo-nonmyopic approaches, compared to the myopic approach. We see that, initially, they have equal efficiency, but then the myopic approach appears to be slightly more efficient. We note that the distributions of efficiencies are skewed; there appears to be a number of extreme points where the myopic approach is much more efficient than the pseudo-nonmyopic approach. This is partly due to the fact that the efficiency is bounded below by zero, but unbounded above. Table \[sim2tab\] displays the efficiencies at the end of the experiment; the distributions are centered around one and have greater spread than the efficiencies of the nonmyopic approach in Table \[sim1tab\].
![Top row: $\Psi_{D_A}$ against sample size for designs for a logistic model with one dynamic covariate. Bottom row: relative $D_A$-efficiency against sample size for designs for a logistic model with one dynamic covariate. Values below 1 indicate that the pseudo-nonmyopic approach is more beneficial than the myopic approach.[]{data-label="011pseudo_relopt"}](011pseudo_relopt.pdf)
|         | **median** | **40-60% interval**    |
|---------|------------|------------------------|
| $M=10$  | 0.9690018  | (0.9014291, 1.0202391) |
| $M=100$ | 1.0157450  | (0.9340631, 1.0845578) |
: Distribution of the efficiencies of the pseudo-nonmyopic approaches relative to the myopic approach at the end of the experiment (n=100)
\[sim2tab\]
We found no evidence of a benefit of the pseudo-nonmyopic approach over the myopic approach in this example. Further, we observed that the number of trajectories in the pseudo-nonmyopic approach, $M$, appears to have little effect on the parameter estimates or the values of the $D_A$-optimal objective functions.
Discussion
==========
This paper extended the sequential optimal design approach first proposed by [@Atkinson1982] to the logistic model case and to any optimality criterion. We then placed this approach in a nonmyopic framework. In our simulations, we observed no benefit to using the nonmyopic approach over the myopic approach. We then developed a novel methodology called the pseudo-nonmyopic approach which is still able to take into account future possible subjects, but is less computationally expensive than the nonmyopic approach. Simulations showed that the pseudo-nonmyopic approach performs similarly to the myopic approach for the logistic model case with a binary treatment.
Limitations
-----------
There are a number of limitations to our work in its ability to be directly applicable to clinical trials and other experiments involving human subjects. Firstly, we assume responses are measured immediately after treatments are given to subjects. This assumption is often unrealistic, so a method allowing for a delay between treatment allocation and response would be useful. One modification would be to allow for the method to be batch sequential; instead of allocating treatments to one subject at a time, a group of subjects may be given optimal treatments by using the exchange algorithm. It is also possible to incorporate delay in adaptive designs. @Hardwick2006 achieve this by assuming that subjects arrive according to a Poisson process.\
Secondly, we do not consider toxicity in our work. We assume that the treatment which leads to a better response is the more desirable treatment, but it is possible that such a treatment has unsafe toxicity levels [@Rosenberger1999]. In our algorithms for treatment assignment, if the optimality criterion is equal for treatment $t_i=1$ and $t_i=-1$, we would assign the treatment at random. In clinical trials, this is less likely to happen as the relative efficiency of the treatments needs to be considered in conjunction with relative toxicity [@Simon1977]. In general, [@Rosenberger1999] recommended that adaptive designs should be considered after previous experiments have been able to establish low toxicity of the treatments.\
A further limitation of our work is that we arbitrarily assume in all of our simulations that we have 100 subjects in the trial. In clinical trials, there are stopping rules that determine when the trial should terminate [@Stallard2001]. See @Whitehead1993 for a frequentist perspective and @Berry1989 and @Freedman1989 for a Bayesian perspective on stopping rules in interim analysis. Including this element into our designs would mean that our methodology is more generally applicable to clinical trials. Further, we may be able to make statements about relative numbers of subjects and costs required in order to detect a significant difference in treatment effect for each method.
Future Work
-----------
The non-myopic and pseudo-nonmyopic algorithms consider only the case where the response and treatments are binary. Natural extensions include allowing for more complex treatment structures, such as factorial designs, or allowing for a continuous response. Computing the expected objective function for a continuous response would require Monte Carlo simulations. Extending our framework for the non-myopic approach to allow for a more general response will require greater computational efficiency in our algorithms. This is also true of the pseudo-nonmyopic approach.\
In the optimality criteria that we have considered, the response of the subjects are included in order to update parameter estimates (optimal design methods for the logistic model case, weighted $L$-optimal design). The response has not been used in order to inform treatment allocation based on the efficacy of the treatment. Covariate adjusted response-adaptive designs based on efficiency and ethics (CARAEE) aim to optimize a utility function which takes into account the number of subjects who receive the more effective treatment. We did some preliminary work on CARAEE designs. Here, our optimality criterion is a function which has a component for efficiency and a component for ethics, as well as a tuning parameter which allows the practitioner to decide which aim is more important. The CARA (covariate adjusted response adaptive) design and RAR (response adaptive randomization) design are special cases of the CARAEE design.\
Appendix A {#appendix_code .unnumbered}
==========
An $\texttt{R}$ package $\texttt{biasedcoin}$ is included in the Supplementary materials. The following commands implement the designs for logistic regression discussed in this paper:\
$\texttt{logit.coord}$: non-sequential optimal design (coordinate exchange algorithm).\
$\texttt{logit.des}$: myopic sequential optimal design.\
$\texttt{logit.nonmy}$: nonmyopic sequential optimal design.\
$\texttt{simfuture.logis}$: pseudo-nonmyopic sequential optimal design.\
The `R` function `bayesglm` in the `arm` package is used to fit the logistic regression model using the Cauchy prior to avoid problems in estimation due to separation.
Appendix B {#appendix_algorithm .unnumbered}
==========
**Initialization:** Construct the initial design $\bm{X}_{n_0}$ using the exchange algorithm for the first $n_0$ subjects, assuming $\bm{\beta}=\bm{0}$. Observe the responses $\bm{y}_{n_0}=\begin{pmatrix} y_1, y_2, ..., y_{n_0} \end{pmatrix}^T$. Fit the model, assuming that the responses are distributed according to Equation with linear predictor given by Equation , to obtain the MLE $\hat{\bm{\beta}}_{n_0}$.

**For** each subject $i = n_0+1, ..., n$:

1. Observe $z_{i,1}, ..., z_{i, k}$.
2. Calculate $\Psi (t_{i} \mid \bm{z}_{i}, \bm{t}_{i-1}, \bm{y}_{i-1})$ for each treatment.
3. Sample the treatment for subject $i$, where the probability of treatment $1$ is given by Equation .
4. Observe the response $y_i$.
5. Refit the model with responses $\bm{y}_{i}$ and updated design matrix $\bm{X}_{i}$, and update the parameter estimates $\hat{\bm{\beta}}_i$.

**Return** $\bm{X}$.
Astralis signs two-year partnership deal with Unibet
The Danish esports team, Astralis, has recently signed a two-year partnership deal with the online betting company, Unibet. The global partnership deal will see Unibet become Astralis’ official betting partner, and it’s hoped that this relationship will provide Astralis with further funds and stability in their quest to become the best CSGO team in the world.
Unibet’s parent company, Kindred Group, signed the agreement which will grant the online betting brand exclusive rights to using Astralis’ image and strengthen Unibet’s foothold in the increasingly lucrative esports betting market.
The move is just another in a series of efforts made by traditional sports betting brands to sponsor esports teams and tournaments. Similarly, plenty of esports organisations have made no attempt to hide their increasingly close relationships with big-money companies. All of this points to an increasing level of professionalism in the competitive gaming realm and signals that esports is close to being perceived as a regular sporting activity.
Why have Unibet teamed up with Astralis?
As esports has grown to become a hugely popular global activity that has many millions of active participants and spectators, it’s little surprise to find that traditional sports betting sites would want to get involved. Many online bookmakers like Unibet now allow customers to bet on a wide variety of esports tournaments, and by tying their brand to one of the most successful esports teams, it’s hoped that Unibet will gain an extra level of brand recognition in this relatively new market.
Unibet aren’t the first traditional sports betting brand to have formed a partnership deal with an esports team. In the past few years, we have seen the popular online bookmaker, Betway, launch a successful partnership with both the Ninjas in Pyjamas and MIBR esports teams. Similarly, the dedicated esports betting site, GG.Bet, will have received plenty of exposure when their sponsored team, Natus Vincere, wore shirts with the brand name whilst winning the ESL One Cologne 2018 tournament.
Some online betting sites have even gone one step further as was seen when the Rivalry CIS Invitational tournament took place in June earlier this year. Whilst there is some concern about how close online gambling sites should get to competitive gaming, it’s clear that this relationship is only going to get further intertwined. After all, a large number of football teams in top European leagues have signed lucrative deals with betting brands, and it could be just a matter of time before the same thing happens to esports.
High expectations for the Astralis esports team
By sponsoring Astralis, Unibet will be hoping that the esports team continue their remarkable winning record within the competitive gaming realm. Whilst the Danish esports organisation have only been around since 2016, they have made a big impression in the gaming world, and were recently nominated for esports team of the year at The Game Awards 2018.
Unlike many other esports organisations, Astralis focus their attention on just one game – Counter Strike Global Offensive. The team have already won an impressive number of CSGO tournaments, including the FACEIT Major in London 2018, the ELEAGUE Major 2017 and the DreamHack Masters Marseille 2018.
The team was formed by a group of gamers who left Team SoloMid, and used investment from Sunstone Capital and various Danish entrepreneurs to set up their own organisation. These funds also helped Astralis make some high-profile signings such as Lukas ‘gla1ve’ Rossander in October 2018.
As a result, Astralis are easily one of the most feared CSGO teams around. And with a roster packed with talented gamers like dev1ce, dupreeh, Xyp9x, Magisk and zonic, it’s easy to see why Unibet would be so keen to get involved with a team so full of star players.
The growing professionalism of esports
It’s remarkable to think just how far esports has come in the past decade. From being a niche activity in South Korea at the turn of the century, esports has become a world-beater and looks set to hit revenues of $1.65 billion by 2021. As a result, we have seen many large businesses seeking to use esports to target a younger demographic that tends to stray from traditional media.
From car brands like Hyundai and Audi sponsoring esports teams and tournaments, to fast food giants such as McDonald’s creating special gamer-friendly menus, it’s clear that even the most unlikely of firms are starting to get serious about esports.
This has the power to help esports gain a further level of legitimacy. Whilst many traditional sports fans have been sceptical of the esports phenomenon, with online bookmakers like Unibet and Betway investing large amounts of money into sponsoring esports teams, it seems that competitive gaming will start to be treated more and more like a traditional sport. |
17th July 2017
Ahead of its 10-year anniversary in the fashion business, Nigerian design label Yomi Casual has released its first collection for 2017, titled Renaissance. Originality is evident in the timeless pieces, which reflect elaborate magical details and also meet the consistent expectation of high quality ...
|
Q:
Exercise in a book should be easy, my solution is too complicated to finish
Consider the following exercise
Should it be somehow easy to arrive at the formula
$$
\mathbb{E}[X_{1}|X_{2}]=\frac{n-X_{2}}{5}
$$ as no explanation regarding the solution was given?
I tried to derive it from first principles and I got stuck. First
I tried to compute
$$
\mathbb{E}[X_{1}|X_{2}=k]
$$
in hope that that would allow me to see a nice formula for $\mathbb{E}[X_{1}|X_{2}]$
. But computing this formula I obtained
$$
\mathbb{E}[X_{1}|X_{2}=k]=\sum_{i=0}^{n-k}i\cdot P(X_{1}=i|X_{2}=k)=\sum_{i=0}^{n-k}i\cdot\frac{P(X_{1}=i\land X_{2}=k)}{P(X_{2}=k)}=\\ \sum_{i=0}^{n-k}i\cdot\frac{\binom{n}{n-k-i}\binom{k+i}{k}4^{n-k-i}\frac{1}{6^{n}}}{\binom{n}{k}5^{n-k}\frac{1}{6^{n}}},
$$
at which point I stopped, because I didn't know how to proceed further.
(For your information, I arrived at the numerator in the following way: $\binom{n}{n-k-i}$ counts the number of ways to choose, in a sequence of $n$ elements, the places that hold neither 1s nor 2s; having these places fixed, the places where the 1s and 2s must go are also fixed, and there are $\binom{k+i}{k}$ ways to distribute the 1s among them, which also fixes the places of the 2s; the $n-k-i$ places holding neither 1s nor 2s can each take one of the four remaining values, so they can be filled in $4^{n-k-i}$ ways; and $\frac{1}{6^{n}}$ is the probability each of these $\binom{n}{n-k-i}\binom{k+i}{k}4^{n-k-i}$ choices has. Similarly one can derive the denominator.)
Question 1 (which is basically my question from above): Is there any easier way to do this?
Question 2: How to carry out my analysis to the end, by computing first $\mathbb{E}[X_{1}|X_{2}=k]$?
A:
There is an easier way. Notice that $X_1+\cdots+X_6=n$. This implies that $X_1=n-X_2-\cdots-X_6$. Since the die has an equal chance of landing on each number, convince yourself that $E[X_1|X_2]=E[X_3|X_2]=\cdots=E[X_6|X_2]$. Thus letting $y:=E[X_1|X_2]$, take conditional expectations of the above with $E[\cdot|X_2]$ and use linearity:
$$y=n-E[X_2|X_2]-4y=n-4y-X_2.$$
Or,
$$y=\frac{n-X_2}{5}.$$
For $E[X_1|X_2,X_3]$, the argument is very similar.
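The accepted formula is also easy to sanity-check numerically. Below is a small Monte Carlo sketch (plain Python; the function name and parameters are mine, not from the thread) that rolls a fair die $n$ times, groups outcomes by the observed value of $X_2$, and compares the empirical conditional mean of $X_1$ against $(n-X_2)/5$:

```python
import random

def simulate(n, trials=100_000, seed=0):
    """Estimate E[X1 | X2 = k] empirically for every observed k.

    X1 counts the rolls showing face 1, X2 the rolls showing face 2,
    over n independent rolls of a fair six-sided die.
    """
    rng = random.Random(seed)
    sums, counts = {}, {}
    for _ in range(trials):
        rolls = [rng.randint(1, 6) for _ in range(n)]
        x1, x2 = rolls.count(1), rolls.count(2)
        # Accumulate X1 grouped by the observed value of X2.
        sums[x2] = sums.get(x2, 0) + x1
        counts[x2] = counts.get(x2, 0) + 1
    # Empirical conditional mean of X1 for each observed value k of X2.
    return {k: sums[k] / counts[k] for k in counts}
```

For $n=12$, the empirical conditional means land within sampling noise of $(12-k)/5$ for every commonly observed $k$.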
|
Should medical students act as surrogate patients for each other?
Until recently, most clinical teachers and medical students have regarded using medical students as surrogate patients for peer teaching of physical examinations and clinical skills as practical and uncontroversial. Recent changes to medical curricula and changes in hospitalized patient populations have led to questions about the ethical acceptability of this practice. This paper explores the ethical issues inherent in the use of medical students as surrogate patients. It suggests that, ethically, there are parallels with two situations: when students conduct physical examinations on patients and when students participate as subjects in research. Drawing on accepted ethical practice in these two germane areas, the paper argues that there are both ethical strengths and weaknesses in the practice of using students as surrogate patients. Strategies to promote free and informed involvement of students as surrogate patients are suggested. |
Former Virginia Sen. and 2016 Democratic presidential candidate Jim Webb says that he will not vote for Hillary Clinton, but he is still considering Donald Trump. "I would not vote for Hillary Clinton,” Webb said on MSNBC's "Morning Joe" on Friday.
When asked about Trump, Webb replied: "I'm not sure yet. I don't know who I'm going to vote for."
“If you're voting for Donald Trump you may get something very good or very bad,” Webb said. “If you're voting for Hillary Clinton, you're going to be getting the same thing.”
Jim Webb dropped out of the race for the Democratic nomination in October, but he left the door open to pursuing an independent bid.
SCARBOROUGH: Senator, a lot of people have been saying online and sending e-mails when they found out you were going to be on that they wanted you to consider jumping in as an independent in this race and saving them from the horror that they see the coming general election setting up to be. Would you consider, under any circumstances, running an independent bid?
WEBB: We looked at this very hard for three months when I withdrew from the Democratic primary process and it's a very costly process to get on the ballot in all of the different states, it's very litigious. I think when Ralph Nader did this there were 29 lawsuits from the DNC alone came at him. If you had the money, if you were a Michael Bloomberg with the money, it gets conceivable, but it's a very difficult process to do.
GEIST: You just said a second ago, Senator, that Hillary Clinton has been wrong on every foreign policy issue since 9/11. You were in the race against her, you expressed all those things. Would you have enough reservation about Hillary Clinton as a commander in chief that you would not support her for president?
WEBB: I'm not supporting anybody right now.
GEIST: Would you vote for Hillary Clinton?
WEBB: No, I would not vote for Hillary Clinton.
GEIST: Would you vote for Donald Trump?
WEBB: I'm not -- I'm not sure yet. I don't know who I'm going to vote for.
GEIST: But not Hillary Clinton?
WEBB: No, I don't -- Look -- and this is nothing personal about Hillary Clinton, but I think the reason that Donald Trump is getting so much support right now is not because of the racist, you know, et cetera, et cetera, it's because people are seeing him, a certain group of people are seeing him as the only one who has the courage to step forward and say we've got to clean out the stables of the American governmental system right now. We've got to make it work. And if you're voting for Donald Trump you may get something very good or very bad. If you're voting for Hillary Clinton, you're going to be getting the same thing. Do you want the same thing? You know, 6 percent of the people in the country maybe want the same thing.
Full interview below via MSNBC: |
VIDEO: Mike Tyson Has Revealed That He Smokes $40,000 Worth Of Weed Every Month
On the latest episode of the Hotboxin’ With Mike Tyson podcast, the champ sat down with Dipset capo Jim Jones. Like the previous instalments, the conversation takes twists leading to Tyson revealing little known aspects of his life. This conversation’s fun fact was how much Tyson spends on weed.
The “ranch” Britton and Tyson are referring to is Iron Mike’s 420 acre California marijuana farm. Tyson broke ground on the plot in California City, California in December of 2018. The goal is to turn the area into a cannabis farm and resort which will be open to the public in 2020. Currently, it’s a functioning farm that produces THC and CBD products as well as nine strains of cannabis. According to Tyson, this output more than recoups the money he spends on weed.
|
This project is focused on correlating the morphology of the node of Ranvier with its biochemistry and functionality. Particular emphasis will be on the differentiated membrane and cytoskeletal specialization of the nodal and paranodal regions. The data generated by this project is expected to contribute to an understanding of how the axon and the ensheathing cell interact, both with each other and with their local environment during development, degeneration, and regeneration, and during nerve conduction. The techniques that will be used are freeze- etching electron microscopy (EM), thin-section and thick section EM, selective staining, immunocytochemical labeling, immuno-EM localization, in situ hybridization, video-enhanced Nomarski light microscopy, laser confocal scanning microscopy, electrophysiology and the recording of signals from optical dyes. We will use the National High Voltage Electron Microscope (HVEM) Facility in Boulder, Colorado and the Regional Resource for Intermediate HVEM and Image Analysis in San Diego to obtain three-dimensional images from selectively stained structures or by serial section reconstruction. The following aims of the project are subdivided into areas of: structure; function; and development and recovery from demyelinative insult: 1) Structural Studies: Electron microscopy will be conducted on specimens prepared by rapid-freezing and deep-etching to complete our description of the cytoskeletal and extracellular matrix associations of the axonal and glial elements for both CNS and PNS nodes. In immunolabeling studies we will identify structures including: a) plasma membrane proteins such as Na channels, K channels, and the Na/K ATPase; b) internal membrane proteins like the ryanodine binding protein; c) cytoskeletal proteins like intermediate filaments, tubulins, spectrin, actin, myosin, alpha actinin, and ankyrin; d) extracellular matrix constituents like heparan sulfate proteoglycan, NCAM, and cytoactin. 
2) Function: Our second aim is to correlate structural changes with physiological mechanisms of impulse conduction at the node. This is possible by in vitro electrophysiological recording of impulses at the nodes of Ranvier in PNS fibers that are simultaneously observed at high optical resolution using video-enhanced Nomarski differential interference contrast ((DIC) time-lapse recording in conjunction with optical methods for recording membrane potentials and Ca transients. This capability provides a unique opportunity to test hypotheses regarding ion channel locus, density and function of cell compartments at node of Ranvier. 3) Development and Remyelination: Our third aim is to determine the sequence of development of macromolecular components of the nodal complex with a special interest in determining if one of the constituents disclosed during the antibody localization studies conducted in aim (1) appears prior to the arrival of myelinating cells thus participating in the early definition of the node of Ranvier. We plan to establish a panel of antibody probes to probe the cellular mechanisms by which nodal sites are first defined during development as well as redefined in mature nervous systems following traumatic insults. An increase in our understanding of the dynamic cellular interactions that establish, maintain, and control axonal membrane protein complexes requisite for conduction of action potentials unquestionably will be of value in recognizing the effects of disease processes at these sites. |
First of all, he’s not a shrimp–he’s a king prawn, okay?
Pepe the King Prawn, whose full name is Pepino Rodrigo Serrano Gonzales, appears to have Kermit the Frog beat in this ad promoting the U.K. release of The Muppets. In the brief but impactful clip, Kermit is showing off the spiffy new backpack of his likeness that he received via a promotion from U.K. milk brand Cravendale, but Pepe has got something better. The spot was created by Wieden + Kennedy London.
Although this is the first Cravendale ad involving muppets, the brand’s had some luck with small animals before. The below cat-centric clip has been viewed over 5 million times. |
One of the premises of the bailout bill is that the banking industry must have government help to get back on its feet.
A banking industry expert, Bert Ely, who has a stellar track record in predicting crises and calling false alarms says that the banking industry can handle this mess internally and does not need subsidies.
The comments from Bert come in an interview at Institutional Risk Analytics (the entire newsletter is wide-ranging and very much worth reading). First, IRA’s recap of Ely’s qualifications:
To get some perspective on the evolution of the last remaining large investment banks into commercial banks, we now turn to Bert Ely, one of the leading experts on banking and finance in the Washington policy community. An accountant by training, Ely has specialized in deposit insurance and banking structure issues since 1981. In 1986, he became an early predictor of the S&L crisis and a taxpayer bailout of the FSLIC. In 1991, he was the first person to correctly predict the non-crisis in commercial banking. In 1992, he predicted an eventual taxpayer bailout of the Japanese banking system.
Here are the excerpts that relate to whether the banks need government intervention:
The IRA: And if our internal estimates at IRA are correct about the magnitude of the losses facing the industry, then the banks may not have the resources to deal with the problems alone. What then?
Ely: That is of course the trillion dollar question. I have run the numbers looking at the capacity of the industry to pay the tab. Assuming that bank insolvency losses don’t get way out of line, which I don’t think they will, then the industry can handle it. It’s not going to be cheap, but the banks can handle it and clean up their own mess. The losses will feed back through the industry to depositors and borrowers in the form of lower rates on deposits and higher cost of loans….
The IRA: So you oppose the idea of the government putting preferred equity into solvent but troubled banks that cannot raise capital on reasonable terms?
Ely: Yes, it is not necessary, even now. There is absolutely no need for the Treasury to have the authority, as you suggested, to “inject capital into solvent banks that are temporarily unable to raise new capital.” If a bank truly is solvent, it can raise additional capital or sell itself, if its present owners are realistic about what their bank is worth. The reason solvent banks have a problem raising capital, or selling themselves to a stronger bank, is that they set their price too high, as did AIG. As an aside, I am glad to see AIG’s shareholders getting whacked by the warrants associated with the Fed’s taxpayer’s loan to AIG. There is absolutely no need for the taxpayer to subsidize banks so they can stay independent, provided no barriers are erected to prevent new entrants into bank or specific banking markets.
The IRA: Agreed. We were referring to banks that could not be recapitalized or sold. A sale is obviously the first, best choice. So you would let the banks resolve their problems privately. Would you agree with Ernie Patrikis (‘A Change in Bank Control: Interview With Ernest Patrikis’, July 9, 2008′) that the Fed needs to loosen the restrictions on bank ownership in order to facilitate this process?
Ely: I fully agree that restrictions limiting investors from taking significant positions in banks should be lifted. Not only is the belief in separating banking from commerce invalid in an open, competitive economy, but we need to get ruthless investors inside troubled banks to get these banks and their bad assets cleaned up and/or sold. That is what should have happened at AIG, but unfortunately did not.
The IRA: Precisely. We want to see the bad assets remain in private hands, not in a government warehouse for toxic waste. But why then should anyone support Paulson’s proposal to place these toxic assets in the hands of the government? Chairman Frank seems to want to declare the jubilee and engage in mass loan forgiveness in order to ensure his permanent re-election. Maybe we can just all stay home instead of going to work and Barney Frank will just mail everyone a check.
Ely: Look, all of the fallout we are seeing in the markets today is part of clearing the detritus from the last speculative bubble. The housing bubble has to be allowed to collapse in order to clear the markets. We have a very necessary correction process underway. But this process creates a lot of pain and loss. I don’t like that, but we have to clean up the mess and take the pain in order to get the economy back into balance. In collapsing bubbles you have collapsing companies. Japan tried to muddle through and they had a lost decade. I hope we are not going to do that…
The IRA: But that is precisely the point. Why should Washington use taxpayer funds to rescue people who deliberately made bad business decisions?
Ely: This is the question that comes up frequently about Dick Fuld at Lehman and Kerry Killinger at WM. When these guys were contemplating life, did they have any second thoughts, any doubts about these decisions? Did hushed discussions among the top folks in their organizations, with the senior managers and directors, include deliberations such as these or were they too arrogant, too isolated from reality?…
The IRA: How do you see the Paulson plan unfolding? What should the markets expect in the next couple of weeks and months?
Ely: It is likely that Congress will not pass the Paulson bailout legislation this week. However, whenever it is passed, it will be much more complex, and incorporate unwise punitive terms and conditions that will seriously impede the intent of the Paulson plan. Further, I believe the process of pricing the assets purchased under the legislation will be much more complex and contentious than many appreciate at this time, which means that this program will get off to a much slower start than many anticipate, just as the RTC started quite slowly. If the Paulson plan starts slowly, market forces may sweep past the plan. It will be extremely interesting to see how this plan evolves over the next year, particularly given that a new Administration will come to power on January 20. |
The former Audi factory driver completes the driver squad for United Autosports’ Le Mans 24 Hours debut. Filipe will contest the world famous race in France alongside his European Le Mans Series LMP2 team-mates Will Owen and Hugo de Sadeleer in a Ligier JS P217. The trio dramatically won the 4 Hours of Silverstone last month, the opening round of the 2017 ELMS – the Anglo-American team’s maiden LMP2 race.
The 85th running of this year’s annual race marks Albuquerque’s fourth Le Mans, having raced for Audi Sport Team Joest in LMP1 (2014-15) and most recently RGR Sport by Morand in LMP2 last year. Driving a diesel-hybrid Audi R18 e-tron quattro, the 31-year-old Portuguese driver was the best-placed Le Mans Rookie on the 54-car grid in 2014. The following year, Filipe, on only his 14th lap in his first-ever race stint at Le Mans, broke the lap record (3m 17.647s) – only bettered by a mere two-tenths of a second late in the race around the legendary 8.47-mile road course made up predominantly of closed public roads.
The LMP2 team completed a successful night test last week (24-25 April) at Magny Cours in preparation for the Le Mans 24 Hours. The team completed 249 laps - the majority on the first night as rain blighted the second night’s testing.
The official Le Mans 24 Hours test day is held on 4 June with the Le Mans 24 Hour race scheduled to begin at 15:00 (local time) on Saturday 17 June.
Filipe Albuquerque (P), driver #32, United Autosports:
Born/Lives: Coimbra, Portugal. Age: 31
“I am really happy to race again this year at Le Mans. As everyone knows, it’s a very special race that every racing driver wants to take part in. This year the LMP2 category will be very special because the cars are much faster so everyone is curious to see what lap time we can do. I am sure I will enjoy the faces of Hugo and Will when they complete their first laps around that huge track. My teammates don’t have much experience but on the other hand we are not fighting for WEC points. We will be coming from the ELMS round at Monza, which is the best track to prepare for this race. So, all in all I think we can push for a good result.
“I want to thank United Autosports, Richard and Zak for hiring me for their Le Mans team.”
Zak Brown, Team Owner and Chairman, United Autosports:
“I’m delighted that we have been able to secure Filipe for the Le Mans 24 Hours. We have been working hard to keep the driver line up the same for Le Mans as for our ELMS campaign as I think the consistency can only go in our favour. I can’t wait to see United Autosports heading to Le Mans - I think we have a really strong driver line up!”
Richard Dean, Team Owner and Managing Director, United Autosports:
“We had a great start to our debut ELMS LMP2 campaign at Silverstone so I really wanted to keep the three drivers together for Le Mans. It is such an important, world famous race, we wanted to go into it with the strongest line up we could and I think that’s what we have achieved here. Filipe is obviously a phenomenal driver and he has worked so well with Will and Hugo that it would have been a shame to effectively start from scratch for Le Mans. I’m feeling positive ahead of the race. It’s the first time United Autosports have raced at Le Mans but the majority of the team has done Le Mans in one guise or another. Fingers crossed for a good result.” |
Q:
Save Webcam Image From Website
There is a website that posts an image from their webcam that I would like to be able to grab using a program. How would I get this JPEG data in memory? I have tried HttpRequest but it only returns HTML.
Here is the link:
http://bigwatersedge.axiscam.net/view/snapshot.shtml?picturepath=/jpg/image.jpg
A:
The URL of the image is actually http://bigwatersedge.axiscam.net/jpg/image.jpg?timestamp=. The URL you gave returns a page with an image on it.
Now, once you are downloading the correct URL of the image, this particular server is checking your HTTP referer. Put this in your HTTP headers:
Referer: http://bigwatersedge.axiscam.net/view/snapshot.shtml?picturepath=/jpg/image.jpg
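As a minimal sketch of the whole download (shown in Python for illustration; the function and file names are mine), request the raw JPEG URL directly and attach the Referer header this particular server checks:

```python
import urllib.request

# URL and Referer values are taken from the answer above.
IMAGE_URL = "http://bigwatersedge.axiscam.net/jpg/image.jpg"
REFERER = ("http://bigwatersedge.axiscam.net/view/snapshot.shtml"
           "?picturepath=/jpg/image.jpg")

def build_request(url=IMAGE_URL):
    """Build the HTTP request with the Referer header attached."""
    return urllib.request.Request(url, headers={"Referer": REFERER})

def fetch_snapshot(path="snapshot.jpg"):
    """Fetch the JPEG bytes into memory, then optionally write them to disk."""
    with urllib.request.urlopen(build_request()) as resp:
        data = resp.read()          # the JPEG data, now in memory
    with open(path, "wb") as f:
        f.write(data)
    return data
```

Calling `fetch_snapshot()` of course requires the camera to be reachable; `build_request` alone shows the header wiring.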
|
Q:
Where is the correct location for Dask Worker configuration file and the Dask Scheduler configuration file?
I am attempting to find the correct location of Dask configuration files. I have a number of questions related to configuring Dask.
$ dask-worker --version
dask-worker, version 2.3.2
Do the Dask Worker and Dask Scheduler share the same configuration file or do they use different configuration files?
I am unclear if there are configuration variables that are specific to Dask Worker and Dask Scheduler. Is there a list of the valid configuration variables for Dask Worker and Dask Scheduler?
Where are the correct locations of the Dask Worker and Dask Scheduler configuration files?
I have found three different configuration files across my system and the Dask documentation:
~/.config/dask/distributed.yaml
~/.config/dask/dask.yaml
~/.dask/config.yaml
On my Dask Worker and Dask Scheduler machines, I find a file located at ~/.config/dask/dask.yaml which does not contain much information. I am not sure what should go into this file or if/where it is ever called by the Dask library.
I also see a file at ~/.config/dask/distributed.yaml that contains much more information. This looks more like the configuration I was expecting. I can see that these configuration are also loaded by Dask in distributed/config.py
A third file (~/.dask/config.yaml) makes an appearance in the documentation. To quote the documentation:
Dask accepts some configuration options in a configuration file, which by default is a .dask/config.yaml file located in your home directory.
I do not see this file on my system. Am I responsible for creating this configuration file? I never see this file referenced in the repository. Why does the documentation differ from the source code?
Can I print a list of all active configuration variables for both the Worker and the Scheduler?
Is there a way, either on the command line or in Python, where I can inspect the active configurations?
A:
For documentation on Dask's configuration system, please see https://docs.dask.org/en/latest/configuration.html
That page says:
Configuration is specified in one of the following ways:
YAML files in ~/.config/dask/ or /etc/dask/
Environment variables like DASK_DISTRIBUTED__SCHEDULER__WORK_STEALING=True
Default settings within sub-libraries
I've removed the page that you were looking at in this PR: https://github.com/dask/distributed/pull/3038
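On the last sub-question (inspecting the active configuration): as a sketch, dask exposes the fully merged configuration as a plain dictionary at `dask.config.config`, and individual keys can be read with `dask.config.get`. The import guard below is only so the snippet degrades gracefully on machines without dask installed:

```python
# Sketch: inspect the configuration dask has actually loaded and merged
# (YAML files, DASK_* environment variables, and library defaults combined).
try:
    import dask
except ImportError:  # environment without dask installed
    dask = None

def active_config():
    """Return the merged config dict dask is using, or {} if dask is absent."""
    if dask is None:
        return {}
    return dict(dask.config.config)
```

Lookups of the form `dask.config.get("distributed.scheduler.work-stealing")` read from the same merged mapping, which covers both Worker and Scheduler settings.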
|
Bryan Stinespring
Bryan Stinespring (born October 12, 1963) is an American football coach. He is the tight ends coach and run game coordinator at Old Dominion University. Stinespring was the run game coordinator and offensive line coach at James Madison University from 2016 to 2017. He was previously the tight ends coach (1993–1997, 2006–2015) and recruiting coordinator for the Virginia Tech Hokies football program. He was a full-time member of head coach Frank Beamer's staff from 1993-2015. Throughout his tenure in Blacksburg, Stinespring held a number of other positions including offensive line coach (1993–2005), recruiting coordinator (1994–2001), assistant head coach (2001) and offensive coordinator (2002–2012).
Following Beamer's retirement at the end of the 2015 season, Stinespring joined the staff at his alma mater James Madison where he served as offensive line coach and run-game coordinator.
Criticism
Stinespring had faced criticism from the fans and a player for offensive output during his time as offensive coordinator, which compares poorly with that of his predecessors under Frank Beamer.
In 2008, sports columnist Norm Wood commented that Stinespring's offensive production in recent years had been "abysmal", and that he had heard fans chanting "Fire Stinespring" before one home game.
While Stinespring faced criticism for offensive production, he has also been praised for his abilities as a recruiter. Players have also expressed their appreciation for Stinespring as a personal coach, and for his ability to recruit talented new players to the program.
Statistics
Below are Virginia Tech's offensive statistics during Stinespring's time as offensive coordinator.
References
External links
Old Dominion profile
Category:1963 births
Category:Living people
Category:James Madison Dukes football coaches
Category:James Madison Dukes football players
Category:Old Dominion Monarchs football coaches
Category:Virginia Tech Hokies football coaches
Category:High school football coaches in Virginia
Category:People from Clifton Forge, Virginia |
Strange and unconventional isotope effects in ozone formation.
The puzzling mass-independent isotopic enrichment in ozone formation contrasts markedly with the more recently observed large unconventional mass-dependent ratios of the individual ozone formation rate constants in certain systems. An RRKM (Rice, Ramsperger, Kassel, Marcus)-based theory is used to treat both effects. Restrictions of symmetry on how energy is shared among the rotational/vibrational states of the ozone isotopomer, together with an analysis of the competition between the transition states of its two exit channels, permit the calculation of isotope effects consistent with a wide array of experimental results. |
open! Core
open! Import
(* The choice of 8000 bytes is copied from git:
https://github.com/git/git/blob/b7bd9486b055c3f967a870311e704e3bb0654e4f/xdiff-interface.c#L201
*)
let prefix_length = 8000
(* True iff [s] contains a NUL byte within its first [prefix_length] bytes —
   a common binary-file heuristic, as in git. *)
let string s = String.contains s '\000' ~len:(Int.min prefix_length (String.length s))
|
Recently our dear friend Shane Legano, affectionately known by his friends as "Legan", was in a very serious car accident while in his work truck. He has bruised ribs, bruised lungs and a bruised kidney. His right arm also suffered some damage. Legan is constantly working overtime to provide for his wife and two children. Due to this accident he has high medical bills and will be out of work for some time. What we ask is that the community that loves him and has known him for decades give back to this man, who would give anyone he cares about the shirt off his back. We are starting this fundraiser to help Legan pay his bills and give him money to support his family while he is out of work. Please donate anything you can. On behalf of Legan and everyone that knows and loves him, we thank you.
|
Amazon Price:N/A(as of May 23, 2018 9:13 pm – Details). Product prices and availability are accurate as of the date/time indicated and are subject to change. Any price and availability information displayed on the Amazon site at the time of purchase will apply to the purchase of this product.
Eat to live: Tip guide on selecting healthy food
Get this Kindle book now for free (regularly priced at $5.99). Read on your PC, Mac, smartphone, tablet or any Kindle device. |
I have to restart 3 times before I get a console that I can log into. I get a fsck run right away that is rather annoying with a reboot. Then I get a kernel panic so I restart. Then it works all of a sudden. I'm pretty sure this has to do with the date being set to 1970. I tried adding -localtime to the qemu command-line parameters but that didn't seem to fix it really. I imagine this won't happen on the real hardware.
Once I login I run the following:
date -s "05/13/2008" #set to the current date
apt-get install ntpdate #need to check if this adds it into cron
tzselect #follow the prompts
apt-get update
apt-get upgrade
apt-get install less #I can't get along without it
apt-get install icewm
apt-get install firefox
apt-get install xterm
I've been trying various packages. icewm - pretty ugly at the moment, but it launches just fine. Firefox - doesn't launch; it nails the cpu for a while and then nothing happens. I think maybe there isn't enough memory. It seems to me that I need to set qemu to 64 MB of RAM and set up a swap file to make things more similar to my c3000. xterm - it works fine.
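If I go the swap route, something like this inside the guest should do it (the size and path are just a guess):
dd if=/dev/zero of=/swapfile bs=1M count=128 #128mb swap file
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab #keep it across reboots
And I'd add -m 64 to the qemu command line for the RAM.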
What is everyone else running? What's working? What's not? I'm so stoked about this! I hope I can find more time tomorrow to mess around and get a few more things working. I think I'm going to set up a bzr repository of my tarball so I can recover from the massively huge mistakes I'll be making over the next few weeks.
So I've been playing around a little bit more. I've made a whole mess of little scripts to handle creating the virtual qemu drive and untarring the rootfs back into it. I also created a script to generate a new tarball when I want. This has made my horrible mistakes... er, I mean my testing, go much smoother.
I've made a few minor changes to Cortez's suggestions. I added the -localtime and -startdate now settings. This sets the date and time for me immediately, so I don't have to worry about remembering to set anything with the date command after I start up.
When I'm working on my PC, the /etc/init.d/keymap file sets the keyboard to use the Zaurus keymap. That drives me nuts, so I disable it by moving the file out of /etc/init.d. This is temporary, and before I create a new rootfs I put it back so the Zaurus has the right keyboard layout.
I started messing around with getting my Ambicom wireless card working. It seemed to load the drivers fine. However, for some reason when I ran ifup wlan0, the dhcp client was not working properly. I kept getting a "execve (/lib/dhcp3-client/call-dhclient-script, …): Permission denied" error.
After a quick Google search, I found someone else with the same issue who just ran "apt-get install --reinstall dhcp3-client" to fix it. That worked just fine for me as well. |
Q:
How can I keep products in the cart when a guest user leaves the site? (Magento 2)
I want to keep products in the cart for a guest user after they leave the site. For example, as a guest user I add a few products to the cart and then close the browser or leave the site; when I reopen the site, the products I added before should still be in the cart. How is this possible?
A:
Magento does this automatically. You can adjust the cookie lifetime in the backend, which determines how long the cart is stored; I think the default is 1 hour, but you can set it to 2 weeks or whatever you like.
When the user deletes their cookies, though, there is no way for them to keep cart items without a customer account. This is because, without a cookie, the PHP session has no way to determine who the user was.
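If you prefer the command line (Magento 2.2+), the same setting can be changed with config:set (the value is in seconds; two weeks shown here):
bin/magento config:set web/cookie/cookie_lifetime 1209600
bin/magento cache:flush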
|
President Trump suggested that there could be an opportunity for former White House communications director Hope Hicks to return to the administration amid new shakeup rumors.
When asked on Friday if Hicks would return to the White House after resigning from her position in March, Trump told reporters that he’s “been hearing little things like that.”
“Well, I don’t know. Well I love Hope. She’s great,” Trump said aboard Air Force One, according to a CNN reporter. “I hope that — I’ve been hearing little things like that.”
[Opinion: Hope Hicks should replace John Kelly as White House chief of staff]
Trump, asked if Hope Hicks is coming back to the White House, says "I've been hearing little things like that." pic.twitter.com/LQsQGrxUag — Kevin Liptak (@Kevinliptakcnn) June 29, 2018
While Trump did not confirm rumors that Hicks would make a return, his comments suggest that the door remains open for the long-serving Trump aide, who left on good terms with most members of the administration. Many close to the president said he viewed Hicks like a daughter and considered her one of his closest allies.
Rumors of her return began after reports surfaced yet again on Thursday that current White House chief of staff John Kelly is planning to leave his post in the near future. Some have even suggested that Hicks should replace Kelly upon his possible departure.
[Also read: Trump 'misses' Hope Hicks 'and still talks about her often'] |
Irish Divorced Dads
Last week’s post covered parents in France and Belgium who must be constantly vigilant in regard to their teenagers’ choices of websites. The above link refers to another European nation, Ireland, one I know far better than those two francophone countries. Three quarters of my ancestors are Irish and I was raised Catholic, as were over 90% of the citizens of the Republic of Ireland (as opposed to Protestant Northern Ireland, which is, for the most part, proudly part of the United Kingdom).
Most Americans don’t realize the huge impact the Irish have had on the USA. We are the #2 ethnic group, behind only the Germans. The Irish have become secularized, as have virtually all Americans, but the Catholic Church has yet to fully liberalize divorce and remarriage, so there remains a bit more stigma to single parenting than that found among the typical American. One of the most interesting statements from the article is that divorced dads can help their ex-spouses pursue their careers by helping with childcare.
This points out the #1 difference with life here, because in the US 75% of custodial mothers move within the first four years after a divorce, often making such arrangements impossible. Ireland is like the Scandinavian nations featured in last year's fantastic film, Divorce Corp, where childcare is easily shared because up to half the population lives in one metropolitan area.
Speaking of films, I should point out the wonderful Irish father-custody film, Evelyn. It stars Pierce Brosnan, who ironically was the bad guy in perhaps the most famous father-custody film, Mrs. Doubtfire. In that film he portrayed the archetypal villain all divorced dads hate: the suave, handsome, wealthy suitor of one’s ex. Brosnan really makes amends for that role by playing a loving father who loses custody of his three kids, partly due to that bête noire of the Irish, alcohol.
It is based on the true story of a man who took his case to the Irish Supreme Court and won. As one who wrote his own brief to the California Supreme Court, won, and received no benefit whatsoever, I can only congratulate that hero, Desmond Doyle, on having the luck of the Irish! |
Atlantic City Lawmakers Fight to Save Showboat and Trump Plaza
New Jersey State Senator James Whelan: “It would be a nightmare for Atlantic City to have a string of vacant properties along the Boardwalk.” (Image: savejersey.com)
Atlantic City lawmakers are trying to buy some time for The Showboat and Trump Plaza this week, hoping to find buyers and save the workforce of the two ailing casinos. Both properties have warned their staff that their employment will cease within two months, with the Showboat expected to close August 31 and the Trump Plaza two weeks after that.
Former Atlantic City mayor and now New Jersey State Senator James Whelan and Assemblymen Vince Mazzeo and Chris Brown have written to New Jersey’s Casino Control Commission asking that the casinos stay open for a further four months, arguing in a letter to commission chairman Matthew Levinson that quick closures are not in the public interest.
“Given the complexities of the situation here in Atlantic City, this two-month timeframe is simply not enough time for potential buyers to do the appropriate research that acquisition of either property may require,” they wrote.
“Nightmare” for Atlantic City
“It would be a nightmare for Atlantic City to have a string of vacant properties along the Boardwalk, like Atlantic Club, a situation orchestrated by joint venture between Caesars and Tropicana,” they added, referring to the Atlantic Club, which was sold in December for just $23.4 million to Caesars and Tropicana, to be stripped for parts, after a last-minute deal with PokerStars fell through.
The lawmakers feel that offering just two months’ notice “to many 20-plus-year employees is wrong and unrealistic,” and that while it may be convenient for the casino operators to close the properties as quickly as possible, it’s simply not in the broader interests of Atlantic City.
The Showboat, though profitable, is the smallest of Caesars’ properties in Atlantic City in terms of net revenue and, as of June 2nd, it employed 2,100 people. The Trump Plaza, meanwhile, is the poorest-performing casino in Atlantic City and has just over 1,000 employees.
But Can Buyers Be Found?
Despite the efforts of these legislators, it’s difficult to see where buyers for the two properties might be found. Trump Entertainment has been trying to sell the Plaza since 2011, with little success. Last year California-based Meruelo Group attempted to buy it for $20 million, but the deal fell through when Trump Entertainment was unable to get a release on its mortgage, with the senior lender refusing to approve the sale at such a low price.
And that’s not the only problem. The consensus is that the casino market in Atlantic City is over-saturated and that the city needs to sacrifice a few properties for the benefit of the market as a whole. When it sold the Atlantic Club, Caesars included deed restrictions that barred new owners from running the property as a casino, and now the legislators fear that the same clause will exist for potential buyers of the Showboat, something they feel should not be permitted.
Commission Chair Levinson, while sympathetic, has said the situation may be out of his hands.
“I certainly share the very serious concerns they raised about the welfare of workers and all of the businesses that will suffer if casino properties close their doors,” he said. “While our authority is broad in some respects, and our ability to direct business decisions of the casinos is limited under the Casino Control Act, the current circumstances are unprecedented and present novel issues which we have been and will continue to review.”
Please don’t close Trump Plaza; so many of us have been loyal customers since it opened. Trump is home and family to many of us seniors and we look forward to visiting and staying every month.
Sharyn Haney
We enjoy gambling, eating and socializing at that casino. We love the staff and everyone is so nice. I hope something can be done to keep Trump Plaza from folding. |
The TCR New Zealand championship has announced a revised calendar, moving from five to seven races held over a period of five months, compared with the originally announced five-week period.
The season will start “in the second half of 2020” and end in February 2021, with a complete calendar to be revealed “in due course”.
“We have had a number of overseas competitors who would like to be part of TCR New Zealand but were having difficulty with the original calendar,” said Grant Smith, category manager of TCR New Zealand.
“The arrival delays of TCR cars in New Zealand means we would not be able to deliver the product we are aiming for. This revised calendar will offer competitors more time to prepare for the new season and enables us to get more TCR cars into New Zealand before season commencement in the second half of 2020.”
One of the rounds will be held at Bathurst in late 2020 together with the TCR Australia series.
“The interest that we’ve had in TCR New Zealand has been extremely positive, however there have been some challenges securing the vehicles to launch at the level that we are comfortable with,” said Matt Braid, director of the Australian Racing Group.
“Like TCR Australia, our intention has always been to ensure the championship launches with significant impact, and by making this alteration to the calendar, we know that TCR New Zealand will be a success in 2020 and beyond.” |
Driving Neurogenesis in Neural Stem Cells with High Sensitivity Optogenetics.
Optogenetic stimulation of neural stem cells (NSCs) enables their activity-dependent photo-modulation. This provides a spatio-temporal tool for studying activity-dependent neurogenesis and for regulating the differentiation of transplanted NSCs. Currently, this is mainly driven by viral transfection of the channelrhodopsin-2 (ChR2) gene, which requires high irradiance and complex in vivo/in vitro stimulation systems. Additionally, despite the extensive application of optogenetics in neuroscience, the transcriptome-level changes induced by optogenetic stimulation of NSCs have not yet been elucidated. Here, we made transformed NSCs (SFO-NSCs) stably expressing one of the step-function opsin (SFO) variants of chimeric channelrhodopsins, ChRFR(C167A), which is more sensitive to blue light than native ChR2, via a non-viral transfection system using piggyBac transposon. We set up a simple low-irradiance optical stimulation (OS)-incubation system that induced activity-dependent c-fos mRNA expression in differentiating SFO-NSCs. More SFO-NSCs differentiated into neuron-like cells, with more elongated axons, under daily OS than among control cells without OS. This was accompanied by positive/negative changes in the transcriptome involved in axonal remodeling, synaptic plasticity, and microenvironment modulation, with up-regulation of several genes involved in Ca2+-related functions. Our approach could be applied to stem cell transplantation studies in tissue, with two strengths: lower carcinogenicity and less irradiance needed for tissue penetration. |
Q:
Footer view missing in android RelativeLayout
I am using a header and a footer, with a ScrollView around the content part only.
When I run the application in the emulator it displays only the header and content; the footer view is missing.
Here is my code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent">
<RelativeLayout
android:id="@+id/rl_header"
android:layout_width="fill_parent"
android:layout_height="40dp"
android:background="@drawable/head" >
<TextView android:id="@+id/txtTitle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:text="@string/gebrauchte"
android:textColor="#fff"
android:textStyle="bold"
android:textSize="20dp"/> </RelativeLayout>
<ScrollView
android:id="@+id/scrollView"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_below="@+id/rl_header"
>
<RelativeLayout
android:id="@+id/ll_bikeDetail"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:background="#fff" >
<RelativeLayout
android:id="@+id/rl_bikeDetail"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:background="#fff" >
<TextView
android:id="@+id/txtDetail"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_marginLeft="8dp"
android:text="@string/detail"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" android:layout_marginTop="5dp"/>
<TextView
android:id="@+id/txtDetail1"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_marginLeft="130dp"
android:text="@string/pass"
android:textColor="#000"
android:textSize="13dp" android:layout_marginTop="5dp"/>
<TextView
android:id="@+id/txtDynamic"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="267dp"
android:text="2175"
android:textStyle="bold" android:layout_marginTop="5dp"/>
<View
android:id="@+id/img_hrDetail"
android:layout_width="wrap_content"
android:layout_height="2dp"
android:layout_below="@+id/txtDetail"
android:background="#FF909090" />
<TextView
android:id="@+id/txtBikeType"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/img_hrDetail"
android:layout_marginLeft="8dp"
android:text="@string/biketype"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" android:layout_marginTop="5dp"/>
<TextView
android:id="@+id/txtBike"
android:layout_width="wrap_content"
android:layout_height="23dp"
android:layout_below="@+id/img_hrDetail"
android:layout_marginLeft="110dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="15dp" android:layout_marginTop="5dp"/>
<Button
android:id="@+id/btn_arwBikeType"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/img_hrDetail"
android:layout_marginLeft="280dp"
android:background="@drawable/arrow" android:layout_marginTop="5dp"/>
<View
android:id="@+id/img_hrBikeType"
android:layout_width="wrap_content"
android:layout_height="2dp"
android:layout_below="@+id/txtBikeType"
android:background="#FF909090" />
<TextView
android:id="@+id/txtMarke"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/img_hrBikeType"
android:layout_marginLeft="8dp"
android:text="@string/mark"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" />
<TextView
android:id="@+id/txtMarkeBe"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/img_hrBikeType"
android:layout_marginLeft="110dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="15dp" />
<Button
android:id="@+id/btn_arwBrand"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/img_hrBikeType"
android:layout_marginLeft="280dp"
android:background="@drawable/arrow" />
<TextView
android:id="@+id/txtModel"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtMarke"
android:layout_marginLeft="8dp"
android:text="@string/model"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" />
<TextView
android:id="@+id/txtModelBe"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtMarke"
android:layout_marginLeft="110dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="15dp" />
<Button
android:id="@+id/btn_arwModel"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtMarke"
android:layout_marginLeft="280dp"
android:background="@drawable/arrow" />
<View
android:id="@+id/img_hrModel"
android:layout_width="wrap_content"
android:layout_height="2dp"
android:layout_below="@+id/txtModel"
android:background="#FF909090" />
<TextView
android:id="@+id/txtErst"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/img_hrModel"
android:layout_marginLeft="8dp"
android:text="@string/erst"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" />
<TextView
android:id="@+id/txtErstBe"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/img_hrModel"
android:layout_marginLeft="115dp"
android:layout_marginTop="3dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwErstBe"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/img_hrModel"
android:layout_marginLeft="170dp"
android:background="@drawable/arrow"/>
<TextView
android:id="@+id/txtErstBis"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/img_hrModel"
android:layout_marginLeft="200dp"
android:layout_marginTop="5dp"
android:text="@string/bis"
android:textColor="#000"
android:textSize="13dp" />
<TextView
android:id="@+id/txtErstBel"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/img_hrModel"
android:layout_marginLeft="220dp"
android:layout_marginTop="5dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwErst"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/img_hrModel"
android:layout_marginLeft="280dp"
android:background="@drawable/arrow" />
<TextView
android:id="@+id/txtLauf"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtErst"
android:layout_marginLeft="8dp"
android:text="@string/lauf"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" />
<TextView
android:id="@+id/txtLaufKm"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtErst"
android:layout_marginLeft="115dp"
android:layout_marginTop="3dp"
android:text="@string/km"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwLaufBe"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtErst"
android:layout_marginLeft="170dp"
android:background="@drawable/arrow" />
<TextView
android:id="@+id/txtLaufBis"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtErst"
android:layout_marginLeft="200dp"
android:layout_marginTop="3dp"
android:text="@string/bis"
android:textColor="#000"
android:textSize="13dp" />
<TextView
android:id="@+id/txtLaufBel"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtErst"
android:layout_marginLeft="220dp"
android:layout_marginTop="3dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwLauf"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtErst"
android:layout_marginLeft="280dp"
android:background="@drawable/arrow" />
<TextView
android:id="@+id/txtHub"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtLauf"
android:layout_marginLeft="8dp"
android:text="@string/hub"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" />
<TextView
android:id="@+id/txtHubCcm"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtLauf"
android:layout_marginLeft="115dp"
android:layout_marginTop="3dp"
android:text="@string/ccm"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwHubCcm"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtLauf"
android:layout_marginLeft="170dp"
android:background="@drawable/arrow" />
<TextView
android:id="@+id/txtHubBis"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtLauf"
android:layout_marginLeft="200dp"
android:layout_marginTop="3dp"
android:text="@string/bis"
android:textColor="#000"
android:textSize="13dp" />
<TextView
android:id="@+id/txtHubBel"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtLauf"
android:layout_marginLeft="220dp"
android:layout_marginTop="3dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwHub"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtLauf"
android:layout_marginLeft="280dp"
android:background="@drawable/arrow" />
<TextView
android:id="@+id/txtPre"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtHub"
android:layout_marginLeft="8dp"
android:text="@string/pre"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" />
<TextView
android:id="@+id/txtPreBe"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtHub"
android:layout_marginLeft="115dp"
android:layout_marginTop="3dp"
android:text="@string/sign"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwPreBe"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtHub"
android:layout_marginLeft="170dp"
android:background="@drawable/arrow" />
<TextView
android:id="@+id/txtPreBis"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtHub"
android:layout_marginLeft="200dp"
android:layout_marginTop="3dp"
android:text="@string/bis"
android:textColor="#000"
android:textSize="13dp" />
<TextView
android:id="@+id/txtPreBel"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtHub"
android:layout_marginLeft="220dp"
android:layout_marginTop="3dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwPre"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtHub"
android:layout_marginLeft="280dp"
android:background="@drawable/arrow" />
<View
android:id="@+id/img_hrPre"
android:layout_width="fill_parent"
android:layout_height="2dp"
android:layout_below="@+id/txtPre"
android:background="#FF909090" />
<ProgressBar
android:id="@+id/progressBar1"
style="?android:attr/progressBarStyleSmall"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBottom="@+id/txtDynamic"
android:layout_alignLeft="@+id/btn_arwBikeType"
android:background="#000" android:layout_marginTop="5dp"/>
</RelativeLayout>
<RelativeLayout
android:id="@+id/rl_Wo"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_below="@+id/rl_bikeDetail"
android:background="#fff" >
<TextView
android:id="@+id/txtWo"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/imgHrule3"
android:layout_marginLeft="8dp"
android:text="@string/wo"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold"
android:layout_marginTop="5dp"/>
<com.InternetGMBH.ThousandPS.Utilities.SegmentedRadioGroup
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal" android:layout_below="@+id/img_hrPre"
android:layout_marginLeft="100dp"
android:layout_marginRight="15dp" android:id="@+id/segment_text"
android:checkedButton="@+id/btn_egal"> <RadioButton
android:id="@id/btn_egal" android:minWidth="60dip"
android:minHeight="33dip" android:text="Egal"
android:textAppearance="?android:attr/textAppearanceSmall"
android:button="@null" android:gravity="center"
android:textColor="@color/radio_colors" /> <RadioButton
android:id="@+id/btn_gps" android:minWidth="60dip"
android:minHeight="33dip" android:text="Gps"
android:button="@null" android:gravity="center"
android:textAppearance="?android:attr/textAppearanceSmall"
android:textColor="@color/radio_colors" /> <RadioButton
android:id="@+id/btn_eingabe" android:minWidth="60dip"
android:minHeight="33dip" android:text="Eingabe"
android:button="@null" android:gravity="center"
android:textAppearance="?android:attr/textAppearanceSmall"
android:textColor="@color/radio_colors" />
</com.InternetGMBH.ThousandPS.Utilities.SegmentedRadioGroup>
<!-- <Button
android:id="@+id/btn_egal"
android:layout_width="70dp"
android:layout_height="35dp"
android:layout_below="@+id/img_hrPre"
android:layout_marginLeft="100dp"
android:layout_marginRight="15dp"
android:text="Egal" android:layout_marginTop="5dp" android:background="@drawable/new_01" android:textColor="#fff"/>
-->
<TextView
android:id="@+id/txtland"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtWo"
android:layout_marginLeft="8dp"
android:layout_marginTop="5dp"
android:text="@string/land"
android:textColor="#000"
android:textSize="15dp"
android:textStyle="bold" />
<TextView
android:id="@+id/txtGpsStatus"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtWo"
android:textStyle="bold"
android:layout_marginTop="3dp"
android:text=""
android:textColor="#000"
android:textSize="15dp"
android:layout_marginLeft="8dp"/>
<TextView
android:id="@+id/txtlandBe"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtWo"
android:layout_marginLeft="110dp"
android:layout_marginTop="8dp"
android:text="@string/belie"
android:textColor="#ff0000"
android:textSize="13dp" />
<TextView
android:id="@+id/txtGpsValue"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@id/txtWo"
android:layout_marginLeft="110dp"
android:layout_marginTop="3dp"
android:text=""
android:textColor="#ff0000"
android:textSize="13dp" />
<Button
android:id="@+id/btn_arwLand"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_below="@+id/txtWo"
android:layout_marginLeft="180dp"
android:background="@drawable/arrow" android:layout_marginTop="5dp"/>
<EditText
android:id="@+id/txt_plz"
android:layout_width="50dp"
android:layout_height="30dp"
android:layout_below="@+id/txtWo"
android:layout_marginLeft="230dp"
android:layout_marginRight="15dp"
android:textSize="10dp" android:layout_marginTop="5dp"/>
<TextView
android:id="@+id/txtmax"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtland"
android:layout_marginLeft="8dp"
android:layout_marginTop="3dp"
android:text="@string/max"
android:textColor="#000"
android:textSize="10dp" />
<TextView
android:id="@+id/txtmaxKm"
android:layout_width="wrap_content"
android:layout_height="30dp"
android:layout_below="@+id/txtland"
android:layout_marginLeft="110dp"
android:layout_marginTop="3dp"
android:text="200 km"
android:textColor="#ff0000"
android:textSize="13dp" />
<SeekBar
android:id="@+id/seekBar"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignLeft="@+id/btn_arwLand"
android:layout_alignRight="@+id/txt_plz"
android:layout_below="@+id/txtlandBe"
android:max="490" />
</RelativeLayout> </RelativeLayout>
</ScrollView>
<RelativeLayout
android:id="@+id/rl_footer"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_below="@+id/scrollView"
android:background="#fff" >
<Button
android:id="@+id/btn_resetForm"
android:layout_width="120dp"
android:layout_height="30dp"
android:layout_alignParentLeft="true"
android:layout_below="@+id/rl_Wo"
android:layout_marginLeft="14dp"
android:layout_marginTop="20dp"
android:background="@drawable/resetform" />
<Button
android:id="@+id/btn_anze"
android:layout_width="120dp"
android:layout_height="30dp"
android:layout_below="@+id/rl_Wo"
android:layout_marginLeft="56dp"
android:layout_marginTop="20dp"
android:layout_toRightOf="@+id/btn_resetForm"
android:background="@drawable/redbutton"
android:text="Anzeigen"
android:textColor="#fff" /> </RelativeLayout>
</RelativeLayout>
Can anyone help me figure out what the issue is here?
A:
Here is my answer:
In the footer RelativeLayout, just add android:layout_alignParentBottom="true". (To keep the scrolling content from running underneath the footer, you can also give the ScrollView android:layout_above="@+id/rl_footer".)
<RelativeLayout
android:id="@+id/rl_footer"
android:layout_width="fill_parent"
android:layout_height="46dp"
android:layout_alignParentBottom="true"
android:background="#fff" >
<Button....../>
<Button...../>
</RelativeLayout>
|
534 F.2d 690
Carl QUALLS, Administrator of the Estate of Billy Don Trulland Manual Daniel Bunch, Plaintiffs-Appellants.v.Jack K. PARRISH et al., Defendants-Appellees.
No. 75-1590.
United States Court of Appeals,Sixth Circuit.
Submitted June 5, 1975.Decided April 19, 1976.
W. C. Keaton, Keaton & Turner, Hohenwald, Tenn., William Lamar Newport, Gullett, Steele, Sanford, Robinson, & Merritt, Nashville, Tenn., for plaintiffs-appellants.
James P. Diamond, James D. Todd, Jackson, Tenn., for defendants-appellees.
Before EDWARDS, McCREE and LIVELY, Circuit Judges.
McCREE, Circuit Judge.
1
This is an appeal from a judgment in favor of defendant law enforcement officers in a civil rights action. 42 U.S.C. § 1983. The district court, sitting without a jury, determined that the Sheriff of Decatur County, Jack K. Parrish, and two of his deputies, Jeffery L. Long and Jack French, did not violate plaintiffs'1 civil rights when, after a high-speed automobile chase, one of the deputies shot at plaintiff Bunch's automobile and killed Trull, plaintiff Qualls' decedent. The district court, on alternative grounds, determined that defendants lawfully employed deadly force in order to apprehend plaintiffs. In the concluding paragraph of its opinion, the district court said:
2
Finally, the Court finds that plaintiff has failed to carry the burden of proof that French acted unreasonably or with excessive force or authority under all the circumstances. There was a reasonable basis for the Sheriff's deputies to believe that a felony had been committed and that plaintiffs might be involved, or that plaintiffs had committed a felony in at least threatening the officers with assault by use of the automobile after the chase began. Bunch and Trull were at least equally to blame for the tragic consequence that ensued.
3
We affirm the judgment of the district court.
4
The events giving rise to this appeal took place in Decatur County, a rural area in the Western District of Tennessee, on the evening of March 31, 1972, and the early morning hours of April 1. Bunch and his passenger, Trull, were in Bunch's automobile returning to their homes in Perry County after spending the evening at a party in the neighboring town of Waverly. Bunch was driving his 1971 red Dodge Demon automobile east on Highway 20 in Decatur County. Evidence at trial showed that Trull was quite intoxicated, but Bunch was sober.
5
At about 11:30 p. m., Sheriff Parrish was informed by departmental radio that Wilbur Dean Ellis had been observed dragging a woman at gunpoint from a local restaurant and forcing her into his automobile. Since Parrish was busy on another assignment, he instructed the police dispatcher to inform Deputy French about the kidnapping. The dispatcher called French and requested him to locate and apprehend Ellis. French, a part-time deputy sheriff, used his own car, a white Chevrolet that bore no police identification, for this assignment. His car was neither equipped with a police siren nor with an emergency flashing light, and he was not wearing a police uniform.
6
French stopped to pick up another off-duty deputy, Jeffery L. Long, who also was dressed in civilian clothing. Together they proceeded to an all-night restaurant. Both officers knew and would have recognized Ellis and his female captive on sight.
7
The officers began their investigation at the restaurant, and, a few moments later, observed Bunch's red Dodge Demon automobile proceeding along the highway. Although the dispatcher had not suggested it, Deputy French thought that the kidnapper might be driving a Chrysler or Dodge automobile because Ellis worked at a Chrysler-Dodge dealership. Accordingly, because the vehicle that the officers saw was a Dodge automobile with two occupants, they speculated that it might have been the kidnapper's car.
8
The district judge found that appellant Bunch's automobile "was being driven in its proper lane of traffic, and neither Bunch nor Qualls (Trull) was guilty of any apparent violation of law when the deputies decided to 'check out' the vehicle and its occupants." Next, the district judge observed:
9
French drove up behind the Bunch car which bore Perry County license plates and attempted to pull it over; Deputy Long trying to signal them to halt inside the town of Parsons with flashlight, while French turned on his 'emergency' blinker lights and sounded his horn. Neither French nor Long identified the occupants of the Bunch car but Long yelled out to attempt to make themselves known as police officers.
10
In response to French's attempts to direct Bunch to the side of the road, Bunch rapidly accelerated his car and attempted to elude the deputies. Bunch testified that neither he nor Trull ever realized that their pursuers were police officers. A seven mile chase ensued during which both vehicles reached very high speeds. In fact, the Bunch vehicle went out of control on two curves during the course of the chase.
11
Bunch testified that several miles past Parsons on Highway 69, his car "spun out" on a curve and came to rest across the highway with the passenger side of his car facing the oncoming officers. French drove his automobile to within six or eight feet of Bunch's car with his headlights shining directly upon Bunch's car. Then Deputy Long got out of the car and walked toward Bunch's vehicle intending to open the passenger door in order to protect the kidnap victim if she were in the car. Before Long could reach the door, Bunch started his automobile again and sped away from the scene. The car "fishtailed" as it departed, knocking Long backward. After firing a warning shot, Long reentered Deputy French's automobile and they resumed the chase.
12
The high-speed pursuit continued until Bunch turned off the main highway and entered a semi-circular driveway. As Bunch entered the south entrance of the driveway, French drove to the north entrance and blocked it with his car. In the meantime, Bunch, discovering that the north exit was blocked by French's car, turned his automobile around to drive out the south end of the driveway.
13
The district judge narrated the last moments of this tragic episode:
14
French ran to the other driveway exit when Bunch turned around and headed out back toward the Highway, this time in a southerly direction as Long reported their pursuit to the dispatcher over the car radio communications system. As Bunch passed within about ten feet of French, the Deputy fired his .357 magnum pistol several times at the moving car. At least two of French's shots struck the Bunch car, one of them also striking Trull in the head resulting in almost instantaneous death.
15
In ruling in favor of the defendants, the district judge considered Tennessee case law that authorizes a law enforcement officer in whose presence a felony has been committed to use all means necessary to arrest the offender and to prevent his flight. Love v. Bass, 145 Tenn. 524, 529, 238 S.W. 94 (1921), Lewis v. State, 40 Tenn. 127 (1859). However, an "officer has no absolute right to kill, either to take, or prevent the escape of, a prisoner. If with diligence and caution the prisoner might otherwise be taken or held, the officer will not be justified for the killing, even though the prisoner may have committed a felony." Love v. Bass, 145 Tenn. at 529-30, 238 S.W. at 96, Reneau v. State, 70 Tenn. 720 (1879). Finally, the determination "(w)hether or not there was a reasonable necessity for the killing, and the reasonableness of the grounds upon which the officer acted in killing, are questions for the jury." 145 Tenn. at 530, 238 S.W. at 96.
The district judge found
16
that plaintiff has failed to carry the burden of proof that French acted unreasonably or with excessive force or authority under all the circumstances. There was a reasonable basis for the Sheriff's deputies to believe that a felony had been committed and that plaintiffs might be involved, or that plaintiffs had committed a felony in at least threatening the officers with assault by use of the automobile after the chase began.
17
Appellants present two issues on appeal: (1) whether the district court erred in determining that the officers had probable cause to attempt to stop appellant's automobile in order to question the occupants about the kidnapping and (2) whether the district court erred in determining that the officers lawfully used deadly force because they had probable cause to believe that a felonious assault had occurred during the chase. We believe that both issues must be resolved against appellants.
18
We find it unnecessary to determine whether the officers had probable cause to stop the Bunch automobile in order to arrest its occupants for kidnapping. The deputies' decision to follow the Dodge Demon resulted from a reasonable speculation that the kidnapper, who worked for a Chrysler-Dodge automobile dealer, might be driving a Dodge automobile. In addition, the vehicle was observed in the general vicinity of the abduction, and there were, as expected, two occupants in the vehicle. Appellant's sudden acceleration served to strengthen the suspicion generated by the other factors. These facts, viewed in the light of the officers' experience, could reasonably lead them to conclude that criminal activity might be afoot and that it was necessary to take "swift measures to discover the true facts and neutralize the threat of harm." Terry v. Ohio, 392 U.S. 1, 30, 88 S.Ct. 1868, 1884, 20 L.Ed.2d 889, 911 (1968), Adams v. Williams, 407 U.S. 143, 92 S.Ct. 1921, 32 L.Ed.2d 612 (1972). Accordingly, the officers were authorized to pursue and stop the automobile in order to question its occupants.
19
With respect to the district court's determination that the officers had probable cause to believe that a felonious assault had occurred, we determine that the evidence presented at trial supports the finding. Bunch's rapid departure from the scene of the "spinout," which caused the car to fishtail and knock Deputy Long backwards, could have been viewed as an assault and battery. The district court's findings do not make clear whether it found that a felonious assault occurred then, or at the end of the chase when Bunch drove towards French at the exit from the semi-circular driveway. At the semi-circular driveway, the district judge found that the Bunch automobile passed "within about ten feet of French," as it travelled toward the highway. Deputy French testified: "If I hadn't got out of the way (of Bunch's automobile), I would have been dead today." We also observe that the district court rejected as not credible Bunch's explanation that he was fleeing his pursuers because he believed that he was being chased by robbers.2
20
Accordingly, the district court determined that French had probable cause to believe that he had been feloniously assaulted by Bunch's car, and that under Tennessee law French could use deadly force to apprehend his assailant.
21
We begin our legal analysis by observing that federal, not state, law applies and determines the adequacy of defenses asserted in a civil rights action under 42 U.S.C. § 1983. Scheuer v. Rhodes, 416 U.S. 232, 237-38, 94 S.Ct. 1683, 1686-87, 40 L.Ed.2d 90, 97 (1974); Pierson v. Ray, 386 U.S. 547, 87 S.Ct. 1213, 18 L.Ed.2d 288 (1967); Monroe v. Pape, 365 U.S. 167, 81 S.Ct. 473, 5 L.Ed.2d 492 (1961); Jones v. Marshall, 528 F.2d 132 (2d Cir. 1975); Clark v. Ziedonis, 513 F.2d 79, 81 (7th Cir. 1975); Bell v. Wolff, 496 F.2d 1252 (8th Cir. 1974); Nelson v. Knox, 256 F.2d 312, 314 (6th Cir. 1958). Accordingly, although we are not bound by a state law privilege available to a police officer, nevertheless, as the Second Circuit has recently observed in a similar case: "(W)e still are by no means free to elevate whatever view of the privilege we think to be preferable to the constitutional level envisaged by § 1983." Jones v. Marshall, 528 F.2d at 138. If we were writing on a blank slate, we would adopt the rule that Judge Oakes proposed in that case. It "would limit the privilege (of police use of deadly force) to the situation where the crime involved causes or threatens death or serious bodily harm, or where there is a substantial risk that the person to be arrested will cause death or serious bodily harm if his apprehension is delayed." 528 F.2d at 140 (footnote omitted). See also Beech v. Melancon, 465 F.2d 425, 426-27 (6th Cir. 1972) (concurring opinion), cert. denied, 409 U.S. 1114, 93 S.Ct. 927, 34 L.Ed.2d 696 (1973).
22
However, we hold that in this case, as in Jones v. Marshall, supra, we should consider the law of the state in determining the federal law to be fashioned to determine the liability of the defendants. Our principal reason for agreeing with the district court that the Tennessee rule should be made the federal rule in this case is that a decision to the contrary would be unfair to an officer who relied, in good faith, upon the settled law of his state that relieved him from liability for the particular acts performed in his official capacity. Most of the state courts that have considered this question follow the old common law rule that deadly force may be used by a police officer only when he has reasonable grounds to believe that the person he is attempting to arrest has committed a felony. See, e. g., People v. Kilvington, 104 Cal. 86, 37 P. 799 (1894); Coldeen v. Reid, 107 Wash. 508, 182 P. 599 (1919); Union Indemnity Co. v. Webster, 218 Ala. 468, 118 So. 794 (1928); § 131 of the Restatement II of Torts. But see, e. g., Fields v. City of New York, 4 N.Y.2d 334, 175 N.Y.S.2d 27, 151 N.E.2d 188, 191 (1958) (Van Voorhis, J.); Petrie v. Cartwright, 114 Ky. 103, 70 S.W. 297 (1902); Commonwealth v. Duerr, 158 Pa.Super. 484, 45 A.2d 235 (1945) (cases requiring certainty that person against whom deadly force is used has committed a felony); ALI Model Penal Code § 3.07 (crime involved "use or threatened use of deadly force"); § 131, Restatement I of Torts (allowing deadly force only if felony "normally causes or threatens death or serious harm"); Moreland, The Use of Force in Effecting or Resisting Arrest, 33 Nebraska L.Rev. 408 (1954).
23
Although there is "a discernible trend in this century away from allowing the use of deadly force by a police officer in effecting a felon's arrest," Jones v. Marshall, 528 F.2d at 139, we do not view it as a mandate here where a kidnapping had been reported to require a higher standard under § 1983 than is afforded by the state rule to which the district judge referred.
24
Accordingly, we determine that the district judge was correct (1) in considering the Tennessee rule in fashioning the federal law to be applied in this case, and (2) in determining that French had probable cause to believe that he had been feloniously assaulted and was therefore privileged to use deadly force. We observe that we would probably reach the same result even under the rule suggested by the Second Circuit that would limit the privilege of using deadly force to circumstances where the crime causes or threatens death or serious bodily harm.
25
The judgment of the district court is AFFIRMED.
1
The terms "plaintiffs" and "appellants" will be used to refer to plaintiff-appellant Bunch and plaintiff-appellant Qualls' decedent Trull
2
Bunch testified that Trull, who was on parole, was drunk and did not want any contact with the police
|
cd into the artifactId directory of the application you just generated and run mvn install.
Before compiling the application, maven will generate the entities and a rest interface to the entities from the sample model.
For the impatient: to start the rest server and use the web interface, cd artifactId-application/artifactId-jetty and run mvn exec:exec
You can view and use the application at http://localhost:8080/demo/ui2.
Lastly there is a sample junit test org.umlg.test.TestDemo in src/test/java. You can execute it to see UMLG in action.
How the build works
The archetype will generate a maven project with 2 sub modules. The 2 modules are:
generator - This is responsible for loading the uml model and generating the entities.
application - This is where the generated entities and optional rest interface will go.
generator module
In the generator module there is only one java class DemoGenerator. Running this class' main method will load
the sample model and generate the corresponding entities into the application module.
For DemoGenerator to compile and be able to generate code the following maven dependency is required.
org.umlg.generation.JavaGenerator is the entry point to generating code in UMLG from UML. JavaGenerator.main takes 3 parameters.
The first is the uml model, the second is the output location of the generated source and the 3rd is the generation visitors.
UMLG generates code via a sequential list of visitors to the UML model. Each visitor implements some feature of the model in java.
It is possible to add custom visitors to customize the entities. This is explained here //TODO.
application module
The application module has 4 sub modules.
entities
This is where the generated entities are. It also contains a resources folder with umlg.env.properties.
It contains the commented-out property umlg.db.location=/tmp. UMLG defaults to the system's tmp directory. Change this
property to point the db to a location of your choice.
A sample junit test class org.umlg.test.TestDemo is provided. Run it to see UMLG in action.
To compile the entities the following dependency is required. Replace the artifact with the underlying blueprints graph db of your choice.
This dependency will bring in the underlying graph db and everything the UMLG entities need.
war
The entities
Each class in the uml model has a corresponding java entity. To create instances, just instantiate the generated
classes; the sample model's One and Many are used below.
One one = new One();
Many many = new Many();
For each property a standard setter and getter is generated.
one.setName("Coolraid");
many.setName("Joe");
one.setMany(many);
For each property with a multiplicity greater than 1 an adder method is generated. It differs from the setter in
that it appends to the collection, whereas the setter replaces the collection.
one.addToMany(many);
or just use the standard java collection 'add' method
one.getMany().add(many);
Similarly a remover is generated
one.removeFromMany(many);
or use the standard java collection 'remove' method
one.getMany().remove(many);
and lastly to persist the entities call,
UMLG.get().commit()
or rollback the transaction
UMLG.get().rollback()
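The setter-versus-adder semantics above can be sketched in plain Java without the UMLG runtime. The One and Many classes and the method names below simply mirror the sample model; the real generated entities are persistent, which this sketch omits.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch (no UMLG required) of the generated entities' semantics:
// the setter replaces the collection, the adder appends to it.
class Many {
    private final String name;
    Many(String name) { this.name = name; }
    String getName() { return name; }
}

class One {
    private final List<Many> many = new ArrayList<>();

    // setter: replaces whatever is currently in the collection
    void setMany(Many m) {
        many.clear();
        many.add(m);
    }

    // adder: appends to the collection
    void addToMany(Many m) {
        many.add(m);
    }

    // remover: removes a single element
    void removeFromMany(Many m) {
        many.remove(m);
    }

    // returns the live collection, so getMany().add(...) also works
    List<Many> getMany() {
        return many;
    }
}

public class Demo {
    public static void main(String[] args) {
        One one = new One();
        one.addToMany(new Many("Joe"));
        one.addToMany(new Many("Jane"));
        System.out.println(one.getMany().size()); // 2: the adder appended twice

        one.setMany(new Many("Solo"));
        System.out.println(one.getMany().size()); // 1: the setter replaced the collection
    }
}
```

The same contrast holds for the generated entities: use addToMany (or getMany().add) to accumulate links, and setMany only when you intend to replace the existing ones.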
UMLG.get() returns an instance of org.umlg.runtime.adaptor.UmlgGraph. UmlgGraph wraps the underlying blueprints graph and implements
'com.tinkerpop.blueprints.TransactionalGraph' and 'com.tinkerpop.blueprints.KeyIndexableGraph'. It is a singleton and is
always available on the current thread via UMLG.get(). |
Enhanced fluorescence probes based on Schiff base for recognizing Cu2+ and effect of different substituents on spectra.
Three enhanced fluorescence probes based on Rhodamine B-Schiff base structure were synthesized for detecting Cu2+. The corresponding detection limits were found to be 0.25 μM, 0.15 μM and 0.18 μM. Binding ratio and binding sites were determined by Job's plot and nuclear magnetic titration experiments. The binding constants obtained by the Benesi-Hildebrand equation were found to be 341.0 M-0.5, 1.8 × 104 M-1, and 265.4 M-0.5, respectively. As isomers, the probes' different effects on Cu2+ detection were investigated. By adjusting the position and the size of the substituent group, the effects of binding sites and steric hindrance on the complexation ratio, response time and detection limit were discussed. The optimal spatial binding structure with Cu2+ was obtained through energy calculations. The detection mechanism, Rhodamine B ring opening driven by complexation of the Schiff base with Cu2+, was confirmed. E. coli staining and detection in real water samples extended their applications. |
Friends, we are underway in our legal battle to strike out Adani’s fake ‘Indigenous Land Use Agreement’ (ILUA). In an extraordinary turn of events, Attorney General George Brandis intervened in our Federal Court hearing on Thursday to advantage Adani, asking the judge to delay our strike-out application. The Attorney General chose to cross the line between the Parliament and the Courts solely in Adani’s interests, and showed just how willing the Government is to take away our rights. They will stop at nothing. This fight is big. Please watch and share our video, and donate when you can to our Defence of Country Fund.
Without this ‘land deal’ being registered, Adani can’t move ahead. And right now, they have nowhere to go without the assistance of the Federal Parliament. We have a knockout punch, but Brandis and the Federal Government are doing everything they can to stop us from throwing it.
The good news is, we are now underway with legal proceedings, and Adani and the Queensland Government are in our sights.
At the very least they will be tied up for many months in litigation and will not have an ILUA to proceed with. And we can win this.
Adani did not negotiate and achieve our free, prior and informed consent. And we have four strong legal grounds against Adani’s pretend ILUA. As the law stands, Adani do not have a document that could even be considered legal. This is why the Turnbull government has been in hyperdrive trying to push the ‘Adani amendments’ through the Senate.
But now, in an act of extraordinary political interference, Brandis has reached into the court to intervene in our case and delay the proceeding.
This puts beyond doubt that Brandis and the [right-of-centre] Turnbull Government, backed all the way by Queensland’s Palaszczuk [Labor] Government, are working in billionaire Adani’s interests. Again, they are making Native Title - and Traditional Owners’ rights - all about Adani’s coal mine.
They are unrelenting in their support for the Carmichael project. But so are we in our determination to defend our country and safeguard the future from this mine of mass destruction.
We will not surrender. No means no, Adani.
Stand with us as we fight these corrupting and destructive influences, and protect our ancestral lands and waters.
Thank you for your generous and strong support.
Please stay with us and donate when you can to assist with our court battle.
Adrian Burragubba & Murrawah Johnson
for the Wangan & Jagalingou Traditional Owners Council
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
For previous linksunten reporting on the Adani mine controversy click here.
For more background provided by WGAR News click here.
Other reporting:
Labor offered Adani royalty deal in March - Adani takes brinkmanship to new levels - Australian Conservation Foundation vows to pursue all avenues to stop Adani loan - 'Honk to stop Adani': Protesters gather on hwy - Billions in the balance as megamine stalls - Will the real Queensland Premier stand up? - Various coverage - Adani defers Australian coal project investment decision - Half the Great Barrier Reef may have died in last two years - No decision reached yet on Adani royalties agreement - Adani rail line to Abbot Point not a priority, says Infrastructure Australia - Adani mine wouldn't receive 'royalty holiday', Deputy Premier Jackie Trad insists - Experts not asked to assess Adani rail - No royalty holiday for Adani: Trad - Voters reject subsidies for Adani coal mine, poll finds - New coalmines will worsen poverty and escalate climate change, report finds - Queensland Labor factional threat to $16.5 billion Adani mine - Just 7 per cent of voters want the government to invest in Adani mine: poll - True power of coal - Adani deal 'reworked' after faction revolt - Queensland Labor denies split over Adani 'royalties holiday' - Stand firm on Adani mine, Premier - 'No rift' in Qld Labor ranks over Adani - 'No rift' in Labor ranks over Adani deal - Queensland Cabinet 'not split' over Adani - 'Mockery': Turnbull government quietly cuts Adani's Abbot Point turtle controls - Coalition drops ball on Native Title - Deal for megamine divides Cabinet - Galilee Blockade targets contractor over Adani - Australian government may fund South African mine that would compete with Adani - Jackie Trad transcript of exchange over Adani with The Australian - There are better things to spend $1 billion on than the Adani coal mine - Adani vows to pay 'every cent' owed to Queensland as talks of royalty holiday emerge - Queensland government offers Adani 'royalties holiday': Report - Labor sweetens Adani deal by offering to throw in their principles - Adani protesters forced to stuff letter under door - Ignorant and petulant politicians are leading us to climate disaster - Nannas give Kevin a house (office) warming |
The role of intravenous hyperalimentation in intestinal disease.
This article has dealt briefly with intravenous nutrition in intestinal disorders. The indications for its use and techniques of nutritional assessment have been stressed. The use of intravenous hyperalimentation in a few of the more common diseases of the small bowel and colon has been discussed. Every patient with disease of the small or large intestine has some degree of dysfunction in the gastrointestinal tract. In many instances, this functional impairment interferes with normal ingestion or absorption of nutrients and predisposes the patient to malnutrition. The presence of malnutrition increases the morbidity and mortality of surgery and can be reversed by using intravenous hyperalimentation. Those patients with extreme short bowel syndrome secondary to intestinal disease or its surgical treatment may require parenteral nutrition at home. We stress the importance of a team approach to hyperalimentation. The evolution of a team of nutritional experts will improve the care of the patient and the education of the patient and physician and make nutritional support more readily available to those medical and surgical patients in need. |
By KATIE ZEZIMA
Published: November 25, 2006
Gov. Mitt Romney filed a lawsuit Friday asking the state's highest court to order the legislature to vote on a constitutional amendment banning same-sex marriage or to place it on the 2008 ballot if lawmakers do not take up the provision.
The legislature voted 109 to 87 on Nov. 9 to recess a constitutional convention before the measure was taken up, which appeared to kill it. The convention was recessed until Jan. 2, the last day of the legislative session.
More than 170,000 people have signed a petition asking the legislature to amend the state's Constitution to prohibit same-sex marriage. Massachusetts is the only state that permits it.
Mr. Romney, a Republican who did not seek re-election but is considering running for president, announced plans to file the lawsuit at a rally of same-sex marriage opponents on Sunday. The next day he sent a letter to the 109 lawmakers who had voted to recess, saying they were ''frustrating the democratic process and subverting the plain meaning of the Constitution'' by refusing to vote.
The lawsuit, filed by Mr. Romney, acting as a private citizen, and 10 other opponents of same-sex marriage, said the legislature had a ''legal duty to act'' on citizen petitions but had relied on procedural devices to ''avoid a vote and evade its constitutional duties.'' The legislature recessed before voting on the measure two other times this session.
The suit named the Senate president, Robert E. Travaglini, saying he had ''failed to carry out his ministerial duty to require final action'' on the petition. A spokeswoman for Mr. Travaglini, a Democrat, could not be reached for comment.
The suit asks the Supreme Judicial Court to ''step into the constitutional breach'' and direct Secretary of State William F. Galvin, also named in the suit, to place the amendment on the 2008 ballot if the legislature does not act.
Fifty of 200 legislators must vote in favor of the constitutional amendment in this session and in the next one for it to appear as a referendum on the 2008 ballot. Both sides have said the amendment has enough support to advance to the next session.
In a statement, Kris Mineau, the president of the Massachusetts Family Institute, which circulated petitions for the amendment, applauded the lawsuit. Mr. Mineau said that the recess was a ''deliberate effort by those in the legislature to kill the marriage amendment'' and that the legislature had failed to ''afford the citizens a fair up or down vote.''
Gary Buseck, legal director for Gay and Lesbian Advocates and Defenders, which won the lawsuit that led to the legalization of same-sex marriage before the same court, called the lawsuit frivolous.
''I can't see any way in which this lawsuit has any merit whatsoever,'' Mr. Buseck said. ''The bottom line is, the legislature acted in accordance with its rules and the Constitution and did the right thing to protect the now-declared constitutional rights of same-sex couples to marry. There's no getting around that.''
Lawrence M. Friedman, a specialist on Massachusetts constitutional law at the New England School of Law, said the court must decide if the State Constitution requires the legislature to vote. Professor Friedman signed a brief supporting same-sex marriage in 2003 but has not been involved in the issue since then.
''This case is not about same-sex marriage,'' he said. ''This is a case, first, about what the legislature is required to do, and second, if there is anything the court can do about it.
''It's not at all clear to me how this is something the court can remedy. It doesn't seem likely to me the court will order the legislature to take a vote or subvert constitutional procedures and just put it on the ballot.'' |
My friend, Ukulele Jake, wrote this song and it's wonderful.
Here's a video: https://www.youtube.com/watch?v=BfrETCbBB_8
Enjoy!
xoxo
Betty Mae
G E7 A7 D7
Well they took away my porn collection ‘cause they said it was obscene
G E7 A7 D7
Now I’ve got an infection, my balls are so blue they’re green
C G C G
Well it came to me in a realization, the other day it seems
G E7 A7 D7
I guess I’m a bit of a pervert coming apart at the seams
G E7 A7 D7
Got busted for full frontal nudity, I tried to keep my cool
G E7 A7 D7
It’s amazing what you can’t do in public, they surely don’t teach you in school
C G C G
I could’ve sworn I heard “get devested” while walking past the convent gate
G E7 A7 D7
How was I supposed to know she said “be blessed”? That nun should enunciate
G E7 A7 D7
I guess I’m a bit of a pervert coming apart at the seams
G E7 A7 D7
Don’t always know what I’m doing wrong or why everywhere I go there’s a scene
C G C G
So I spend my days in the county jail where at least it’s nice and clean
G E7 A7 D7
I guess I’m a bit of a pervert coming apart at the seams |
Air Duct Cleaning in Franklin Lakes, NJ
The Market Pacesetters in Air Duct Cleaning
You're able to trust in Air Duct Cleaning Guys to offer the very best products and services for Air Duct Cleaning in Franklin Lakes, NJ. You'll need the most advanced modern technology around, and our workforce of skilled contractors will offer this. We make certain that you get the most excellent solutions, the best price tag, and the very best quality materials. Call us by dialing 800-376-4281 and we'll be happy to review the choices, resolve your questions, and set up an appointment to start arranging the project.
Dedicated to Customer Care
Our aim will be to make sure that you'll be proud of the outcome of your project. We are going to learn about your situation and objectives of your undertaking, and set out to carry out the tasks to meet with your standards. If you have questions, we offer the right answers. We will be ready to assist you. We will answer the questions and concerns that you don't consider, as we understand exactly what we are doing, so we are able to predict your needs. With regards to making the appropriate choices for your plan, Air Duct Cleaning Guys is able to help out.
We Find Ways To Reduce Costs
Here at Air Duct Cleaning Guys, we understand that you'll want to remain in budget and lower your expenses everywhere it is possible to. Yet, being economical should not signify that you compromise on superior quality for Air Duct Cleaning in Franklin Lakes, NJ. We be certain that our money saving initiatives don't mean a lesser standard of quality work. If you work with our staff, you will get the advantage of our own valuable experience and top quality materials to ensure any project can last while saving your time and cash. For instance, we take care to stay away from costly mistakes, complete the task promptly to save time, and ensure you get the most suitable deals on materials and labor. Choose Air Duct Cleaning Guys when you'd like the ideal service at a minimal price. You can easily connect with us by dialing 800-376-4281 to start out.
For these and any other such services, please contact Air Duct Cleaning Guys on 800-376-4281.
Understand just what can be expected
To come up with the ideal choices for Air Duct Cleaning in Franklin Lakes, NJ, you must be informed. You should not go into it without understanding it, and it's good to know what you should expect. This is the reason we make every attempt to ensure that you comprehend the process and are not confronted with any kind of unexpected situations. Step one will be to call us at 800-376-4281 to arrange your task. During this call, you get all your questions resolved, and we can arrange a time to commence work. We consistently get there at the appointed time, all set to work closely with you.
If you're considering a project involving Air Duct Cleaning in Franklin Lakes, NJ, there are plenty of good reasons to call Air Duct Cleaning Guys. Our materials are of the highest quality, our cost-saving techniques are practical and effective, and our customer satisfaction ratings are unmatched. We have the skills you need to meet all of your goals and objectives. Dial 800-376-4281 whenever you need Air Duct Cleaning in Franklin Lakes, and we will work closely with you to carry out your job systematically.
Players' passion for our games is the fuel that keeps my engine running every day.
"Never let your sense of morals get in the way of doing what's right" - Isaac Asimov
Follow me on Twitter: @RAIBot01
Obafemi Awolowo
Chief Obafemi Jeremiah Oyeniyi Awolowo, GCFR (6 March 1909 – 9 May 1987), was a Nigerian nationalist and statesman who played a key role in Nigeria's independence movement, the First and Second Republics and the Civil War. The son of a Yoruba farmer, he was one of the truly self-made men among his contemporaries in Nigeria.
As a young man he was an active journalist, editing publications such as The Nigerian Worker, among others. After receiving his Bachelor of Commerce degree in Nigeria, he travelled to London to pursue a degree in law. Obafemi Awolowo was the first premier of the Western Region and later federal commissioner for finance and vice chairman of the Federal Executive Council during the Nigerian Civil War. He was three times a major contender for his country's highest office.
A native of Ikenne in Ogun State of south-western Nigeria, he started his career, like some of his well-known contemporaries, as a nationalist in the Nigerian Youth Movement, in which he rose to become Western Provincial Secretary. Awolowo was responsible for much of the progressive social legislation that has made Nigeria a modern nation. He was the first Leader of Government Business and Minister of Local Government and Finance, and first Premier of the Western Region under Nigeria's parliamentary system, from 1952 to 1959. He was the official Leader of the Opposition in the federal parliament to the Balewa government from 1959 to 1963. In 1963 he was imprisoned on accusations of sedition and was not pardoned by the government until 1966, after which he assumed the role of Minister of Finance. In recognition of all this, Awolowo was the first individual in the modern era to be named leader of the Yorubas (Yoruba: Asiwaju Awon Yoruba or Asiwaju Omo Oodua).
Early life
Obafemi Awolowo was born on 6 March 1909 in Ikenne, in present-day Ogun State of Nigeria. His father was a farmer and sawyer who died when Obafemi was about ten years old. He attended various schools, including Baptist Boys' High School (BBHS), Abeokuta; and then became a teacher in Abeokuta, after which he qualified as a shorthand typist. Subsequently, he served as a clerk at the Wesley College Ibadan, as well as a correspondent for the Nigerian Times. It was after this that he embarked on various business ventures to help raise funds to travel to the UK for further studies.
Following his education at Wesley College, Ibadan, in 1927, he enrolled at the University of London as an External Student and graduated with the degree of Bachelor of Commerce (Hons.). He went to the UK in 1944 to study law at the University of London and was called to the Bar by the Honourable Society of the Inner Temple on 19 November 1946. In 1949 Awolowo founded the Nigerian Tribune, a private Nigerian newspaper, which he used to spread nationalist consciousness among Nigerians.
Politics
Awolowo was Nigeria's foremost federalist. In his Path to Nigerian Freedom (1947) – the first systematic federalist manifesto by a Nigerian politician – he advocated federalism as the only basis for equitable national integration, and, as head of the Action Group, he led demands for a federal constitution, which was introduced in the 1954 Lyttelton Constitution, following primarily the model proposed by the Western Region delegation he led. As premier, he proved to be, and was viewed as, a man of vision and a dynamic administrator. Awolowo was also the country's leading social democratic politician. He supported limited public ownership and limited central planning in government, believing that the state should channel Nigeria's resources into education and state-led infrastructural development. Controversially, and at considerable expense, he introduced free primary education for all and free health care for children in the Western Region, established the first television service in Africa in 1959, and founded the Oduduwa Group, all of which were financed from the highly lucrative cocoa industry that was the mainstay of the regional economy.
Crisis in Western Nigeria
From the eve of independence, he led the Action Group as the Leader of the Opposition in the federal parliament, leaving Samuel Ladoke Akintola as the Western Region Premier. Disagreements between Awolowo and Akintola on how to run the Western region led the latter to an alliance with the Tafawa Balewa-led NPC federal government. A constitutional crisis led to the declaration of a state of emergency in the Western Region, eventually resulting in a widespread breakdown of law and order.
Excluded from national government, Awolowo and his party faced an increasingly precarious position. Akintola's followers, angered at their exclusion from power, formed the Nigerian National Democratic Party (NNDP) under Akintola's leadership. Having previously suspended the elected Western Regional Assembly, the federal government then reconstituted the body after manoeuvres that brought Akintola's NNDP into power without an election. Shortly afterwards Awolowo and several disciples were arrested, charged, convicted (of treason), and jailed for conspiring with the Ghanaian authorities under Kwame Nkrumah to overthrow the federal government.
Legacy
In 1992, the Obafemi Awolowo Foundation was founded as an independent, non-profit, non-partisan organisation committed to furthering the symbiotic interaction of public policy and relevant scholarship with a view to promoting the overall development of the Nigerian nation. The Foundation was launched by the President of Nigeria at that time, General Ibrahim Babangida, at the Liberty Stadium, Ibadan.
However, his most important bequests (styled Awoism) are his exemplary integrity, his welfarism, his contributions to hastening the process of decolonisation, and his consistent and reasoned advocacy of federalism, based on ethno-linguistic self-determination and uniting politically strong states, as the best basis for Nigerian unity. Awolowo died peacefully at his Ikenne home, Efunyela Hall (named after his mother), on 9 May 1987, at the age of 78, and was laid to rest in Ikenne amid tributes from across political and ethno-religious divides.
Honours
He has been featured on the 100 naira banknote since 1999.
In addition to a variety of other chieftaincy titles, Chief Awolowo held the title of the Odole Oodua of Ile-Ife.
Bibliography
Path to Nigerian Freedom
Awo – Autobiography of Chief Obafemi Awolowo
My Early Life
Thoughts on the Nigerian Constitution
The People’s Republic
The Strategy & Tactics of the People's Republic of Nigeria
The Problems of Africa – The Need for Ideological Appraisal
Awo on the Nigerian Civil War
Path to Nigerian Greatness
Voice of Reason
Voice of Courage
Voice of Wisdom
Adventures in Power – Book 1 – My March Through Prison
Adventures in Power – Book 2 – Travails of Democracy
My March Through Prison
Socialism in the Service of New Nigeria
Selected Speeches of Chief Obafemi Awolowo
Philosophy of Independent Nigeria
Memorable Quotes from Awo
The Path to Economic Freedom in Developing Country
Blueprint for Post-War Reconstruction
Anglo-Nigerian Military Pact Agreement
See also
Ikenne Residence of Chief Obafemi Awolowo
37 Wn.2d 79 (1950)
221 P.2d 832
MARSTON BALL et al., Respondents,
v.
STOKELY FOODS, INC., Appellant.[1]
No. 31133.
The Supreme Court of Washington, Department Two.
August 31, 1950.
Eggerman, Rosling & Williams and Joseph J. Lanza, for appellant.
Ward & Barclay, for respondents.
ROBINSON, J.
These were five separate actions, consolidated for purposes of trial, brought by pea growers in Skagit county, against Stokely Foods, Inc. Plaintiffs sought to recover damages allegedly sustained as a result of defendant's delay in harvesting their peas during 1947. From verdicts and judgments in favor of plaintiffs, defendant has appealed.
The suits are predicated upon written contracts, which are substantially identical in terms. They provide that the seller, or grower, should cultivate, for the benefit of the buyer, Stokely, specified quantities of peas. They read further:
"It is understood and agreed that all peas shall be planted, cut, and delivered when so ordered by Buyer or Buyer's representatives....
*81 "When these peas are ready for harvest, it is understood and agreed that the peas sold hereunder shall be mowed, hauled to viners and vined by buyer, provided however that Seller shall pay Buyer the sum of $25.00 per ton for said mowing, hauling and vining. Seller is to be paid only for the weight of peas after vining and deducting the weight of all dirt, pods and leaves that carry over with the peas or peas that will pass through a 10/32 inch mesh screen, and all peas that are over mature. Regardless of Buyer taking possession of said peas at the time of mowing, it is clearly understood that the intention of the parties hereto is that delivery will not be complete until said peas are graded and accepted at Buyer's plant in accordance with the terms of this contract."
There then follows a schedule setting up eight price grades for payment to the grower. These are based upon "tenderometer" readings, ranging from 91 to 140. The tenderometer is a machine which determines the hardness of the pea, the harder the pea, the higher the number shown on the tenderometer reading, and the lower the quality of the pea. This schedule reads as follows:
"TENDEROMETER READINGS
"From To & Including
0 90 $ per ton
91 95 $ 115.00 per ton
96 100 $ 105.00 per ton
101 105 $ 100.00 per ton
106 110 $ 90.00 per ton
111 115 $ 70.00 per ton
116 120 $ 60.00 per ton
121 130 $ 40.00 per ton
131 and over Buyers Option $ per ton"
It appeared from the evidence that peas are customarily divided into grades based upon this schedule. Thus, peas with a tenderometer reading of from 91 to 95 are rated Grade A; from 96 to 100, Grade B; from 101 to 105, Grade C; from 106 to 110, Grade D; from 111 to 115, Grade E; from 116 to 120, Grade F; from 121 to 125, Grade G; and from 126 to 140, Grade H.
The contracts also contained the following provision:
"In case of fire, strikes, or other labor disturbances, lack of transportation facilities, shortage of labor or supplies, perils of the sea, floods, earthquakes, action of the elements, *82 invasion, war, riot, insurrection, rebellion, interference of civil or military authorities, or passage of laws, or any unavoidable casualty or cause beyond the control of Buyer, affecting in any way the conduct of Buyer's business or freezing operations, Buyer will be excused from performance hereunder...."
Peas increase in hardness as they mature on the vine, and the essence of each of these complaints is that Stokely delayed harvesting of the peas beyond the time contemplated by the parties to the contract, which, it was alleged, would have been when the peas had reached an average grade of B. The result of this delay was that the peas, when harvested, were mostly in the low grades, and the plaintiffs, respondents here, received much less money for them than they would have received had the peas been harvested earlier. It is the contention of Stokely that the contracts imposed no obligation upon it to harvest the peas at any particular time. Respondents, on the other hand, urge that Stokely was required to do this when the peas were "ready for harvest," and the court instructed the jury to this effect, saying:
"The contracts in these cases now before you impose a duty upon the defendant to harvest the plaintiffs' crop of peas when the crop is ready for harvest, unless labor shortage beyond the defendant's control, action of the elements, or acts of the plaintiffs themselves, excuse performance of such duty. If you find that the defendant in any or all of these cases failed to harvest the crop when ready for harvest, and if you further find that its failure to do so was not excused by any labor shortage beyond the control of the defendant, action of the elements, or by any act of the plaintiffs themselves, then the defendant's failure to harvest the peas when they should have been harvested would constitute a breach of contract, and your verdict should be for such plaintiff or plaintiffs."
Appellant urges that the sentence in the contract beginning, "When these peas are ready for harvest, it is understood and agreed that the peas sold hereunder shall be mowed, hauled to viners and vined by buyer, ..." does not establish the time when the peas are to be harvested, but merely fixes the party who is to perform the necessary *83 harvesting acts, viz., the buyer. If appellant's construction of this sentence were to be adopted, however, the opening clause would be no more than surplusage. Apart from the familiar canon in the interpretation of contracts that every word and phrase must be presumed to have been employed with a purpose and must be given a meaning and effect whenever reasonably possible (Clark v. State Street Trust Co., 270 Mass. 140, 169 N.E. 897; Hollingsworth v. Robe Lbr. Co., 182 Wash. 74, 45 P. (2d) 614), to conclude that this clause is mere excess would not be in accord with the facts and circumstances of the case as a whole. From the evidence, it is apparent that the time of harvest is a matter of vital concern to the pea farmer. A delay in the harvest results in harder peas, and a correspondingly reduced financial return to him; indeed, as happened in the present instance, it may even result in an overall loss.
[1] It is a well-established rule that, where one construction would make a contract unreasonable or such as prudent men would not ordinarily enter into, while another, equally consistent with the language, would make it reasonable, fair, and just, the interpretation which makes it a rational and probable agreement must be adopted. Jacobs v. Teachout, 126 Wash. 569, 219 Pac. 38; Kandoll v. Penttila, 18 Wn. (2d) 434, 139 P. (2d) 616; Cohn v. Cohn, 20 Cal. (2d) 65, 123 P. (2d) 833. Application of this principle leads to the conclusion that, in the instant case, respondents' interpretation of the contract is the correct one.
Appellant contends, however, that the prior provision of the contract, to the effect that "all peas shall be planted, cut, and delivered when so ordered by Buyer or Buyer's representatives," gives the buyer the right to decide when the peas should be harvested. In Yeremian v. Turlock Dehydrating & Packing Co., 30 Cal. App. (2d) 92, 85 P. (2d) 515, a contract between a grower and buyer of grapes contained the following language:
"Grower to pick fruit starting on or about Oct. 1, 1935, and thereafter at such times and in such quantities as buyer directs."
*84 But the court said that evidence was properly received for the purpose of reconciling this provision
"... with the right of the plaintiff to have his entire crop delivered at a time, and in a manner which would, in the ordinary practice of good husbandry, return to plaintiff his selling price, based upon the harvest of the crop, within a period of time that would permit the harvesting of the greatest quantity of grapes";
and it concluded that the contract should not be given an interpretation
"... which would give the buyer the right to arbitrarily refuse delivery of grapes which met the standard requirements of the contract, by failing to supply boxes in sufficient number to permit the crop to be harvested within a reasonable time."
See, also, Alvernaz v. H.P. Garin Co., 127 Cal. App. 681, 16 P. (2d) 683, upon which the court in the Yeremian case relied.
In any event, nothing in the provision suggests that it was not the obligation of the buyer to order the delivery of the peas at the time when they were "ready for harvest"; and, in view of this latter provision in the contract, we are constrained to conclude that the contract did impose this obligation upon him. The court's instruction, on the matter above quoted, therefore, would seem to have been entirely correct.
But appellant asserts, and rightly so, that nothing specifically included in the written contracts required Stokely to harvest the peas when they were at the top three grades, and urges that respondents have no just cause to complain because they were harvested at a later date. However, the solution to this problem would seem to depend upon the interpretation the jury felt should be given the term "ready for harvest." It is clear that the exact meaning of this phrase does not appear from the context of the contract.
[2] While parol evidence will not be admitted to contradict, or vary, the terms of a written instrument, it is always admissible, in case of ambiguity, for the purpose of ascertaining the sense in which the parties intended to use *85 the ambiguous term or terms. Darling & Co. v. Frank Carter Co., 208 Wis. 222, 242 N.W. 519. In order to determine whether the crops were considered to be ready for harvest, it was necessary and proper to produce extrinsic evidence tending to show when that time would arrive. Such evidence did no more than explain the contract, and was admissible in order that the jury might be assisted in ascertaining the meaning intended by the parties themselves. See Murphy v. Schwaner, 84 Conn. 420, 80 Atl. 295; Ganson v. Madigan, 15 Wis. 158, 82 Am. Dec. 659; Klueter v. Joseph Schlitz Brewing Co., 143 Wis. 347, 128 N.W. 43, 32 L.R.A. (N.S.) 383; Gile v. Tsutakawa, 109 Wash. 366, 187 Pac. 323.
[3] In their complaints, respondents alleged "that the said crop, if harvested when ready for harvest, would have yielded to plaintiff an average return of B grade of peas." There was ample evidence in the record from which the jury could have found that this was the interpretation the parties placed on the term "ready for harvest." Marston Ball, one of the plaintiffs, testified as follows:
"Q. Mr. Lanza asked you when those peas were ready for harvest, what is your answer? A. When they were B's. Q. An average B? A. That is right."
Robert Henry, who had been assistant field man for Stokely's in 1947, testified to this effect:
"Q. Do you know when a pea was considered ready for harvest in 1947? A. Yes. I think I do. Q. And when was that at what stage? A. Well, in my opinion, the first three grades, and possibly to include the first four. Of course, if you include the first four I mean in my opinion this thing can be done right on the beam; I mean, the aim, of course, was to get the peas in the first three brackets, I suppose. Q. That would be in the A, B, and C brackets? A. That would be my opinion from what I was told and had been led to believe. Q. Can you answer why that would be the time the pea would be ready to harvest? A. I think it is a twofold proposition. First of all, it is the kind of pea that the packer wanted, that is, the market conditions were such that he was in need of a quality pea in order to sell. Second, it was the kind of pea that the farmer could make the money on. Q. That is the top three grades, is that correct? A. I would say so, yes, that would be my opinion."
*86 The testimony indicated that, whereas, in 1946, the processors had been chiefly concerned with getting a large quantity of peas, in 1947 they were principally interested in getting peas of higher quality, and that this was reflected in the price schedules set forth in the contracts with the growers. Alex Gordon, who was plant manager for Stokely's Bellingham plant in 1947, testified, on cross-examination, as follows:
"Q. Now, Mr. Gordon, you had sent instructions to your field men to secure as high a quality pea as possible, is that correct? A. Yes. Q. That was what you wanted in 1947? A. Yes. Q. That same thing was not true in 1946, was it? A. No, not to the same extent. Q. In 1946 weren't you after quantity rather than quality? A. No, I wouldn't say that. We have always been after quality. Q. But you were also after quantity, more so than in 1947? A. More so in 1946 than in 1947. Q. In making up the contracts with the growers in 1947 you put a premium on quality peas? A. I did not. Q. Well, the company did put a premium on quality peas? A. Yes. Q. And that was done because you wanted the growers to send you peas in the top grades, and it was made profitable to them to raise high grade peas, isn't that right? A. I suppose so. Q. And you did not want G and H peas? A. Not if it was possible. Q. For that reason the price was set down to $40.00 and $50.00? A. Yes. Q. So that the growers would be penalized if they did not raise top quality peas, is that correct? A. Yes."
It is true that there was evidence to controvert this testimony. Martin Olson, Stokely's field man, testified that it probably would have been impossible, even with plenty of help, to average a B grade for everyone; and Alex Gordon, on the day following the day on which he gave the above-quoted testimony, stated, on redirect examination, that he did not think he should answer whether or not the price range going down to forty dollars per ton was intended to penalize anyone ("I am not a judge of that"); and stated further that, if he had previously said anything to the contrary, he must have misunderstood the question. Nevertheless, the jury had the right to weigh all of the testimony on the matter and reach its own conclusions. There having been evidence to support a decision that it was understood *87 between the parties that "ready for harvest" would mean a time when the majority of the peas were in the top three grades, we cannot question the right of the jury to put that interpretation upon the contract. The court's instructions Nos. 4 and 6 permitted them to do so.
Appellant urges that this could not have been the intention of the contracting parties, for the reason that the price schedule specifically provides rates of payment for peas ranging to and including Grade H. However, if prevented from timely harvesting by one of the circumstances specified in the excusatory clause, appellant might well have been justified in harvesting the peas past maturity, and, in such a situation, would not, of course, be liable to the same extent as if they had been harvested in the higher grades. In any case, it is clear from the testimony that minor variations in soil and elevation, even on the same farm, would render it unlikely that, at whatever time the peas were harvested, they would all be in the same grade. The jury could easily have found, under the contract, that the peas should have been harvested when most of them were at a comparatively high rating, even though some others, in the same field, might have rated G or H at the time.
Appellant objects to an instruction by which the jury was told that, even if there was a labor shortage at harvest time, this would not relieve Stokely from liability if the labor shortage
"... resulted from the defendant's own mismanagement or from improper planning on the defendant's part, or from any other cause which the defendant could have reasonably prevented by proper management and planning."
The objection is that this issue was not within the pleadings. But the contracts, copies of which were attached to the complaints, stated that, "in case of ... shortage of labor... or any unavoidable casualty or cause beyond the control of the buyer," the buyer should be excused from performance.
[4] Under the doctrine of "noscitur a sociis," the meaning of words may be indicated or controlled by those with *88 which they are associated (Nunner v. Erickson, 151 Ore. 575, 609, 51 P. (2d) 839, 852; 3 Williston on Contracts 1780, § 618); and applying this maxim, it does not seem unfair to assume that the parties would not have intended this excusatory clause to apply to a situation where a labor shortage was caused as a result of Stokely's own mismanagement. Appellant, in its answers (in language which differed slightly from case to case), pleaded, by way of affirmative defense, as follows:
"That if the peas of plaintiff were not timely harvested, and if plaintiff sustained any damages on account thereof, the same was due entirely to causes beyond the control of defendant, viz., unusual and abnormal dry and hot weather conditions causing said crop of plaintiff to mature much sooner than was normally expected, combined with a shortage of labor adequately to take care of the harvesting of plaintiff's peas, despite diligent efforts on defendant's part to procure sufficient laborers to harvest said crop."
In their replies, respondents denied these allegations. This raised an issue not only as to the fact of the existence of the labor shortage, which was disputed, but also as to whether or not such labor shortage, if any, had been brought about as a result of causes which the appellant could reasonably have prevented.
Appellant also argues that the evidence was insufficient to justify presentation of this issue to the jury. This, however, was not the case. There was, for example, evidence that Stokely had originally made a request from the local farm labor office for fifty Mexican laborers, and that subsequently it had reduced its order to thirty. It was the position of Stokely that it was told that fifty men would not be available, or at least that the suggestion was made that it should reduce its request to the lowest number of men practicable; but Mrs. Lattimer, the office placement woman for the farm labor office, testified that she knew nothing of this, and that had Stokely asked for more men every effort would have been made to obtain them. Again, the issue presented was one of fact for the jury, and the instruction submitting it was proper.
*89 [5] Appellant strenuously objected to the introduction of testimony and other evidence tending to show that, by reason of necessary expenses and deductions, respondents Vogler and Schroeder lost money on their 1947 pea crop. This evidence was not admitted to prove damages, but to show that the pea farmers lost money if their peas were harvested after they had been allowed to grow hard. As bearing on the probable understanding of the parties concerning the time when the peas were to be harvested, perhaps the most significant of the issues in the case, it would seem that this testimony was properly admitted.
There remains the issue of damages. It appears to have been agreed that, assuming that appellant was obligated to harvest the peas at an earlier date, the measure of damages would be the difference between the amount that respondent-growers would have received for them at that time, and the amount which they actually received. In order to prove their damages, respondents introduced Dr. Leonard Carstens, an agricultural expert whose qualifications were duly set forth. He testified that peas gain in weight as they become harder, and thus will increase the yield from a given field. He then took the actual yields received by respondents when their peas were harvested at the higher grades, and determined what the yield would have been had they been harvested at Grade B. From this, employing the price schedule set forth in the contracts, he estimated what the amount of their financial return would have been under these circumstances. As the increase in yield at Grade H did not offset the reduction in price, the return at Grade B, of course, would have been considerably higher.
Appellant argues that Dr. Carstens' estimates were not accurate because they were based, in part, upon studies made by one Dr. Pollard in Utah, purporting to determine the correspondence between increase in maturity and increase in weight of peas. Dr. Pollard himself was produced as a witness, and testified that he did not consider his data reliable when applied to other varieties of peas than those he had studied, grown in areas other than the Utah valley in which he made his experiments. He stated, however, *90 that the trend would be the same for all varieties and confessed unfamiliarity with conditions in Skagit county. Dr. Carstens, who was familiar with the situation there, testified that Dr. Pollard's studies corresponded with his own observations.
[6] It is unnecessary to detail the testimony of these two witnesses further, for, clearly, the situation presented was one in which the jury was presented with a choice between the testimony of two experts. Undoubtedly, the testimony of Dr. Pollard was entitled to great weight, but the jury was not obliged to select it in favor of Dr. Carstens' testimony, particularly as it appeared that, in using Dr. Pollard's studies, Dr. Carstens was employing the most relevant data available. Probably, it would have been impossible to determine exactly what the peas would have weighed had they been harvested at any given previous time; but, in estimating damages in a case such as this, the precise amount of damage need not be shown where the circumstances do not permit of such careful measurement. See Jones v. Shell Oil Co., 164 Wash. 543, 3 P. (2d) 141; McCormick on Damages, p. 103, ch. 21, § 27. Dr. Carstens' estimates appear to have been the result of careful study and as accurate as it was possible for them to be.
[7] But the jury, in awarding its verdicts, did not follow these estimates exactly, and appellant argues that this indicates that conjecture and speculation played more than a permissible part in its decisions. In the case of respondents Wallace Bros., Vogler, and Wear, the jury awarded damages in a sum less than Dr. Carstens had determined they would have been entitled to had the peas been harvested at an average grade of B. Suffice it to say as to these, that the verdicts were well within the proof. The jury could have awarded the full sum established by Dr. Carstens, and the award would have been justified by the evidence. In such a situation, appellant has no cause to complain because the jury selected a lesser amount. Lagomarsino v. Pacific Alaska Nav. Co., 100 Wash. 105, 170 Pac. 368; O'Connor v. Tesdale, 34 Wn. (2d) 259, 209 P. (2d) 274.
*91 Respondent Monte Schroeder, however, who originally claimed $2,439.36, was awarded $1,939.56, although, according to Dr. Carstens' estimates, he was entitled to only $1,757.86; and respondent Marston Ball, who had originally claimed $1,416.32, was awarded $1,292.41, although, according to Dr. Carstens' estimates, he was entitled to only $1,053.64.
Respondents did not contend that they were entitled to a sum greater than the amount the peas would have brought in if harvested at a grade of B, and the only evidence they presented to show what this sum would have been was the testimony of Dr. Carstens. Therefore, it would seem that, unless there were special circumstances changing the situation, their recovery should have been limited to the amounts specified by Dr. Carstens. In Schroeder's case, the testimony showed that seven acres of his peas were almost a total loss, having hardened even beyond Grade H at the time they were harvested. There was no claim made in Schroeder's complaint for compensation for peas that were totally lost, and this item was not reflected in Dr. Carstens' computations. The evidence concerning the fact of the loss came in without objection, but, when, in his closing argument, counsel for respondents suggested to the jury that this damage was in addition to the damage as specified by Dr. Carstens, and attempted to estimate the amount thereof in monetary terms, appellant's counsel objected that such a claim was beyond the issues in the case. Under the circumstances, this objection would seem to have been well-founded.
In Ball's case, it appeared that the peas had become so hard when they were harvested that they all graded H, and there was some suggestion in the testimony that, when peas become harder, many of them will not thresh and are wasted. Respondents contend that this justifies the excess of $238.77 over Dr. Carstens' estimate, which was awarded by the jury to Ball. But not only was there no claim made in Ball's complaint for peas wasted in this fashion, no evidence whatever was introduced to show what percentage of peas would be lost as the peas reached G and H in grade, and, of course, there was no evidence presented to indicate *92 what this would mean in terms of financial loss. In point of fact, it was not Ball, but one of the other respondents, Wallace, who mentioned that, at a time when his peas were running mostly in Grade F, he "noticed a lot of hard ones unthreshed." Clearly, damages suffered from this alleged loss were "not proved with reasonable certainty." Cuschner v. Pittsburgh-Hickson Co., 91 Wash. 371, 157 Pac. 879.
[8] Except for these items which the jury had no right to consider, the situation in which Schroeder and Ball were placed was not substantially different from that of the other plaintiffs, and there appears to be no justification in the evidence for awarding them a greater amount than that which Dr. Carstens stated would have accrued to them had their peas actually been harvested at Grade B. For this reason, the recoveries in the Schroeder and Ball cases will be reduced to $1,757.86 and $1,053.64, respectively. See Alexander v. Al G. Barnes Amusement Co., 105 Wash. 346, 177 Pac. 786.
The judgments in the Schroeder and Ball cases are affirmed as modified; the judgments in the other cases are affirmed in their entirety. All of the respondents will recover their costs in this court.
SIMPSON, C.J., MALLERY, HILL, and HAMLEY, JJ., concur.
NOTES
[1] Reported in 221 P. (2d) 832.
#pragma once

#include <cstdint>
#include <cstdlib>
#include <cstring>

#if defined(PLATFORM_WINDOWS)
  #include <windows.h>
#else
  #include <sys/mman.h>
#endif

namespace nall {

struct bump_allocator {
  static constexpr uint32_t executable = 1 << 0;
  static constexpr uint32_t zero_fill  = 1 << 1;

  ~bump_allocator() {
    reset();
  }

  explicit operator bool() const {
    return _memory;
  }

  //free the backing buffer entirely
  auto reset() -> void {
    free(_memory);
    _memory = nullptr;
  }

  auto resize(uint32_t capacity, uint32_t flags = 0) -> bool {
    reset();
    _offset = 0;
    _capacity = (capacity + 4095) & ~4095;  //round up to 4096-byte page alignment
    _memory = (uint8_t*)malloc(_capacity);
    if(!_memory) return false;
    if(flags & executable) {
      #if defined(PLATFORM_WINDOWS)
      DWORD privileges;
      VirtualProtect((void*)_memory, _capacity, PAGE_EXECUTE_READWRITE, &privileges);
      #else
      mprotect(_memory, _capacity, PROT_READ | PROT_WRITE | PROT_EXEC);
      #endif
    }
    if(flags & zero_fill) {
      memset(_memory, 0x00, _capacity);
    }
    return true;
  }

  //release all acquired memory (the backing buffer itself is kept)
  auto release(uint32_t flags = 0) -> void {
    _offset = 0;
    if(flags & zero_fill) memset(_memory, 0x00, _capacity);
  }

  auto capacity() const -> uint32_t {
    return _capacity;
  }

  auto available() const -> uint32_t {
    return _capacity - _offset;
  }

  //for allocating blocks of known size
  auto acquire(uint32_t size) -> uint8_t* {
    #ifdef DEBUG
    struct out_of_memory {};
    if(((_offset + size + 15) & ~15) > _capacity) throw out_of_memory{};
    #endif
    auto memory = _memory + _offset;
    _offset = (_offset + size + 15) & ~15;  //round up to 16-byte alignment
    return memory;
  }

  //for allocating blocks of unknown size (eg for a dynamic recompiler code block)
  auto acquire() -> uint8_t* {
    #ifdef DEBUG
    struct out_of_memory {};
    if(_offset > _capacity) throw out_of_memory{};
    #endif
    return _memory + _offset;
  }

  //size can be reserved once the block size is known
  auto reserve(uint32_t size) -> void {
    #ifdef DEBUG
    struct out_of_memory {};
    if(((_offset + size + 15) & ~15) > _capacity) throw out_of_memory{};
    #endif
    _offset = (_offset + size + 15) & ~15;  //round up to 16-byte alignment
  }

private:
  uint8_t* _memory = nullptr;
  uint32_t _capacity = 0;
  uint32_t _offset = 0;
};

}
Dave High, 61, isn’t on Twitter. He doesn’t even own a cell phone. But on his 35th business anniversary, he experienced how powerful social media can be when combined with human kindness.
Earlier this week, at his health food store in an outdoor shopping centre in northwest Fresno, Calif., Dave put out cupcakes and drinks for passing customers to celebrate the store's anniversary.
Nobody showed up.
That is, until Kayla Jackson, 23, the wife of a security guard at the shopping centre who often takes breaks inside Dave’s shop, noticed.
“He was just sitting there, he had this sad look on his face and he kept repeating to us, ‘it’s our anniversary, it’s our 35th anniversary today. It’s really slow; it’s one of our slowest days,’” Kayla told CTVNews.ca in a phone interview.
Deciding to do something about it, Kayla took to Twitter and put out this call:
"This is Dave, he owns Sunrise Health. Today is his 35th anniversary, and he was expecting people to come in, and no one showed up. I just got here, and he brought everything out to celebrate. Can we get him some recognition?"
This is Dave. He owns Sunrise Health in Fresno, Ca. Today is his stores 35th anniversary and he was expecting people to come in. He bought cupcakes, soda & decorations and NO ONE showed up. I just got here and he brought everything out to celebrate. Can we get him so recongnition pic.twitter.com/MOSevdzqZE — kayla (@kaylaaa_jackson) July 26, 2018
Not stopping there, Kayla continued to tweet updates, posting more pictures and Dave’s phone number, encouraging people to give him a call and congratulate him.
Within two hours, her tweets went viral, Dave’s store was full and Dave’s phone wouldn’t stop ringing.
Update: Dave is so happy we had some friends come over to the shop we are having cupcakes and smelling Essential Oils pic.twitter.com/GhjOWyEBoh — kayla (@kaylaaa_jackson) July 26, 2018
Dave and his wife, Christina, run the small mom-and-pop shop together. They have five children and one grandchild. They met at Fresno State and opened the store after graduating in 1983.
In an interview with ABC Action 30 News, Dave admits he doesn't know much about social media or advertising, attributing his longevity to old-fashioned customer service.
“I can memorize dates, or names, and I try to be friendly to people, and I’ve got the Irish background. Maybe it’s just stubbornness,” Dave told Action 30 News.
Kayla described Dave as “amazing.”
“He is just a really friendly guy. It’s so easy to talk to him and get along with him. You just get lost in time talking to him,” she said.
When asked if she would continue to help Dave with social media, Kayla told CTVNews.ca she's happy to do so, if that's what Dave wants.
Q:
Is there a map of the individual motor axons in the limbs?
As of right now, I can only find a map of nerve fibers, but not necessarily the individual neuron axons.
For example, here's a map of the nerves in the arm and hand.
http://www.innerbody.com/anatomy/nervous/arm-hand
A:
No, it is not possible to map axons at that fine a scale across individuals.
You mention a possible use for controlling bionic arms. There are a bunch of problems with that approach.
Like @kmm says, at the fine level there is a ton of variation.
I think you are failing to appreciate just how many axons there are, even going to a particular muscle; while each muscle fiber only gets input from one axon, there are thousands of fibers in a single muscle.
The EEG-based approaches to prosthetic control are most relevant when there is a problem with spinal function, for example caused by an injury. In this case, there is no connection between the motor neurons and the brain.
Axons often degrade if their targets are damaged. Therefore, if a hand is lost in an accident, for example, the neurons projecting to the muscles of the hand will atrophy.
If the axons themselves are in place, there are already prosthetics that operate based on contractions of existing muscles. It takes some time to train users of these devices, but most movements you make involve all sorts of muscles you may not realize are involved, and brain plasticity makes it quite possible for muscles in, for example, the shoulder to control a bionic hand after an amputation.
If for some reason the issues I raised don't apply, there is another alternative: mapping. You don't need to know which axons you want information from, you only need to be able to record their activity (which you will need anyways to operate the prosthetic). Record the activity while the patient tries to move their finger, for example: now you know the signal you should transduce into a finger movement. The technical challenge, then, is being able to isolate that signal from all the other signals from all the other axons in the same nerve. This challenge is related to, though not precisely the same as, the challenge of isolating a signal in the EEG.
1. Field of the Invention
The present invention relates to a video signal processing circuit, a display apparatus, a liquid crystal display, a projection type display apparatus, and a video signal processing method suitable for improving image quality defects caused by a lateral electric field that occurs in a matrix drive type display panel, for example, a liquid crystal display apparatus or the like.
2. Description of the Related Art
A so-called lateral electric field occurs at a signal boundary region (namely, between the electrodes of two adjacent pixels) where a potential difference arises in the video signal supplied to the individual pixels of a matrix drive type display apparatus. This lateral electric field disturbs the electric fields applied to the electrodes of the individual pixels, resulting in image quality defects. These defects appear as shading that depends on the voltage difference between the drive voltage supplied to a pixel under consideration and those supplied to the adjacent pixels according to the video signal. FIG. 1A, FIG. 1B, and FIG. 1C show examples in which image quality defects occur.
FIG. 1A shows an example of a display image 1 corresponding to an input video signal and an example of a display image 1A where an image quality defect occurs both on a display apparatus having, for example, 7 (vertical)×7 (horizontal) pixels. 3×5 pixels at a center portion of the display image 1 corresponding to the input video signal have a black level as their luminance and pixels adjacent thereto have a gray level as their luminance. In contrast, pixels 2a to 2c and pixels 2d to 2h that are formed adjacent to the left and below, respectively, of the 3×5 pixels at the center portion of the display image 1A where the image defect occurs have a white-blurring display pattern.
FIG. 1B shows an example of a display image 11 corresponding to an input video signal and an example of a display image 11A where an image defect occurs in a display apparatus having, for example, 7 (vertical)×7 (horizontal) pixels. Likewise, 3×5 pixels at a center portion of the display image 11 corresponding to the input video signal have a black level as their luminance and pixels adjacent thereto have a white level as their luminance. In contrast, pixels 12a to 12e and pixels 12f to 12h that are formed adjacent above and to the right, respectively, of the 3×5 pixels at the center portion of the display image 11A where an image quality defect occurs have a black-blurring display pattern.
FIG. 1C shows an example of a display image 21 corresponding to an input video signal and an example of a display image 21A where an image quality defect occurs on a display apparatus having, for example, 7 (vertical)×7 (horizontal) pixels. 3×5 pixels at a center portion of the display image 21 corresponding to the input video signal have a gray level as their luminance and pixels adjacent thereto have a white level as their luminance. In contrast, pixels 22a to 22g that are formed adjacent above and to the right of the 3×5 pixels at the center portion of the display image 21A have a black-mixed display pattern.
FIG. 2A and FIG. 2B are schematic diagrams showing a theory of occurrence of an image quality defect phenomenon in a liquid crystal display apparatus. FIG. 2A shows microscopic photos of adjacent pixels 31 and 32. FIG. 2B shows alignments of liquid crystal molecules of the pixels 31 and 32. A lateral electric field 33 occurs between the pixels 31 and 32. The lateral electric field 33 disturbs the alignments of liquid crystal molecules 34a and 35a, which tilt leftward, into those of liquid crystal molecules 34b and 35b, respectively. In addition, the lateral electric field 33 causes liquid crystal molecules 34c and 35c that are present in the vicinity of the boundary of the pixel 31 and pixel 32 to be aligned perpendicularly to the lateral electric field 33. Since molecules aligned parallel or perpendicular to the axis of a polarizing plate, such as the liquid crystal molecules 34c and 35c, occur in the pixels 31 and 32, their transmittances change, resulting in the occurrence of black lines 36 and 37. According to such a theory, in the liquid crystal display apparatus, the lateral electric field causes the alignment directions of liquid crystal molecules to rotate, and the disturbance of the alignment directions causes a domain-caused image quality defect. When one pixel is composed of three sub-pixels of the three primary colors R (Red), G (Green), and B (Blue), a lateral electric field also occurs between two adjacent sub-pixels of these primary colors.
Next, with reference to FIG. 3A and FIG. 3B, an outlined structure of a liquid crystal display apparatus will be described. FIG. 3A is an exploded perspective view of a liquid crystal display apparatus. FIG. 3B is an enlarged view of a principal portion of FIG. 3A. As shown in FIG. 3A and FIG. 3B, a liquid crystal display apparatus 40 includes a liquid crystal layer 41, an upper glass substrate 42, a lower glass substrate 44, and polarizing plates 46 and 47. The upper glass substrate 42 and the lower glass substrate 44 are aligned with the liquid crystal layer 41. The polarizing plates 46 and 47 are aligned with the upper glass substrate 42 and the lower glass substrate 44, respectively.
As shown in FIG. 3A and FIG. 3B, a transparent electroconductive film 43 is formed on the upper glass substrate 42 as a common electrode that is common to the entire pixel pattern. In addition, as shown in FIG. 3A and FIG. 3B, formed on the lower glass substrate 44 are pixel electrodes (pixel patterns) 48n and 48n+1 and thin film transistors (TFTs) 49n and 49n+1, the switch devices that drive the pixel electrodes corresponding to the pixels. Moreover, formed on the lower glass substrate 44 are patterns of X electrodes (scanning lines) Xn and Xn+1 that are gate inputs of the thin film transistors 49n and 49n+1, and Y electrodes (signal wires) Yn and Yn+1 that are source inputs thereof. The polarizing plates 46 and 47 are disposed such that their axes 46a and 47b are perpendicular to each other.
In such a structure, only liquid crystal molecules 41a and 41b in an area sandwiched between a pixel electrode and the common electrode in the liquid crystal layer 41 are affected by the electric field between the pixel electrode and the common electrode, and thereby their alignments are changed, functioning as a liquid crystal shutter of one pixel. A lateral electric field occurs between the Y electrodes or pixel electrodes of two adjacent pixels due to a potential difference of the video signal supplied to the two adjacent pixels.
Liquid crystal display apparatus are mainly categorized as a perfect vertical alignment type and a tilt alignment type. The perfect vertical alignment type is referred to as so-called VA (Vertical Alignment). In this type, liquid crystal molecules in the liquid crystal layer are aligned perpendicularly to the substrate with an alignment film (not shown) in the state that no voltage is applied to an electrode corresponding to a pixel. In other words, tilt angles θ of the liquid crystal molecules 41a and 41b to the substrate are 90 degrees. If a voltage is applied to an electrode corresponding to the pixel, since the direction in which liquid crystal molecules tilt (alignment direction) is free, the alignment directions of the liquid crystal molecules are not matched.
On the other hand, in the tilt alignment type, an alignment film (not shown) causes the liquid crystal molecules of the liquid crystal layer to tilt slightly from the normal direction of the substrate in the state that no voltage is applied to an electrode corresponding to a pixel, and to be aligned nearly level with the substrate in the state that a voltage is applied. In other words, as shown in FIG. 3B, pre-tilt angles θ of the liquid crystal molecules 41a and 41b against the substrate are smaller than 90 degrees. When the pre-tilt angles are present in the liquid crystal molecules 41a and 41b, if the liquid crystal display apparatus 40 is viewed from the front (in the direction normal to the substrate), the liquid crystal molecules 41a and 41b tilt in a predetermined direction. When a voltage is applied to an electrode corresponding to a pixel in this state, the directions in which the liquid crystal molecules 34a and 35a shown in FIG. 2B tilt depend on the pre-tilt angles. Since the alignment directions of the liquid crystal molecules are decided in one direction, light that transmits through the pixels becomes uniform and thereby the liquid crystal display apparatus displays an image in high quality.
In a liquid crystal display apparatus having such a pre-tilt angle, the direction in which the image quality defect phenomenon occurs also depends on the evaporation direction of liquid crystal molecules. FIG. 4A, FIG. 4B, FIG. 4C show examples of display images corresponding to input video signals in a VA, right-evaporated liquid crystal display apparatus and those where image quality defects occur therein.
FIG. 4A shows an example of a display image 51 of one line (seven pixels) corresponding to an input video signal and an example of a display image 51A where an image quality defect occurs. Three pixels at a center portion of the display image 51 corresponding to the input video signal have a black level as their luminance and pixels adjacent thereto have a gray level as their luminance. In contrast, a pixel 51a that is formed adjacent to the left of the three pixels at the center portion in the display image 51A where an image quality defect occurs has a white-blurring display pattern.
FIG. 4B shows an example of a display image 52 of one line (seven pixels) corresponding to an input video signal and an example of a display image 52A where an image quality defect occurs. Three pixels at a center portion of the display image 52 corresponding to the input video signal have a black level as their luminance and pixels adjacent thereto have a white level as their luminance. In contrast, a pixel 52a that is formed adjacent to the right of the three pixels at the center portion in the display image 52A where the image quality defect occurs has a black-blurring display pattern.
FIG. 4C shows an example of a display image 53 of one line (seven pixels) corresponding to an input video signal and an example of a display image 53A where an image quality defect occurs. Three pixels at a center portion of the display image 53 corresponding to the input video signal have a gray level as their luminance and pixels adjacent thereto have a white level as their luminance. In contrast, a pixel 53a that is formed adjacent to the right of the three pixels at the center portion in the display image 53A where the image quality defect occurs has a black-blurring display pattern.
In contrast, in a left-evaporated liquid crystal display apparatus, the image quality defect phenomenon occurs in a direction opposite to that of the right-evaporated liquid crystal display apparatus shown in FIG. 4A and FIG. 4B. For example, in the display image 51 corresponding to the input video signal shown in FIG. 4A, if the liquid crystal display apparatus is of the left-evaporated type, a pixel 51b that is formed adjacent to the right of the three pixels at the center portion in the image 51A where the image quality defect occurs has a white-blurring display pattern. Thus, although the causes of occurrence of the image quality defects are the same, they differently appear.
In addition, liquid crystal display apparatus have a voltage-transmittance (V-T) characteristic whereby the transmittance of the liquid crystal layer changes with the voltage applied to a pixel electrode. In color liquid crystal display apparatus, since the V-T characteristic differs for each of R (red), G (green), and B (blue), the shading of the image quality defect phenomenon also differs among R, G, and B.
Although the foregoing liquid crystal display apparatus are of the VA type, twisted nematic (TN) type liquid crystal display apparatus are also affected by a lateral electric field. However, since the two types differ in being normally white (NW) or normally black (NB), the defects appear differently. FIG. 5A and FIG. 5B show display patterns that differ in these types of liquid crystal display apparatus.
FIG. 5A shows an example of a display image 61 composed of 7 (vertical)×7 (horizontal) pixels where an image quality defect occurs in a TN type liquid crystal display apparatus (NW). In a display image corresponding to an original input video signal, 3×5 pixels at a center portion have a black level as their luminance and pixels adjacent thereto have a white level as their luminance. In contrast, in a display image 61 where the image quality defect occurs, pixels 61a to 61g that are formed as five upper pixels and three right pixels of the 3×5 pixels at the center portion have a white-blurring display pattern.
On the other hand, FIG. 5B shows an example of a display image 62 of 7 (vertical)×7 (horizontal) pixels where an image quality defect occurs in a VA type liquid crystal display apparatus (NB). In the display image 62 where the image quality defect occurs, corresponding to the same input video signal as that shown in FIG. 5A, pixels 62a to 62e that are formed adjacent above the 3×5 pixels at the center portion and pixels 62f to 62h that are formed adjacent to the right of the 3×5 pixels have a black-blurring display pattern.
In the foregoing, the image quality defect phenomenon that occurs, for example, in liquid crystal display apparatus due to the influence of a lateral electric field has been described. However, the image quality defect phenomenon due to a lateral electric field also occurs in display apparatus other than liquid crystal display apparatus. In other words, a similar image quality defect phenomenon occurs in any display apparatus where pixels are arranged in a matrix shape on a display panel and voltages are applied to a scanning line and a signal wire of a pixel under consideration such that the pixel is lit. For example, in organic electroluminescence (EL) display apparatus, a lateral electric field disturbs the motions of electrons and positive holes in pixels, resulting in an image quality defect. Moreover, in plasma display apparatus, a lateral electric field affects the generation of plasma in pixels, resulting in an image quality defect.
Accordingly, attempts have so far been made to improve image quality defects in matrix drive type display apparatus caused by a lateral electric field that occurs between two pixels due to a potential difference in the video signal supplied to the individual pixels. For example, Japanese Unexamined Patent Application Publication No. 2001-59957, referred to as Patent Document 1, discloses a technique that scans pixels at a period shorter than a frame period in synchronization therewith and applies a pulse-width-modulated signal to the signal wires. This technique allows the liquid crystal to be driven by frame inversion free of flickering and declination.
How Much You Need To Expect You'll Pay For A Good Dog Bark Collar
Pros: the options of vibration and tone are really practical for those who would like to train their dogs safely but effectively. The Colpet CP-TC04 is a dog training collar which prioritizes the comfort of your dogs while remaining quite powerful. http://www.arcadetrainer.com/index.php?params=profile/view/2299966/
Day 6
Easter Island's iconic statues could disappear because of climate change
Storm surges and sea level rise threaten the island's cultural heritage
Rising sea levels and increasingly powerful storms are putting Easter Island's cultural heritage at risk. (Aerial-Cam Ltd)
The Moai of Rapa Nui, or Easter Island, have stood for more than 500 years. Now, storm surges and erosion put them at risk of falling into the sea. 8:56
by Brent Bambury
Poet Pablo Neruda called them "severe profiles from the carved crater," but not all of the statues on Easter Island are severe. Some of the iconic sculptures — called moai — wear hats made of red stone, giving the mysterious giant heads a sense of distinction and playfulness.
And they're not just heads. Excavations on the enormous moai have shown that most of them are attached to bodies.
"They're always imagined as heads because the pictures you normally see are the ones that are still buried," Dr. Jane Downes says on Day 6.
'Depending on what sea level rises do ... those [statues] could disappear in a catastrophic event, totally.' - Dr. Jane Downes
Downes specializes in archaeology and climate change and has spent nearly a decade uncovering the mysteries of the moai on Rapa Nui, the native name for the island.
She may be running out of time.
Some of the partially-buried moai on the hillside of the Rano Raraku volcano on Easter Island. (MARTIN BERNETTI/AFP/Getty Images)
Rapa Nui, along with other islands in the Pacific, is facing enormous pressure due to climate change. The moai are threatened by intensifying storms and rising seas. Downes says one of the main sites on the island, Ahu Tongariki, is low-lying and prone to inundation, making its moai particularly vulnerable. She says they may even be taken by the sea.
"There's a prediction that depending on what sea level rises do, that those [statues] could disappear in a catastrophic event, totally."
Even without a catastrophe, erosion on Rapa Nui is continuous and ongoing.
"There are sites that are being damaged and are falling into the sea as we speak, and some of those probably won't last too much longer," she says.
"To me as an archaeologist, it's tragic, and I absolutely can't describe the sense of sadness that I feel when I see things literally being pulled and torn into the sea."
The moai statues of Easter Island are located near the shores of Easter Island, putting them at greater risk of erosion. (Aerial-Cam Ltd.)
Monolithic moai
There are nearly 1,000 moai on Rapa Nui, most of them located along the coast.
"I think you just imagine there will just be a few of these monuments," Downes says. "But the island is literally littered with them."
"You can see pictures of the statues in lots of books and magazines, but I don't think anything prepares you for actually being there and standing next to them — because, for a start, they're absolutely immense."
Moai statues sit atop an ahu (platform) on Easter Island, Chile. (Aerial-Cam Ltd.)
Downes is a professor of archaeology at the University of the Highlands and Islands. Along with her climate change research, she studies the platforms the moai were placed on when they were carved between 1000 and 1600 A.D.
"The island is formed around a large volcano, which is the quarry where the statues are carved," she explains. "When they've been carved, they've been taken out on sacred roads to these platforms all around the island."
The platforms reveal some of the mysteries of the moai, and offer clues about the island's original inhabitants.
'The disappearance of some sites' monuments through climate change is inevitable.' - Dr. Jane Downes
"The platforms are almost like altars and cemeteries in their own right as well because they incorporate human remains — burials — within them."
But Downes says the platforms are also endangered by the rising seas.
"They're incredibly exposed to the elements, because they are right on the edge of the island," she says.
"They're right situated at the coast, so they're very vulnerable to waves coming in on them, which undermine the platforms that they're situated on and the stonework falls into the sea."
Dr. Jane Downes says climate change is causing increasingly violent storms, escalating erosion along Easter Island's rocky coastline. (Aerial-Cam Ltd.)
Few options
Downes says the livelihood of the people who live on Rapa Nui is tied to the moai.
"The economy of the island is very much focused around heritage tourism, you know, people visiting to see the statues."
But while the loss of the moai would be a disaster for the economy, the cultural impact would be worse.
"I think that would be even more devastating, because this is the identity of the island," Downes says.
"Even though historically many of the population disappeared — through various things that happened through post-contact time — people feel a very strong affinity with these monuments."
According to Dr. Jane Downes, some sites on Easter Island could be totally swept away by a single catastrophic weather event. (Aerial-Cam Ltd.)
Rapa Nui is a UNESCO World Heritage site, but even with global assistance and input, Downes worries there are few options to protect the giant stones.
"Rapa Nui, unlike other islands in the South Pacific, is not surrounded by a coral reef," she says.
"So there's nothing protecting it from the waves and the storm surges, and anything that you put in the sea to protect the island or the sites would be taken away by the strength of the sea there."
Even moving the monuments to safer places on the island is problematic.
"There is a possibility that some could be moved, [but] all this is — well, not contentious, but [it] needs a lot of discussion."
Downes recommends continuing intensive scientific research before climate change takes its toll.
15 moai, arranged in a perfect line, in the Ahu Tongariki sector of Easter Island, Chile. (HO/AFP/Getty Images)
"I think that something that can be done is to understand sites better before they disappear, because the disappearance of some sites' monuments through climate change is inevitable," she says.
Even as a scientist, Downes says she still experiences a pervasive sense of wonder when she approaches Rapa Nui's silent, ancient sculptures.
"Being a scientist, you think you can work things out or, you know, have ideas and theories. But that kind of leaves you for a bit because you just think: 'How did they do this? And why?'
It's hard to describe the emotion you feel when you actually stand by them."
Moai statues stand against the light on Easter Island. (AFP/Getty Images)
To hear the full interview with Dr. Jane Downes, download our podcast or click the 'Listen' button at the top of this page.
Why Unemployed 23-Year-Old 'Addicted' To Video Games Prefers Virtual World
Justin, 23, plays video games around the clock, sometimes spending up to 30 hours straight in front of his computer screen. "I enjoy gaming more than I enjoy life," he says. "Gaming is simple and rewarding. People care about me there more than in the real world, and I'm good at it."
Describing a typical day in his life, the unemployed college dropout says, "I generally wake up at 4 p.m., because I was up all night gaming the night before." He typically eats only once a day, which is why he has lost more than 50 pounds and is practically emaciated, and he only gets up from gaming to go to the bathroom.
His parents are afraid his lifestyle could kill him, and they turn to Dr. Phil for help.
"I've had a lot of opportunities to succeed in the real world, but I've passed them up, and I feel like it's kind of hopeless now," says Justin, who often smokes weed while playing. "It just makes me want to play more, because then I don't have to think about it as much."
He tells Dr. Phil, "I do believe I have a problem with gaming ... I think I'm just not motivated to do anything in the real world anymore, because I've achieved so much online that if I gave up now, all of that would feel like a waste." Justin, who has been ranked in the top 20 worldwide in a couple of different games, adds, "I feel like most of the people I know are way ahead of me in life now. I've messed up on college a couple times. I just can't really see myself being successful anymore."
How did he get here and how can he turn his life around? Watch Tuesday's episode of Dr. Phil; check local listings.
... superiority pre-eminence or authority ecclesiastical or spiritual within this realm, and therefore I do utterly renounce and forsake all foreign jurisdictions powers superiorities and authorities, and do promise that from henceforth I... The History of Scotland - Page 331, by George Buchanan - 1827. Full view - About this book
...superiority, preeminence, or authority, ecclesiastical or spiritual, within this realm : and therefore I do utterly renounce and forsake all foreign jurisdictions, powers, superiorities, and authorities; and do promise, that from henceforth I shall bear faith and true allegiance to the king's Majesty, his...
...superiority, pre-eminency, or authority, ecclesiastical or civil, within this realm. And therefore I do utterly renounce, and forsake all foreign jurisdictions, powers, superiorities, and authorities: and do promise, that from henceforth I shall bear faith, and true allegiance to the king's majesty, his...
...superiority, pre-eminence, or authority, ecclesiastical or spiritual, within this realm: and therefore I do utterly renounce, and forsake all foreign jurisdictions, powers, superiorities, and authorities, and do promise, that from henceforth I shall bear faith, and true allegiance to the [king's] highness [his]...
...superiority, pre-eminence, or authority, ecclesiastical or spiritual, within this realm; and therefore I do utterly renounce and forsake all foreign jurisdictions, powers, superiorities, and authorities, and do promise that from henceforth I shall bear faith and true allegiance to the queen's highness, her...
...superiority, preeminence, or authority, ecclesiastical or spiritual, within this realm: and therefore I do utterly renounce and forsake all foreign jurisdictions, powers, superiorities and authorities, and do promise, that from henceforth I shall bear faith, and true allegiance to the [king's] highness,...
...superiority, pre-eminence, or authority ecclesiastical or spiritual within this realm ; and therefore you do utterly renounce and forsake all foreign jurisdictions, powers, superiorities, and authorities, and do promise that from henceforth you shall bear faith and true allegiance to the King's Highness, His...
...Superiority, Pre-eminence or Authority, Ecclesiastical or Spiritual, within this realm : and therefore I do utterly renounce and forsake all foreign Jurisdictions, Powers, Superiorities and Authorities, and do promise, that from henceforth I shall bear faith and true allegiance to the Queen's Highness, her...
...lawful supreme governor of this realm, as well in things temporal, as in conservation and purgation of religion : and that no foreign prince, prelate, state...allegiance to his highness, . his heirs and lawful successors : and to my power shall assist and defend all jurisdictions, privileges, pre-eminences and...
...superiority, pre-eminence, or authority, ecclesiastical or spiritual, within this realm; and therefore I do utterly renounce and forsake all foreign jurisdictions, powers, superiorities, and authorities, and do promise that from henceforth I shall bear faith and true allegiance. to the queen's highness, her...
...superiority, preeminence, or authority, ecclesiastical or spiritual, within this realm, and therefore I do utterly renounce and forsake all foreign jurisdictions, powers, superiorities, and authorities ; and do promise that from henceforth I shall bear faith and true allegiance unto your majesty, your heirs,...
Q:
Connecting to router, some devices get good speeds, others get awful speeds
I have a strange problem.
I got Virgin 30MB broadband installed yesterday (it's been over 24 hours since it was activated).
FYI I am in the United Kingdom.
My home network goes like this:
Item (Speedtest.net result)
Router
- TP-Link Switch
-- Computer (0.01mb dl, 3.00mb ul)
-- IP Phone (Connects)
-- Server (Does not connect)
-- Netbook (30mb dl, 3mb ul)
-- Laptop (30mb dl, 3mb ul)
Wireless
- iPhone (20mb dl)
- iPad (20mb dl)
Can anyone help me get an actual speed on my computer and get my server to connect?
I tried to download a 2.5mb file off the Virgin website and it said it would take 6 hours.
I can browse the internet very slowly on my computer, but my server cannot connect at all!
The server is ubuntu-server.
Any help would be greatly appreciated. Calling customer services is a last resort, since they will just say, "Have you tried turning it off and on again?"
I used to have BT broadband (2 days ago) with the exact same setup and all was fine.
I have restarted everything, reset the router and switch.
A:
I have solved the problem. So I'll post it in case anyone else has this issue.
In the Virgin router there was an option called Modem mode. Putting it into modem mode reset the router properly, and then I took it out of modem mode again.
It cleared the MAC addresses and the problem was solved. So it was a problem with the router.
I rang up Virgin and they said it was definitely my computer. I proved them wrong :)
Thanks for the help guys!
Conditions of Agreement
Tax Return
Read the terms and conditions below, then click "I Agree" to register.
Description of services you are purchasing
You are purchasing an Individual Tax Return which will be lodged electronically. The return contains the items listed above under the heading "What is in The Return", and does not include items listed under the heading "Items not covered by this return".
Currency
Cost is in Australian dollars.
Privacy Policy
All data collected will be used to produce a tax return to be lodged, by us, with the Australian Taxation Office. The data you provide will be treated in strict confidence and only used to produce your tax return, unless you give us further instructions as to how you want the information used.
Security
The Credit Card/EFT facility we use is E-way Bureau. The site is fully secured and your credit card details are confidential.
Customer Service
Our telephone number is (02) 4351-0384. Our email address is info@ccaccounting.com.au. Please try the email address first if you have any difficulties.
Refund Policy
Once you have made payment we are committed to getting your return lodged. We do not make refunds, but rather we will help you resolve any difficulties.
This thread is to be an inclusive discussion about all things AVENGERS going forward. Coming out of Comic-Con 2010 our AVENGERS have been set. Talk about just Joss's involvement is fine in the other thread. This is for talk about the evolving development of the film and the cast as a whole going forward.
This teaser essentially led up to the first logo for The Avengers, and after it ended, Jackson was introduced and he brings out Clark Gregg, Scarlett Johansson as Black Widow, Chris Hemsworth as Thor, Chris Evans as Captain America... and then Robert Downey Jr. came out as Tony Stark and the crowd went crazy. Downey Jr. then talked about how people were saying how Inception was the most ambitious movie ever made, but Downey said that bringing together all of these heroes for The Avengers was the most ambitious movie. He then introduced and confirmed that playing Clint Barton is.... Jeremy Renner... and "reprising his role as Bruce Banner... Mark Ruffalo!" He then brought out director Joss Whedon, who said that he had always had a dream, presumably about directing The Avengers, "but my dream was never this good!"
We may add more to this article soon, since there was some cool stuff said on the various panels, but just to quickly recap, we got the first teaser for Captain America as well as some of the footage they shot in the first week, an extended bit of footage from Thor, a teaser for The Avengers, essentially just the logo with a Samuel L. Jackson voice-over, and then they brought out the entire cast and director of The Avengers, including all of the actors already introduced and adding Jeremy Renner and Mark Ruffalo as Clint Barton/Hawkeye and the new Bruce Banner/The Hulk.
My preference would be for a non-powered Carol Danvers first. Introduce her in Avengers as Fury's second-in-command (taking a cue from the Ultimate line), which would be a bit role that would then take on expanded significance in the S.H.I.E.L.D. movie Jackson is supposedly signed up to do. In the final act of the S.H.I.E.L.D. film, she gains her powers, which sets her up for a spin-off of her own and participation in Avengers 2.
Spooky, my first thought was that there is only one woman. Ms Marvel would be at the top of my list too, although Mockingbird would be a good low budget alternative since she has links to SHIELD and requires no special effects to implement apart from some wire work for pole vaulting. It may be that Wasp and Ant Man will be making appearances in a sequel. Vision would be really cool too.
I admit, I was never a fan of the Avengers. You put Thor or Hulk next to Hawkeye, the power imbalance is just dopey, although if they 'power down' the movie version of Thor this may improve things. The X-Men seemed to have a better grip on this sort of thing. Still, Iron Man was great and I'm a fan of Cap so I'm looking forward to this movie.
You don't need to 'power down' anyone. Black Widow and Hawkeye and that are utilised when you don't want the attention of huge dudes swinging their fists, hammers, shields, and repulsor rays all over the place.
True. Although that does polarise the situations in which the characters can be useful (I mean Thor can fly, is as strong as the Hulk, can fight really well, and can control the weather). I'm not sure how restrictive that will be in a movie format. Still, Cyclops ended up being neutered in X-men in order to focus on fisticuffs so I suppose they will write in situations that will allow the lower-powered characters to show off their skills.
One of the reasons I like the Captain America comic is that it can showcase the lower powered characters like Cap, Black Widow, Hawkeye, Falcon, Nomad, Demolition Man, Diamondback etc. Sort of Batman level characters with the gimmicks spread across more characters.
As much as I would love to see Ms. Marvel in this movie, I would be very disappointed if she was only added because people complained about gender imbalance. Put her in the movie because she's a cool character, not to fill a quota. Keep your affirmative action out of my movies!
As for Widow and Hawkeye being in the same team as Thor - The Ultimates did this best. Widow and Hawkeye were in the less 'public' team, doing the quiet jobs that needed a bit more subtlety than is possible if you send in the bloody God of Thunder or a guy in a heavily armed suit of mechanised armour.
What, you don't think it's important to argue for representation in the media?
I'm not expecting to see much more in the way of team-members, though; the cast is pretty big as it stands, and then you need villain(s), etc.
Mockingbird would probably be the easiest of the notable recent female Avengers to introduce, since, like Hawkeye (the only notable new addition in the film itself, from the looks of it), she can be explained in about ten seconds. Wasp doesn't make sense without Ant-Man, and we know he's not in it; She-Hulk and Ms. Marvel likewise have origins that would require a decent bit of time to explain properly; if they even have access to Scarlet Witch, they certainly don't to her backstory.
With fracking about to recommence in the UK after 8 years, social entrepreneur and writer Jeremy Leggett reviews the short but troubled history of fracking in the U.S. In a devastating slide presentation, he pictures the shale gas industry as a dirty, multi-hundred-billion-dollar doomed-to-burst debt bubble. And he predicts a similar fiasco in the UK. Courtesy Future Today.
This presentation was first published on Jeremy Leggett’s website Future Today and is republished here with permission.
881 F.Supp. 167 (1995)
William TIZER t/a Basin Street Floors, Plaintiff,
v.
The AMERICAN INSURANCE COMPANY, Defendant.
No. 94-2363.
United States District Court, E.D. Pennsylvania.
March 27, 1995.
*168 Harry P. Begier, Jr., Harry P. Begier, Jr. Ltd., Philadelphia, PA, for plaintiff William Tizer t/a Basin Street Floors.
Thomas J. Duffy, Patrick J. Keenan, Philadelphia, PA, for defendant American Ins. Co.
MEMORANDUM
LOWELL A. REED, Jr., District Judge.
Plaintiff William Tizer t/a Basin Street Floors has brought this action against defendant The American Insurance Company in order to recover the proceeds allegedly due to plaintiff under an insurance policy issued by defendant. This Court has jurisdiction over this case pursuant to 28 U.S.C. § 1332 as the parties are of diverse citizenship and the amount in controversy is in excess of $50,000 exclusive of interest and costs.
Currently before me is the motion of defendant for summary judgment and the supplemental motion of defendant for summary judgment as to Count II of plaintiff's complaint. (Document Nos. 7, 14) For the following reasons, the motions will be denied.
I. FACTUAL BACKGROUND
The following facts are not in dispute.
On July 26, 1991, defendant The American Insurance Company issued an insurance policy to William Tizer trading as Basin Street Floors that covered, among other items, plaintiff's business personal property located at 406 Basin Road in New Castle, Delaware. Included in the policy was the following provision:
4. Legal Action Against Us
No one may bring a legal action against us under this insurance unless:
a. There has been full compliance with all of the terms of this insurance; and
b. The action is brought within 2 years after the date on which the direct physical loss or damage occurred.
Motion for summary judgment, Exhibit C, Property/Liability Policy section at 16. The policy was in effect from July 26, 1991 to July 26, 1992.
Plaintiff alleges that he discovered on April 13, 1992 that some of his business property was missing from the 406 Basin Road location. Plaintiff sought payment for the alleged loss from defendant, but defendant refused to make this payment. On Thursday, April 14, 1994, plaintiff filed suit in this Court against defendant seeking payment under the policy and other relief.
II. DISCUSSION
Under Fed.R.Civ.P. 56(c), summary judgment may be granted when, "after considering the record evidence in the light most favorable to the nonmoving party, no genuine issue of material fact exists and the moving party is entitled to judgment as a matter of law." Turner v. Schering-Plough Corp., 901 F.2d 335, 340 (3d Cir.1990).
Defendant argues in both of its motions that plaintiff's action is time barred by the terms of the insurance policy at issue. Plaintiff concedes that the terms of the policy bar legal actions brought more than two years after the loss occurred. Plaintiff also concedes that the alleged loss occurred, at the latest, on April 13, 1992, and that the instant action was filed on April 14, 1994. Plaintiff *169 argues, however, that the instant action is not time barred for four reasons: (1) April 14, 1994 is within two years "after" April 13, 1992; (2) defendant waived the policy's limitation period provision by not properly notifying plaintiff of that provision; (3) even if the instant action was not filed within two years after April 13, 1992 and the limitation period provision was not waived, plaintiff substantially performed that provision of the policy; and (4) defendant's bad faith conduct tolled the limitation period. Because I conclude that defendant waived the policy's limitation period provision by not properly notifying plaintiff of that provision, I need not address plaintiff's other arguments.
A. Choice of Law
Federal courts must apply the choice of law rules of the states in which they sit. Klaxon Co. v. Stentor Elec. Mfg. Co., 313 U.S. 487, 496, 61 S.Ct. 1020, 1021-22, 85 L.Ed. 1477 (1941). Pennsylvania has adopted a choice of law methodology which combines contacts analysis and interest analysis. Carrick v. Zurich-American Ins. Group, 14 F.3d 907, 909-10 (3d Cir.1994); Griffith v. United Air Lines, Inc., 416 Pa. 1, 203 A.2d 796 (1964). It is undisputed that the insurance policy at the heart of this case was issued in Delaware, that it insured property in Delaware, and that the alleged loss occurred in Delaware. While the policy was sent to plaintiff in Pennsylvania, there are clearly greater contacts with Delaware than with Pennsylvania; in addition, Delaware clearly has a stronger interest than Pennsylvania in regulating the application of insurance which is issued and applies to events that occur and property within its boundaries. Therefore, while the parties initially disagreed over the applicable law, they are now correct in their determination that Delaware law applies to the instant case.
B. Waiver of Policy Limitation Period Provision
Delaware law states:
§ 3914. Notice of statute of limitations required
An insurer shall be required during the pendency of any claim received pursuant to a casualty insurance policy to give prompt and timely written notice to claimant informing him of the applicable state statute of limitations regarding action for his damages.
Del.Code Ann. tit. 18, § 3914. Casualty insurance is elsewhere defined so as to include insurance against loss or damage by theft; therefore, the policy at issue here is covered by this statutory provision. See Del.Code Ann. tit. 18, § 906.
The Supreme Court of Delaware has held that this statute must be given a broad construction because it "may be deemed remedial legislation designed to benefit claimants." Stop & Shop Companies v. Gonzales, 619 A.2d 896, 898 (Del.1993). This breadth of construction has resulted in the statute being applied to self-insurance and to claims made by persons other than the insured. Id. at 898-99 (self-insurance and claimants other than insured); Samoluk v. Basco, Inc., 528 A.2d 1203 (Del.Super.Ct.1987) (claimants other than insured). While no Delaware court has directly faced the question of whether this statute applies to limitations periods provided for by casualty insurance policies, the Supreme Court of Delaware in affirming a decision of the lower court enforcing a contractual limitation period has implied in dictum that this notice requirement extends to these limitations periods:
At the present time in Delaware there is no duty on the part of an insurance carrier to inform its insured of the existence of a shortened statute of limitations' provision contained in a policy where the carrier has given no assurance that it would not rely upon the one-year limitation provision.[1]
1. As of May 1, 1983 the carrier must give prompt and timely notice of the applicable statute of limitations regarding actions for damages. 18 Del.C. § 3915 [later renumbered as § 3914]
Betty Brooks, Inc. v. Insurance Placement Facility, 456 A.2d 1226, 1228 (Del.1983).
The extension of this statute to include these limitations periods is contrary to a *170 literal reading of the statute. The statute specifically refers to the "applicable state statute of limitations." Del. Code Ann. tit. 18, § 3914 (emphasis added). The Supreme Court of Delaware in Stop & Shop Companies, supra, has been willing, however, to interpret the words "insurer" and "casualty insurance policy" less than literally in order to extend this notification requirement to self-insurance in order to foster what that Court concluded was the intent of the legislature. I do not believe, in view of its strong policy statement in the self-insurance context, that the Supreme Court of Delaware would allow insurance companies to avoid the notification requirement by simply including in their policies (as here) a shorter (two years) limitation period than the statutory period of three years.[1] Since I find that the Supreme Court of Delaware has evidenced no logical reason for a distinction between a statute of limitations and a policy limitation period, and since the Supreme Court of Delaware has provided some indication, though oblique, in the Betty Brooks, Inc. case that it favors extending the reach of this statute to limitations periods contained within policies themselves, I conclude that if the Supreme Court of Delaware faced this issue today, it would find that Del.Code Ann. tit. 18, § 3914 applies to a limitation period contained in a casualty insurance policy as well as to statutes of limitations.
Defendant sent various letters to plaintiff carefully stating that it was reserving its rights under the insurance policy, and in the last two letters defendant stated explicitly that it was specifically reserving "any and all requirements in its policy that action against [it] be filed in a timely manner." Motion for summary judgment, Exhibit B, letters dated 10/5/92, 10/27/92, 11/17/92, 12/8/92, 2/16/93, 11/12/93. These letters provide the only evidence that has been presented to this Court regarding defendant's notification to plaintiff of the policy's limitation period. The question, therefore, is whether these letters constituted "prompt and timely written notice" as required by Delaware law. Once again, no Delaware court has addressed this exact question; indeed, the only case to provide the specifics of notice given pursuant to § 3914 involved such explicit notice that the plaintiff conceded that the notice itself was proper and only contested whether the proper person had received it. Vance v. Irwin, 619 A.2d 1163, 1165 (Del.1993). The notice in that case stated "`[i]n addition, the applicable Delaware State Statute of Limitations regarding actions for bodily injury and property damage liability is two (2) years from the date of the accident, specifically 4-5-91.'" Id. at 1164.
Given the broad construction required of this statute, I conclude that the Supreme Court of Delaware would find these references to timely filing of actions insufficient to meet this notice requirement. While it may not be necessary to give, as the insurance company did in Vance, the exact date upon which the limitation period would expire, the Supreme Court of Delaware would likely find that it is necessary, at a minimum, to refer the claimant to the specific limitation period which is involved. This is consistent with the language of the statute, which requires insurers to notify claimants of the "applicable state statute of limitations regarding action for his damages." Del.Code Ann. tit. 18, § 3914 (emphasis added). In the instant case, defendant failed to identify with any specificity the length of the limitation period or the location in the policy where the limitation period could be found. Therefore, I conclude that defendant failed to give the notice required under Delaware law and thus it waived the limitation period provision of the policy.[2]
III. CONCLUSION
For the foregoing reasons, the motion of defendant for summary judgment and the supplemental motion of defendant for summary *171 judgment as to Count II of plaintiff's complaint will be denied.
NOTES
[1] Del.Code Ann. tit. 10, § 8106.
[2] While it is undisputed that plaintiff was represented by counsel when these letters were received, the Supreme Court of Delaware has indicated that it would not differentiate between claimants who were represented by counsel and those who were not. See Vance, 619 A.2d at 1165 (stating that § 3914 is insurance regulatory measure and thus relevant issue is not whether claimant's counsel was aware of applicable statute of limitations but whether proper notice was given).
Dutifully slapped together and rushed out the door in an attempt to satisfy the allegedly ravenous fans of the first movie (review here), Leprechaun 2 was clumsily plopped onto shelves way back in 1994, exactly one year and three months after the release of the original. It’s a good thing too, that Ewok money can’t pay Warwick Davis’ mortgage forever.
“Can we do like, 11 sequels to Willow? For fuck’s sake, I got full sized bills!!”
The plot: Leprechaun 2 is Leprechaun at his rapiest. The story concerns our little green fuck face and his quest to land a human bride, whom he then plans to impregnate and surgically alter, so as to make her appear more Leprechaun-like. Why not just date Leprechaun women in the first place? I really don’t know. Maybe there aren’t any. I have no idea how their system works, all I know is that it must be stopped, because it’s already hard enough to meet people in this day and age, we don’t need any percentage of our dating population being kidnapped and mutilated by fucking Leprechauns. Why isn’t Donald Trump working on a wall to separate us from the faerie kingdom? I wish I had the answers, folks, but I do not.
So we start out 1000 years ago in Ireland, on St. Patrick’s Day, which also happens to be Lep’s birthday. What a coincidence! And this is no ordinary birthday, our boy is turning the big one triple zero! To mark the momentous occasion, Leprechaun and his badly abused human slave are out to bamboozle a fair maiden into the loathsome and all binding contract that is matrimony using a time honored tradition of making her sneeze three times. If she sneezes thrice and no one says “God bless you,” her mind, body and soul belong to the Leprechaun, which is a fucked up and nonsensical rule. Even so, Lep’s human slave is happy to participate in the capture of his master’s bride to be, because he’s been promised his freedom once Lep ties the knot- but he suddenly has a change of heart when he learns that the apple of the Leprechaun’s beady little eye is none other than his own daughter, who is hot as hell and just so happens to sneeze pretty often. Shit! That tricky little Leprechaun. Predictably, the slave dude betrays his master and ruins his plan to entrap his bride, an act of cockblockary that costs him his life, and forces Lep to postpone his wedding a full one thousand years, because a Leprechaun is apparently subject to a lot of stupid rules.
So, we fast forward ten centuries to present day (Well- 1994. It WAS present day), and Lep is once again on the prowl to find lady love, this time in twentieth-century America. Good luck, asshole. This time he sets his sights on the equally hot descendant of his previous potential kidnappee, an empty-headed, flinty-voiced babe named Bridget, who is already in the early stages of courtship with some bland dumbass called Cody. Cody sucks, folks. He sucks hard. He just doesn’t bring anything to the table, and that’s a problem for Leprechaun 2, because he’s also our protagonist, and nobody in the world would be sad to watch him die gruesomely. On the other end of the spectrum, however, we have Morty, Cody’s money-grubbing, alcoholic con-man mentor, who is far and away the best and most enjoyable character in the film. But again, he’s a secondary character, and for most of the film, we’re stuck with fucking Cody.
So, anyway. Lep shows up, he rhymes a lot, Bridget is kidnapped, and Cody and Morty spring into action to launch an elaborate scheme to somehow rescue Bridget, and, if possible, score some of that sweet, sweet Leprechaun treasure. It’s a horror film franchise with a 99% genetic match to a fucking cereal commercial.
The Lucky Charms commercial filmed on Lucky’s 1000th birthday is going to go down very, very differently.
So, the upside here is that there’s actually an idea behind Leprechaun 2’s plot: this is a good, old-fashioned cautionary tale against the destructive powers of greed. Lep is greedy, Morty is greedy, Cody has to learn not to be greedy, and if you’re greedy, it doesn’t end well for you. That’s all well and good. Problems pop up, however, when you factor in how the character of Bridget is handled: she’s basically immediately downgraded to being an object that men fight over for the entire film. She could just as easily be a 20 dollar bill, or a really great sandwich. To the ultra sensitive eyes of the Millennial, this shit is like, PRIME trigger fuel, but back in ’94, absolutely zero fucks were given. Also, we had better music, and the Sega Genesis. It was an awesome time to be alive.
Another mark against Leprechaun 2 is that ALL the actors are total garbage, except, of course, for Mr. Warwick Davis, and Sandy Baron, who plays Morty. Actually, strike that, Tony Cox has a small role in this one, too- you might remember him from Bad Santa. Cox is a fine actor in his own right, but he doesn’t get much of an opportunity to shine in Leprechaun 2. What he does get to do is to play an integral role in the single most bizarre and disturbing men’s restroom scene I have ever seen this side of No Holds Barred (Review Here).
This isn’t a classic, but by all objective criteria, this is a much better movie than the first. It’s less childish, never as bland, and it features quite a few memorable scenes. Or at least I thought it did. When I rewatched it just now for the purpose of writing this review, I didn’t actually remember ever having seen any of these so-called “memorable sequences” ever before, except for one; the one wherein Leprechaun uses his magical illusion powers to make one of Bridget’s more date-rapey suitors believe he is slowly moving in to motorboat her bare chest, when in actuality, he’s gently ramming his face into the whirling blade of an upturned lawn mower. That was pretty awesome. Later, Lep uses his illusion powers to make out with Cody, though, so that mostly negates the coolness of the lawnmower kill.
Still, it’s mostly good. The one thing this movie has working against it in comparison to the first film is that this is fucking Leprechaun 2. That’s a pretty fatal flaw. With the first film, you could throw that puppy on for an annual “leave it on in the background” type deal at a St. Patrick’s Day party, and people might be onboard with it, but nobody puts on Leprechaun 2 every year. Your friends would just look at you like you were a fucking idiot… and let’s face it… you might be!
In 2005, Daiei’s phenomenal Yokai franchise from the 1960s enjoyed a brief, regrettable resurgence when famed director Takashi Miike decided to bless the Earth with The Great Yokai War. This unfortunate semi-sequel really only checks off on about half of the things that SHOULD be on the checklist for any Yokai film, and instead injects it with more Miike-isms than were desirable, or appropriate. I’m pretty hot and cold on Miike as a director to begin with, but in the case of The Great Yokai War, I’m straight up irritated.
Worth mentioning; this shit is a kid’s movie, but Miike isn’t the sort of bro you let babysit. The Great Yokai War is way, way scarier than your average children’s film, and periodically, it’s more sexually suggestive, as well. For the adults in the audience, I guess this is SORT OF a win, but it doesn’t really go far enough with the spooks or the sex to satisfy the shameful smut-hounds inside all of us, and I’m damn sure not going to let my kids watch this thing; so in the end we have a movie which lingers pointlessly between two polar opposite demographics. Honestly, that’s Miike to a T.
THE PLOT~ When an evil, ancient sorcerer type dude who dresses really nice decides that he wants revenge against both humans, AND the yokai, he does some stupid bullshit that’s super uninteresting and lame. Then, later on, some little kid finds himself wrapped up in a grand, cookie-cutter fantasy adventure, which forces him to battle alongside the Yokai and save the world. Holy shit, man it’s JUST that boring and generic.
UGH.
So… What, if anything, is GOOD in The Great Yokai War…?
Well, it does have a ton of monsters in it, which is definitely a non-negotiable requirement for this franchise. Not providing this most bare-bones of requirements would be nothing less than inexcusable, and while Miike is ordinarily quick to disappoint and/or blatantly defy expectations, I am happy to report that in this case, he does indeed bring the thunder, monster style. Thank heavens.
The monsters also LOOK pretty darn good… Well, the Yokai do, at least. They’re mostly live action, and that’s a straight up blessing. The film also has “bad-guy” monsters in it, which are all CG… They fucking suck so bad, but we’ll cover them in greater detail later.
The Great Yokai War also succeeds pretty admirably at replicating the fun vibe seen in Spook Warfare; we get a real sense of urgency, and the human and yokai worlds are intermixed in a way that feels very similar to what the earlier Daiei films did so well. I’d say Miike passes with flying colors in this arena (imagine that!). He also nails the characterization of most of the central cast (with the exception of the bad guys- again, more to come on this), who feel like real, fully developed personalities, full of flaws and peculiar traits which make them feel relatable. Some of the jokes are even funny; the Yokai are all pokey and selfish, unmotivated to do anything even when oblivion is staring them in the face, and the only way to successfully get them to march off to battle is by misleading them into thinking they’ve been invited to a party. It’s weird, but I almost want to throw Miike a thumbs up in regards to how well this is done… But then I remember Ichi The Killer, and I get pissed off again.
Possibly the best thing the movie does, though, is that it actually has a fairly intellectual thesis statement, which is most unexpected in a shabby-ass kid’s fantasy adventure film. At the heart of it, The Great Yokai War is all about the transition from youth to adulthood, the moment when we abandon our naïve, youthful perspective, and instead adopt a more complex understanding of morality, and our roles in society. This is illustrated adequately in the personal journey of our central character, some Japanese Kid, and also mirrored more casually in the journey society has undertaken as it slowly forgets about the traditions of yesteryear, and becomes more preoccupied with the Internet and getting to work on time. As much as this movie full-on pisses me off, The Great Yokai War is ABOUT something, and credit where credit is due, that’s worth pointing out in any fair critique.
Now that that’s out of the way…
What DOESN’T work….?
The first (and worst) mistake Miike makes is that he takes the film out of the period setting seen in the old Daiei movies, and plops it down shittily into modern times. Damn, that sucks. This change allows Miike to flood our screen with his desired bad guys, who, again, are exempt from every single compliment I’ve paid to this film thus far, and it also sets up the comparison between the evolution of Japanese culture, and the journey to adulthood seen in our central character (some Japanese Kid), but it sucks like nobody’s business and isn’t worth it. It’s lame, lame as hell; this film would immediately jump up a full letter grade AT LEAST if it were set in Japan’s feudal era. It’s just so much more interesting.
Second inexcusable flaw: The CG. ALL of the CG in this film is fucking horrible. It’s just appalling, and really, this is a very common complaint for most any Japanese film in this day and age. It’s actually impossible to look at these characters and not feel a profound dissatisfaction with how freaking shitty they all look. It would be enough to ruin the film, if there was even a decent film to ruin, and so I propose a new rule: If you can’t accomplish your end goal with digital effects that are at least passable, then tough cookies, dude, change your goals. Do NOT launch a project that you can’t realistically pull off and then chuck the dog shit results out into the cinematic community, expecting a pardon. The CG in The Great Yokai War is a hole that would sink any boat; Miike, may God have mercy on your soul, you should have done anything other than this.
THIRD INEXCUSABLE FLAW: The Bad Guys. All of the bad guys in this movie are completely terrible. Firstly, the evil sorcerer dude: his plan is to capture all of the yokai, round them up, and toss them into this miasmic flame he’s got in a furnace (this is actually a yokai as well, oddly enough. It looks like slimy, pink fire.). After that, he tosses garbage in with them, and lets it all mix together, thereby transforming the yokai into stupid looking steampunk robots that carry out his evil bidding. Re-read that: basically, this guy’s evil scheme is exactly the same thing Dr. Robotnik did back in the Sonic The Hedgehog video games, Sega Genesis era. To be clear, I think that shit was more believable when it was 16 Bit. Also, it had better graphics.
Shitty robots aside, the bad guys are also saddled with those familiar and all too unwelcome anime tropes, which have slowly wormed their way into Japanese live action cinema, and which really are just the worst things ever. His main henchman is easily the most aggravatingly lame character in the entire film, she’s some turncoat Yokai, played by the often obnoxious Chiaki Kuriyama. Sorry, Chiaki, if I hated you in Kill Bill, I’ll probably hate you forever.
If you were wondering who the second lamest character in the film is, it’s probably Sunekosuri, a little furry creature who forms a special friendship with our lead kid early on. Sunekosuri is basically just a B-squad Mogwai that pees a lot. It sucks.
FOURTH INEXCUSABLE FLAW: HOLY SHIT, THIS MOVIE IS GENERIC: It doesn’t help that Miike took these neat little movies that were essentially brilliant live action interpretations of Japan’s rich folklore, and then made a sequel which mashed them into the most generic fantasy storyline ever. The Great Yokai War really feels like it’s less concerned with exploring folklore, and more concerned with being the Japanese Neverending Story. Really, it’s more like The Neverending Story part 5. Probably. I never saw Part 5, but if it exists, I’m sure it sucks, just like The Great Yokai War does.
The last two things I have to say:
1) One of these yokai looks like what you’d get if Mickey Rourke got wasted in a Hawaiian Punch bottling plant and leapt into one of the vats.
2) This kid’s shirt says something about midget racing, I shit you not. What in the hell is going on in Japan!??! |
type Blank = null | undefined | void
/**
* @private
*/
export type NonArray<T> = T extends any[] ? never : T
/**
 * `object` covers both {} and Array<any>, so listing them separately is not needed
*
* @private
*/
export type AllowedEmptyCheckTypes = Blank | string | object
/**
 * GetEmpty conditional type that maps any AllowedEmptyCheckTypes to its empty equivalent
*
* @private
*/
export type GetEmpty<T extends AllowedEmptyCheckTypes> = T extends Blank
? T
: T extends string
? ''
: T extends any[]
? Empty.Array
: T extends object
? {}
: never
export interface NonEmptyArray<T> extends Array<T> {
0: T
}
// https://twitter.com/karoljmajewski/status/1037618989801893888?s=20
export type Empty = Empty.Array | Empty.Object | Empty.String
export declare namespace Empty {
type String = ''
type Array = never[]
type Object = Record<string, never>
}
export type Bottom<T> = T extends string
? Empty.String
: T extends any[]
? Empty.Array
: T extends object
? Empty.Object
: never
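The conditional types above are compile-time only, but their behavior can be demonstrated with a few annotated values. The following is a hypothetical usage sketch (not part of the original module) that re-declares `Bottom` and `Empty` locally so it stands alone; each assignment only type-checks if the conditional resolves to the expected empty shape.

```typescript
// Local re-declarations of Empty and Bottom from the module above,
// so this sketch compiles on its own.
namespace Empty {
  export type String = ''
  export type Array = never[]
  export type Object = Record<string, never>
}

type Bottom<T> = T extends string
  ? Empty.String
  : T extends any[]
  ? Empty.Array
  : T extends object
  ? Empty.Object
  : never

// Each assignment compiles only if Bottom<T> resolves as expected:
const emptyString: Bottom<'hello'> = {} as never as ''   // string  -> ''
const emptyArray: Bottom<number[]> = []                  // array   -> never[]
const emptyObject: Bottom<{ a: number }> = {}            // object  -> Record<string, never>

console.log(emptyString, emptyArray, emptyObject)
```

Note that `Bottom` checks the array branch before the generic `object` branch; reordering those two conditions would swallow arrays into `Empty.Object`, since every array is also an `object`.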
|
Academic-Practice Partnerships for Unemployed New Graduates in California.
In California, academic-practice partnerships offer innovative transition programs to new registered nurse (RN) graduates who have not yet found positions in nursing. This report describes the formation of 4 partnerships between 1 or more schools of nursing and clinical practice sites that included hospitals and nonacute care settings, such as hospice, clinics, school districts, and skilled nursing facilities. Factors facilitating the partnerships included relationships established as nurse leaders from practice and academia came together to address previous workforce issues, positive interpersonal experiences, an independent convening and coordinating organization, a shared understanding of the employment challenge faced by new RN graduates, and a shared vision for its solution. Partnerships face continuing challenges that include sustaining engagement, resource constraints, and insufficient nursing leadership succession planning. Partnership benefits include improved relationships between academia and practice, a forum to address contemporary issues in nursing education and practice advances, and stimulation of a reassessment of how to integrate ambulatory, transitional, and community-based nursing into prelicensure education. |
This article is more than 9 years old
Jon Stewart, fresh from his "Rally to Restore Sanity" in Washington last weekend, has made US ratings history by beating rivals David Letterman and Jay Leno to become last month's highest-rating talkshow.
The Daily Show with Jon Stewart, which airs on cable channel Comedy Central, attracted the highest average adult audience of any late night talkshow in October, according to the Hollywood Reporter. The Reporter states that it has been "at least a decade" since a talkshow other than David Letterman's Late Show on CBS or Jay Leno-fronted The Tonight Show on NBC topped the ratings.
Stewart, who has generated a lot of publicity since announcing in September he would hold a rally in Washington's National Mall, attracted an average of 1.3 million adult viewers per show during October. That put him ahead of Leno and Letterman, who averaged 1.2 million.
Late Night Talk Shows – average audiences, October 2010
1. The Daily Show with Jon Stewart (Comedy Central) - 1.3 million viewers
2. The Tonight Show with Jay Leno (NBC) - 1.2
3. Late Show with David Letterman (CBS) - 1.2
4. The Colbert Report (Comedy Central) - 900,000
5. Late Night with Jimmy Fallon (NBC) - 800,000
6. The Late Late Show with Craig Ferguson (CBS) - 700,000
7. Jimmy Kimmel Live (ABC) - 700,000
8. Chelsea Lately (E!) - 650,000
9. Lopez Tonight (TBS) - 450,000
Finding a reliable Hamilton Plumber is not an easy job. It would be ideal if you are able to hire the best in the business, someone who knows what he is doing and does the job sincerely, whilst meeting an 8-point satisfaction guarantee. Narrow down your choices from the pool of candidates with these simple and effective guidelines.
Trust only one who guarantees these 8 points of satisfaction –
Always on Time – Expect your Hamilton plumber to be punctual. Some of them may even offer the first hour of their service free of charge if they fail to show up on time.
Maintenance Guarantee – The best plumbers in Hamilton should be confident enough about their work to be able to offer long maintenance guarantee periods, some even up to 2 years. This means that after their first job, if something is wrong with what they have done, you can call them to fix it for up to 2 years without having to pay for it again.
Expert Plumbing and Gasfitting – Only fully qualified and experienced plumbers should be doing the work. A good plumbing contractor will be qualified as a plumber and will have adequate work experience as well. The best plumbers in Hamilton have decades of experience on average, plus they undergo drug tests and police verifications as well. They must have a good work history featuring multiple happy clients on the list.
One Call Should Sort It – Professional plumbers understand the importance of time. They shouldn’t take a minute more of your time than necessary. Your job will be booked instantly so you can get on with your life. A plumber who initiates sending updates and gets the job done on time is the one you’re looking for.
Timely & Convenient Billing – An experienced plumbing contractor knows the downsides of fake invoices and inaccurate price estimates. One who follows strict rules and regulations will use modern, specialized software and have GPS on all vehicles to ensure all costings are timely and accurate. You should have your bills in hand without any delays.
Clean and Tidy After the Job – After a professional completes the job, the property should be left clean and tidy. Everything should be left exactly like it was before. There should be no loss of items from your residential or commercial property after the technicians leave the space.
Assurance of Safety – Ensure safety to life and property with certified plumbing contractors in Hamilton. Certified membership of the SiteSafe scheme means all employees undergo training to ensure the safety of themselves, their co-workers and the client.
Complete Work Satisfaction – Last, but not least, the most important point! Professional and knowledgeable plumbers are well-versed in developing a cordial relationship with their clients. They will provide prompt responses and ensure complete work satisfaction so that they get more work from repeat customers. For building confidence, you can check their reviews and real testimonials.
These are the eight points of satisfaction you should get if working with a professional Hamilton Plumber. Be precise and selective and verify details first before making any decisions. |
FILED
United States Court of Appeals
Tenth Circuit
April 11, 2011
UNITED STATES COURT OF APPEALS
Elisabeth A. Shumaker
Clerk of Court
TENTH CIRCUIT
UNITED STATES OF AMERICA,
Plaintiff-Appellee, No. 09-1551
v. (D. of Colo.)
MICHAEL LEE MONTOYA, (D.C. No. 09-CR-288-CMA)
Defendant-Appellant.
ORDER AND JUDGMENT *
Before O’BRIEN, McKAY, and TYMKOVICH, Circuit Judges. **
Michael Lee Montoya was convicted by a federal jury on five counts
related to bank robbery. He appeals his conviction and sentence. Montoya’s
counsel, finding no meritorious grounds for an appeal, moves to withdraw
pursuant to Anders v. California, 386 U.S. 738 (1967). We have jurisdiction
*
This order and judgment is not binding precedent except under the
doctrines of law of the case, res judicata and collateral estoppel. It may be cited,
however, for its persuasive value consistent with Fed. R. App. P. 32.1 and 10th
Cir. R. 32.1.
**
After examining the briefs and the appellate record, this three-judge
panel has determined unanimously that oral argument would not be of material
assistance in the determination of this appeal. See Fed. R. App. P. 34(a); 10th
Cir. R. 34.1(G). The cause is therefore ordered submitted without oral argument.
under 28 U.S.C. § 1291. We GRANT counsel’s motion to withdraw and
DISMISS Montoya’s appeal.
I. Background
In April 2009, a man, wearing a bright blue hat and carrying a black Wells
Fargo bank bag, robbed the Pine River Valley Bank in Durango, Colorado. After
presenting a lengthy demand note to a teller, he showed her a hypodermic syringe
with an orange top. He then fled after taking about $4,000 from her.
The robber’s description was circulated to neighboring banks in Durango.
Five weeks later, the branch manager of the Bank of San Juans, which is directly
across the street from the Pine River Valley Bank, noticed a man matching the
robber’s description acting suspiciously outside her building. The man was
wearing a bright blue hat with an “F” logo on the back. After looking through the
glass bank doors and walking away, the man looped around and began
approaching the side of the bank from an alley. At this point, the branch manager
called 911, and she asked a teller to watch where the man was walking. The teller
lost sight of the man after he passed the drive-through window, but he never
entered the bank.
An hour later, two men robbed the Community Banks of Colorado in
Cortez, Colorado. While his accomplice waited in the getaway vehicle, a tan and
maroon Ford Bronco, one of the men, wearing a blue Florida Gators baseball cap,
entered the bank. He then pulled a handgun on a teller and demanded money.
After taking $4,576, the robber fled, escaping in the getaway vehicle. As the
Ford Bronco sped out of an adjacent parking lot, its front passenger-side tire
struck a curb, and a cigarette butt and tire marks were left at the scene.
Eyewitnesses saw the robber’s escape, and a description of the Ford Bronco
was aired to law enforcement. Shortly thereafter, an off-duty officer stopped a
similar vehicle, with scuff marks on a tire that were consistent with recently
striking a curb. After one of the eyewitnesses identified the stopped Ford Bronco
as the getaway vehicle, the driver, James McBride, was arrested. McBride’s
vehicle contained a cell phone and a traffic ticket from the morning of the bank
robbery, both of which linked him to Montoya. McBride later confessed to
involvement in the Community Banks robbery, the Pine River Valley Bank
robbery, and the attempted robbery of the Bank of San Juans. He named Montoya
as his co-conspirator and testified against him at trial.
In addition, both of the robbed tellers identified Montoya in photo line-ups.
The branch manager and teller at the Bank of San Juans testified about their
observations of the suspicious man outside their building. Moreover, the
videotape of a nearby business demonstrated that, during the attempted robbery of
that bank, a tan and maroon Ford Bronco was circling the area.
Finally, investigating officers obtained search warrants and other court
orders that uncovered additional evidence. Montoya’s cell phone records
indicated many calls and texts between Montoya’s phone and the phone recovered
in McBride’s vehicle, including an early-morning text message stating,
“Whenever you are ready, Loco.” Montoya’s phone was eventually recovered
inside his yellow Dodge pickup truck, the vehicle McBride was driving on the
morning of the robberies when he was ticketed. A search of Montoya’s family
home in Farmington, New Mexico, produced circumstantial evidence of bank
hold-up notes, bank bags, disguises, hypodermic needles with orange caps, and
correspondence linking Montoya to the crimes.
In June 2009, a federal grand jury indicted McBride and Montoya on one
count of armed bank robbery in connection to the Community Banks robbery. In
August 2009, a superseding indictment added a second count of armed bank
robbery based on the Pine River Valley Bank robbery, and a count of using a
firearm during a crime of violence based on the Community Banks robbery. Less
than two weeks later, a second superseding indictment was issued, adding two
more counts: attempted bank robbery in connection with the Bank of San Juans
incident, and conspiracy to commit bank robbery.
McBride entered into a plea agreement with the government in exchange
for cooperating and testifying against Montoya; he received a sentence of 3.5
years. Montoya pleaded not guilty and was tried before a federal jury, which
convicted him of all five counts. Although the government strongly supported the
presentence investigation report (PSR) recommendation of a sentence of 30 years’
imprisonment, the district court sentenced Montoya to a below-guidelines
sentence of 20 years’ imprisonment.
Following Montoya’s timely notice of appeal, his counsel filed an Anders
brief explaining that, after reviewing the record and completing the necessary
research, he determined the appeal had no merit. Montoya was granted additional
time to file a response to that brief, but he has not done so. The government filed
a notice of its intention not to file an answer brief in this appeal.
II. Discussion
Under Anders v. California, 386 U.S. 738 (1967), defense counsel may
“request permission to withdraw where counsel conscientiously examines a case
and determines that any appeal would be wholly frivolous.” United States v.
Calderon, 428 F.3d 928, 930 (10th Cir. 2005). If counsel makes that
determination, he may “submit a brief to the client and the appellate court
indicating any potential appealable issues based on the record.” Id. The client
may also submit arguments to the court in response. We must then fully examine
the record “to determine whether defendant’s claims are wholly frivolous.” Id. If
we find they are, we may dismiss the appeal.
The Anders brief submitted by Montoya’s counsel identifies three issues
that Montoya would like to appeal: (1) the evidentiary decision to permit
testimony that bank hold-up notes were found in the Farmington residence, (2) the
effectiveness of trial counsel’s assistance, and (3) the reasonableness of the
sentence. We address each of these in turn.
A. The Bank Hold-Up Notes
Evidentiary rulings “generally are committed to the very broad discretion
of the trial judge, and they may constitute an abuse of discretion only if based on
an erroneous conclusion of law, a clearly erroneous finding of fact or a manifest
error in judgment.” Webb v. ABF Freight Sys., Inc., 155 F.3d 1230, 1246 (10th
Cir. 1998). Even if the court finds an erroneous evidentiary ruling, a new trial
will be ordered only if the error affects the substantial rights of the party. Id.
(citing Hinds v. GM, 988 F.2d 1039, 1049 (10th Cir. 1993)).
The government has not contended any of the hold-up notes found in the
Farmington residence were used during the Pine River Valley Bank robbery. But
both the demand note used during the robbery and the discovered hold-up notes
were lengthy. On the third day of trial, outside the presence of the jury,
Montoya’s defense counsel objected to the admission of the hold-up notes’
content on the grounds that the notes contained inflammatory language and could
prejudice the jury. But defense counsel indicated he was willing to stipulate to
the fact that hold-up notes were found at the house, so long as their content was
not revealed.
Later that day, after reviewing the hold-up notes, the district court excluded
the content of the notes after finding they “contain explicit information that may
lead the jury to infer that the author or owner of the notes participated or planned
to participate in bank robberies which are not charged in this case.” R., Vol. 3 at
425. Thus, the notes were received as exhibits but were not shown to the jury.
But the court permitted discussion or testimony regarding the existence of the
notes and the fact they were taken from the residence. Id.
The district court did not abuse its discretion in permitting testimony
regarding the discovery of bank hold-up notes. The existence of the notes was
highly probative, since it linked Montoya to the Pine River Valley Bank robbery.
The length of the discovered notes was also relevant, since it corresponded to the
length of the note used in that robbery. 1 Because the prejudicial effect of these
hold-up notes was greatly diminished when the actual content of the notes was
excluded, the permitted testimony was not improper.
B. The Effectiveness of Trial Counsel
Montoya contends his counsel’s performance was objectively deficient and
deprived him of a fair trial with a reliable result. See Fox v. Ward, 200 F.3d
1286, 1295 (10th Cir. 2000) (citing Strickland v. Washington, 466 U.S. 668, 687
(1984)). In a letter requesting a time extension to file a response to the Anders
brief, Montoya avers his counsel “rushed the case to trial and didn’t properly
1
A police officer involved in the search of the residence testified that
three hold-up notes, each typed, were found. One note was nearly an entire page,
another was about two-thirds of a page, and the third was a little less than half a
page in length. Similarly, the Pine River Valley Bank teller testified she only
skimmed the demand note that was given to her because it was fairly long.
investigate an alibi defense,” although he gives no further details as to what this
alibi defense would be. In response, defense counsel contends Montoya was a
voluntary and knowing participant in a strategic decision to push speedy time
requirements. The objective of this strategy was to force the government to
prepare its multiple bank robbery cases against Montoya in a single trial, which
would not give the government adequate time to complete DNA matching and
other forms of identification.
To prevail on his ineffective assistance of counsel claim, Montoya “must
overcome the strong presumption that ‘counsel’s conduct falls within the wide
range of reasonable professional assistance.’” United States v. Smith, 10 F.3d
724, 728 (10th Cir. 1993) (quoting Strickland, 466 U.S. at 689). Furthermore,
strategic decisions are constitutionally ineffective only if they are “completely
unreasonable, not merely wrong, so that they bear no relationship to a possible
defense strategy.” Fox, 200 F.3d at 1296. Indeed, under Strickland, “strategic
choices made after thorough investigation of law and facts relevant to plausible
options are virtually unchallengeable; and strategic choices made after less than
complete investigation are reasonable precisely to the extent that reasonable
professional judgments support the limitations on investigation.” 466 U.S. at
690–91.
Montoya’s ineffective assistance of counsel claim is not appropriate for
review by this court on direct appeal, due to the insufficiency of the record.
Except in rare circumstances not present here, “[i]neffective assistance of counsel
claims should be brought in collateral proceedings, not on direct appeal.” United
States v. Galloway, 56 F.3d 1239, 1240 (10th Cir. 1995) (en banc). When
ineffective assistance of counsel claims are pursued on direct appeal, they “are
presumptively dismissible, and virtually all will be dismissed.” Id. This rule is
based on the following rationale:
A factual record must be developed in and addressed by the district
court in the first instance for effective review. Even if evidence is
not necessary, at the very least counsel accused of deficient
performance can explain their reasoning and actions, and the district
court can render its opinion on the merits of the claim.
Id.
Because the record is insufficient for us to properly assess it on direct
appeal, we dismiss Montoya’s ineffective assistance of counsel claim.
C. The Sentencing Decision
We review sentences for procedural and substantive reasonableness. See
United States v. Kristl, 437 F.3d 1050, 1053 (10th Cir. 2006). When a defendant
is sentenced within a properly-calculated guidelines range, the sentence “is
entitled to a rebuttable presumption of reasonableness.” Id. at 1054. “The
defendant may rebut this presumption by demonstrating that the sentence is
unreasonable in light of the other sentencing factors laid out in [18 U.S.C.]
§ 3553(a).” Id. at 1055. These factors include “the nature and circumstances of
the offense[,] the history and characteristics of the defendant[,] the need for the
sentence imposed . . . to afford adequate deterrence to criminal conduct[, and] the
need to avoid unwarranted sentencing disparities among defendants with similar
records who have been found guilty of similar conduct.” 18 U.S.C. § 3553(a).
Montoya’s counsel asserts the district court calculated the guidelines range
properly and made no procedural errors. We agree, and thus we presume the
sentence to be reasonable. The advisory guidelines sentencing range was 30
years’ to life imprisonment. However, the district court sentenced Montoya to
only 20 years’ imprisonment, significantly below the advisory range. The court
explained:
Although a career offender, the Court believes the guideline range is
unreasonable when considering 18 U.S.C. § 3553 factors. Four of
the defendant’s 9 felonies were committed more than 10 years ago,
when he was 18, and during what appears to be a 2½ month crime
spree.
R., Vol. 1 at 186.
Although McBride received a much shorter sentence than Montoya, the
district court properly concluded the difference did not constitute a disparity
under § 3553 because the two defendants were not similarly situated. McBride
had previously been convicted of only one drug possession felony, and he pleaded
guilty to a single count of aiding and abetting armed robbery. In contrast,
Montoya had been convicted of nine previous felonies, and he was tried and
found guilty of five serious counts, including two counts of armed robbery.
Not only did the district court properly consider § 3553 factors, but it
sentenced Montoya to a decade below the minimum advisory sentence in light of
them. As a result, the sentence is substantively reasonable.
III. Conclusion
We conclude no meritorious appellate issue exists. Accordingly, we
GRANT counsel’s motion to withdraw and DISMISS Montoya’s appeal.
Montoya’s motion requesting appointment of new counsel is DENIED.
ENTERED FOR THE COURT
Timothy M. Tymkovich
Circuit Judge
Recession Doesn't Mean Doing His Housework
By Rivers and Barnett
WeNews commentators
Thursday, May 14, 2009
In a raging recession, Caryl Rivers and Rosalind C. Barnett caution women against thinking they'll help their husbands' and male partners' job security by doing his share of the housework. The first of two parts.
Editor's Note: The following is a commentary. The opinions expressed are those of the author and not necessarily the views of Women's eNews.
(WOMENSENEWS)--In the very near future, women will outnumber men in the labor force, meaning that more women are juggling work and family.
As of November 2008, women held 49 percent of the country's jobs, according to the Bureau of Labor Statistics--and that number is expected to rise.
At the same time, thanks to a raging recession, women are more worried than ever that their husbands' or partners' jobs might be at risk.
Could this mean that women will be more reluctant to negotiate over who does the housework and the child care?
Men have long worried that being too involved with their families will cost them at work. Ambitious bosses are supposed to love the guy who's the last one out the office door at night and who volunteers to work on weekends.
An overburdened working woman might bite her lip instead of speaking out, because if her partner takes on more at home, maybe he will be more vulnerable at work. This fear could put women behind the 8-ball, feeling increased stress and fearful of negotiating for a better deal at home.
So now is the time to examine the findings of a 2008 study called "Can a Manager Have a Life and a Career?"
No Penalty at Work
Professors Karen S. Lyness of Baruch College, City University of New York, and Michael K. Judiesch of Manhattan College studied 9,627 managers in 33 countries and found that those who were "high on work-life balance"--in other words, very involved with their families--did not suffer in their jobs. In fact, they scored higher in career advancement potential than peers who were primarily work-oriented. The full findings were reported in the Journal of Applied Psychology.
That study was echoed by another last year that appeared in the Journal of Marriage and the Family. It found that men's household labor was unrelated to their earnings. The man who pitched in big time at home was just as likely to do well financially as the couch potato.
A couple of key factors may be at work in these findings.
First, society has come to see balancing work and family successfully as a positive value.
Second, in an increasingly globalized economy that demands multitasking and the ability to work well under pressure, the "juggler" may be developing skills neglected by the nose-to-the-grindstone worker.
Whatever the reason, it's a plus--not a minus--for male workers if they are seen as involved with home and family.
Both male and female managers can stop worrying that their devotion to their families will cost them a promotion. A woman can negotiate with her husband for a better deal at home without worrying that she will hurt his job prospects.
Caryl Rivers and Rosalind C. Barnett are the authors of "Same Difference: How Gender Myths Are Hurting Our Relationships, Our Children and Our Jobs" (Basic Books, 2004). Barnett is senior scientist at the Women's Studies Research Center at Brandeis University and Rivers is a professor of journalism at Boston University.
Monday, March 2, 2015
The 15 Most Miserable Economies in the World
Bloomberg, March 2, 2015
Inflation is a disease that can wreck a society, Milton Friedman, the late Nobel laureate economist, once said. Add rising unemployment to the diagnosis, and his profession ascribes a rather non-technical term to the debilitating effect on people: misery.
That affliction this year will be most acute in Venezuela, Argentina, South Africa, Ukraine and Greece — the five most painful economies in which to live and work, according to Bloomberg survey data that make up the so-called misery index for 2015. (It's a simple equation: unemployment rate + change in the consumer price index = misery.)
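The index the article describes reduces to one line of arithmetic. A minimal sketch, using hypothetical placeholder figures rather than the Bloomberg survey values:

```python
def misery_index(unemployment_rate: float, cpi_change: float) -> float:
    """Misery index as defined above: unemployment rate (%) plus
    year-over-year change in the consumer price index (%)."""
    return unemployment_rate + cpi_change

# Hypothetical economies for illustration only.
economies = {
    "Economy A": (14.0, 68.5),   # high inflation dominates
    "Economy B": (25.0, -1.0),   # joblessness with mild deflation
}
ranked = sorted(economies, key=lambda n: misery_index(*economies[n]), reverse=True)
for name in ranked:
    print(f"{name}: {misery_index(*economies[name]):.1f}")
```

Note that the simple sum weights a point of inflation and a point of unemployment equally, which is part of why economists call it a "so-called" index.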
In Ukraine's case, war will exact greater economic casualties. Tension with Russia-backed rebels will prolong joblessness in the eastern-European nation, and inflation won't offer much relief, the surveys showed. The one-two punch means Ukrainian consumers are set to be the fourth-saddest among 51 economies (including the euro area) based on forecasts for the misery measure.
About
This blog is dedicated to the understanding of the current Greek (but also European) economic, political and institutional crisis. It was created by Prof. Aristides Hatzis of the University of Athens, after many requests by his students who seek a source of reliable analysis on the Greek current affairs. Its aim is to post commentary and reports published mainly in the major U.S. and European media and to encourage a rigorous discussion.
Advanced Video Coding
Advanced Video Coding (AVC), also referred to as H.264 or MPEG-4 Part 10, Advanced Video Coding (MPEG-4 AVC), is a video compression standard based on block-oriented, motion-compensated integer-DCT coding. It is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video industry developers. It supports resolutions up to and including 8K UHD.
The intent of the H.264/AVC project was to create a standard capable of providing good video quality at substantially lower bit rates than previous standards (i.e., half or less the bit rate of MPEG-2, H.263, or MPEG-4 Part 2), without increasing the complexity of design so much that it would be impractical or excessively expensive to implement. This was achieved with features such as a reduced-complexity integer discrete cosine transform (integer DCT), variable block-size segmentation, and multi-picture inter-picture prediction. An additional goal was to provide enough flexibility to allow the standard to be applied to a wide variety of applications on a wide variety of networks and systems, including low and high bit rates, low and high resolution video, broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems. The H.264 standard can be viewed as a "family of standards" composed of a number of different profiles, although its "High profile" is by far the most commonly used. A specific decoder decodes at least one, but not necessarily all, profiles. The standard describes the format of the encoded data and how the data is decoded, but it does not specify algorithms for encoding video; that is left open as a matter for encoder designers to select for themselves, and a wide variety of encoding schemes have been developed. H.264 is typically used for lossy compression, although it is also possible to create truly lossless-coded regions within lossy-coded pictures or to support rare use cases for which the entire encoding is lossless.
H.264 was standardized by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC JTC1 Moving Picture Experts Group (MPEG). The project partnership effort is known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 AVC standard (formally, ISO/IEC 14496-10 – MPEG-4 Part 10, Advanced Video Coding) are jointly maintained so that they have identical technical content. The final drafting work on the first version of the standard was completed in May 2003, and various extensions of its capabilities have been added in subsequent editions. High Efficiency Video Coding (HEVC), a.k.a. H.265 and MPEG-H Part 2, is a successor to H.264/MPEG-4 AVC developed by the same organizations, while earlier standards are still in common use.
H.264 is perhaps best known as being the most commonly used video encoding format on Blu-ray Discs. It is also widely used by streaming Internet sources, such as videos from Netflix, Hulu, Prime Video, Vimeo, YouTube, and the iTunes Store, Web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S and DVB-S2) systems.
H.264 is protected by patents owned by various parties. A license covering most (but not all) patents essential to H.264 is administered by a patent pool administered by MPEG LA.
The commercial use of patented H.264 technologies requires the payment of royalties to MPEG LA and other patent owners. MPEG LA has allowed the free use of H.264 technologies for streaming Internet video that is free to end users, and Cisco Systems pays royalties to MPEG LA on behalf of the users of binaries for its open source H.264 encoder.
Naming
The H.264 name follows the ITU-T naming convention, where the standard is a member of the H.26x line of VCEG video coding standards; the MPEG-4 AVC name relates to the naming convention in ISO/IEC MPEG, where the standard is part 10 of ISO/IEC 14496, which is the suite of standards known as MPEG-4. The standard was developed jointly in a partnership of VCEG and MPEG, after earlier development work in the ITU-T as a VCEG project called H.26L. It is thus common to refer to the standard with names such as H.264/AVC, AVC/H.264, H.264/MPEG-4 AVC, or MPEG-4/H.264 AVC, to emphasize the common heritage. Occasionally, it is also referred to as "the JVT codec", in reference to the Joint Video Team (JVT) organization that developed it. (Such partnership and multiple naming is not uncommon. For example, the video compression standard known as MPEG-2 also arose from the partnership between MPEG and the ITU-T, where MPEG-2 video is known to the ITU-T community as H.262.) Some software programs (such as VLC media player) internally identify this standard as AVC1.
History
Overall history
In early 1998, the Video Coding Experts Group (VCEG – ITU-T SG16 Q.6) issued a call for proposals on a project called H.26L, with the target to double the coding efficiency (which means halving the bit rate necessary for a given level of fidelity) in comparison to any other existing video coding standards for a broad variety of applications. VCEG was chaired by Gary Sullivan (Microsoft, formerly PictureTel, U.S.). The first draft design for that new standard was adopted in August 1999. In 2000, Thomas Wiegand (Heinrich Hertz Institute, Germany) became VCEG co-chair.
In December 2001, VCEG and the Moving Picture Experts Group (MPEG – ISO/IEC JTC 1/SC 29/WG 11) formed a Joint Video Team (JVT), with the charter to finalize the video coding standard. Formal approval of the specification came in March 2003. The JVT was chaired by Gary Sullivan, Thomas Wiegand, and Ajay Luthra (Motorola, U.S.; later Arris, U.S.). In July 2004, the Fidelity Range Extensions (FRExt) project was finalized. From January 2005 to November 2007, the JVT was working on an extension of H.264/AVC towards scalability by an Annex (G) called Scalable Video Coding (SVC). The JVT management team was extended by Jens-Rainer Ohm (RWTH Aachen University, Germany). From July 2006 to November 2009, the JVT worked on Multiview Video Coding (MVC), an extension of H.264/AVC towards 3D television and limited-range free-viewpoint television. That work included the development of two new profiles of the standard: the Multiview High Profile and the Stereo High Profile.
Throughout the development of the standard, additional messages for containing supplemental enhancement information (SEI) have been developed. SEI messages can contain various types of data that indicate the timing of the video pictures or describe various properties of the coded video or how it can be used or enhanced. SEI messages are also defined that can contain arbitrary user-defined data. SEI messages do not affect the core decoding process, but can indicate how the video is recommended to be post-processed or displayed. Some other high-level properties of the video content are conveyed in video usability information (VUI), such as the indication of the color space for interpretation of the video content. As new color spaces have been developed, such as for high dynamic range and wide color gamut video, additional VUI identifiers have been added to indicate them.
Fidelity range extensions and professional profiles
The standardization of the first version of H.264/AVC was completed in May 2003. In the first project to extend the original standard, the JVT then developed what was called the Fidelity Range Extensions (FRExt). These extensions enabled higher quality video coding by supporting increased sample bit depth precision and higher-resolution color information, including the sampling structures known as Y′CBCR 4:2:2 (a.k.a. YUV 4:2:2) and 4:4:4. Several other features were also included in the FRExt project, such as adding an 8×8 integer discrete cosine transform (integer DCT) with adaptive switching between the 4×4 and 8×8 transforms, encoder-specified perceptual-based quantization weighting matrices, efficient inter-picture lossless coding, and support of additional color spaces. The design work on the FRExt project was completed in July 2004, and the drafting work on them was completed in September 2004.
Five other new profiles (see version 7 below) intended primarily for professional applications were then developed, adding extended-gamut color space support, defining additional aspect ratio indicators, defining two additional types of "supplemental enhancement information" (post-filter hint and tone mapping), and deprecating one of the prior FRExt profiles (the High 4:4:4 profile) that industry feedback indicated should have been designed differently.
Scalable video coding
The next major feature added to the standard was Scalable Video Coding (SVC). Specified in Annex G of H.264/AVC, SVC allows the construction of bitstreams that contain layers of sub-bitstreams that also conform to the standard, including one such bitstream known as the "base layer" that can be decoded by an H.264/AVC codec that does not support SVC. For temporal bitstream scalability (i.e., the presence of a sub-bitstream with a smaller temporal sampling rate than the main bitstream), complete access units are removed from the bitstream when deriving the sub-bitstream. In this case, high-level syntax and inter-prediction reference pictures in the bitstream are constructed accordingly. On the other hand, for spatial and quality bitstream scalability (i.e., the presence of a sub-bitstream with lower spatial resolution/quality than the main bitstream), the NAL (Network Abstraction Layer) is removed from the bitstream when deriving the sub-bitstream. In this case, inter-layer prediction (i.e., the prediction of the higher spatial resolution/quality signal from the data of the lower spatial resolution/quality signal) is typically used for efficient coding. The Scalable Video Coding extensions were completed in November 2007.
Multiview video coding
The next major feature added to the standard was Multiview Video Coding (MVC). Specified in Annex H of H.264/AVC, MVC enables the construction of bitstreams that represent more than one view of a video scene. An important example of this functionality is stereoscopic 3D video coding. Two profiles were developed in the MVC work: Multiview High profile supports an arbitrary number of views, and Stereo High profile is designed specifically for two-view stereoscopic video. The Multiview Video Coding extensions were completed in November 2009.
3D-AVC and MFC stereoscopic coding
Additional extensions were later developed that included 3D video coding with joint coding of depth maps and texture (termed 3D-AVC), multi-resolution frame-compatible (MFC) stereoscopic and 3D-MFC coding, various additional combinations of features, and higher frame sizes and frame rates.
Versions
Versions of the H.264/AVC standard include the following completed revisions, corrigenda, and amendments (dates are final approval dates in ITU-T, while final "International Standard" approval dates in ISO/IEC are somewhat different and slightly later in most cases). Each version represents changes relative to the next lower version that are integrated into the text.
Version 1 (Edition 1): (May 30, 2003) First approved version of H.264/AVC containing Baseline, Main, and Extended profiles.
Version 2 (Edition 1.1): (May 7, 2004) Corrigendum containing various minor corrections.
Version 3 (Edition 2): (March 1, 2005) Major addition containing the first amendment, establishing the Fidelity Range Extensions (FRExt). This version added the High, High 10, High 4:2:2, and High 4:4:4 profiles. After a few years, the High profile became the most commonly used profile of the standard.
Version 4 (Edition 2.1): (September 13, 2005) Corrigendum containing various minor corrections and adding three aspect ratio indicators.
Version 5 (Edition 2.2): (June 13, 2006) Amendment consisting of removal of prior High 4:4:4 profile (processed as a corrigendum in ISO/IEC).
Version 6 (Edition 2.2): (June 13, 2006) Amendment consisting of minor extensions like extended-gamut color space support (bundled with above-mentioned aspect ratio indicators in ISO/IEC).
Version 7 (Edition 2.3): (April 6, 2007) Amendment containing the addition of the High 4:4:4 Predictive profile and four Intra-only profiles (High 10 Intra, High 4:2:2 Intra, High 4:4:4 Intra, and CAVLC 4:4:4 Intra).
Version 8 (Edition 3): (November 22, 2007) Major addition to H.264/AVC containing the amendment for Scalable Video Coding (SVC) containing Scalable Baseline, Scalable High, and Scalable High Intra profiles.
Version 9 (Edition 3.1): (January 13, 2009) Corrigendum containing minor corrections.
Version 10 (Edition 4): (March 16, 2009) Amendment containing definition of a new profile (the Constrained Baseline profile) with only the common subset of capabilities supported in various previously specified profiles.
Version 11 (Edition 4): (March 16, 2009) Major addition to H.264/AVC containing the amendment for Multiview Video Coding (MVC) extension, including the Multiview High profile.
Version 12 (Edition 5): (March 9, 2010) Amendment containing definition of a new MVC profile (the Stereo High profile) for two-view video coding with support of interlaced coding tools and specifying an additional supplemental enhancement information (SEI) message termed the frame packing arrangement SEI message.
Version 13 (Edition 5): (March 9, 2010) Corrigendum containing minor corrections.
Version 14 (Edition 6): (June 29, 2011) Amendment specifying a new level (Level 5.2) supporting higher processing rates in terms of maximum macroblocks per second, and a new profile (the Progressive High profile) supporting only the frame coding tools of the previously specified High profile.
Version 15 (Edition 6): (June 29, 2011) Corrigendum containing minor corrections.
Version 16 (Edition 7): (January 13, 2012) Amendment containing definition of three new profiles intended primarily for real-time communication applications: the Constrained High, Scalable Constrained Baseline, and Scalable Constrained High profiles.
Version 17 (Edition 8): (April 13, 2013) Amendment with additional SEI message indicators.
Version 18 (Edition 8): (April 13, 2013) Amendment to specify the coding of depth map data for 3D stereoscopic video, including a Multiview Depth High profile.
Version 19 (Edition 8): (April 13, 2013) Corrigendum to correct an error in the sub-bitstream extraction process for multiview video.
Version 20 (Edition 8): (April 13, 2013) Amendment to specify additional color space identifiers (including support of ITU-R Recommendation BT.2020 for UHDTV) and an additional model type in the tone mapping information SEI message.
Version 21 (Edition 9): (February 13, 2014) Amendment to specify the Enhanced Multiview Depth High profile.
Version 22 (Edition 9): (February 13, 2014) Amendment to specify the multi-resolution frame compatible (MFC) enhancement for 3D stereoscopic video, the MFC High profile, and minor corrections.
Version 23 (Edition 10): (February 13, 2016) Amendment to specify MFC stereoscopic video with depth maps, the MFC Depth High profile, the mastering display color volume SEI message, and additional color-related VUI codepoint identifiers.
Version 24 (Edition 11): (October 14, 2016) Amendment to specify additional levels of decoder capability supporting larger picture sizes (Levels 6, 6.1, and 6.2), the green metadata SEI message, the alternative depth information SEI message, and additional color-related VUI codepoint identifiers.
Version 25 (Edition 12): (April 13, 2017) Amendment to specify the Progressive High 10 profile, Hybrid Log-Gamma (HLG), and additional color-related VUI code points and SEI messages.
Version 26 (Edition 13): (June 13, 2019) Amendment to specify additional SEI messages for ambient viewing environment, content light level information, content color volume, equirectangular projection, cubemap projection, sphere rotation, region-wise packing, omnidirectional viewport, SEI manifest, and SEI prefix.
Applications
The H.264 video format has a very broad application range that covers all forms of digital compressed video from low bit-rate Internet streaming applications to HDTV broadcast and Digital Cinema applications with nearly lossless coding. With the use of H.264, bit rate savings of 50% or more compared to MPEG-2 Part 2 are reported. For example, H.264 has been reported to give the same Digital Satellite TV quality as current MPEG-2 implementations with less than half the bitrate, with current MPEG-2 implementations working at around 3.5 Mbit/s and H.264 at only 1.5 Mbit/s. Sony claims that 9 Mbit/s AVC recording mode is equivalent to the image quality of the HDV format, which uses approximately 18–25 Mbit/s.
To ensure compatibility and problem-free adoption of H.264/AVC, many standards bodies have amended or added to their video-related standards so that users of these standards can employ H.264/AVC. Both the Blu-ray Disc format and the now-discontinued HD DVD format include the H.264/AVC High Profile as one of three mandatory video compression formats. The Digital Video Broadcast project (DVB) approved the use of H.264/AVC for broadcast television in late 2004.
The Advanced Television Systems Committee (ATSC) standards body in the United States approved the use of H.264/AVC for broadcast television in July 2008, although the standard is not yet used for fixed ATSC broadcasts within the United States. It has also been approved for use with the more recent ATSC-M/H (Mobile/Handheld) standard, using the AVC and SVC portions of H.264.
The CCTV (Closed Circuit TV) and Video Surveillance markets have included the technology in many products.
Many common DSLRs use H.264 video wrapped in QuickTime MOV containers as the native recording format.
Derived formats
AVCHD is a high-definition recording format designed by Sony and Panasonic that uses H.264 (conforming to H.264 while adding additional application-specific features and constraints).
AVC-Intra is an intraframe-only compression format, developed by Panasonic.
XAVC is a recording format designed by Sony that uses level 5.2 of H.264/MPEG-4 AVC, which is the highest level supported by that video standard. XAVC can support 4K resolution (4096 × 2160 and 3840 × 2160) at up to 60 frames per second (fps). Sony has announced that cameras that support XAVC include two CineAlta cameras—the Sony PMW-F55 and Sony PMW-F5. The Sony PMW-F55 can record XAVC with 4K resolution at 30 fps at 300 Mbit/s and 2K resolution at 30 fps at 100 Mbit/s. XAVC can record 4K resolution at 60 fps with 4:2:2 chroma sampling at 600 Mbit/s.
Design
Features
H.264/AVC/MPEG-4 Part 10 contains a number of new features that allow it to compress video much more efficiently than older standards and to provide more flexibility for application to a wide variety of network environments. In particular, some such key features include:
Multi-picture inter-picture prediction including the following features:
Using previously encoded pictures as references in a much more flexible way than in past standards, allowing up to 16 reference frames (or 32 reference fields, in the case of interlaced encoding) to be used in some cases. In profiles that support non-IDR frames, most levels specify that sufficient buffering should be available to allow for at least 4 or 5 reference frames at maximum resolution. This is in contrast to prior standards, where the limit was typically one; or, in the case of conventional "B pictures" (B-frames), two.
Variable block-size motion compensation (VBSMC) with block sizes as large as 16×16 and as small as 4×4, enabling precise segmentation of moving regions. The supported luma prediction block sizes include 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4, many of which can be used together in a single macroblock. Chroma prediction block sizes are correspondingly smaller when chroma subsampling is used.
The ability to use multiple motion vectors per macroblock (one or two per partition) with a maximum of 32 in the case of a B macroblock constructed of 16 4×4 partitions. The motion vectors for each 8×8 or larger partition region can point to different reference pictures.
The ability to use any macroblock type in B-frames, including I-macroblocks, resulting in much more efficient encoding when using B-frames. This feature was notably left out from MPEG-4 ASP.
Six-tap filtering for derivation of half-pel luma sample predictions, for sharper subpixel motion compensation. Quarter-pixel motion is derived by linear interpolation of the half-pixel values, to save processing power.
Quarter-pixel precision for motion compensation, enabling precise description of the displacements of moving areas. For chroma the resolution is typically halved both vertically and horizontally (see 4:2:0); therefore the motion compensation of chroma uses one-eighth chroma pixel grid units.
Weighted prediction, allowing an encoder to specify the use of a scaling and offset when performing motion compensation, and providing a significant benefit in performance in special cases—such as fade-to-black, fade-in, and cross-fade transitions. This includes implicit weighted prediction for B-frames, and explicit weighted prediction for P-frames.
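The half-pel and quarter-pel arithmetic above can be sketched in a few lines. This is a one-dimensional illustration only; the standard applies the six-tap filter separably in both dimensions, with additional intermediate-precision rules for diagonal positions.

```python
def clip255(x: int) -> int:
    """Clip a sample to the 8-bit range."""
    return max(0, min(255, x))

def half_pel(samples: list[int]) -> int:
    """Six-tap filter (1, -5, 20, 20, -5, 1)/32 over six neighbouring
    full-pel luma samples, rounded to the nearest integer."""
    e, f, g, h, i, j = samples
    acc = e - 5 * f + 20 * g + 20 * h - 5 * i + j
    return clip255((acc + 16) >> 5)   # +16 rounds before dividing by 32

def quarter_pel(a: int, b: int) -> int:
    """Quarter-sample values are the rounded average of the two
    nearest integer- or half-sample positions."""
    return (a + b + 1) >> 1

row = [10, 20, 30, 40, 50, 60]
h_val = half_pel(row)            # 35 on this linear ramp
q_val = quarter_pel(row[2], h_val)
```

On a linear ramp the filter lands exactly between the two centre samples, which is the behaviour one expects from an interpolation filter.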
Spatial prediction from the edges of neighboring blocks for "intra" coding, rather than the "DC"-only prediction found in MPEG-2 Part 2 and the transform coefficient prediction found in H.263v2 and MPEG-4 Part 2. This includes luma prediction block sizes of 16×16, 8×8, and 4×4 (of which only one type can be used within each macroblock).
Integer discrete cosine transform (integer DCT), a type of discrete cosine transform (DCT) where the transform is an integer approximation of the standard DCT. It has selectable block sizes and exact-match integer computation to reduce complexity, including:
An exact-match integer 4×4 spatial block transform, allowing precise placement of residual signals with little of the "ringing" often found with prior codec designs. It is similar to the standard DCT used in previous standards, but uses a smaller block size and simple integer processing. Unlike the cosine-based formulas and tolerances expressed in earlier standards (such as H.261 and MPEG-2), integer processing provides an exactly specified decoded result.
An exact-match integer 8×8 spatial block transform, allowing highly correlated regions to be compressed more efficiently than with the 4×4 transform. This design is based on the standard DCT, but simplified and made to provide exactly specified decoding.
Adaptive encoder selection between the 4×4 and 8×8 transform block sizes for the integer transform operation.
A secondary Hadamard transform performed on "DC" coefficients of the primary spatial transform applied to chroma DC coefficients (and also luma in one special case) to obtain even more compression in smooth regions.
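The 4×4 forward core transform described above uses a small integer matrix, so every conforming implementation computes bit-exact results. A minimal sketch (the normalising scale factors that complete the transform are folded into quantization in a real codec, and are omitted here):

```python
# H.264's 4x4 forward core transform matrix, an integer
# approximation of the DCT basis.
CF = [
    [1,  1,  1,  1],
    [2,  1, -1, -2],
    [1, -1, -1,  1],
    [1, -2,  2, -1],
]

def matmul(a, b):
    """Exact integer 4x4 matrix multiply."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def forward_core_transform(block):
    """Y = Cf * X * Cf^T over a 4x4 residual block."""
    return matmul(matmul(CF, block), transpose(CF))

flat = [[10] * 4 for _ in range(4)]       # a flat residual block
coeffs = forward_core_transform(flat)
# All energy lands in the DC coefficient: coeffs[0][0] == 160
```

Because the matrix contains only the values ±1 and ±2, the transform needs only additions, subtractions, and shifts, which is what makes the exact-match integer computation cheap.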
Lossless macroblock coding features including:
A lossless "PCM macroblock" representation mode in which video data samples are represented directly, allowing perfect representation of specific regions and allowing a strict limit to be placed on the quantity of coded data for each macroblock.
An enhanced lossless macroblock representation mode allowing perfect representation of specific regions while ordinarily using substantially fewer bits than the PCM mode.
Flexible interlaced-scan video coding features, including:
Macroblock-adaptive frame-field (MBAFF) coding, using a macroblock pair structure for pictures coded as frames, allowing 16×16 macroblocks in field mode (compared with MPEG-2, where field mode processing in a picture that is coded as a frame results in the processing of 16×8 half-macroblocks).
Picture-adaptive frame-field coding (PAFF or PicAFF) allowing a freely selected mixture of pictures coded either as complete frames where both fields are combined together for encoding or as individual single fields.
A quantization design including:
Logarithmic step size control for easier bit rate management by encoders and simplified inverse-quantization scaling
Frequency-customized quantization scaling matrices selected by the encoder for perceptual-based quantization optimization
An in-loop deblocking filter that helps prevent the blocking artifacts common to other DCT-based image compression techniques, resulting in better visual appearance and compression efficiency
An entropy coding design including:
Context-adaptive binary arithmetic coding (CABAC), an algorithm to losslessly compress syntax elements in the video stream knowing the probabilities of syntax elements in a given context. CABAC compresses data more efficiently than CAVLC but requires considerably more processing to decode.
Context-adaptive variable-length coding (CAVLC), which is a lower-complexity alternative to CABAC for the coding of quantized transform coefficient values. Although lower complexity than CABAC, CAVLC is more elaborate and more efficient than the methods typically used to code coefficients in other prior designs.
A common simple and highly structured variable length coding (VLC) technique for many of the syntax elements not coded by CABAC or CAVLC, referred to as Exponential-Golomb coding (or Exp-Golomb).
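Exp-Golomb coding is simple enough to show in full: code number k is written as a run of leading zeros followed by the binary representation of k + 1.

```python
def exp_golomb_encode(code_num: int) -> str:
    """Unsigned Exp-Golomb: (number of bits in k+1, minus one)
    leading zeros, then k+1 in binary."""
    value = bin(code_num + 1)[2:]
    return "0" * (len(value) - 1) + value

def exp_golomb_decode(bits: str) -> tuple[int, str]:
    """Decode one codeword from the front of a bit string;
    return (code_num, remaining_bits)."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    codeword = bits[:2 * zeros + 1]
    return int(codeword[zeros:], 2) - 1, bits[2 * zeros + 1:]

# code numbers 0..4 encode as: 1, 010, 011, 00100, 00101
codes = [exp_golomb_encode(k) for k in range(5)]
```

Small values get short codewords, which suits syntax elements whose distribution is heavily skewed toward zero.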
Loss resilience features including:
A Network Abstraction Layer (NAL) definition allowing the same video syntax to be used in many network environments. One very fundamental design concept of H.264 is to generate self-contained packets, to remove the header duplication as in MPEG-4's Header Extension Code (HEC). This was achieved by decoupling information relevant to more than one slice from the media stream. The combination of the higher-level parameters is called a parameter set. The H.264 specification includes two types of parameter sets: Sequence Parameter Set (SPS) and Picture Parameter Set (PPS). An active sequence parameter set remains unchanged throughout a coded video sequence, and an active picture parameter set remains unchanged within a coded picture. The sequence and picture parameter set structures contain information such as picture size, optional coding modes employed, and macroblock to slice group map.
Flexible macroblock ordering (FMO), also known as slice groups, and arbitrary slice ordering (ASO), which are techniques for restructuring the ordering of the representation of the fundamental regions (macroblocks) in pictures. Typically considered an error/loss robustness feature, FMO and ASO can also be used for other purposes.
Data partitioning (DP), a feature providing the ability to separate more important and less important syntax elements into different packets of data, enabling the application of unequal error protection (UEP) and other types of improvement of error/loss robustness.
Redundant slices (RS), an error/loss robustness feature that lets an encoder send an extra representation of a picture region (typically at lower fidelity) that can be used if the primary representation is corrupted or lost.
Frame numbering, a feature that allows the creation of "sub-sequences", enabling temporal scalability by optional inclusion of extra pictures between other pictures, and the detection and concealment of losses of entire pictures, which can occur due to network packet losses or channel errors.
Switching slices, called SP and SI slices, allowing an encoder to direct a decoder to jump into an ongoing video stream for such purposes as video streaming bit rate switching and "trick mode" operation. When a decoder jumps into the middle of a video stream using the SP/SI feature, it can get an exact match to the decoded pictures at that location in the video stream despite using different pictures, or no pictures at all, as references prior to the switch.
A simple automatic process for preventing the accidental emulation of start codes, which are special sequences of bits in the coded data that allow random access into the bitstream and recovery of byte alignment in systems that can lose byte synchronization.
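The emulation-prevention process can be sketched as follows: whenever two consecutive zero bytes are followed by a byte of 0x03 or less, an escape byte 0x03 is inserted, so the start-code prefix 00 00 01 can never occur by accident inside the payload. A minimal Python illustration of the idea:

```python
def insert_emulation_prevention(rbsp: bytes) -> bytes:
    """Escape a raw payload so it cannot emulate a start-code prefix.

    After two consecutive zero bytes, any byte <= 0x03 is preceded
    by the emulation-prevention byte 0x03.
    """
    out = bytearray()
    zeros = 0
    for b in rbsp:
        if zeros >= 2 and b <= 0x03:
            out.append(0x03)  # emulation prevention byte
            zeros = 0
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)
```

For example, the payload bytes 00 00 01 become 00 00 03 01 on the wire; a decoder reverses the process by deleting any 0x03 that follows two zero bytes.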
Supplemental enhancement information (SEI) and video usability information (VUI), which are extra information that can be inserted into the bitstream for various purposes, such as indicating the color space used for the video content or various constraints that apply to the encoding. SEI messages can contain arbitrary user-defined metadata payloads or other messages with syntax and semantics defined in the standard.
Auxiliary pictures, which can be used for such purposes as alpha compositing.
Support of monochrome (4:0:0), 4:2:0, 4:2:2, and 4:4:4 chroma sampling (depending on the selected profile).
Support of sample bit depth precision ranging from 8 to 14 bits per sample (depending on the selected profile).
The ability to encode individual color planes as distinct pictures with their own slice structures, macroblock modes, motion vectors, etc., allowing encoders to be designed with a simple parallelization structure (supported only in the three 4:4:4-capable profiles).
Picture order count, a feature that serves to keep the ordering of the pictures and the values of samples in the decoded pictures isolated from timing information, allowing timing information to be carried and controlled/changed separately by a system without affecting decoded picture content.
These techniques, along with several others, help H.264 to perform significantly better than any prior standard under a wide variety of circumstances in a wide variety of application environments. H.264 can often perform radically better than MPEG-2 video—typically obtaining the same quality at half of the bit rate or less, especially on high bit rate and high resolution video content.
Like other ISO/IEC MPEG video standards, H.264/AVC has a reference software implementation that can be freely downloaded. Its main purpose is to give examples of H.264/AVC features, rather than being a useful application per se. Some reference hardware design work has also been conducted in the Moving Picture Experts Group.
The above-mentioned aspects include features in all profiles of H.264. A profile for a codec is a set of features of that codec identified to meet a certain set of specifications for intended applications. This means that many of the features listed are not supported in some profiles. The various profiles of H.264/AVC are discussed in the next section.
Profiles
The standard defines several sets of capabilities, which are referred to as profiles, targeting specific classes of applications. These are declared using a profile code (profile_idc) and sometimes a set of additional constraints applied in the encoder. The profile code and indicated constraints allow a decoder to recognize the requirements for decoding that specific bitstream. (And in many system environments, only one or two profiles are allowed to be used, so decoders in those environments do not need to be concerned with recognizing the less commonly used profiles.) By far the most commonly used profile is the High Profile.
Profiles for non-scalable 2D video applications include the following:
Constrained Baseline Profile (CBP, 66 with constraint set 1) Primarily for low-cost applications, this profile is most typically used in videoconferencing and mobile applications. It corresponds to the subset of features that are in common between the Baseline, Main, and High Profiles.
Baseline Profile (BP, 66) Primarily for low-cost applications that require additional data loss robustness, this profile is used in some videoconferencing and mobile applications. This profile includes all features that are supported in the Constrained Baseline Profile, plus three additional features that can be used for loss robustness (or for other purposes such as low-delay multi-point video stream compositing). The importance of this profile has faded somewhat since the definition of the Constrained Baseline Profile in 2009. All Constrained Baseline Profile bitstreams are also considered to be Baseline Profile bitstreams, as these two profiles share the same profile identifier code value.
Extended Profile (XP, 88) Intended as the streaming video profile, this profile has relatively high compression capability and some extra tricks for robustness to data losses and server stream switching.
Main Profile (MP, 77) This profile is used for standard-definition digital TV broadcasts that use the MPEG-4 format as defined in the DVB standard. It is not, however, used for high-definition television broadcasts, as the importance of this profile faded when the High Profile was developed in 2004 for that application.
High Profile (HiP, 100) The primary profile for broadcast and disc storage applications, particularly for high-definition television applications (for example, this is the profile adopted by the Blu-ray Disc storage format and the DVB HDTV broadcast service).
Progressive High Profile (PHiP, 100 with constraint set 4) Similar to the High profile, but without support of field coding features.
Constrained High Profile (100 with constraint sets 4 and 5) Similar to the Progressive High profile, but without support of B (bi-predictive) slices.
High 10 Profile (Hi10P, 110) Going beyond typical mainstream consumer product capabilities, this profile builds on top of the High Profile, adding support for up to 10 bits per sample of decoded picture precision.
High 4:2:2 Profile (Hi422P, 122) Primarily targeting professional applications that use interlaced video, this profile builds on top of the High 10 Profile, adding support for the 4:2:2 chroma sampling format while using up to 10 bits per sample of decoded picture precision.
High 4:4:4 Predictive Profile (Hi444PP, 244) This profile builds on top of the High 4:2:2 Profile, supporting up to 4:4:4 chroma sampling, up to 14 bits per sample, and additionally supporting efficient lossless region coding and the coding of each picture as three separate color planes.
For camcorders, editing, and other professional applications (e.g., camera and editing systems), the standard contains four additional Intra-frame-only profiles, defined as simple subsets of corresponding profiles:
High 10 Intra Profile (110 with constraint set 3) The High 10 Profile constrained to all-Intra use.
High 4:2:2 Intra Profile (122 with constraint set 3) The High 4:2:2 Profile constrained to all-Intra use.
High 4:4:4 Intra Profile (244 with constraint set 3) The High 4:4:4 Profile constrained to all-Intra use.
CAVLC 4:4:4 Intra Profile (44) The High 4:4:4 Profile constrained to all-Intra use and to CAVLC entropy coding (i.e., not supporting CABAC).
As a result of the Scalable Video Coding (SVC) extension, the standard contains five additional scalable profiles, which are defined as a combination of an H.264/AVC profile for the base layer (identified by the second word in the scalable profile name) and tools that achieve the scalable extension:
Scalable Baseline Profile (83) Primarily targeting video conferencing, mobile, and surveillance applications, this profile builds on top of the Constrained Baseline profile to which the base layer (a subset of the bitstream) must conform. For the scalability tools, a subset of the available tools is enabled.
Scalable Constrained Baseline Profile (83 with constraint set 5) A subset of the Scalable Baseline Profile intended primarily for real-time communication applications.
Scalable High Profile (86) Primarily targeting broadcast and streaming applications, this profile builds on top of the H.264/AVC High Profile to which the base layer must conform.
Scalable Constrained High Profile (86 with constraint set 5) A subset of the Scalable High Profile intended primarily for real-time communication applications.
Scalable High Intra Profile (86 with constraint set 3) Primarily targeting production applications, this profile is the Scalable High Profile constrained to all-Intra use.
As a result of the Multiview Video Coding (MVC) extension, the standard contains two multiview profiles:
Stereo High Profile (128) This profile targets two-view stereoscopic 3D video and combines the tools of the High profile with the inter-view prediction capabilities of the MVC extension.
Multiview High Profile (118) This profile supports two or more views using both inter-picture (temporal) and MVC inter-view prediction, but does not support field pictures and macroblock-adaptive frame-field coding.
The Multi-resolution Frame-Compatible (MFC) extension added two more profiles:
MFC High Profile (134) A profile for stereoscopic coding with two-layer resolution enhancement.
MFC Depth High Profile (135)
The 3D-AVC extension added two more profiles:
Multiview Depth High Profile (138) This profile supports joint coding of depth map and video texture information for improved compression of 3D video content.
Enhanced Multiview Depth High Profile (139) An enhanced profile for combined multiview coding with depth information.
Feature support in particular profiles
Levels
As the term is used in the standard, a "level" is a specified set of constraints that indicate a degree of required decoder performance for a profile. For example, a level of support within a profile specifies the maximum picture resolution, frame rate, and bit rate that a decoder may use. A decoder that conforms to a given level must be able to decode all bitstreams encoded for that level and all lower levels.
The maximum bit rate for the High Profile is 1.25 times that of the Constrained Baseline, Baseline, Extended and Main Profiles; 3 times for Hi10P, and 4 times for Hi422P/Hi444PP.
The number of luma samples is 16×16=256 times the number of macroblocks (and the number of luma samples per second is 256 times the number of macroblocks per second).
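For example, the macroblock and luma-sample counts for a given picture size follow directly from the 16×16 macroblock geometry. A small Python sketch (the function name is illustrative; dimensions are rounded up to whole macroblocks, as the level limits require):

```python
import math


def macroblock_counts(width: int, height: int, fps: float) -> tuple[int, int, float]:
    """Return (macroblocks per frame, luma samples per frame,
    macroblocks per second) for a given picture size and frame rate."""
    mbs_wide = math.ceil(width / 16)    # picture width in macroblocks
    mbs_high = math.ceil(height / 16)   # frame height in macroblocks
    mbs_per_frame = mbs_wide * mbs_high
    luma_samples = 256 * mbs_per_frame  # 16x16 luma samples per macroblock
    return mbs_per_frame, luma_samples, mbs_per_frame * fps
```

A 1920×1080 picture thus occupies 120×68 = 8,160 macroblocks (2,088,960 luma samples), and at 30 frames per second requires a level supporting at least 244,800 macroblocks per second.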
Decoded picture buffering
Previously encoded pictures are used by H.264/AVC encoders to provide predictions of the values of samples in other pictures. This allows the encoder to make efficient decisions on the best way to encode a given picture. At the decoder, such pictures are stored in a virtual decoded picture buffer (DPB). The maximum capacity of the DPB, in units of frames (or pairs of fields), as shown in parentheses in the right column of the table above, can be computed as follows:
MaxDpbFrames = min(floor(MaxDpbMbs / (PicWidthInMbs × FrameHeightInMbs)), 16)
where MaxDpbMbs is a constant value provided in the table below as a function of level number, and PicWidthInMbs and FrameHeightInMbs are the picture width and frame height for the coded video data, expressed in units of macroblocks (rounded up to integer values and accounting for cropping and macroblock pairing when applicable). This formula is specified in sections A.3.1.h and A.3.2.f of the 2017 edition of the standard.
For example, for an HDTV picture that is 1,920 samples wide (PicWidthInMbs = 120) and 1,080 samples high (FrameHeightInMbs = 68), a Level 4 decoder has a maximum DPB storage capacity of floor(32768/(120*68)) = 4 frames (or 8 fields). Thus, the value 4 is shown in parentheses in the table above in the right column of the row for Level 4 with the frame size 1920×1080.
It is important to note that the current picture being decoded is not included in the computation of DPB fullness (unless the encoder has indicated for it to be stored for use as a reference for decoding other pictures or for delayed output timing). Thus, a decoder needs to actually have sufficient memory to handle (at least) one frame more than the maximum capacity of the DPB as calculated above.
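The DPB capacity computation can be sketched in a few lines of Python. The MaxDpbMbs values below are a partial table taken from Annex A of the standard (verify against the edition you target); the function name and table variable are illustrative.

```python
import math

# Partial MaxDpbMbs table from Annex A of the standard,
# keyed by level_idc (e.g. 40 = Level 4).
MAX_DPB_MBS = {
    30: 8100,
    31: 18000,
    40: 32768,
    41: 32768,
    50: 110400,
    51: 184320,
}


def max_dpb_frames(level_idc: int, width: int, height: int) -> int:
    """Maximum DPB capacity in frames for a level and picture size,
    capped at 16 frames as required by the standard."""
    pic_width_in_mbs = math.ceil(width / 16)
    frame_height_in_mbs = math.ceil(height / 16)
    frames = MAX_DPB_MBS[level_idc] // (pic_width_in_mbs * frame_height_in_mbs)
    return min(frames, 16)
```

Running this for Level 4 at 1920×1080 reproduces the worked example above: floor(32768 / (120 × 68)) = 4 frames, while small picture sizes hit the cap of 16 frames.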
Implementations
In 2009, the HTML5 working group was split between supporters of Ogg Theora, a free video format which is thought to be unencumbered by patents, and H.264, which contains patented technology. As late as July 2009, Google and Apple were said to support H.264, while Mozilla and Opera support Ogg Theora (now Google, Mozilla and Opera all support Theora and WebM with VP8). Microsoft, with the release of Internet Explorer 9, has added support for HTML 5 video encoded using H.264. At the Gartner Symposium/ITXpo in November 2010, Microsoft CEO Steve Ballmer answered the question "HTML 5 or Silverlight?" by saying "If you want to do something that is universal, there is no question the world is going HTML5." In January 2011, Google announced that they were pulling support for H.264 from their Chrome browser and supporting both Theora and WebM/VP8 to use only open formats.
On March 18, 2012, Mozilla announced support for H.264 in Firefox on mobile devices, due to the prevalence of H.264-encoded video and the increased power efficiency of using the dedicated H.264 decoder hardware common on such devices. On February 20, 2013, Mozilla implemented support in Firefox for decoding H.264 on Windows 7 and above. This feature relies on Windows' built-in decoding libraries. Firefox 35.0, released on January 13, 2015, supports H.264 on OS X 10.6 and higher.
On October 30, 2013, Rowan Trollope from Cisco Systems announced that Cisco would release both binaries and source code of an H.264 video codec called OpenH264 under the Simplified BSD license, and pay all royalties for its use to MPEG LA for any software projects that use Cisco's precompiled binaries, thus making Cisco's OpenH264 binaries free to use. However, any software projects that use Cisco's source code instead of its binaries would be legally responsible for paying all royalties to MPEG LA. Current target CPU architectures are x86 and ARM, and current target operating systems are Linux, Windows XP and later, Mac OS X, and Android; iOS is notably absent from this list, because it doesn't allow applications to fetch and install binary modules from the Internet. Also on October 30, 2013, Brendan Eich from Mozilla wrote that it would use Cisco's binaries in future versions of Firefox to add support for H.264 to Firefox where platform codecs are not available.
Cisco published the source to OpenH264 on December 9, 2013.
Software encoders
Hardware
Because H.264 encoding and decoding require significant computing power in specific types of arithmetic operations, software implementations that run on general-purpose CPUs are typically less power efficient. However, the latest quad-core general-purpose x86 CPUs have sufficient computation power to perform real-time SD and HD encoding. Compression efficiency depends on the video algorithmic implementation, not on whether a hardware or software implementation is used. Therefore, the difference between hardware- and software-based implementations is more a matter of power efficiency, flexibility and cost. To improve power efficiency and reduce hardware form factor, special-purpose hardware may be employed, either for the complete encoding or decoding process, or for acceleration assistance within a CPU-controlled environment.
CPU-based solutions are known to be much more flexible, particularly when encoding must be done concurrently in multiple formats, at multiple bit rates and resolutions (multi-screen video), and possibly with additional features such as container format support, advanced integrated advertising features, etc. CPU-based software solutions generally make it much easier to load-balance multiple concurrent encoding sessions within the same CPU.
The 2nd generation Intel "Sandy Bridge" Core i3/i5/i7 processors introduced at the January 2011 CES (Consumer Electronics Show) offer an on-chip hardware full HD H.264 encoder, known as Intel Quick Sync Video.
A hardware H.264 encoder can be an ASIC or an FPGA.
ASIC encoders with H.264 encoder functionality are available from many different semiconductor companies, but the core design used in the ASIC is typically licensed from one of a few companies such as Chips&Media, Allegro DVT, On2 (formerly Hantro, acquired by Google), Imagination Technologies, NGCodec. Some companies have both FPGA and ASIC product offerings.
Texas Instruments manufactures a line of ARM + DSP cores that perform H.264 BP encoding of 1080p video at 30 fps on the DSP. This permits flexibility with respect to codecs (which are implemented as highly optimized DSP code) while being more efficient than software on a generic CPU.
Licensing
In countries where patents on software algorithms are upheld, vendors and commercial users of products that use H.264/AVC are expected to pay patent licensing royalties for the patented technology that their products use. This applies to the Baseline Profile as well.
A private organization known as MPEG LA, which is not affiliated in any way with the MPEG standardization organization, administers the licenses for patents applying to this standard, as well as other patent pools, such as for MPEG-4 Part 2 Video, HEVC and MPEG-DASH. The patent holders include Fujitsu, Panasonic, Sony, Mitsubishi, Apple, Columbia University, KAIST, Dolby, Google, JVC Kenwood, LG Electronics, Microsoft, NTT Docomo, Philips, Samsung, Sharp, Toshiba and ZTE, although the majority of patents in the pool are held by Panasonic, Godo Kaisha IP Bridge and LG Electronics.
On August 26, 2010, MPEG LA announced that royalties would not be charged for H.264-encoded Internet video that is free to end users. All other royalties remain in place, such as royalties for products that decode and encode H.264 video, as well as for operators of free television and subscription channels. The license terms are updated in five-year blocks.
Since the first version of the standard was completed in May 2003 and the most commonly used profile (the High profile) was completed in June 2004, a substantial number of the patents that originally applied to the standard have been expiring, although one of the US patents in the MPEG LA H.264 pool lasts at least until 2027.
In 2005, Qualcomm sued Broadcom in US District Court, alleging that Broadcom infringed on two of its patents by making products that were compliant with the H.264 video compression standard. In 2007, the District Court found that the patents were unenforceable because Qualcomm had failed to disclose them to the JVT prior to the release of the H.264 standard in May 2003. In December 2008, the US Court of Appeals for the Federal Circuit affirmed the District Court's order that the patents be unenforceable but remanded to the District Court with instructions to limit the scope of unenforceability to H.264 compliant products.
See also
High Efficiency Video Coding
VP8
VP9
AOMedia Video 1
Comparison of H.264 and VC-1
Dirac (video compression format)
Ultra-high-definition television
IPTV
References
Further reading
External links
MPEG-4 AVC/H.264 Information Doom9's Forum
H.264/MPEG-4 Part 10 Tutorials (Richardson)
(dated December 2007)
(dated April 2009)
(dated May 2010)
Category:High-definition television
Category:Open standards covered by patents
Category:Video codecs
Category:Video compression
Category:Videotelephony
Category:ITU-T recommendations
Category:ITU-T H Series Recommendations
Category:H.26x
Category:ISO standards
MPEG-4 Part 10
Category:IEC standards
Category:Japanese inventions
Category:South Korean inventions |
The corporate development & support team works towards providing an exceptional level of customer service to clients, franchisees and other corporate staff alike. The programming team works hard to develop new tools and enhance existing tools to improve the BarterPay experience. The graphic design team is dedicated to creating cutting-edge designs that market client offerings in a professional manner. The admin team makes sure that the administrative functions of the exchange are running smoothly.
When barter has appeared, it wasn’t as part of a purely barter economy, and money didn’t emerge from it—rather, it emerged from money. After Rome fell, for instance, Europeans used barter as a substitute for the Roman currency people had gotten used to. “In most of the cases we know about, [barter] takes place between people who are familiar with the use of money, but for one reason or another, don’t have a lot of it around,” explains David Graeber, an anthropology professor at the London School of Economics.
Debts in the wir currency, assigned the same value as the Swiss franc, could be paid with sales to any member of the bartering circle: if a baker needed to “purchase” eggs and flour from a farmer, the baker could pay off the debt by “selling” baked goods to another wir member. The farmer, in turn, could use his newly acquired credit to “buy” his own needed items or services. Despite a bank-led campaign to discredit the system, wir stuck. Today, it has more than 60,000 business participants and does the equivalent of about $4.4 billion in annual trade. |
Adaptation of the endothelium to fluid flow: in vitro analyses of gene expression and in vivo implications.
Biomechanical forces generated by blood flow play an important role in the pathogenesis of vascular disease. For example, regions exposed to non-uniform shear stresses develop early atherosclerotic lesions while areas exposed to uniform shear stresses are protected. A variety of in vitro flow apparatuses have been created to apply well-characterized flow patterns to endothelial cells in an effort to dissect the cellular and molecular pathways involved in these distinct processes. Recent advances in biotechnology have permitted large-scale transcriptional profiling techniques to replace candidate gene screens and have allowed the genome-wide examination of biomechanical force-induced endothelial gene expression profiles. This review provides an overview of biomechanical force-induced modulation of endothelial phenotype. It examines the effect of sustained laminar shear stress (LSS), a type of uniform shear stress, on in vitro endothelial gene expression by synthesizing data from the early candidate gene and differential display polymerase chain reaction (PCR) approaches to the numerous, recent, high throughput functional genomic analyses. These studies demonstrate that prolonged LSS regulates the expression of only a small percentage (approximately 1-5%) of endothelial genes, and this transcriptional profile produces an endothelial phenotype that is quiescent, being protected from apoptosis, inflammation and oxidative stress. These observations provide a possible molecular mechanism for the strong correlation between patterns of blood flow and the occurrence of vascular pathologies, such as atherosclerosis, in vivo. |
Q:
how to import external JSON jar file into ANDROID project
I want to convert some data into JSON for transmission, but the built-in Android JSON methods are quite tedious, so I created a user library and imported some JSON jar files I always use in JSE projects. When I launch the Android project, an Android Packaging Problem occurs. The problem tab says: "Description: Conversion to Dalvik format failed with error 1; Location: Unknown; Type: Android Packaging Problem".
I looked the problem up on the internet and tried to fix it by cleaning the project, but that doesn't work.
Does anybody have experience dealing with this problem?
Please help me.
Thanks in advance.
A:
I imported the JSON jar by adding a new library in the Java Build Path
Do not do that.
Instead, create a libs/ directory in your project and put the JAR there. If you are on the latest version of the ADT plugin for Eclipse, having your JAR be in libs/ will automatically add it to your build path and will automatically include the JAR's contents in your APK.
|
//
// Generated by class-dump 3.5 (64 bit) (Debug version compiled Oct 25 2017 03:49:04).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard.
//
#import <AppKit/NSViewController.h>
@class NSArray, NSImageView, NSStackView, NSTextField, NSView, UAShortcutCategory;
@interface UACCategoryViewController : NSViewController
{
BOOL _showBottomDelim;
UAShortcutCategory *_shortcutCategory;
NSArray *__controllers;
NSStackView *__stackView;
NSView *__lineView;
NSTextField *__title;
NSImageView *__imageView;
}
- (void).cxx_destruct;
@property __weak NSImageView *_imageView; // @synthesize _imageView=__imageView;
@property __weak NSTextField *_title; // @synthesize _title=__title;
@property __weak NSView *_lineView; // @synthesize _lineView=__lineView;
@property __weak NSStackView *_stackView; // @synthesize _stackView=__stackView;
@property(retain, nonatomic) NSArray *_controllers; // @synthesize _controllers=__controllers;
@property(nonatomic) BOOL showBottomDelim; // @synthesize showBottomDelim=_showBottomDelim;
@property(retain, nonatomic) UAShortcutCategory *shortcutCategory; // @synthesize shortcutCategory=_shortcutCategory;
- (void)viewDidLoad;
- (void)updateState;
- (id)_categoryImage;
- (id)initWithShortcutCategory:(id)arg1;
- (id)init;
- (id)initWithNibName:(id)arg1 bundle:(id)arg2;
@end
|