It's boring but I don't mind it. It's the constant killing of the ball with complete impunity that really fvcks me off. The only ref ever to call it properly was Wayne Barnes and he was vilified by the Irish rugby press as a result.
Oh come on, Ireland play some fantastic rugby, their handling is a joy to behold at times. If the opposition's idea of a defensive plan is to put the FH in A&E don't be surprised if it goes up in the air.
For the English fans - wtf? He has got to sort out the captain issue. His loyalty to Hartley is looking foolish. Hartley is not world class. England need their captain to be the first or second name on the team sheet.
Full back. Thanks Mike Brown - you have been a great servant but we need attacking options from full back.
Performance-wise, they have been very patchy save for the dynamic first twenty against Italy and the last ten against Wales.
Still some way short of the 2003 side and nowhere near NZ on yesterday's performance.
Thing is, when you watch the game live you can watch one player. I did watch Brown once when they played us and I think his best is behind him, and NZ are ruthless with players like that. Thanks mate. Bye.
We have great strength in certain positions and less so in others. But we have a big step up to make to be a side that can win ugly but also run it in when there is space to do so, with players who have real pace and guile. We won't win the World Cup by scoring penalties, so we need different styles of play, and this needs on-field leadership and players who, as Wilko said, 'play as one'.
Congratulations Ireland, who played brilliantly. We looked rubbish (we were, generally) because their intensity at the breakdown was phenomenal. They cheated a bit (as do we usually, as do most decent sides, so no issues with that), scrapped for everything, and flustered us out of ideas.
Don't forget though, the 2003 side did this. I'm actually ok with it - learning some hard lessons this season should stand us in good stead, and with George to come in, Brown to go, and fully fit players to come back, we should be ok. Missed Robshaw I thought.
Let's be honest though, if Wales and Ireland played most games the way they do against us, they'd win far more often.
Anyway, as Sarries said this should be the end of Mike Brown. Faithful servant, thanks very much, but goodbye please.
Ireland have traditionally played that way... fast and frenetic, trying to disrupt, although to be fair they have some class players and they love a game to open up. I just cannot work out what our game plan was. Eddie says the team was not prepared well by him and that JS coached Ireland brilliantly. It just seemed that England could not get into the game and change their tactics. Even after half-time there seemed to be no clear idea other than to carry on and hope for some penalties.
There was a point in the third quarter where England started to put a bit of territory and possession together, and I thought this would be the start of the comeback, but the Irish snuffed it out again fairly quickly. I would have liked to see what Farrell and Teo would have done with what ball they had, but Teo's injury seemed to coincide with the Irish raising their game again.
|
{
"pile_set_name": "Pile-CC"
}
|
Dear editor,
A patient with sudden complete loss of vision is presented; the cause ultimately turned out to be a pituitary adenoma coincidentally infected with MRSA.
A 39-year-old male patient woke up with complete blindness of both eyes. History taking revealed slightly blurred vision for several weeks. The patient had also suffered from nosebleeds and nasal discharge, for which he had visited his general practitioner; no further action was taken at that time. On physical examination, we saw an acutely ill patient (body temperature, 39 °C) with complete blindness, but without any focal neurological deficit. Magnetic resonance imaging (MRI) showed a sellar mass with suprasellar expansion and compression of the optic chiasm (Fig. [1a](#Fig1){ref-type="fig"}). In addition, the right sphenoid sinus was filled with debris, possibly in continuity with the suprasellar region. Laboratory testing did not reveal any endocrine disturbances, while the leukocyte count and C-reactive protein level were slightly elevated. Fig. 1**a** Preoperative T2-weighted coronal MR image showing a sellar mass with suprasellar expansion and severe compression of the chiasm. It has a somewhat dumbbell shape. **b** Postoperative T1-weighted contrast-enhanced, coronal MR image showing a contrast-enhancing sellar mass remnant, while the optic chiasm is adequately decompressed
An emergency left-sided pterional craniotomy was performed to decompress the optic system. During surgery, to our surprise, extensive signs of infection were seen, with debris and pus in the interhemispheric fissure and around the suprasellar mass. After surgery, the patient was admitted to the intensive care unit, and empirical therapy with broad-spectrum antibiotics (intravenous ceftriaxone, metronidazole and penicillin) was initiated. Pathological examination revealed a non-functioning adenoma, while infection with MRSA was identified in the same specimen 1 day after surgery. Subsequently, intravenous vancomycin was added.
Direct postoperative MRI showed adequate decompression of the optic system. However, the right sphenoid sinus was still filled with debris. Subsequently, the ear, nose and throat physician performed a sphenoidectomy. A mass was evacuated that contained MRSA. Additional history taking revealed that, approximately 6 months before admission, the patient had adopted a child from Brazil who was treated for MRSA.
In the days after surgery, the patient's clinical course declined dramatically with cerebritis, ventriculitis and meningitis, also requiring external ventricular drainage. The patient still suffered from complete blindness. However, after 3 weeks of intensive antibiotic treatment (systemic and intrathecal), the patient showed signs of recovery, after which he was weaned from the ventilator. Two months after surgery, the patient was discharged with a ventriculoperitoneal shunt device. At that time, the patient had light perception. On a visit to our outpatient department 6 months after surgery, the patient's vision had almost completely recovered, and he did not suffer from any focal neurological deficit. MRI showed adequate decompression of the optic system with a small tumour remnant (Fig. [1b](#Fig1){ref-type="fig"}). Because of the involvement of the cavernous sinus within the remnant, total surgical resection was considered impossible. For that reason, the patient received radiotherapy to the remnant.
Pituitary abscess itself is rare, with an incidence of less than 1% of all cases of pituitary disease \[[@CR2], [@CR3], [@CR7]\], and radiological distinction from other sellar masses is difficult \[[@CR4], [@CR5]\]. As a result, pituitary abscesses are often diagnosed only during surgery when empyema or pus is found.
Primary pituitary abscess may develop in a normal pituitary gland, either due to haematogenous seeding or by direct extension of adjacent infection (either in the CSF or sphenoid sinus) \[[@CR2]\]. Secondary pituitary abscesses occur in glands that harbour a pre-existing lesion, such as an adenoma \[[@CR3]\]. Other risk factors are an underlying immunocompromised condition, previous pituitary surgery or irradiation of the pituitary gland.
Our patient probably suffered from a pre-existing pituitary adenoma, secondarily infected by an infection in the sphenoid sinus. This may also explain the clinical course of our patient: a history of blurred vision because of the macro-adenoma, with a co-existing nasal discharge because of the sphenoid sinus infection. The sudden loss of vision may be explained by a sudden rupture of an abscess wall or because of a sudden breakthrough of pus and debris from the sphenoid sinus into the pituitary cavity. Another explanation may be inflammation of the optic nerves due to the infected adenoma \[[@CR6]\].
One could discuss the transcranial approach that was performed in this patient, as it is argued that a transsphenoidal approach should be favoured over transcranial surgery as it may limit the spread of micro-organisms \[[@CR1], [@CR7]\].
However, because of the sudden complete blindness, we wanted to be absolutely sure that the optic chiasm was immediately and adequately decompressed. In addition, as can be seen on the preoperative coronal view (Fig. [1a](#Fig1){ref-type="fig"}), the process has a somewhat dumbbell shape, which may limit adequate transsphenoidal decompression. Also, retrospectively, significant cerebral dissemination was already present before the patient was operated on.
**Open Access** This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
|
{
"pile_set_name": "PubMed Central"
}
|
James Earl Jones in New Season at the Old Vic Theatre
Thursday, 6th Dec 2012
Kim Cattrall and Vanessa Redgrave Will Also Appear in the Old Vic 2013 Season
James Earl Jones is set to make a return to the London theatre scene in 2013 as the Old Vic Theatre announces its new season. Next year the American thespian will appear in “Much Ado About Nothing” alongside Vanessa Redgrave, the latest occasion on which the two have worked together following “Driving Miss Daisy” at the Wyndham’s Theatre. The season will also include productions of “The Winslow Boy” and “Sweet Bird of Youth”.
“The Winslow Boy”, which opens the season from March to May 2013, is the famous play by Terence Rattigan, which premiered in London back in the 1940s and is based upon a true story from a previous century. It focuses on a father as he attempts to clear the name of his son, who has been accused of stealing. The show will be directed by Lindsay Posner.
It is followed by Kim Cattrall in “Sweet Bird of Youth” from June to August 2013. The focus is on a Hollywood actress who has faded from fame and attempts a comeback with a brand new movie. When that fails she is racked with despair and eventually seeks refuge in a new identity – The Princess Kosmonopolis. Those with theatre tickets will see a production directed by Marianne Elliott.
Finally, James Earl Jones and Vanessa Redgrave will have their chance to make an impression when “Much Ado About Nothing” runs at the Old Vic Theatre from September 2013 to November 2013. The Shakespearean classic will be directed by London favourite Mark Rylance and follows two young lovers whose romance could be ruined by a jealous prince. Meanwhile, another couple face the prospect of a marriage that they do not want.
James Earl Jones is best known for providing the voice of Darth Vader in the “Star Wars” movies and King Mufasa in “The Lion King”. He has also appeared in movies like “Field of Dreams” and in stage productions like “Cat On A Hot Tin Roof” at the Novello Theatre, in addition to a range of shows on Broadway.
|
{
"pile_set_name": "Pile-CC"
}
|
Taiwan: Taipei Zoo reflects on last 20 years
What Happened Next? — updates on the TJ Retrospective
By Allen Hsu and Mark Caltonhill
September 1986
Twenty years ago last month, the Free China Journal–the forerunner of this paper–ran an article titled “Zoo on the Move,” which reported that “over 300,000 Taipei residents poured into the streets on Sunday morning Sept. 14 to witness a spectacular hour-and-a-half exodus of 65 Taipei Zoo animals from downtown Yuanshan to Mucha, to their new home in spacious suburb southeast of town.”
Among this last batch of the zoo’s 170 species making the 14.3 kilometer move were “four tigers, five African lions, four swans, 10 peacocks, five angora goats, 12 Taiwan monkeys, one raccoon, two giant turtles and several parrots,” it reported. Zoo Director Wang Kuang-ping was commended for having overcome enormous logistical difficulties in moving the animals, and was quoted as saying he expected the menagerie to grow to around 3,000 species at its new home, which was 30 times bigger than the 72-year-old site used at Yuanshan.
The Taiwan Journal recently called on current Taipei Zoo Director Chen Pao-chung, who has worked at the zoo since earning a Ph.D. from National Taiwan University’s forestry department in 1977, to ask him about the zoo’s 1986 move, developments since then, and plans for the future.
Taiwan Journal: What were the reasons behind the move? After all, Yuanshan is much closer to downtown Taipei than Muzha.
Chen Pao-chung: The number of Taipei residents visiting the zoo at Yuanshan had always been very large, and by 1981, the 5.8-hectare site was already too small. Furthermore, as the zoo was located beneath the flight path of airplanes arriving at Taipei’s Songshan Airport, the noise meant the zoo was unable to provide all the normal functions a modern zoo should. City Hall therefore decided to move the zoo to the Muzha site which, at about 165 hectares, was around 30 times bigger.
You cannot imagine how difficult it is to move a zoo, particularly moving large, nervous animals such as zebras, elephants and giraffes. Some people suggested we should have hired an experienced professional foreign company specializing in moving animals. Due to the extremely high charges, however, we decided to do it on our own. Fortunately, the zoo had an advisor who had 50 years of experience in the capture, training and transportation of animals and, with his guidance, we drew up a carefully thought-out plan to move the animals. We spent more than three months moving the 1,500 animals to their new home. Thousands of people, not just from Taipei but from all over Taiwan, came to give them a warm send-off.
One result of the careful planning, and one of the achievements we were most proud of, was the zero-percent death rate during the move. Before moving the elephants, for example, we placed their carrying boxes in their cages so that they would become familiar with them. When the time to move came, we simply asked them to enter the boxes without even trying to anesthetize them. Each species differed in the method needed to box them, transport them and to acclimatize them to their new environments. The media watched us closely, so this achievement meant a lot to the zoo.
Q: What new animals and facilities have been added since the move, and which have disappeared?
A: A few of the animals that made the move are still alive today, 20 years later. These include orangutans, gibbons, saltwater crocodiles, crab-eating monkeys, rhinos and, until very recently, the zoo’s famous Asian elephants, Lin Wang and Malan. Since that relocation, the zoo has expanded to contain about 3,000 animals belonging to 410 species. New facilities include the insectarium, amphibian and reptile house, penguin house, koala house, Asian tropical rainforest area and a rescue and rehabilitation center. The latter is where our experienced veterinarians help heal injured or sick animals sent to us from around Taiwan. The penguin and koala houses are the most popular sites with visitors. In the future, however, we plan to put more emphasis on the amphibian and reptile house, since these two types of animals are in dire straits due to farmland expansion, pollution by pesticide and habitat destruction. This is the main reason we built the house, which we intend will play a significant role in the study and conservation of amphibians and reptiles in Asia.
Q: What changes have been made to the running of the zoo, and what plans do you have for the future?
A: The old and new zoos differ in several ways. The old zoo focused on entertainment and even held circus-like animal performances, although these ended in 1979. Before that time, the public viewed zoos as places of entertainment. Between 1979 and 1981, we reassessed the zoo’s management, taking on board new concepts from the World Society for the Protection of Animals that zoos should help educate people and protect animals. In 1981, we trained teams of volunteers that then offered educational services to schoolchildren and the general public.
The zoo still had to enhance its conservation function. Animals are raised in zoos for humans to watch, so in return, humans should be responsible for taking care of animals. A zoo without visitors is unable to bring all its functions into full play, however, so new zoos focus on combining the roles of entertainment, education and conservation. As a result of these integrated functions, the zoo’s visitors have always outnumbered those of any other recreational site islandwide.
Research is yet another important function. By understanding animals’ habits and behavior, we can know more about their management and protection. After moving to the new zoo, we started to set up a biological database for animals that can be referenced in the study of wild animals as well as used for educational purposes. In short, therefore, the new zoo’s main emphasis–as well as its main difference with the old zoo–is on conservation.
As for short-term objectives, we plan to establish a research and conservation center and to organize a group of experts to run the center. Obviously we want to do more research on animals, such as completion of the DNA database, studying behavior patterns and undertaking ecological surveys in the wild. Long-term objectives include plans to establish an environmental education center in Taipei, recreational center in Taiwan and wild animal conservation hub in Asia. We also intend to establish sister-zoo relationships with as many zoos around the world as possible, with each zoo focusing on different animals and cooperating with each other.
Q: How do you balance the zoo’s educational, conservation and leisure functions?
A: Visitors go to zoos because they want to relax and have a good time. While enjoying that leisure time, they can also learn about animals through direct observation. This is a crucial point: Only when a person is interested in animals, and fond of them, is he willing to protect them.
To fulfill this educational function, we put up informative signs so visitors can learn while they are having fun. Meanwhile, social changes mean that people are starting to pay more attention to animal welfare, which is in line with the zoo’s long-term goal. To improve animal welfare, we enhanced the exhibition areas, changed the way we feed animals, train our animals appropriately and conduct regular physical checkups.
Animals are no longer trained just to perform. For example, if a well-trained elephant has a problem with its leg, we only need to ask it to sit down and raise its wounded leg for us to examine. It is much easier and safer to do checkups on trained animals than untrained ones. Training, in fact, is an interaction between animals and humans. We never use whips to train animals; instead, we use “positive psychology,” which rewards animals for following directions. Animal welfare is always the top consideration.
Q: What is the zoo’s position on importing pandas to Taiwan?
A: We first applied to import pandas more than 10 years ago. The request was eventually turned down by the government due mainly to politics. Last year, when China’s leaders confirmed they were willing to send a pair of pandas to Taiwan as a gift, this issue was debated fervently. People were also concerned whether there would be a place to house and care for the pandas if they did come. The zoo’s position is very clear. We do not need pandas if importing them is not beneficial to wild animal conservation–otherwise, why would we want pandas? There are, therefore, three conditions to be met before we would accept pandas: first, we must be able to raise them in a healthy way. Second, we should be able to contribute to panda conservation. And third, we should be able to contribute to the conservation of Taiwan’s wild animals through this. We anticipate that if pandas come, they will draw attention to Taiwan’s own endangered species and help raise more funds from local businesses that could be used to conserve species on the verge of extinction. Normally, very few people or businesses really care much about wild animals, so we hope the arrival of such a popular animal could generate more resources and allow these resources to be diverted to other animals in need.
We think pandas will come sooner or later, so we continue to prepare by studying pandas, observing pandas in situ and constructing a panda house, which is scheduled for completion next year. We will continue these preparations, but whether or not we get pandas is not something we at the zoo will decide.
|
{
"pile_set_name": "Pile-CC"
}
|
A programmable device (e.g., a programmable microcontroller) contains configuration registers which hold configuration data to establish functional blocks (e.g., which perform user-defined logic functions), I/O blocks (e.g., which configures input/output blocks interfacing to external devices), and/or signal routing resources (e.g., which connect the functional blocks to each other and/or the I/O blocks).
The configuration data may be represented as configuration bits of configuration registers (e.g., stored as volatile memory). Upon the boot-up of the programmable device, the configuration data stored in non-volatile memory may be copied to the configuration registers of the volatile memory. However, the configuration data residing in the configuration registers may be compromised due to several factors.
For example, an unintended software execution may create a write-over condition where improper data may be written to the configuration registers. Additionally, cosmic rays, X-rays, and/or other environmental factors may cause the configuration data to degrade (e.g., flip); these are known as soft errors. These errors (e.g., the write-over, soft errors, etc.) of the configuration data may compromise the functional block, the I/O blocks, and/or the routing resources, thereby rendering the programmable device inoperable for its intended purposes.
When the programmable device is used in a critical application (e.g., an emergency system) or for a life-critical function, its reliability becomes even more important. For instance, the functionality of an airbag deployment system may rely on the operation of a programmable device. Furthermore, the configuration data (e.g., bits) become more susceptible to soft errors as the feature size (e.g., the silicon geometry) of the programmable device gets smaller.
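One common mitigation for the soft errors described above is periodic "scrubbing": comparing the live configuration registers against a golden copy (or its checksum) and rewriting them when they differ. A minimal sketch of the idea; the register-file size, CRC routine and function names below are made up for illustration, not taken from any particular device:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simple CRC-8 (polynomial 0x07) over n 32-bit configuration words.
std::uint8_t cfg_crc8(const std::uint32_t* regs, std::size_t n) {
    std::uint8_t crc = 0;
    for (std::size_t i = 0; i < n; ++i) {
        for (int b = 24; b >= 0; b -= 8) {
            crc ^= static_cast<std::uint8_t>(regs[i] >> b);
            for (int k = 0; k < 8; ++k)
                crc = (crc & 0x80) ? static_cast<std::uint8_t>((crc << 1) ^ 0x07)
                                   : static_cast<std::uint8_t>(crc << 1);
        }
    }
    return crc;
}

// If the live registers no longer match the checksum of the golden copy
// (e.g. a cosmic ray flipped a bit), rewrite them from that copy.
// Returns true when a repair was made.
bool cfg_scrub(std::uint32_t* live, const std::uint32_t* golden,
               std::uint8_t golden_crc, std::size_t n) {
    if (cfg_crc8(live, n) == golden_crc)
        return false;            // configuration intact, nothing to do
    for (std::size_t i = 0; i < n; ++i)
        live[i] = golden[i];     // restore from the non-volatile image
    return true;
}
```

In a real device the scrub would run periodically (e.g. from a timer interrupt), and the "golden" image would live in the non-volatile memory mentioned above; CRC-8 is chosen here only because it detects any single-bit flip.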
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
At least 23 people have been hurt in clashes outside the Spanish Parliament in Madrid, as hundreds of protesters gathered on Saturday to demonstrate against newly proposed anti-protest legislation.
The demonstrators held signs that said 'Freedom to protest' and 'People's Party, shame of Spain!' while police and barricades prevented them from getting any closer to the parliament building.
The rally in the heart of the Spanish capital finished by 10 p.m. local time, with at least seven protesters detained and 23 people injured - 14 of them police - EFE news agency reports.
Watch Ruptly's footage of the violent scenes on the streets of Madrid:
The new law, drafted by Spain's ruling People's Party, would introduce fines for activists taking part in unauthorized protests, publishing images of police, or interrupting public events.
Demonstrating near parliament without permission could result in a fine as high as 600,000 euro (US$824,040), while insulting a police officer could cost a demonstrator up to 30,000 euro ($41,202).
Watch Ruptly's footage of the protest outside parliament:
|
{
"pile_set_name": "OpenWebText2"
}
|
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# @ECLASS: java-virtuals-2.eclass
# @MAINTAINER:
# java@gentoo.org
# @AUTHOR:
# Original Author: Alistair John Bush <ali_bush@gentoo.org>
# @BLURB: Java virtuals eclass
# @DESCRIPTION:
# To provide a default (and only) src_install function for ebuilds in the
# java-virtuals category.
inherit java-utils-2
DEPEND=">=dev-java/java-config-2.2.0-r3"
RDEPEND="${DEPEND}"
S="${WORKDIR}"
EXPORT_FUNCTIONS src_install
# @FUNCTION: java-virtuals-2_src_install
# @DESCRIPTION:
# default src_install
java-virtuals-2_src_install() {
java-virtuals-2_do_write
}
# @FUNCTION: java-virtuals-2_do_write
# @INTERNAL
# @DESCRIPTION:
# Writes the virtual env file out to disk.
java-virtuals-2_do_write() {
java-pkg_init_paths_
dodir "${JAVA_PKG_VIRTUALS_PATH}"
{
if [[ -n "${JAVA_VIRTUAL_PROVIDES}" ]]; then
echo "PROVIDERS=\"${JAVA_VIRTUAL_PROVIDES}\""
fi
if [[ -n "${JAVA_VIRTUAL_VM}" ]]; then
echo "VM=\"${JAVA_VIRTUAL_VM}\""
fi
if [[ -n "${JAVA_VIRTUAL_VM_CLASSPATH}" ]]; then
echo "VM_CLASSPATH=\"${JAVA_VIRTUAL_VM_CLASSPATH}\""
fi
echo "MULTI_PROVIDER=\"${JAVA_VIRTUAL_MULTI=FALSE}\""
} > "${JAVA_PKG_VIRTUAL_PROVIDER}"
}
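# For illustration, a standalone sketch (not the eclass itself) of what
# java-virtuals-2_do_write prints into "${JAVA_PKG_VIRTUAL_PROVIDER}",
# minus the portage path helpers. The example values are made up.

```shell
write_virtual_env() {
    if [ -n "${JAVA_VIRTUAL_PROVIDES}" ]; then
        echo "PROVIDERS=\"${JAVA_VIRTUAL_PROVIDES}\""
    fi
    if [ -n "${JAVA_VIRTUAL_VM}" ]; then
        echo "VM=\"${JAVA_VIRTUAL_VM}\""
    fi
    if [ -n "${JAVA_VIRTUAL_VM_CLASSPATH}" ]; then
        echo "VM_CLASSPATH=\"${JAVA_VIRTUAL_VM_CLASSPATH}\""
    fi
    # ${VAR=FALSE} expands to FALSE (and assigns it) only if VAR is unset,
    # mirroring the eclass's default for MULTI_PROVIDER.
    echo "MULTI_PROVIDER=\"${JAVA_VIRTUAL_MULTI=FALSE}\""
}

# Hypothetical ebuild values:
JAVA_VIRTUAL_PROVIDES="dev-java/sun-jaf dev-java/gnu-jaf"
write_virtual_env
```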
|
{
"pile_set_name": "Github"
}
|
--require ./test/config.js
--require ./test/setup-unit-tests.js
|
{
"pile_set_name": "Github"
}
|
Patients' journeys to a narcolepsy diagnosis: a physician survey and retrospective chart review.
Narcolepsy is a lifelong disorder with potentially debilitating symptoms. Obtaining an accurate diagnosis often requires multiple tests and physician visits. This report describes results from an online, quantitative, company-sponsored survey in which physicians provided information from the charts of their patients with narcolepsy. Neurologists, pulmonologists, psychiatrists, and other specialists who were board certified in sleep medicine; had 2 to 30 years of clinical experience; and treated ≥ 5 narcolepsy patients per month were invited to complete ≤ 6 surveys using charts of patients who were treated for narcolepsy in the last 6 months. Data from 252 patients were collected from 77 physicians. Patients were predominantly male (55%), white (67%), and had a median age of 38 years (range: 12-83 years). Referral to the respondent physician was common, mainly from primary care physicians. The most common initial symptoms were excessive daytime sleepiness (91%), trouble staying awake during the day (44%), and trouble concentrating/functioning during the day (43%). Overall, initial symptoms were of at least moderate severity in 85% of patients. Most patients completed overnight polysomnography (83%), a Multiple Sleep Latency Test (76%), and/or the Epworth Sleepiness Scale (62%). The median time from patient-reported symptom onset to diagnosis was 22 months (range: 0-126 months); at least half saw ≥ 2 providers before being diagnosed; and 60% of patients had previously been misdiagnosed with other disorders, including depression (31%), insomnia (18%), and/or obstructive sleep apnea (13%). In this study, the journey to a narcolepsy diagnosis required evaluation by multiple physicians and took nearly 2 years in 50% of patients, and > 5 years in 18%. These data highlight the need for increased awareness of the signs and symptoms of narcolepsy.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
[The mechanism of carbofuran's interaction with calf thymus DNA].
Carbofuran is an insecticide used on a variety of crops. Acute and chronic occupational exposure of humans to carbofuran has been observed to cause cholinesterase inhibition, but little is known about the interaction of carbofuran with DNA. Using UV spectroscopy and fluorescence quenching, the interaction between carbofuran and ct DNA was studied. The UV spectra showed that ct DNA causes a hypochromic effect and a red shift in the UV spectrum of carbofuran. The quenching process was shown to be single static quenching, and the quenching constant decreased with increasing temperature. The basis of this specificity is intercalation of the insecticide between base pairs to produce ct DNA-carbofuran adducts. Furthermore, ethanol can produce a Franck-Condon effect on the ct DNA-carbofuran adducts. At different sodium chloride concentrations, the quenching constant showed no significant change, suggesting that there was little electrostatic interaction between ct DNA and carbofuran and that the binding was intercalative.
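For background (this is the standard fluorescence-quenching analysis, not something stated in the abstract itself), a quenching constant of the kind referred to above is typically extracted from the Stern-Volmer relation:

```latex
\frac{F_0}{F} = 1 + K_{SV}[Q]
```

where $F_0$ and $F$ are the fluorescence intensities without and with quencher, $[Q]$ is the quencher concentration, and $K_{SV}$ is the Stern-Volmer quenching constant. A $K_{SV}$ that decreases as temperature rises is the classical signature of static (complex-forming) quenching, consistent with the adduct formation reported here.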
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Q:
cin skips over the next cin only when a non-numeric value is entered
The following code works as intended when it is given 2 integers, however if a non numeric value (like 'a') is given, it skips the second cin.
int num1; // lesser integer value input by user
int num2; // greater integer value input by user
cout << "\n\nNumber 1: ";
cin >> num1;
cout << "Number 2: ";
cin >> num2;
if (!cin)
{
cout <<"\nError" <<endl;
return 0;
}
When a number is entered for the first prompt the program carries on; however, if something like a is entered instead, it skips the second prompt and hits the error condition.
A:
When the formatted input operator >> fails (like for example you give a as input when a number was expected) the input in the buffer is not removed, it's still there for the next time you want to read input (which will attempt to read the very same a again).
The flags are also not cleared automatically.
You could solve this by checking when you read the input:
if (!(std::cin >> num1))
{
// Failure of some kind
if (std::cin.eof())
{
// End of file, handle this any way you like or need
}
else
{
// Not end-of-file
        std::cin.clear(); // Clear the error flags first; ignore() would do nothing while failbit is set
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // Skip bad input
}
}
References:
operator>>
ignore
clear
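A self-contained variant of the same recovery logic (the function name read_int is mine, and std::istringstream stands in for std::cin so the behaviour can be demonstrated non-interactively):

```cpp
#include <iostream>
#include <limits>
#include <sstream>

// Read one int from `in`, discarding any bad token and retrying.
// Returns false only if the stream runs out of input entirely.
bool read_int(std::istream& in, int& value) {
    while (!(in >> value)) {
        if (in.eof())
            return false;                  // nothing left to retry with
        in.clear();                        // clear failbit first...
        in.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // ...then skip the bad line
    }
    return true;
}
```

With std::cin in place of the stringstream this loops until the user types a valid number, which is exactly the fix for the skipped second prompt in the question.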
|
{
"pile_set_name": "StackExchange"
}
|
1981 Central African presidential election
Presidential elections were held in the Central African Republic on 15 March 1981. They were the first national elections of any sort since 1964, the first elections since the overthrow of longtime ruler Jean-Bédel Bokassa in 1979, and the first multiparty presidential elections since independence. Five candidates—David Dacko, Ange-Félix Patassé, François Pehoua, Henri Maïdou and Abel Goumba—contested the election.
The elections were won by Dacko, who had been restored to the presidency by Operation Barracuda, which overthrew Emperor Bokassa I (Jean-Bédel Bokassa). Dacko presented himself as the heir of Barthélemy Boganda, the national hero who founded the country.
Results
References
Central African Republic
Category:Presidential elections in the Central African Republic
Category:1981 in the Central African Republic
---
author:
- 'D. M. Edwards'
- 'F. Federici'
title: 'Current-driven switching of magnetisation: theory and experiment'
---
Introduction {#intro}
============
Recently there has been a lot of interest in magnetic nanopillars of 10-100 nm in diameter. The pillar is a metallic layered structure with two ferromagnetic layers, usually of cobalt, separated by a non-magnetic spacer layer, normally of copper. Non-magnetic leads are attached to the magnetic layers so that an electric current may be passed through the structure. In the simplest case the pillar may exist in two states, with the magnetisation of the two magnetic layers parallel or anti-parallel. The state of a pillar can be read by measuring its resistance, this being smaller in the parallel state than in the anti-parallel one. This dependence of the resistance on magnetic configuration is the giant magnetoresistance (GMR) effect [@1]. A dense array of these nanopillars could form a magnetic memory for a computer. Normally one of the magnetic layers in a pillar is relatively thick and its magnetisation direction is fixed. In order to write into the memory the magnetisation direction of the second thinner layer must be switched. This might be achieved by a local magnetic field of suitable strength and methods have been proposed [@2] for providing such a local field by currents in a criss-cross array of conducting strips. However an alternative, and potentially more efficient method, proposed by Slonczewski [@3] makes use of a current passing up the pillar itself. Slonczewski’s effect relies on “spin transfer” and not on the magnetic field produced by the current which in the nanopillar geometry is ineffective. The idea of spin-transfer is as follows. In a ferromagnet there are more electrons of one spin orientation than of the other so that current passing through the thick magnetic layer (the polarising magnet) becomes spin-polarised. In general its state of spin-polarisation changes as it passes through the second (switching) magnet so that spin angular momentum is transferred to the switching magnet. 
This transfer of spin angular momentum is called spin-transfer torque and, if the current exceeds a critical value, it may be sufficient to switch the direction of magnetisation of the switching magnet. This is called current-induced switching.
In the next section we show how to calculate the spin-transfer torque for a simple model.
Spin-transfer torque in a simple model {#due}
======================================
For simplicity we consider a structure of the type shown in Fig. \[fig1\], where [**p**]{} and [**m**]{} are unit vectors in the direction of the magnetisations. This models the layered structure of the pillars used in experiments but the atomic planes shown are considered to be unbounded instead of having the finite cross-section of the pillar. This means that there is translational symmetry in the $x$ and $z$ directions. The structure consists of a thick (semi-infinite) left magnetic layer (polarising magnet), a non-magnetic metallic spacer layer, a thin second magnet (switching magnet) and a semi-infinite non-magnetic lead. In the simplest model we assume the atoms form a simple cubic lattice, with lattice constant $a$, and we adopt a one-band tight-binding model with hopping Hamiltonian
$$H_0=t\sum_{{\bf k}_{\parallel}\sigma}
\sum_{n}c^{\dagger}_{k_{\parallel} n\sigma} c_{k_{\parallel}
n-1\sigma}\,\, +\,\, {\rm h. c.}. \label{hamiltonian}$$
Here $c^{\dagger}_{k_{\parallel} n\sigma}$ creates an electron on plane $n$ with two-dimensional wave-vector ${\bf k}_{\parallel}$ and spin $\sigma$, and $t$ is the nearest-neighbour hopping integral.
In the tight-binding description the operator for spin angular momentum current between planes $n-1$ and $n$, which we require to calculate spin-transfer torque, is given by $${\bf j}_{n-1}=-\frac{{\rm i}t}2\sum_{{\bf
k}_{\parallel}}\left(c^{\dagger}_{k_{\parallel},n,\uparrow},
c^{\dagger}_{k_{\parallel},n,\downarrow}\right){\bm\sigma}
\left(c_{k_{\parallel} ,n-1,\uparrow}, c_{k_{\parallel}
,n-1,\downarrow} \right)^{\rm T} +\,\, {\rm h. c.}. \label{gamma}$$ Here ${\bm\sigma}=\left(\sigma_x,\sigma_y,\sigma_z\right)$ where the components are Pauli matrices. Eq. (\[gamma\]) yields the charge current $j_{n-1}^{\rm c}$ if $\frac12{\bm\sigma}$ is replaced by a unit matrix multiplied by the number $e/\hbar$, where $e$ is the electronic charge (negative). All currents flow in the $y$ direction, perpendicular to the layers, and the components of the vector ${\bf j}$ correspond to transport of $x$, $y$ and $z$ components of spin. The justification of Eq. (\[gamma\]) for ${\bf j}_{n-1}$ relies on an equation of continuity, as pointed out in Section \[quattro\].
To define the present model completely we must supplement the hopping Hamiltonian $H_0$ by specifying the on-site potentials in the various layers. For simplicity we assume the on-site potential for both spins in non-magnetic layers, and for majority spin in ferromagnetic layers, is zero. We assume an infinite exchange splitting in the ferromagnets so that the minority spin potential in these layers is infinite. Thus minority spin electrons are completely excluded from the ferromagnets. Clearly the definitions of majority and minority spin relate to spin quantisation along the direction of the local magnetisation. We take $\alpha=0$, so that the magnetisation of the switching magnet is in the $z$ direction, and take $\theta=\psi$, where $\psi$ is the angle between the magnetisations.
To describe spin transport in the structure we adopt the generalised Landauer approach of Waintal [*et al.*]{} [@4]. Thus the structure is placed between two reservoirs, one on the left and one on the right, with electron distributions characterised by Fermi functions $f(\omega-\mu_{\rm L})$, $f(\omega-\mu_{\rm R})$ respectively. The system is then subject to a bias voltage $V_{\rm b}$ given by $eV_{\rm b}=\mu_{\rm L}-\mu_{\rm R}$, the difference between the chemical potentials. We discuss the ballistic limit where scattering occurs only at interfaces, the effect of impurities being negligible. We label the atomic planes so that $n=0$ corresponds to the last atomic plane of the polarising magnet. The planes of the spacer layer correspond to $n=1,2,\ldots,N$ and $n=N+1$ is the first plane of the switching magnet.
Consider first an electron incident from the left with wave-function $|k,maj\rangle$, where $k>0$, which corresponds to a Bloch wave $|k\rangle = \sum_{n}{\rm e}^{{\rm i}kna}|{\bf
k}_{\parallel}n\rangle$ with majority spin in the polarising magnet. In this notation the label ${\bf k}_{\parallel}$ is suppressed. The particle is partially reflected by the structure and finally emerges as a partially transmitted wave in the lead, with spin $\uparrow$ corresponding to majority spin in the switching magnet. Thus the wave-function is of the form $$|P_k\rangle =|k,maj\rangle + B|-k,maj\rangle
\label{p}$$ in the polariser and $$|L_k\rangle =F|k,\uparrow\rangle
\label{p2}$$ in the lead. A majority spin in either ferromagnet enters or leaves the spacer without scattering, since in our simple model there is no potential step. Also the minority spin wave-function entering a ferromagnet is zero. The spacer wave-function may therefore be written in two ways: $$|S_k\rangle =F|k,\uparrow\rangle+E \left({\rm e}^{-{\rm
i}k(N+1)a}|k,\downarrow\rangle- {\rm e}^{{\rm
i}k(N+1)a}|-k,\downarrow\rangle \right)
\label{pp}$$ or $$\begin{aligned}
\label{ppp}
|S_k\rangle &=& |k,maj\rangle+B|-k,maj\rangle+D\left(|k,min\rangle-
|-k,min\rangle\right)\nonumber\\
&=& \cos\left(\psi/2\right)|k,\uparrow\rangle+\sin\left(\psi/2\right)
|k,\downarrow\rangle
+B\left[\cos\left(\psi/2\right)|-k,\uparrow\rangle+\sin\left(\psi/2\right)|-k,\downarrow\rangle\right]\\
&&+D\left[-\sin\left(\psi/2\right)|k,\uparrow\rangle+\cos\left(\psi/2\right)
|k,\downarrow\rangle+\sin\left(\psi/2\right)
|-k,\uparrow\rangle-\cos\left(\psi/2\right)|-k,\downarrow\rangle\right].\nonumber\end{aligned}$$ On equating coefficients of $|k,\uparrow\rangle$, $|k,\downarrow\rangle$, $|-k,\uparrow\rangle$, $|-k,\downarrow\rangle$ in expressions (\[pp\]) and (\[ppp\]) we have four equations which may be solved for $B$, $D$, $E$, $F$. In particular the transmission coefficient $T$ is given by $$\label{T}
T=\left|F\right|^2=\frac{4\cos^2(\psi/2)\sin^2
k(N+1)a}{\sin^4(\psi/2)+
4\cos^2(\psi/2)\sin^2k(N+1)a}.$$ Similarly an electron incident from the right with wave-function $|-k\uparrow\rangle$ in the lead is partially reflected and finally emerges as a partially transmitted wave $F^{\prime}|-k,maj\rangle$ in the polarising magnet. It is found that $F^{\prime}=F$ so that the transmission coefficient is the same for particles from left or right.
The spin angular momentum current in a particular layer, which we shall denote by S although it need not be the spacer layer, is the sum of currents carried by left and right moving electrons. Thus we have a Landauer-type formula [@5] $$\label{landau}
{\bf j}_{\rm s}=\frac{a}{2\pi}\sum_{{\bf k }_{\parallel}}
\left\{\int_{k>0}{\rm d}k\left[\langle S_k|{\bf
j}_{n-1}|S_k\rangle f(\omega-\mu_L)+\langle S_{-k}|{\bf
j}_{n-1}|S_{-k}\rangle f(\omega-\mu_{R})\right]\right\}$$ where $|S_k\rangle$, $|S_{-k}\rangle$ are wave-functions in the layer considered corresponding to electrons incident from left and right, respectively. Here $\omega$, the energy of the Bloch wave $k$, is given by the tight-binding formula $$\label{omega}
\omega=u_{{\bf k}_{\parallel}}+2t\cos ka$$ where $u_{{\bf k}_{\parallel}}=2t(\cos k_xa+\cos k_za)$. We take $t<0$ so that positive $k$ corresponds to positive velocity $\hbar^{-1}\partial\omega/\partial k$ as we have assumed. The current ${\bf j}_{\rm s}$ in layer $S$ calculated by Eq. (\[landau\]) does not depend on the particular planes $n-1$, $n$ between which it is calculated. On changing the integration variable in Eq. (\[landau\]) we find $$\label{landau2} {\bf j}_{\rm s}=\frac{1}{2\pi} \sum_{{\bf
k}_{\parallel}}\int{\rm d}\omega\left[{\bf J}_{+}f(\omega-\mu_{\rm
L})+{\bf J}_{-}f(\omega-\mu_{\rm R})
\right]$$ where $$\label{landau3}
{\bf J}_{\pm}=\frac{\langle S_{\pm k}|{\bf j}_{n-1}|S_{\pm
k}\rangle}{-2t\sin ka}.$$ Here $k=k(\omega,{\bf k}_{\parallel})$ is the positive root of Eq. (\[omega\]). Eq. (\[landau2\]) may be written as $$\label{landau33}
{\bf j}_{\rm s}=\frac{1}{4\pi}\sum_{{\bf k }_{\parallel}}
\int{\rm d}\omega\left\{
\left({\bf J}_{+}+{\bf J}_{-}\right)
\left[f(\omega-\mu_{\rm L})+f(\omega-\mu_{\rm R})\right]+
\left({\bf J}_{+}-{\bf J}_{-}\right)
\left[f(\omega-\mu_{\rm L})-f(\omega-\mu_{\rm R})\right]
\right\}.$$
Before discussing this spin current we briefly consider the charge current $j^{\rm c}$, and we denote the analogues of ${\bf J}_{\pm}$ by $J_{\pm}^{\rm c}$. Since the charge current is conserved throughout the structure $J_{+}^{\rm c}$ and $J_{-}^{\rm c}$ can be calculated in different ways, [*e.g.*]{} in the lead for $J_{+}^{\rm c}$ and in the polariser for $J_{-}^{\rm c}$. Since $T=\left|F\right|^2=\left|F^{\prime}\right|^2$ we find $J_{+}^{\rm
c}+J_{-}^{\rm c}=0$ and for small bias $eV_{b}=\mu_{\rm L}-\mu_{\rm R}$ the charge current is given by $$\label{current}
j^{\rm c}=\frac{2e^2V_{\rm b}}{h}\sum_{{\bf
k}_{\parallel}}T$$ where the transmission coefficient $T$ is given by Eq. (\[T\]) with $k=k(\mu,{\bf k}_{\parallel})$, $\mu$ being the common chemical potential as $V_{\rm b}\rightarrow 0$. This is the well-known Landauer formula [@5].
The spin transfer torque on the switching magnet is given by $$\label{torque}
{\bf T}^{\rm s-t}=\langle{\bf j}_{\rm spacer}\rangle-
\langle{\bf j}_{\rm lead}\rangle,$$ where $\langle{\bf j}_{\rm spacer}\rangle$ and $\langle{\bf j}_{\rm
lead}\rangle$ are spin currents in the spacer and lead respectively. For zero bias ($\mu_{\rm L}=\mu_{\rm
R}$) there is clearly no charge current in the structure and a straightforward calculation shows that all components of spin current in the spacer and the lead vanish, except for a non-zero $y$-spin current in the spacer. There is therefore a non-zero $y$ component of spin-transfer torque acting on the switching magnet for zero bias, and its dependence on the angle $\psi$ between the magnetisations is found to be approximately $\sin\psi$. This torque is due to exchange coupling, analogous to an RKKY coupling, between the two magnetic layers. This coupling oscillates as a function of spacer thickness and tends to zero as the thickness tends to infinity. For finite bias $V_{\rm b}$ the second term in the integrand of Eq. (\[landau33\]) comes into play. In general this leads to finite $x$ and $y$ components of ${\bf T}^{\rm s-t}$ proportional to $V_{\rm b}$ (for small $V_{\rm b}$) whereas $T_{z}^{\rm s-t}=0$. However for the special model considered here with infinite exchange splitting in both ferromagnets it turns out that $T_y^{\rm s-t}=0$. For this model the only non-zero component of ${\bf T}^{\rm s-t}$ proportional to $V_{\rm b}$ is found to be $$\label{torquex}
T_x^{\rm s-t}=\frac{\hbar j^{\rm c}}{2|e|}\tan\frac{\psi}{2}$$ where $j^{\rm c}$ is the charge current given by Eq. (\[current\]).
Slonczewski [@3] originally obtained this result for the analogous parabolic band model. From Eqs. (\[torquex\]), (\[current\]) and (\[T\]) it follows that $T_{x}^{\rm s-t}$ contains an important factor $\sin\psi$ although this does not represent the whole angle dependence. Clearly, from Eq. (\[torquex\]), the torque proportional to bias remains finite for arbitrarily large spacer thickness, in the ballistic limit. For this model, with infinite exchange splitting, the torque is independent of the thickness of the switching magnet.
From the results of this simple model we can infer a general form of the spin-transfer torque ${\bf T}^{\rm s-t}$ which is independent of the choice of coordinate axes. Thus we write $$\label{torqueperp}
{\bf T}^{\rm s-t}={\bf T}_{\perp}+{\bf T}_{\parallel}$$ where $$\begin{aligned}
\label{torque2} {\bf T}_{\perp}&=&\left(g^{\rm ex}+g_{\perp} e
V_{\rm b}\right)({\bf
m}\times{\bf p}) \nonumber\\
{\bf T}_{\parallel}&=&g_{\parallel} e V_{\rm b}{\bf m}\times({\bf
p}\times{\bf m}).\end{aligned}$$ With the choice of axes in Fig. \[fig1\] ${\bf T}_{\parallel}$ corresponds to the $x$ component of torque, that is the component parallel to the plane containing the magnetisation directions ${\bf
m}$ and ${\bf p}$. Similarly ${\bf T}_{\perp}$ corresponds to the $y$ component of torque, this being perpendicular to the plane of ${\bf m}$ and ${\bf p}$. The modulus of both the vectors ${\bf
m}\times{\bf p}$ and ${\bf m}\times({\bf p}\times{\bf m})$ is $\sin\psi$, so that the factors $g^{\rm ex}$, $g_{\perp}$ and $g_{\parallel}$ are functions of $\psi$ which contain deviations from the simple $\sin\psi$ behaviour. The bias-independent term $g^{\rm ex}$ corresponds to the interlayer exchange coupling, as discussed above, and henceforth we assume that the spacer is thick enough for this term to be negligible. Sometimes the $\sin\psi$ factor accounts for most of the angular dependence of $T_{\perp}$ and $T_{\parallel}$ so that $g_{\perp}$ and $g_{\parallel}$ may be regarded as constant parameters for the given structure. In the next section we use Eqs. (\[torque2\]) for the spin-transfer torque in a phenomenological theory of current-induced switching of magnetisation. This phenomenological treatment enables us to understand most of the available experimental data. It is more usual in experimental works to relate spin-transfer torque to current rather than bias. However in theoretical work, based on the Landauer or Keldysh approach, bias is more natural. In practice the resistance of the system considered is rather constant (the GMR ratio is only a few percent) so that bias and current are in a constant ratio.
Phenomenological treatment of current-induced switching of magnetisation {#tre}
========================================================================
In this section we explore the consequences of the spin-transfer torque acting on a switching magnet using a phenomenological Landau-Lifshitz equation with Gilbert damping (LLG equation). This is essentially a generalisation of the approach used originally by Slonczewski [@3] and Sun [@sun]. We assume that there is a polarising magnet whose magnetisation is pinned in the $xz$-plane in the direction of a unit vector ${\bf p}$, which is at a general fixed angle $\theta$ to the $z$-axis as shown in Fig. \[fig1\].
![Schematic picture of a magnetic layer structure for current-induced switching (magnetic layers are darker, non-magnetic layers lighter).[]{data-label="fig1"}](Fig1.eps){width="11cm"}
The pinning of the magnetisation of the polarising magnet can be due to its large coercivity (thick magnet) or a strong uniaxial anisotropy. The role of the polarising magnet is to produce a stream of spin-polarised electrons, [*i.e.*]{} spin current, that is going to exert a torque on the magnetisation of the switching magnet whose magnetisation lies in the general direction of a unit vector ${\bf
m}$. The orientation of the vector ${\bf m}$ is defined by the polar angles $\alpha$, $\phi$ shown in Fig. \[fig1\]. There is a non-magnetic metallic layer inserted between the two magnets whose role is merely to separate magnetically the two magnetic layers and allow a strong charge current to pass. The total thickness of the whole trilayer sandwiched between two non-magnetic leads must be smaller than the spin diffusion length $l_{\rm sf}$ so that there are no spin flips due to impurities or spin-orbit coupling. A typical junction in which current-induced switching is studied experimentally [@albert] is shown schematically in Fig. \[fig2\]. The thickness of the polarising magnet is 40 nm, that of the switching magnet 2.5 nm and the non-magnetic spacer is 6 nm thick. The materials for the two magnets and the spacer are cobalt and copper, respectively, which are those most commonly used. The junction cross section is oval-shaped with dimensions 60 nm$\times$130 nm. A small diameter is necessary so that the torque due to the Oersted field generated by a charge current of $10^{7}$-$10^8$ A/cm$^2$, required for current-induced switching, is much smaller than the spin-transfer torque we are interested in.
The aim of most experiments is to determine the orientation of the switching magnet moment as a function of the current (applied bias) in the junction. Sudden jumps of the magnetisation direction, [*i.e.*]{} current-induced switching, are of particular interest. The orientation of the switching magnet moment ${\bf m}$ relative to that of the polarising magnet ${\bf p}$, which is fixed, is determined by measuring the resistance of the junction. Because of the GMR effect, the resistance of the junction is higher when the magnetisations of the two magnets are anti-parallel than when they are parallel. In other words, what is observed are hysteresis loops of resistance versus current. A typical experimental hysteresis loop of this type [@16] is reproduced in Fig. \[fig3\].
![Resistance vs current hysteresis loop (after Grollier [*et al.*]{} [@16]). []{data-label="fig3"}](Fig3.eps){width="8cm"}
It can be seen from Fig. \[fig3\] that, for any given current, the switching magnet moment is stationary (the junction resistance has a well-defined value), [*i.e.*]{} the system is in a steady state. This holds everywhere on the hysteresis loop except for the two discontinuities where current-induced switching occurs. As indicated by the arrows, jumps from the parallel (P) to anti-parallel (AP) configurations of the magnetisation, and from AP to P configurations, occur at different currents. It follows that in order to interpret experiments which exhibit such hysteresis behaviour, the first task of the theory is to determine from the LLG equation all the possible states and then investigate their dynamical stability. At the point of instability the system seeks out a new steady state, [*i.e.*]{} a discontinuous transition to a new steady state with the switched magnetisation occurs. We have tacitly assumed that there is always a steady state available for the system to jump to. There is now experimental evidence that this is not always the case. In the absence of any stable steady state the switching magnet moment remains permanently in a time-dependent state. This interesting case is implicit in the phenomenological LLG treatment and we shall discuss it in detail later.
In describing the switching magnet by a unique unit vector ${\bf m}$, we assume that it remains uniformly magnetised during the switching process. This is only strictly true when the exchange stiffness of the switching magnet is infinitely large. It is generally a good approximation as long as the switching magnet is small enough to remain single-domain, so that the switching occurs purely by rotation of the magnetisation as in the Stoner-Wohlfarth theory [@17] of field switching. This seems to be the case in many experiments [@albert; @16; @18; @19].
Before we can apply the LLG equation to study the time evolution of the unit vector ${\bf m}$ in the direction of the magnetisation of the switching magnet, we need to determine all the contributions to the torque acting on the switching magnet. Firstly, there is the spin-transfer torque ${\bf T}^{\rm s-t}$ which we discussed in Section \[due\]. Secondly, there is a torque due to the uniaxial in-plane and easy plane (shape) anisotropies. The easy-plane shape anisotropy torque arises because the switching magnet is a thin layer typically only a few nanometers thick. The in-plane uniaxial anisotropy is usually also a shape anisotropy arising from an elongated cross section of the switching magnet [@albert]. We take the uniaxial anisotropy axis of the switching magnet to be parallel to the $z$-axis of the coordinate system shown in Fig. \[fig1\]. Since the switching magnet lies in the $xz$-plane, we can write the total anisotropy field as $$\label{hh}
{\bf H}_{\rm A}={\bf H}_{\rm u}+{\bf H}_{\rm p}$$ where ${\bf H}_{\rm u}$ and ${\bf H}_{\rm p}$ are given by $$\label{hu}
{\bf H}_{\rm u}= H_{\rm u0}({\bf m}\cdot{\bf e}_{z}){\bf e}_{z},$$
$$\label{hp}
{\bf H}_{\rm p}=-H_{\rm p0}({\bf m}\cdot{\bf e}_{y}){\bf e}_{y}.$$
Here ${\bf e}_x$, ${\bf e}_y$ and ${\bf e}_z$ are unit vectors in the directions of the axes shown in Fig. \[fig1\]. If we write the energy of the switching magnet in the anisotropy field as $-{\bf H}_{\rm
A}\cdot\langle{\bf S}_{\rm tot}\rangle$, where $\langle{\bf S}_{\rm
tot}\rangle$ is the total spin angular momentum of the switching magnet, then $H_{\rm u0}$, $H_{\rm p0}$, which measure the strengths of the uniaxial and easy-plane anisotropies, have dimensions of frequency. These quantities may be converted to a field in tesla by multiplying by $\hbar/2\mu_{\rm
B}=5.69\times 10^{-12}$ T s.
We are now ready to study the time evolution of the unit vector ${\bf
m}$ in the direction of the switching magnet moment. The LLG equation takes the usual form $$\label{llg}
\frac{{\rm d}{\bf m}}{{\rm d}t}+\gamma{\bf m}\times\frac{{\rm d}{\bf m}}{{\rm d}t}={\bm\Gamma}$$ where the reduced total torque ${\bm\Gamma}$ acting on the switching magnet is given by $$\label{Gamma}
{\bm\Gamma}=\left[-\left({\bf H}_{\rm A}+{\bf H}_{\rm
ext}\right)\times\langle{\bf S}_{\rm tot}\rangle+{\bf
T}_{\perp}+{\bf T}_{\parallel}\right]/
\left|\langle {\bf S}_{\rm tot}\rangle\right|.$$ Here ${\bf H}_{\rm ext}$ is an external field, in the same frequency units as ${\bf H}_{\rm A}$, and $\gamma$ is the Gilbert damping parameter. Following Sun [@sun], Eq. (\[llg\]) may be written more conveniently as $$\label{Gamma2}
\left(1+\gamma^2\right)\frac{{\rm d}{\bf m}}{{\rm
d}t}={\bm\Gamma}-\gamma{\bf m}\times{\bm\Gamma}.$$ It is also useful to measure the strengths of all the torques in units of the strength of the uniaxial anisotropy [@sun]. We shall, therefore, write the total reduced torque ${\bm\Gamma}$ in the form $$\label{Gamma3}
{\bm\Gamma}=H_{\rm u0}\left\{({\bf m}\cdot{\bf e}_{z}){\bf m}\times{\bf e}_z-
h_{\rm p}({\bf m}\cdot{\bf e}_{y}){\bf m}\times
{\bf e}_y+v_{\parallel}(\psi)
{\bf m}\times({\bf p}\times{\bf m})+\left[v_{\perp}(\psi)+h_{\rm ext}\right]
{\bf m}\times{\bf p}\right\}$$ where the relative strength of the easy plane anisotropy $h_{\rm
p}=H_{\rm p0}/H_{\rm u0}$ and $v_{\parallel}(\psi)=vg_{\parallel}(\psi)$, $v_{\perp}(\psi)=vg_{\perp}(\psi)$ measure the strengths of the torques ${\bf T}_{\parallel}$ and ${\bf T}_{\perp}$. The reduced bias is defined by $v=eV_{\rm b}/(|\langle {\bf S}_{\rm
tot}\rangle|H_{\rm u0})$ and has the opposite sign from the bias voltage since $e$ is negative. Thus positive $v$ implies a flow of electrons from the polarising to the switching magnet. The last contribution to the torque in Eq. (\[Gamma3\]) is due to the external field $H_{\rm ext}$ with $h_{\rm ext}=H_{\rm ext}/H_{\rm
u0}$. The external field is taken in the direction of the magnetisation of the polarising magnet, as is the case in most experimental situations.
It follows from Eq. (\[llg\]) that in a steady state ${\bm\Gamma}=0$. We shall first consider some cases of experimental importance where the steady state solutions are trivial and the important physics is concerned entirely with their stability. To discuss stability, we linearise Eq. (\[Gamma2\]), using Eq. (\[Gamma3\]), about a steady state solution ${\bf m}={\bf m}_{0}$. Thus $$\label{Gamma4}
{\bf m}={\bf m}_0+\xi{\bf e}_{\alpha}+\eta{\bf e}_{\phi},$$ where ${\bf e}_{\alpha}$, ${\bf e}_{\phi}$ are unit vectors in the direction ${\bf m}$ moves when $\alpha$ and $\phi$ are increased independently. The linearised equation may be written in the form $$\label{Gamma5}
\frac{{\rm d}\xi}{{\rm d}\tau}=A\xi+B\eta;\quad
\frac{{\rm d}\eta}{{\rm d}\tau}=C\xi+D\eta.$$ Following Sun [@sun], we have introduced the natural dimensionless time variable $\tau=tH_{\rm u0}/(1+\gamma^2)$. The conditions for the steady state to be stable are $$\label{Gamma6}
F=A+D\leqslant 0;\quad G=AD-BC\geqslant 0$$ excluding $F=G=0$ [@20]. For simplicity we give these conditions explicitly only for the case where either $v_{\parallel}^{\prime}(\psi_0)=v_{\perp}^{\prime}(\psi_0)=0$, with $\psi_0=\cos^{-1}({\bf p}\cdot{\bf m}_0)$, or ${\bf m}_0=\pm{\bf p}$. The case ${\bf m}_0=\pm{\bf p}$ is very common experimentally as is discussed below. The stability condition $G\geqslant 0$ may be written $$\begin{aligned}
\label{stab}
&&Q^2v_{\parallel}^2+(Qh+\cos 2\alpha_0)(Qh+\cos^2\alpha_0)+h_{\rm
p}\left\{Qh(1-3\sin^2\phi_0\sin^2\alpha_0)+\cos 2\alpha_0
(1-2\sin^2\alpha_0\sin^2\phi_0)\right\}-\nonumber\\
&&h_{\rm
p}^2\sin^2\alpha_0\sin^2\phi_0(1-2\sin^2\phi_0\sin^2\alpha_0)
\geqslant 0,\end{aligned}$$ where $v_{\parallel}=v_{\parallel}(\psi_0)$, $h=v_{\perp}(\psi_0)+h_{\rm ext}$ and $Q=\cos\psi_0$. The condition $F\leqslant 0$ takes the form $$\label{F0}
-2(v_{\parallel}+\gamma h)Q-\gamma(\cos
2\alpha_0+\cos^2\alpha_0)-\gamma h_{\rm
p}(1-3\sin^2\phi_0\sin^2\alpha_0)
\leqslant 0.$$
We now discuss several interesting examples, the first of these relating to experiments of Grollier [*et al.*]{} [@18] and others. In these experiments the magnetisation of the polarising magnet, the uniaxial anisotropy axis and the external field are all collinear (along the in-plane $z$-axis in our convention). In this case the equation ${\bm\Gamma}=0$, with ${\bm\Gamma}$ given by Eq. (\[Gamma3\]), shows immediately that possible steady states are given by ${\bf m}_0=\pm{\bf p}(\alpha_0=0,\pi)$, corresponding to the switching magnet moment along the $z$-axis. These are the only solutions when $h_{\rm p}=0$. For $h_{\rm p}\neq 0$ other steady-state solutions may exist but in the parameter regime which has been investigated they are always unstable [@13]. We shall assume this is always the case and concentrate on the solutions ${\bf m}_0=\pm{\bf
p}$. In the state of parallel magnetisation (P) ${\bf m}_0={\bf p}$ we have $v_{\parallel}=vg_{\parallel}(0)$, $h=vg_{\perp}(0)+h_{\rm
ext}$, $\alpha_0=0$ and $Q=1$. The stability conditions (\[stab\]) and (\[F0\]) become $$\label{stab2}
\left[g_{\parallel}(0)\right]^2v^2+(vg_{\perp}(0)+h_{\rm
ext}+1)^2+h_{\rm p}\left[vg_{\perp}(0)+h_{\rm
ext}+1\right]\geqslant 0$$
$$\label{F0-2}
g_{\parallel}(0)v+\gamma\left[vg_{\perp}(0)+h_{\rm
ext}+1+\frac12 h_{\rm p}\right]\geqslant 0.$$
In the state of anti-parallel magnetisation (AP) ${\bf m}_0=-{\bf p}$ we have $v_{\parallel}=vg_{\parallel}(\pi)$, $h=vg_{\perp}(\pi)+h_{\rm
ext}$, $\alpha_0=\pi$ and $Q=-1$. The stability conditions for the AP state are thus $$\label{stab2AP}
\left[g_{\parallel}(\pi)\right]^2v^2+(-vg_{\perp}(\pi)-h_{\rm
ext}+1)^2+h_{\rm p}\left[-vg_{\perp}(\pi)-h_{\rm
ext}+1\right]\geqslant 0$$
$$\label{F0-2AP}
g_{\parallel}(\pi)v+\gamma\left[vg_{\perp}(\pi)+h_{\rm
ext}-1-\frac12 h_{\rm p}\right]\leqslant 0.$$
In the regime of low external field ($h_{\rm ext}\approx 1$, [*i.e.*]{} $H_{\rm ext}\approx H_{\rm u0}$) we have $H_{\rm p}\gg H_{\rm ext}$ ($h_{\rm p}\approx 100$). Eqs. (\[stab2\]) and (\[stab2AP\]) may then be approximated by $$\label{stab2AP-approx}
vg_{\perp}(0)+h_{\rm ext}+1>0$$
$$\label{F0-2AP-approx}
vg_{\perp}(\pi)+h_{\rm ext}-1<0.$$
Equation (\[stab2AP-approx\]) corresponds to P stability and (\[F0-2AP-approx\]) to AP stability. It is convenient to define scalar quantities $T_{\perp}$, $T_{\parallel}$ by $T_{\perp}=g_{\perp}(\psi)\sin\psi$, $T_{\parallel}=g_{\parallel}(\psi)\sin\psi$, these being scalar components of spin-transfer torque in units of $eV_{\rm b}$ (cf. Eq. (\[torque2\])). Then $g_i(0)=[{\rm d}T_i/{\rm
d}\psi]_{\psi=0}$ and $g_i(\pi)=-[{\rm d}T_i/{\rm
d}\psi]_{\psi=\pi}$ with $i=\perp,\parallel$. Model calculations [@13] show that both $g_{\perp}$ and $g_{\parallel}$ can be of either sign, although positive values are more common. Also there is no general rule about the relative magnitude of $g_i(0)$ and $g_i(\pi)$.
We now illustrate the consequences of the above stability conditions by considering two limiting cases. We first consider the case $g_{\perp}(\psi)=0$, $g_{\parallel}>0$, as assumed by Grollier [*et al.*]{} [@18] in the analysis of their data. In Fig. \[fig4\] we plot the regions of P and AP stability deduced from Eqs. (\[F0-2\]), (\[F0-2AP\])-(\[F0-2AP-approx\]), in the ($v$,$h_{\rm ext}$)-plane. Grollier [*et al.*]{} plot current instead of bias but this should not change the form of the figure. Theirs is rather more complicated, owing to a less transparent stability analysis with unnecessary approximation. The only approximations made above, to obtain Eqs. (\[stab2AP-approx\]) and (\[F0-2AP-approx\]), can easily be removed, which results in the critical field lines $h_{\rm
ext}=\pm1$ acquiring a very slight curvature given by $h_{\rm
ext}\approx 1+[vg_{\parallel}(\pi)]^2/h_{\rm p}$ and $h_{\rm
ext}\approx -1-[vg_{\parallel}(0)]^2/h_{\rm p}$. The critical biases in the figure are given by $$\begin{aligned}
\label{crbias}
v_{{\rm AP}\rightarrow{\rm P}}&=&\gamma \left[1+\frac12 h_{\rm p}-h_{\rm
ext}\right]/g_{\parallel}(\pi)\nonumber\\
v_{{\rm P}\rightarrow{\rm AP}}&=&-\gamma \left[1+\frac12 h_{\rm p}+h_{\rm
ext}\right]/g_{\parallel}(0).\end{aligned}$$ The corresponding lines in Fig. \[fig4\] in fact slope slightly downward from left to right, which is not shown there. Since the damping parameter $\gamma$ is small ($\gamma\approx 0.01$) this downward slope of the critical bias lines is also small. From Fig. \[fig4\] we can deduce the behaviour of resistance versus bias in the external field regimes $|h_{\rm ext}|<1$ and $|h_{\rm ext}|>1$.
![Bias-field stability diagram for $g_{\perp}(\psi)=0$, $g_{\parallel}(\psi)>0$. A small downward slope of the lines $V_{{\rm
AP}\rightarrow{\rm P}}$,$V_{{\rm P}\rightarrow{\rm AP}}$ (see Eq. (\[crbias\])) is not shown.[]{data-label="fig4"}](Fig4.eps){width="13cm"}
Consider first the case $|h_{\rm ext}|<1$. Suppose we start in the AP state with a bias $v=0$ which is gradually increased to $v_{{\rm AP}\rightarrow{\rm P}}$. At this point the AP state becomes unstable and the system switches to the P state as $v$ increases further. On reducing $v$ the hysteresis loop is completed via a switch back to the AP state at the negative bias $v_{{\rm P}\rightarrow{\rm AP}}$. The hysteresis loop is shown in Fig. \[fig5\](a). The increase in resistance $R$ between the P and AP states is the same as would be produced by varying the applied field in a GMR experiment.
![(a) Hysteresis loop of resistance vs bias for $|h_{\rm ext}|<1$; (b) Reversible behaviour (no hysteresis) for $h_{\rm ext}<-1$ (upper curve) and $h_{\rm ext}>1$ (lower curve). The dashed lines represent hypothetical behaviour of average resistance in regions of Fig. \[fig4\] marked “both unstable” where no steady states exist.[]{data-label="fig5"}](Fig5.eps){width="13cm"}
Now consider the case $h_{\rm ext}<-1$. Starting again in the AP state at $v=0$ we see from Fig. \[fig4\] that, on increasing $v$ to $v_{{\rm AP}\rightarrow{\rm P}}$, the AP state becomes unstable but there is no stable P state to switch to. This point is marked by an asterisk in Fig. \[fig5\](b). For $v>v_{{\rm AP}\rightarrow{\rm P}}$, the moment of the switching magnet is in a persistently time-dependent state. However, if $v$ is now decreased below $v_{{\rm P}\rightarrow{\rm AP}}$ the system homes in on the stable AP state and the overall behaviour is reversible, [*i.e.*]{} no switching and no hysteresis occur. When $h_{\rm
ext}>1$ similar behaviour, now involving the P state, occurs at negative bias, as shown in Fig. \[fig5\](b). The dashed curves in Fig. \[fig5\](b) show a hypothetical time-averaged resistance in the regions of time-dependent magnetisation. As discussed later, time-resolved measurements of resistance suggest that several different types of dynamics can occur in these regions.
It is clear from Fig. \[fig5\](a) that the jump AP$\rightarrow$P always occurs for positive bias $v$, which corresponds to flow of electrons from the polarising to the switching magnet. This result depends on the assumption that $g_{\parallel}>0$; if $g_{\parallel}<0$ it is easy to see that the sense of the hysteresis loop is reversed and the jump P$\rightarrow$AP occurs for positive $v$. To our knowledge this reverse jump has never been observed, although $g_{\parallel}<0$ can occur in principle and is predicted theoretically [@13] for the Co/Cu/Co(111) system with a switching magnet consisting of a single atomic plane of Co. It follows from Eq. (\[crbias\]) that $|v_{{\rm P}\rightarrow{\rm AP}}/v_{{\rm AP}\rightarrow{\rm
P}}|=|g_{\parallel}(\pi)/g_{\parallel}(0)|$ in zero external field. Experimentally this ratio, essentially the same as the ratio of critical currents, may be considerably less than 1 ([*e.g.*]{} $<0.5$ [@albert]), greater than 1 ([*e.g.*]{} $\approx 2$ [@19]) or close to 1 [@16]. Usually the field dependence of the critical current is found to be stronger than that predicted by Eq. (\[crbias\]) [@albert; @16].
We now discuss the reversible behaviour shown in Fig. \[fig5\](b) which occurs for $|h_{\rm ext}|>1$. The transition from hysteretic to reversible behaviour at a critical external field seems to have been first seen in pillar structures by Katine [*et al.*]{} [@21]. Curves similar to the lower one in Fig. \[fig5\](b) are reported with $|v_{{\rm P}\rightarrow{\rm AP}}|$ increasing with increasing $h_{\rm ext}$, as expected from Eq. (\[crbias\]). Plots of the differential resistance ${\rm d}V/{\rm d}I$ show a peak near the point of maximum gradient of the dashed curve. Similar behaviour has been reported by several groups [@22; @23; @24]. It is particularly clear in the work of Kiselev [*et al.*]{} [@22] that the transition from hysteretic behaviour (as in Fig. \[fig5\](a)) to reversible behaviour with peaks in ${\rm d}V/{\rm d}I$ occurs at the coercive field (600 Oe) of the switching layer ($h_{\rm ext}=1$). The important point about the peaks in ${\rm d}V/{\rm d}I$ is that for a given sign of $h_{\rm ext}$ they occur for only one sign of the bias. This clearly shows that the effect is due to spin transfer and not to Oersted fields. Myers [*et al.*]{} [@25] show a current-field stability diagram similar to the bias-field one of Fig. \[fig4\] with a critical field of 1500 Oe. They examine the time dependence of the resistance at room temperature with the field and current adjusted so that the system is in the “both unstable” region in the fourth quadrant of Fig. \[fig4\] but very close to its top left-hand corner. They observe telegraph-noise-type switching between approximately P and AP states with slow switching times in the range 0.1-10 s. Similar telegraph noise with faster switching times was observed by Urazhdin [*et al.*]{} [@23] at current and field close to a peak in ${\rm d}V/{\rm d}I$. In the region of P and AP instability Kiselev [*et al.*]{} [@22] and Pufall [*et al.*]{} [@24] report various types of dynamics of precessional and random telegraph switching type in the microwave GHz regime. Kiselev [*et al.*]{} [@22] propose that systems of the sort considered here might serve as nanoscale microwave sources or oscillators, tunable by current and field over a wide frequency range.
We now return to the stability conditions (\[F0-2\]),(\[F0-2AP\])-(\[F0-2AP-approx\]) and consider the case of $g_{\perp}(\psi)\neq 0$ but $h_{\rm ext}=0$. Remembering that $\gamma\ll 1$ and $h_{\rm p}\gg 1$, the conditions for stability of the P state may be written approximately as $$\label{vgperp}
vg_{\perp}(0)>-1, \quad vg_{\parallel}(0)>-\frac12 \gamma h_{\rm p}.$$ The conditions for stability of the AP state are $$\label{vgperpAP}
vg_{\perp}(\pi)<1, \quad vg_{\parallel}(\pi)<\frac12 \gamma h_{\rm p}.$$ In Fig. \[fig6\] we plot the regions of P and AP stability, assuming $g_{\perp}(0)=g_{\perp}(\pi)=g_{\perp}$ and $g_{\parallel}(0)=g_{\parallel}(\pi)=g_{\parallel}$ for simplicity. We also put $r=g_{\perp}/g_{\parallel}$.
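These inequalities are easy to explore numerically. In the sketch below $x=vg_{\parallel}$, the ratio $r=g_{\perp}/g_{\parallel}$ is taken $\psi$-independent as in the text, and the values of $\gamma$ and $h_{\rm p}$ are invented for illustration:

```python
gamma, h_p = 0.01, 200.0          # illustrative reduced parameters (gamma*h_p = 2)
r_c = -2.0 / (gamma * h_p)        # r at point X of Fig. 6: -1.0 here

def p_stable(x, r):
    """Eq. (vgperp) with x = v*g_par and g_perp = r*g_par (assumed)."""
    return x * r > -1.0 and x > -0.5 * gamma * h_p

def ap_stable(x, r):
    """Eq. (vgperpAP) with the same conventions."""
    return x * r < 1.0 and x < 0.5 * gamma * h_p

r = -0.5                           # r_c < r < 0, the case of Fig. 7(a)
print(p_stable(1.5, r), ap_stable(1.5, r))   # True False : system switches to P
print(p_stable(2.5, r), ap_stable(2.5, r))   # False False: "both unstable"
```

For $r_{\rm c}<r<0$ the two printed cases reproduce the sequence seen on increasing the bias: first a switch to the P state, then a region where both steady states are unstable.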
![Stability diagram for $h_{\rm ext}=0$.[]{data-label="fig6"}](Fig6.eps){width="10cm"}
For $r>0$ we find the normal hysteresis loop as in Fig. \[fig5\](a) if we plot $R$ against $vg_{\parallel}$ (valid for either sign of $g_{\parallel}$). In Fig. \[fig7\] we plot the hysteresis loops for the cases $r_{\rm c}<r<0$ and $r<r_{\rm c}$, where $r_{\rm c}=-2/(\gamma h_{\rm p})$ is the value of $r$ at the point $X$ in Fig. \[fig6\].
![Hysteresis loop for (a) $r_{\rm c}<r<0$; (b) $r<r_{\rm
c}$.[]{data-label="fig7"}](Fig7.eps){width="13cm"}
The points labelled by asterisks have the same significance as in Fig. \[fig5\](b). If in Fig. \[fig7\](a) we increase $vg_{\parallel}$ beyond its value indicated by the right-hand asterisk we move into the “both-unstable” region where the magnetisation direction of the switching magnet is perpetually in a time-dependent state. Thus negative $r$ introduces behaviour in zero applied field which is similar to that found when the applied field exceeds the coercive field of the switching magnet for $r=0$. This behaviour was predicted by Edwards [*et al.*]{} [@13], in particular for a Co/Cu/Co(111) system with the switching magnet consisting of a Co monolayer. Zimmler [*et al.*]{} [@26] use methods similar to the ones described here to analyse their data on a Co/Cu/Co nanopillar and deduce that $g_{\parallel}>0$, $r=g_{\perp}/g_{\parallel}\approx -0.2$. It would be interesting to carry out time-resolved resistance measurements on this system at large current density (corresponding to $vg_{\perp}<-1$) and zero external field.
So far we have considered the low-field regime ($H_{\rm ext}\approx$ coercive field of the switching magnet) with both magnetisations and the external field in-plane. There is another class of experiments in which a high field, greater than the demagnetising field ($>2$ T), is applied perpendicular to the plane of the layers. The magnetisation of the polarising magnet is then also perpendicular to the plane. This is the situation in the early experiments where a point contact was employed to inject high current densities into magnetic multilayers [@27; @28; @29]. In this high-field regime a peak in the differential resistance ${\rm d}V/{\rm d}I$ at a critical current was interpreted as the onset of current-induced excitation of spin waves in which the spin-transfer torque leads to uniform precession of the magnetisation [@6; @27; @28]. No hysteretic magnetisation reversal was observed and it seemed that the effect of spin-polarised current on the magnetisation is quite different in the low- and high-field regimes. Recently, however, Özyilmaz [*et al.*]{} [@30] have studied Co/Cu/Co nanopillars ($\approx 100$ nm in diameter) at $T=4.2$ K for large applied fields perpendicular to the layers. They observe hysteretic magnetisation reversal and interpret their results using the Landau-Lifshitz equation. We now give a similar discussion within the framework of this section.
Following Özyilmaz [*et al.*]{}, we neglect the uniaxial anisotropy term in Eq. (\[Gamma3\]) for the reduced torque ${\bm\Gamma}$ while retaining $H_{u0}$ as a scalar factor. Hence $$\label{gamma25}
{\bm\Gamma}=H_{u0}\left\{\left[h_{\rm
ext}+v_{\perp}(\psi)-h_{\rm
p}\cos\psi\right]{\bf m}\times{\bf p}+v_{\parallel}(\psi){\bf m}\times({\bf p}\times{\bf m})\right\}$$ where ${\bf p}$ is the unit vector perpendicular to the plane. When $v_{\parallel}(\psi)\neq 0$ the only possible steady-state solutions of ${\bm\Gamma} =0$ are ${\bf m}_0=\pm{\bf p}$. On linearizing Eq. (\[Gamma2\]) about ${\bf m}_0$ as before we find that the condition $G\geqslant
0$ is always satisfied. The second stability condition $F<0$ becomes $$\label{stabcond}
\left[v_{\parallel}(\psi_0)+\gamma(v_{\perp}(\psi_0)+h_{\rm
ext}-h_{\rm p})\right]\cos\psi_0>0$$ where $\psi_0=\cos^{-1}({\bf m}_0\cdot{\bf p})$. Applying this to the P state ($\psi_0=0$) and the AP state ($\psi_0=\pi$) we obtain the conditions $$\label{conditiona} v>\gamma(h_{\rm p}-h_{\rm ext})/g(0)$$ $$\label{conditionb}
v<-\gamma(h_{\rm p}+h_{\rm ext})/g(\pi),$$ where the first condition applies to the P stability and the second to the AP stability. Here $g(\psi)=g_{\parallel}(\psi)+\gamma
g_{\perp}(\psi)$. The corresponding stability diagram is shown in Fig. \[fig8\], where we have assumed $g(\pi)>g(0)>0$ for definiteness.
![Bias-field stability diagram for large external field ($h_{\rm ext}>h_{\rm p}$) perpendicular to the layers. []{data-label="fig8"}](Fig8.eps){width="11cm"}
The boundary lines cross at $h_{\rm ext}=h_{\rm c}$, where $h_{\rm
c}=h_{\rm p}[g(\pi)+g(0)]/[g(\pi)-g(0)]$. This analysis is only valid for fields larger than the demagnetising field ($h_{\rm
ext}>h_{\rm p}$) and we see from the figure that for $h_{\rm
ext}>h_{\rm c}$ hysteretic switching occurs. This takes place for only one sign of the bias (current) and the critical biases (currents) increase linearly with $h_{\rm ext}$ as does the width of the hysteresis loop $|v_{{\rm P}\rightarrow{\rm AP}}-v_{{\rm
AP}\rightarrow{\rm P}}|$. This accords with the observations of Özyilmaz [*et al.*]{}. The critical currents are not larger than those in the low-field or zero-field regimes (cf. Eqs. (\[conditiona\]), (\[conditionb\]) with Eq. (\[crbias\])) and yet the magnetisation of the switching magnet can be switched against a very large external field. However, in this case the AP state is only stabilised by maintaining the current.
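These relations can be checked numerically. In the sketch below every parameter is an invented illustrative value, chosen so that $g(\pi)>g(0)>0$ as assumed above:

```python
gamma, h_p = 0.01, 100.0        # illustrative reduced parameters
g0, gpi = 0.004, 0.008          # assumed values of g(0), g(pi), g(pi) > g(0) > 0

def v_p_min(h_ext):
    """Eq. (conditiona): the P state is stable for v above this threshold."""
    return gamma * (h_p - h_ext) / g0

def v_ap_max(h_ext):
    """Eq. (conditionb): the AP state is stable for v below this threshold."""
    return -gamma * (h_p + h_ext) / gpi

# The boundary lines cross at h_c = h_p [g(pi)+g(0)] / [g(pi)-g(0)]:
h_c = h_p * (gpi + g0) / (gpi - g0)
print(h_c)  # 300.0 for these numbers

def width(h_ext):
    """Hysteresis-loop width |v_P->AP - v_AP->P| for h_ext > h_c."""
    return abs(v_p_min(h_ext) - v_ap_max(h_ext))

print(width(400.0) - width(350.0), width(450.0) - width(400.0))  # equal increments
```

The width vanishes at $h_{\rm ext}=h_{\rm c}$ and the equal increments confirm the linear growth with $h_{\rm ext}$ described in the text.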
The experiments on spin transfer discussed above have mainly been carried out at constant temperature, typically $4.2$ K or room temperature. The effect of varying the temperature on current-driven switching has recently been studied by several groups [@23; @25; @31]. The standard Néel-Brown theory of thermal switching [@32] does not apply because the Slonczewski in-plane torque is not derivable from an energy function. Li and Zhang [@33] have generalised the standard stochastic Landau-Lifshitz equation, which includes white noise in the effective applied field, to include the spin-transfer torque. In this way they have successfully interpreted some of the experimental data. A full discussion of this work is outside the scope of the present review. However, it should be pointed out that in addition to the classical effect of white noise there is an intrinsic temperature dependence of quantum origin. This arises from the Fermi distribution functions which appear in expressions for the spin-transfer torque (see Eqs. (\[torque\]) and (\[landau33\])).
So far we have discussed steady-state solutions of the LLG equation (\[Gamma2\]). It is important to study the magnetisation dynamics of the switching layer, firstly during the jumps AP$\rightarrow$P and P$\rightarrow$AP of the hysteresis curve in zero external field, and secondly under conditions where only time-dependent solutions are possible, for example in the regions of sufficiently strong current and external field marked “both unstable” in Fig. \[fig4\]. The first situation has been studied by Sun [@sun], assuming single-domain behaviour of the switching magnet, and by Miltat [*et al.*]{} [@34] with more general micromagnetic configurations. Both situations have been considered by Li and Zhang [@35]. In the second case they find precessional states, and the possibility of “telegraph noise” at room temperature, as seen experimentally in Refs. [@22; @24]. Switching times (AP$\rightarrow$P and P$\rightarrow$AP) are estimated to be of the order of 1 ns. Micromagnetic simulations [@34] indicate that the Oersted field cannot be completely ignored for typical pillars with diameter of the order of 100 nm.
Finally, in this section, we briefly discuss some practical considerations which may ultimately decide whether current-induced switching is useful in spintronics. Sharp switching, with nearly rectangular hysteresis loops, is obviously desirable and this demands single-domain behaviour. In experiments on nanopillars of circular cross-section [@21] multidomain behaviour was observed, with the switching transition spread over a range of current. Subsequently the same group [@albert] found sharp switching in pillars whose cross-section was an elongated hexagon, which introduces strong uniaxial in-plane shape anisotropy. It was known from earlier magnetisation studies of nanomagnet arrays [@36] that such a shape anisotropy can result in single-domain behaviour. A complex switching transition does not necessarily indicate multidomain behaviour, however. It could also arise from a marked departure of $T_{\perp}(\psi)$ and/or $T_{\parallel}(\psi)$ from sinusoidal behaviour, such as occurs near $\psi=\pi$ in calculations for Co/Cu/Co(111) with two atomic planes of Co in the switching magnet (see Fig. \[fig13\](b)). In the calculations of the corresponding hysteresis loops (Fig. \[fig16\]) the torques were approximated by sine curves, but an accurate treatment would certainly complicate the AP$\rightarrow$P transition which occurs at negative bias in Fig. \[fig16\](b). Studies of this effect are planned.
The critical current density for switching is clearly an important parameter. From Eq. (\[crbias\]) the critical reduced bias for the P$\rightarrow$AP transition is to a good approximation given by $-\gamma h_{\rm p}/[2g_{\parallel}(0)]$. Using the definitions of reduced quantities given after Eq. (\[Gamma3\]), we may write the actual critical bias in volts as $$\label{VPAP}
V_{P\rightarrow AP}=M\gamma M_{\rm s}H_{\rm d}/[2g_{\parallel}(0)|e|],$$ where $M$ is the number of atomic planes in the switching magnet, $M_{\rm s}$ is the average moment ($J/T$) of the switching magnet per atomic plane per unit area, and $H_{\rm d}=\hbar H_{\rm
p0}/(2\mu_{\rm B})$ is the easy-plane anisotropy field in tesla. As noted earlier, $g_{\parallel}(0)=({\rm d}T_{\parallel}/{\rm d}\psi)_{\psi =0}$ where the torque $T_{\parallel}$ is per unit area in units of $eV_{\rm B}$. (The calculated torques in Figs. \[fig13\] and \[fig14\] of Sec. \[cinque\] are per surface atom, so if these are used to determine $g_{\parallel}(0)$ in Eq. (\[VPAP\]) then $M_{\rm s}$ must also be taken per surface atom.)
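As an order-of-magnitude illustration of Eq. (\[VPAP\]), the sketch below evaluates it for invented, Co-like parameter values (a moment of $1.6\mu_{\rm B}$ per surface atom, $H_{\rm d}=1.8$ T, $g_{\parallel}(0)=0.005$ per surface atom, consistent with the $<0.01$ bound quoted below); none of these numbers come from the text:

```python
# Order-of-magnitude sketch of Eq. (VPAP); all inputs are assumptions.
mu_B = 9.274e-24          # Bohr magneton, J/T
e = 1.602e-19             # elementary charge, C

M = 10                    # atomic planes in the switching magnet (assumed)
gamma = 0.01              # Gilbert damping
M_s = 1.6 * mu_B          # moment per atomic plane per surface atom (assumed)
H_d = 1.8                 # easy-plane anisotropy field in tesla (assumed)
g_par_0 = 0.005           # torque derivative per surface atom (assumed)

V_P_to_AP = M * gamma * M_s * H_d / (2.0 * g_par_0 * e)
print(V_P_to_AP)          # of order 1e-3 V, i.e. a few mV
```

Critical biases of a few millivolts across a metallic pillar correspond to the large current densities characteristic of spin-transfer switching experiments.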
An obvious way to reduce the critical bias, and hence the critical current, is to reduce $M$, the thickness of the switching magnet. Calculations show [@13] (see also Fig. \[fig14\]) that $g_{\parallel}$ does not decrease with $M$ and may, in fact, increase for small values such as $M=2$. Careful design of the device might also increase $g_{\parallel}(0)$ beyond the values ($<0.01$ per surface atom) which seem to be obtainable in simple trilayers [@13]. Jiang [*et al.*]{} [@37; @38] have studied various structures in which the polarising magnet is pinned by an adjacent antiferromagnet (exchange biasing) and in which a thin Ru layer is incorporated between the switching layer and the lead. Critical current densities of $2\times 10^{6}$ A cm$^{-2}$ have been obtained, substantially lower than those in Co/Cu/Co trilayers. Such structures can quite easily be investigated theoretically by the methods of Section \[cinque\].
Decreasing the magnetisation $M_{\rm s}$, and hence the demagnetising field ($\propto H_{\rm d}$), would be favourable, but $g_{\parallel}$ then tends to decrease also [@13]. A possible way of decreasing $H_{\rm d}$ without decreasing local magnetic moments in the system is to use a synthetic ferrimagnet as the switching magnet [@39]. The Gilbert damping factor $\gamma$ is another crucial parameter but it is uncertain whether this can be decreased significantly. However, the work of Capelle and Gyorffy [@40] is an interesting theoretical development. The search for structures with critical current densities low enough for use in spintronic devices (perhaps $10^{5}$ A cm$^{-2}$) [@41] is an enterprise where experiment and quantitative calculations [@13] should complement each other fruitfully.
Quantitative theory of spin-transfer torque {#quattro}
===========================================
General principles
------------------
To put the phenomenological treatment of Sec. \[tre\] on a first-principles quantitative basis we must calculate the spin-transfer torques (Eqs. (\[torque2\])) in a steady state for real systems. For this purpose it is convenient to describe the magnetic and nonmagnetic layers of Fig. \[fig1\] by tight-binding models, in general multiorbital with s, p, and d orbitals whose one-electron parameters are fitted to first-principles bulk band structure [@42]. The Hamiltonian is therefore of the form $$\label{HAM}
H=H_0+H_{\rm int}+H_{\rm anis}$$ where the one-electron hopping term $H_0$ is given by $$\label{H0}
H_0=\sum_{k_{\parallel}\sigma}\sum_{m\mu,n\nu}t_{m\mu,n\nu}({\bf
k}_{\parallel})
c^{\dagger}_{{\bf k}_{\parallel}m\mu\sigma}
c_{{\bf k}_{\parallel}n\nu\sigma},$$ where $c^{\dagger}_{k_{\parallel}m\mu\sigma}$ creates an electron in a Bloch state, with in-plane wave vector ${\bf k}_{\parallel}$ and spin $\sigma$, formed from a given atomic orbital $\mu$ in plane $m$. Eq. (\[H0\]) generalises the single-orbital Eq. (\[hamiltonian\]). $H_{\rm int}$ is an on-site interaction between electrons in d orbitals which leads to an exchange splitting of the bands in the ferromagnets and is neglected in the spacer and lead. Finally, $H_{\rm anis}$ contains anisotropy fields in the switching magnet and is given by $$\label{Hanis}
H_{\rm anis}=-\sum_{n}{\bf S}_{n}\cdot{\bf H}_{\rm A},$$ where ${\bf S}_n$ is the operator of the total spin angular momentum of plane $n$ and ${\bf H}_{\rm A}$ is given by Eqs. (\[hh\])-(\[hp\]) with the unit vector ${\bf m}$ in the direction of $\sum_{n}\langle{\bf S}_n\rangle$, where $\langle{\bf S}_n\rangle$ is the thermal average of ${\bf S}_n$. We assume here that the anisotropy fields $H_{\rm u0}$, $H_{\rm p}$ are uniform throughout the switching magnet but we could generalise to include, for example, a surface anisotropy.
In the tight-binding description, the spin angular momentum operator ${\bf S}_n$ is given by $$\label{Sn}
{\bf S}_{n}=\frac12\hbar\sum_{k_{\parallel\mu}}
(c^{\dagger}_{k_{\parallel}n\mu\uparrow},
c^{\dagger}_{k_{\parallel}n\mu\downarrow})
{\bm\sigma}(c_{k_{\parallel}n\mu\uparrow},
c_{k_{\parallel}n\mu\downarrow})^{\rm T}$$ and the corresponding operator for the spin angular momentum current between planes $n-1$ and $n$ is $$\label{jj}
{\bf j}_{n-1}=-\frac12 \hbar\sum_{{\bf k}_{\parallel}\mu\nu}
t({\bf k}_{\parallel})_{n\nu,n-1\mu}
\left(c^{\dagger}_{k_{\parallel}n\nu\uparrow},
c^{\dagger}_{k_{\parallel}n\nu\downarrow}\right)
{\bm\sigma}
\left(c_{k_{\parallel}n-1\mu\uparrow},
c_{k_{\parallel}n-1\mu\downarrow}\right)^{\rm T}+{\rm h.c.},$$ which generalises the single orbital expression (\[gamma\]). The rate of change of ${\bf S}_n$ in the switching magnet is given by $$\label{rate}
{\rm i}\hbar\frac{{\rm d}{\bf S}_n}{{\rm d}t}=[{\bf S}_n,H_0]+[{\bf
S}_n,H_{\rm anis}].$$ This result holds because the spin operator commutes with the interaction Hamiltonian $H_{\rm int}$.
It is straightforward to show that $$\label{commut}
[{\bf S}_n,H_0]={\rm i}\hbar({\bf j}_{n-1}-{\bf j}_{n}),$$ and $$\label{commut2}
[{\bf S}_n,H_{\rm anis}]=-{\rm i}\hbar({\bf H}_{\rm A}\times{\bf
S}_n).$$ On taking the thermal average, Eq. (\[rate\]) becomes $$\label{thermal}
\langle\frac{{\rm d}{\bf S}_n}{{\rm d}t}\rangle=\langle
{\bf j}_{n-1}\rangle-\langle {\bf j}_n\rangle-{\bf H}_{\rm
A}\times\langle{\bf S}_n\rangle.$$ This corresponds to an equation of continuity, stating that the rate of change of spin angular momentum on plane $n$ is equal to the difference between the rate of flow of this quantity onto and off the plane, plus the rate of change due to precession around the field ${\bf H}_{\rm A}$. When Eq. (\[thermal\]) is summed over all planes in the switching magnet we have $$\label{thermal2}
\frac{{\rm d}}{{\rm d}t} \langle{\bf S}_{\rm tot}\rangle=
{\bf T}^{\rm s-t}-{\bf H}_{\rm
A}\times \langle {\bf S}_{\rm tot}\rangle,$$ where the total spin-transfer torque ${\bf T}^{\rm s-t}$ is given by Eq. (\[torque\]) and $\langle{\bf S}_{\rm tot}\rangle$ is the total spin angular momentum of the switching magnet. Equation (\[thermal2\]) is equivalent to Eq. (\[llg\]), for zero external field, in the absence of damping. Equation (\[torque\]) shows how ${\bf T}^{\rm s-t}$ required for the phenomenological treatment of Sec. \[tre\] is to be determined from the calculated spin currents in the spacer and lead. As discussed in Sec. \[tre\], the magnetization of a single-domain sample is essentially uniform and the spin-transfer torque ${\bf T}^{\rm s-t}$ depends on the angle $\psi$ between the magnetisations of the polarising and switching magnets.
To consider time-dependent solutions of Eq. (\[llg\]) it is necessary to calculate ${\bf T}^{\rm s-t}$ for arbitrary angle $\psi$ and for this purpose ${\bf H}_{\rm A}$ can be neglected. To reduce the calculation of the spin-transfer torque to effectively a one-electron problem, we replace $H_{\rm int}$ by a selfconsistent exchange field term $-\sum_n{\bf S}_n\cdot{\bm\Delta}_n$, where the exchange field ${\bm\Delta}_n$ should be determined selfconsistently in the spirit of an unrestricted Hartree-Fock (HF) or local spin density (LSD) approximation. The essential selfconsistency condition in any HF or LSD calculation is that the local moment $\langle{\bf S}_n\rangle$ in a steady state is in the same direction as ${\bm\Delta}_n$. Thus we require $$\label{delta}
{\bm\Delta}_n\times\langle{\bf S}_n\rangle=0$$ for each atomic plane of the switching magnet. It is useful to consider first the situation when there is no applied bias and the polarising and switching magnets are separated by a spacer which is so thick that the zero-bias oscillatory exchange coupling [@44] is negligible. In that case we have two independent magnets and the selfconsistent exchange field in every atomic plane of the switching magnet is parallel to its total magnetisation which is uniform and assumed to be along the $z$-axis. Referring to Fig. \[fig1\] the selfconsistent solution therefore corresponds to uniform exchange fields in the polarising and switching magnets which are at an assumed angle $\psi=\theta$ with respect to one another.
When a bias $V_{\rm b}$ is applied, with a uniform exchange field ${\bm\Delta }=\Delta{\bf e}_z$ in the switching magnet imposed, the calculated local moments $\langle{\bf S}_n\rangle$ will deviate from the $z$-direction so that the solution is not selfconsistent. To prepare a selfconsistent state with ${\bm\Delta}$ and all $\langle{\bf S}_n\rangle=\langle {\bf S}\rangle$ in the $z$-direction it is necessary to apply fictitious constraining fields ${\bf H}_n$ of magnitude proportional to $V_{\rm b}$. The local field for plane $n$ is thus ${\bm\Delta}+{\bf H}_n$ but to calculate the spin currents in the spacer and lead, and hence ${\bf T}^{\rm s-t}$ from Eq. (\[torque\]), the fields ${\bf H}_n$, of the order of $V_{\rm b}$, may be neglected compared with ${\bm\Delta}$. Although the fictitious constraining fields ${\bf H}_n$ need therefore never be calculated, it is interesting to see that they are in fact related to ${\bf T}^{\rm s-t}$. For the constrained selfconsistent steady state ($\langle{\bf S}_n\rangle=\langle {\bf S}\rangle$, $\langle{\bf \dot{S}}_n\rangle=0$) in the presence of the constraining fields, with ${\bf H}_{\rm A}$ neglected as discussed above, it follows from Eq. (\[thermal\]) that $$\label{jjj} \langle{\bf j}_{n-1}\rangle-\langle{\bf j}_{n}\rangle=({\bm\Delta}+{\bf H}_n)\times\langle{\bf S}\rangle={\bf
H}_n\times\langle{\bf S}\rangle,$$ where the local field ${\bm\Delta}+{\bf H}_n$ replaces ${\bf
H}_{\rm A}$. On summing over all atomic planes $n$ in the switching magnet we have $$\label{ttt} {\bf T}^{\rm s-t}=\langle{\bf j}_{\rm
spacer}\rangle-\langle{\bf j}_{\rm lead}\rangle=\sum_n{\bf
H}_n\times\langle{\bf S}\rangle.$$ Thus, as expected, in the prepared state with a given angle $\psi$ between the magnetisations of the magnetic layers the spin-transfer torque is balanced by the total torque due to the constraining fields.
In the simple model of Section \[due\], with infinite exchange splitting in the magnets, the local moment is constrained to be in the direction of the exchange field so the question of selfconsistency is not raised.
The main conclusion of this Section is that the spin-transfer torque for a given angle $\psi$ between magnetisations may be calculated using uniform exchange fields making the same angle with one another. Such calculations are described in Secs. \[due\] and \[cinque\]. The use of this spin-transfer torque in the LLG equation of Section \[tre\] completes what we shall call the “standard model” (SM). It underlies the original work of Slonczewski [@3] and most subsequent work. The spin-transfer torque calculated in this way should be appropriate even for time-dependent solutions of the LLG equation. This is based on the reasonable assumption that the time for the electronic system to attain a “constrained steady state” with given $\psi$ is short compared with the time-scale ($\approx$1 ns) of the macroscopic motion of the switching magnet moment.
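Within the SM the calculated torque enters the LLG equation, whose time-dependent solutions must in practice be obtained numerically. The following minimal sketch integrates a generic reduced LLG form with a Slonczewski-type in-plane torque; the equation of motion and all parameter values are illustrative assumptions, not the document's precise Eq. (\[Gamma2\]):

```python
import numpy as np

def llg_step(m, p, a_par, a_perp, alpha, h_ext, dt):
    """One explicit-Euler step of a generic reduced LLG + spin-torque sketch:
    dm/dt = -m x H + a_perp (m x p) + a_par m x (p x m) - alpha m x (m x H),
    with H = h_ext * p.  All quantities dimensionless; values illustrative."""
    H = h_ext * p
    dm = (-np.cross(m, H)
          + a_perp * np.cross(m, p)
          + a_par * np.cross(m, np.cross(p, m))
          - alpha * np.cross(m, np.cross(m, H)))
    m = m + dt * dm
    return m / np.linalg.norm(m)          # keep |m| = 1

p = np.array([0.0, 0.0, 1.0])             # polarising-magnet direction
m = np.array([0.14, 0.0, -0.99])          # start close to the AP state
m /= np.linalg.norm(m)
for _ in range(2000):                     # integrate to t = 20 with dt = 0.01
    m = llg_step(m, p, a_par=1.0, a_perp=0.0, alpha=0.01, h_ext=0.0, dt=0.01)
print(m @ p)                              # close to +1: switched AP -> P
```

With $a_{\parallel}>0$ the in-plane term drives ${\bf m}$ monotonically towards ${\bf p}$, the behaviour underlying the AP$\rightarrow$P jump; a real calculation would use the computed $g_{\parallel}(\psi)$, $g_{\perp}(\psi)$ and the full anisotropy fields.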
Although the SM is a satisfactory way of calculating the spin-transfer torque, its lack of selfconsistency leads to some non-physical concepts. The first of these is the “transverse spin accumulation” in the switching magnet [@46; @47]. This refers to the deviations of local moments $\langle{\bf S}_n\rangle$ from the direction of the exchange field, assumed uniform in the SM. In a selfconsistent treatment such deviations do not occur because the exchange field is always in the direction of the local moment. A related non-physical concept is the “spin decoherence length” over which the spin accumulation is supposed to decay [@46; @47]. More detailed critiques of these concepts are given elsewhere [@13; @em].
Keldysh formalism for fully realistic calculations of the spin-transfer torque
------------------------------------------------------------------------------
The wave-function approach to spin-transfer torque described in Section \[due\] is difficult to apply to realistic multiorbital systems. For this purpose Green functions are much more convenient and Keldysh [@11] developed a Green function approach to the non-equilibrium problem of electron transport. In this section we apply this method to calculate spin currents in a magnetic layer structure, following Edwards [*et al.*]{} [@13].
The structure we consider is shown schematically in Fig. \[fig1\]. It consists of a thick (semi-infinite) left magnetic layer (polarising magnet), a nonmagnetic metallic spacer layer of $N$ atomic planes, a thin switching magnet of $M$ atomic planes, and a semi-infinite lead. The broken line between the atomic planes $n-1$ and $n$ represents a cleavage plane separating the system into two independent parts so that charge carriers cannot move between the two surface planes $n-1$ and $n$. It will be seen that our ability to cleave the whole system in this way is essential for the implementation of the Keldysh formalism. This can be easily done with a tight-binding parametrisation of the band structure by simply switching off the matrix of hopping integrals $t_{n\nu,n-1\mu}$ between atomic orbitals $\nu$, $\mu$ localised in planes $n-1$ and $n$. We therefore adopt the tight-binding description with the Hamiltonian defined by Eqs. (\[HAM\]-\[Sn\]).
To use the Keldysh formalism [@11; @12; @53] to calculate the charge or spin currents flowing between the planes $n-1$ and $n$, we consider an initial state at time $\tau=-\infty$ in which the hopping integral $t_{n\nu,n-1\mu}$ between planes $n-1$ and $n$ is switched off. Then both sides of the system are in equilibrium but with different chemical potentials $\mu_{\rm L}$ on the left and $\mu_{\rm R}$ on the right, where $\mu_{\rm L}-\mu_{\rm R}=eV_{\rm
b}$. The interplane hopping is then turned on adiabatically and the system evolves to a steady state. The cleavage plane, across which the hopping is initially switched off, may be taken in either the spacer or in one of the magnets or in the lead. In principle, the Keldysh method is valid for arbitrary bias $V_{\rm b}$ but here we restrict ourselves to small bias corresponding to linear response. This is always reasonable for a metallic system. For larger bias, which might occur with a semiconductor or insulator as spacer, electrons would be injected into the right part of the system far above the Fermi level and many-body processes neglected here would be important. Following Keldysh [@11; @12], we define a two-time matrix $$\label{kel}
G_{\rm RL}^{+}(\tau,\tau^{\prime})={\rm i}\langle c_{\rm
L}^{\dagger}(\tau^{\prime})c_{\rm R}(\tau)\rangle,$$ where $R\equiv(n,\nu,\sigma^{\prime})$ and $L\equiv(n-1,\mu,\sigma)$, and we suppress the $k_{\parallel}$ label. The thermal average in Eq. (\[kel\]) is calculated for the steady state of the coupled system. The matrix $G_{\rm
RL}^{+}$ has dimensions $2m\times 2m$, where $m$ is the number of orbitals on each atomic site, and is written so that the $m\times
m$ upper diagonal block contains matrix elements between $\uparrow$ spin orbitals and the $m\times m$ lower diagonal block relates to $\downarrow$ spin. The $2m\times 2m$ hopping matrices $t_{\rm LR}$ and $t_{\rm RL}$ are written similarly and in this case only the diagonal blocks are nonzero. If we denote $t_{\rm LR}$ by $t$, then $t_{\rm RL}=t^{\dagger}$. We also generalise the definition of ${\bm\sigma}$ so that its components are now direct products of the $2\times 2$ Pauli matrices $\sigma_x$, $\sigma_y$, $\sigma_z$ with the $m\times m$ unit matrix. The thermal average of the spin current operator, given by Eq. (\[jj\]), may now be expressed as $$\label{uffa}
\langle{\bf j}_{n-1}\rangle=\frac12\sum_{{\bf k}_{\parallel}}{\rm
Tr}\left\{\left[G_{\rm RL}^{+}\left(\tau,\tau\right)t-G_{\rm
LR}^{+}(\tau,\tau)t^{\dagger}\right]{\bm\sigma}\right\}.$$ Introducing the Fourier transform $G^{+}(\omega)$ of $G^{+}(\tau,\tau^{\prime})$ , which is a function of $\tau-\tau^{\prime}$, we have $$\label{uffa2}
\langle{\bf j}_{n-1}\rangle=\frac12\sum_{{\bf k}_{\parallel}}
\int\frac{{\rm d}\omega}{2\pi}{\rm Tr}\left\{\left[G_{\rm
RL}^{+}\left(\omega\right)t-G_{\rm
LR}^{+}(\omega)t^{\dagger}\right]{\bm\sigma}\right\}.$$ The charge current is given by Eq. (\[uffa2\]) with $\frac12{\bm\sigma}$ replaced by the unit matrix multiplied by $e/\hbar$.
Following Keldysh [@11; @12] we now write $$\label{uffa3}
G_{\rm AB}^{+}(\omega)=\frac12\left(F_{\rm AB}+G_{\rm AB}^{\rm
a}-G_{\rm AB}^{\rm r}\right),$$ where the suffices $A$ and $B$ are either $R$ or $L$. $F_{\rm
AB}(\omega)$ is the Fourier transform of $$\label{uffa4}
F_{\rm AB}(\tau,\tau^{\prime})=-{\rm i}\langle[c_{\rm A}(\tau),c_{\rm
B}^{\dagger}(\tau^{\prime})]_{-}\rangle$$ and $G^{\rm a}$, $G^{\rm r}$ are the usual advanced and retarded Green functions [@54]. Note that in [@11] and [@12] the definitions of $G^{\rm a}$ and $G^{\rm r}$ are interchanged and that in the Green function matrix defined by these authors $G^{+}$ and $G^{-}$ should be interchanged.
Charge and spin current are related by Eqs. (\[uffa2\]) and (\[uffa3\]) to the quantities $G^{\rm a}$, $G^{\rm r}$ and $F_{\rm AB}$. The latter are calculated for the coupled system by starting with decoupled left and right systems, each in equilibrium, and turning on the hopping between planes L and R as a perturbation. Hence, we express $G^{\rm a}$, $G^{\rm r}$ and $F_{\rm AB}$ in terms of retarded surface Green functions $g_{L}\equiv g_{\rm LL}$, $g_{\rm R}\equiv
g_{\rm RR}$ for the decoupled equilibrium system. It is then found [@13] that the spin current between the planes $n-1$ and $n$ can be written as the sum $\langle{\bf j}_{n-1}\rangle=\langle{\bf
j}_{n-1}\rangle_1+\langle{\bf j}_{n-1}\rangle_2$, where the two contributions to the spin current $\langle{\bf j}_{n-1}\rangle_1$, $\langle{\bf j}_{n-1}\rangle_2$ are given by $$\label{nonloso1}
\langle{\bf j}_{n-1}\rangle_1=\frac1{4\pi}\sum_{{\bf
k}_\parallel}\int{\rm d}\omega\,\Re{\rm Tr}[(B-A){\bm\sigma}]
[f(\omega-\mu_{\rm L})+f(\omega-\mu_{\rm R})].$$
$$\label{nonloso2}
\langle{\bf j}_{n-1}\rangle_2=\frac1{2\pi}\sum_{{\bf
k}_\parallel}\int{\rm d}\omega\,\Re{\rm Tr}\left\{[g_{\rm L}tABg_{\rm
R}^{\dagger}t^{\dagger}-AB+\frac12(A+B)]{\bm\sigma}\right\}
[f(\omega-\mu_{\rm L})-f(\omega-\mu_{\rm R})].$$
Here, $A=[1-g_{\rm R}t^{\dagger}g_{\rm L}t]^{-1}$, $B=[1-g^{\dagger}_{\rm R}t^{\dagger}g^{\dagger}_{\rm L}t]^{-1}$, and as in Section \[due\] $f(\omega-\mu)$ is the Fermi function with chemical potential $\mu$ and $\mu_{\rm L}-\mu_{\rm R}=eV_{\rm b}$. In the linear-response case of small bias which we are considering, the Fermi functions in Eq. (\[nonloso2\]) are expanded to first order in $V_{\rm b}$. Hence the energy integral is avoided, being equivalent to multiplying the integrand by $eV_{\rm b}$ and evaluating it at the common zero-bias chemical potential $\mu_0$.
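The matrix structure of Eq. (\[nonloso2\]) is easy to probe numerically in the simplest case of $m=1$ (a single orbital), a single ${\bf k}_{\parallel}$ point and a single energy. In the sketch below the surface Green functions, the hopping and the rotation angle are illustrative numbers only, not values for any real junction; what the code reproduces is the construction of $A$, $B$ and the trace.

```python
import numpy as np

# Pauli matrices; for m = 1 orbital the spin space is 2x2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot_y(psi):
    """Spin rotation by angle psi about the y axis."""
    return np.cos(psi / 2) * I2 - 1j * np.sin(psi / 2) * sy

# Illustrative retarded surface Green functions (Im g < 0), diagonal in
# spin for magnetisation along z; exchange splitting makes up != down.
g_L = np.diag([1 / (0.5 + 0.1j), 1 / (-0.3 + 0.1j)])
g_R0 = np.diag([1 / (0.4 + 0.1j), 1 / (-0.2 + 0.1j)])
t = 0.2 * I2  # spin-independent hopping, so t_RL = t^dagger

def torque_integrand(psi):
    """Re Tr{[g_L t A B g_R^+ t^+ - A B + (A+B)/2] sigma} at angle psi."""
    U = rot_y(psi)
    g_R = U @ g_R0 @ U.conj().T          # switching magnet rotated by psi
    A = np.linalg.inv(I2 - g_R @ t.conj().T @ g_L @ t)
    B = np.linalg.inv(I2 - g_R.conj().T @ t.conj().T @ g_L.conj().T @ t)
    M = g_L @ t @ A @ B @ g_R.conj().T @ t.conj().T - A @ B + 0.5 * (A + B)
    return np.array([np.trace(M @ s).real for s in (sx, sy, sz)])

print(torque_integrand(0.0))        # transverse (x, y) components vanish
print(torque_integrand(np.pi / 2))  # transverse components switched on
```

For collinear magnetisations ($\psi=0$) every matrix is diagonal in spin, so the transverse components of the trace vanish identically, as they must; at $\psi=\pi/2$ they are nonzero.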
It can be seen that Eqs. (\[nonloso1\]) and (\[nonloso2\]), which determine the spin and the charge currents, depend on just two quantities, [*i.e.*]{} the surface retarded one-electron Green functions for a system cleaved between two neighbouring atomic planes. The surface Green functions can be determined without any approximations by the standard adlayer method (see [*e.g.*]{} [@42; @44]) for a fully realistic band structure.
We first note that there is a close correspondence between Eqs. (\[nonloso1\]), (\[nonloso2\]) and the generalised Landauer formula (\[landau33\]). The first term in Eq. (\[landau33\]) corresponds to the zero-bias spin current $\langle{\bf j}_{n-1}\rangle_1$ given by Eq. (\[nonloso1\]). When the cleavage plane is taken in the spacer, the spin current $\langle{\bf j}_{n-1}\rangle_1$ determines the oscillatory exchange coupling between the two magnets and it is easy to verify that the formula for the exchange coupling obtained from Eq. (\[nonloso1\]) is equivalent to the formula used in previous total energy calculations of this effect [@42; @44]. The contribution to the transport spin current given by Eq. (\[nonloso2\]) clearly corresponds to the second term in the Landauer formula (\[landau33\]) which is proportional to the bias in the linear response limit. Placing the cleavage plane first between any two neighbouring atomic planes in the spacer and then between any two neighbouring planes in the lead, we obtain from Eq. (\[nonloso2\]) the total spin-transfer torque ${\bf T}^{\rm s-t}$ of Eq. (\[torque\]) in Section \[due\].
The equivalence of the Keldysh and Landauer methods has been demonstrated by calculating the currents (\[nonloso1\]) and (\[nonloso2\]) analytically for the simple single orbital model of Section \[due\]. The results of that section, such as Eq. (\[torquex\]) are reproduced [@13].
Quantitative results for Co/Cu/Co(111) {#cinque}
===================================
We now discuss the application of the Keldysh formalism to a real system. In particular we consider a realistic multiorbital model of fcc Co/Cu/Co(111) with tight-binding parameters fitted to the results of the first-principles band structure calculations, as described previously [@42; @44].
Referring to Fig. \[fig1\], the system considered by Edwards [*et al.*]{} [@13] consists of a semi-infinite slab of Co (polarising magnet), the spacer of 20 atomic planes of Cu, the switching magnet containing $M$ atomic planes of Co, and the lead, which is semi-infinite Cu. The spacer thickness of 20 atomic planes of Cu was chosen so that the contribution of the oscillatory exchange coupling term is so small that it can be neglected. The spin currents in the right lead and in the spacer were determined from Eq. (\[nonloso2\]). Figure \[fig13\](a),(b) shows the angular dependences of $T_{\parallel}$, $T_{\perp}$ for the cases $M=1$ and $M=2$, respectively.
![Dependence of the spin-transfer torque $T_{\parallel}$ and $T_{\perp}$ for Co/Cu/Co(111) on the angle $\psi$. The torques per surface atom are in units of $eV_{\rm b}$. Figure (a) is for $M=1$, and (b) for $M=2$ monolayers of Co in the switching magnet. []{data-label="fig13"}](Fig13.eps){width="12cm"}
For the monolayer switching magnet, the torques $T_{\parallel}$ and $T_{\perp}$ are equal in magnitude and they have opposite sign. However, for $M=2$, the torques have the same sign and $T_{\perp}$ is somewhat smaller than $T_{\parallel}$. A negative sign of the ratio of the two torque components has important and unexpected consequences for hysteresis loops as already discussed in Section \[tre\]. It can be seen that the angular dependence of both torque components is dominated by a $\sin\psi$ factor but distortions from this dependence are clearly visible. In particular, the slopes at $\psi=0$ and $\psi=\pi$ are quite different. As pointed out in Section \[tre\], this is important in the discussion of the stability of steady states and leads to quite different magnitudes of the critical biases $V_{\rm P}\rightarrow V_{\rm AP}$ and $V_{\rm
AP}\rightarrow V_{\rm P}$.
In Fig. \[fig14\] we reproduce the dependence of $T_{\perp}$ and $T_{\parallel}$ on the thickness of the Co switching magnet. It can be seen that the out-of-plane torque $T_{\perp}$ becomes smaller than $T_{\parallel}$ for thicker switching magnets.
![Dependence of the spin-transfer torque $T_{\parallel}$ and $T_{\perp}$ for Co/Cu/Co(111) on the thickness of the switching magnet $M$ for $\psi=\pi/3$. The torques are in units of $eV_{\rm
b}$.[]{data-label="fig14"}](Fig14.eps){width="8cm"}
However, $T_{\perp}$ is by no means negligible (27$\%$ of $T_{\parallel}$) even for a typical experimental thickness of the switching Co layer of ten atomic planes. It is also interesting that beyond the monolayer thickness, the ratio of the two torques is positive with the exception of $M=4$.
The microscopically calculated spin-transfer torques for Co/Cu/Co(111) were used by Edwards [*et al.*]{} [@13] as an input into the phenomenological LLG equation. For simplicity the torques as functions of $\psi$ were approximated by sine curves but this is not essential. The LLG equation was first solved numerically to determine all the steady states and then the stability discussion outlined in the phenomenological section was applied to determine the critical bias for which instabilities occur. Finally, the ballistic resistance of the structure was evaluated from the real-space Kubo formula at every point of the steady state path. Such a calculation for the realistic Co/Cu system then gives hysteresis loops of the resistance versus bias which can be compared with the observed hysteresis loops. The LLG equation was solved including a strong easy-plane anisotropy with $h_{\rm p}=100$. If we take $H_{\rm u0}=1.86\times 10^9$sec$^{-1}$, corresponding to a uniaxial anisotropy field of about 0.01T, this value of $h_{\rm p}$ corresponds to the shape anisotropy for a magnetisation of $1.6\times 10^6$A/m, similar to that of Co [@sun]. Also a realistic value [@sun] of the Gilbert damping parameter $\gamma=0.01$ was used. Finally, referring to the geometry of Fig. \[fig1\], two different values of the angle $\theta$ were employed in these calculations: $\theta=2$rad and $\theta=3$rad, the latter value being close to the value of $\pi$ which is realised in most experiments.
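The procedure just described, integrating the LLG equation with a spin-transfer term and tracking the magnetisation until a steady state or a switching event is reached, can be sketched in a few lines. The dimensionless damping, anisotropy and torque coefficients below are illustrative only, not the Co/Cu values of [@13]; the torque is taken in the simple form $a_J\,[{\bf p}-{\bf m}({\bf m}\cdot{\bf p})]=-a_J\,{\bf m}\times({\bf m}\times{\bf p})$ with polariser ${\bf p}=\hat{\bf z}$.

```python
import numpy as np

def llg_rhs(m, gamma=0.05, a_j=-0.2):
    """Dimensionless LLG: precession about a uniaxial anisotropy field
    along z, Gilbert-like damping, and a spin-transfer term
    a_j * (p - m (m.p)) = -a_j m x (m x p) with p = z."""
    p = np.array([0.0, 0.0, 1.0])
    h = np.array([0.0, 0.0, m[2]])            # uniaxial anisotropy field
    precession = -np.cross(m, h)
    damping = gamma * (h - m * np.dot(m, h))  # = -gamma m x (m x h)
    torque = a_j * (p - m * np.dot(m, p))     # a_j < 0 pushes m away from +z
    return precession + damping + torque

# RK4 with renormalisation (|m| = 1 is preserved only approximately
# by the discrete scheme).
m = np.array([np.sin(0.1), 0.0, np.cos(0.1)])  # start close to +z
dt = 0.005
for _ in range(20000):                         # integrate to t = 100
    k1 = llg_rhs(m)
    k2 = llg_rhs(m + 0.5 * dt * k1)
    k3 = llg_rhs(m + 0.5 * dt * k2)
    k4 = llg_rhs(m + dt * k3)
    m = m + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    m /= np.linalg.norm(m)

print(m)  # the spin-transfer torque has driven m from +z to near -z
```

With these (hypothetical) parameters the torque coefficient exceeds the damping-stabilised value, so the initial state near $+z$ is unstable and the magnetisation switches; reducing $|a_J|$ below $\gamma$ leaves $+z$ stable, which is the discrete analogue of the critical-bias discussion above.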
We first reproduce in Fig. \[fig16\] the hysteresis loops for the case of Co switching magnet consisting of two atomic planes. We note that the ratio $r=T_{\perp}/T_{\parallel}\approx 0.65$ deduced from Fig. \[fig13\] is positive in this case. Fig. \[fig16\](a) shows the hysteresis loop for $\theta=2$ and Fig. \[fig16\](b) that for $\theta=3$.
![Resistance of the Co/Cu/Co(111) junction as a function of the applied bias, with $M=2$ monolayers of Co in the switching magnet. (a) is for $\theta=2$rad and (b) for $\theta=3$rad.[]{data-label="fig16"}](Fig16.eps){width="13cm"}
The hysteresis loop for $\theta=3$ shown in Fig. \[fig16\](b) is an illustration of the stability scenario in zero applied field with $r>0$ discussed in Section \[tre\]. As pointed out there the hysteresis curve is that of Fig. \[fig5\](a) which agrees with Fig. \[fig16\](b) when we remember that the reduced bias used in Fig. \[fig5\] has the opposite sign from the bias in volts used in Fig. \[fig16\]. It is rather interesting that the critical bias for switching is $\approx
0.2$mV both for $\theta=2$ and $\theta=3$. When this bias is converted to the current density using the calculated ballistic resistance of the junction, it is found [@13] that the critical current for switching is $\approx 10^7$A/cm$^2$, which is in very good agreement with experiments [@albert].
The hysteresis loops for the case of the Co switching magnet consisting of a single atomic plane are reproduced in Fig. \[fig17\]. The values of $h_{\rm p}$, $\gamma$, $H_{\rm u0}$, and $\theta$ are the same as in the previous example.
![Resistance of the Co/Cu/Co(111) junction as a function of the applied current, with $M=1$ monolayer of Co in the switching magnet. (a) is for $\theta=2$rad and (b) for $\theta=3$rad.[]{data-label="fig17"}](Fig17.eps){width="13cm"}
However, the ratio $r\approx -1$ is now negative and the hysteresis loops in Fig. \[fig17\] illustrate the interesting behaviour discussed in Section \[tre\] when the system subjected to a bias higher than a critical bias moves to the "both unstable" region shown in Fig. \[fig6\]. As in Fig. \[fig7\] the points on the hysteresis loop in Fig. \[fig17\] corresponding to the critical bias are labelled by asterisks. Fig. \[fig17\](b) and Fig. \[fig7\](a) are in close correspondence because Fig. \[fig7\](a) is for $r_{\rm c}<r<0$ and in the present case $r=-1$, $r_{\rm c}=-2/(\gamma h_{\rm p})=-2$. Also, from Fig. \[fig13\](a), $g_{\parallel}<0$ so that $vg_{\parallel}$ in Fig. \[fig7\](a) has the same sign as the voltage $V$ in Fig. \[fig17\](b).
Summary
=======
Spin-transfer torque is responsible for current-driven switching of magnetisation in magnetic layered structures. The simplest theoretical scheme for calculating spin-transfer torque is a generalised Landauer method and this is used in Section \[due\] to obtain analytical results for a simple model. The general phenomenological form of spin-transfer torque is deduced in Section \[tre\] and this is introduced into the Landau-Lifshitz-Gilbert equation, together with torques due to anisotropy fields. This describes the motion of the magnetisation of the switching magnet and the stability of the steady states (constant current and stationary magnetisation direction) is studied under different experimental conditions, with and without external field. This leads to hysteretic and reversible behaviour in resistance versus bias (or current) plots in agreement with a wide range of experimental observations. In Section \[quattro\] the general principles of a self-consistent treatment of spin-transfer torque are discussed and the Keldysh formalism for quantitative calculations is introduced. This approach to the non-equilibrium problem of electron transport uses Green functions which are very convenient to calculate for a realistic multiorbital tight-binding model of the layered-structure. In Section \[cinque\] quantitative calculations for Co/Cu/Co(111) systems are presented which yield switching currents of the observed magnitude.
This study of current-driven switching of magnetisation was carried out in collaboration with J. Mathon and A. Umerski and financial support was provided by the UK Engineering and Physical Sciences Research Council (EPSRC).
[99]{}
P. Grünberg, R. Schreiber, Y. Pang, M. B. Brodsky, and H. Sowers, Phys. Rev. Lett. [**57**]{}, 2442 (1986); M. N. Baibich, J. M. Broto, A. Fert, F. Nguyen Van Dau, F. Petroff, P. Etienne, G. Creuzet, A. Friederich, and J. Chazelas, Phys. Rev. Lett. [**61**]{}, 2472 (1988). S. S. P. Parkin [*et al.*]{}, J. Appl. Phys. [**85**]{}, 5828 (1999). J. C. Slonczewski, J. Magn. Magn. Mater. [**159**]{}, L1 (1996). X. Waintal, E. B. Myers, P. W. Brouwer, and D. C. Ralph, Phys. Rev. B [**62**]{}, 12317 (2000). "Transport in Nanostructures" by D. K. Ferry and S. M. Goodnick (Cambridge University Press 1997). J. Z. Sun, Phys. Rev. B [**62**]{}, 570 (2000). F. J. Albert, J. A. Katine, R. A. Buhrman, and D. C. Ralph, Appl. Phys. Lett. [**77**]{}, 3809 (2000). J. Grollier, V. Cros, A. Hamzic, J. M. George, H. Jaffres, A. Fert, G. Faini, J. Ben Youssef, and H. Le Gall, Appl. Phys. Lett. [**78**]{}, 3663 (2001). E. C. Stoner and E. P. Wohlfarth, Phil. Trans. Roy. Soc. A [**240**]{}, 599 (1948). J. Grollier, V. Cros, H. Jaffres, A. Hamzic, J. M. George, G. Faini, J. Ben Youssef, H. Le Gall, and A. Fert, Phys. Rev. B [**67**]{}, 174402 (2003). F. J. Albert, N. C. Emley, E. B. Myers, D. C. Ralph, and R. A. Buhrman, Phys. Rev. Lett. [**89**]{}, 226802 (2002). D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations, Clarendon Press, Oxford (1977). D. M. Edwards, F. Federici, G. Mathon, and A. Umerski, Phys. Rev. B [**71**]{}, 134501 (2005). J. A. Katine, F. J. Albert, R. A. Buhrman, E. B. Myers, and D. C. Ralph, Phys. Rev. Lett. [**84**]{}, 3149 (2000). S. I. Kiselev, J. C. Sankey, I. N. Krivorotov, N. C. Emley, R. J. Schoelkopf, R. A. Buhrman, and D. C. Ralph, Nature [**425**]{}, 380 (2003). S. Urazhdin, N. O. Birge, W. P. Pratt, Jr., and J. Bass, Phys. Rev. Lett. [**91**]{}, 146803 (2003). M. R. Pufall, W. H. Rippard, S. Kaka, S. E. Russek, T. J. Silva, J. Katine, and M. Carey, Phys. Rev. B [**69**]{}, 214409 (2004). E. B. Myers, F. J. Albert, J. C. Sankey, E. Bonet, R. A. 
Buhrman, and D. C. Ralph, Phys. Rev. Lett. [**89**]{}, 196801 (2002). M. A. Zimmler, B. Özyilmaz, W. Chen, A. D. Kent, J. Z. Sun, M. J. Rooks, and R. H. Koch, Phys. Rev. B [**70**]{}, 184438 (2004). M. Tsoi, A. G. M. Jansen, J. Bass, W. C. Chiang, M. Seck, V. Tsoi, and P. Wyder, Phys. Rev. Lett. [**80**]{}, 4281 (1998). M. Tsoi, A. G. M. Jansen, J. Bass, W. C. Chiang, V. Tsoi, and P. Wyder, Nature [**406**]{}, 46 (2000). E. B. Myers, D. C. Ralph, J. A. Katine, R. N. Louie, and R. A. Buhrman, Science [**285**]{}, 867 (1999). J. C. Slonczewski, J. Magn. Magn. Mater. [**195**]{}, L261 (1999); [**247**]{}, 324 (2002). B. Özyilmaz, A. D. Kent, D. Monsma, J. Z. Sun, M. J. Rooks, and R. H. Koch, Phys. Rev. Lett. [**91**]{}, 067203 (2003). M. Tsoi, J. Z. Sun, M. J. Rooks, R. H. Koch, and S. S. P. Parkin, Phys. Rev. B [**69**]{}, 100406(R) (2004). W. F. Brown, Phys. Rev. [**130**]{}, 1677 (1963). Z. Li and S. Zhang, Phys. Rev. B [**69**]{}, 134416 (2004). J. Miltat, G. Albuquerque, A. Thiaville, and C. Vouille, J. Appl. Phys. [**89**]{}, 6982 (2001). Z. Li and S. Zhang, Phys. Rev. B [**68**]{}, 024404 (2003). R. P. Cowburn, D. K. Koltsov, A. O. Adeyeye, and M. E. Welland, Phys. Rev. Lett. [**83**]{}, 1042 (1999). Y. Jiang, S. Abe, T. Ochiai, T. Nozaki, A. Hirohata, N. Tezuka, and K. Inomata, Phys. Rev. Lett. [**92**]{}, 167204 (2004). Y. Jiang, T. Nozaki, S. Abe, T. Ochiai, A. Hirohata, N. Tezuka, and K. Inomata, Nature Materials [**3**]{}, 361 (2004). N. Tezuka (private communication). K. Capelle and B. L. Gyorffy, Europhys. Lett. [**61**]{}, 354 (2003). J. Sun, Nature [**424**]{}, 359 (2003). J. Mathon, Murielle Villeret, A. Umerski, R. B. Muniz, J. d'Albuquerque e Castro, and D. M. Edwards, Phys. Rev. B [**56**]{}, 11797 (1997). J. Mathon, Murielle Villeret, R. B. Muniz, J. d'Albuquerque e Castro, and D. M. Edwards, Phys. Rev. Lett. [**74**]{}, 3696 (1995). S. Zhang, P. M. Levy, and A. Fert, Phys. Rev. Lett. [**88**]{}, 236601 (2002). A. A. Kovalev, A. Brataas, and G. E. 
W. Bauer, Phys. Rev. B [**66**]{}, 224424 (2002). D. M. Edwards and J. Mathon in "Nanomagnetism: Multilayers, Ultrathin Films and Textured Media", eds. J. A. C. Bland and D. L. Mills (Elsevier, to be published). L. V. Keldysh, Sov. Phys. JETP [**20**]{}, 1018 (1965). C. Caroli, R. Combescot, P. Nozieres, and D. Saint-James, J. Phys. C [**4**]{}, 916 (1971). D. M. Edwards in: Exotic States in Quantum Nanostructures, ed. by S. Sarkar, Kluwer Academic Press (2002). G. D. Mahan, Many Particle Physics, 2nd Ed., Plenum Press, New York (1990).
UN nuclear watchdog has "a number of questions" about possible undeclared nuclear activities at 3 sites in Iran
United Nations Ambassador Nikki Haley was well prepared for the numerous questions CNN’s Wolf Blitzer fired at her regarding Trump’s decision to recognize Jerusalem as the capital of Israel.
“This is just common sense,” Haley stated on CNN’s “The Situation Room” Wednesday. “This is just reality.”
On Wednesday, President Trump announced that the administration will begin moving the U.S. Embassy from Tel Aviv to Jerusalem, under a 1995 law requiring the move that previous presidents had declined to carry out.
Blitzer attempted to get Haley to agree with critics of the decision, and failed.
“But it is significant, ambassador, that you’re not willing to say, the old city of Jerusalem or East Jerusalem is part of Israel. You say, that’s still open to negotiation. Is that what I’m hearing?” he asked.
“Wolf, why would the United States say that? When we’re pushing a peace process, that’s really for the Palestinians and the Israelis to decide,” Haley replied. “If we decided that, we would be picking a side.”
The Israelis and Palestinians should decide who owns which parts of Jerusalem, Haley pointed out, making it clear that the U.S. does not want to favor one group over the other.
“The last thing we’re going to do is pick what we think should happen because, at the end of the day, Palestinians and Israelis need to live together and live in the situation that they settle together. This is not something the United States wants to do,” she said.
Haley said that other U.S. stances on Israel and the Palestinians will not change because of Trump’s decision, including on settlements in the region.
“The U.S. was not talking about, in any way, settlements or anything else,” Haley stated. “This is just talking about the embassy being in the capital of Israel. We have long said that settlements are not a good idea. The Israelis, they are very familiar with us telling them that we don’t think it’s a good idea, especially when we’re moving through this peace process. We’re going to continue to say that.”
Blitzer pointed out that some Palestinian officials criticized Trump’s announcement, with Palestine Liberation Organization Secretary-General Saeb Erekat claiming the president “just destroyed any possibility of a two-state” solution.
“Anytime we make a decision, we get positive and negative reactions,” Haley responded about the comments, adding that tensions were “expected” to run high after Trump’s decision but “it will pass.”
When Blitzer asked about the president’s advisers opposing his decision, Haley responded:
“We’re moving something forward that hasn’t been done in 22 years. And this is about results and this is about courage, and I’ll remind you that when President Reagan made that famous speech that said, Mr. Gorbachev, bring down this wall, all of the people around him told him it was a bad idea.”
“Sometimes you have to take risks,” she added. “Courage leads to leadership. Leadership leads to peace. And we have to go with the truth.”
Targeting cytokine expression in glial cells by cellular delivery of an NF-kappaB decoy.
Inhibition of nuclear factor (NF)-kappaB has emerged as an important strategy for design of anti-inflammatory therapies. In neurodegenerative disorders like Alzheimer's disease, inflammatory reactions mediated by glial cells are believed to promote disease progression. Here, we report that uptake of a double-stranded oligonucleotide NF-kappaB decoy in rat primary glial cells is clearly facilitated by noncovalent binding to a cell-penetrating peptide, transportan 10, via a complementary peptide nucleic acid (PNA) sequence. Fluorescently labeled oligonucleotide decoy was detected in the cells within 1 h only when cells were incubated with the decoy in the presence of cell-penetrating peptide. Cellular delivery of the decoy also inhibited effects induced by a neurotoxic fragment of the Alzheimer beta-amyloid peptide in the presence of the inflammatory cytokine interleukin (IL)-1beta. Pretreatment of the cells with the complex formed by the decoy and the cell-penetrating peptide-PNA resulted in 80% and 50% inhibition of the NF-kappaB binding activity and IL-6 mRNA expression, respectively.
Mingis on Tech: Coding for Alexa
Alexa, the helpful assistant best known as the voice of Amazon Echo and Echo Dot devices, offers a range of "skills" right out of the box. It can perform a variety of tasks such as looking up information, setting a timer, playing music, activating smart home devices and more.
But what happens if there's a certain skill you want that Alexa doesn't do?
You can do what Sharon Machlis did and code your own.
Machlis, IDG's director of editorial analytics and data, explained to Computerworld Executive Editor Ken Mingis why you might want to develop your own skill and detailed some of the things to keep in mind if you decide to do so.
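As a sketch of what coding your own skill involves: a custom skill is ultimately a web service, often an AWS Lambda function, that receives Alexa's request JSON and returns a response JSON containing the speech to render. The handler below is a minimal, framework-free illustration of that request/response shape; production skills would normally use the Alexa Skills Kit SDK, and the intent name here is hypothetical.

```python
import json

def lambda_handler(event, context=None):
    """Minimal Alexa-style handler: route on the request type or intent
    name and return speech in the Alexa response JSON shape."""
    req = event.get("request", {})
    if req.get("type") == "LaunchRequest":
        speech = "Welcome. Ask me for the office coffee status."
    elif (req.get("type") == "IntentRequest"
          and req.get("intent", {}).get("name") == "CoffeeStatusIntent"):
        speech = "The pot was brewed ten minutes ago."  # hypothetical intent
    else:
        speech = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Simulate the JSON Alexa would POST when the skill is launched:
print(json.dumps(lambda_handler({"request": {"type": "LaunchRequest"}}),
                 indent=2))
```

The interesting work in a real skill, as Machlis describes, is in the interaction model (defining intents and sample utterances in the developer console) rather than in this plumbing.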
---
abstract: 'We study linear subset regression in the context of the high-dimensional overall model $y = \vartheta+\theta'' z + \epsilon$ with univariate response $y$ and a $d$-vector of random regressors $z$, independent of $\epsilon$. Here, ‘high-dimensional’ means that the number $d$ of available explanatory variables is much larger than the number $n$ of observations. We consider simple linear sub-models where $y$ is regressed on a set of $p$ regressors given by $x = M''z$, for some $d \times p$ matrix $M$ of full rank $p < n$. The corresponding simple model, i.e., $y=\alpha+\beta'' x + e$, can be justified by imposing appropriate restrictions on the unknown parameter $\theta$ in the overall model; otherwise, this simple model can be grossly misspecified. In this paper, we establish asymptotic validity of the standard $F$-test on the surrogate parameter $\beta$, in an appropriate sense, even when the simple model is misspecified.'
author:
- |
Hannes Leeb (University of Vienna and DataScience@UniVienna)\
Lukas Steinberger (University of Freiburg)
bibliography:
- 'lit.bib'
title: 'Statistical inference with $F$-statistics when fitting simple models to high-dimensional data'
---
Introduction
============
The $F$-test is a staple tool of applied statistical analyses. It is widely used, sometimes also in situations where its applicability is debatable because underlying assumptions may not be met. We study a situation of this kind: An $F$-test after fitting a (possibly misspecified) working model. We focus, in particular, on a scenario where the fitted model has $p$ explanatory variables while the true model has $d$ explanatory variables, with $p \ll d$, and where sample size is of the same order as $p$, i.e., $p = O(n)$. Scenarios like this occur, for example, in quality control studies like [@Sou91a], where a model with 18 explanatory variables (out of a total of about 8,000) is fit based on a sample of size 50; in time series forecasting with principal components as in [@Sto02a], who extract a handful of factors from 149 explanatory variables based on 480 monthly observations; or in genetic analyses like [@Vee02a], who select and fit a model with 70 genes (out of a total of about 25,000) based on a sample of size 78. In situations like these, the question whether the fitted model has any explanatory value is of particular interest. We show that, approximately, the usual $F$-statistic is $F$-distributed under a corresponding null-hypothesis, and that it is non-central $F$-distributed in a local neighborhood of the null. Approximation errors go to zero as $n\to \infty$ if $n^2 / \log d \to 0$ and if, at the same time, $p$ is of the same, or of slower, order as $n$; cf. Theorem \[t1\] and Remark \[rateofp\], respectively. Our results are uniform over a large region of the parameter space that we consider. In particular, our results also cover situations where the fitted model is misspecified. The setting of our analysis is non-standard in that we require a particular constellation of $d$, $p$ and $n$. This is a challenging setting of practical relevance, for which few theoretical results are available so far. 
Our findings, which are given for independent observations, also prompt the question whether similar results can be obtained under serial correlation.
The $F$-statistic is exactly $F$-distributed in a correctly specified linear model with Gaussian errors; and it is asymptotically $F$-distributed under the strong Gau[ß]{}-Markov condition on the errors if $n\to\infty$ while the model dimension stays fixed; cf. @And58a. $F$-tests in correctly specified models in settings where $p$ is allowed to increase with $n$ are studied, among others, by [@Akr00a; @Bat05a; @Boo95a; @Har08a; @Por84a; @Por85a; @Wan13a]. In addition, there are several viable alternatives to the $F$-test in potentially misspecified settings; see, for example, @Che10b [@Eic67a; @Hub67a; @Whi80a; @Whi80b; @Zho11a]. For further results on hypothesis testing and marginal screening in misspecified models, see, for example, @Boo13a [@Cho11a; @Fom03a; @Jen91a; @Ram91a], and the references therein.
On a technical level, this paper relies on @Wan13a, the corresponding extensions and corrections in [@Ste16a], and also on @Ste18a [@Ste18b]; all but the first of these references are based on @Ste15.
The rest of the paper is structured as follows: In Section \[thetruemodel\], we describe the true data-generating model and the underlying parameter space. The (typically misspecified) working model and the corresponding $F$-statistic are described in Section \[theworkingmodel\]. Our main theoretical result is given in Section \[mainresult\], and a simulation study in Section \[numericalresults\] demonstrates that our asymptotic approximations can ‘kick in’ reasonably fast.
The true model {#thetruemodel}
==============
Throughout, we consider the (true) linear model $$\label{y}
y \quad = \quad \vartheta+\theta' z + \epsilon$$ with $\vartheta\in {{\mathbb R}}$ and $\theta \in {{\mathbb R}}^d$ for some $d \in {{\mathbb N}}$. We assume that the error $\epsilon$ is independent of $z$, with mean zero and finite variance $\sigma^2>0$; its distribution will be denoted by $\mathcal L(\epsilon)$. Moreover, we assume that the vector of regressors $z$ has mean $\mu \in {{\mathbb R}}^d$ and positive definite variance/covariance matrix $\Sigma$. Our model assumptions are further discussed in @Ste18a [Remark 7.1]. No additional restrictions will be placed on the regression coefficients $\vartheta$ and $\theta$, on the moments $\mu$ and $\Sigma$, or on the error distribution $\mathcal L(\epsilon)$.
We do place some assumptions on the distribution of the explanatory variables. First, we assume that $z$ can be written as an affine transformation of independent random variables. With this, we can represent the $d$-vector $z$ as $$\label{z}
z \quad=\quad \mu + \Sigma^{1/2} R \tilde{z}$$ for a $d$-vector $\tilde{z}$ with independent (but not necessarily identically distributed) components so that ${{\mathbb E}}[\tilde{z}]=0$ and ${{\mathbb E}}[\tilde{z}\tilde{z}'] = I_d$, where $\Sigma^{1/2}$ is the positive definite and symmetric square root of $\Sigma$, and where $R$ is an orthogonal (non-random) matrix. Second, we assume that $\tilde{z}$ has a Lebesgue density, which we denote by $f_{\tilde{z}}$, with bounded marginal densities and finite marginal moments of sufficiently high order. In particular, we will assume that $f_{\tilde{z}}$ belongs to one of the classes ${\mathcal F}_{d,k}(D,E)$ that are defined in the next paragraph, for appropriate constants $k$, $D$ and $E$. Our assumptions on $z$ are similar to those maintained by @Bai96a and [@Zho11a]. For later use, note that the distribution of $(y,z)$ in – is characterized by $\vartheta$ and $\theta$, by $\mathcal L(\epsilon)$, by $\Sigma$ and $\mu$, by $f_{\tilde{z}}$, and by $R$.
Fix an integer $k\geq 1$ and positive (finite) constants $D$ and $E$. With this, write ${\mathcal F}_{d,k}(D,E)$ for the class of Lebesgue densities on ${{\mathbb R}}^d$ that are products of univariate marginal densities such that each such marginal density is bounded from above by $D$, and such that each univariate marginal density has absolute moments of order up to $k$ that are bounded by $E$.
The sub-model and the $F$-test {#theworkingmodel}
==============================
Consider a sub-model where $y$ is regressed on $x$, with $x$ given by $$\label{x}
x \quad=\quad M'z$$ for some full-rank $d\times p$ matrix $M$ with $p<d$. For example, $M$ can be a selection matrix that picks out $p$ components of the $d$-vector $z$. Submodels with regressors of the form $x=M'z$ also occur in principal component regression, partial least squares, and certain sufficient dimension reduction methods. We are particularly interested in situations where $d$ is *much* larger than $p$, i.e., $p \ll d$. Trivially, we can write $$\label{workingmodel}
y = \alpha + \beta' x + e$$ with $e = y - \alpha-\beta'x$, where $\alpha$ and $\beta$ minimize ${{\mathbb E}}[(y-\alpha-\beta' x)^2]$. The ‘error’ $e$ has mean zero (because both and include an intercept), and we denote its variance by $s^2 = {{\mathbb E}}[e^2]$. Note that $\alpha= \vartheta+\mu'\theta - \mu' M (M' \Sigma M)^{-1} M' \Sigma \theta$ and, for later use, that $$\begin{aligned}
\label{betas2}
\begin{split}
\beta \quad & = \quad (M'\Sigma M)^{-1} M'\Sigma \theta\quad\text{and}\\
s^2 \quad & = \quad \theta'\Sigma\theta -
\theta'\Sigma M (M'\Sigma M)^{-1} M'\Sigma\theta+\sigma^2.
\end{split}\end{aligned}$$ Irrespective of whether the working model is correctly specified, the ‘surrogate’ parameters $\alpha$, $\beta$ and $s^2$ are always well-defined. Here, $\beta$ is our main object of interest, instead of the underlying true parameter $\theta$. Such surrogate parameters are well-known in the statistics literature, certainly since @Hub67a, and have recently gained new popularity, as witnessed by, e.g., @Aba14a [@Bra14a; @Bac15a; @Buj14a]. In particular, such surrogate parameters can be consistently estimated, in a standard $M$-estimation setting, by the OLS estimator or by robust alternatives, provided that $p$ is not too large relative to $n$ [see @Por84a; @Por85a; @Whi80a; @Whi80b]; cf. also Lemma A.3 in @Ste15 and Lemma A.4 in @Ste18a for analyses tailored to our present setting.
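As a quick numerical check of the surrogate formulas (an illustrative sketch, not part of the paper; the dimensions, seed, and the choice of $\Sigma$, $\theta$ and the selection matrix $M$ are arbitrary): since $e=(\theta-M\beta)'z+\epsilon$, we have $s^2=(\theta-M\beta)'\Sigma(\theta-M\beta)+\sigma^2$, which expands to $\theta'\Sigma\theta-\theta'\Sigma M\beta+\sigma^2$, i.e., the second term in $s^2$ enters with a negative sign.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, sigma2 = 6, 2, 0.5

# Arbitrary positive definite Sigma, coefficient vector theta, selection matrix M
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)
theta = rng.standard_normal(d)
M = np.eye(d)[:, :p]          # selects the first p of the d regressors

# Surrogate parameters of the sub-model y = alpha + beta'x + e with x = M'z
beta = np.linalg.solve(M.T @ Sigma @ M, M.T @ Sigma @ theta)
s2 = theta @ Sigma @ theta - theta @ Sigma @ M @ beta + sigma2

# Direct computation: e = (theta - M beta)'z + eps, so
# s^2 = Var(e) = (theta - M beta)' Sigma (theta - M beta) + sigma^2
v = theta - M @ beta
s2_direct = v @ Sigma @ v + sigma2
print(np.isclose(s2, s2_direct))  # True
```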
The working model is correct (in the usual sense) if ${{\mathbb E}}[y\|z] = {{\mathbb E}}[y\|x]$, i.e., if $\vartheta+\theta'z = \alpha + \beta'x$ or, equivalently, if $\epsilon=e$. This is the case if $\theta$ lies in the column space of $M$; if $M$ is a selection matrix, this means that $M'\theta$ selects all the non-zero components of $\theta$. Here, we do not assume that the working model is correct. In particular, we stress that $e$ may differ from $\epsilon$, and that $e$ may depend on $x$.
When working with the simple sub-model , a natural question is whether $x$ has any explanatory value for the response variable $y$. Given a sample of $n > p+1$ independent and identically distributed (i.i.d.) observations of $y$ and $x$ from , a classical approach to this question is to use the $F$-test of the hypotheses $$\label{hypotheses}
H_0: \beta = 0 \quad\text{versus}\quad H_1: \beta\neq 0.$$ Let $Y$ and $X$ denote the $n\times 1$ vector of responses and the $n\times p$ matrix of explanatory variables, respectively. Write $\hat{\beta}$ for the OLS-estimator for $\beta$ when $Y$ is regressed on $X$ and a constant, set $\hat{s}^2 = \|(I_n - P_{\iota,X})Y\|^2/(n-p-1)$, and write $\hat{F}_n=\hat{F}_n(X,Y)$ for the usual $F$-statistic for testing $H_0$, i.e., $\hat{F}_n= \|(I_n - P_\iota)X \hat{\beta}\|^2/ (p \hat{s}^2)$ if the numerator is well-defined and the denominator is positive and $\hat{F}_n=0$ otherwise. Here, $P_{\dots}$ denotes the orthogonal projection on the space spanned by the column-vectors indicated in the subscript and $\iota$ denotes the $n$-vector $\iota=(1,\dots,1)'$. Note that $\hat{F}_n>0$ with probability one by our assumptions.
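As a concrete illustration (a minimal sketch with arbitrary data; the helper `proj` and all sizes are ours), the projection form of $\hat{F}_n$ agrees with the classical comparison of residual sums of squares of the two nested fits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal(n)

iota = np.ones((n, 1))
U = np.hstack([iota, X])

def proj(A):
    """Orthogonal projection onto the column space of A."""
    return A @ np.linalg.solve(A.T @ A, A.T)

# hat F_n = ||(I_n - P_iota) X beta_hat||^2 / (p * s_hat^2)
coef, *_ = np.linalg.lstsq(U, Y, rcond=None)
beta_hat = coef[1:]
s2_hat = np.sum((Y - U @ coef) ** 2) / (n - p - 1)
V = (np.eye(n) - proj(iota)) @ X
F_proj = np.sum((V @ beta_hat) ** 2) / (p * s2_hat)

# Classical form: compare the residual sums of squares of the nested fits
rss0 = np.sum((Y - Y.mean()) ** 2)
rss1 = np.sum((Y - U @ coef) ** 2)
F_rss = ((rss0 - rss1) / p) / (rss1 / (n - p - 1))
print(np.isclose(F_proj, F_rss))  # True
```

The equality is exact: $(P_U-P_\iota)Y = (I_n-P_\iota)X\hat\beta$, because an OLS fit with intercept passes through the sample means.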
$H_0$ may be re-phrased as the hypothesis that the best linear predictor of $y$ given $x$ is constant. An alternative to $H_0$ is the hypothesis that the Bayes-estimator of $y$ given $x$ is constant, i.e., $$\tilde{H}_0: {{\mathbb E}}[y\|x] \text{ is constant.}$$ Testing this non-parametric hypothesis is more difficult. In the asymptotic setting that we consider in the next section, however, we find that $H_0$ and $\tilde{H}_0$ are close to each other in the sense that the Bayes predictor and the best linear predictor (of $y$ given $x$) are close in terms of mean-squared prediction error; see Remark \[pinsker\] for details.
Main result {#mainresult}
===========
Our main result is concerned with the asymptotic distribution of the $F$-statistic in a local neighborhood of the null-hypothesis. Here, the local neighborhood is defined through the requirement that $$\Delta \quad=\quad
\text{Var}( \beta' x) / \text{Var}( e)
\quad=\quad
\beta' M' \Sigma M \beta /s^2$$ is small. This quantity can be interpreted as a signal-to-noise ratio in and depends on $\theta$, $M$, $\Sigma$ and $\sigma^2={{\mathbb E}}[\epsilon^2]$; cf. . If the error $e$ in is Gaussian and independent of $x$, then the $F$-statistic $\hat{F}_n$ is $F$-distributed with parameters $p$, $n-p-1$ and non-centrality parameter $n \Delta$; in that case, we have ${{\mathbb P}}(\hat{F}_n \leq t) =
F_{p,n-p-1,n\Delta}(t)$, where $F_{p,n-p-1,n \Delta}(\cdot)$ denotes the cumulative distribution function (c.d.f.) of the $F$-distribution with indicated parameters. In our present setting, however, the error $e$ in need not be Gaussian and can depend on $x$.
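Under the classical conditions just described (Gaussian error independent of $x$), $\hat{F}_n$ is exactly $F$-distributed. A small Monte Carlo check under the null $\beta=0$ (our own sketch; the sizes, replication count and seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p, alpha, reps = 50, 3, 0.05, 2000

crit = stats.f.ppf(1 - alpha, p, n - p - 1)   # central F critical value
rejections = 0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    Y = rng.standard_normal(n)                # beta = 0, Gaussian error
    U = np.hstack([np.ones((n, 1)), X])
    coef, *_ = np.linalg.lstsq(U, Y, rcond=None)
    rss1 = np.sum((Y - U @ coef) ** 2)
    rss0 = np.sum((Y - Y.mean()) ** 2)
    F = ((rss0 - rss1) / p) / (rss1 / (n - p - 1))
    rejections += F > crit

print(abs(rejections / reps - alpha) < 0.025)  # empirical size near alpha
```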
We will show that the distribution of $\hat{F}_n$ can be approximated by an $F$-distribution, uniformly over most parameters in the model. Restrictions are needed only for $\epsilon$, $f_{\tilde{z}}$ and $R$, i.e., for the error in , for the density of the standardized explanatory variables, and for the orthogonal matrix in . We will require a moment restriction on $\epsilon/\sigma$, and we will require that $f_{\tilde{z}}$ belongs to one of the classes ${\mathcal F}_{d,k}(D,E)$ introduced earlier. To formulate the restriction on $R$, write ${\mathcal O}_{d}$ for the collection of all orthogonal $d\times d$ matrices and write $\nu_{d}$ for the uniform distribution on that set; i.e., $\nu_{d}$ is the normalized Haar measure on the $d$-dimensional orthogonal group. For $R$, we will require that it belongs to a Borel set ${\mathbb U}\subseteq {\mathcal O}_{d}$ that is large in terms of $\nu_{d}$.
\[t1\] Fix finite constants $D\geq1$ and $E\geq1$, a constant $\rho\in (0,1)$, and positive finite constants $\lambda$, $L$ and $\gamma$. For each full-rank $d\times p$ matrix $M$, each $d\times d$ variance/covariance matrix $\Sigma>0$ and each $f_{\tilde{z}}\in {\mathcal F}_{d,20}(D,E)$ there exists a Borel set ${\mathbb U} = {\mathbb U}(M,\Sigma,f_{\tilde{z}}) \subseteq
{\mathcal O}_{d}$ so that $$\sup_{\footnotesize \begin{array}{c}
M
\end{array}
}\;
\sup_{
\Sigma
}\;
\sup_{f_{\tilde{z}} \in {\mathcal F}_{d,20}(D,E)}\;
\nu_{d}({\mathbb U})
\quad
\stackrel[]{\frac{ p }{\log d} \to 0}{\longrightarrow}
\quad
1$$ and so that the following holds: If $\Xi_n$ denotes either the quantity $$\label{t1.1}
\sup_{t\in{{\mathbb R}}} \left|
{{\mathbb P}}\Big( \hat{F}_n \leq t \Big) -
F_{p, n-p-1, n\Delta}(t)
\right|$$ or the quantity $$\label{t1.2}
{{\mathbb P}}\Big(
\hat{F}_n > F^{-1}_{p, n-p-1, 0}(\alpha)
\Big)
-
\Phi\Big(
- \Phi^{-1}(\alpha)
+ \sqrt{n} \Delta \sqrt{ \frac{1-p/n}{2 p/n}}
\Big)$$ for some fixed $\alpha\in [0,1]$, then $$\sup_{\footnotesize \begin{array}{c}
M
\end{array}
}\;
\sup_{
\footnotesize \begin{array}{c}
\vartheta, \theta, {\mathcal L}(\epsilon), \mu,
\Sigma\\ {{\mathbb E}}|\epsilon/\sigma|^{8+\lambda}\leq L\\
\Delta< \gamma/\sqrt{n}
\end{array}}\;
\sup_{f_{\tilde{z}}\in {\mathcal F}_{d,20}(D,E)}\;
\sup_{R \in {\mathbb U}}\;
\;\;\Xi_n
\quad
\stackrel[\frac{n^2 }{\log d} \to 0,
\frac{p}{n}\to \rho]{ n\to\infty}{\longrightarrow}
\quad
0.$$ This statement continues to hold if the restriction $\Delta<\gamma/\sqrt{n}$ in the last display is replaced by $\Delta < g(n)$ provided that $\lim_{n\to\infty} g(n) = 0$. \[Here, the suprema are taken over all full-rank $d\times p$ matrices $M$, all $\vartheta\in {{\mathbb R}}$, all $d$-vectors $\theta$ and $\mu$, all distributions ${\mathcal L}(\epsilon)$ so that $\epsilon$ has mean zero and finite positive variance, and all symmetric and positive definite $d\times d$ matrices $\Sigma$, subject to the indicated restrictions.\]
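Numerically, the Gaussian approximation in the theorem is close to the exact rejection probability computed from the non-central $F$-distribution once $p$ and $n$ are both large. A sketch with illustrative values (recall the theorem's convention that the critical value is $F^{-1}_{p,n-p-1,0}(\alpha)$, so $\alpha$ enters as a quantile level; the particular numbers below are ours):

```python
import numpy as np
from scipy import stats

n, p = 1000, 200
Delta = 0.01                  # local alternative: Delta is of order 1/sqrt(n)
alpha = 0.95                  # quantile level: reject if F_hat > F^{-1}(alpha)

crit = stats.f.ppf(alpha, p, n - p - 1)
# Exact rejection probability under the non-central F with ncp = n * Delta
exact = stats.ncf.sf(crit, p, n - p - 1, n * Delta)
# Gaussian approximation from the theorem
approx = stats.norm.cdf(-stats.norm.ppf(alpha)
                        + np.sqrt(n) * Delta * np.sqrt((1 - p / n) / (2 * p / n)))
print(abs(exact - approx) < 0.05)
```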
\[pinsker\] Write ${\mathcal R}_N$ and ${\mathcal R}_L$ for the prediction risk of the Bayes predictor and of the best linear predictor, respectively, of $y$ given $x$. That is, ${\mathcal R}_N = {{\mathbb E}}[ (y - {{\mathbb E}}[y\|x])^2]$ and ${\mathcal R}_L = {{\mathbb E}}[ (y - (\alpha+\beta'x))^2]$. The results of @Ste18a then entail that, in the setting of Theorem \[t1\], ${\mathcal R}_N/{\mathcal R}_L$ converges to one, uniformly over all the parameters indicated in the last display of that theorem. In fact, the risk-ratio converges to one uniformly even if the restriction on $\Delta$ is removed altogether, and a similar statement holds for the ratio of conditional risks given $x$, i.e., $ {{\mathbb E}}[ (y - {{\mathbb E}}[y\|x])^2\|x]/{{\mathbb E}}[ (y - (\alpha+\beta'x))^2\|x]$. See Theorem 3.1 of @Ste18a for a more general form of this statement under weaker assumptions.
\[rateofp\] Although the asymptotic approximations in Theorem \[t1\] require that $p$ is of the same order as $n$, we point out that the non-central $F$-distribution should still give a reasonable approximation to the distribution of the $F$-statistic, i.e., the expression in should be small, even if $p/n$ is very small and, in particular, if $p$ is fixed while $n$ increases. This situation is further discussed in @Ste15 [p. 31, Section 3.2.2] in a setting where $n\to\infty$, $p$ is fixed and $p/\log d\to 0$. Clearly, the same is not true for the expression in , because the normal approximation to the $F$-distribution is valid only if both degrees of freedom, i.e., $p$ and $n-p-1$, are large. The statement regarding in Theorem \[t1\] coincides with the conclusion of Theorem 1 in @Zho11a, obtained in the correctly specified Gaussian-error case. Moreover, the Gaussian approximation in has the advantage of being easier to interpret than the more complicated distribution function of the non-central $F$-distribution in ; see also the discussion in @Ste16a [Remark 2.4].
Simulation analysis {#numericalresults}
===================
Theorem \[t1\] is an asymptotic result. In this section, we use simulation to study a range of non-asymptotic scenarios and to investigate how quickly the asymptotic approximations become accurate. We consider a rather small sample size of $n=50$ and look at different configurations of the model dimensions $d$ and $p$ with $p<d$, as well as at different points in the parameter space.
The theorem contains two asymptotic statements, one about the distribution of the $F$-statistic and one about the size of the set $\mathbb U$. For the distribution of the $F$-statistic, we compare the rejection probability of the $F$-test under the null hypothesis with the nominal significance level $\alpha=0.05$. The nominal significance level provides a natural benchmark. \[Clearly, one can also investigate the power of the $F$-test through simulation experiments, but, unlike the significance level, it is less obvious what the right benchmark for the power should be.\] In particular, we simulate 1000 independent realizations $F_{j,r}$, $j=1,\dots,1000$ of the $F$-statistic at sample size $n=50$ under the null for each point in parameter space (the index $r$ will be explained shortly), and compare the empirical significance level $\overline{p}_r = 1000^{-1} \sum_{j=1}^{1000} {\mathbf 1}\{ F_{j,r} >
F^{-1}_{p,n-p-1,0}(1-\alpha)\}$ with the nominal level $\alpha$.
Gauging the size of $\mathbb U$ is more difficult, because that set is not given explicitly. We proceed as follows: We fix all the parameters in – except for the orthogonal matrix $R$ in . We then simulate 100 independent realizations $R_r$ of $R$, compute $\overline{p}_r$ as outlined above, $r=1,\dots, 100$, and finally compute $\overline{D}=100^{-1} \sum_{r=1}^{100} |\overline{p}_r - \alpha|$. If $R_r \in \mathbb U$, then $\overline{p}_r$ should be close to $\alpha$, in view of the last display in Theorem \[t1\]. We use $\overline{D}$ and the empirical distribution of the $\overline{p}_r$, $r=1,\dots,100$, as indicators for the size of $\mathbb U$.
The remaining parameters in – and the sub-model matrix $M$ are chosen as follows for any fixed values of $d$ and $p$: We do not include an error term in the true model, i.e., we set $\sigma^2 = 0$, because the effect of misspecification becomes more pronounced when the error variance $\sigma^2$ is small.[^1] \[Note that the case where $\sigma^2=0$ is not covered by Theorem \[t1\], but inspection of the proof shows that our results also apply in this case; cf. Remark \[zerovariance\].\] For $\tilde{z}$, we consider product distributions with zero mean and i.i.d. components from the Student-$t$ distribution with $2$, $3$ and $5$ degrees of freedom, as well as from the centered exponential, uniform, Bernoulli$\{-1,1\}$ and Gaussian distributions. \[Note that the scaling of these distributions is inconsequential, because of the scale-invariance of the $F$-statistic $\hat{F}_n(X,Y)$ in both arguments and the fact that we do not include an error term in the full model, i.e., scaling of $\tilde{z}_i$ is equivalent to scaling of both $y_i = \theta'z_i$ and $x_i = M'z_i$. Similarly, the scaling of $\theta$ and $\Sigma$ has no impact on the value of the $F$-statistic.\] For $\Sigma$, we choose a spiked covariance matrix $\Sigma = U\text{diag}(\lambda_1,\dots,\lambda_d) U'$ with eigenvalues $\lambda_1 = \lambda_2 = 400$ and $\lambda_3=\dots=\lambda_d=1$ and an orthogonal matrix of eigenvectors $U$ chosen randomly from the uniform distribution on the orthogonal group.[^2] The intercept $\vartheta$ and the mean vector $\mu$ are set to zero, for convenience. For the matrix $M$, which describes the working model, we take $M$ equal to the $d\times p$ matrix whose $k$-th column is the $k$-th standard basis vector in $\mathbb R^d$, $1\leq k \leq p$. In other words, we consider a sub-model that includes only the first $p$ regressors (out of $d$).
For the parameter $\theta\in \mathbb R^d$, we need to ensure that the null hypothesis is satisfied, i.e., that $\beta = (M'\Sigma M)^{-1}M'\Sigma\theta = 0$. By construction of $\Sigma$, $M'\Sigma M$ is regular, and we choose $\theta = (I_d-P_{\Sigma M}) V/\|(I_d-P_{\Sigma M}) V\|$, for one realization of $V\thicksim N(0,I_d)$, to guarantee that $M'\Sigma\theta = 0$.
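The design just described can be condensed into a short script. The sketch below (our own code, not the authors' implementation) reproduces one benchmark cell with Gaussian design, where the sub-model is correctly specified and the empirical size should therefore sit near $\alpha$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, d, p, alpha, reps = 50, 10, 5, 0.05, 1000

# Spiked covariance with a random orthogonal eigenbasis (via QR)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
lam = np.r_[400.0, 400.0, np.ones(d - 2)]
Sigma = Q @ np.diag(lam) @ Q.T
Sig_half = Q @ np.diag(np.sqrt(lam)) @ Q.T

M = np.eye(d)[:, :p]                  # sub-model: first p of the d regressors
SM = Sigma @ M                        # theta orthogonal to col(Sigma M) => beta = 0
P_SM = SM @ np.linalg.solve(SM.T @ SM, SM.T)
theta = (np.eye(d) - P_SM) @ rng.standard_normal(d)
theta /= np.linalg.norm(theta)

crit = stats.f.ppf(1 - alpha, p, n - p - 1)
rej = 0
for _ in range(reps):
    z = Sig_half @ rng.standard_normal((d, n))   # Gaussian design, sigma^2 = 0
    y = theta @ z
    W = np.hstack([np.ones((n, 1)), (M.T @ z).T])
    coef, *_ = np.linalg.lstsq(W, y, rcond=None)
    rss1 = np.sum((y - W @ coef) ** 2)
    rss0 = np.sum((y - y.mean()) ** 2)
    rej += ((rss0 - rss1) / p) / (rss1 / (n - p - 1)) > crit

p_bar = rej / reps   # empirical size; near alpha for the Gaussian benchmark
```

Replacing `rng.standard_normal((d, n))` by heavier-tailed i.i.d. draws (e.g. Student-$t$) gives the misspecified cells of the table.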
The results of the simulations are summarized in Table \[tab:NullMod\] and Figures \[fig:Boxplots1\] and \[fig:Boxplots2\]. From Table \[tab:NullMod\], the overall picture is consistent with what our theory predicts. For all distributions except the Gaussian, the average absolute difference between the true (simulated) rejection probabilities and the nominal level decreases as $d$ increases. This phenomenon is most pronounced for the exponential distribution, which has a finite moment generating function around the origin, and is weakest for the $t(2)$-distribution, which does not even have finite variance. For uniformly distributed design, which is bounded, the effect of misspecification on the size of the $F$-test is relatively mild already for small dimensions. In the Gaussian case, all sub-models of the form are correct in the sense that the error $e$ is Gaussian with mean zero and independent of $x$, so that, theoretically, the corresponding panel in Table \[tab:NullMod\] should contain only zeroes. The numbers therefore represent only the simulation error and serve as a benchmark for the other panels. We also see a monotonic increase in the deviation of the size of the $F$-test from the nominal level as the dimension $p$ of the sub-model increases, which is also suggested by our theory. However, if we fix the ratio $p/d=1/2$, i.e., if we move along the staircase pattern in each of the panels, then, except for the heavy-tailed distributions $t(3)$ and $t(2)$, we still see the effect of misspecification decrease as $d$ increases. This suggests that convergence of $n^2 /\log(d)\sim p^2/\log(d)$ to zero, as required in Theorem \[t1\], may not be necessary, at least in the scenarios considered here.
$d\backslash p$ & 1 & 2 & 5 & 25 & $d\backslash p$ & 1 & 2 & 5 & 25\
& && & & &&\
2 & 0.077 & & & & 2 & 0.141 & & &\
4 & 0.056 & 0.076 & & & 4 & 0.093 & 0.140 & &\
10 & 0.032 & 0.047 & 0.066 & & 10 & 0.052 & 0.071 & 0.109 &\
50 & 0.009 & 0.013 & 0.017 & 0.019 & 50 & 0.014 & 0.015 & 0.020 & 0.033\
100 & 0.007 & 0.008 & 0.009 & 0.010 & 100 & 0.009 & 0.009 & 0.012 & 0.015\
200 & 0.006 & 0.007 & 0.006 & 0.008 & 200 & 0.007 & 0.007 & 0.006 & 0.009\
& && & & &&\
2 & 0.188 & & & & 2 & 0.025 & & &\
4 & 0.158 & 0.225 & & & 4 & 0.020 & 0.023 & &\
10 & 0.122 & 0.167 & 0.238 & & 10 & 0.011 & 0.014 & 0.016 &\
50 & 0.062 & 0.084 & 0.116 & 0.123 & 50 & 0.006 & 0.006 & 0.007 & 0.007\
100 & 0.048 & 0.061 & 0.081 & 0.082 & 100 & 0.005 & 0.006 & 0.006 & 0.005\
200 & 0.033 & 0.044 & 0.057 & 0.055 & 200 & 0.005 & 0.005 & 0.005 & 0.006\
& && & & &&\
2 & 0.335 & & & & 2 & 0.005 & & &\
4 & 0.332 & 0.458 & & & 4 & 0.006 & 0.005 & &\
10 & 0.301 & 0.411 & 0.563 & & 10 & 0.005 & 0.005 & 0.006 &\
50 & 0.250 & 0.335 & 0.456 & 0.518 & 50 & 0.005 & 0.006 & 0.005 & 0.005\
100 & 0.228 & 0.314 & 0.412 & 0.457 & 100 & 0.005 & 0.005 & 0.006 & 0.005\
200 & 0.212 & 0.286 & 0.383 & 0.407 & 200 & 0.005 & 0.005 & 0.006 & 0.006\
![ Box-plots of simulated rejection probabilities $(\bar{p}_r)_{r=1}^{100}$ (gray crosses) of the $F$-test with $n=50$, $p=5$ and $d=10,50,100,1000$, for different design distributions. Every $r\in\{1,\dots, 100\}$ corresponds to a different $R_r$ applied to the standardized design $\tilde{z}$. []{data-label="fig:Boxplots1"}](t3t5Unif.pdf){width="\textwidth"}
![ Box-plots of simulated rejection probabilities $(\bar{p}_r)_{r=1}^{100}$ (gray crosses) of the $F$-test with $n=50$, $p=5$ and $d=10,50,100,1000$, for Bernoulli$\{-1,1\}$ and exponential design distributions and a benchmark panel of Binomial samples with different success probabilities. []{data-label="fig:Boxplots2"}](BernExpBench.pdf){width="\textwidth"}
In Table \[tab:NullMod\], the effect of the orthogonal matrix $R$ on the actual significance level of the $F$-test was compressed into one summary statistic, namely the mean absolute deviation from the nominal significance level. To get a more comprehensive picture, Figures \[fig:Boxplots1\] and \[fig:Boxplots2\] show plots of the sample $(\bar{p}_r)_{r=1}^{100}$ (gray crosses) and superimposed box-plots for different design distributions. Due to limited space, we present only the results for sub-models of dimension $p=5$. In view of Theorem \[t1\], we expect that the size of $\mathbb U$, i.e., of the family of matrices $R$ for which and become small, grows with $d$. Consequently, we expect many of the $\bar{p}_r$ to be close to $\alpha=0.05$. On the other hand, if $d$ is not large, then many matrices $R$ will lead to a biased rejection probability due to misspecification of the working model. This is exactly what we observe in Figures \[fig:Boxplots1\] and \[fig:Boxplots2\]. For small values of $d$, the rejection probabilities $\bar{p}_r$ are systematically biased, and we see some variability in their values due to the variation in the choice of $R_r$ (compare the benchmark panel in Figure \[fig:Boxplots2\]). Both the bias and the variability in $\bar{p}_r$ decrease as $d$ increases, which is what we expected: for large $d$, most $R_r$ are favorable, and we obtain small misspecification errors uniformly over these favorable $R_r$. What is remarkable is the systematic over-rejection in the case of the $t$- and exponential distributions and the under-rejection for Bernoulli and uniformly distributed designs. We currently cannot explain the mechanism responsible for this pattern. Finally, the benchmark panel shows i.i.d. samples $(\tilde{p}_r)_{r=1}^{100}$ with $\tilde{p}_r \thicksim \text{Binomial}(1000,\alpha)/1000$ and success probabilities $\alpha = 0.05, 0.1, 0.15,0.2$.
This gives some idea of what portion of the variability observed in the other panels is due to random simulation error. Clearly, the results in the benchmark panel could equivalently have been obtained by repeating the previous simulation for the $F$-test with Gaussian design at significance levels $\alpha= 0.05, 0.1, 0.15, 0.2$.
Acknowledgments {#acknowledgments .unnumbered}
===============
The first author’s research was partially supported by FWF projects P 26354-N26 and P 28233-N32.
Proofs
======
We begin with some preliminary considerations that connect this paper with the results of @Ste18b. In particular, we use Theorem 2.1, parts (ii) and (iii), in that reference with $Z=\tilde{z}$ and $\tau=1/2$: If $f_{\tilde{z}}\in\mathcal F_{d,20}(D,E)$, then the assumptions of that result are satisfied in view of Example 3.1 in @Ste18b. The theorem guarantees existence of a Borel subset $\mathbb G = \mathbb G(f_{\tilde{z}})\subseteq \mathcal V_{d,p}$ of the Stiefel manifold $\mathcal V_{d,p}$ of order $d\times p$, that depends on the density $f_{\tilde{z}}$, such that for all $t>0$ both $$\sup_{B\in\mathbb G} {{\mathbb P}}\left(\big\| {{\mathbb E}}[\tilde{z}\|
B'\tilde{z}] - BB'\tilde{z}\big\| > t \right)$$ and $$\sup_{B\in\mathbb G} {{\mathbb P}}\left(
\big\|{{\mathbb E}}[\tilde{z}\tilde{z}'\| B'\tilde{z}] -
(I_d-B B' + B B' \tilde{z}\tilde{z}' B B')\big\|
> t \right)$$ are bounded from above by $$\begin{aligned}
\label{approxMeanVar}
\frac{1}{t} d^{-1/20} + 4\gamma\frac{p}{\log{d}},\\\end{aligned}$$ such that $$\begin{aligned}
\label{eq:G1size}
\nu_{d,p}(\mathbb G^c) \;\le\; \kappa d^{-(1-20\gamma\frac{p}{\log{d}})/20},\end{aligned}$$ where $\nu_{d,p}$ denotes the uniform distribution on the Stiefel manifold, and such that the set $\mathbb G$ is right-invariant under the action of $\mathcal O_p$, i.e., $\mathbb G R = \mathbb G$ whenever $R \in \mathcal O_p$. Here, the constant $\gamma = \gamma(D)$ depends only on $D$, and the constant $\kappa = \kappa(E)$ depends only on $E$.
For any full rank $d\times p$ matrix $M$, any symmetric positive definite $d\times d$ matrix $\Sigma$ and $f_{\tilde{z}}\in\mathcal F_{d,20}(D,E)$, we define the set $$\mathbb U \;:=\; \mathbb U(M,\Sigma, f_{\tilde{z}}) \;:=\;
\left\{ R\in \mathcal O_d : R'\Sigma^{1/2} M (M'\Sigma M)^{-1/2}
\in \mathbb G(f_{\tilde{z}})\right\}.$$ Now take a random matrix $U$ that is uniformly distributed on $\mathcal O_d$ and another random matrix $V$ that is uniformly distributed on $\mathcal O_p$, such that $U$ and $V$ are independent, and note that by right-invariance of $\mathbb G$, $$\begin{aligned}
\nu_d(\mathbb U) \; &= \; {{\mathbb P}}(U \Sigma^{1/2}M(M'\Sigma M)^{-1/2} \in \mathbb G)\\
\;&=\; {{\mathbb P}}(U \Sigma^{1/2}M(M'\Sigma M)^{-1/2} V \in \mathbb G)
\;=\; \nu_{d,p}(\mathbb G),\end{aligned}$$ because $\Sigma^{1/2}M(M'\Sigma M)^{-1/2}\in\mathcal V_{d,p}$ and $\nu_{d,p}$ is characterized by left and right invariance under the appropriate orthogonal groups. It follows that $\nu_d(\mathbb U^c)$ is bounded by the expression on the right-hand side of whenever $f_{\tilde{z}} \in \mathcal F_{d,20}(D,E)$, which establishes the first claim of Theorem \[t1\]. The proof of the second claim is more elaborate.
The results in the preceding paragraph also show that the error $e$ in the working model is such that ${{\mathbb E}}[e\|x]$ is approximately zero and $\operatorname{Var}[e\|x]$ is approximately constant, provided that $R \in \mathbb U$: We first re-write the error $e$ in a convenient form. Set $\tilde{\theta} = R'\Sigma^{1/2} \theta$ and $\tilde{M} = R' \Sigma^{1/2} M$. Then it is easy to see that $e =\tilde{\theta}'(I_d-P_{\tilde{M}}) \tilde{z} + \epsilon$ and hence $$\begin{aligned}
\label{tmp1}
\begin{split}
{{\mathbb E}}[e\|x] &\quad=
\quad\tilde{\theta}'(I_d-P_{\tilde{M}})
\Big\{{{\mathbb E}}[\tilde{z}\|P_{\tilde{M}}\tilde{z}] -
P_{\tilde{M}}\tilde{z}\Big\} \quad\text{and}
\\
{{\mathbb E}}[e^2\|x] - s^2 &\quad=\quad\\
&\hspace{-1cm}
\tilde{\theta}'(I_d-P_{\tilde{M}}) \Big\{
{{\mathbb E}}[\tilde{z}\tilde{z}'\|P_{\tilde{M}}\tilde{z}]
- ((I_d - P_{\tilde{M}})+P_{\tilde{M}}\tilde{z}\tilde{z}'P_{\tilde{M}})
\Big\}
(I_d-P_{\tilde{M}}) \tilde{\theta};
\end{split}\end{aligned}$$ see also –. Our goal is to show that the expressions in the preceding two displays are approximately zero. To this end, we focus on the expressions in curly brackets and use Cauchy-Schwarz: For each $t>0$, we have $$\begin{aligned}
{{\mathbb P}}( |{{\mathbb E}}[e\|x]| > t) &\quad\leq\quad
{{\mathbb P}}\left(\Big\|{{\mathbb E}}[\tilde{z}\|P_{\tilde{M}}\tilde{z}] -
P_{\tilde{M}}\tilde{z}\Big\|
> t / \|(I_d-P_{\tilde{M}})\tilde{\theta}\|\right) \quad\text{and}\\
{{\mathbb P}}( |{{\mathbb E}}[e^2\|x] - s^2| > t) &\quad\leq\quad
\\ & \hspace{-1cm}
P\left( \Big\| {{\mathbb E}}[\tilde{z}\tilde{z}'\|P_{\tilde{M}}\tilde{z}] -
((I_d - P_{\tilde{M}}) +
P_{\tilde{M}}\tilde{z}\tilde{z}'P_{\tilde{M}})
\Big\| > t/\|(I_d-P_{\tilde{M}})\tilde{\theta}\|^2\right).\end{aligned}$$ Now if $R \in \mathbb U(M,\Sigma,f_{\tilde{z}})$, then it is easy to see that $\tilde{M}(\tilde{M}'\tilde{M})^{-1/2} \in \mathbb G(f_{\tilde{z}})$. Because conditioning on $P_{\tilde{M}}\tilde{z}$ is equivalent to conditioning on $(\tilde{M}'\tilde{M})^{-1/2} \tilde{M}'\tilde{z}$, it follows that ${{\mathbb P}}( |{{\mathbb E}}[e\|x]| > t)$ is bounded from above by with $t$ replaced by $t / \|(I_d-P_{\tilde{M}})\tilde{\theta}\|$ and that ${{\mathbb P}}( |{{\mathbb E}}[e^2\|x] - s^2| > t)$ is bounded by with $t$ replaced by $t/\|(I_d-P_{\tilde{M}})\tilde{\theta}\|^2$.
The consideration in the preceding paragraph suggests that the effect of misspecification in , where ${{\mathbb E}}[e\|x]$ may be non-zero and $\operatorname{Var}[e\|x]$ may be non-constant, may be negligible in an asymptotic setting where $p/\log d$ becomes small, provided that $f_{\tilde{z}} \in \mathcal F_{d,20}(D,E)$ and that $R \in \mathbb U(M,\Sigma,f_{\tilde{z}})$. This idea is formalized in the following two results, which show that the distribution of certain statistics is asymptotically unaffected if the error $e$ is replaced by a substitute error $e^\ast$ that has mean zero and constant variance conditional on $x$. Both results are stated for sequences in which the data-generating model - and the working model are allowed to depend on $n$, that is, in a ‘triangular array’ setting.
\[lemma:LinQuad\] Fix finite positive constants $D$ and $E$. For every $n\in{{\mathbb N}}$, let $p_n\le d_n$ be positive integers so that $n p_n / \log d_n \to 0$ as $n\to\infty$. For each $n$, consider $(y,z,x)$ as in – but with $d_n$ and $p_n$ replacing $d$ and $p$, respectively, with $f_{\tilde{z}} \in \mathcal F_{d_n, 20}(D,E)$ and with $R \in \mathbb U(M,\Sigma,f_{\tilde{z}})$. And for each $n$, consider a sample of $n$ i.i.d. observations $(y_i, z_i, x_i)$, $1\leq i \leq n$, of $(y,z,x)$, stack the values of the individual variables into a vector $Y$ and matrices $Z$ and $X$, respectively, and write $E = Y - \alpha \iota - X \beta = (e_1,\dots, e_n)'$ for the vector of errors from . Finally, define a vector $E^\ast = (e^\ast_1,\dots,e^\ast_n)'$ of substitute errors through $e_i^\ast = s(\operatorname{Var}[e_i\|x_i])^{-1/2}(e_i-{{\mathbb E}}[e_i\|x_i])$. Then, for every $k\in{{\mathbb R}}$ and (possibly random) symmetric idempotent $n\times n$ matrices $P_n$, $$\begin{aligned}
n^k \|E - E^*\|/s \;&\stackrel{p}{\longrightarrow} \; 0
\quad\text{and} \label{eq:LinXi}\\
n^k |E'P_nE - {E^*}'P_nE^*|/s^2\;
&\stackrel{p}{\longrightarrow}\;0,\label{eq:QuadXi}\end{aligned}$$ as $n\to\infty$. As a by-product, we also obtain that $$\begin{aligned}
&\max_{i=1,\dots, n} |\operatorname{Var}[e_i\|x_i]/s^2 - 1| \;
\stackrel{p}{\longrightarrow}\;0.\end{aligned}$$
First, note that $\operatorname{Var}[e_i\|x_i] =
\operatorname{Var}[y_i\|x_i] = \operatorname{Var}[\theta'z_i\|x_i] + \sigma^2 > 0$, so that $e_i^*$ is well defined (almost surely). For the claim in , fix $k\in{{\mathbb R}}$ and $t>0$, and consider ${{\mathbb P}}(n^k\|E-E^*\|/s >t) \le
n {{\mathbb P}}(n^{2k+1} |e_1-e_1^*|^2/s^2>t^2)$. Now, using the simple observation $|\sqrt{\operatorname{Var}[e_1\|x_1]}-s| =
|\operatorname{Var}[e_1\|x_1]-s^2|/|\sqrt{\operatorname{Var}[e_1\|x_1]}+s| \le
|\operatorname{Var}[e_1\|x_1]-s^2|/s$, we get $$\begin{aligned}
|e_1-e_1^*|/s &=
(s^2 \operatorname{Var}[e_1\|x_1])^{-1/2}
\left|e_1(\sqrt{\operatorname{Var}[e_1\|x_1]} - s) +
s{{\mathbb E}}[e_1\|x_1]\right| \\
&\le
\frac{s}{\sqrt{\operatorname{Var}[e_1\|x_1]}}
\left( \frac{|e_1|}{s}\frac{|\operatorname{Var}[e_1\|x_1] -
s^2|}{s^2} + \frac{|{{\mathbb E}}[e_1\|x_1]|}{s}\right),\end{aligned}$$ and furthermore $$\begin{aligned}
&{{\mathbb P}}(n^{2k+1} |e_1-e_1^*|^2/s^2>t^2)\nonumber\\
&\le
{{\mathbb P}}\left(
n^{k+1/2}\left| \frac{|e_1|}{s}\frac{|\operatorname{Var}[e_1\|x_1]
- s^2|}{s^2} + \frac{|{{\mathbb E}}[e_1\|x_1]|}{s}\right|
> t/\sqrt{2}
\right)\notag\\
&\quad\quad + {{\mathbb P}}\left(
\frac{s^2}{\operatorname{Var}[e_1\|x_1]} > 2
\right) \notag\\
&\le
{{\mathbb P}}\left(
\left|\frac{\operatorname{Var}[e_1\|x_1]}{s^2} - 1\right|> \frac{1}{2}
\right)
+
{{\mathbb P}}\left(
n^{k+1/2} \frac{|e_1|}{s}\frac{|\operatorname{Var}[e_1\|x_1] - s^2|}{s^2}
> t/2^{3/2}
\right) \notag\\
&\quad\quad+
{{\mathbb P}}\left(
n^{k+1/2}\frac{|{{\mathbb E}}[e_1\|x_1]|}{s}
> t/2^{3/2}
\right)\notag\\
\begin{split} \label{tmp2}
&\le
{{\mathbb P}}\left(
\frac{|\operatorname{Var}[e_1\|x_1]-s^2|}{s^2}> \frac{1}{2}
\right)
+
{{\mathbb P}}\left(
n^{k+3/2} \frac{|\operatorname{Var}[e_1\|x_1] - s^2|}{s^2}
> t/2^{3/2}
\right)\\
&\quad\quad+{{\mathbb P}}\left(
\frac{|e_1|}{s} > n
\right)
+
{{\mathbb P}}\left(
n^{k+1/2}\frac{|{{\mathbb E}}[e_1\|x_1]|}{s}
> t/2^{3/2}
\right).
\end{split}\end{aligned}$$ The claim will follow if each of the four terms in is of the order $o(1/n)$. Because $f_{\tilde{z}}\in\mathcal F_{d_n,20}(D,E)$ and $R \in \mathbb U(M,\Sigma,f_{\tilde{z}})$, the considerations leading up to Lemma \[lemma:LinQuad\] apply. Also note that $\|(I_d-P_{\tilde{M}})\tilde{\theta}\|^2 \leq s^2$. For the last term in , we obtain, for every $t>0$, that $$\begin{aligned}
{{\mathbb P}}\left(
n^{k+1/2}\frac{|{{\mathbb E}}[e_1\|x_1]|}{s} > t
\right)
\le
t^{-1}n^{k+1/2}d_n^{-1/20} + 4 \gamma \frac{p_n}{\log{d_n}},\end{aligned}$$ and the upper bound goes to zero as $o(1/n)$ in view of the assumption that $n p_n / \log d_n \to 0$. For the second-to-last term in , we have ${{\mathbb P}}(|e_1|/s>n) \leq n^{-2} {{\mathbb E}}[e_1^2/s^2] = 1/n^2$. For the second term in , we proceed like for the last term in . In particular, we obtain, for any $t>0$, that $$\begin{aligned}
\label{eq:CondVarop1}
&{{\mathbb P}}\left(
n^{k+3/2}\frac{|\operatorname{Var}[e_1\|x_1]-s^2|}{s^2} > t
\right)\\
&\quad\le
{{\mathbb P}}\left(
n^{k+3/2}\frac{|{{\mathbb E}}[e_1^2\|x_1]-s^2|}{s^2} > t/2
\right)
+
{{\mathbb P}}\left(
n^{k+3/2}\frac{|{{\mathbb E}}[e_1\|x_1]|^2}{s^2} > t/2
\right)\notag\\
&\quad\le
\frac{2}{t} n^{k+3/2} d_n^{-1/20} +
\left(\frac{2}{t} n^{k+3/2}\right)^{1/2} d_n^{-1/20} +
8 \gamma \frac{ p_n }{\log d_n}.\end{aligned}$$ Again, this upper bound goes to zero as $o(1/n)$ because $n p_n/\log d_n \to 0$. Note that the considerations in the preceding display also entail that ${{\mathbb P}}(\max_{i=1,\dots,n} |\operatorname{Var}[e_i\|x_i]/s^2 - 1|>t) \le
n {{\mathbb P}}(|\operatorname{Var}[e_1\|x_1]/s^2 - 1|>t) \to 0$.
For the claim in , write $$\begin{aligned}
|E'P_nE - {E^*}'P_nE^*|
&=
|(E - E^*)'P_nE + {E^*}'P_n(E-E^*) |\\
&\le
\|E - E^*\| \|E\| + \|E - E^*\| \|E^*\|,\end{aligned}$$ and note that by definition of $e_1^*$ and the variance decomposition formula, we have ${{\mathbb E}}[e_1^*] = {{\mathbb E}}[{{\mathbb E}}[e_1^*\|x_1]] = 0$ and $\operatorname{Var}[e_1^*] = {{\mathbb E}}[\operatorname{Var}[e_1^*\|x_1]] + \operatorname{Var}[{{\mathbb E}}[e_1^*\|x_1]] = s^2$, so that by independence $\|E^*\|/s = O_{{\mathbb P}}(\sqrt{n})$. Multiplying the previous display by $n^k/s^2$ and applying finishes the proof of the second claim.
\[Fstat\] Fix $K \in (0,\infty)$ and an integer $l\geq -1$. Under the assumptions and in the notation of Lemma \[lemma:LinQuad\], assume that ${{\mathbb E}}[| \epsilon/\sigma|^4] \leq K$ for each $n$, that $\Delta = \operatorname{Var}(\beta'x)/\operatorname{Var}(e) = O(n^l)$ and that $\limsup_{n\to\infty} p_n/n < 1$. Define substitute data $Y^\ast = \iota \alpha + X\beta + E^\ast$. Then, for every $k\in \mathbb R$, we have $$n^k \left( \hat{F}_n(X,Y) - \hat{F}_n(X,Y^\ast)
\right)\quad\stackrel{p}{\longrightarrow}\quad 0$$ as $n\to\infty$.
The idea is to use Lemma \[lemma:LinQuad\] to approximate $\hat{F}_n(X,Y)$ by $\hat{F}_n(X,Y^*)$. In particular, we will show that on some event $C_n$ to be defined below, we have $$n^k\left|\hat{F}_n(X,Y) - \hat{F}_n(X,Y^*)\right|
\le n^{k+l+1}|\delta_n^{(1)}-1|\hat{F}_n(X,Y^*)/n^{l+1} + n^k|\delta_n^{(2)}|,$$ where $\delta_n^{(1)}$ converges to one and $\delta_n^{(2)}$ converges to zero, both at an arbitrary polynomial rate in $n$, and where $\hat{F}_n(X,Y^*)/n^{l+1} = O_{{\mathbb P}}(1)$. The probability of $C_n$ will be shown to converge to one. The claim of the lemma follows from this.
Set $U = [\iota, X]$, where $\iota=(1,\dots,1)'\in{{\mathbb R}}^n$. With this, define the event $C_n = \{\det{U'U}\ne0, E'(I_n-P_{U})E>0,
{E^*}'(I_n-P_{U})E^*>0\}$. On $C_n$, by block matrix inversion, we have $[0,I_{p_n}](U'U)^{-1}U' = [X'(I_n-P_\iota)X]^{-1}X'(I_n-P_\iota)$. Using the abbreviation $V=(I_n-P_\iota)X$, we thus see that $\hat{\beta} = \beta + (V'V)^{-1} V' E$ and that the $F$-statistic $\hat{F}_n(X,Y)$ can be written as $$\begin{aligned}
\hat{F}_n(X,Y) &=
\frac{n-p_n-1}{p_n} \frac{\|V \hat{\beta}\|^2}{ \|(I-P_U)Y\|^2}
\;=\;
\frac{n-p_n-1}{p_n}
\frac{E'P_VE + 2E'V\beta + \beta'V'V\beta}
{E'(I_n-P_U)E}\\
&=
\frac{{E^*}'(I_n-P_U)E^*}{E'(I_n-P_U)E} \hat{F}_n(X,Y^*)
\; + \;
\frac{E'P_VE - {E^*}'P_V{E^*} + 2(E-{E^*})'V\beta}
{p_n{E}'(I_n-P_U)E/(n-p_n-1)}.\end{aligned}$$ This establishes a representation $\hat{F}_n(X,Y) = \delta_n^{(1)} \hat{F}_n(X,Y^*) + \delta_n^{(2)}$ on $C_n$. On the complement of $C_n$, we set $\delta_n^{(1)} = \delta_n^{(2)}=0$, say. We next show that for every fixed $k\in{{\mathbb R}}$, $n^k(\delta_n^{(1)}-1) =o_{{\mathbb P}}(1)$ and $n^k\delta_n^{(2)} =o_{{\mathbb P}}(1)$.
To verify the claimed properties of these quantities, on $C_n$, consider first $$\begin{aligned}
\delta_n^{(1)} -1 =
\frac{{E^*}'(I_n-P_U)E^*-E'(I_n-P_U)E}{s^2
(n-p_n-1)}\frac{s^2(n-p_n-1)}{E'(I_n-P_U)E}.\end{aligned}$$ Using Lemma \[lemma:LinQuad\], we see that the first fraction in this representation multiplied by $n^k$ converges to zero in probability. The second fraction obviously equals $s^2 / \hat{s}^2$. Define $\hat{s}^{*2}$ like $\hat{s}^2$ (see the discussion following ) but with $Y^*$ replacing $Y$. We show that $\hat{s}^2/s^2 =
\hat{s}^{*2}/s^2 + (\hat{s}^2 -
\hat{s}^{*2})/s^2 \to 1$ in probability. To see this, first note that the convergence to zero of $(\hat{s}^2 - \hat{s}^{*2})/s^2$ follows again from Lemma \[lemma:LinQuad\]. For the ratio $\hat{s}^{*2}/s^2$, convergence to $1$ in probability follows, e.g., from Lemma C.1 in @Ste16a, upon verifying its assumptions. To this end, it remains to show that $n^{-1} \sum_{i=1}^n {{\mathbb E}}[ (e^*_i/s)^4\|x_i] = O_{{\mathbb P}}(1)$. Using $(a+b)^4\le 2^{3}(a^4+b^4)$, for $a,b\in{{\mathbb R}}$, we have $$\begin{aligned}
\frac{1}{n}\sum_{i=1}^n {{\mathbb E}}[(e_i^*/s)^4\|x_i]
&\le
\max_{j=1,\dots,n}\left(\frac{s^2}{\operatorname{Var}[e_j\|x_j]}\right)^2
\frac{1}{n}\sum_{i=1}^n {{\mathbb E}}[(e_i/s-{{\mathbb E}}[e_i/s\|x_i])^4\|x_i] \\
&\le
\max_{j=1,\dots,n}\left(\frac{s^2}{\operatorname{Var}[e_j\|x_j]}\right)^2
2^4 \frac{1}{n}\sum_{i=1}^n {{\mathbb E}}[(e_i/s)^4\|x_i].\end{aligned}$$ The maximum in the preceding display converges to one in probability if $\min_j \operatorname{Var}[e_j/s\|x_j]$ converges to one in probability, which follows from Lemma \[lemma:LinQuad\]. The arithmetic mean of the conditional fourth moments is $O_{{\mathbb P}}(1)$ if the unconditional mean of fourth moments is bounded in $n$. To this end, note that we have $e = \tilde{\theta}'(I_d - P_{\tilde{M}}) \tilde{z}+\epsilon$ and $s^2 = \|(I_d-P_{\tilde{B}})\tilde{\theta}\|^2 +\sigma^2$; cf. and the discussion right before . With this, we get $$\begin{aligned}
(e_i/s)^4
&= \left( \tilde{\theta}'(I_d-P_{\tilde{M}})\tilde{z}_i /s + \epsilon_i/s\right)^4
\le
2^3[(\tilde{\theta}'(I_d-P_{\tilde{B}})\tilde{z}_i/s)^4 +
(\epsilon_i/s)^4]\\
&\le
2^3[(\tilde{\theta}'(I_d-P_{\tilde{B}})\tilde{z}_i/\|
\tilde{\theta}'(I_d-P_{\tilde{B}})\|)^4 + (\epsilon_i/\sigma)^4],\end{aligned}$$ and take expectations. The claim follows now from ${{\mathbb E}}[(\epsilon_i/\sigma)^4]\le K$ and the fact that the fourth spherical moment of $\tilde{z}_i$ is uniformly bounded in view of Rosenthal’s inequality [@Ros70 Theorem 3] and the assumption that $f_{\tilde{z}} \in \mathcal F_{d_n,20}(D,E)$. Note that this also entails ${{\mathbb P}}(C_n^c) \le {{\mathbb P}}(\hat{s}^{*2}=0)+{{\mathbb P}}({\hat{s}_n}^2=0)
\le {{\mathbb P}}(|\hat{s}^{*2}/s^2 - 1|>1/2)+
{{\mathbb P}}(|{\hat{s}}^2/s^2 - 1|>1/2) \to 0$.
To see that also $\delta_n^{(2)}$ behaves as desired, first note that on $C_n$, $$\begin{aligned}
n^k \delta_n^{(2)} =
\frac{n^k}{p_n}
\left(
\frac{E'P_V E - {E^*}'P_V{E^*} }{s^2} + \frac{2(E-{E^*})'V\beta}{s^2}
\right)
\frac{s^2}{\hat{s}^2}.\end{aligned}$$ The factor $n^k/p_n$ can be bounded by $\kappa n^{k-1}$ for some constant $\kappa$ by assumption; the ratio $s^2/\hat{s}^2$ was shown to converge to one in probability in the preceding paragraph. The difference of quadratic forms converges to zero in probability by Lemma \[lemma:LinQuad\], even when multiplied by $\kappa n^{k-1}$. Noting that $\|V \beta\| = \|(I_n-P_\iota) X \beta \| \leq
\|(I_n - P_\iota) X (\tilde{M}'\tilde{M})^{-1/2}\|
\| (\tilde{M}'\tilde{M})^{1/2} \beta\|$, the scaled second term in parentheses, i.e., $(n^k/p_n) 2 (E-E^*)' V \beta/s^2$, can be bounded by $$\begin{aligned}
2 \kappa n^{k+l/2} \frac{\|E-{E^*}\|}{s}
\frac{\|(\tilde{M}'\tilde{M})^{1/2}\beta\|}{s n^{l/2}}
\left\|(I_n-P_\iota)X(\tilde{M}'\tilde{M})^{-1/2}\right\|/n, \end{aligned}$$ where $n^{k+l/2}\|E-E^*\|/s$ converges to zero in probability by Lemma \[lemma:LinQuad\] and $n^{-l}\beta'(\tilde{M}'\tilde{M})\beta/s^2 = n^{-l} \Delta = O(1)$ by assumption. It remains to show that the largest singular value of $(I_n-P_\iota)X(\tilde{M}'\tilde{M})^{-1/2}/n$ is bounded in probability. Due to the projection onto the orthogonal complement of $\iota$, the distribution of this quantity does not depend on the parameter $\mu$, which is why we may assume that $\mu=0$ for this part of the argument. Abbreviate $\bar{X} = X (\tilde{M}'\tilde{M})^{-1/2}$, $\bar{x}_i = (\tilde{M}'\tilde{M})^{-1/2}x_i$ and consider $\|(I_n-P_\iota)\bar{X}/n\|^2 \le \operatorname{trace}(\bar{X}'\bar{X}/n^2) =
\sum_{i=1}^n \|\bar{x}_i\|^2/n^2$. Taking expectation, noting that ${{\mathbb E}}[\|\bar{x}_1\|^2] = p_n$ and $p_n/n=O(1)$, we arrive at the desired boundedness in probability.
It remains to show that $\hat{F}_n(X,Y^*)/n^{l+1}=O_{{\mathbb P}}(1)$. To this end, recall that $\hat{s}^{*2}/s^2 \to 1$ in probability, and one easily verifies that $$\begin{aligned}
{{\mathbb E}}\left[\frac{ \hat{s}^{*2} }{s^2}\hat{F}_n(X,Y^*)/n^{l+1} \right]
&=
{{\mathbb E}}\left[
({E^*}'P_V{E^*} + 2{E^*}'V\beta + \beta'V'V\beta)/(p_ns^2n^{l+1})
\right]\\
&=
\frac{1}{n^{l+1}} + \frac{n-1}{np_n} \frac{\Delta}{n^l}
= O(1);\end{aligned}$$ here, the first equality is obtained by arguing as in the first paragraph of the proof but with $Y^\ast$ replacing $Y$, and the second equality follows upon noting that $\beta'V'V\beta = \operatorname{trace}((I_n-P_\iota) X\beta \beta'X')$ and that $X\beta$ is a vector with i.i.d. components, each of which has variance $\beta'M'\Sigma M\beta = s^2 \Delta$.
Define $\mathbb U = \mathbb U(M,\Sigma,f_{\tilde{z}})$ as in the beginning of the appendix and note that the first statement in the theorem, concerning $\nu_d(\mathbb U)$, has already been established there. For the second statement, concerning $\Xi_n$, let $p_n\leq d_n$ be positive integers so that $n^2 p_n/\log d_n \to 0$ and so that $p_n/n \to \rho \in (0,1)$ as $n\to\infty$. For each $n$, consider a sample of i.i.d. observations $(y_i, z_i, x_i)$, $1\leq i \leq n$, as in Lemma \[lemma:LinQuad\], so that the underlying quantities (i.e., $M$, $\vartheta$, $\theta$, $\mathcal L(\epsilon)$, $\mu$, $\Sigma$, $\Delta$, $f_{\tilde{z}}$, and $R$) satisfy the restrictions in the suprema in the last display of Theorem \[t1\]. For given $M$, we stress that the restriction on $\Delta$ implicitly also restricts the parameters $\theta$, $\Sigma$ and $\sigma^2$; see the definition of $\Delta$ at the beginning of Section \[mainresult\] as well as the relations in . We have to show that $\Xi_n \to 0$ as $n\to\infty$.
Set $a_n = 2(1/p_n+1/(n-p_n-1))$ and $b_n = \sqrt{\frac{(1-(p_n+1)/n)(1-1/n)}{2p_n/n}}$ for each $n$, and define $Y^\ast$ for each $n$ as in Lemma \[Fstat\]. We first show that $$\begin{aligned}
\label{eq:FstarConv}
a_n^{-1/2}(\hat{F}_n(X,Y^*) - 1) - \sqrt{n}\Delta b_n
\quad\xrightarrow[n\to\infty]{w} \quad N(0,1)\end{aligned}$$ by verifying the assumptions of Theorem 2.1(i) in @Ste16a for the sample $(y_i^*,x_i)_{i=1}^n$, with the symbols $s_n$, $\Delta_\gamma$ and $R_0$ in that reference equal to $a_n$, $\Delta$, and $[0,I_{p_n}]$, respectively. In particular, we need to verify conditions (A1).(a,b,c,d) and (A2) in that reference. The design conditions (A1).(a,c,d) are easily verified by use of Lemma A.2(i) in @Ste16a. And our assumptions that $f_{\tilde{z}}\in\mathcal F_{d_n,20}(D,M)$ and that $p_n<n-1$ imply condition (A1).(b). Assumption (A2) on the scaled errors $e_i^*/s$ is established by an argument similar to the one also used in the third paragraph of the proof of Lemma \[Fstat\] but for the $(8+\kappa)$-th moment instead of the fourth moment: Simply decompose $e_i^* = e^\circ_i \tilde{{{\varepsilon}}}_i$, with $e^\circ_i = \sqrt{s^2/\operatorname{Var}[e_i\|x_i]}$ and $\tilde{{{\varepsilon}}}_i = e_i-{{\mathbb E}}[e_i\|x_i]$, and use Lemma \[lemma:LinQuad\] as before to get $\max_{i=1,\dots,n}e^\circ_i \to 1$ in probability. Then, the assumption that ${{\mathbb E}}[|\epsilon/\sigma|^{8+\kappa}]\le K$ and the fact that the marginals of $\tilde{z}\in \mathcal F_{d_n,20}(D,M)$ have bounded 20th moment, together with Rosenthal’s inequality establish the boundedness of ${{\mathbb E}}[|\tilde{{{\varepsilon}}}_i/s|^{8+\kappa}]$, which is sufficient for (A2). Using Lemma \[Fstat\] and noting that $a_n^{-1/2} = n^k(1+o(1))$ for some $k\in \mathbb R$, it follows that continues to hold with $\hat{F}_n(X,Y)$ replacing $\hat{F}_n(X,Y^*)$.
Now standard arguments conclude the proof: First, note that an appropriately scaled and centered $F$-distributed random variable $\mathcal F_{p_n,n-p_n-1,n\Delta}$ with $p_n$ and $n-p_n-1$ degrees of freedom and non-centrality parameter $n\Delta$ is also asymptotically normal, i.e., $$\begin{aligned}
\label{eq:Fconv}
a_n^{-1/2}(\mathcal F_{p_n,n-p_n-1,n\Delta} - 1) -
\sqrt{n}\Delta b_n\;\xrightarrow[n\to\infty]{w}\; N(0,1),\end{aligned}$$ because $p_n/n\to \rho\in(0,1)$ implies that $p_n\to\infty$. Hence, we have $$\begin{aligned}
&\sup_{t\in{{\mathbb R}}} \left|{{\mathbb P}}\left(\hat{F}_n(X,Y) \le t\right)
-
{{\mathbb P}}(\mathcal F_{p_n,n-p_n-1,n\Delta}\leq t) \right|\\
&\quad=
\sup_{t\in{{\mathbb R}}} \left|
{{\mathbb P}}\left(a_n^{-1/2}(\hat{F}_n(X,Y)-1) -\sqrt{n}\Delta b_n
\le t \right)\right.\\
&\quad\quad\quad\quad\quad-\left.
{{\mathbb P}}\left(a_n^{-1/2}(\mathcal F_{p_n,n-p_n-1,n\Delta}-1) -
\sqrt{n}\Delta b_n \le t \right)
\right|\\
&\quad\leq
\sup_{t\in{{\mathbb R}}} \left|
{{\mathbb P}}\left(a_n^{-1/2}(\hat{F}_n(X,Y)-1) -
\sqrt{n}\Delta b_n \le t \right) - \Phi(t)\right|\\
&\quad\quad\quad+\sup_{t\in{{\mathbb R}}}
\left|
{{\mathbb P}}\left(a_n^{-1/2}(\mathcal F_{p_n,n-p_n-1,n\Delta}-1) -
\sqrt{n}\Delta b_n \le t \right) - \Phi(t)
\right|,\end{aligned}$$ and the last two suprema converge to zero in view of Polya’s theorem, which establishes that $\Xi_n \to 0$ in case $\Xi_n$ equals . Finally, it is elementary to verify that $\Xi_n$ also converges to zero in case $\Xi_n$ equals : This follows from with $\hat{F}_n(X,Y)$ replacing $\hat{F}_n(X,Y^*)$, because the quantiles of the central $F$-distribution satisfy $a_n^{-1/2}(F^{-1}_{p_n,n-p_n-1,0}(\alpha)-1) \to \Phi^{-1}(\alpha)$.
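The central case ($\Delta=0$) of the normal approximation to the $F$-statistic used here can be checked by simulation. The following standard-library sketch (sample sizes are arbitrary illustrative choices, not taken from the paper) draws $\mathcal F_{p_n,n-p_n-1,0}$ as a ratio of scaled chi-squares and standardizes with $a_n^{-1/2}$ as in the display above:

```python
import random
import statistics

random.seed(1)
n, p = 400, 160               # illustrative sizes with p/n in (0, 1)
m = n - p - 1                 # denominator degrees of freedom
a_n = 2.0 * (1.0 / p + 1.0 / m)

def chi2(df):
    """Chi-square draw as a sum of squared standard normals."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

# Central F draws, standardized as a_n^{-1/2}(F - 1).
reps = 1500
z = [((chi2(p) / p) / (chi2(m) / m) - 1.0) / a_n ** 0.5 for _ in range(reps)]

mean, sd = statistics.mean(z), statistics.stdev(z)
# mean should land near 0 and sd near 1 if the approximation holds
```

With these sizes, the simulated mean and standard deviation come out close to $0$ and $1$, consistent with the stated limit; the exact mean of $F_{p,m}$ is $m/(m-2)$, slightly above one, which accounts for the small positive bias one sees at finite $n$.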
\[zerovariance\]Inspection of the proof reveals that the assumption that $\sigma^2$ is positive is used only to guarantee that $\operatorname{Var}[e\|x]>0$ almost surely (and hence also $s^2 = \operatorname{Var}[e] > 0$). If this assumption is dropped, we thus see that $\Xi_n$ (defined in Theorem \[t1\]) converges to zero along sequences of parameters as used in the proof of Theorem \[t1\], provided that $\operatorname{Var}[\theta'z\|x] > 0$ almost surely for each $n$ (as then $\operatorname{Var}[e\|x] = \operatorname{Var}[y\|x]>0$ a.s.).
[^1]: Note that if the error variance $\sigma^2 = \operatorname{Var}[\epsilon_i]$ in the true model $y_i = \theta'z_i + \epsilon_i$ is overly large, i.e., much larger than $\theta' \Sigma\theta$, then the scaled true model is essentially given by $y_i/\sigma \approx \epsilon_i/\sigma$. Since the $F$-statistic is scale-invariant and $\epsilon$ is independent of $X$, we then have $\hat{F}(X,Y) = \hat{F}(X,Y/\sigma) \approx
\hat{F}(X,(\epsilon_i)_{i=1}^n/\sigma)
= \hat{F}(X,(\epsilon_i)_{i=1}^n)$. In that case, the $F$-statistic will essentially follow the null-distribution and we expect a rejection probability close to the nominal level, irrespective of $\theta$ and $R$.
[^2]: The spiked covariance model corresponds to a factor model where the identity matrix is perturbed by a low rank matrix. It has received much attention in the literature on high dimensional random matrices [e.g., @Bai06a; @Cai13a; @Don13a; @Joh01a]. We have repeated the simulations also with covariance matrices of an AR$(1)$ process and obtained essentially the same results.
|
{
"pile_set_name": "ArXiv"
}
|
Q:
Irreducible factors of a polynomial in a Galois extension
Let $E|F$ be a finite Galois extension and $f(x) \in F[x]$ an irreducible polynomial. Prove that the irreducible factors of $f(x)$ in $E[x]$ all have the same degree.
An idea: Let $\phi \in Gal(E|F)$ and $f(x)=g_1(x) \ldots g_m(x)$, where $g_i(x) \in E[x]$ is irreducible. Since $\phi$ fixes $F$, $f(x)=\phi f(x)=\phi g_1(x) \ldots \phi g_m(x)$. Therefore, since factorization is unique, $\phi$ permutes the $g_i$'s. Note that $\phi$ preserves the polynomial's degree. So, if we can show that $Gal(E|F)$ acts transitively on $\{g_1(x), \ldots ,g_m(x)\}$, then we are done.
A:
Let $g_1(x)$ be one of the irreducible factors in $E[x]$. Let $H\le G$ be the subgroup of automorphisms that fixes all the coefficients of $g_1$, and let $D=\{\tau_1,\tau_2,\ldots,\tau_k\}$ be a set of representatives of left cosets of $H$.
Consider the polynomial
$$
g(x)=\prod_{i=1}^k(\tau_i g_1)(x).
$$
Fix an element $\phi\in G$. Then $\phi D$ is another set of representatives of left cosets of $H$. In other words, there is a permutation $\alpha\in S_k$ such that
$$
\phi\tau_i=\tau_{\alpha(i)}h_i
$$
for all $i=1,2,\ldots,k,$ and some elements $h_i\in H$. Therefore
$$
\begin{aligned}
(\phi g)(x)&=\prod_{i=1}^k(\phi\tau_i g_1)(x)\\
&=\prod_{i=1}^k(\tau_{\alpha(i)}h_i g_1)(x)\\
&=\prod_{i=1}^k(\tau_{\alpha(i)}g_1)(x)\\
&=g(x).
\end{aligned}
$$
As $\phi$ was arbitrary this means that $g(x)$ is fixed under all of $G$. Therefore $g(x)\in F[x]$.
As you observed, all the polynomials $\tau_i g_1$ are irreducible factors of $f$. As they are pairwise distinct, their product $g(x)$ also divides $f(x)$. But $f(x)$ was assumed to be irreducible, so we must have $g(x)=f(x)$. The claim then follows the way you outlined: $G$ permutes the factors $\tau_i g_1$ transitively, and automorphisms preserve degree.
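As a concrete sanity check (an example of the statement, not part of the question): take $F=\mathbb{Q}$, $E=\mathbb{Q}(\sqrt{2})$, which is Galois over $\mathbb{Q}$, and $f(x)=x^4-2$, irreducible over $\mathbb{Q}$ by Eisenstein at $2$. In $E[x]$,
$$
x^4-2=(x^2-\sqrt{2})(x^2+\sqrt{2}),
$$
and both factors are irreducible over $E$: their roots $\pm\sqrt[4]{2}$ and $\pm i\sqrt[4]{2}$ do not lie in $\mathbb{Q}(\sqrt{2})$, since $[\mathbb{Q}(\sqrt[4]{2}):\mathbb{Q}]=4$. So both irreducible factors indeed have the same degree $2$, and the nontrivial automorphism $\sqrt{2}\mapsto-\sqrt{2}$ swaps the two factors, exactly the transitive action used in the proof.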
|
{
"pile_set_name": "StackExchange"
}
|
First, I want to congratulate all of archBoston's photographers whose work is displayed in this contest. It's all quite good. I'm especially fond of Kenmore at Night, love the effect on Pregnant Building, and appreciate what lighting and setting can do for a building in Not North Point.
But the image that really blows me away here is endus' View Downtown. The light balancing, and the contrast between both indoors and out, crowded city and surreal emptiness, make this the most thought-provoking photo of the lot.
|
{
"pile_set_name": "Pile-CC"
}
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import sys
from decimal import Decimal
from clickhouse_mysql.dbclient.chclient import CHClient
from clickhouse_mysql.writer.writer import Writer
from clickhouse_mysql.tableprocessor import TableProcessor
from clickhouse_mysql.event.event import Event
class CHWriter(Writer):
"""ClickHouse writer"""
client = None
dst_schema = None
dst_table = None
dst_distribute = None
def __init__(
self,
connection_settings,
dst_schema=None,
dst_table=None,
dst_table_prefix=None,
dst_distribute=False,
next_writer_builder=None,
converter_builder=None,
):
if dst_distribute and dst_schema is not None:
dst_schema += "_all"
if dst_distribute and dst_table is not None:
dst_table += "_all"
logging.info("CHWriter() connection_settings={} dst_schema={} dst_table={} dst_distribute={}".format(
connection_settings, dst_schema, dst_table, dst_distribute))
self.client = CHClient(connection_settings)
self.dst_schema = dst_schema
self.dst_table = dst_table
self.dst_table_prefix = dst_table_prefix
self.dst_distribute = dst_distribute
def insert(self, event_or_events=None):
# event_or_events = [
# event: {
# row: {'id': 3, 'a': 3}
# },
# event: {
# row: {'id': 3, 'a': 3}
# },
# ]
events = self.listify(event_or_events)
if len(events) < 1:
logging.warning('No events to insert. class: %s', __class__)
return
# assume we have at least one Event
logging.debug('class:%s insert %d event(s)', __class__, len(events))
# verify and converts events and consolidate converted rows from all events into one batch
rows = []
event_converted = None
for event in events:
if not event.verify:
logging.warning('Event verification failed. Skip one event. Event: %s Class: %s', event.meta(), __class__)
continue # for event
event_converted = self.convert(event)
for row in event_converted:
for key in row.keys():
# convert Decimal values to str so they fit the table structure
if type(row[key]) == Decimal:
row[key] = str(row[key])
rows.append(row)
logging.debug('class:%s insert %d row(s)', __class__, len(rows))
# determine target schema.table
schema = self.dst_schema if self.dst_schema else event_converted.schema
table = None
if self.dst_distribute:
table = TableProcessor.create_distributed_table_name(db=event_converted.schema, table=event_converted.table)
else:
table = self.dst_table if self.dst_table else event_converted.table
if self.dst_schema:
table = TableProcessor.create_migrated_table_name(prefix=self.dst_table_prefix, table=table)
logging.debug("schema={} table={} self.dst_schema={} self.dst_table={}".format(schema, table, self.dst_schema, self.dst_table))
# and INSERT converted rows
sql = ''
try:
sql = 'INSERT INTO `{0}`.`{1}` ({2}) VALUES'.format(
schema,
table,
', '.join(map(lambda column: '`%s`' % column, rows[0].keys()))
)
self.client.execute(sql, rows)
except Exception as ex:
logging.critical('QUERY FAILED')
logging.critical('ex={}'.format(ex))
logging.critical('sql={}'.format(sql))
sys.exit(1)
# all DONE
if __name__ == '__main__':
connection_settings = {
'host': '192.168.74.230',
'port': 9000,
'user': 'default',
'passwd': '',
}
writer = CHWriter(connection_settings=connection_settings)
writer.insert()
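# Illustrative sketch (hypothetical helper, not used by CHWriter above): the
# Decimal-to-str normalization that insert() applies row by row can be
# factored out and exercised in isolation; the field names are made up.
def normalize_row(row):
    """Return a copy of `row` with Decimal values rendered as str,
    mirroring the per-row conversion performed before building the INSERT."""
    return {key: str(value) if isinstance(value, Decimal) else value
            for key, value in row.items()}

# Example: normalize_row({'id': 3, 'price': Decimal('19.90')})
# leaves 'id' untouched and renders 'price' as the string '19.90'.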
|
{
"pile_set_name": "Github"
}
|
Ask HN: What are good blogs/forums/etc to keep up with MySQL best practices? - ceohockey60
======
bgrainger
I wrote some guidelines (with a specific focus on .NET) recently:
[http://faithlife.codes/blog/2017/10/mysql_best_practices_for...](http://faithlife.codes/blog/2017/10/mysql_best_practices_for_dotnet/)
------
pwg
[http://sql-info.de/mysql/gotchas.html](http://sql-info.de/mysql/gotchas.html)
|
{
"pile_set_name": "HackerNews"
}
|
Bagnolians
The Bagnolians were a sect in the 8th century, deemed heretical, who rejected the Old Testament and part of the New Testament. They held the world to be eternal, and affirmed that God did not create the soul, when he infused it into the body. They derived their name from Bagnols, a city in Languedoc, France. Their doctrine generally agreed with that of the Manicheans.
See also
Manichaeism
Marcionism
References
Category:Gnosticism
Category:Christian denominations established in the 8th century
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Expression of TAK1, a mediator of TGF-beta and BMP signaling, during mouse embryonic development.
TGF-beta activated kinase 1 (TAK1) is a MAP kinase kinase kinase (MAPKKK) that has been shown to function downstream of BMPs and TGF-beta (J. Biol. Chem. 275 (2000) 17647; EMBO J. 17 (1998) 1019; Science 270 (1995) 2008), as well as in the interleukin-1 (IL-1) signaling pathway (J. Biol. Chem. 276 (2001) 3508; Nature 398 (1999) 252). Using immunohistochemistry (IHC), we demonstrate that TAK1 is expressed ubiquitously during early development. At mid-gestation, TAK1 expression becomes more restricted, with high levels seen specifically during development of diverse organs and tissues including the nervous system, testis, kidney, liver and gut. Additionally, TAK1 expression is seen in the developing lung and pancreas. Our results suggest that TAK1 may play multiple roles in mouse development.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
AP
UNITED NATIONS (AP) — The United Nations is recognizing the gay marriages of all its staffers, U.N. Secretary-General Ban Ki-moon announced Monday.
Previously, the United Nations only recognized the unions of staffers who came from countries where gay marriage is legal, U.N. deputy spokesman Farhan Haq said.
"This is a step forward that many of the staffers at the United Nations had been seeking for some time," Haq said.
The new policy became effective June 26, and will impact the U.N.'s approximately 43,000 employees worldwide. Employees of separate U.N. agencies, such as the children's agency UNICEF and the U.N. cultural agency UNESCO, are not affected by the change in policy, Haq said.
According to the Pew Research Center, gay marriage is legal in 18 countries, plus parts of the United States and Mexico. But prejudice remains deep in many countries. An extreme case is Uganda, which in February passed a law making gay sex punishable by a life sentence.
|
{
"pile_set_name": "OpenWebText2"
}
|
Improvement in health-related quality of life with recombinant factor IX prophylaxis in severe or moderately severe haemophilia B patients: results from the BAX326 Pivotal Study.
Little is known about the health-related quality of life (HRQoL) burden of haemophilia B. The aim of this study was to assess HRQoL burden of haemophilia B, the benefit of recombinant factor IX (rFIX) prophylaxis and the HRQoL benefit of achieving a zero annual bleed rate. Subjects receiving rFIX (BAX326) prophylaxis or on-demand completed the SF-36 survey. Baseline SF-36 scores were compared to the general US population scores to understand the HRQoL burden. Changes in SF-36 scores between baseline and follow-up were tested using t-tests. Subgroup analysis was conducted to examine SF-36 change among subjects who switched to BAX326 prophylaxis. SF-36 scores were also compared between those with zero bleeds and those who bled during the study. Compared to the US norms, subjects reported lower average scores in all physical and several mental HRQoL domains. At follow-up, prophylaxis subjects reported statistically significant and clinically meaningful improvements in overall physical HRQoL, as measured by the Physical Component Score (PCS) (mean change 2.60, P = 0.019), Bodily Pain (BP) (3.45, P = 0.015) and Role Physical (RP) domains (3.47, P = 0.016). Subjects who switched to prophylaxis from intermittent prophylaxis or on-demand experienced more pronounced improvements not only in the PCS (3.21, P = 0.014), BP (3.71, P = 0.026), RP (4.43, P = 0.008) but also in Vitality (3.71, P = 0.04), Social Functioning (5.06, P = 0.002) and General Health domains (3.40, P = 0.009). Subjects achieving zero bleeds reported lower BP (P = 0.038). Prophylaxis with BAX326 significantly improved HRQoL in patients with moderately severe or severe haemophilia B by reducing bleeds.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Spring/Summer 2011 Maxi Trends
'Maxi' is the key word for the upcoming warm season, describing the already famous ground-skimming dresses, but also long skirts and elegant jumpsuits. There are breezy, sheer fabrics and a riot of dazzling colors, romantic florals, and sexy animal prints. Besides, the eternal, old faithful white is back again proving that its classic beauty and sophistication will never go out of style. Take a look at some of the hottest spring/summer 2011 maxi trends.
Spring can definitely be felt in the air. And with the warm season just around the corner, it's time to invest into a beautiful maxi dress. There is no doubt that when it comes to the maxi trend, whether it's a dress, skirt or jumpsuit, these items can be a real knockout. We have fallen in love with floor-sweepers as these are exactly what we need during hot summer days. They are utterly romantic and feminine regardless the occasion. Wear them at the beach, in the park, at festivals and cocktail parties.
This season's catwalks were an explosion of flowing maxis. Among a riot of floral and wild animal prints, there is the classic white and second-skin shades that can make you get the wow-factor despite their apparent simplicity. Spotted at Dolce & Gabbana, Antonio Marras, Tommy Hilfiger, white is the best choice when the heat outside becomes rather unbearable, proving once again that it can be as effective as a bright color. Besides, you can also opt for nude alternatives as seen at Max Azria and Ports 1961.
Antonio MarrasDolce & Gabbana
Paul & JoeTommy Hilfiger
Ports 1961Max Azria
The maxi trend proved to be very adaptable, being reinvented season after season. If what you want is to step out of the box and try something bolder, then the splash of dazzling brights should be your source of inspiration. The catwalks were full of boldly clashing combinations that seem to break all fashion rules. Extravagant and fascinating maxi dresses, skirt and jumpsuits in bright colors were spotted at Jil Sander, Paul & Joe, Issa and Sonya Rykiel, among others. Or, if the occasion requires something more elegant and colorful, make a special entrance in bold red by Lanvin or Valentino.
LanvinValentino
Jil SanderIssa
Paul & JoeSonia Rykiel
Now, for spring/summer 2011 maxi trend we can speak about two extremes. First, there is the oh-so-romantic and adorable version with sweet floral prints seen on maxi dresses, skirts and jumpsuits (D&G, Etro, Ralph Lauren, Rebecca Taylor, Erdem). Sweet, chic and utterly modern, the long dresses with cute florals are gently floating around the body expressing maximum femininity and an eternal bohemian flair. On the other hand, there is the wild, sexy side with the already famous leopard print (Just Cavalli, Blumarine), which represents your best ticket for a wow-factor entrance. Instead, if you are not so much into floral or animal prints, you can always try graphic or ethnic patterns (Emilio Pucci, Luca Luca).
|
{
"pile_set_name": "Pile-CC"
}
|
1. Field of the Invention
This invention relates to an improved system for integrating automatic outbound dialer functions with automatic call distribution functions, and more particularly to a system that balances the number of agents assigned to these functions while maintaining the inbound call waiting time within prescribed limits.
2. Description of the Prior Art
Many call centers provide both an automatic inbound telephone call distribution function and an automatic outbound call function. As will be appreciated by those skilled in the art, there have been proposals in the prior art to link inbound automatic call distribution and automatic outbound calling in order to improve staffing efficiency. Inbound call distribution typically has peaks and valleys in its load, since the demand is generated by outside callers. By linking inbound and outbound call functions, outbound agents can be switched to inbound duty during peak inbound demand periods and switched back to outbound duty during slack periods in inbound demand, thus improving overall staffing efficiency.
In a typical prior art system, an inbound performance parameter is monitored based upon statistics tracked by the inbound call distributor; for example, the number of calls in the inbound queue or the average time to answer an inbound call. A target value and upper and lower thresholds are established for the inbound performance parameter; for example, a five-second target to respond to incoming calls, with an upper threshold of seven seconds and a lower threshold of three seconds. The assumptions behind the thresholds are: (a) if the upper threshold is exceeded, performance is unacceptable and more inbound agents are needed; and (b) if the lower threshold is crossed, the inbound function is considered to be overstaffed and overall efficiency would be improved by transferring inbound agents to outbound operations.
In the prior art, so-called blend agents are typically transferred, as they become available, when the upper or lower threshold values are exceeded. Agents continue to be transferred until performance falls between the upper and lower threshold limits, at which point the transfer stops. U.S. Pat. No. 5,425,093, assigned to the assignee of this application and incorporated herein by reference, discloses the concept of providing hysteresis for added stabilization of agent transfer. With hysteresis, the threshold value used to trigger the transfer of agents as performance leaves the upper end of the acceptable range is higher than the threshold value used to stop transfer as performance moves back toward the acceptable range. Similarly for the lower limit: the trigger value to start transfer as measured performance leaves the range is lower than the value used to stop transfer as performance moves back toward the range. However, this and other prior art call center balance algorithms for determining the transfer of agents between inbound and outbound functions operate in response to current inbound performance, and have not proven to be altogether satisfactory in operation.
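The hysteresis scheme described above can be sketched as a small state machine. The following toy controller only illustrates the general idea; the threshold numbers, class, and method names are invented for the example and are not taken from U.S. Pat. No. 5,425,093.

```python
class BlendController:
    """Toy hysteresis controller for moving blend agents between inbound
    and outbound duty based on average answer delay, in seconds.

    Transfer toward inbound starts only above `high_start` and stops once
    the delay falls back to `high_stop`; the lower band is symmetric.
    The gap between start and stop values is what prevents oscillation.
    """

    def __init__(self, high_start=7.0, high_stop=5.0,
                 low_start=3.0, low_stop=5.0):
        self.high_start, self.high_stop = high_start, high_stop
        self.low_start, self.low_stop = low_start, low_stop
        self.mode = 'hold'   # 'to_inbound', 'to_outbound', or 'hold'

    def update(self, avg_answer_delay):
        """Return the transfer direction for the current measurement."""
        if self.mode == 'to_inbound':
            if avg_answer_delay <= self.high_stop:
                self.mode = 'hold'
        elif self.mode == 'to_outbound':
            if avg_answer_delay >= self.low_stop:
                self.mode = 'hold'
        else:
            if avg_answer_delay > self.high_start:
                self.mode = 'to_inbound'
            elif avg_answer_delay < self.low_start:
                self.mode = 'to_outbound'
        return self.mode
```

With these numbers, a delay spiking to 8 seconds starts moving agents inbound, and movement continues until the delay falls back to 5 seconds or below, not merely below the 7-second trigger: that asymmetry is the stabilizing effect of hysteresis.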
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Tuition fees increase as funding falls
Budget 2018-19 brings cuts to research, rise in fees
This fall, U of M students will find themselves dealing with the repercussions of changes initiated by the provincial government — primarily, increased educational expenses.
Both undergraduate and graduate students will see the maximum allowable 6.6 per cent rise in tuition fees. This comes after the provincial government passed Bill 31, the Advanced Education Administration Act, in November, lifting a cap on tuition hikes that had been in place since 2012.
When combined with an expected 0.5 per cent increase in enrolment, the increased fees are expected to generate an additional $9.5 million in revenue for the university.
But the increases follow a provincial budget in March that saw support for universities fall by 0.9 per cent, leading UMSU president Jakob Sanderson to point out that the additional revenue is not entirely what it seems.
“Sadly we need to recognize that there really is not [$9.5] million in additional revenue because the university’s operating grant was cut by the province by a similar rate when adjusted by inflation,” Sanderson said.
Sanderson added that if there is additional revenue being generated, he hopes the university will choose to put it toward “improving educational standards.”
Provost and vice-president (academic) Janice Ristock has referred to $60 million in bursaries and scholarships distributed annually by the university as one method the U of M takes to meet the needs of students.
The province has also reduced 10.4 per cent of its funding for Access grants, an extended education program that provides academic support, personal support and financial assistance to students, with preference given to Indigenous students, students from Northern areas, newcomers to the country and low-income earners.
The recent cuts follow a trend of what critics are calling dwindling provincial support for post-secondary education, including the 2017 nixing of an income tax rebate offered to graduates working in Manitoba.
Premier Brian Pallister said at the time the program “wasn’t working,” and added “the thing we should be doing with those resources is lowering the barriers to entry in the first place.”
Unpacking Budget 2018-2019
This marks the first year of a new budgetary model at the U of M.
The new model takes a more decentralized stance to fund allocation, and the changes made incentivize revenue growth within academic units. Academic units that partake in activities contributing to the U of M’s net finances receive more in terms of budgeted funding.
The General Operating Fund revenue for the 2018-2019 year is just over $663.9 million, with 53 per cent coming from government grants.
At the faculty level, the majority of revenue is generated by the Rady faculty of health sciences, at just over $183.1 million, with the faculties of arts and science coming in a distant second at $93.4 million and $89.9 million in revenue, respectively.
Of the total, a portion from each academic unit augments the University Fund, bringing it to a total of $100.8 million. These funds, also collected through taxes on allocated tuition and grant revenues, are invested in the university’s strategic priorities.
Most of the University Fund, $85.2 million, is returned to faculties through a grant-giving process that allocates money based on several factors, including revenue generated.
The Rady faculty of health sciences receives $23.5 million from the fund, which translates to about 12.8 per cent of its $183 million revenue.
The faculty of science receives $848,000, 0.9 per cent of its overall revenue of just under $90 million. The faculty of arts receives $674,000, 0.7 per cent of its revenue of $93.4 million.
The expenses of the Rady faculty of health sciences are similarly disproportionate, consuming 72.8 per cent of its revenue, compared with the faculties of science and arts, which devote 42.2 and 49.6 per cent of their revenue to expenses, respectively.
Other allocations for the fund include support for the Senior Canada 150 Research Chair’s research, new faculty member salaries, research start-up funds and $2.1 million to help international students transfer their healthcare plans.
The provincial government also slashed $3 million from funding to Research Manitoba, which provides funding to research in the health, natural and social sciences, of which the University of Manitoba is the primary beneficiary. This follows a $2 million cut in 2017. A $1.65 million allotment of total funding is going to expenses previously afforded by Research Manitoba.
From the University Fund, $900,000 is being allocated to “continue to support the objectives identified in the agreement between the Truth and Reconciliation Commission of Canada and the University,” and $500,000 is set aside for the 2018-2019 Indigenous Initiatives Fund, which supports projects relating to Indigenous achievement.
So, what’s next?
Sanderson added that contributing to a more sustainable budget plan for next year is among the goals UMSU will be working to achieve over the upcoming school year.
Sanderson stressed that both he and Sarah Bonner-Proulx (VP Advocacy) voted against the tuition increase at the board of governors.
Sanderson and his team are planning to work with a student-led accessible education working group, which will be struck at the next board of directors meeting, with the express purpose to “implement more open educational resources in classrooms to make course materials freely accessible for students.”
|
{
"pile_set_name": "Pile-CC"
}
|
package system // import "github.com/docker/docker/pkg/system"
// MemInfo contains memory statistics of the host system.
type MemInfo struct {
// Total usable RAM (i.e. physical RAM minus a few reserved bits and the
// kernel binary code).
MemTotal int64
// Amount of free memory.
MemFree int64
// Total amount of swap space available.
SwapTotal int64
// Amount of swap space that is currently unused.
SwapFree int64
}
|
{
"pile_set_name": "Github"
}
|
When a series airs on a premium channel like HBO, the producers can afford the luxury of including swearing and showing nudity. It also helps the show stand out from the competition, and Game of Thrones takes full advantage of that. Who can forget the scene in which Littlefinger delivered a monologue while two prostitutes touched each other? Or Joffrey’s encounter with two prostitutes he murdered for fun? Jokes about how gratuitous some of the nudity can be are never in short supply, but perhaps it is time to vindicate all those backsides and breasts that make the series better, letting the plots advance naturally and helping define the characters.
An animal called Daenerys
First episode, first nude scene for Emilia Clarke. (HBO)
“Now you have a woman’s body,” Viserys told his sister Daenerys as he undressed her and inspected her body. It was the best way to show their dysfunctional relationship and the image he had of her: she was nothing more than livestock, an animal to profit from once she could breed. Emilia Clarke’s nudity therefore made the scene even more raw.
Daenerys’s dragons
The birth of the dragons. (HBO)
Daenerys Targaryen would prove herself a true leader when she walked into a pyre and emerged with her skin intact and three dragons, creatures that had supposedly gone extinct. What impact would that scene have had if she had come out clothed instead of naked? Wouldn’t we have regretted modesty getting in the way of the logic? That said, in the books she came out bald as a billiard ball, because her hair did burn.
Daenerys’s power
The bathtub scene. (HBO)
When Daario Naharis surprised Daenerys while she was bathing, she refused to let it show that she was intimidated. What better way to demonstrate your power than stepping out of the bath as naked as the day you were born, under Daario’s attentive gaze, without losing any of your authority as Khaleesi? That nude scene was a statement of confidence and strength.
Daario Naharis’s backside
Emilia Clarke takes her time with Michiel Huisman’s body. (HBO)
We could trace Daenerys’s progression through the series’ nude scenes, in this case Michiel Huisman’s backside. When she ordered Daario to take off his clothes, she made it plain that her position of power was beyond dispute. While practically every woman in the series was a victim of the patriarchy, she challenged the status quo. And, in passing, female and gay viewers appreciated the gesture.
Oberyn’s orgy
Pedro Pascal in bed with Will Tudor. (HBO)
Oberyn Martell came from Dorne, a territory viewers had not yet seen. Seeing that his lover Ellaria was not outraged by his sleeping with both men and women helped us understand that there were parts of the Seven Kingdoms with less rigid morals than King’s Landing and Lannister hypocrisy. “Do you like men as much as women?” asked Oliver (Will Tudor), who did a full-frontal scene. “Does that surprise you?” Oberyn replied, puzzled.
Cersei’s walk of shame
Lena Headey used a body double for this scene. (HBO)
The minutes in which Cersei Lannister walked naked among the citizens of King’s Landing are among the hardest to watch in the entire series. It was a humiliation meant to begin atoning for her sins: she had to walk naked through crowds that insulted her and even threw filth at her. The intensity of the scene earned Lena Headey an Emmy nomination, although it was a shame she used a body double. However well the visual effects were handled, you could tell something didn’t quite fit.
The other Melisandre
Melisandre’s most talked-about nude scene. (HBO)
Carice van Houten is another actress almost as accustomed to nudity as Emilia Clarke. Her Melisandre tends to use sex as a magical weapon, but her best nude scene was probably the transformation her body underwent when she removed her necklace. Not only are we unaccustomed to seeing elderly women’s bodies on television, but that wrinkled, fragile body perfectly represented the character’s state of mind: confused and vulnerable after failing to understand the flames of the Lord of Light and needlessly sacrificing hundreds of people. We could even feel some compassion for Melisandre.
Jon Snow naked
Kit Harington, second nude scene in six years. (HBO)
Jon Snow’s lifeless body lay naked on a table so Melisandre could recite a magical prayer. If the directors had been more concerned with hiding Kit Harington’s backside than with filming the resurrection properly, Game of Thrones would likely have lost points in a scene as important as it was memorable.
Daenerys’s last fire
Another Daenerys scene that set the internet ablaze. (HBO)
We had seen this scene at the end of the first season, but with fewer viewers. Daenerys once again emerged unscathed from a fire, this time to kill all the Khals and awe the Dothraki, who recognized Daenerys’s strength as true Khaleesi and leader. A timid camera would have blunted the power of the scene, and that nudity reminds us of Daenerys’s enviable self-assurance.
|
{
"pile_set_name": "OpenWebText2"
}
|
Cross-country inauguration biker arrives in Washington, D.C.
It’s said that a journey of a thousand miles begins with a single step. For Ryan Bowen, it began with a single thought: I’m too broke to fly from Los Angeles to D.C. for the inauguration of President-elect Barack Obama. Undeterred, Bowen decided to bike across the country. And he’s made it. The Takeaway is joined by writer and cycling activist Ryan Bowen.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
How to change data of Main Activity from Fragment
I have fragment 1, fragment 2 and fragment 3, which are all hosted in the main activity. There is a TextView in the main activity; how can the fragments change the text in that TextView?
I already tried casting with ((Activity) getActivity()), but I don't want my app to crash when changing activities.
A:
Write a setter method like setTextViewText(String str) in the activity, and from the fragment call the method like ((YourActivity) getActivity()).setTextViewText(str)
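A more decoupled variant is to have the fragment talk to its host through an interface rather than casting to one concrete activity class, so the fragment keeps working no matter which activity hosts it. The sketch below is plain Java rather than real Android code: MainActivity, MyFragment, TextUpdateListener and the textViewContent field are stand-in names (in a real app you would obtain the listener in onAttach(Context) and call textView.setText(text)).

```java
// Contract the hosting activity implements; the fragment only knows this interface.
interface TextUpdateListener {
    void setTextViewText(String text);
}

// Stand-in for the activity that owns the TextView.
class MainActivity implements TextUpdateListener {
    String textViewContent = "";   // stands in for the TextView's current text

    @Override
    public void setTextViewText(String text) {
        textViewContent = text;    // in Android: textView.setText(text)
    }
}

// Stand-in for a fragment; in Android the listener would be assigned in onAttach().
class MyFragment {
    private final TextUpdateListener listener;

    MyFragment(TextUpdateListener listener) {
        this.listener = listener;
    }

    void onButtonClicked() {
        if (listener != null) {    // guard against being detached from the host
            listener.setTextViewText("updated from fragment");
        }
    }
}

class Demo {
    public static void main(String[] args) {
        MainActivity activity = new MainActivity();
        MyFragment fragment = new MyFragment(activity);
        fragment.onButtonClicked();
        System.out.println(activity.textViewContent); // prints "updated from fragment"
    }
}
```

Because every hosting activity just implements TextUpdateListener, the fragment never needs the ((YourActivity) getActivity()) cast and cannot throw a ClassCastException when reused elsewhere.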
|
{
"pile_set_name": "StackExchange"
}
|
ace.define("ace/mode/clojure_highlight_rules",["require","exports","module","ace/lib/oop","ace/mode/text_highlight_rules"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var TextHighlightRules = require("./text_highlight_rules").TextHighlightRules;
var ClojureHighlightRules = function() {
var builtinFunctions = (
'* *1 *2 *3 *agent* *allow-unresolved-vars* *assert* *clojure-version* ' +
'*command-line-args* *compile-files* *compile-path* *e *err* *file* ' +
'*flush-on-newline* *in* *macro-meta* *math-context* *ns* *out* ' +
'*print-dup* *print-length* *print-level* *print-meta* *print-readably* ' +
'*read-eval* *source-path* *use-context-classloader* ' +
'*warn-on-reflection* + - -> ->> .. / < <= = ' +
'== > > >= >= accessor aclone ' +
'add-classpath add-watch agent agent-errors aget alength alias all-ns ' +
'alter alter-meta! alter-var-root amap ancestors and apply areduce ' +
'array-map aset aset-boolean aset-byte aset-char aset-double aset-float ' +
'aset-int aset-long aset-short assert assoc assoc! assoc-in associative? ' +
'atom await await-for await1 bases bean bigdec bigint binding bit-and ' +
'bit-and-not bit-clear bit-flip bit-not bit-or bit-set bit-shift-left ' +
'bit-shift-right bit-test bit-xor boolean boolean-array booleans ' +
'bound-fn bound-fn* butlast byte byte-array bytes cast char char-array ' +
'char-escape-string char-name-string char? chars chunk chunk-append ' +
'chunk-buffer chunk-cons chunk-first chunk-next chunk-rest chunked-seq? ' +
'class class? clear-agent-errors clojure-version coll? comment commute ' +
'comp comparator compare compare-and-set! compile complement concat cond ' +
'condp conj conj! cons constantly construct-proxy contains? count ' +
'counted? create-ns create-struct cycle dec decimal? declare definline ' +
'defmacro defmethod defmulti defn defn- defonce defstruct delay delay? ' +
'deliver deref derive descendants destructure disj disj! dissoc dissoc! ' +
'distinct distinct? doall doc dorun doseq dosync dotimes doto double ' +
'double-array doubles drop drop-last drop-while empty empty? ensure ' +
'enumeration-seq eval even? every? false? ffirst file-seq filter find ' +
'find-doc find-ns find-var first float float-array float? floats flush ' +
'fn fn? fnext for force format future future-call future-cancel ' +
'future-cancelled? future-done? future? gen-class gen-interface gensym ' +
'get get-in get-method get-proxy-class get-thread-bindings get-validator ' +
'hash hash-map hash-set identical? identity if-let if-not ifn? import ' +
'in-ns inc init-proxy instance? int int-array integer? interleave intern ' +
'interpose into into-array ints io! isa? iterate iterator-seq juxt key ' +
'keys keyword keyword? last lazy-cat lazy-seq let letfn line-seq list ' +
'list* list? load load-file load-reader load-string loaded-libs locking ' +
'long long-array longs loop macroexpand macroexpand-1 make-array ' +
'make-hierarchy map map? mapcat max max-key memfn memoize merge ' +
'merge-with meta method-sig methods min min-key mod name namespace neg? ' +
'newline next nfirst nil? nnext not not-any? not-empty not-every? not= ' +
'ns ns-aliases ns-imports ns-interns ns-map ns-name ns-publics ' +
'ns-refers ns-resolve ns-unalias ns-unmap nth nthnext num number? odd? ' +
'or parents partial partition pcalls peek persistent! pmap pop pop! ' +
'pop-thread-bindings pos? pr pr-str prefer-method prefers ' +
'primitives-classnames print print-ctor print-doc print-dup print-method ' +
'print-namespace-doc print-simple print-special-doc print-str printf ' +
'println println-str prn prn-str promise proxy proxy-call-with-super ' +
'proxy-mappings proxy-name proxy-super push-thread-bindings pvalues quot ' +
'rand rand-int range ratio? rational? rationalize re-find re-groups ' +
're-matcher re-matches re-pattern re-seq read read-line read-string ' +
'reduce ref ref-history-count ref-max-history ref-min-history ref-set ' +
'refer refer-clojure release-pending-sends rem remove remove-method ' +
'remove-ns remove-watch repeat repeatedly replace replicate require ' +
'reset! reset-meta! resolve rest resultset-seq reverse reversible? rseq ' +
'rsubseq second select-keys send send-off seq seq? seque sequence ' +
'sequential? set set-validator! set? short short-array shorts ' +
'shutdown-agents slurp some sort sort-by sorted-map sorted-map-by ' +
'sorted-set sorted-set-by sorted? special-form-anchor special-symbol? ' +
'split-at split-with str stream? string? struct struct-map subs subseq ' +
'subvec supers swap! symbol symbol? sync syntax-symbol-anchor take ' +
'take-last take-nth take-while test the-ns time to-array to-array-2d ' +
'trampoline transient tree-seq true? type unchecked-add unchecked-dec ' +
'unchecked-divide unchecked-inc unchecked-multiply unchecked-negate ' +
'unchecked-remainder unchecked-subtract underive unquote ' +
'unquote-splicing update-in update-proxy use val vals var-get var-set ' +
'var? vary-meta vec vector vector? when when-first when-let when-not ' +
'while with-bindings with-bindings* with-in-str with-loading-context ' +
'with-local-vars with-meta with-open with-out-str with-precision xml-seq ' +
'zero? zipmap'
);
var keywords = ('throw try var ' +
'def do fn if let loop monitor-enter monitor-exit new quote recur set!'
);
var buildinConstants = ("true false nil");
var keywordMapper = this.createKeywordMapper({
"keyword": keywords,
"constant.language": buildinConstants,
"support.function": builtinFunctions
}, "identifier", false, " ");
this.$rules = {
"start" : [
{
token : "comment",
regex : ";.*$"
}, {
token : "keyword", //parens
regex : "[\\(|\\)]"
}, {
token : "keyword", //lists
regex : "[\\'\\(]"
}, {
token : "keyword", //vectors
regex : "[\\[|\\]]"
}, {
token : "keyword", //sets and maps
regex : "[\\{|\\}|\\#\\{|\\#\\}]"
}, {
token : "keyword", // ampersands
regex : '[\\&]'
}, {
token : "keyword", // metadata
regex : '[\\#\\^\\{]'
}, {
token : "keyword", // anonymous fn syntactic sugar
regex : '[\\%]'
}, {
token : "keyword", // deref reader macro
regex : '[@]'
}, {
token : "constant.numeric", // hex
regex : "0[xX][0-9a-fA-F]+\\b"
}, {
token : "constant.numeric", // float
regex : "[+-]?\\d+(?:(?:\\.\\d*)?(?:[eE][+-]?\\d+)?)?\\b"
}, {
token : "constant.language",
regex : '[!|\\$|%|&|\\*|\\-\\-|\\-|\\+\\+|\\+||=|!=|<=|>=|<>|<|>|!|&&]'
}, {
token : keywordMapper,
regex : "[a-zA-Z_$][a-zA-Z0-9_$\\-]*\\b"
}, {
token : "string", // single line
regex : '"',
next: "string"
}, {
token : "constant", // symbol
regex : /:[^()\[\]{}'"\^%`,;\s]+/
}, {
token : "string.regexp", //Regular Expressions
regex : '/#"(?:\\.|(?:\\\")|[^\""\n])*"/g'
}
],
"string" : [
{
token : "constant.language.escape",
regex : "\\\\.|\\\\$"
}, {
token : "string",
regex : '[^"\\\\]+'
}, {
token : "string",
regex : '"',
next : "start"
}
]
};
};
oop.inherits(ClojureHighlightRules, TextHighlightRules);
exports.ClojureHighlightRules = ClojureHighlightRules;
});
ace.define("ace/mode/matching_parens_outdent",["require","exports","module","ace/range"], function(require, exports, module) {
"use strict";
var Range = require("../range").Range;
var MatchingParensOutdent = function() {};
(function() {
this.checkOutdent = function(line, input) {
if (! /^\s+$/.test(line))
return false;
return /^\s*\)/.test(input);
};
this.autoOutdent = function(doc, row) {
var line = doc.getLine(row);
var match = line.match(/^(\s*\))/);
if (!match) return 0;
var column = match[1].length;
var openBracePos = doc.findMatchingBracket({row: row, column: column});
if (!openBracePos || openBracePos.row == row) return 0;
var indent = this.$getIndent(doc.getLine(openBracePos.row));
doc.replace(new Range(row, 0, row, column-1), indent);
};
this.$getIndent = function(line) {
var match = line.match(/^(\s+)/);
if (match) {
return match[1];
}
return "";
};
}).call(MatchingParensOutdent.prototype);
exports.MatchingParensOutdent = MatchingParensOutdent;
});
ace.define("ace/mode/clojure",["require","exports","module","ace/lib/oop","ace/mode/text","ace/mode/clojure_highlight_rules","ace/mode/matching_parens_outdent"], function(require, exports, module) {
"use strict";
var oop = require("../lib/oop");
var TextMode = require("./text").Mode;
var ClojureHighlightRules = require("./clojure_highlight_rules").ClojureHighlightRules;
var MatchingParensOutdent = require("./matching_parens_outdent").MatchingParensOutdent;
var Mode = function() {
this.HighlightRules = ClojureHighlightRules;
this.$outdent = new MatchingParensOutdent();
};
oop.inherits(Mode, TextMode);
(function() {
this.lineCommentStart = ";";
this.minorIndentFunctions = ["defn", "defn-", "defmacro", "def", "deftest", "testing"];
this.$toIndent = function(str) {
return str.split('').map(function(ch) {
if (/\s/.exec(ch)) {
return ch;
} else {
return ' ';
}
}).join('');
};
this.$calculateIndent = function(line, tab) {
var baseIndent = this.$getIndent(line);
var delta = 0;
var isParen, ch;
for (var i = line.length - 1; i >= 0; i--) {
ch = line[i];
if (ch === '(') {
delta--;
isParen = true;
} else if (ch === '[' || ch === '{') {
delta--;
isParen = false;
} else if (ch === ')' || ch === ']' || ch === '}') {
delta++;
}
if (delta < 0) {
break;
}
}
if (delta < 0 && isParen) {
i += 1;
var iBefore = i;
var fn = '';
while (true) {
ch = line[i];
if (ch === ' ' || ch === '\t') {
if(this.minorIndentFunctions.indexOf(fn) !== -1) {
return this.$toIndent(line.substring(0, iBefore - 1) + tab);
} else {
return this.$toIndent(line.substring(0, i + 1));
}
} else if (ch === undefined) {
return this.$toIndent(line.substring(0, iBefore - 1) + tab);
}
fn += line[i];
i++;
}
} else if(delta < 0 && !isParen) {
return this.$toIndent(line.substring(0, i+1));
} else if(delta > 0) {
baseIndent = baseIndent.substring(0, baseIndent.length - tab.length);
return baseIndent;
} else {
return baseIndent;
}
};
this.getNextLineIndent = function(state, line, tab) {
return this.$calculateIndent(line, tab);
};
this.checkOutdent = function(state, line, input) {
return this.$outdent.checkOutdent(line, input);
};
this.autoOutdent = function(state, doc, row) {
this.$outdent.autoOutdent(doc, row);
};
this.$id = "ace/mode/clojure";
}).call(Mode.prototype);
exports.Mode = Mode;
});
|
{
"pile_set_name": "Github"
}
|
Monocyte adhesion to human saphenous vein in vitro.
Monocyte adhesion to endothelium is believed to be an initiating event in atherosclerosis both in arteries and in saphenous vein coronary artery bypass grafts. We have developed a method to quantify adhesion of 51Cr-labelled human blood monocytes to saphenous vein. Adhesion to the intimal surface was measured on uniform 6 mm diameter discs, the adventitia of which was embedded in a layer of inert silicone grease. The identity, number and location of bound cells was further defined by scanning electron microscopy. The proportion of monocytes adhering to discs of freshly-isolated vein was 7.1 +/- 1.2% (SE, n = 8), which was equivalent to 500 +/- 80 monocytes/mm2. The percentage of monocytes adhering was independent of the number of monocytes added in the range 5-50 x 10(4). Scanning micrographs showed 80% endothelial coverage with monocytes adhering preferentially but not exclusively to subendothelium. Endothelial removal increased monocyte adhesion by 1.60 +/- 0.15-fold (n = 8, P less than 0.01). Vein surgically prepared for use in coronary bypass surgery had a 50% reduction in endothelial coverage and a small but non-significant (1.14 +/- 0.13-fold, n = 8) increase in monocyte adhesion. Treatment of freshly-isolated vein for 4 h at 37 degrees C with 1 microgram/ml of E. coli endotoxin followed by extensive washing did not remove endothelium but increased monocyte adherence by 1.46 +/- 0.18-fold (n = 8, P less than 0.05). The proportion of monocytes adhering to veins which had been cultured for 14 days was similar to freshly isolated veins (6.4 +/- 0.8%, n = 7), but in cultured veins, monocytes adhered preferentially to cells with typical endothelial morphology. Endotoxin treatment of cultured veins increased monocyte adhesion by 1.77 +/- 0.23-fold (n = 8, P less than 0.05). The data show that both endothelial activation and exposure of subendothelium promote monocyte adhesion to human saphenous vein.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Heterogeneity of immunoglobulin gene rearrangements in B-cell lymphomas.
We have examined 69 B-cell non-Hodgkin's lymphomas (NHL) for rearrangements of the immunoglobulin (Ig) or T-cell antigen receptor (TCR) genes. The lymphomas were assigned to the categories of the Kiel classification and their B-cell nature was confirmed by immunostaining. Only 2 cases (with CLL) displayed clonal T beta-chain TCR gene rearrangements together with rearranged heavy- and light-chain Ig genes. The remaining 67 lymphomas had a germline beta-chain TCR-gene configuration. Three different patterns of Ig gene rearrangements were identified: (A) presence of both heavy- and light-chain rearrangements (H+L+); (B) rearrangement of the heavy-chain gene only (H+L-); (C) heavy- and light-chain genes in germline configuration (H-L-). All the 45 low-grade NHLs and the 4 immunoblastic lymphomas exhibited pattern A and all had their kappa gene rearranged or deleted. Of 24 low-grade lymphomas tested, 13 (54%) had an additional rearrangement of the lambda light-chain gene. In contrast, the 19 high-grade centroblastic (cb) B-NHLs had distinct patterns of Ig-gene rearrangement: 12 with pattern A, 4 with B and 2 with C. In this group only 2 of 17 (12%) cases analyzed had evidence of lambda light-chain rearrangement whereas 12 of 18 (67%) had a kappa gene rearrangement or deletion. In one case expressing sIgM/lambda and with heavy-chain Ig rearrangement, no DNA was available for Ig light-chain analysis.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Forget the injuries, bad calls and so forth. We have "the best point guard in the league", the third best player in the league, the sixth man of the year and a champion coach, yet we go seven games with GSW and get beaten by OKC, obviously a better team. There is something wrong. We are the highest scoring team in the league and we have the best rebounder in the league. We have a shooter who shoots lights out coming off screens, yet this team will never go anywhere in the playoffs.
Sacrilege -- should we dump Chris Paul? We played well without him with the offense running through Blake. His foul may have cost us game five and he wasn't "the best closer in the game" in game six. He bricked a lot of shots. Could we get better for the money? I'm sure he has a NO trade clause.
When Blake gets shut down we become a jump shooting team. If we're hot we're unbeatable like the first half. If not we lose. Collison is almost as good at getting to the rack as CP3
We need a second scorer in the paint. We have no post up player. We have nobody like Westbrook, LeBron or Ginobili, or maybe even Durant, who can get to the rack. Blake is getting better. Doc signed shooters. If Granger had produced we would have done A LOT better. DJ's hero is Bill Russell. Bill was a great scorer. DJ has to develop a post game. He's long armed and athletic. Why shouldn't he have one?
I know everybody will scream but we should consider every option. We obviously don't have anywhere near a championship team.
Clipperfn4lf
05/16/2014 - 03:27 AM PST
Clipper All-Star
Posts: 1546
votes: 13
I think a lot has to do with Blake's passive demeanor. He seems to be content playing 2nd fiddle to Cp3; however, at the end of the day the team that wins the championship usually doesn't have their PG as their best player. A change I think needs to happen is to 100% turn this team into Blake's team and have Cp3 be 2nd fiddle. I feel when people tell Blake he's the man (when Cp3 got hurt midway through the year) he plays at his best. Only way we make it to the Finals. Just my thoughts.
ekker3
05/16/2014 - 03:32 AM PST
CTB MVP X2
Posts: 7271
votes: 80
no ball movement when we're up. lots of blown leads as a result. complacent.
BaadMaster
05/16/2014 - 03:58 AM PST
Clipper All-Star
Posts: 1383
Location: Los Angeles
votes: 11
What went wrong? The answer: NOTHING.
When two great teams clash, someone HAS TO lose. It is that simple. This series was so close it came down to some ref calls. Whether it was fixed or just bad reffing is open to argument. But you have to look at the bigger NBA picture to understand how the system works.
It is clear that the Thunder had a five - ten point edge with the officials going in. The question is why? Is there a meritocracy that handicaps the playoffs that we will have to accept before we can win a title?....
If the reason we lose is bad calls why bother to play? Do like DTS and dump the good players and make us a soap opera again. If we have to fight through fouls that's what we have to do or give up.
BaadMaster
05/16/2014 - 05:59 AM PST
Clipper All-Star
Posts: 1383
Location: Los Angeles
votes: 11
If you read what I wrote, I made it clear that the calls and the NBA's skewing of the playoffs will reverse in our favor with a) the emergence of a genuine NBA-recognized MVP that one of our Big Three must become and b) the expulsion of the Sterlings once and for all...
These two factors and Clippers will be in the Finals....
Simple as that.
toohipcliptoslip
05/16/2014 - 07:30 AM PST
CTB MVP X1
Posts: 4966
votes: 34
Wasn't Blake considered #3 for MVP? Durant isn't the public's idea of the NBA, it's Blake. He's better looking, more articulate and is more exciting to watch for the general public. My best friend knows nothing of basketball but said she wishes her daughter could date Blake. She wouldn't say that about Durant. Maybe he's on his way.
What I was saying is that we have to play through it. If we give up and simply complain about the calls, forget it. Those are things you have to ignore. Something does have to be done about the refs, though I doubt it will be. Each coach should get one replay challenge every quarter. This would stop a lot of the bogus calls.
You can't blame all of our problems on the refs. Much of it is our doing. There comes a point where we have to point the finger at ourselves as well. You can't blame the refs for CP bricking shots.
The guys you mentioned do get calls, absolutely no doubt but they are also absolutely great except Wade. I do agree that we got a lot of bullcrap calls.
The refs are not incompetent. If there are a lot of bad calls then they're not bad calls, they're on purpose. But it's not either-or, it's both: refs and problems with the team.
toohipcliptoslip
05/16/2014 - 07:38 AM PST
CTB MVP X1
Posts: 4966
votes: 34
I figured out what I meant to say. There are things you can control and things you can't. We can't control the refs but we can control the quality of our play. That's what you work on and forget the rest.
Silasie
05/16/2014 - 08:53 AM PST
CTB MVP X1
Posts: 2598
votes: 2
Too true. Else you just say "not fair boohoo" and give up.
bballman
05/16/2014 - 08:58 AM PST
Clipper Starter
Posts: 315
votes: 2
There were stretches of bad play, of course. But couple that with the bad calls at the end, and it's tough to win. Every team has bad stretches. The point I am trying to make is that this team is a very good team. It took stupid calls to beat them in 6 games. It may have gone 7. There is no need to completely tear apart this team. If the Thunder somehow win the Chip this year, it's arguable that without that bogus call in game 5, the Clippers could have beaten the eventual champions or at worst gone 7 games. This team is right there, guys. Maybe Chris Paul needs to learn how to trip himself or run into the ref and somehow get to the free throw line.
jarca
05/16/2014 - 09:10 AM PST
CTB MVP X2
Posts: 8694
votes: 40
CP3's choke job + the refs = series is over
wessleejr
05/16/2014 - 09:18 AM PST
Clipper Starter
Posts: 984
votes: 2
IMO, CP3 and Blake are not a good tandem. When CP3 was down, DC and Blake gave us good games. I've been thinking this for a long time: CP3 is the best PG in the league but he needs a partner that matches his game.
Blake is a good passer also, and works better with DC.
ClippersDA
05/16/2014 - 09:38 AM PST
CTB MVP X1
Posts: 3993
votes: 12
Chris Paul had a meltdown, and our defense abandoned us. Not sure where we go from here
ClippersDA
05/16/2014 - 09:40 AM PST
CTB MVP X1
Posts: 3993
votes: 12
But in all seriousness, Chris Paul is now going to feel the heat when he has largely been given a pass. He and melo are on the hot seat
clipper*joe
05/16/2014 - 09:57 AM PST
CTB MVP Champion
Posts: 16440
Location: los angeles
votes: 130
This is the off-season that Cp3 will be scrutinized the same way Melo was/is. bank on it.
Doc was right in saying that the blown call was a series-changing play, but it was CP3 who made it possible for that to happen. That will go down as one of the biggest boneheaded plays of all time. I truly believe if he had just let himself get fouled, we would have gone to the WCF.
ClipperRevival
05/16/2014 - 10:01 AM PST
Clipper Starter
Posts: 254
votes: 2
What went wrong? Game 5. That game sealed our fate. CP3 choking and the refs killed us.
But the bigger picture, this team is VERY close. I see two holes that I would like to patch up.
1) SF position - Barnes is solid but I think we could upgrade. Someone who can really lock down and give a little more scoring from that position. This league is filled with talented wing players and we could use someone who can really D up.
2) Another big - This is an obvious one. Prior to the last few games against OKC, we were getting beaten on the boards consistently, even by Golden State. We just need another reliable big. Davis isn't really a "big". He's a PF.
This was our first year with Doc. I think this team will just get better. No major overhaul is needed. I do feel this team was close. Heck, I think this team might've won it all THIS YEAR had we not been screwed over by the refs in game 5. We can compete with anyone. With a full year together, this team will play more as a unit next year and be more efficient.
I can't wait for next year.
JQuick32
05/16/2014 - 10:06 AM PST
Posts: 3385
votes: 13
CP3 either has to be traded or has to basically be an overexpensive role player, because we are going nowhere with a first option that chokes in the playoffs, is easily shut down by bigger defenders, and is consistently outplayed by all other elite players at his position.
For all the talk of "continuity," CP3 will be on the wrong side of 30 very soon. He is what he is at this point. If second-round exits are good enough for the rest of you, so be it, but I'm not interested in that fake "success."
BennyBeFly
05/16/2014 - 10:17 AM PST
Clipper 6th Man
Posts: 138
votes: 2
What went wrong is the Clippers got in their own way.
I still think the Clippers are the more talented, better executing team. Unfortunately that's a blessing and a curse because basketball is a very dynamic sport and sometimes X's and O's only get you so far.
The Thunder played very freely and they put up shots that the Clippers might consider a "bad shot". But you have to be able to play the GAME. The Clips need to play more relaxed, they played too consciously at the end of those quarters and games.
The team that perfectly blends the two is the Spurs. They play within the system but if there is a matchup they want to expose, get an open look at a 3, or see a lane to drive, they don't hesitate. There was too much hesitance and the leash was too short for the Clips towards crunch time this year. When they made epic comebacks, it's because they just simply PLAYed.
ClipperRevival
05/16/2014 - 10:23 AM PST
Clipper Starter
Posts: 254
votes: 2
I would rather take my chances on a top 5 player than try to go another route. He will be 30 NEXT YEAR, he isn't 30 yet. So he's still got 3-4 prime years left. That's a pretty big window if you ask me. I think even the idea of trading him is utterly ridiculous.
JQuick32
05/16/2014 - 10:27 AM PST
Posts: 3385
votes: 13
I am taking my chances on Blake. I don't even think CP3 is the best point guard in the league anymore, let alone a top 5 player. If he was, he'd step up in the playoffs. Top 5 players and best PG's in the league step up in the playoffs. CP3's most memorable playoff moments: deferring to Jannero Pargo in Game 7 against the Spurs, and the Game 5 meltdown against OKC.
ClippersDA
05/16/2014 - 10:29 AM PST
CTB MVP X1
Posts: 3993
votes: 12
I love Chris Paul as a personality but he's paid over 100 million to win when it counts, not just regular season games. He blew it. Blake hasn't taken that next step but I see him getting there.
ClipperRevival
05/16/2014 - 10:30 AM PST
Clipper Starter
Posts: 254
votes: 2
What people don't realize about CP3 is that WITHOUT him, we would've lost handily to the Thunder. Only because he played at such an amazing level up until game 5 were we even in the series. He went one stretch where he had like 36 assists and 2 turnovers. And there were parts of the series where our offense couldn't do anything and Paul had to bail us out. The guy is that good. We are just so used to his greatness. I know nothing I say will convince some of you about his choke job and I am pissed off about it too but come on, this guy is the real deal and an amazing player. We need him. There is nothing we can get in return that will even come close to what he brings us.
Jordan didn't win his 1st until 28 and won his last title at 35. Kobe didn't win his first title as "the man" until he was 31. Prime years for a basketball player is usually 28-32. So Paul has at least 3 primes years left with a possibility of several more very, very good years left. That's still a pretty large window. This team is close. The last thing you want to do is panic and do an overhaul. We are only going to go as far as Paul and Blake will take us.
SamMays
05/16/2014 - 10:35 AM PST
CTB MVP X1
Posts: 4150
votes: 59
This team had the third best record in the league and that's where they finished. It was our first time this deep in the playoffs under a new coach. It takes time and we have discovered there are a few holes that need filling.
As Revival said, Barnes is an outstanding bench player, but is not someone you want starting and playing huge minutes. It's too bad Granger didn't step up and take that position over, but he had an injury and missed games just as he was making progress. Is he the guy? That's something Doc has to figure out. A backup big is also vital. Somebody like Adams would be ideal. That will also help our interior defense and rebounding, both areas of weakness that were exposed in the playoffs. A bigger two guard would also be an asset on the defensive end.
Blake has to step up a bit more. He has to become our Karl Malone and I think he will. He's just not quite there yet and not quite confident enough in his jump shot. He'll get there. Chris Paul blew game five down the stretch, but that happens. Isiah Thomas blew a huge game against the Celtics. People said Lebron wouldn't step up when it really mattered. Same with Magic Johnson in the early 80's when he had a few late game meltdowns and Laker fans feared they would never get past the Celtics.
I have never been a Jamal Crawford fan. When he has the ball, the offense stops and everyone watches him play one on five. He's obviously a great player, capable of putting the team on his back for stretches, but he surely shrunk from the moment in this series. He's been a big time player, but has never been a big time winner. His lack of defensive effort, or awareness, may have cost us a game earlier in the series when he failed to step up and guard on a pick and Westbrook hit a three to put us down four with a minute and a half to go. Had he done what he should have, we'd have the ball down one and it's a very winnable game. His defensive lapses cost us significantly this series and he didn't have the lights out offensive game that would have made up for it.
Some of us are complaining about officiating. I don't buy it. Sure, there were some bad calls. There always are. The big area we need to improve is mental toughness. We blew game 5 and it had nothing to do with the official's call on the out of bounds play. Had there been no instant replay, no one would have complained about who the ball went off of. The point is, we should have never, never, put ourselves in a position to blow that game. Again, a player failed to show big on a screen, allowing Durant that three. Chris Paul had two turnovers, including that ridiculous attempt to get three shots instead of two. He should know, they will never give him that call. He's tried it a bunch and has never gotten it and never will. Protect the ball and knock down your two foul shots. Game over.
But winning in the playoffs is a learning process. Michael Jordan's Chicago team had to keep pounding on the door until they broke through. Same with I Thomas and the Pistons. Miami didn't win it all that first year with the big three. The Clippers have to keep this nucleus together and add a few pieces to shore up some weaknesses and come back stronger next year.
Get a little better and get tougher mentally. If they don't do something crazy, we're going to be one of the best teams in the league for the next several years.
ClippersDA
05/16/2014 - 10:40 AM PST
CTB MVP X1
Posts: 3993
votes: 12
I don't think we should pick up Crawford's option if we have a prayer of still being able to sign free agents. That would free up some cap space
ClippersDA
05/16/2014 - 10:44 AM PST
CTB MVP X1
Posts: 3993
votes: 12
I wouldn't want to be Jada Paul right now. Can't imagine he's fun to live with at the moment
ClipperRevival
05/16/2014 - 10:56 AM PST
Clipper Starter
Posts: 254
votes: 2
All valid points.
But I disagree about the officiating. Yes, we choked and put the refs in a position where they could decide the game BUT it still doesn't change the fact that they absolutely blew that one call. In this day and age, we do have instant replay and that is the reality of the league today just as flopping is. And for the refs to completely blow it tells me something fishy is going on. That was our ball. And if we are awarded that ball, we probably win game 5. And who knows what after that. I mean if our players had choked it away without any controversy, it would be easier to accept that OKC was the better team. But the string of calls that went against us in game 5 is disgusting, especially the out of bounds play. That is one the refs should not have missed because they were able to review it and it was clear cut.
As for Jamal, I am leaning a little more towards your view, which is something I wouldn't have thought about in the past. The guy is just so hot/cold. If he's going good, he can win games for you. But when he's bad, he's bad. Like playing 1 on 5 and chucking up bad shot after bad shot and providing nothing on D. He is getting up there in age at 34 so if we got rid of him, I wouldn't be pissed. But I still like the guy. He has the rarest skill of all in the NBA which is the ability to create your own shot. And just that alone is worth holding on to imo. I like the guy.
And finally, your point about other greats struggling before winning a ring is well documented and 100% true.
diagoro
05/16/2014 - 11:08 AM PST
Clipper 6th Man
Age: 47
Posts: 108
Location: Stanton, CA
votes: 0
I think the bad/biased reffing has had the greatest impact on Blake. It feels like his passivity is a result of non-fouls, and a fear of being too aggressive. Add to that all the dumb touch-fouls (tapping Thompson twice when taking a three during the GSW series), and he's going to be in foul trouble. He ends up second guessing every possession.
I like Redick as well, but he is a major defensive liability. I also think his game is a bit one dimensional as a shooter. Having a two that could break down the defense (aside from what CP does) would make us a more complete team.
The biggest difference I see between the first and second round was DJ. OKC's length almost completely contained him. I haven't seen the stats, but I would guess that there's a major difference between the two rounds. He basically disappeared for long periods of time. And as much as I love his defensive focus, his lack of a post up game still hurts us, especially when Blake is on the bench or being passive around the perimeter.
Oh yeah...and that bench. JC and DC have been great, and even Davis has been solid since he got here. But the rest leave me wondering. I really miss the amazing bench we had two years ago!
My greatest fear is the DTS issue, and how easily he goes away....if at all. I've been a fan for what feels like forever. But if DTS stays, I won't support his product, or put money in his pocket. I'm already holding back on buying a jersey for a friend, as I don't want to hand him extra profits.
clipnasty
05/16/2014 - 11:20 AM PST
Clipper All-Star
Posts: 1302
votes: 14
Agree 100%. I was telling my friends last night Blake needs to become the alpha male. He gets too passive. I do not know if it is confidence or he is just too "nice," but in the end I think it comes down to him stepping up as the MAN.
ClippersDA
05/16/2014 - 11:24 AM PST
CTB MVP X1
Posts: 3993
votes: 12
Redick should only start next season If we upgrade at the 3. But to be honest, the only player I really blame is Chris Paul. I don't believe in him anymore and don't think having him gives us an advantage against equally or more talented teams.
clippermitch
05/16/2014 - 11:32 AM PST
Clipper All-Star
Posts: 1395
votes: 4
I think you are being overly critical of CP3. The dude played both ends of the court relentlessly. He doesn't have the luxury of Steph Curry and other PGs who can just relax on defense. He exerted way too much energy running around guarding Westbrook and at times Durant. By the end of the game, he was gassed, which led to terrible decisions. Tired legs basically took out his mid-range jumper that he almost always makes. It also took away his ability to finish at the rim.
Redick and Crawford are great but they suck on defense. As much as I hated seeing Collison on the floor, he gave CP a chance to take a breather. Collison is not the greatest defender but he at least tries.
Look at the end of games. Westbrook doesn't defend CP. Either Reggie Jackson or Sefolosha does so Westbrook can focus on offense.
This coming draft, the Clips need an elite perimeter defender.
Also, regarding the topic...what went wrong this year is the front office didn't utilize the final roster spot to sign a wing that can defend a PG/SG/SF. Doc didn't play Willie or Reggie and Granger barely played. You don't think someone like Donte Jones could have helped?
Icecoldclipper
05/16/2014 - 11:33 AM PST
CTB MVP X2
Posts: 9593
votes: 21
Crawford got us back into game 5 with his 4 point plays and helped push us further ahead during earlier runs. I think the passive mindset of Paul and Griffin hurts us because they look for others in grind it out moments. For most of the game, Paul (who added to his shots by chucking late) and Griffin had almost the same number of shots as Redick and Barnes. That just can't happen. When OKC was getting closer we needed Blake face up attacks and Chris Paul coming off picks, getting to the bucket or free throw line jumpers.
ClipperRevival
05/16/2014 - 11:39 AM PST
Clipper Starter
Posts: 254
votes: 2
Had the Clips won this series, Paul's performance in this series would've been memorable and been talked about for a while. From his 3 point shooting in game 1 to his unheard of turnover/assist ratio to his D on Durant. He was absolutely magnificent sans the later part of game 5. It's just a shame that the last 50 seconds of game 5 will define him when people talk about this series. He was brilliant besides those 50 seconds. He really had to exert himself tremendously on both ends of the court, playing great defense on both Curry and Westbrook.
Icecoldclipper
05/16/2014 - 11:51 AM PST
CTB MVP X2
Posts: 9593
votes: 21
Paul never truly got a grip on slowing Westbrook outside of the occasional steal and never consistently pressed him into jumpers. Westbrook had an elite series and us doubling or getting beat led to a lot of Griffin, Jordan, and Davis weakside fouls. On offense game one was his best scoring game (Game 6 was a lot of late game chucking that went in after we had no chance) but he did assist well this series.
clippersfan85
05/16/2014 - 11:57 AM PST
Clipper Starter
Posts: 859
votes: 2
This is partially what went wrong.
clippermitch
05/16/2014 - 11:58 AM PST
Clipper All-Star
Posts: 1395
votes: 4
Westbrook is a shooting guard. His game is attacking the basket. He has 3-4" on CP and is a lot bigger and more athletic. He did what he could. Westbrook averaged almost 5 turnovers a game.
ClipperRevival
05/16/2014 - 12:01 PM PST
Clipper Starter
Posts: 254
votes: 2
A lot of the times when Westbrook got free was off the P&R. And that becomes team defense. For the most part, I thought Paul did a great job of keeping WB in front of him on the perimeter when they were isolated. WB didn't have a lot of blow bys on Paul on iso IIRC. Most of WB's damage was off the P&R, where the help defender was supposed to help. I don't think our bigs did that good a job on WB to be honest. WB is a special talent, one of the most athletically gifted athletes to ever play the game, so we shouldn't hang our heads too much because he actually played up to his potential for once.
ClippersDA
05/16/2014 - 12:49 PM PST
CTB MVP X1
Posts: 3993
votes: 12
I'm touched and saddened to hear how emotional the players were and are. Bumpy times ahead
diagoro
05/16/2014 - 01:04 PM PST
Clipper 6th Man
Posts: 108
Location: Stanton, CA
votes: 0
I still think we need a proper bench. Paul was shaky at the end of a few games due to the minutes he's playing. Other times we've shown weaknesses at spots due to foul trouble. We just don't have the kind of bench that's been supplying either points or defense, aside from some good stretches from Crawford and Davis. It plays right into the 'strategy' of calling non-fouls.
ClipperDB
05/16/2014 - 02:22 PM PST
Clipper Starter
Posts: 361
votes: 1
This has a lot to do with Jamal going one-on-one. Often the first stringers get a lead and when the 2nd stringers come in, Jamal included, everyone just watches Jamal.
Icecoldclipper
05/16/2014 - 02:27 PM PST
CTB MVP X2
Posts: 9593
votes: 21
Jamal is a lot of times with the starting unit, and Paul/Griffin let him take his guy one on one. It's not on Crawford; those guys and Doc want him to be aggressive, and he has won us more games than he has lost us with his shooting. It shouldn't be that way in big games or when he is cold. At some point I guess we got too comfortable with it.
tense2
05/16/2014 - 03:17 PM PST
CTB MVP X3
Posts: 10383
votes: 24
Keep the core, but bring in 2 way players at the SG and SF position. Get average to above average back up bigs. Another year in Doc's coaching system will also help.
Don't panic.
bballman
05/16/2014 - 03:26 PM PST
Clipper Starter
Posts: 315
votes: 2
My choices are Jordan Hill and Shawn Marion.
BaadMaster
05/16/2014 - 03:32 PM PST
Clipper All-Star
Posts: 1383
Location: Los Angeles
votes: 11
That's absurd. CP3 is a great player. He locked down KD during the comeback, he orchestrated a team that, by all rights, should have beat The Chosen One and his band of refs. Sometimes YOU JUST LOSE. Someone has to. Especially when the refs take your sidekick, BG, out of his game by calling ticky-tack fouls that they never call on LeBron, the other "power player."
The bigger worry is not the team. Clippers will grow just as the Heat did and the Lakers before them. The worry is if DTS drags this on and on. Then you have problems.
Other than that, great season!!! And great future!!!
bebe
05/16/2014 - 03:51 PM PST
Clipper Rookie
Posts: 64
votes: 0
I read comments and post my opinions daily. I've only been posting for a month but never have I read such negative comments about the Clippers. Nobody wanted to trade CP3 when he took over in the 4th quarter and won games. No one wanted to trade JC when he was hitting those 3's. This was DJ's best season and I can see him getting better. I have been a Clipper fan for over 20 years and this was their best year. If you're a true fan you roll with the punches and don't diss them because they lost. I believe you're upset and don't know how to react in a positive way. Relax, take a deep breath and get ready for the next round. Now let's all say together. Go CLIPPERS!!!!!!!!!
Your Nevada Fans
Kevlawn
05/16/2014 - 03:53 PM PST
Clipper D-League Pickup
Posts: 7
votes: 0
1 through 5, I think the Clippers are the 2nd best starting lineup in the NBA only behind the Wizards. I think y'all need a couple back up bigs, a defensive stopper, and Doc needs to use them during the season to create some consistency. You have the #1 pg and pf in the league, a top 5 center and a great shooter in Redick. I'm not sold on Barnes but he gives it his all and would be a great back up. I actually think your future is brighter than OKC's and everyone else's in the West with Portland a close 2nd. Good luck in the future and if my Thunder tank in yrs to come, then I will be cheering the Clips on. I'm a big Griffin fan and he will always be one of the greatest Sooners of all time. I love CP3's game and think he is one of the greatest pg's of all time. Westbrook isn't even close to being as good as Paul. Westbrook is too erratic for me. What good is 10 assists when he has 5 turnovers? His shot is mostly luck and there is a reason I call him Westbrick. Athletically, he is unmatched but he isn't a true pg to me. You also have a great coach in Doc Rivers and I wish he was OKC's coach. Y'all's future is so bright if the ownership problems can be fixed.
toohipcliptoslip
05/16/2014 - 03:57 PM PST
CTB MVP X1
Posts: 4966
votes: 34
Agree 100%. I was telling my friends last night Blake needs to become the alpha male. He gets too passive. I do not know if it is confidence or he is just too "nice," but in the end I think it comes down to him stepping up as the MAN.
Remember the Stockton-Malone twins? Stockton wasn't only a passer par excellence but, like Nash, he could shoot lights out. Oscar and Kareem, I don't think Oscar played second fiddle; he made Kareem better. CP3 hasn't shown the ability to make Blake better. It may be tho....
It doesn't have a lot to do with passivity, it's skill set. When he was being guarded by the likes of Lee, who is not physical, BG makes him look like ham on rye, but a physical player like Ibaka who bodies him up and gives him no space bothers him. He has to learn how to combat this. Footwork!
JQuick32
05/16/2014 - 04:24 PM PST
Posts: 3385
votes: 13
Westbrook annoys me, but the criticism he takes is unfair. He's really impressed me in the playoffs so far aside from his hypocritical flopping, whereas the so-called "MVP" has played well below his standards with zero criticism at all. He's not a traditional point guard, but after watching another second-round exit from CP3, I'm beginning to feel that traditional PG's are overrated and that speed, athleticism and scoring are more valuable at the position now.
JQuick32
05/16/2014 - 04:26 PM PST
Posts: 3385
votes: 13
He WAS the alpha you're describing when CP3 was out. Honestly, it all comes back to CP3 being the problem. CP3 dominating the ball makes Blake passive and wastes his unique skillset that includes passing and handles.
package org.dcache.services.info;
import com.google.common.base.Splitter;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.collect.ImmutableMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import diskCacheV111.util.CacheException;
import diskCacheV111.util.TimeoutCacheException;
import dmg.cells.nucleus.CellEndpoint;
import dmg.cells.nucleus.CellMessageSender;
import dmg.cells.nucleus.CellPath;
import dmg.cells.nucleus.NoRouteToCellException;
import dmg.util.HttpException;
import dmg.util.HttpRequest;
import dmg.util.HttpResponseEngine;
import org.dcache.cells.CellStub;
import org.dcache.services.info.serialisation.JsonSerialiser;
import org.dcache.services.info.serialisation.PrettyPrintTextSerialiser;
import org.dcache.services.info.serialisation.XmlSerialiser;
import org.dcache.util.Args;
import org.dcache.vehicles.InfoGetSerialisedDataMessage;
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Predicates.notNull;
import static com.google.common.base.Throwables.throwIfUnchecked;
import static com.google.common.collect.Iterables.find;
import static java.util.Arrays.asList;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import static java.nio.charset.StandardCharsets.UTF_8;
/**
* This class provides support for querying the info cell via the admin
* web-interface. It implements the HttpResponseEngine to handle requests at a
* particular point (a specific alias).
* <p>
* Users may query the complete tree, or select a subtree by specifying the
* path.
* <p>
* It supports several serialisers from which the user may choose, either by
* specifying the query parameter 'format' or by specifying the HTTP Accept
* header. XML is the default if neither indicates which serialiser to use.
* <p>
* The implementation caches serialised data for one second. This is a safety
* feature to reduce the impact on info of pathologically broken clients that
* make many requests per second.
*/
public class InfoHttpEngine implements HttpResponseEngine, CellMessageSender
{
private static final Logger LOGGER = LoggerFactory.getLogger(InfoHttpEngine.class);
private static final List<String> ENTIRE_TREE = new ArrayList<>();
private final SerialisationHandler xmlSerialiser =
new SerialisationHandler(XmlSerialiser.NAME, "text/xml");
private final SerialisationHandler jsonSerialiser =
new SerialisationHandler(JsonSerialiser.NAME, "text/json");
private final SerialisationHandler prettyPrintSerialiser =
new SerialisationHandler(PrettyPrintTextSerialiser.NAME, "text/x-ascii-art");
private final Map<String,SerialisationHandler> mimetypeToSerialiser =
ImmutableMap.<String,SerialisationHandler>builder().
put("application/xml", xmlSerialiser).
put("text/xml", xmlSerialiser).
put("application/json", jsonSerialiser).
put("text/x-ascii-art", prettyPrintSerialiser).
build();
private final Map<String,SerialisationHandler> queryParameterToSerialiser =
ImmutableMap.<String,SerialisationHandler>builder().
put("xml", xmlSerialiser).
put("json", jsonSerialiser).
put("pretty", prettyPrintSerialiser).
build();
private final String _infoCellName;
private CellStub _info;
/**
* httpd-side class for each info-side serialiser.
*/
private class SerialisationHandler
{
private final String _name;
private final String _mimeType;
LoadingCache<List<String>, String> resultCache = CacheBuilder.newBuilder()
.maximumSize(10)
.expireAfterWrite(1, TimeUnit.SECONDS)
.build(new CacheLoader<List<String>, String>() {
@Override
public String load(List<String> path) throws InterruptedException, CacheException, NoRouteToCellException
{
InfoGetSerialisedDataMessage message =
(path == ENTIRE_TREE) ? new InfoGetSerialisedDataMessage(_name)
: new InfoGetSerialisedDataMessage(path, _name);
message = _info.sendAndWait(message);
return message.getSerialisedData();
}
});
public SerialisationHandler(String name, String mimeType)
{
_name = name;
_mimeType = mimeType;
}
public void handleRequest(HttpRequest request) throws HttpException
{
String[] urlItems = request.getRequestTokens();
OutputStream out = request.getOutputStream();
List<String> path = urlItems.length == 1 ? ENTIRE_TREE :
Arrays.asList(urlItems).subList(1, urlItems.length);
try {
byte[] raw = resultCache.get(path).getBytes(UTF_8);
request.setContentType(this._mimeType);
request.printHttpHeader(raw.length);
out.write(raw);
} catch (ExecutionException e) {
Throwable cause = e.getCause();
if (cause instanceof TimeoutCacheException) {
throw new HttpException(503, "The info cell took too " +
"long to reply, suspect trouble (" +
cause.getMessage() + ")");
}
if (cause instanceof NoRouteToCellException) {
throw new HttpException(503, "Unable to locate the info cell");
}
if (cause instanceof CacheException) {
throw new HttpException(500, "Error when requesting " +
"info from info cell. (" + cause.getMessage() + ")");
}
if (cause instanceof InterruptedException) {
throw new HttpException(503, "Received interrupt " +
"whilst processing data. Please try again later.");
}
throwIfUnchecked(cause);
throw new RuntimeException(cause);
} catch (IOException e) {
LOGGER.error("Failed to send response: {}", e.getMessage());
}
}
}
/**
* The constructor simply creates a new nucleus for us to use when sending messages.
*/
public InfoHttpEngine(String[] args)
{
_infoCellName = new Args(args).getOption("cell");
checkArgument(_infoCellName != null, "-cell option is required for InfoHttpEngine handler.");
}
@Override
public void setCellEndpoint(CellEndpoint endpoint)
{
_info = new CellStub(endpoint, new CellPath(_infoCellName), 4000, MILLISECONDS);
}
/**
* Handle a request for data. This either returns the cached contents (if
* still valid), or queries the info cell for information.
*/
@Override
public void queryUrl(HttpRequest request) throws HttpException
{
LOGGER.info("Received request: {}", request);
SerialisationHandler handler = find(asList(
serialiserFromUri(request),
serialiserFromHttpHeaders(request),
xmlSerialiser), notNull());
handler.handleRequest(request);
}
private SerialisationHandler serialiserFromUri(HttpRequest request) throws HttpException
{
SerialisationHandler serialiser = null;
String argument = request.getParameter("format");
if (argument != null) {
serialiser = queryParameterToSerialiser.get(argument);
if (serialiser == null) {
throw new HttpException(415, "specified format does not exist");
}
}
return serialiser;
}
private SerialisationHandler serialiserFromHttpHeaders(HttpRequest request)
{
String accept = request.getRequestAttributes().get("Accept");
if (accept == null) {
return null;
}
SerialisationHandler bestHandler = null;
/*
* Choose the best mime-type that the client will accept, taking
* into account which formats we support, the client's preferences
* (q values) and choosing the most specific (i.e. longest) mime-type.
* Here is an example value (should be one line)
*
* application/xml;q=0.5,application/json;q=0.8,
* application/x-proprietary-format
*
* For details, see
*
* http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
*/
double bestQ = 0;
String bestEntry = "";
for (String entry : Splitter.on(',').trimResults().split(accept)) {
List<String> items = Splitter.on(';').trimResults().splitToList(entry);
String mimeType = items.get(0);
List<String> args = items.subList(1, items.size());
StringBuilder sb = new StringBuilder().append(mimeType);
double q = 1;
for (String arg : args) {
if (arg.startsWith("q=")) {
try {
q = Double.parseDouble(arg.substring(2));
} catch (NumberFormatException e) {
LOGGER.debug("malformed q value ('{}') in Accept: {}", arg.substring(2), e.toString());
q = 0;
}
} else {
sb.append(';').append(arg);
}
}
String entryWithoutQ = sb.toString();
// REVISIT: no wildcard support for mimetypes; e.g. text/* or */*
SerialisationHandler handler = mimetypeToSerialiser.get(mimeType);
if (q >= bestQ && entryWithoutQ.length() > bestEntry.length() && handler != null) {
bestHandler = handler;
bestQ = q;
bestEntry = entryWithoutQ;
}
}
return bestHandler;
}
}
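The Accept-header negotiation in serialiserFromHttpHeaders above can be exercised in isolation. The following is a minimal, self-contained sketch, not part of the engine: the class name AcceptNegotiationSketch is hypothetical, the SUPPORTED set mirrors the keys of mimetypeToSerialiser, and it returns the winning mime-type string rather than a handler object.

```java
import java.util.Set;

public class AcceptNegotiationSketch {
    // Mirrors the mime-types the engine maps to serialisers (assumed set).
    static final Set<String> SUPPORTED = Set.of(
            "application/xml", "text/xml", "application/json", "text/x-ascii-art");

    /**
     * Pick the supported mime-type with the highest q value; among candidates
     * with acceptable q, the longest (most specific) entry wins, as in the engine.
     */
    static String bestMimeType(String accept) {
        double bestQ = 0;
        String best = null;
        String bestEntry = "";
        for (String entry : accept.split(",")) {
            String[] items = entry.trim().split(";");
            String mimeType = items[0].trim();
            double q = 1;
            StringBuilder sb = new StringBuilder(mimeType);
            for (int i = 1; i < items.length; i++) {
                String arg = items[i].trim();
                if (arg.startsWith("q=")) {
                    try {
                        q = Double.parseDouble(arg.substring(2));
                    } catch (NumberFormatException e) {
                        q = 0; // a malformed q value disqualifies the entry
                    }
                } else {
                    sb.append(';').append(arg); // keep non-q parameters for specificity
                }
            }
            String entryWithoutQ = sb.toString();
            if (SUPPORTED.contains(mimeType) && q >= bestQ
                    && entryWithoutQ.length() > bestEntry.length()) {
                best = mimeType;
                bestQ = q;
                bestEntry = entryWithoutQ;
            }
        }
        return best; // null means: fall back to the default (XML in the engine)
    }

    public static void main(String[] args) {
        // Same example header as in the engine's comment block.
        System.out.println(bestMimeType(
                "application/xml;q=0.5,application/json;q=0.8,application/x-proprietary-format"));
    }
}
```

As in the engine, an unsupported type is skipped even at the highest q (application/x-proprietary-format here), so application/json wins with q=0.8.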
//
// NSKeyValueCollectionProxies.m
// Foundation
//
// Copyright (c) 2014 Apportable. All rights reserved.
//
#import "NSKeyValueCollectionProxies.h"
#import "NSKeyValueCodingInternal.h"
#import "NSObjectInternal.h"
#import <Foundation/NSException.h>
#import <Foundation/NSHashTable.h>
#import <Foundation/NSIndexSet.h>
#import <dispatch/dispatch.h>
#import <libkern/OSAtomic.h>
#import <objc/runtime.h>
static NSKeyValueProxyShareKey* _NSKeyValueProxyShareKey = nil;
static OSSpinLock _NSKeyValueProxySpinlock = OS_SPINLOCK_INIT;
@implementation NSKeyValueNonmutatingCollectionMethodSet
@end
@implementation NSKeyValueNonmutatingArrayMethodSet
{
@public
Method count;
Method objectAtIndex;
Method getObjectsRange;
Method objectsAtIndexes;
}
@end
@implementation NSKeyValueNonmutatingOrderedSetMethodSet
{
@public
Method count;
Method objectAtIndex;
Method indexOfObject;
Method getObjectsRange;
Method objectsAtIndexes;
}
@end
@implementation NSKeyValueNonmutatingSetMethodSet
{
@public
Method count;
Method enumerator;
Method member;
}
@end
@implementation NSKeyValueMutatingCollectionMethodSet
@end
@implementation NSKeyValueMutatingArrayMethodSet
@end
@implementation NSKeyValueMutatingOrderedSetMethodSet
@end
@implementation NSKeyValueMutatingSetMethodSet
@end
@implementation NSKeyValueNilOrderedSetEnumerator
- (id)nextObject
{
return nil;
}
@end
@implementation NSKeyValueNilSetEnumerator
- (id)nextObject
{
return nil;
}
@end
@implementation NSKeyValueSlowGetter
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key containerIsa:(Class)containerIsa
{
SEL valueForKeySelector = @selector(valueForKey:);
Method valueForKeyMethod = class_getInstanceMethod(containerIsa, valueForKeySelector);
IMP valueForKeyIMP = method_getImplementation(valueForKeyMethod);
void *extra[1] = {
key
};
return [super initWithContainerClassID:cls key:key implementation:valueForKeyIMP selector:valueForKeySelector extraArguments:extra count:1];
}
@end
@implementation NSKeyValueSlowSetter
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key containerIsa:(Class)containerIsa
{
SEL setValueForKeySelector = @selector(setValue:forKey:);
Method setValueForKeyMethod = class_getInstanceMethod(containerIsa, setValueForKeySelector);
IMP setValueForKeyIMP = method_getImplementation(setValueForKeyMethod);
void *extra[1] = {
key
};
return [super initWithContainerClassID:cls key:key implementation:setValueForKeyIMP selector:setValueForKeySelector extraArguments:extra count:1];
}
@end
@implementation NSKeyValueProxyGetter
{
Class _proxyClass;
}
// Returns the shared proxy for (container, key), creating it and caching it in
// the proxy class's share table if none exists yet. The caller must hold
// _NSKeyValueProxySpinlock.
static id _NSGetProxyValueWithGetterNoLock(id obj, NSKeyValueProxyGetter* getter)
{
Class proxyClass = [getter proxyClass];
NSHashTable *proxyShare = [proxyClass _proxyShare];
if (_NSKeyValueProxyShareKey == nil)
{
_NSKeyValueProxyShareKey = [[NSKeyValueProxyShareKey alloc] init];
}
_NSKeyValueProxyShareKey->_container = obj;
_NSKeyValueProxyShareKey->_key = [getter key];
id proxy = [proxyShare member:_NSKeyValueProxyShareKey];
if (proxy)
{
proxy = [proxy retain];
}
else
{
proxy = [[proxyClass alloc] _proxyInitWithContainer:obj getter:(id)getter];
[proxyShare addObject:proxy];
}
[proxy autorelease];
return proxy;
}
static id _NSGetProxyValueWithGetter(id obj, SEL sel, NSKeyValueProxyGetter* getter)
{
OSSpinLockLock(&_NSKeyValueProxySpinlock);
id ret = _NSGetProxyValueWithGetterNoLock(obj, getter);
OSSpinLockUnlock(&_NSKeyValueProxySpinlock);
return ret;
}
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key proxyClass:(Class)proxyClass
{
void *extraArguments[1] = {
self
};
self = [super initWithContainerClassID:cls key:key implementation:(IMP)_NSGetProxyValueWithGetter selector:NULL extraArguments:extraArguments count:1];
if (self != nil)
{
_proxyClass = proxyClass;
}
return self;
}
- (Class)proxyClass
{
return _proxyClass;
}
@end
@implementation NSKeyValueCollectionGetter
{
NSKeyValueNonmutatingCollectionMethodSet *_methods;
}
- (NSKeyValueNonmutatingCollectionMethodSet *)methods
{
return _methods;
}
- (void)dealloc
{
[_methods release];
[super dealloc];
}
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key methods:(NSKeyValueNonmutatingCollectionMethodSet *)methods proxyClass:(Class)proxyClass
{
self = [super initWithContainerClassID:cls key:key proxyClass:proxyClass];
if (self != nil)
{
_methods = [methods retain];
}
return self;
}
@end
@implementation NSKeyValueSlowMutableCollectionGetter
{
NSKeyValueGetter *_baseGetter;
NSKeyValueSetter *_baseSetter;
}
- (void)dealloc
{
[_baseGetter release];
[_baseSetter release];
[super dealloc];
}
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key baseGetter:(NSKeyValueGetter *)baseGetter baseSetter:(NSKeyValueSetter *)baseSetter containerIsa:(Class)containerIsa proxyClass:(Class)proxyClass
{
self = [super initWithContainerClassID:cls key:key proxyClass:proxyClass];
if (self != nil)
{
if ([baseGetter isKindOfClass:[NSKeyValueUndefinedGetter self]])
{
_baseGetter = [[NSKeyValueSlowGetter alloc] initWithContainerClassID:cls key:key containerIsa:containerIsa];
}
else
{
_baseGetter = [baseGetter retain];
}
if ([baseSetter isKindOfClass:[NSKeyValueUndefinedSetter self]])
{
_baseSetter = [[NSKeyValueSlowSetter alloc] initWithContainerClassID:cls key:key containerIsa:containerIsa];
}
else
{
_baseSetter = [baseSetter retain];
}
}
return self;
}
- (BOOL)treatNilValuesLikeEmptyCollections
{
// Nil values stand in for empty collections only when the base getter is the
// generic valueForKey: fallback (no typed accessor or ivar was found for the
// key). Checking self here would always fail, since self is never a
// NSKeyValueSlowGetter or NSKeyValueUndefinedGetter; the check belongs on the
// base getter.
if ([_baseGetter isKindOfClass:[NSKeyValueSlowGetter self]] || [_baseGetter isKindOfClass:[NSKeyValueUndefinedGetter self]])
{
return YES;
}
return NO;
}
- (NSKeyValueSetter *)baseSetter
{
return _baseSetter;
}
- (NSKeyValueGetter *)baseGetter
{
return _baseGetter;
}
@end
@implementation NSKeyValueFastMutableCollection1Getter
{
NSKeyValueNonmutatingCollectionMethodSet *_nonmutatingMethods;
NSKeyValueMutatingCollectionMethodSet *_mutatingMethods;
}
- (void)dealloc
{
[_nonmutatingMethods release];
[_mutatingMethods release];
[super dealloc];
}
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key nonmutatingMethods:(NSKeyValueNonmutatingCollectionMethodSet *)nonmutatingMethods mutatingMethods:(NSKeyValueMutatingCollectionMethodSet *)mutatingMethods proxyClass:(Class)proxyClass
{
self = [super initWithContainerClassID:cls key:key proxyClass:proxyClass];
if (self != nil)
{
_nonmutatingMethods = [nonmutatingMethods retain];
_mutatingMethods = [mutatingMethods retain];
}
return self;
}
- (NSKeyValueMutatingCollectionMethodSet *)mutatingMethods
{
return _mutatingMethods;
}
- (NSKeyValueNonmutatingCollectionMethodSet *)nonmutatingMethods
{
return _nonmutatingMethods;
}
@end
@implementation NSKeyValueFastMutableCollection2Getter
{
NSKeyValueGetter *_baseGetter;
NSKeyValueMutatingCollectionMethodSet *_mutatingMethods;
}
- (void)dealloc
{
[_baseGetter release];
[_mutatingMethods release];
[super dealloc];
}
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key baseGetter:(NSKeyValueGetter *)baseGetter mutatingMethods:(NSKeyValueMutatingCollectionMethodSet *)mutatingMethods proxyClass:(Class)proxyClass
{
self = [super initWithContainerClassID:cls key:key proxyClass:proxyClass];
if (self != nil)
{
_baseGetter = [baseGetter retain];
_mutatingMethods = [mutatingMethods retain];
}
return self;
}
- (NSKeyValueMutatingCollectionMethodSet *)mutatingMethods
{
return _mutatingMethods;
}
- (NSKeyValueGetter *)baseGetter
{
return _baseGetter;
}
@end
@implementation NSKeyValueIvarMutableCollectionGetter
{
Ivar _ivar;
}
- (id)initWithContainerClassID:(Class)cls key:(NSString *)key containerIsa:(Class)containerIsa ivar:(Ivar)ivar proxyClass:(Class)proxyClass
{
self = [super initWithContainerClassID:cls key:key proxyClass:proxyClass];
if (self != nil)
{
_ivar = ivar;
}
return self;
}
- (Ivar)ivar
{
return _ivar;
}
@end
@implementation NSKeyValueNotifyingMutableCollectionGetter
{
NSKeyValueProxyGetter *_mutableCollectionGetter;
}
- (void)dealloc
{
[_mutableCollectionGetter release];
[super dealloc];
}
- (id)initWithContainerClassID:(Class)cls key:(NSString*)key mutableCollectionGetter:(NSKeyValueProxyGetter*)getter proxyClass:(Class)proxyClass
{
self = [super initWithContainerClassID:cls key:key proxyClass:proxyClass];
if (self != nil)
{
_mutableCollectionGetter = [getter retain];
}
return self;
}
- (NSKeyValueProxyGetter*)mutableCollectionGetter
{
return _mutableCollectionGetter;
}
@end
// Transient key object used only to probe a proxy share table for an existing
// (container, key) entry; it implements the caching protocol's lookup methods
// but is never registered as a real proxy.
@implementation NSKeyValueProxyShareKey
+ (NSHashTable *)_proxyShare
{
return nil;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
return NULL;
}
- (void)_proxyNonGCFinalize
{
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueCollectionGetter *)getter
{
return nil;
}
- (NSKeyValueProxyLocator)_proxyLocator
{
return (NSKeyValueProxyLocator) {
.container = _container,
.key = _key,
};
}
@end
extern Class NSClassFromObject(id object);
// Proxies hash and compare by (container, key), so repeated
// mutableArrayValueForKey:-style lookups for the same key path return the same
// cached proxy instance.
static NSUInteger NSKeyValueProxyHash(const void *item, NSUInteger (*size)(const void *))
{
id<NSKeyValueProxyCaching> proxy = (id<NSKeyValueProxyCaching>)item;
NSKeyValueProxyLocator proxyLocator = [proxy _proxyLocator];
return [proxyLocator.key hash] ^ (NSUInteger)proxyLocator.container;
}
static BOOL NSKeyValueProxyIsEqual(const void *item1, const void *item2, NSUInteger (*size)(const void *))
{
id<NSKeyValueProxyCaching> proxy1 = (id<NSKeyValueProxyCaching>)item1;
id<NSKeyValueProxyCaching> proxy2 = (id<NSKeyValueProxyCaching>)item2;
NSKeyValueProxyLocator proxyLocator1 = [proxy1 _proxyLocator];
NSKeyValueProxyLocator proxyLocator2 = [proxy2 _proxyLocator];
return proxyLocator1.container == proxyLocator2.container && [proxyLocator1.key isEqualToString:proxyLocator2.key];
}
static NSHashTable *_NSKeyValueProxyShareCreate(void)
{
NSPointerFunctions *pf = [[[NSPointerFunctions alloc] initWithOptions:NSPointerFunctionsWeakMemory] autorelease];
[pf setHashFunction:NSKeyValueProxyHash];
[pf setIsEqualFunction:NSKeyValueProxyIsEqual];
return [[NSHashTable alloc] initWithPointerFunctions:pf capacity:0];
}
// Removes the proxy from its class's share table unless a concurrent lookup
// resurrected it (extra retain count > 0). Returns YES if the caller should
// proceed with [super dealloc].
static BOOL _NSKeyValueProxyDeallocate(id <NSKeyValueProxyCaching>proxy)
{
BOOL dealloced = YES;
OSSpinLockLock(&_NSKeyValueProxySpinlock);
if (NSExtraRefCount(proxy) > 0)
{
OSSpinLockUnlock(&_NSKeyValueProxySpinlock);
return NO;
}
Class proxyClass = NSClassFromObject(proxy);
[[proxyClass _proxyShare] removeObject:proxy];
OSSpinLockUnlock(&_NSKeyValueProxySpinlock);
[proxy _proxyNonGCFinalize];
#warning Disable pooling of proxies until we actually enable reuse.
/*
OSSpinLockLock(&_NSKeyValueProxySpinlock);
NSKeyValueProxyPool *proxyPool = [proxyClass _proxyNonGCPoolPointer];
if (proxyPool->idx < PROXY_POOLS)
{
dealloced = NO;
proxyPool->proxy[proxyPool->idx] = proxy;
proxyPool->idx++;
}
OSSpinLockUnlock(&_NSKeyValueProxySpinlock);
*/
return dealloced;
}
@implementation NSKeyValueArray
{
NSObject *_container;
NSString *_key;
NSKeyValueNonmutatingArrayMethodSet *_methods;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (void)dealloc
{
if (_NSKeyValueProxyDeallocate(self))
{
[super dealloc];
}
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueCollectionGetter *)getter
{
self = [super init];
if (self != nil)
{
_container = [container retain];
_key = [[getter key] copy];
_methods = [(NSKeyValueNonmutatingArrayMethodSet *)[getter methods] retain];
}
return self;
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexSet
{
Method objectsAtIndexes = _methods->objectsAtIndexes;
if (objectsAtIndexes != NULL)
{
return ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, objectsAtIndexes, indexSet);
}
else
{
return [super objectsAtIndexes:indexSet];
}
}
- (id)objectAtIndex:(NSUInteger)idx
{
if (_methods->objectAtIndex != NULL)
{
return ((id(*)(id, Method, NSUInteger))method_invoke)(_container, _methods->objectAtIndex, idx);
}
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
NSArray *objects = ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, _methods->objectsAtIndexes, indexes);
[indexes release];
return [objects objectAtIndex:0];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
Method getObjectsRange = _methods->getObjectsRange;
if (getObjectsRange != NULL)
{
((void(*)(id, Method, id*, NSRange))method_invoke)(_container, getObjectsRange, objects, range);
}
else
{
[super getObjects:objects range:range];
}
}
- (NSUInteger)count
{
return ((NSUInteger(*)(id, Method))method_invoke)(_container, _methods->count);
}
- (void)_proxyNonGCFinalize
{
[_container release];
[_key release];
[_methods release];
_container = nil;
_key = nil;
_methods = nil;
}
- (NSKeyValueProxyLocator)_proxyLocator
{
return (NSKeyValueProxyLocator) {
.container = _container,
.key = _key,
};
}
@end
@implementation NSKeyValueOrderedSet
{
NSObject *_container;
NSString *_key;
NSKeyValueNonmutatingOrderedSetMethodSet *_methods;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (void)dealloc
{
if (_NSKeyValueProxyDeallocate(self))
{
[super dealloc];
}
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueCollectionGetter *)getter
{
self = [super init];
if (self != nil)
{
_container = [container retain];
_key = [[getter key] copy];
_methods = [(NSKeyValueNonmutatingOrderedSetMethodSet *)[getter methods] retain];
}
return self;
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexSet
{
if (_methods->objectsAtIndexes != NULL)
{
return ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, _methods->objectsAtIndexes, indexSet);
}
else
{
return [super objectsAtIndexes:indexSet];
}
}
- (id)objectAtIndex:(NSUInteger)idx
{
if (_methods->objectAtIndex != NULL)
{
return ((id(*)(id, Method, NSUInteger))method_invoke)(_container, _methods->objectAtIndex, idx);
}
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
NSArray *objects = ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, _methods->objectsAtIndexes, indexes);
[indexes release];
return [objects objectAtIndex:0];
}
- (NSUInteger)indexOfObject:(id)object
{
return ((NSUInteger(*)(id, Method, id))method_invoke)(_container, _methods->indexOfObject, object);
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
if (_methods->getObjectsRange != NULL)
{
((void(*)(id, Method, id*, NSRange))method_invoke)(_container, _methods->getObjectsRange, objects, range);
}
else
{
[super getObjects:objects range:range];
}
}
- (NSUInteger)count
{
return ((NSUInteger(*)(id, Method))method_invoke)(_container, _methods->count);
}
- (void)_proxyNonGCFinalize
{
[_container release];
[_key release];
[_methods release];
_container = nil;
_key = nil;
_methods = nil;
}
- (NSKeyValueProxyLocator)_proxyLocator
{
return (NSKeyValueProxyLocator) {
.container = _container,
.key = _key,
};
}
@end
@implementation NSKeyValueSet
{
NSObject *_container;
NSString *_key;
NSKeyValueNonmutatingSetMethodSet *_methods;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (void)dealloc
{
if (_NSKeyValueProxyDeallocate(self))
{
[super dealloc];
}
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueCollectionGetter *)getter
{
self = [super init];
if (self != nil)
{
_container = [container retain];
_key = [[getter key] copy];
_methods = [(NSKeyValueNonmutatingSetMethodSet *)[getter methods] retain];
}
return self;
}
- (NSEnumerator *)objectEnumerator
{
return ((NSEnumerator*(*)(id, Method))method_invoke)(_container, _methods->enumerator);
}
- (id)member:(id)object
{
return ((id(*)(id, Method, id))method_invoke)(_container, _methods->member, object);
}
- (NSUInteger)count
{
return ((NSUInteger(*)(id, Method))method_invoke)(_container, _methods->count);
}
- (void)_proxyNonGCFinalize
{
[_container release];
[_key release];
[_methods release];
_container = nil;
_key = nil;
_methods = nil;
}
- (NSKeyValueProxyLocator)_proxyLocator
{
return (NSKeyValueProxyLocator) {
.container = _container,
.key = _key,
};
}
@end
@implementation NSKeyValueMutableArray
{
@public
NSObject *_container;
NSString *_key;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
[self doesNotRecognizeSelector:_cmd];
return NULL;
}
- (void)dealloc
{
if (_NSKeyValueProxyDeallocate(self))
{
[super dealloc];
}
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueCollectionGetter *)getter
{
self = [super init];
if (self != nil)
{
_container = [container retain];
_key = [[getter key] copy];
}
return self;
}
- (void)setArray:(NSArray *)array
{
[self removeAllObjects];
for (id obj in array)
{
[self addObject:obj];
}
}
- (void)_proxyNonGCFinalize
{
[_container release];
[_key release];
_container = nil;
_key = nil;
}
- (NSKeyValueProxyLocator)_proxyLocator
{
return (NSKeyValueProxyLocator) {
.container = _container,
.key = _key,
};
}
@end
@implementation NSKeyValueMutableOrderedSet
{
@public
NSObject *_container;
NSString *_key;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
[self doesNotRecognizeSelector:_cmd];
return NULL;
}
- (void)dealloc
{
if (_NSKeyValueProxyDeallocate(self))
{
[super dealloc];
}
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueCollectionGetter *)getter
{
self = [super init];
if (self != nil)
{
_container = [container retain];
_key = [[getter key] copy];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[_container release];
[_key release];
_container = nil;
_key = nil;
}
- (NSKeyValueProxyLocator)_proxyLocator
{
return (NSKeyValueProxyLocator) {
.container = _container,
.key = _key,
};
}
@end
@implementation NSKeyValueMutableSet
{
@public
NSObject *_container;
NSString *_key;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
[self doesNotRecognizeSelector:_cmd];
return NULL;
}
- (void)dealloc
{
if (_NSKeyValueProxyDeallocate(self))
{
[super dealloc];
}
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueCollectionGetter *)getter
{
self = [super init];
if (self != nil)
{
_container = [container retain];
_key = [[getter key] copy];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[_container release];
[_key release];
_container = nil;
_key = nil;
}
- (NSKeyValueProxyLocator)_proxyLocator
{
return (NSKeyValueProxyLocator) {
.container = _container,
.key = _key,
};
}
@end
@implementation NSKeyValueSlowMutableArray
{
NSKeyValueGetter *_valueGetter;
NSKeyValueSetter *_valueSetter;
BOOL _treatNilValuesLikeEmptyArrays;
char _padding[3];
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueSlowMutableCollectionGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter *)getter];
if (self != nil)
{
_valueGetter = [[getter baseGetter] retain];
_valueSetter = [[getter baseSetter] retain];
_treatNilValuesLikeEmptyArrays = [getter treatNilValuesLikeEmptyCollections];
}
return self;
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
NSMutableArray *array = [self _createNonNilMutableArrayValueWithSelector:_cmd];
[array replaceObjectsAtIndexes:indexes withObjects:objects];
_NSSetUsingKeyValueSetter(_container, _valueSetter, array);
[array release];
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
NSMutableArray *array = [self _createNonNilMutableArrayValueWithSelector:_cmd];
[array replaceObjectAtIndex:idx withObject:object];
_NSSetUsingKeyValueSetter(_container, _valueSetter, array);
[array release];
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
NSMutableArray *array = [self _createNonNilMutableArrayValueWithSelector:_cmd];
[array removeObjectsAtIndexes:indexes];
_NSSetUsingKeyValueSetter(_container, _valueSetter, array);
[array release];
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
NSMutableArray *array = [self _createNonNilMutableArrayValueWithSelector:_cmd];
[array removeObjectAtIndex:idx];
_NSSetUsingKeyValueSetter(_container, _valueSetter, array);
[array release];
}
- (void)removeLastObject
{
NSMutableArray *array = [self _createNonNilMutableArrayValueWithSelector:_cmd];
[array removeLastObject];
_NSSetUsingKeyValueSetter(_container, _valueSetter, array);
[array release];
}
- (NSMutableArray *)_createNonNilMutableArrayValueWithSelector:(SEL)selector
{
NSArray *array = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (array == nil)
{
[self _raiseNilValueExceptionWithSelector:selector];
return nil;
}
return [array mutableCopy];
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
NSArray *array = _NSGetUsingKeyValueGetter(_container, _valueGetter);
NSMutableArray *copy;
if (array == nil)
{
if (_treatNilValuesLikeEmptyArrays &&
[objects count] == [indexes count] &&
[indexes lastIndex] + 1 == [objects count])
{
copy = [objects mutableCopy];
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
else
{
copy = [array mutableCopy];
[copy insertObjects:objects atIndexes:indexes];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, copy);
[copy release];
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
NSArray *array = _NSGetUsingKeyValueGetter(_container, _valueGetter);
NSMutableArray *copy;
if (array == nil)
{
if (_treatNilValuesLikeEmptyArrays && idx == 0)
{
copy = [[NSMutableArray alloc] initWithObjects:&object count:1];
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
else
{
copy = [array mutableCopy];
[copy insertObject:object atIndex:idx];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, copy);
[copy release];
}
- (void)addObject:(id)object
{
NSArray *array = _NSGetUsingKeyValueGetter(_container, _valueGetter);
NSMutableArray *copy;
if (array == nil)
{
if (_treatNilValuesLikeEmptyArrays)
{
copy = [[NSMutableArray alloc] initWithObjects:&object count:1];
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
else
{
copy = [array mutableCopy];
[copy addObject:object];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, copy);
[copy release];
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
return [[self _nonNilArrayValueWithSelector:_cmd] objectsAtIndexes:indexes];
}
- (id)objectAtIndex:(NSUInteger)idx
{
return [[self _nonNilArrayValueWithSelector:_cmd] objectAtIndex:idx];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
[[self _nonNilArrayValueWithSelector:_cmd] getObjects:objects range:range];
}
- (NSArray *)_nonNilArrayValueWithSelector:(SEL)selector
{
NSArray *array = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (array == nil)
{
[self _raiseNilValueExceptionWithSelector:selector];
return nil;
}
return array;
}
- (NSUInteger)count
{
NSArray *array = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (array == nil)
{
if (!_treatNilValuesLikeEmptyArrays)
{
[self _raiseNilValueExceptionWithSelector:_cmd];
}
return 0;
}
return [array count];
}
- (void)_raiseNilValueExceptionWithSelector:(SEL)selector
{
[NSException raise:_treatNilValuesLikeEmptyArrays ? NSInternalInconsistencyException : NSRangeException
format:@"key %@ of array %@ is nil for selector %s", _key, _container, sel_getName(selector)];
}
- (void)_proxyNonGCFinalize
{
[_valueGetter release];
[_valueSetter release];
[super _proxyNonGCFinalize];
_valueGetter = nil;
_valueSetter = nil;
}
@end
@implementation NSKeyValueSlowMutableOrderedSet
{
NSKeyValueGetter *_valueGetter;
NSKeyValueSetter *_valueSetter;
BOOL _treatNilValuesLikeEmptyOrderedSets;
char _padding[3];
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueSlowMutableCollectionGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter *)getter];
if (self != nil)
{
_valueGetter = [[getter baseGetter] retain];
_valueSetter = [[getter baseSetter] retain];
_treatNilValuesLikeEmptyOrderedSets = [getter treatNilValuesLikeEmptyCollections];
}
return self;
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
NSMutableOrderedSet *orderedSet = [self _createNonNilMutableOrderedSetValueWithSelector:_cmd];
[orderedSet replaceObjectsAtIndexes:indexes withObjects:objects];
_NSSetUsingKeyValueSetter(_container, _valueSetter, orderedSet);
[orderedSet release];
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
NSMutableOrderedSet *orderedSet = [self _createNonNilMutableOrderedSetValueWithSelector:_cmd];
[orderedSet replaceObjectAtIndex:idx withObject:object];
_NSSetUsingKeyValueSetter(_container, _valueSetter, orderedSet);
[orderedSet release];
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
NSMutableOrderedSet *orderedSet = [self _createNonNilMutableOrderedSetValueWithSelector:_cmd];
[orderedSet removeObjectsAtIndexes:indexes];
_NSSetUsingKeyValueSetter(_container, _valueSetter, orderedSet);
[orderedSet release];
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
NSMutableOrderedSet *orderedSet = [self _createNonNilMutableOrderedSetValueWithSelector:_cmd];
[orderedSet removeObjectAtIndex:idx];
_NSSetUsingKeyValueSetter(_container, _valueSetter, orderedSet);
[orderedSet release];
}
- (NSMutableOrderedSet *)_createNonNilMutableOrderedSetValueWithSelector:(SEL)selector
{
NSOrderedSet *orderedSet = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (orderedSet == nil)
{
[self _raiseNilValueExceptionWithSelector:selector];
return nil;
}
return [orderedSet mutableCopy];
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
NSOrderedSet *orderedSet = _NSGetUsingKeyValueGetter(_container, _valueGetter);
NSMutableOrderedSet *copy;
if (orderedSet == nil)
{
if (_treatNilValuesLikeEmptyOrderedSets &&
[objects count] == [indexes count] &&
[indexes lastIndex] + 1 == [objects count])
{
copy = [[NSMutableOrderedSet alloc] initWithArray:objects];
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
else
{
copy = [orderedSet mutableCopy];
[copy insertObjects:objects atIndexes:indexes];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, copy);
[copy release];
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
NSOrderedSet *orderedSet = _NSGetUsingKeyValueGetter(_container, _valueGetter);
NSMutableOrderedSet *copy;
if (orderedSet == nil)
{
if (_treatNilValuesLikeEmptyOrderedSets && idx == 0)
{
copy = [[NSMutableOrderedSet alloc] initWithObjects:&object count:1];
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
else
{
copy = [orderedSet mutableCopy];
[copy insertObject:object atIndex:idx];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, copy);
[copy release];
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
return [[self _nonNilOrderedSetValueWithSelector:_cmd] objectsAtIndexes:indexes];
}
- (id)objectAtIndex:(NSUInteger)index
{
return [[self _nonNilOrderedSetValueWithSelector:_cmd] objectAtIndex:index];
}
- (NSUInteger)indexOfObject:(id)object
{
return [[self _nonNilOrderedSetValueWithSelector:_cmd] indexOfObject:object];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
[[self _nonNilOrderedSetValueWithSelector:_cmd] getObjects:objects range:range];
}
- (NSOrderedSet *)_nonNilOrderedSetValueWithSelector:(SEL)selector
{
NSOrderedSet *orderedSet = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (orderedSet == nil)
{
[self _raiseNilValueExceptionWithSelector:selector];
return nil;
}
return orderedSet;
}
- (NSUInteger)count
{
NSOrderedSet *orderedSet = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (orderedSet == nil)
{
if (_treatNilValuesLikeEmptyOrderedSets)
{
return 0;
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return 0;
}
}
return [orderedSet count];
}
- (void)_raiseNilValueExceptionWithSelector:(SEL)selector
{
[NSException raise:_treatNilValuesLikeEmptyOrderedSets ? NSInternalInconsistencyException : NSRangeException
format:@"key %@ of ordered set %@ is nil for selector %s", _key, _container, sel_getName(selector)];
}
- (void)_proxyNonGCFinalize
{
[_valueGetter release];
[_valueSetter release];
[super _proxyNonGCFinalize];
_valueGetter = nil;
_valueSetter = nil;
}
@end
@implementation NSKeyValueSlowMutableSet
{
NSKeyValueGetter *_valueGetter;
NSKeyValueSetter *_valueSetter;
BOOL _treatNilValuesLikeEmptySets;
char _padding[3];
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueSlowMutableCollectionGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter *)getter];
if (self != nil)
{
_valueGetter = [[getter baseGetter] retain];
_valueSetter = [[getter baseSetter] retain];
_treatNilValuesLikeEmptySets = [getter treatNilValuesLikeEmptyCollections];
}
return self;
}
- (void)unionSet:(NSSet *)otherSet
{
NSMutableSet *set = [self _createMutableSetValueWithSelector:_cmd];
if (set != nil)
{
[set unionSet:otherSet];
}
else
{
set = [otherSet mutableCopy];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, set);
[set release];
}
- (void)setSet:(NSSet *)otherSet
{
if (_treatNilValuesLikeEmptySets ||
_NSGetUsingKeyValueGetter(_container, _valueGetter) != nil)
{
_NSSetUsingKeyValueSetter(_container, _valueSetter, otherSet);
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
}
}
- (void)removeObject:(id)object
{
NSMutableSet *set = [self _createMutableSetValueWithSelector:_cmd];
if (set != nil)
{
[set removeObject:object];
_NSSetUsingKeyValueSetter(_container, _valueSetter, set);
[set release];
}
}
- (void)removeAllObjects
{
NSSet *set = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (set == nil)
{
if (!_treatNilValuesLikeEmptySets)
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
NSSet *emptySet = [[NSSet alloc] init];
_NSSetUsingKeyValueSetter(_container, _valueSetter, emptySet);
[emptySet release];
}
- (void)minusSet:(NSSet *)otherSet
{
NSMutableSet *set = [self _createMutableSetValueWithSelector:_cmd];
if (set != nil)
{
[set minusSet:otherSet];
_NSSetUsingKeyValueSetter(_container, _valueSetter, set);
[set release];
}
}
- (void)intersectSet:(NSSet *)otherSet
{
NSMutableSet *set = [self _createMutableSetValueWithSelector:_cmd];
if (set != nil)
{
[set intersectSet:otherSet];
_NSSetUsingKeyValueSetter(_container, _valueSetter, set);
[set release];
}
}
- (void)addObjectsFromArray:(NSArray *)array
{
NSMutableSet *set = [self _createMutableSetValueWithSelector:_cmd];
if (set == nil)
{
set = [[NSMutableSet alloc] initWithArray:array];
}
else
{
[set addObjectsFromArray:array];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, set);
[set release];
}
- (void)addObject:(id)object
{
NSMutableSet *set = [self _createMutableSetValueWithSelector:_cmd];
if (set == nil)
{
set = [[NSMutableSet alloc] initWithObjects:&object count:1];
}
else
{
[set addObject:object];
}
_NSSetUsingKeyValueSetter(_container, _valueSetter, set);
[set release];
}
- (NSMutableSet *)_createMutableSetValueWithSelector:(SEL)selector
{
NSSet *set = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (set == nil)
{
if (!_treatNilValuesLikeEmptySets)
{
[self _raiseNilValueExceptionWithSelector:selector];
}
return nil;
}
return [set mutableCopy];
}
- (NSEnumerator *)objectEnumerator
{
NSSet *set = [self _setValueWithSelector:_cmd];
if (set == nil)
{
return [[[NSKeyValueNilSetEnumerator alloc] init] autorelease];
}
else
{
return [set objectEnumerator];
}
}
- (id)member:(id)object
{
return [[self _setValueWithSelector:_cmd] member:object];
}
- (NSUInteger)count
{
return [[self _setValueWithSelector:_cmd] count];
}
- (NSSet *)_setValueWithSelector:(SEL)selector
{
NSSet *set = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (set == nil && !_treatNilValuesLikeEmptySets)
{
[self _raiseNilValueExceptionWithSelector:selector];
return nil;
}
return set;
}
- (void)_raiseNilValueExceptionWithSelector:(SEL)selector
{
[NSException raise:NSInternalInconsistencyException
format:@"key %@ of set %@ is nil for selector %s", _key, _container, sel_getName(selector)];
}
- (void)_proxyNonGCFinalize
{
[_valueGetter release];
[_valueSetter release];
[super _proxyNonGCFinalize];
_valueGetter = nil;
_valueSetter = nil;
}
@end
@implementation NSKeyValueFastMutableArray
{
NSKeyValueMutatingArrayMethodSet *_mutatingMethods;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueProxyGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter *)getter];
if (self != nil)
{
_mutatingMethods = [(NSKeyValueMutatingArrayMethodSet *)[getter mutatingMethods] retain];
}
return self;
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
if (_mutatingMethods->replaceObjectsAtIndexes != NULL)
{
((void(*)(id, Method, NSIndexSet*, NSArray*))method_invoke)(_container, _mutatingMethods->replaceObjectsAtIndexes, indexes, objects);
}
else
{
[super replaceObjectsAtIndexes:indexes withObjects:objects];
}
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
if (_mutatingMethods->replaceObjectAtIndex != NULL)
{
((void(*)(id, Method, NSUInteger, id))method_invoke)(_container, _mutatingMethods->replaceObjectAtIndex, idx, object);
}
else if (_mutatingMethods->replaceObjectsAtIndexes != NULL)
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
NSArray *objects = [[NSArray alloc] initWithObjects:&object count:1];
((void(*)(id, Method, NSIndexSet*, NSArray*))method_invoke)(_container, _mutatingMethods->replaceObjectsAtIndexes, indexes, objects);
[indexes release];
[objects release];
}
else
{
[self removeObjectAtIndex:idx];
[self insertObject:object atIndex:idx];
}
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
if (_mutatingMethods->removeObjectsAtIndexes != NULL)
{
((void(*)(id, Method, NSIndexSet*))method_invoke)(_container, _mutatingMethods->removeObjectsAtIndexes, indexes);
}
else
{
[super removeObjectsAtIndexes:indexes];
}
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
if (_mutatingMethods->removeObjectAtIndex != NULL)
{
((void(*)(id, Method, NSUInteger))method_invoke)(_container, _mutatingMethods->removeObjectAtIndex, idx);
return;
}
// The fast mutable proxy is only vended when an indexed removal method was
// found on the container, so if removeObjectAtIndex is NULL,
// removeObjectsAtIndexes is guaranteed to be non-NULL here.
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
((void(*)(id, Method, NSIndexSet*))method_invoke)(_container, _mutatingMethods->removeObjectsAtIndexes, indexes);
[indexes release];
}
- (void)removeLastObject
{
[self removeObjectAtIndex:[self count] - 1];
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
if (_mutatingMethods->insertObjectsAtIndexes != NULL)
{
((void(*)(id, Method, NSArray*, NSIndexSet*))method_invoke)(_container, _mutatingMethods->insertObjectsAtIndexes, objects, indexes);
}
else
{
[super insertObjects:objects atIndexes:indexes];
}
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
if (_mutatingMethods->insertObjectAtIndex != NULL)
{
((void(*)(id, Method, id, NSUInteger))method_invoke)(_container, _mutatingMethods->insertObjectAtIndex, object, idx);
return;
}
// As with removal, the proxy getter guarantees that at least one insertion
// method exists, so insertObjectsAtIndexes must be non-NULL on this path.
NSArray *objects = [[NSArray alloc] initWithObjects:&object count:1];
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
((void(*)(id, Method, NSArray*, NSIndexSet*))method_invoke)(_container, _mutatingMethods->insertObjectsAtIndexes, objects, indexes);
[objects release];
[indexes release];
}
- (void)addObject:(id)object
{
[self insertObject:object atIndex:[self count]];
}
- (void)_proxyNonGCFinalize
{
[_mutatingMethods release];
[super _proxyNonGCFinalize];
_mutatingMethods = nil;
}
@end
@implementation NSKeyValueFastMutableArray1
{
NSKeyValueNonmutatingArrayMethodSet *_nonmutatingMethods;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueFastMutableCollection1Getter *)getter
{
self = [super _proxyInitWithContainer:container getter:getter];
if (self != nil)
{
_nonmutatingMethods = [(NSKeyValueNonmutatingArrayMethodSet *)[getter nonmutatingMethods] retain];
}
return self;
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
if (_nonmutatingMethods->objectsAtIndexes != NULL)
{
return ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, _nonmutatingMethods->objectsAtIndexes, indexes);
}
else
{
return [super objectsAtIndexes:indexes];
}
}
- (id)objectAtIndex:(NSUInteger)idx
{
if (_nonmutatingMethods->objectAtIndex != NULL)
{
return ((id(*)(id, Method, NSUInteger))method_invoke)(_container, _nonmutatingMethods->objectAtIndex, idx);
}
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
NSArray *objects = ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, _nonmutatingMethods->objectsAtIndexes, indexes);
[indexes release];
return [objects objectAtIndex:0];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
if (_nonmutatingMethods->getObjectsRange != NULL)
{
((void(*)(id, Method, id*, NSRange))method_invoke)(_container, _nonmutatingMethods->getObjectsRange, objects, range);
}
else
{
[super getObjects:objects range:range];
}
}
- (NSUInteger)count
{
return ((NSUInteger(*)(id, Method))method_invoke)(_container, _nonmutatingMethods->count);
}
- (void)_proxyNonGCFinalize
{
[_nonmutatingMethods release];
[super _proxyNonGCFinalize];
_nonmutatingMethods = nil;
}
@end
@implementation NSKeyValueFastMutableArray2
{
NSKeyValueGetter *_valueGetter;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueFastMutableCollection2Getter *)getter
{
self = [super _proxyInitWithContainer:container getter:getter];
if (self != nil)
{
_valueGetter = [[getter baseGetter] retain];
}
return self;
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
return [[self _nonNilArrayValueWithSelector:_cmd] objectsAtIndexes:indexes];
}
- (id)objectAtIndex:(NSUInteger)idx
{
return [[self _nonNilArrayValueWithSelector:_cmd] objectAtIndex:idx];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
[[self _nonNilArrayValueWithSelector:_cmd] getObjects:objects range:range];
}
- (NSUInteger)count
{
return [[self _nonNilArrayValueWithSelector:_cmd] count];
}
- (NSArray *)_nonNilArrayValueWithSelector:(SEL)selector
{
NSArray *array = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (array == nil)
{
[NSException raise:NSInternalInconsistencyException
format:@"key %@ of array %@ is nil for selector %s", _key, _container, sel_getName(selector)];
return nil;
}
return array;
}
- (void)_proxyNonGCFinalize
{
[_valueGetter release];
[super _proxyNonGCFinalize];
_valueGetter = nil;
}
@end
@implementation NSKeyValueFastMutableOrderedSet
{
NSKeyValueMutatingOrderedSetMethodSet *_mutatingMethods;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueProxyGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter *)getter];
if (self != nil)
{
_mutatingMethods = [(NSKeyValueMutatingOrderedSetMethodSet *)[getter mutatingMethods] retain];
}
return self;
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
if (_mutatingMethods->replaceObjectsAtIndexes != NULL)
{
((void(*)(id, Method, NSIndexSet*, NSArray*))method_invoke)(_container, _mutatingMethods->replaceObjectsAtIndexes, indexes, objects);
}
else
{
[super replaceObjectsAtIndexes:indexes withObjects:objects];
}
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
if (_mutatingMethods->replaceObjectAtIndex != NULL)
{
((void(*)(id, Method, NSUInteger, id))method_invoke)(_container, _mutatingMethods->replaceObjectAtIndex, idx, object);
}
else if (_mutatingMethods->replaceObjectsAtIndexes != NULL)
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
NSArray *objects = [[NSArray alloc] initWithObjects:&object count:1];
((void(*)(id, Method, NSIndexSet*, NSArray*))method_invoke)(_container, _mutatingMethods->replaceObjectsAtIndexes, indexes, objects);
[indexes release];
[objects release];
}
else
{
[self removeObjectAtIndex:idx];
[self insertObject:object atIndex:idx];
}
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
if (_mutatingMethods->removeObjectsAtIndexes != NULL)
{
((void(*)(id, Method, NSIndexSet*))method_invoke)(_container, _mutatingMethods->removeObjectsAtIndexes, indexes);
}
else
{
[super removeObjectsAtIndexes:indexes];
}
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
if (_mutatingMethods->removeObjectAtIndex != NULL)
{
((void(*)(id, Method, NSUInteger))method_invoke)(_container, _mutatingMethods->removeObjectAtIndex, idx);
return;
}
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
((void(*)(id, Method, NSIndexSet*))method_invoke)(_container, _mutatingMethods->removeObjectsAtIndexes, indexes);
[indexes release];
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
if (_mutatingMethods->insertObjectsAtIndexes != NULL)
{
((void(*)(id, Method, NSArray*, NSIndexSet*))method_invoke)(_container, _mutatingMethods->insertObjectsAtIndexes, objects, indexes);
}
else
{
[super insertObjects:objects atIndexes:indexes];
}
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
if (_mutatingMethods->insertObjectAtIndex != NULL)
{
((void(*)(id, Method, id, NSUInteger))method_invoke)(_container, _mutatingMethods->insertObjectAtIndex, object, idx);
return;
}
NSArray *objects = [[NSArray alloc] initWithObjects:&object count:1];
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
((void(*)(id, Method, NSArray*, NSIndexSet*))method_invoke)(_container, _mutatingMethods->insertObjectsAtIndexes, objects, indexes);
[objects release];
[indexes release];
}
- (void)_proxyNonGCFinalize
{
[_mutatingMethods release];
[super _proxyNonGCFinalize];
_mutatingMethods = nil;
}
@end
@implementation NSKeyValueFastMutableOrderedSet1
{
NSKeyValueNonmutatingOrderedSetMethodSet *_nonmutatingMethods;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueFastMutableCollection1Getter *)getter
{
self = [super _proxyInitWithContainer:container getter:getter];
if (self != nil)
{
_nonmutatingMethods = [(NSKeyValueNonmutatingOrderedSetMethodSet *)[getter nonmutatingMethods] retain];
}
return self;
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
if (_nonmutatingMethods->objectsAtIndexes != NULL)
{
return ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, _nonmutatingMethods->objectsAtIndexes, indexes);
}
else
{
return [super objectsAtIndexes:indexes];
}
}
- (id)objectAtIndex:(NSUInteger)idx
{
if (_nonmutatingMethods->objectAtIndex != NULL)
{
return ((id(*)(id, Method, NSUInteger))method_invoke)(_container, _nonmutatingMethods->objectAtIndex, idx);
}
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
NSArray *objects = ((NSArray*(*)(id, Method, NSIndexSet*))method_invoke)(_container, _nonmutatingMethods->objectsAtIndexes, indexes);
[indexes release];
return [objects objectAtIndex:0];
}
- (NSUInteger)indexOfObject:(id)object
{
return ((NSUInteger(*)(id, Method, id))method_invoke)(_container, _nonmutatingMethods->indexOfObject, object);
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
if (_nonmutatingMethods->getObjectsRange != NULL)
{
((void(*)(id, Method, id*, NSRange))method_invoke)(_container, _nonmutatingMethods->getObjectsRange, objects, range);
}
else
{
[super getObjects:objects range:range];
}
}
- (NSUInteger)count
{
return ((NSUInteger(*)(id, Method))method_invoke)(_container, _nonmutatingMethods->count);
}
- (void)_proxyNonGCFinalize
{
[_nonmutatingMethods release];
[super _proxyNonGCFinalize];
_nonmutatingMethods = nil;
}
@end
@implementation NSKeyValueFastMutableOrderedSet2
{
NSKeyValueGetter *_valueGetter;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueFastMutableCollection2Getter *)getter
{
self = [super _proxyInitWithContainer:container getter:getter];
if (self != nil)
{
_valueGetter = [[getter baseGetter] retain];
}
return self;
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
return [[self _nonNilOrderedSetValueWithSelector:_cmd] objectsAtIndexes:indexes];
}
- (id)objectAtIndex:(NSUInteger)idx
{
return [[self _nonNilOrderedSetValueWithSelector:_cmd] objectAtIndex:idx];
}
- (NSUInteger)indexOfObject:(id)object
{
return [[self _nonNilOrderedSetValueWithSelector:_cmd] indexOfObject:object];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
[[self _nonNilOrderedSetValueWithSelector:_cmd] getObjects:objects range:range];
}
- (NSUInteger)count
{
return [[self _nonNilOrderedSetValueWithSelector:_cmd] count];
}
- (NSOrderedSet *)_nonNilOrderedSetValueWithSelector:(SEL)selector
{
NSOrderedSet *orderedSet = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (orderedSet == nil)
{
[NSException raise:NSInternalInconsistencyException
format:@"key %@ of ordered set %@ is nil for selector %s", _key, _container, sel_getName(selector)];
return nil;
}
return orderedSet;
}
- (void)_proxyNonGCFinalize
{
[_valueGetter release];
[super _proxyNonGCFinalize];
_valueGetter = nil;
}
@end
@implementation NSKeyValueFastMutableSet
{
NSKeyValueMutatingSetMethodSet *_mutatingMethods;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueProxyGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter *)getter];
if (self != nil)
{
_mutatingMethods = [(NSKeyValueMutatingSetMethodSet *)[getter mutatingMethods] retain];
}
return self;
}
- (void)unionSet:(NSSet *)set
{
if (_mutatingMethods->unionSet != NULL)
{
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->unionSet, set);
}
else
{
[super unionSet:set];
}
}
- (void)setSet:(NSSet *)set
{
if (_mutatingMethods->setSet != NULL)
{
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->setSet, set);
}
else
{
[super setSet:set];
}
}
- (void)removeObject:(id)object
{
if (_mutatingMethods->removeObject != NULL)
{
((void(*)(id, Method, id))method_invoke)(_container, _mutatingMethods->removeObject, object);
return;
}
// The set proxy is only created when a removal method was found, so
// minusSet is guaranteed to be non-NULL when removeObject is not.
NSSet *objects = [[NSSet alloc] initWithObjects:&object count:1];
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->minusSet, objects);
[objects release];
}
- (void)removeAllObjects
{
if (_mutatingMethods->setSet != NULL)
{
NSMutableSet *set = [[NSMutableSet alloc] init];
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->setSet, set);
[set release];
}
else
{
[super removeAllObjects];
}
}
- (void)minusSet:(NSSet *)set
{
if (_mutatingMethods->minusSet != NULL)
{
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->minusSet, set);
}
else
{
[super minusSet:set];
}
}
- (void)intersectSet:(NSSet *)set
{
if (_mutatingMethods->intersectSet != NULL)
{
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->intersectSet, set);
}
else
{
[super intersectSet:set];
}
}
- (void)addObjectsFromArray:(NSArray *)array
{
if (_mutatingMethods->unionSet != NULL)
{
NSMutableSet *set = [[NSMutableSet alloc] initWithArray:array];
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->unionSet, set);
[set release];
}
else
{
[super addObjectsFromArray:array];
}
}
- (void)addObject:(id)object
{
if (_mutatingMethods->addObject != NULL)
{
((void(*)(id, Method, id))method_invoke)(_container, _mutatingMethods->addObject, object);
return;
}
// Symmetrically, unionSet is guaranteed to be non-NULL when addObject is not.
NSSet *objects = [[NSSet alloc] initWithObjects:&object count:1];
((void(*)(id, Method, NSSet*))method_invoke)(_container, _mutatingMethods->unionSet, objects);
[objects release];
}
- (void)_proxyNonGCFinalize
{
[_mutatingMethods release];
[super _proxyNonGCFinalize];
_mutatingMethods = nil;
}
@end
@implementation NSKeyValueFastMutableSet1
{
NSKeyValueNonmutatingSetMethodSet *_nonmutatingMethods;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueFastMutableCollection1Getter *)getter
{
self = [super _proxyInitWithContainer:container getter:getter];
if (self != nil)
{
_nonmutatingMethods = [(NSKeyValueNonmutatingSetMethodSet *)[getter nonmutatingMethods] retain];
}
return self;
}
- (NSEnumerator *)objectEnumerator
{
return ((NSEnumerator*(*)(id, Method))method_invoke)(_container, _nonmutatingMethods->enumerator);
}
- (id)member:(id)object
{
return ((id(*)(id, Method, id))method_invoke)(_container, _nonmutatingMethods->member, object);
}
- (NSUInteger)count
{
return ((NSUInteger(*)(id, Method))method_invoke)(_container, _nonmutatingMethods->count);
}
- (void)_proxyNonGCFinalize
{
[_nonmutatingMethods release];
[super _proxyNonGCFinalize];
_nonmutatingMethods = nil;
}
@end
@implementation NSKeyValueFastMutableSet2
{
NSKeyValueGetter *_valueGetter;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueFastMutableCollection2Getter *)getter
{
self = [super _proxyInitWithContainer:container getter:getter];
if (self != nil)
{
_valueGetter = [[getter baseGetter] retain];
}
return self;
}
- (NSEnumerator *)objectEnumerator
{
return [[self _nonNilSetValueWithSelector:_cmd] objectEnumerator];
}
- (id)member:(id)object
{
return [[self _nonNilSetValueWithSelector:_cmd] member:object];
}
- (NSUInteger)count
{
return [[self _nonNilSetValueWithSelector:_cmd] count];
}
- (NSSet *)_nonNilSetValueWithSelector:(SEL)selector
{
NSSet *set = _NSGetUsingKeyValueGetter(_container, _valueGetter);
if (set == nil)
{
[NSException raise:NSInternalInconsistencyException
format:@"key %@ of set %@ is nil for selector %s", _key, _container, sel_getName(selector)];
return nil;
}
return set;
}
- (void)_proxyNonGCFinalize
{
[_valueGetter release];
[super _proxyNonGCFinalize];
_valueGetter = nil;
}
@end
@implementation NSKeyValueIvarMutableArray
{
Ivar _ivar;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (NSMutableArray*)_nonNilMutableArrayValueWithSelector:(SEL)selector
{
NSMutableArray* mutableArray = *(NSMutableArray**)((char*)_container + ivar_getOffset(_ivar));
if (!mutableArray)
{
[self _raiseNilValueExceptionWithSelector:selector];
}
return mutableArray;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueIvarMutableCollectionGetter*)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter*)getter];
if (self != nil)
{
_ivar = [getter ivar];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[super _proxyNonGCFinalize];
_ivar = NULL;
}
- (void)_raiseNilValueExceptionWithSelector:(SEL)selector
{
[NSException raise:NSRangeException format:@"%@: value for key %@ of object %p is nil",
_NSMethodExceptionProem(_container, selector), _key, (void*)_container];
}
- (void)addObject:(id)object
{
NSMutableArray **mutableArrayIvar = (NSMutableArray**)((char*)_container + ivar_getOffset(_ivar));
NSMutableArray *mutableArray = *mutableArrayIvar;
if (mutableArray)
{
[mutableArray addObject:object];
}
else
{
*mutableArrayIvar = [[NSMutableArray alloc] initWithObjects:&object count:1];
}
}
- (NSUInteger)count
{
NSMutableArray *mutableArray = *(NSMutableArray**)((char*)_container + ivar_getOffset(_ivar));
return [mutableArray count];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
[mutableArray getObjects:objects range:range];
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
NSMutableArray **mutableArrayIvar = (NSMutableArray**)((char*)_container + ivar_getOffset(_ivar));
NSMutableArray *mutableArray = *mutableArrayIvar;
if (mutableArray)
{
[mutableArray insertObject:object atIndex:idx];
}
else
{
// A nil ivar behaves like an empty array, so the only valid insertion index is 0.
if (idx != 0)
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
*mutableArrayIvar = [[NSMutableArray alloc] initWithObjects:&object count:1];
}
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
NSMutableArray **mutableArrayIvar = (NSMutableArray**)((char*)_container + ivar_getOffset(_ivar));
NSMutableArray *mutableArray = *mutableArrayIvar;
if (mutableArray)
{
[mutableArray insertObjects:objects atIndexes:indexes];
}
else
{
// A nil ivar behaves like an empty array: the insertion is only valid when
// the indexes are exactly 0..count-1 (contiguous from zero), in which case
// the result is simply a mutable copy of the inserted objects.
if ([objects count] == [indexes count] &&
[indexes lastIndex] + 1 == [objects count])
{
*mutableArrayIvar = [objects mutableCopy];
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
}
- (id)objectAtIndex:(NSUInteger)idx
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
return [mutableArray objectAtIndex:idx];
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
return [mutableArray objectsAtIndexes:indexes];
}
- (void)removeLastObject
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
[mutableArray removeLastObject];
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
[mutableArray removeObjectAtIndex:idx];
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
[mutableArray removeObjectsAtIndexes:indexes];
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
[mutableArray replaceObjectAtIndex:idx withObject:object];
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
NSMutableArray *mutableArray = [self _nonNilMutableArrayValueWithSelector:_cmd];
[mutableArray replaceObjectsAtIndexes:indexes withObjects:objects];
}
@end
@implementation NSKeyValueIvarMutableOrderedSet
{
Ivar _ivar;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (NSMutableOrderedSet*)_nonNilMutableOrderedSetValueWithSelector:(SEL)selector
{
NSMutableOrderedSet* mutableOrderedSet = *(NSMutableOrderedSet**)((char*)_container + ivar_getOffset(_ivar));
if (!mutableOrderedSet)
{
[self _raiseNilValueExceptionWithSelector:selector];
}
return mutableOrderedSet;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueIvarMutableCollectionGetter*)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter*)getter];
if (self != nil)
{
_ivar = [getter ivar];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[super _proxyNonGCFinalize];
_ivar = NULL;
}
- (void)_raiseNilValueExceptionWithSelector:(SEL)selector
{
[NSException raise:NSRangeException format:@"%@: value for key %@ of object %p is nil",
_NSMethodExceptionProem(_container, selector), _key, (void*)_container];
}
- (NSUInteger)count
{
NSMutableOrderedSet *mutableOrderedSet = *(NSMutableOrderedSet**)((char*)_container + ivar_getOffset(_ivar));
return [mutableOrderedSet count];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
NSMutableOrderedSet *mutableOrderedSet = [self _nonNilMutableOrderedSetValueWithSelector:_cmd];
[mutableOrderedSet getObjects:objects range:range];
}
- (NSUInteger)indexOfObject:(id)object
{
NSMutableOrderedSet *mutableOrderedSet = *(NSMutableOrderedSet**)((char*)_container + ivar_getOffset(_ivar));
if (mutableOrderedSet)
{
return [mutableOrderedSet indexOfObject:object];
}
return NSNotFound;
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
NSMutableOrderedSet **mutableOrderedSetIvar = (NSMutableOrderedSet**)((char*)_container + ivar_getOffset(_ivar));
NSMutableOrderedSet *mutableOrderedSet = *mutableOrderedSetIvar;
if (mutableOrderedSet)
{
[mutableOrderedSet insertObject:object atIndex:idx];
}
else
{
if (idx != 0)
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
*mutableOrderedSetIvar = [[NSMutableOrderedSet alloc] initWithObjects:&object count:1];
}
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
NSMutableOrderedSet **mutableOrderedSetIvar = (NSMutableOrderedSet**)((char*)_container + ivar_getOffset(_ivar));
NSMutableOrderedSet *mutableOrderedSet = *mutableOrderedSetIvar;
if (mutableOrderedSet)
{
[mutableOrderedSet insertObjects:objects atIndexes:indexes];
}
else
{
if ([objects count] == [indexes count] &&
[indexes lastIndex] + 1 == [objects count])
{
*mutableOrderedSetIvar = [[NSMutableOrderedSet alloc] initWithArray:objects];
}
else
{
[self _raiseNilValueExceptionWithSelector:_cmd];
return;
}
}
}
- (id)objectAtIndex:(NSUInteger)idx
{
NSMutableOrderedSet *mutableOrderedSet = [self _nonNilMutableOrderedSetValueWithSelector:_cmd];
return [mutableOrderedSet objectAtIndex:idx];
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
NSMutableOrderedSet *mutableOrderedSet = [self _nonNilMutableOrderedSetValueWithSelector:_cmd];
return [mutableOrderedSet objectsAtIndexes:indexes];
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
NSMutableOrderedSet *mutableOrderedSet = [self _nonNilMutableOrderedSetValueWithSelector:_cmd];
[mutableOrderedSet removeObjectAtIndex:idx];
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
NSMutableOrderedSet *mutableOrderedSet = [self _nonNilMutableOrderedSetValueWithSelector:_cmd];
[mutableOrderedSet removeObjectsAtIndexes:indexes];
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
NSMutableOrderedSet *mutableOrderedSet = [self _nonNilMutableOrderedSetValueWithSelector:_cmd];
[mutableOrderedSet replaceObjectAtIndex:idx withObject:object];
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
NSMutableOrderedSet *mutableOrderedSet = [self _nonNilMutableOrderedSetValueWithSelector:_cmd];
[mutableOrderedSet replaceObjectsAtIndexes:indexes withObjects:objects];
}
@end
@implementation NSKeyValueIvarMutableSet
{
Ivar _ivar;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueIvarMutableCollectionGetter*)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter*)getter];
if (self != nil)
{
_ivar = [getter ivar];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[super _proxyNonGCFinalize];
_ivar = NULL;
}
- (void)addObject:(id)object
{
NSMutableSet **mutableSetIvar = (NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
NSMutableSet *mutableSet = *mutableSetIvar;
if (mutableSet)
{
[mutableSet addObject:object];
}
else
{
*mutableSetIvar = [[NSMutableSet alloc] initWithObjects:&object count:1];
}
}
- (void)addObjectsFromArray:(NSArray *)array
{
NSMutableSet **mutableSetIvar = (NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
NSMutableSet *mutableSet = *mutableSetIvar;
if (mutableSet)
{
[mutableSet addObjectsFromArray:array];
}
else
{
*mutableSetIvar = [[NSMutableSet alloc] initWithArray:array];
}
}
- (NSUInteger)count
{
NSMutableSet *mutableSet = *(NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
return [mutableSet count];
}
- (void)intersectSet:(NSSet *)set
{
NSMutableSet *mutableSet = *(NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
[mutableSet intersectSet:set];
}
- (id)member:(id)object
{
NSMutableSet *mutableSet = *(NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
return [mutableSet member:object];
}
- (void)minusSet:(NSSet *)set
{
NSMutableSet *mutableSet = *(NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
[mutableSet minusSet:set];
}
- (NSEnumerator *)objectEnumerator
{
NSMutableSet *mutableSet = *(NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
if (mutableSet)
{
return [mutableSet objectEnumerator];
}
else
{
return [[[NSKeyValueNilSetEnumerator alloc] init] autorelease];
}
}
- (void)removeAllObjects
{
NSMutableSet *mutableSet = *(NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
[mutableSet removeAllObjects];
}
- (void)removeObject:(id)object
{
NSMutableSet *mutableSet = *(NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
[mutableSet removeObject:object];
}
- (void)setSet:(NSSet *)set
{
NSMutableSet **mutableSetIvar = (NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
NSMutableSet *mutableSet = *mutableSetIvar;
if (mutableSet)
{
[mutableSet setSet:set];
}
else
{
*mutableSetIvar = [set mutableCopy];
}
}
- (void)unionSet:(NSSet *)set
{
NSMutableSet **mutableSetIvar = (NSMutableSet**)((char*)_container + ivar_getOffset(_ivar));
NSMutableSet *mutableSet = *mutableSetIvar;
if (mutableSet)
{
[mutableSet unionSet:set];
}
else
{
*mutableSetIvar = [set mutableCopy];
}
}
@end
@implementation NSKeyValueNotifyingMutableArray
{
NSMutableArray *_mutableArray;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueNotifyingMutableCollectionGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter*)getter];
if (self != nil)
{
NSKeyValueProxyGetter *mutableCollectionGetter = [getter mutableCollectionGetter];
_mutableArray = [_NSGetProxyValueWithGetterNoLock(container, mutableCollectionGetter) retain];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[_mutableArray release];
[super _proxyNonGCFinalize];
_mutableArray = nil;
}
- (void)addObject:(id)object
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:[_mutableArray count]];
[_container willChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[_mutableArray addObject:object];
[_container didChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (NSUInteger)count
{
return [_mutableArray count];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
[_mutableArray getObjects:objects range:range];
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
[_container willChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[_mutableArray insertObject:object atIndex:idx];
[_container didChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
[_container willChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[_mutableArray insertObjects:objects atIndexes:indexes];
[_container didChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
}
- (id)objectAtIndex:(NSUInteger)idx
{
return [_mutableArray objectAtIndex:idx];
}
- (NSArray *)objectsAtIndexes:(NSIndexSet *)indexes
{
return [_mutableArray objectsAtIndexes:indexes];
}
- (void)removeLastObject
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:[_mutableArray count] - 1];
[_container willChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[_mutableArray removeLastObject];
[_container didChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
[_container willChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[_mutableArray removeObjectAtIndex:idx];
[_container didChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
[_container willChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[_mutableArray removeObjectsAtIndexes:indexes];
[_container didChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
[_container willChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
[_mutableArray replaceObjectAtIndex:idx withObject:object];
[_container didChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
[_container willChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
[_mutableArray replaceObjectsAtIndexes:indexes withObjects:objects];
[_container didChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
}
@end
@implementation NSKeyValueNotifyingMutableOrderedSet
{
NSMutableOrderedSet *_mutableOrderedSet;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueNotifyingMutableCollectionGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter*)getter];
if (self != nil)
{
NSKeyValueProxyGetter *mutableCollectionGetter = [getter mutableCollectionGetter];
_mutableOrderedSet = [_NSGetProxyValueWithGetterNoLock(container, mutableCollectionGetter) retain];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[_mutableOrderedSet release];
[super _proxyNonGCFinalize];
_mutableOrderedSet = nil;
}
- (NSUInteger)count
{
return [_mutableOrderedSet count];
}
- (void)getObjects:(id *)objects range:(NSRange)range
{
[_mutableOrderedSet getObjects:objects range:range];
}
- (NSUInteger)indexOfObject:(id)object
{
return [_mutableOrderedSet indexOfObject:object];
}
- (void)insertObject:(id)object atIndex:(NSUInteger)idx
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
[_container willChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[_mutableOrderedSet insertObject:object atIndex:idx];
[_container didChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (void)insertObjects:(NSArray *)objects atIndexes:(NSIndexSet *)indexes
{
[_container willChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
[_mutableOrderedSet insertObjects:objects atIndexes:indexes];
[_container didChange:NSKeyValueChangeInsertion valuesAtIndexes:indexes forKey:_key];
}
- (id)objectAtIndex:(NSUInteger)idx
{
return [_mutableOrderedSet objectAtIndex:idx];
}
- (void)removeObjectAtIndex:(NSUInteger)idx
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
[_container willChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[_mutableOrderedSet removeObjectAtIndex:idx];
[_container didChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (void)removeObjectsAtIndexes:(NSIndexSet *)indexes
{
[_container willChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
[_mutableOrderedSet removeObjectsAtIndexes:indexes];
[_container didChange:NSKeyValueChangeRemoval valuesAtIndexes:indexes forKey:_key];
}
- (void)replaceObjectAtIndex:(NSUInteger)idx withObject:(id)object
{
NSIndexSet *indexes = [[NSIndexSet alloc] initWithIndex:idx];
[_container willChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
[_mutableOrderedSet replaceObjectAtIndex:idx withObject:object];
[_container didChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
[indexes release];
}
- (void)replaceObjectsAtIndexes:(NSIndexSet *)indexes withObjects:(NSArray *)objects
{
[_container willChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
[_mutableOrderedSet replaceObjectsAtIndexes:indexes withObjects:objects];
[_container didChange:NSKeyValueChangeReplacement valuesAtIndexes:indexes forKey:_key];
}
@end
@implementation NSKeyValueNotifyingMutableSet
{
NSMutableSet *_mutableSet;
}
+ (NSKeyValueProxyPool *)_proxyNonGCPoolPointer
{
static NSKeyValueProxyPool proxyPool;
return &proxyPool;
}
+ (NSHashTable *)_proxyShare
{
static dispatch_once_t once;
static NSHashTable *proxyShare;
dispatch_once(&once, ^{
proxyShare = [_NSKeyValueProxyShareCreate() retain];
});
return proxyShare;
}
- (id)_proxyInitWithContainer:(NSObject *)container getter:(NSKeyValueNotifyingMutableCollectionGetter *)getter
{
self = [super _proxyInitWithContainer:container getter:(NSKeyValueCollectionGetter*)getter];
if (self != nil)
{
NSKeyValueProxyGetter *mutableCollectionGetter = [getter mutableCollectionGetter];
_mutableSet = [_NSGetProxyValueWithGetterNoLock(container, mutableCollectionGetter) retain];
}
return self;
}
- (void)_proxyNonGCFinalize
{
[_mutableSet release];
[super _proxyNonGCFinalize];
_mutableSet = nil;
}
- (void)addObject:(id)object
{
NSSet *objects = [[NSSet alloc] initWithObjects:&object count:1];
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueUnionSetMutation usingObjects:objects];
[_mutableSet addObject:object];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueUnionSetMutation usingObjects:objects];
[objects release];
}
- (void)addObjectsFromArray:(NSArray *)array
{
NSSet *objects = [[NSSet alloc] initWithArray:array];
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueUnionSetMutation usingObjects:objects];
[_mutableSet addObjectsFromArray:array];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueUnionSetMutation usingObjects:objects];
[objects release];
}
- (NSUInteger)count
{
return [_mutableSet count];
}
- (void)intersectSet:(NSSet *)set
{
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueIntersectSetMutation usingObjects:set];
[_mutableSet intersectSet:set];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueIntersectSetMutation usingObjects:set];
}
- (id)member:(id)object
{
return [_mutableSet member:object];
}
- (void)minusSet:(NSSet *)set
{
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueMinusSetMutation usingObjects:set];
[_mutableSet minusSet:set];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueMinusSetMutation usingObjects:set];
}
- (NSEnumerator *)objectEnumerator
{
return [_mutableSet objectEnumerator];
}
- (void)removeAllObjects
{
NSSet* emptySet = [NSSet set];
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueIntersectSetMutation usingObjects:emptySet];
[_mutableSet removeAllObjects];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueIntersectSetMutation usingObjects:emptySet];
}
- (void)removeObject:(id)object
{
NSSet *objects = [[NSSet alloc] initWithObjects:&object count:1];
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueMinusSetMutation usingObjects:objects];
[_mutableSet removeObject:object];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueMinusSetMutation usingObjects:objects];
[objects release];
}
- (void)setSet:(NSSet *)set
{
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueSetSetMutation usingObjects:set];
[_mutableSet setSet:set];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueSetSetMutation usingObjects:set];
}
- (void)unionSet:(NSSet *)set
{
[_container willChangeValueForKey:_key withSetMutation:NSKeyValueUnionSetMutation usingObjects:set];
[_mutableSet unionSet:set];
[_container didChangeValueForKey:_key withSetMutation:NSKeyValueUnionSetMutation usingObjects:set];
}
@end
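For context, notifying proxies like the classes above are what Foundation vends from `mutableArrayValueForKey:` and its set/ordered-set siblings. A minimal usage sketch, in the same manual retain/release style — the `Playlist` class and its `tracks` key are hypothetical, introduced only for illustration:

```objc
#import <Foundation/Foundation.h>

// Hypothetical observed class with a to-many "tracks" relationship.
@interface Playlist : NSObject
{
    NSMutableArray *_tracks;
}
- (NSArray *)tracks;
@end

@implementation Playlist
- (id)init
{
    if ((self = [super init]))
    {
        _tracks = [[NSMutableArray alloc] init];
    }
    return self;
}
- (void)dealloc
{
    [_tracks release];
    [super dealloc];
}
- (NSArray *)tracks
{
    return _tracks;
}
@end

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    Playlist *playlist = [[Playlist alloc] init];

    // mutableArrayValueForKey: hands back a notifying proxy (an
    // NSKeyValueNotifyingMutableArray-style object). Each mutation is
    // bracketed by willChange:valuesAtIndexes:forKey: and
    // didChange:valuesAtIndexes:forKey:, so observers registered for
    // @"tracks" receive fine-grained NSKeyValueChangeInsertion /
    // NSKeyValueChangeRemoval notifications rather than a whole-value
    // replacement.
    NSMutableArray *proxy = [playlist mutableArrayValueForKey:@"tracks"];
    [proxy addObject:@"Intro"];        // insertion at index 0
    [proxy removeObjectAtIndex:0];     // removal at index 0

    [playlist release];
    [pool drain];
    return 0;
}
```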
August 21, 2014
Luis Rico Pinilla, Palacios y Catedrales/Palaces and Cathedrals
Born in 1948 in Sanchonuño, a community in the Segovia area, Luis Rico Pinilla moved at age fifteen to Bilbao, where he expected to find better opportunities to earn an income.
He got a room in the house of a brother, and one of his first activities in Bilbao was to buy brushes, paint and canvases.
Although artistically motivated, Rico did not seek a career as a professional artist. He got a job as a mechanic at a metal products factory, and thereafter went to work at Petronor, a Basque oil and gas company. He would hold this job for over thirty years, until he retired in 2009.
In his free time Rico made paintings, and at some point he also began making replicas of cathedrals and other monumental buildings.
In 1981 he bought a piece of land in the San Roque neighbourhood of Zierbena, a community near Bilbao, with the intention of building a holiday home there.
Since there was no water supply, Rico built a square concrete tank to collect rainwater. While doing so, he got the idea that he might as well continue building and erect a fantasy castle on top of the water tank.
So it happened. The construction of a holiday home was postponed and Rico built a castle, some five meters high and with a floor area of some forty square meters, using only recycled materials.
part of the collection of replicas
The building obviously is a fantasy creation, an impression that is reinforced by the presence of gnomes on its landings and in the surrounding garden. In the evening the castle can be illuminated (by solar energy).
replica of Neuschwanstein in Hohenschwangau
The collection of some forty replicas of castles and cathedrals is kept in the adjacent dwelling house, which has meanwhile been built.
These replicas on various scales, mainly made from cardboard and wood, depict famous, mostly European buildings, such as the Notre Dame in Paris, St Basil's Cathedral in Moscow, the Vatican, London Bridge, Neuschwanstein, the Palacio Real in Madrid and many others.
replica of St Basil's Cathedral in Moscow
In recent years parts of the collection have been exhibited, for example in a neighbouring public library.
The 16th ERS Lung Science Conference (LSC) took place on March 8--11, 2018, in Estoril, Portugal, with around 200 delegates from all over the world. This year's topic was "Cell-matrix interactions in lung disease and regeneration" and involved excellent presentations by leading experts in the field, covering everything from exploratory studies on how the matrix functions, matrix remodelling and biomarkers in disease, to more technical knowledge in the field of lung bioengineering. As in previous years, the Saturday afternoon was reserved for a programme dedicated to early career delegates, which this year focussed on "Maximising your publication output". In this article, we summarise the Early Career Member highlights of this year's LSC.
Matrix remodelling in lung disease {#s2}
==================================
The topic of the LSC opening session was the role of matrix remodelling in the pathogenesis of lung diseases. Four fruitful presentations, as well as the discussion guided by session chairs Pieter Hiemstra (Leiden, The Netherlands) and Argyris Tzouvelekis (Athens, Greece), further corroborated the evidence that matrix remodelling plays a cardinal role in chronic lung disease pathogenesis. In particular, Tracy Hussell (Manchester, UK) provided an overview of the role of matrix dysfunction in chronic obstructive pulmonary disease (COPD). Hyaluronan was found to be increased during exacerbations in patients with COPD and, interestingly, further increased after influenza infection \[[@C1]\]. In this direction, experimental data showed that hyaluronidase treatment restored lung function and removed proteins bound to hyaluronan. Given that the increase in hyaluronan also affected the let-7 microRNA (miRNA) family, further investigation of mitophagy and autophagy might hold promise. In addition to hyaluronan, prolonged changes in the basement membrane and extracellular matrix (ECM) following acute and chronic lung disease were described and, importantly, these were disease severity dependent. Furthermore, matrix impaired macrophage responses and upregulated several anti-inflammatory miRNAs. In a short oral presentation selected from abstracts, Gerald Burgstaller (Munich, Germany) presented a phenotypic high-throughput screening assay for the pharmacological inhibition of the pathological deposition of ECM \[[@C2]\]. This assay is applicable to every antigenic ECM protein of the matrisome and to any adherent cell type. A typical paradigm demonstrated was the reduction of collagen expression by ethyl-3,4-dihydroxybenzoate. Finally, the potential of this technique for drug repurposing and deconvolution was discussed. Martin Kolb (Hamilton, Canada) summarised his presentation on the role of ECM in pulmonary fibrosis with three key points.
The first key point was alveolar epithelial injury and subsequent alveolar collapse. In this context, the role of biomarker mucin 5B was highlighted. The second key point and the key to progression was the proximity of myofibroblasts to fibrotic matrix, which influences their behaviour. The third key point was that breathing itself could amplify fibrosis due to damage occurring by regular stretching of a stiffer matrix, which is due to several different cellular pathways \[[@C3]\]. The winner of the best oral presentation, Marko Nikolic (Cambridge, UK), provided insights into genetically modifiable, three-dimensional organoid culture of human embryonic lung stem cells enabling, for the first time, the investigation of human lung development *in vitro* \[[@C4]\]. The role of SRY box 9 in organoid self-renewal was emphasised. These data hopefully open new avenues for end-stage lung diseases, premature neonates and rare congenital lung conditions.
Matrix--cell interactions {#s3}
=========================
The interplay between cells and matrix is often underestimated, as most people study isolated *in vitro* systems of matrix or cells, mimicking only partly what occurs in the intact lung. Kristian Riesbeck (Malmö, Sweden) showed excellent electron microscopic images of bacterial--matrix interactions and how bacteria attach to ECM structures such as collagen and laminin. He also showed how, sometimes, matrix molecules such as vitronectin can protect, for example, *Pseudomonas aeruginosa* against complement lysis. Franz Puttur (London, UK) presented, in his short oral presentation, how innate lymphoid cells (ILCs) navigate in the lung matrix, and showed amazing videos during his talk that demonstrated how collagen IV and fibronectin can support ILC2 movement. In addition, he demonstrated that collagen I was able to induce elongation of ILC2s, and that blocking of the collagen crosslinking enzyme, lysyl oxidase, changed ILC2 movement by increasing their speed and travel distance. As well as attachment of bacteria to the matrix and movement of immune cells through the matrix, the session contained an oral presentation by Giulia Maria Stella (Pavia, Italy) on how mutated cells unexpectedly affect their interaction with the ECM. She also demonstrated how this may play an important role in both idiopathic pulmonary fibrosis (IPF) and metastatic growth and invasion. This was followed by a presentation by Robert Snelgrove (London, UK), who talked about matrikines in lung inflammation, where Pro-Gly-Pro (a proteolytic product of collagen) was described to function as an alarmin. In the final presentation, Yuval Rinkevich (Munich, Germany) presented his work on specialised fibroblasts and their role in wound healing. He showed impressive images and data indicating the different roles played by fibroblasts and their power to function as wound-healing or scar-inducing cells.
The instructive matrix {#s4}
======================
Saturday began with Bernhard Wehrle-Haller (Geneva, Switzerland) explaining that the interaction points between integrins and ECM proteins might be sites of signalling. This transduction of mechanical signals into chemical signals ("mechanotransduction") is tightly regulated; one example is the movement of integrins along the cell membrane. Moreover, he explained that integrin αIIβ3 subunit activation is a complex process dependent on conformational changes that require integrin binding to several proteins in an allosteric way. It was also emphasised that once integrins are activated, there is a tension-controlled recruitment of signalling proteins within the intracellular space. Jae-Won Shin (Chicago, IL, USA) showed, in a short oral presentation, his work on mesenchymal stem cell (MSC)-based therapy for lung fibrosis and how encapsulation of single MSCs in thin alginate microgels enhanced the retention of the cells in an animal model. Furthermore, in order to induce a proinflammatory phenotype, MSCs can be mechanically pre-activated by matrix stiffness. In the talk entitled "When is an alveolar epithelial cell an alveolar epithelial cell?", Michael F. Beers (Philadelphia, PA, USA) pointed out the main characteristics of alveolar type II (ATII) cells. ATII cells have a unique pulmonary surfactant metabolism linked to a unique lysosomal organelle, the lamellar body, full of surfactant, with surfactant exocytosis and endocytosis being precisely regulated. Finally, he showed that ATII cells also function as lung stem cells. The session was concluded by another oral presentation by Ilan Azuelos (Montreal, Canada), whose work is focused on IPF research, showing how mammalian target of rapamycin (mTOR) regulates transforming growth factor (TGF)-β-induced collagen synthesis *via* increased glycine biosynthesis. It has been seen that fibrotic areas have enhanced glucose metabolism, which is regulated by the mTOR axis.
The mTOR axis, which has been also related to glycine synthesis, is activated upon TGF-β stimulation. Elegant results were presented in which mTOR inhibition resulted in a collagen synthesis downregulation, while this effect was reversed by adding glycine.
Young investigator session: the William MacNee Award {#s5}
====================================================
As every year at the LSC, the five young investigators who submitted the best abstracts competed for the prestigious William MacNee Award. Jennifer Collins (Rotterdam, the Netherlands) showed how *in vivo* hyperoxia exposure of neonatal rats impaired the angiogenic supportive capacity and fibroblast growth factor expression of lung mesenchymal stromal cells in the context of bronchopulmonary dysplasia. Emmeline Marchal-Duval (Paris, France) shared her work on paired related homeobox 1, a profibrotic mesenchymal transcription factor that may contribute to the development of IPF by keeping fibroblasts in a proliferative and undifferentiated state. Catharina Mueller (Lund, Sweden), who later won the award, studies the ECM of transplanted lungs in a clever combinatory approach of laser capture microdissection paired with mass spectrometry and immunohistochemistry. Based on early alterations in the ECM, she was able to identify patients likely to develop bronchiolitis obliterans syndrome and predict who will have a severe disease course. Isabelle Dupin (Bordeaux, France) demonstrated that fibrocyte-like cells are increased in distal tissue samples of COPD patients and may be involved in the lung function decline during COPD progression, showing a negative association of fibrocyte-like cell density with lung function and a positive association with bronchial wall thickness. Scott Collum (Houston, TX, USA) concluded the session with his work on alternative polyadenylation (APA), a mechanism that typically results in a shortening of the 3′ untranslated region of affected genes, which can lead to the removal of regulation sites and increased expression of these mRNAs. He found that APA induced by the depletion of the 25-kDa subunit of cleavage factor I contributes to the upregulation of ECM components and the development of pulmonary hypertension.
Poster sessions {#s6}
===============
Around 80 Early Career Member delegates had the chance to present their research in two poster sessions during the LSC. Several posters in the first session highlighted potent signalling pathways that contribute to the pathological activity of fibroblasts in IPF. Mimicking breathing by exerting cyclical stretch on fibroblasts was shown to release endogenous TGF-β, a process that is mediated by G-protein signalling (by Gαq/11), as presented by Amanda Goodwin (Nottingham, UK). On a different note, silencing members of the A-kinase anchoring proteins demonstrated a role in mediating epithelial to mesenchymal transition that contributes to the pathogenesis of fibrosis, as presented by Martina Schmidt (Groningen, the Netherlands). Furthermore, the switch between myogenic and lipogenic fibroblast phenotype was proposed to have therapeutic benefits in IPF. Lipogenic fibroblasts are lipid droplet-containing interstitial fibroblasts that contribute to epithelial maturation and provide a niche for epithelial stemness. During fibrotic events, lipogenic fibroblasts contribute to pathogenesis by switching into a myogenic phenotype. Peroxisome proliferator-activated receptor (PPAR)-γ signalling was found to inhibit the switch from lipogenic to myogenic phenotype and promote the differentiation of the lipogenic phenotype. Treatment with metformin, a well-known antidiabetic drug and a PPAR-γ agonist, on precision-cut lung slices derived from IPF patients showed amelioration of the fibrotic phenotype. This project was presented by Vahid Kheirollahi (Giessen, Germany) and was selected for a distinguished poster award. Understanding intracellular pathways and mechanisms that contribute to disease pathogenesis and ECM derangement could greatly aid in finding novel therapies that may cure chronic lung diseases. 
Furthermore, exciting data were presented that type 2 iodothyronine deiodinase, the enzyme that converts thyroxine to active triiodothyronine, was upregulated in the lungs of patients with IPF, particularly in alveolar epithelial cells, the metabolically active cells of the lung. In this direction, experimental data demonstrated that aerosolised thyroid hormone administration exerted antifibrotic properties in two experimental models of pulmonary fibrosis through a mechanism involving improved mitochondrial function and mitophagy. Another interesting poster further enhanced knowledge of the role of kinases and phosphatases in pulmonary fibrosis, as mitogen-activated protein kinase phosphatase-5 blunted fibrotic responses through negative regulation of TGF-β1-induced Smad3 signalling. Interesting data were also presented on the antifibrotic properties of azithromycin and metformin. Several mechanisms have been suggested, yet investigation of these compounds in a clinical setting remains a challenge. To this end, a recent *post hoc* analysis revealed that metformin had no effect on clinically relevant outcomes in patients with IPF. A plethora of posters also discussed the role of ECM in pulmonary fibrosis. Structure and composition of lung ECM had a notable role in fibroblast phenotype and cell differentiation, while normal matrix rigidity was protective against fibrosis and cancer. Classical ECM proteins including collagen and fibronectin, as well as proteins involved in signalling pathways such as epidermal growth factor, insulin-like growth factor and TGF-β were differentially expressed between healthy, COPD and IPF matrix preparations. IPF myofibroblasts also exhibited higher activity of focal adhesion kinase (FAK) (phosphorylation of residue Y397) and protein kinase B (phosphorylation of S437) after TGF-β stimulation than healthy controls. FAK was necessary for the aberrant collagen regulation induced by TGF-β1. 
Another poster worth mentioning showed that galectin-3 was increased in bleomycin-treated mice and, interestingly, galectin-3 induced TGF-β signalling in fibroblasts but not in epithelial cells. Finally, pathogenic commonalities between the fibrotic lung and the ageing lung were further supported by data showing that deletion of the ETS domain-containing protein Elk-1 gene (*Elk1*) resulted in age-related early fibrotic changes associated with the development of pulmonary fibrosis.
Early career delegates session: maximising your publication output {#s7}
==================================================================
As mentioned above, one highlight of the LSC is always the dedicated Early Career Member session on Saturday afternoon. This year's session dealt with a vital aspect of every scientist's life: publications. The recently appointed chief editor of the *European Respiratory Journal* (*ERJ*), Martin Kolb (Hamilton, Canada), opened the session explaining the editorial procedures of the *ERJ*, thus giving valuable information on what to expect when submitting papers to our society's main journal and how to write a manuscript \[[@C5]\]. Gisli Jenkins (Nottingham, UK), one of the editors in chief of *Thorax*, focused on how the ever-stronger world of open access is impacting the culture of scientific publications and its possibilities, as well as many yet unaddressed caveats. Neil Bullen (Sheffield, UK), the managing editor of the *ERJ*, demonstrated to the audience the different metrics available to track the impact of their publications on the scientific community \[[@C6]\]. Finally, we were happy to welcome Paul Noble (Los Angeles, CA, USA), who gave us his perspective, as a previous deputy editor of the *Journal of Clinical Investigation*, on writing articles for a general medical journal. In addition to the lectures by this expert panel, there was a lively discussion with members of the audience. Amongst other topics, we discussed the common fear that reviewers could bias the reviewing process itself, and the impact and problems arising from pre-print platforms (*e.g.* arXiv.org). During the networking event that followed directly afterwards, Early Career Members again had the opportunity to interact with our invited speakers and other members of the faculty, and further engage in discussions on this and other topics of interest.
Evening pre-dinner talk {#s8}
=======================
This Saturday full of science ended with a brilliant pre-dinner talk by Peter Friedl (Nijmegen, The Netherlands) about the visualisation of cell-matrix interactions during immune cell interactions and tumour invasion. Dr Friedl impressed the audience with stunning images and real-time videos of tumour invasion. Using intravital multiphoton microscopy, he showed how tumour microniches provide routes that promote cancer cell invasion along tracks that offer minimum resistance. He highlighted how plastic cancer cells are in their invasion patterns in response to tissue topology. Tumour cells prefer to follow a "collective invasion" model along vascular or neural tracks, which provide linear routes for the tumour to migrate, in a fashion similar to highways. However, they adapt to a more "discontinuous invasion" pattern when tissues have a more complex structure, such as adipose tissue or renal glomeruli, and to a totally discontinuous one in tissues that offer a mechanical challenge, such as connective tissue, where tumour cells are forced to migrate in isolation. Through three-dimensional ultrastructure analyses, he showed the pre-defined geometry of these tracks. Additionally, he observed that tumour cells follow a "cell jamming" mechanism, without visible ECM degradation by proteases, instead pushing the surrounding tissues to make room for the advancing front, where β1/β3 integrins play an essential role.
Reconstructing the matrix: bioengineering approaches {#s9}
====================================================
The last day of the LSC was opened by a talk entitled "Lung Bioprinting: opportunities and challenges" by Ramon Farré (Barcelona, Spain). During the presentation, Dr Farré showed the opportunities and challenges of lung bioprinting and explained the different available techniques (extrusion, droplet and laser-based bioprinting). In addition, he explained the current situation in this field, where the reconstruction of a whole lung is still far off, but the available knowledge will allow us to develop new and more realistic *in vitro* disease models and to perform drug testing in a short period of time. However, the use of these technologies still has some issues that need to be addressed. In this regard, improvements in substrate stiffness/composition, oxygen and carbon dioxide diffusion, cyclic stretch, and shear stress still need to be introduced into future models. The second talk of this session was given by Darcy Wagner (Lund, Sweden), who talked about *ex vivo* bioengineering approaches to develop human lung scaffolds. One way to obtain lung scaffolds is *via* decellularisation of native lungs, *i.e.* removal of all cells while retaining the architecture of the lungs as well as ECM composition. She showed that decellularised lungs from healthy or diseased patients retain distinct protein profiles and that this pattern holds even when deep proteomic approaches are used. For instance, the heterogeneity of IPF tissue and the short-term viability of emphysematous lungs increase the difficulty of the decellularisation process. This indicates that each scaffold derived from human patients will be different. In parallel, this technology can open new doors for studying cell--ECM interactions and provide new insights into chronic lung disease pathomechanisms.
Drugging the matrix {#s10}
===================
The last session of this year's LSC dealt with approaches to drug a diseased or altered matrix. In the first presentation of the session, Gisli Jenkins talked about targeting and profiling ECM components in IPF patients. He summarised that G-protein coupled receptor (GPCR) signalling and mechanotransduction pathways involving RhoA are essential for the pathogenesis of pulmonary fibrosis. He showed that injury to the alveolar epithelium results in mechanotransduction signals leading to αvβ6 integrin-mediated activation of TGF-β *via* RhoA, resulting in fibrogenesis. In lung fibroblasts, however, mechanotransduction pathways involving cyclical mechanical stretch promote TGF-β secretion, an effect that is reduced in Gαq/11-null mouse embryonic fibroblasts. In short, he presented that the effects of GPCR signalling in epithelial cells and fibroblasts are cell specific and may echo distinct endotypes of pulmonary fibrosis that might be targeted specifically for fibrosis therapy. Morten Karsdal (Herlev, Denmark) continued the session with his talk about the neoepitopes of ECM molecules for use as biomarkers. Interstitial matrix *versus* basement membrane collagens alter greatly throughout fibrosis in their amounts, locations and, most importantly, the products of their proteolytic degradation. He emphasised the collagens' structure and function in matrices of IPF and COPD, and gave examples of recently discovered signalling functions of a few collagens: fragments of type IV collagen (tumstatin), type VI collagen (endotrophin), type VIII collagen (vastatin), type XV collagen (restin) and type XVIII collagen (endostatin). Dr Karsdal concluded that biomarkers of the ECM can both hold prognostic value for disease status and help better understand disease progression.
Insights into cell-matrix interactions in lung development {#s11}
==========================================================
Since 2017, there has been a group within ERS Assembly 7 (Group 7.08) focusing on lung and airway developmental biology. This new group within the Paediatrics Assembly focuses on lung and airway development, and its relationship with respiratory health during childhood and beyond. This group also aims to address the early (paediatric) origins of adult lung disease and the long-term sequelae of early lung disease. While the majority of presentations at the LSC focused on cell-matrix interactions in adult disease, there were a number of oral and poster presentations that focused on lung development and neonatal lung disease and were of special interest for Group 7.08 members. Marko Nikolic (Cambridge, UK), who received the award for the best oral presentation, shared intriguing data on his *in vitro* organoid culture system for human embryonic lung epithelial stem cells to recapitulate lung development \[[@C4]\]. Taking a side-step to dermal wound repair, Yuval Rinkevich (Munich, Germany) presented data on the existence of a scar-promoting fibroblast subtype identified by Engrailed 1 and CD26/dipeptidyl peptidase-4, which only emerges in late fetal development \[[@C7]\]. During the Young Investigator Session, Jennifer Collins showed how CD146^+^ mesenchymal stromal cells isolated from hyperoxia-injured neonatal rat lungs have impaired angiogenic supportive capacity and an altered gene expression profile. In addition, there were a number of poster presentations that focused on neonatal lung disease and lung development by Maeva Zysman (Vincennes, France), Koni Ivanova (Stara Zagora, Bulgaria), Sander van Riet (Leiden, the Netherlands) and Anne Hilgendorff (Munich, Germany). Taken together, LSC 2018 was a promising first event for Group 7.08, providing a fertile ground for inspiration for future research.
Conclusion {#s12}
==========
This year's LSC has once again been a brilliant meeting, bringing together leading experts in the field to discuss recent scientific findings. The relatively small setting provides excellent means for networking and the establishment of future collaborations. Early Career Members can only be encouraged to attend this fantastic conference in the future. Join us next year; the topic will be "Mechanisms of acute exacerbations in respiratory disease" and the scientific programme will be outstanding as always.
The authors' affiliations are as follows. I. Almendros: Unitat de Biofísica i Bioenginyeria, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona, Barcelona, and Centro de Investigación Biomédica en Red de Enfermedades Respiratorias, Madrid, Spain; H.N. Alsafadi: Dept of Experimental Medical Science, Lung Bioengineering and Regeneration, and Wallenberg Center for Molecular Medicine, Lund University, Lund, Sweden; D. Bölükbas: Dept of Experimental Medical Science, Lung Bioengineering and Regeneration, and Wallenberg Center for Molecular Medicine, Lund University, Lund, Sweden; J.J.P. Collins: Dept of Pediatric Surgery, Erasmus University Medical Centre, Rotterdam, the Netherlands; P. Duch: Unitat de Biofísica i Bioenginyeria, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona, Barcelona, and Centro de Investigación Biomédica en Red de Enfermedades Respiratorias, Madrid, Spain; E.M. Garrido-Martin: H12O-CNIO Lung Cancer Clinical Research Unit, Research Institute Hospital 12 Octubre -- Spanish National Cancer Research Centre, and Biomedical Research Networking Centre Consortium of Cancer, Madrid, Spain; N. Kahn: Pneumology and Critical Care Medicine, Thoraxklinik at Heidelberg University Hospital, and Translational Lung Research Center, Member of the German Center for Lung Research (DZL), Heidelberg, Germany; T. Karampitsakos: 5th Dept of Pneumonology, Hospital for Thoracic Diseases, "Sotiria", Athens, Greece; I. Mahmutovic Persson: Institution of Medical Radiation Physics, Lund University, Dept of Translational Medicine, Malmö, Sweden; A. Tzouvelekis: 1st Academic Dept of Pneumonology, Hospital for Thoracic Diseases, "Sotiria", Medical School, National and Kapodistrian University of Athens, and Division of Immunology, Biomedical Sciences Research Center "Alexander Fleming", Athens, Greece; F.E. Uhl: University of Vermont, College of Medicine, Burlington, VT, USA; S. Bartel: Early Life Origins of Chronic Lung Disease, Research Center Borstel, Leibniz Lung Center, Member of the DZL, Borstel, Germany.
**Conflict of interest:** S. Bartel has received personal fees from Bencard Allergie GmbH for serving as a member of an advisory board and a project grant from Bencard Allergie GmbH, outside the submitted work.
Comparisons of the structural proteins of avian infectious bronchitis virus as determined by western blot analysis.
The antigenic diversity of ten strains of avian infectious bronchitis virus (IBV) was examined by Western blot analyses using polyclonal antisera specific for the Massachusetts 41 (M41), Gray, Arkansas DPI (Ark DPI), Connecticut (Conn) and Australian T (Aust T) serotypes. Although antigenic variation was found in all three structural viral proteins, the matrix protein appeared to be antigenically the most highly variable. Four distinct antigenic groups, which did not correspond to virulence or pathotype, could be defined according to the variations observed in the matrix protein. Somewhat less variation was seen in the spike polypeptide. The only variation in the nucleocapsid protein was indicated by the lack of a detectable reaction between the Aust T antiserum and the Ark DPI nucleocapsid protein. Antisera made against M41 had the broadest reactivity while antisera against Aust T, the only strain tested which was exotic to the U.S.A., had the greatest specificity.
Q:
Susy Nested Grids
I created a Susy grid and am trying to use it for a complex nested navigation. The problem I'm having is that I want a 0em $gutter-width within the scope of the header (the tag where I'm working with the navigation), but a 1em gutter throughout the rest of the page. My navigation bar is 100% of the screen width and ideally would handle decimal columns smoothly.
Here's what I'm working with:
header
$container-width: 100%
@include container
// basic div configuration for all of the various horizontal navigations etc
> div
// do something here
// do some one off grid math to help with the drop down navigation system
#main
> ul
@include horizontal_ul_structure(5,5)
// approximate the size of the li element
//
> li
The mixin (horizontal_ul_structure):
Right now the nesting works well; the hover ul should be 100% width, so I'm making it 12 columns in the context of 1 column.
@mixin horizontal_ul_structure($parent, $elements)
// best if this works as no decimal otherwise screws up everything!
$element_size: $parent / $elements
// assumes that it will be called in the context of a ul
// span the parent # number of elements across
@include span-columns($parent)
// now make sure that the child spans the proper amount of elements with no overflow
> li
background-color: gray
@include span-columns($element_size, $parent)
&:last-of-type
@include span-columns($element_size omega, $parent)
// do a clever little hack to keep the anchor tags looking correct?
> a
position: relative
width: $column-width + $gutter-width
left: $gutter-padding/2
height: 100%
background-color: brown
If you notice, the anchor-tag hack I'm working with essentially expands the anchor tag a bit past the li element to cover up the gutter, but it doesn't work.
Is there any way that I could get rid of this gutter width and then have another grid for a different part of my application?
Is there any way to namespace the Susy config?
A:
There are several options.
1) If you don't have gutters, you don't need Susy. The math is simple, and you don't need to worry about decimal columns.
li {
float: left;
width: percentage(1/$elements); // each of the $elements items gets an equal share
&:last-of-type {
float: right;
}
}
2) You can use the with-grid-settings() { ... } mixin to wrap any chunk of code in a different grid of your choice. Maybe something like this:
@mixin horizontal_ul_structure($parent, $elements) {
@include span-columns($parent);
@include with-grid-settings($elements, $gutter-width: 0) {
> li {
@include isolate-grid(1);
}
}
}
You'll never have decimal columns if you just change the context to match your needs.
Computer assisted structure-activity studies of chemical carcinogens. An N-nitroso compound data set.
N-nitroso compounds, consisting of nitrosamines and nitrosamides, are potentially important in the etiology of human cancer. An attempt to study the molecular structure-carcinogenicity relations of these compounds is reported. A pattern-recognition approach was used to develop predictive ability for carcinogenic potential. A set of 15 calculated molecular structure descriptors that supported a linear discriminant function able to successfully separate 116 carcinogens from 28 noncarcinogens was identified. An overall predictive ability of 91% (93% for carcinogens and 85% for noncarcinogens) was obtained in the randomized testing. This relatively high predictability demonstrates that pattern-recognition methods can be useful in analyzing these compounds for carcinogenic activity. The inclusion of two electronic descriptors implicitly supports the alpha-hydroxylation hypothesis. The relations of the descriptors used and the possible mechanism of action are discussed.
E.F. Benson's Ghost Stories: read by Mark Gatiss
Mark Gatiss (Sherlock, Doctor Who, Game of Thrones) reads chilling tales by the unsung master of the classic ghost story: E. F. Benson. There's nothing sinister about a London bus. Nothing supernatural could occur on a busy train platform. There's nothing terrifying about a little caterpillar. And a telephone, what could be scary about that? Don't be frightened of the dark corners of your room. Don't be alarmed by a sudden inexplicable chill.
Edith Nesbit: The Ghost Stories
Edith Nesbit is more famously known as a writer of children’s stories such as The Railway Children. But in this volume we explore her short stories of the macabre and ghostly sort. Thought of as the first modern writer for children she also wrote for adults producing over 50 books in total as well as collections of poetry which we shall explore in a separate volume. These stories are brought to your ears in eerie detail by Ghizela Rowe and Richard Mitchley.
Ghost Stories, Volume One
Sir Derek Jacobi reads a collection of tales from the master of ghost stories, M. R. James, whose stories have for many years inspired the BBC's A Ghost Story for Christmas TV adaptations. M. R. James was described as "a man who, in company with Sheridan le Fanu, is the best ghost-story writer England has ever produced".
Ghost Story
For four aging men in the terror-stricken town of Milburn, New York, an act inadvertently carried out in their youth has come back to haunt them. Now they are about to learn what happens to those who believe they can bury the past - and get away with murder. Peter Straub's classic best seller is a work of "superb horror" (Washington Post Book World) that, like any good ghost story, stands the test of time - and conjures our darkest fears and nightmares.
Ghostland: An American History in Haunted Places
Colin Dickey is on the trail of America's ghosts. Crammed into old houses and hotels, abandoned prisons and empty hospitals, the spirits that linger continue to capture our collective imagination, but why? His own fascination piqued by a house hunt in Los Angeles that revealed derelict foreclosures and "zombie homes", Dickey embarks on a journey across the continental United States to decode and unpack the American history repressed in our most famous haunted places.
The Turn of the Screw
Academy Award, Golden Globe, and Emmy winner Emma Thompson lends her immense talent and experienced voice to Henry James' Gothic ghost tale, The Turn of the Screw. When a governess is hired to care for two children at a British country estate, she begins to sense an otherworldly presence around the grounds. Are they really ghosts she's seeing? Or is something far more sinister at work?
Ghost Stories, Volume 2
A second collection of tales from the master of ghost stories, M. R. James, whose stories have for many years inspired the BBC's A Ghost Story for Christmas TV adaptations. This volume includes 'A Warning to the Curious', 'The Stalls of Barchester Cathedral', 'The Mezzotint', and 'A Neighbour's Landmark'.
The Phantom Coach: A Connoisseur's Collection of the Best Victorian Ghost Stories
Ghost stories date back centuries, but those written in the Victorian era have a unique atmosphere and dark beauty. Michael Sims, whose previous Victorian collections Dracula’s Guest (vampires) and The Dead Witness (detectives) have been widely praised, has gathered twelve of the best stories about humanity’s oldest supernatural obsession. The Phantom Coach includes tales by a surprising and often legendary cast, including Charles Dickens, Margaret Oliphant, Henry James, Rudyard Kipling, and Arthur Conan Doyle, as well as lost gems by forgotten masters such as Mary E. Wilkins Freeman and W. F. Harvey. Amelia B. Edwards’s chilling story gives the collection its title, while Ambrose Bierce ("The Moonlit Road"), Elizabeth Gaskell ("The Old Nurse’s Story"), and W. W. Jacobs ("The Monkey’s Paw") will turn you white as a sheet. With a skillful introduction to the genre and notes on each story by Sims, The Phantom Coach is a spectacular collection of ghostly Victorian thrills.
The Old Maid
The story follows the life of Tina, a young woman caught between the mother who adopted her - the beautiful, upstanding Delia - and her true mother, her plain, unmarried ‘aunt’ Charlotte, who gave Tina up to provide her with a socially acceptable life. The three women live quietly together until Tina’s wedding day, when Delia’s and Charlotte’s hidden jealousies rush to the surface.
Summer
Summer, set in New England, is a novel by Edith Wharton published in 1917. The novel details the sexual awakening of its protagonist, 18-year-old Charity Royall, and her cruel treatment by the father of her child. Only moderately well received when originally published, Summer has had a resurgence in critical popularity since the 1960s.
Can Such Things Be?
Prepare yourself for the shocking, the strange, and the terrifying in Ambrose Bierce’s 1893 story collection Can Such Things Be? One of the greatest masters of horror brings you 25 tales of the supernatural and the unexplained. Whether in stories of ghosts sending desperate warnings to their human counterparts, psychics attempting to bridge unknown dimensions, howling werewolves, or a robot who takes on a life of his own, Bierce plumbs the depths of fear and fascination.
The Haunting of Hill House
Four seekers have come to the ugly, abandoned old mansion: Dr. Montague, an occult scholar looking for solid evidence of the psychic phenomenon called haunting; Theodora, his lovely and lighthearted assistant; Eleanor, a lonely, homeless girl well acquainted with poltergeists; and Luke, the adventurous future heir of Hill House.
Edith Wharton: The Short Stories
Perhaps best known for her classic novel The Age of Innocence, Wharton loved the short story form because its brevity allowed her to concentrate on telling the story. In these three powerful stories, Edith Wharton transports the listener to the turn of the century, where she depicts (without turning to sensationalism) the shocking topics of the time. Often, she opens just after an incident, allowing the listener to be immersed straight into the story.
Sixteen classic stories from masters of the genre: "The Judge's House", by Bram Stoker; "A Jug of Sirup", by Ambrose Bierce; "The Reconciliation", by Lafcadio Hearn; "The Woman With a Candle" by W. Bourne Cooke; "The Ebony Frame", by E. Nesbit; "On the Northern Ice", by Elia W. Peattie; "The Haunted Doll's House", by M. R. James; "The Old House in Vauxhall Walk", by Charlotte Riddell; "The Underground Ghost", by John Berwick Harwood; "Haunted", by Anon (from Tinsley's Annual); plus five more....
Burnt Offerings: Valancourt 20th Century Classics
Ben and Marian Rolfe are desperate to escape a stifling summer in their tiny Brooklyn apartment, so when they get the chance to rent a mansion in upstate New York for the entire summer for only $900, it's an offer that's too good to refuse. There's only one catch: behind a strange and intricately carved door in a distant wing of the house lives elderly Mrs. Allardyce, and the Rolfes will be responsible for preparing her meals. But Mrs. Allardyce never seems to emerge from her room, and it soon becomes clear that something weird and terrifying is happening in the house.
Publisher's Summary
Beneath the brilliance that was behind The Age of Innocence and Ethan Frome was a dark side, one which produced magnificent tales of the unseen influences in our lives, such as "Mr. Jones", "The Eyes", "Kerfol", "The Lady's Maid's Bell", and "The Looking Glass".
Perhaps no author can surpass Wharton in delving into the darker corners of the feminine experience. Four of the five stories in this collection are premised on the lingering horror engendered by the harrowing experiences of women ensnared in oppressive circumstances or by their own demons. The fifth, "The Eyes", has more to do with the repercussions on men who touch the lives of women living in silent agony. The conclusion to this tale is particularly unexpected, and it was only after I had thought about it for a while that it gave me goosebumps: true horror which relies not on gore or violence but strikes at the very core of our own existence.
As always, Wharton's writing is superb and inexorably draws the listener into the gothic atmosphere of these tales. Each story has its own excellent narrator and wonderfully creepy music is employed at various points, enhancing the macabre theme.
After reading Wharton's "The Duchess at Prayer," I looked for more examples of her ghost stories and found this excellent collection.
In the first tale, a young heiress inherits an estate, but before she can settle in to her new life there, she must master the situation involving the caretaker, "Mr. Jones."
In "Kerfol," a man looks at a prospective property in northern France. There he is met with a pack of phantom dogs. Searching for an explanation leads him far into the past where he discovers a tragic love story.
"The Looking Glass," has an aged Mrs. Atlee looking back to her youth when she was a masseuse to wealthy ladies. She is ambivalent as to whether she should regret or excuse "the wrong she did" her benefactor by involving herself in an occult conspiracy.
"The Eyes" finds us in the midst of that old familiar favorite of Wharton and James: gentlemen at brandy and cigars telling tales. The ending is haunting, ambiguous, and likely to stay with one for longer than the rest of these stories.
"The Lady's Maid's Bell," perhaps the best-known of Wharton's ghost stories, revolves around a frail private-duty nurse who finds herself caught up in drama and intrigue during what was expected to be a quiet assignment to care for an affluent, amiable lady patient.
I loved the narrators, music, and selections. I certainly hope we will have more of her ghost stories in the future, presented just as well as these were. May you enjoy them as well.
I love Ms. Wharton's ability to set a scene. Most of us have never lived in a household with servants, or have any idea what such a household's routines are, but within a few short "pages" she can get you right into the life of a lady's maid.
What was one of the most memorable moments of Ghosts: Edith Wharton's Gothic Tales?
I hate to write anything that would be a spoiler, but I love the way she essentially draws a word portrait for each individual dog.
What about the narrators' performance did you like?
It seemed like there was a group of narrators, each narrator chosen according to the work, almost like a theatrical performance or a radio play. It was really really well done.
I am a big fan of Edith Wharton's work. This 1926 collection of short "ghost" stories, however, fails the reader of 2013.
As is characteristic of Wharton's writing, the narration is understated, never veering into OMG territory. She gives the reader credit for having a brain.
Unfortunately, she treads a well-worn path in each of these tales. The stories move slowly and the outcomes are predictable. I had to force myself to hear them all through to the end, hoping that somewhere in the pack I'd uncover an "Ethan Frome" experience.
I would recommend any of this Pulitzer prize-winning author's other books or short stories, but suggest you leave this collection of ghost stories on the shelf.
// <file>
// <copyright see="prj:///doc/copyright.txt"/>
// <license see="prj:///doc/license.txt"/>
// <owner name="none" email=""/>
// <version>$Revision: 3205 $</version>
// </file>
using System;
using System.Drawing;
using System.Text;
namespace ICSharpCode.TextEditor.Document
{
public enum BracketMatchingStyle {
Before,
After
}
public class DefaultTextEditorProperties : ITextEditorProperties
{
int tabIndent = 4;
int indentationSize = 4;
IndentStyle indentStyle = IndentStyle.Smart;
DocumentSelectionMode documentSelectionMode = DocumentSelectionMode.Normal;
Encoding encoding = System.Text.Encoding.UTF8;
BracketMatchingStyle bracketMatchingStyle = BracketMatchingStyle.After;
FontContainer fontContainer;
static Font DefaultFont;
public DefaultTextEditorProperties()
{
if (DefaultFont == null) {
DefaultFont = new Font("Courier New", 10);
}
this.fontContainer = new FontContainer(DefaultFont);
}
bool allowCaretBeyondEOL = false;
bool showMatchingBracket = true;
bool showLineNumbers = true;
bool showSpaces = false;
bool showTabs = false;
bool showEOLMarker = false;
bool showInvalidLines = false;
bool isIconBarVisible = false;
bool enableFolding = true;
bool showHorizontalRuler = false;
bool showVerticalRuler = true;
bool convertTabsToSpaces = false;
System.Drawing.Text.TextRenderingHint textRenderingHint = System.Drawing.Text.TextRenderingHint.SystemDefault;
bool mouseWheelScrollDown = true;
bool mouseWheelTextZoom = true;
bool hideMouseCursor = false;
bool cutCopyWholeLine = true;
int verticalRulerRow = 80;
LineViewerStyle lineViewerStyle = LineViewerStyle.None;
string lineTerminator = "\r\n";
bool autoInsertCurlyBracket = true;
bool supportReadOnlySegments = false;
public int TabIndent {
get {
return tabIndent;
}
set {
tabIndent = value;
}
}
public int IndentationSize {
get { return indentationSize; }
set { indentationSize = value; }
}
public IndentStyle IndentStyle {
get {
return indentStyle;
}
set {
indentStyle = value;
}
}
public DocumentSelectionMode DocumentSelectionMode {
get {
return documentSelectionMode;
}
set {
documentSelectionMode = value;
}
}
public bool AllowCaretBeyondEOL {
get {
return allowCaretBeyondEOL;
}
set {
allowCaretBeyondEOL = value;
}
}
public bool ShowMatchingBracket {
get {
return showMatchingBracket;
}
set {
showMatchingBracket = value;
}
}
public bool ShowLineNumbers {
get {
return showLineNumbers;
}
set {
showLineNumbers = value;
}
}
public bool ShowSpaces {
get {
return showSpaces;
}
set {
showSpaces = value;
}
}
public bool ShowTabs {
get {
return showTabs;
}
set {
showTabs = value;
}
}
public bool ShowEOLMarker {
get {
return showEOLMarker;
}
set {
showEOLMarker = value;
}
}
public bool ShowInvalidLines {
get {
return showInvalidLines;
}
set {
showInvalidLines = value;
}
}
public bool IsIconBarVisible {
get {
return isIconBarVisible;
}
set {
isIconBarVisible = value;
}
}
public bool EnableFolding {
get {
return enableFolding;
}
set {
enableFolding = value;
}
}
public bool ShowHorizontalRuler {
get {
return showHorizontalRuler;
}
set {
showHorizontalRuler = value;
}
}
public bool ShowVerticalRuler {
get {
return showVerticalRuler;
}
set {
showVerticalRuler = value;
}
}
public bool ConvertTabsToSpaces {
get {
return convertTabsToSpaces;
}
set {
convertTabsToSpaces = value;
}
}
public System.Drawing.Text.TextRenderingHint TextRenderingHint {
get { return textRenderingHint; }
set { textRenderingHint = value; }
}
public bool MouseWheelScrollDown {
get {
return mouseWheelScrollDown;
}
set {
mouseWheelScrollDown = value;
}
}
public bool MouseWheelTextZoom {
get {
return mouseWheelTextZoom;
}
set {
mouseWheelTextZoom = value;
}
}
public bool HideMouseCursor {
get {
return hideMouseCursor;
}
set {
hideMouseCursor = value;
}
}
public bool CutCopyWholeLine {
get {
return cutCopyWholeLine;
}
set {
cutCopyWholeLine = value;
}
}
public Encoding Encoding {
get {
return encoding;
}
set {
encoding = value;
}
}
public int VerticalRulerRow {
get {
return verticalRulerRow;
}
set {
verticalRulerRow = value;
}
}
public LineViewerStyle LineViewerStyle {
get {
return lineViewerStyle;
}
set {
lineViewerStyle = value;
}
}
public string LineTerminator {
get {
return lineTerminator;
}
set {
lineTerminator = value;
}
}
public bool AutoInsertCurlyBracket {
get {
return autoInsertCurlyBracket;
}
set {
autoInsertCurlyBracket = value;
}
}
public Font Font {
get {
return fontContainer.DefaultFont;
}
set {
fontContainer.DefaultFont = value;
}
}
public FontContainer FontContainer {
get {
return fontContainer;
}
}
public BracketMatchingStyle BracketMatchingStyle {
get {
return bracketMatchingStyle;
}
set {
bracketMatchingStyle = value;
}
}
public bool SupportReadOnlySegments {
get {
return supportReadOnlySegments;
}
set {
supportReadOnlySegments = value;
}
}
}
}
1. Field of the Invention
The present invention relates to signal processing, and, in particular, to computer-implemented processes and apparatuses for encoding and decoding image signals for progressive transmission and display.
2. Description of the Related Art
Still images and video images typically require large numbers of bits to represent digitally, even using sophisticated compression techniques. The time required to transmit such images for display at a remote destination, and therefore the time delay between display of successive images, may prove disturbing to the remote viewer. Using a conventional progressive transmission technique, such as those based on the wavelet transform, may prove computationally intensive and therefore time consuming.
It is desirable to provide encoding systems for generating, encoding, and transmitting image signals and decoding system for receiving, decoding, and displaying image signals that reduce the delay to the remote viewer. In particular, it is desirable to provide personal computer (PC) based conferencing systems that provide the capabilities for efficient transmission of images from one conference participant to a remote conference participant over relatively low bandwidth media, such as a PSTN telephone line.
It is, therefore, an object of the present invention to provide computer-implemented processes and apparatuses for efficiently generating, encoding, and transmitting image signals and methods, apparatuses, and systems for efficiently receiving, decoding, and displaying image signals.
It is a particular object that the present invention be applicable to PC-based conferencing systems.
Further objects and advantages of this invention will become apparent from the detailed description of a preferred embodiment which follows.
The present invention is a computer-implemented process and apparatus for encoding image signals. According to a preferred embodiment, a location within an image is selected and signals corresponding to the image are encoded following a spatial decomposition pattern, wherein the spatial decomposition pattern is based on the selected location.
The present invention is also a computer-implemented process and apparatus for displaying encoded image signals corresponding to an original image. According to a preferred embodiment, an initial set of encoded image signals is provided and a display image is displayed in accordance with the initial set of encoded image signals. A sequence of subsequent sets of encoded image signals is provided and the display image is progressively updated in accordance with the sequence, wherein the sequence corresponds to a spatial decomposition pattern based on a selected location within the original image.
The present invention is also a computer-implemented process and apparatus for transmitting image signals corresponding to an original image at a local node for display at a remote node. According to a preferred embodiment, a location within the original image is selected at the local node. The image is divided into a plurality of blocks at the local node and a transform is applied to each of the blocks to generate a plurality of transformed blocks at the local node, wherein each of the transformed blocks comprises a DC transformed signal and a plurality of AC transformed signals. The DC transformed signals for all of the transformed blocks are encoded to generate encoded DC transformed signals at the local node and the encoded DC transformed signals are transmitted from the local node to the remote node. The encoded DC transformed signals are decoded at the remote node to generate decoded DC transformed signals and a display image is displayed in accordance with the decoded DC transformed signals on a monitor at the remote node. The AC transformed signals for all of the transformed blocks are encoded following a spatial decomposition pattern at the local node to generate a sequence of encoded AC transformed signals, wherein the spatial decomposition pattern is based on the selected location, and the sequence of encoded AC transformed signals is transmitted from the local node to the remote node. The sequence of encoded AC transformed signals is decoded at the remote node to generate a sequence of decoded AC transformed signals and the display image is progressively updated in accordance with the sequence of decoded AC transformed signals.
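The spatial decomposition pattern in the embodiment above can be illustrated with a small sketch. This is an illustration only: the patent describes the pattern generically, and the distance-based ordering, function name, and parameters below are my assumptions, not the claimed method.

```python
# Hypothetical sketch of a spatial decomposition pattern: order the image's
# blocks by distance from a selected location, so that AC transformed signals
# for blocks near the point of interest are encoded and transmitted first.

def block_transmission_order(img_w, img_h, block, sel_x, sel_y):
    """Return (bx, by) block origins sorted by squared distance of each
    block's centre from the selected location (sel_x, sel_y)."""
    order = []
    for by in range(0, img_h, block):
        for bx in range(0, img_w, block):
            cx, cy = bx + block / 2.0, by + block / 2.0
            d2 = (cx - sel_x) ** 2 + (cy - sel_y) ** 2
            order.append((d2, bx, by))
    order.sort()  # nearest block first; ties broken by (bx, by)
    return [(bx, by) for _, bx, by in order]
```

Consistent with the preferred embodiment, the DC signals for all blocks would be transmitted before this ordering is applied to the AC signals, giving the remote viewer a coarse full image that then sharpens outward from the selected location.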
Abstract: Emerging and established viral diseases take an enormous toll on human health. Current treatment approaches are unlikely to halt epidemic spread of many viruses, notably HIV-1, due to prohibitive costs of treatment (i.e. access), compliance issues, rapid viral mutation, and the influence of hard-to-reach high-risk viral 'superspreaders'. We propose to shift the treatment paradigm toward developing Therapeutic Infectious Pseudoviruses (TIPs) that require the pathogen to replicate. TIPs would transmit along a pathogen's normal transmission route, reaching precisely those high-risk populations that most require therapy. TIPs co-opt wild-type virus packaging elements, decreasing disease-progression in vivo and reducing disease transmission on a population scale. We have demonstrated that an anti-HIV TIP could mutate with equal speed and under evolutionary selection to maintain its parasitic relationship with wild-type virus, thereby overcoming viral mutational escape. Since TIPs replicate conditionally (i.e. piggyback) treatment compliance and cost issues are eliminated. A precedent for the safety of TIPs exists in the oral polio vaccine (a live-attenuated vaccine) which exhibits limited spread and is being used in the polio eradication campaign. To develop candidate TIPs we will capitalize upon our expertise in HIV-1 transcriptional circuitry. We discovered that HIV-1 exploits stochastic gene-expression to control entry into a dormant state (proviral latency). By targeting a cellular gene (SirT1) essential for viral feedback, we have biased HIV-1 toward dormancy and diminished reactivation. We will exploit this innovative strategy of forcing viruses into dormancy by utilizing our single-cell imaging methods to conduct high-throughput imaging screens for therapeutic candidates that promote viral latency. 
Next, these candidate TIPs will be analyzed in novel microfluidic chemostats that maintain homeostatic infection and allow viral evolution in an in vivo-like setting. By integrating these approaches with predictive models, we will develop a revolutionary therapy to halt the spread of HIV/AIDS and other infectious diseases. Public Health Relevance: Emerging and established viral diseases are major health concerns. Many viral diseases lack effective treatments or preventative vaccines and even when available these treatments are unable to halt epidemic spread due to viral mutational escape and the presence of infectious superspreader individuals. Clearly, new and more effective antiviral strategies are needed and this proposal presents a multi-pronged approach to identify and develop an innovative new antiviral approach.
Q:
Why is this utility function not picking up its penalty?
I was reading this seminal paper by Infanger. Figure 11 on page 40 is quite interesting; in particular I was interested in the top panel, 19 Years, and I wanted to reproduce this plot. To give some background: it's about utility maximization, which should be solved by a dynamic programming (DP) approach, i.e.
$$ \max_{x_t,0\le t\le T} E[u(W_T)]$$
where $u$ is a utility function and $W_T$ is the wealth at time $T$. We want therefore to maximize terminal wealth. For the picture he uses the following "quadratic downside risk" function
$$ u(W) = W - \frac{\lambda}{2}\left[\max(0, W_d-W)\right]^2$$
where $W_d$ is a target amount and $\lambda$ a scaling parameter. As wealth evolves via $W_{t+1} = W_t \cdot\langle x_t, R\rangle$, where $x_t$ is the allocation and $R$ the (gross) return vector, he writes down the Bellman equation of this problem:
$$V_{t}(W_t) = \max_{x_t}E[V_{t+1}(W_t\cdot \langle x_t, R\rangle)|W_t]$$
Since I'm just interested in the final step, we have $V_T = u$, and the maximization problem I want to solve is
$$V(W) = \max_{x}E[u(W\cdot \langle x, R\rangle)|W] $$
dropping the time $t$ index. I assumed (as Infanger did, if I understand him correctly) that the returns $R$ are normally distributed. I then use Gauss-Hermite quadrature to approximate the expectation (see page 71 in this paper).
$$V(W) = \frac{1}{\sqrt{\pi}}\max_{x}\sum_{i=1}^m w_i u(W(1+\hat{\mu}(x)+\sqrt{2}\hat{\sigma}(x)\cdot q_i)) $$
where $\hat{\mu} = \langle \mu, x\rangle$ and $\hat{\sigma} = \sqrt{\langle x, \Sigma x\rangle}$ and $w_i$ are the Gauss-Hermite weights and $q_i$ the corresponding nodes.
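As a quick sanity check on this quadrature step, here is a minimal sketch in Python (rather than the R used below; the helper name `gh_expectation` is mine) of the identity $E[g(Z)] \approx \frac{1}{\sqrt{\pi}}\sum_i w_i\, g(\sqrt{2}\,q_i)$ for $Z\sim N(0,1)$:

```python
# Sketch (not from the post): verify the Gauss-Hermite identity
#   E[g(Z)] ~ (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * q_i),  Z ~ N(0,1)
import numpy as np

def gh_expectation(g, n=10):
    """Approximate E[g(Z)], Z ~ N(0,1), with n-point Gauss-Hermite quadrature."""
    q, w = np.polynomial.hermite.hermgauss(n)  # nodes/weights for weight e^{-q^2}
    return w @ g(np.sqrt(2.0) * q) / np.sqrt(np.pi)

# Check against known moments: E[Z^2] = 1, E[Z^4] = 3
m2 = gh_expectation(lambda z: z**2)
m4 = gh_expectation(lambda z: z**4)
```

With the change of variables $W(1+\hat{\mu}+\sqrt{2}\hat{\sigma} q_i)$, this is exactly the weighted sum being maximized above.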
I've coded a very simple and not optimized version to see if I get the desired picture.
first a picture of the utility function:
utility <- function(w){
  K <- 100000               # target wealth W_d
  temp <- K - w             # shortfall below the target
  temp[temp <= 0] <- 0      # max(0, W_d - w)
  return(w - 1000*temp^2)   # lambda/2 = 1000
}
x <- seq(90000,120000,1000)
y <- utility(x)
plot(x,y,type="l")
Now I just generated a sequence of $W$ values and solved the above problem for each. The following code chunk defines the covariance matrix and expected return vector. The data is from the Infanger paper above.
wealth <- seq(50000,150000,5000)
mu <- c(0.108, 0.1037, 0.0949, 0.079, 0.0561)
cor <- matrix(c(1, 0.601, 0.247, 0.062, 0.094,
0.601, 1.0, 0.125, 0.027, 0.006,
0.247, 0.125, 1.0, 0.883, 0.194,
0.062, 0.027, 0.883, 1.0, 0.27,
0.094, 0.006, 0.194, 0.27, 1.0),
ncol=5, nrow=5,byrow=T)
std <- c(0.1572, 0.1675, 0.0657, 0.0489, 0.007)
temp <- std%*%t(std)   # outer product of standard deviations
cov <- temp*cor        # element-wise: cov_ij = std_i * std_j * cor_ij
With this data at hand and a sequence of wealth (see above) I just run an optimization for each given wealth and store the solution (assuming no short selling). To solve the problem I used the Rsolnp package in R. It solves a minimization problem that's why I'm returning a $-1$ in the objective function below:
library(Rsolnp)
library(statmod)
obj <- function(x, currentWealth, mu, cov, r=0, nodes, weights){
  drift <- sum((mu-r)*x) + r                  # portfolio expected return
  sigma <- sqrt(sum(x*(cov%*%x)))             # portfolio volatility
  term1 <- currentWealth*(1+drift)
  term2 <- currentWealth*sqrt(2)*sigma*nodes  # Gauss-Hermite change of variables
  # negated because solnp minimizes while we maximize expected utility
  return(-1/sqrt(pi)*sum(weights*utility(term1+term2)))
}
g_constraints <- function(x,currentWealth, mu, cov, r=0, nodes, weights){
return(sum(x))
}
x0 <- rep(0.25,length(mu))
weights <- gauss.quad(10,"hermite")$weights
nodes <- gauss.quad(10,"hermite")$nodes
solmat <- matrix(NA, ncol=length(mu),nrow=length(wealth))
for(i in 1:length(wealth)){
sol <- solnp(pars=x0, fun = obj,
eqfun = g_constraints,
eqB = 1,
LB = rep(0, length(mu)),
UB = rep(1, length(mu)),
currentWealth = wealth[i], mu = mu, cov = cov,
r = 0, nodes = nodes, weights = weights)
solmat[i,] <- sol$pars
x0 <- sol$pars
}
colnames(solmat) <- c("US Stock", "Int Stocks", "Corp Bonds", "Gvnt Bond", "Cash")
rownames(solmat) <- as.character(wealth)
However, I get a constant allocation where all money is invested in US Stocks. What's wrong with this and how do I get this chart from Infanger?
A:
The problem was a missing $W_t$ factor multiplying the volatility term in the objective. I've updated the code above and reran it. We now get the following allocation, which is much closer to the Infanger paper.
> solmat
US Stock Int Stocks Corp Bonds Gvnt Bond Cash
50000 5.043872e-01 0.089871441 0.40574133 2.745030e-08 1.788550e-09
55000 4.050341e-01 0.090625580 0.50434024 2.744996e-08 1.788417e-09
60000 3.222272e-01 0.091347143 0.58642565 2.744972e-08 1.788325e-09
65000 2.521815e-01 0.091750138 0.65606829 2.744945e-08 1.788218e-09
70000 1.920722e-01 0.092167629 0.71576010 2.744928e-08 1.788152e-09
75000 1.401952e-01 0.092551771 0.76725296 2.744917e-08 1.788109e-09
80000 9.542976e-02 0.092965569 0.81160464 2.744770e-08 1.787578e-09
85000 5.926248e-02 0.085949047 0.77256462 2.548693e-10 8.222386e-02
90000 2.556086e-02 0.042435548 0.35690767 2.546768e-10 5.750959e-01
95000 5.666367e-07 0.007460414 0.02786258 1.724982e-12 9.646764e-01
1e+05 4.086260e-03 0.018524853 0.14238886 1.318764e-04 8.348682e-01
105000 4.229705e-03 0.021298108 0.33601246 1.319004e-04 6.383278e-01
110000 4.261748e-03 0.022020057 0.49978208 1.319047e-04 4.738042e-01
115000 1.014474e-02 0.042426599 0.62859439 3.219158e-03 3.156151e-01
120000 1.040348e-02 0.046451692 0.76095787 3.218435e-03 1.789685e-01
125000 1.308464e-02 0.132081249 0.79793418 3.218537e-03 5.368139e-02
130000 1.429210e-02 0.239570571 0.72417454 3.105067e-03 1.885772e-02
135000 1.471366e-02 0.313967841 0.65235180 3.064252e-03 1.590244e-02
140000 1.492658e-02 0.369961444 0.59727148 3.044485e-03 1.479601e-02
145000 1.506062e-02 0.416426483 0.55128612 3.032353e-03 1.419442e-02
150000 1.515564e-02 0.456708072 0.51130955 3.023891e-03 1.380285e-02
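For comparison, here is a minimal Python sketch of the same per-wealth optimization (my own translation, not part of the original answer), using SciPy's SLSQP in place of Rsolnp; the data is taken from the question, and the target $W_d = 100{,}000$ and penalty $\lambda/2 = 1000$ match the `utility()` code above:

```python
# Sketch: replicate the per-wealth expected-utility maximization with SciPy.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.108, 0.1037, 0.0949, 0.079, 0.0561])
std = np.array([0.1572, 0.1675, 0.0657, 0.0489, 0.007])
cor = np.array([[1.000, 0.601, 0.247, 0.062, 0.094],
                [0.601, 1.000, 0.125, 0.027, 0.006],
                [0.247, 0.125, 1.000, 0.883, 0.194],
                [0.062, 0.027, 0.883, 1.000, 0.270],
                [0.094, 0.006, 0.194, 0.270, 1.000]])
cov = np.outer(std, std) * cor              # element-wise, as in the R code

def utility(w, wd=1e5, half_lam=1000.0):
    # quadratic downside risk: u(W) = W - (lambda/2) * max(0, W_d - W)^2
    return w - half_lam * np.maximum(wd - w, 0.0) ** 2

q, gh_w = np.polynomial.hermite.hermgauss(10)   # Gauss-Hermite nodes/weights

def neg_expected_utility(x, wealth):
    drift = mu @ x
    sigma = np.sqrt(x @ cov @ x)
    # note the wealth factor on the volatility term -- the fix from the answer
    w_next = wealth * (1.0 + drift + np.sqrt(2.0) * sigma * q)
    return -(gh_w @ utility(w_next)) / (np.sqrt(np.pi) * 1e6)  # scaled

def solve(wealth):
    n = len(mu)
    res = minimize(neg_expected_utility, np.full(n, 1.0 / n), args=(wealth,),
                   method="SLSQP", bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
    return res.x

alloc = solve(90_000.0)   # long-only, fully invested allocation
```

The objective is divided by a constant (1e6) purely for numerical stability; this does not change the maximizer.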
|
{
"pile_set_name": "StackExchange"
}
|
Silent Night (Bon Jovi song)
"Silent Night" is a single and power ballad by American rock band Bon Jovi. It is taken from their second album, 7800° Fahrenheit.
It was the album's final single, debuting on the Billboard Mainstream Rock Tracks chart Christmas week 1985 and hitting its peak of #24 a month later. The ballad was the glam metal album's most successful entry at rock radio, although it did not make the pop chart.
Chart performance
References
External links
Category:Bon Jovi songs
Category:1986 songs
Category:Songs written by Jon Bon Jovi
Category:PolyGram singles
Category:Hard rock ballads
|
{
"pile_set_name": "Wikipedia (en)"
}
|
--- qemu-2.10.0-rc3-clean/linux-user/elfload.c 2017-08-15 11:39:41.000000000 -0700
+++ qemu-2.10.0-rc3/linux-user/elfload.c 2017-08-22 14:33:57.397127516 -0700
@@ -20,6 +20,8 @@
#define ELF_OSABI ELFOSABI_SYSV
+extern abi_ulong afl_entry_point, afl_start_code, afl_end_code;
+
/* from personality.h */
/*
@@ -2085,6 +2087,8 @@
info->brk = 0;
info->elf_flags = ehdr->e_flags;
+ if (!afl_entry_point) afl_entry_point = info->entry;
+
for (i = 0; i < ehdr->e_phnum; i++) {
struct elf_phdr *eppnt = phdr + i;
if (eppnt->p_type == PT_LOAD) {
@@ -2118,9 +2122,11 @@
if (elf_prot & PROT_EXEC) {
if (vaddr < info->start_code) {
info->start_code = vaddr;
+ if (!afl_start_code) afl_start_code = vaddr;
}
if (vaddr_ef > info->end_code) {
info->end_code = vaddr_ef;
+ if (!afl_end_code) afl_end_code = vaddr_ef;
}
}
if (elf_prot & PROT_WRITE) {
|
{
"pile_set_name": "Github"
}
|
Vegan Sweet And Spicy Chipotle BBQ Sloppy Joes
Sloppy Joes are the best! My mom used to make them all the time when I was little. With 4 kids, it was something easy and super quick for her to make as we all screamed about being hungry. I may only have one kid, but I still always want quick and easy dinners. The only difference is, Lenore and I don’t eat meat! So these vegan sweet and spicy chipotle BBQ sloppy joes are perfect for us!
The base for these is lentils, but it would work with lots of different proteins. Tofu crumbles, other beans, tvp, etc. I like to buy pre cooked lentils, they have great ones at Trader Joe’s. That makes this meal take about 15 minutes or so.
Once you have cooked lentils, either bought that way, or dried ones you have cooked, all you have to do is make the tastiest BBQ sauce ever! Sweet and spicy chipotle BBQ sauce is ahhhhmazing! It is also easy, and refined sugar free! So just whisk together some ingredients, heat it up to develop the flavor, then just toss with some lentils in a sauce pan to heat it all together and make the best dinner ever!
I like my sloppy joes pretty sloppy, so I added pretty much all of the BBQ sauce. However, you can definitely customize how sloppy you want your sloppy joes to be! I like to top mine with some lightly dressed coleslaw, it gives it some freshness and a nice crunch to contrast the dare I say sloppiness.
The perfect comfort food in winter or a nice BBQ meal in the summer, all the time is a good time for these sloppy joes. Everyone loves them, and when I make them for children, I just make the BBQ sauce a bit less spicy! You can easily make these however you want!
Whisk everything together and heat on medium. Bring to a simmer, season with a bit of salt and pepper, reduce heat to low and simmer for 5-10 minutes while you make the lentil mixture.
Now heat the olive oil in a cast iron skillet or non stick pan on medium high, add the onions and garlic, cook for about 2 minutes or so until the onions are translucent.
Now add the cooked lentils, and saute everything together until the lentils are warmed through and the garlic and onions are completely cooked. Season with a bit of salt and pepper.
Now taste the BBQ sauce and adjust seasoning. Then add the BBQ sauce to the lentils. I started by adding about half of the BBQ sauce, then a little at a time until it was as saucy as I wanted it. It will thicken as you cook it all together.
Stir and reduce heat to low, simmer for a minute or so until it thickens a bit, adding as much sauce as you want (I used almost all of it, but it is up to you).
Now remove from heat, scoop some of the lentils onto a bun. Serve as is, or top with some vegan coleslaw, or raw onions, or anything your heart desires. Serve immediately.
Recipe Notes
I like to use a fork and knife to chop my chipotles, that way I don't have to actually touch them because I will inevitably touch my face/eyes immediately afterwards.
Check Out My New YouTube Channel
Amazon Associates Disclosure
Lauren Hartmann is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to [insert the applicable site name (amazon.com or myhabit.com)].
|
{
"pile_set_name": "Pile-CC"
}
|
{% set short_lang = 'python' %}
{% if language == 'JavaScript' %}
{% set short_lang = 'js' %}
{% elif language == 'Go' %}
{% set short_lang = 'go' %}
{% elif language == 'Rust' %}
{% set short_lang = 'rust' %}
{% endif %}
{% set lowercase_lang = 'python' %}
{% if language == 'JavaScript' %}
{% set lowercase_lang = 'javascript' %}
{% elif language == 'Go' %}
{% set lowercase_lang = 'go' %}
{% elif language == 'Rust' %}
{% set lowercase_lang = 'rust' %}
{% endif %}
Overview
========
This tutorial shows how to use the Sawtooth {{ language }} SDK to develop a
simple application (also called a transaction family).
A transaction family includes these components:
* A **transaction processor** to define the business logic for your application.
The transaction processor is responsible for registering with the validator,
handling transaction payloads and associated metadata, and getting/setting
state as needed.
* A **data model** to record and store data.
* A **client** to handle the client logic for your application.
The client is responsible for creating and signing transactions, combining
those transactions into batches, and submitting them to the validator. The
client can post batches through the REST API or connect directly to the
validator via `ZeroMQ <http://zeromq.org>`_.
The client and transaction processor must use the same data model,
serialization/encoding method, and addressing scheme.
In this tutorial, you will construct a transaction handler that implements XO,
a distributed version of the two-player game
`tic-tac-toe <https://en.wikipedia.org/wiki/Tic-tac-toe>`_.
{% if language == 'Python' or language == 'JavaScript' %}
This tutorial also describes how a client can use the {{ language }} SDK
to create transactions and submit them as :term:`Sawtooth batches<batch>`.
{% endif %}
.. note::
This tutorial demonstrates the relevant concepts for a Sawtooth transaction
processor and client, but does not create a complete implementation.
{% if language == 'Rust' %}
For a full Rust implementation of the XO transaction family, see
`https://github.com/hyperledger/sawtooth-sdk-rust/tree/master/examples/xo_rust
<https://github.com/hyperledger/sawtooth-sdk-rust/tree/master/examples/xo_rust>`_.
{% elif language == 'Go' %}
For a full Go implementation of the XO transaction family, see
`https://github.com/hyperledger/sawtooth-sdk-go/tree/master/examples/xo_go
<https://github.com/hyperledger/sawtooth-sdk-go/tree/master/examples/xo_go>`_.
{% elif language == 'Java' %}
For a full Java implementation of the XO transaction family, see
`https://github.com/hyperledger/sawtooth-sdk-java/tree/master/examples/xo_java
<https://github.com/hyperledger/sawtooth-sdk-java/tree/master/examples/xo_java>`_.
{% elif language == 'JavaScript' %}
For a full JavaScript implementation of the XO transaction family, see
`https://github.com/hyperledger/sawtooth-sdk-javascript/tree/master/examples/xo
<https://github.com/hyperledger/sawtooth-sdk-javascript/tree/master/examples/xo>`_.
{% else %}
For a full Python implementation of the XO transaction family, see
`https://github.com/hyperledger/sawtooth-sdk-python/tree/master/examples/xo_python
<https://github.com/hyperledger/sawtooth-sdk-python/tree/master/examples/xo_python>`_.
{% endif %}
.. Licensed under Creative Commons Attribution 4.0 International License
.. https://creativecommons.org/licenses/by/4.0/
|
{
"pile_set_name": "Github"
}
|
18 October 2011
My Very First Birchbox
I'm taking a break from reviewing Allure's Best of Beauty's list to talk about Birchbox's little pink box. I received my first box in the mail on Saturday. I can't even express the giddiness I felt opening it. They did well choosing products that fit me personally. I'm impressed!
I was even impressed with the packaging. It felt like a close friend sent me a gift that was hand wrapped just for me! AND, I was expecting much smaller samples. Score! Score! Score!
Here's what I thought about the actual product samples.
*Disclaimer - I am not affiliated with Birchbox (or any of the product brands featured in this post). I have not received compensation for this post in any form (monetary or otherwise). I paid for the subscription and all opinions voiced on this blog are strictly my own.
1. The Laundress Delicate Wash
(Description: We know how tricky it can be to care for your delicates and lingerie. The Laundress Delicate Wash cleans effectively yet gently for those tricky specialty fabrics such as silk, silk blends, fine cotton, polyester, rayon, nylon and more. Specially formulated to remove perspiration, body oils and stains for both hand and machine washing.)
I haven't used this yet but I'm curious to see if it does what it says. I mean, what woman wouldn't love a product that can clean their more delicate clothing as well as a dry cleaner? I gotta admit... I'm kind of scared to try it. Me and Ms. Laundress might be brawling if one of my treasured items is ruined.
2. Ahava Mineral Foot Cream
(Description: Rescue and renew the rough skin on the soles of your feet and prevent further chapping and cracking. This exclusive formulation – enriched with our very own Mineral Skin OsmoterTM, anti-bacterial Tea-tree oil and other natural plant derivatives –makes the splits and dryness disappear. Enjoy a smoother, revitalized surface on your legs and feet that is visible for all to behold. Hypoallergenic.)
I can't tell a difference in the smoothness of my feet after using this product, but I can't say the cream is at fault. I've only used it one time, and I think foot creams need to be used daily, or regularly, to see noticeable results. I'm interested to see how well it works after using it for a while. It smells really good and felt nice on my little piggies though!
I was familiar with this spray because my hair stylist uses it. I loved it, as always.
4. Blinc Mascara in Black
(Description: Formerly known as Kiss Me, blinc is the original mascara invented to form tiny water-resistant "tubes" around your lashes rather than painting them like conventional mascaras. Once applied, these beauty tubes bind to your lashes and cannot run, smudge, clump, or flake, even if you cry or rub your eyes. Whether your daily activities take you from the office, to your sweaty workout and then out to dinner, your lashes will look as good in the evening as they did when you first applied blinc in the morning.)
I'm not sure what I think about this mascara. There's nothing wrong with it, per se. It didn't flake or smudge, as promised... but it's really weird. I'm not fond of strange... even when it works. Like, what are these tubes they speak of? I kept imagining actual tubes encasing each eyelash. It just sounds really weird to me. I think I'll stick with my Loreal.
5. Philosophy Purity Made Simple One-Step Facial Cleanser
(Description: As your days get busier, simple, pampering skin care becomes more important. Purity Made Simple deep cleans pores, dissolving dirt and debris -- acting as a cleanser and eye makeup remover, and eliminating the need for a toner. Follow up with Hope in a Jar moisturizer, and you're on the path toward makeup-optional skin.)
This was my favorite sample in the box. I've always been a fan of Philosophy, but I'd never tried their facial cleanser and it didn't disappoint. My face felt so fresh, and clean, and pure. It even cleared up a few breakouts that have been hanging around, and I've only been using it since Saturday. I will be purchasing the full-sized version of this one.
Have you joined Birchbox? What did you receive in your box this month?
*The product descriptions came directly from the company's website.
|
{
"pile_set_name": "Pile-CC"
}
|
Unexplained lymphadenopathy in family practice. An evaluation of the probability of malignant causes and the effectiveness of physicians' workup.
The study reported here was undertaken to determine the probability of malignancy in patients presenting with unexplained lymphadenopathy in primary care practice and to estimate the effectiveness of current referral patterns by family physicians in relation to malignant disease. Clinical characteristics that may be discriminatory for malignant causes were also investigated. A retrospective analysis was performed of 82 patients who underwent biopsy for unexplained lymphadenopathy from 1982 to 1984; data regarding the incidence of unexplained lymphadenopathy and the referral rate for this problem were obtained from registration projects. A total of 29 malignant lymphadenopathies were identified, for a prior probability of 1.1 percent and a posterior (after referral) probability of 11 percent. The ability of the family physician to refer malignant cases within four weeks after initial consultation (sensitivity of referral) was 80 to 90 percent; 91 to 98 percent of benign cases were not referred (specificity of referral). An increased likelihood of malignancy was associated with age over 40 years (4 percent) and supraclavicular lymphadenopathy (50 percent). The incidence of malignancy in patients presenting with unexplained lymphadenopathy to the family physician is very low (1 to 2 percent). Nevertheless, despite the paucity of validated discriminatory factors, the family physicians perform a reasonably effective selection process toward referral and biopsy.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Measurement of thick NiP coatings on automotive parts
In the automotive industry, the plungers used inside the solenoid valves of automatic transmission gearboxes must fit smoothly into their through-holes to an accuracy of just a few µm, in order to prevent oscillations that would lead to jamming or canting. To meet these tight tolerance limits, the plungers must be coated very evenly, which requires strict quality control.
In the manufacture of parts in the automotive and machine-building industries, adhering to extremely tight tolerance limits is necessary to guarantee the components’ proper functioning. That is why electroless metal platings like electroless nickel are being used more and more frequently, as they enable a very even coating: The layer builds up more homogeneously and with less variation in thickness than electroplated coatings, which tend toward excessive coating thicknesses on edges and corners.
In this example, steel plungers for solenoid valves are coated with approximately 60-70 µm of NiP containing at least 10% phosphorous. Afterwards, the parts are ground to an accurate fit; the end thickness of the coating is approximately 50 µm, which must be within a tolerance range of ± 4 µm. This layer is itself non-magnetic and can, for purposes of incoming inspection and/or after grinding, be measured with the magnetic induction method using the DUALSCOPE® FMP100 and the FGAB 1.3 probe.
Sample                                        Coating thickness   Standard deviation
Unground plunger                              67 µm               Ø 3 µm *
Finished plunger                              50 µm               Ø 0.3 µm *
Measurement system variation (repeated
measurements on a single measurement spot)    --                  0.03 µm
Tab.1: Measurement results of a quality inspection. * 10 readings taken on different measurement spots per sample
The DUALSCOPE® FMP100 and FGAB1.3 probe are employed in conjunction with a V12 BASE stand, which makes it possible to replicate the measurement procedure with consistent probe positioning and angle. This minimizes operator influence and produces extremely repeatable results, as shown in Table 1: the standard deviation for the measurements of the coating after grinding is, on average, just 0.3 µm, and the variation of the entire measurement system itself is only 0.03 µm, which is negligible. The measurement device capability is therefore fulfilled, even for the required tight tolerances.
Fig.1: DUALSCOPE® FMP100
The DUALSCOPE® FMP100, together with the probe FGAB1.3 and the stand V12 BASE, forms a reliable control system that can precisely and accurately measure NiP coatings on automotive components with minimal variation. This allows both monitoring of quality specifications and adherence to very tight tolerance limits – and therefore, the avoidance of potentially costly warranty claims. For further information please contact your local FISCHER representative.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Passing args[] to java app with spaces
If you pass an arg to a java app via a bash script, and the arg contains whitespaces, how can you make the java app see the entire string as the arg and not just the first word? For example passing the following two args via bash to java:
var1="alpha beta zeta"
var2="omega si epsilon"
script.sh $var1 $var2
(inside script.sh)
#!/bin/bash
java -cp javaApp "$@"
(inside javaApp)
param1 = args[0];
param2 = args[1];
The values of my param variables in javaApp are getting only the words, not the entire lines:
"param1 is alpha"
"param2 is beta"
What can I change in the javaApp to see the entire string being passed in via args[] as the argument and not just the first word it encounters?
A:
The problem is the way your shell script is written, not your Java program.
You need to quote the arguments:
script.sh "$var1" "$var2"
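A standalone demonstration (my own, with a hypothetical helper `count_args`) of why this works: word splitting happens when the variable is expanded unquoted, before the Java process is ever started.

```shell
# Word splitting happens at expansion time, not inside java.
var1="alpha beta zeta"

count_args() { echo "$#"; }   # prints how many arguments it received

count_args $var1     # unquoted -> split into 3 words -> prints 3
count_args "$var1"   # quoted   -> one single argument -> prints 1
```

The same rule applies inside `script.sh`: `"$@"` preserves each original argument as one word, while a bare `$@` or `$*` would re-split them.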
|
{
"pile_set_name": "StackExchange"
}
|
Optically detected cross-relaxation spectroscopy of electron spins in diamond.
The application of magnetic resonance spectroscopy at progressively smaller length scales may eventually permit 'chemical imaging' of spins at the surfaces of materials and biological complexes. In particular, the negatively charged nitrogen-vacancy (NV(-)) centre in diamond has been exploited as an optical transducer for nanoscale nuclear magnetic resonance. However, the spectra of detected spins are generally broadened by their interaction with proximate paramagnetic NV(-) centres through coherent and incoherent mechanisms. Here we demonstrate a detection technique that can resolve the spectra of electron spins coupled to NV(-) centres, in this case, substitutional nitrogen and neutral nitrogen-vacancy centres in diamond, through optically detected cross-relaxation. The hyperfine spectra of these spins are a unique chemical identifier, suggesting the possibility, in combination with recent results in diamonds harbouring shallow NV(-) implants, that the spectra of spins external to the diamond can be similarly detected.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
2.07.2008
This might come off as a pie in the sky post so either bear with me or go read about Roger Clemens and his exploding career below instead.
Yesterday ended up being a pretty good day all the way around. I started working at my third school, was very impressed by the teacher's setup, went to a multi-hour tech meeting, filled out lots more paperwork, got some signatures, signed some other papers and then went back to the new school. With a delicious new MacBook Pro in tow.
Not one of the newest of the new, it is still a way smoking little machine and absolutely crushes my old G4 Aluminum (even if it weren't slowly dying) in terms of speed and the cool stuff it's got. And, even though I hate Windows, it is pretty slick to be able to reboot my computer into Windows XP. I'm still configuring it and getting it up to speed for my use, both professionally and personally.
I didn't get my district cellphone yet but that's because I need to go on a signature drive first. I need my three principals to agree to the unusual contract. And then I get a new phone to go with my other new gear.
It will take some getting used to, this new schedule, but I do think it'll be kind of nice to be moving around so much and dealing with so many different kids, teachers, campuses and computers. There will be difficulties, to be sure, but overall it's a really good situation.
And I am looking forward to improving the tech capabilities of my schools to produce a more savvy student as well as educating teachers in making better use of technology.
There were a lot of good things yesterday and I don't want to forget them. Let's see: new laptop, new school, fully completed HR paperwork, a raise (which was unexpected and rather awesome), new info about my benefits package (which kicks butt, for sure), a new cellphone on the way, mileage reimbursement, and more hours. Oh yeah, I also picked up a useful and inexpensive tool kit to be able to pull machines apart more easily. Woot all the way around!
|
{
"pile_set_name": "Pile-CC"
}
|
---
abstract: 'This paper addresses a particular instance of probability maximization problems with random linear inequalities. We consider a novel approach that relies on recent findings in the context of non-Gaussian integrals of positively homogeneous functions. This allows for showing that such a maximization problem can be recast as a convex stochastic optimization problem. While standard stochastic approximation schemes cannot be directly employed, we notice that a modified variant of such schemes is provably convergent and displays optimal rates of convergence. This allows for stating a variable sample-size stochastic approximation (SA) scheme which uses an increasing sample-size of gradients at each step. This scheme is seen to provide accurate solutions in a fraction of the time required by standard SA schemes.'
author:
- 'I. E. Bardakci'
- 'C. Lagoa'
- 'U. V. Shanbhag [^1][^2]'
bibliography:
- 'references.bib'
title: '**Probability Maximization with Random Linear Inequalities: Alternative Formulations and Stochastic Approximation Schemes** '
---
Introduction
============
In this paper, we consider the maximization of a function defined as the probability that a random vector satisfies a set of inequalities. In particular, the aim of this paper is to provide a novel avenue for resolving problems of the form: $$\begin{aligned}
\max_{x \in X} \ f(x) \ \triangleq \ \text{Prob} \{\xi \colon
\xi^{\intercal}x \leq a\},
\label{main_prob}\end{aligned}$$ where $x \in \mathbb{R}^n$ is the decision variable, $\xi : \Omega \to \mathbb{R}^d$ is a $d$-dimensional random vector, and $f: \mathbb{R}^n \to \mathbb{R}$ is a real-valued function. Furthermore, $\xi$ is assumed to be uniformly distributed on a convex set $\mathcal{K} \subset \mathbb{R}^m$ with known distribution $\text{Prob}(\cdot)$.
Problems of the form (\[main\_prob\]) fall within the umbrella of chance-constrained optimization problems and find applicability in a breadth of settings, including financial risk management [@rockafellar2002], reservoir system design [@andrieu2010], and optimal power flow [@bienstock2014]. Optimization problems with probabilistic or chance constraints were first studied in the seminal work by Charnes and Cooper [@charnes59chance]. Much of the early research in this area examined continuity [@raik1972; @wang1989], differentiability [@raik1975; @simon1989; @uryasev1989], log-concavity [@prekopa1970; @prekopa1971], quasi-concavity [@brascamp1976; @gupta1980; @tamm1977], and $\alpha$-concavity [@borrell1975; @gupta1976; @norkin1991] of probability distributions. Although there are instances of convex chance-constrained problems (cf. [@lagoa2005; @prekopa2013]), such problems are generally not convex [@prekopa2013; @pinter1989]. In particular, convexity of (\[main\_prob\]) can be claimed when the density function of the random vector is log-concave and symmetric. For instance, in [@ball1988; @bobkov2010] it has been shown that problem (\[main\_prob\]) can be reformulated as a convex program when $\xi$ has a logarithmically concave probability density function.
Despite the theoretical progress over the years, problems of the form (\[main\_prob\]) remain challenging to solve, barring a few special cases. The main difficulty in applying standard optimization techniques arises in evaluating a multi-dimensional integral (and its derivatives); in high dimensions, numerical computation of such integrals with high accuracy remains challenging [@nemirovski2009]. To this end, several avenues have emerged for addressing this class of problems:\
*Approximations.* When the problem is nonconvex, quadratic [@bental00robust] and Bernstein [@nemirovski2006] approximations allow for tractable computation of feasible solutions to (\[main\_prob\]).\
*Mixed-integer approaches.* There has been a significant effort toward resolving such problems when the distribution is over a finite sample space (or requires a set of points from a continuous sample space) via mixed-integer programming approaches [@luedtke08sample; @ahmed17nonanticipative].\
*Monte-Carlo sampling techniques.* A somewhat different tack was considered by Norkin [@norkin1993], where the probability maximization problem was recast as the expectation of a characteristic function. Then, by utilizing a convolution-based (or Steklov-Sobolev) smoothing (with a fixed parameter), a stochastic approximation framework was employed for computing an approximate solution. Sample-average approximation has also been utilized for obtaining approximate solutions to chance-constrained problems [@luedtke08sample; @ahmed09sample]. An alternate approach is proposed in [@campi11sampling], which uses a sampling-and-rejection framework. More recently, in [@hong2011], the authors develop a technique that leverages difference-of-convex (DC) programming within a simulation framework to address such settings.
**Contributions.** We also consider a stochastic approximation framework but, rather than utilizing characteristic functions, we employ recent findings on non-Gaussian integrals of positively homogeneous functions (PHFs) (see [@lassere2015; @morozov2009]) to derive an alternative formulation. In particular, the resulting problem is an expectation of a random integrand that is continuous for every $\xi$ but is nonsmooth. This allows us to employ stochastic approximation techniques on the original problem (rather than a smoothed variant). However, through a deterministic smoothing, we may also develop variable sample-size schemes for a smoothed counterpart with Lipschitz continuous gradients, which produces solutions with far less effort.
**Organization of the paper.** The paper is organized as follows. In Section II, we present some preliminary results that play an important role in our formulation and briefly review the relevant notation. Section III is dedicated to stating the problem and providing an alternative formulation. In Section IV, we present a stochastic approximation scheme and provide convergence theory and rate statements. A numerical example is provided in Section V, and we conclude the paper in Section VI.
Notation and Preliminary Results
================================
Notation and Basic Definitions
------------------------------
The sets of real numbers, nonnegative integers, and positive integers are denoted by $\mathbb{R}$, $\mathbb{N}$, and $\mathbb{Z}$, respectively. The Euclidean norm of column vectors $\mathbf{x} \in
\mathbb{R}^n$ is denoted by $\|\mathbf{x}\|$, [[while]{}]{} the spectral norm of $\mathbf{A} \in \mathbb{R}^{m\times n}$ is given by $\| \mathbf{A} \| = \text{max}\{ \|\mathbf{Ax} \| \colon \|\mathbf{x}\| \leq 1 \}$. The $n$-by-$n$ identity matrix is written as $\mathbf{I}_n$, and the $m$-by-$n$ zero matrix as $\mathbf{0}_{m \times n}$. The projection onto the set $X$ is denoted by $\Pi_{X}$, that is, $\Pi_{X}(y) = \text{argmin}_{x \in X} \| x - y \| $.\
The function $f(\cdot)$ is said to be Lipschitz continuous on [[the]{}]{} domain of $f$ with constant $L > 0$ if $$\begin{aligned}
\| f(x)- f(y)\| \leq L \|x-y\| \ \text{for all} \ x, y \in \text{dom}(f).\end{aligned}$$
\[Log-concavity\] A function $f : \mathbb{R}^d \rightarrow [0, \infty)$ is said to be log-concave if the following holds: Given any $x, y \in \mathbb{R}^d $ and $\lambda \in [0,1]$, it follows that $$\begin{aligned}
f((1-\lambda)x + \lambda y) \geq [f(x)]^{1-\lambda} [f(y)]^{\lambda}.\end{aligned}$$
\[Minkowski Functional\] Let the set $K \subset \mathbb{R}^n$. Then, the Minkowski functional associated with the set $K$, denoted by $\|\xi\|_K$, is given by $$\begin{aligned}
\|\xi\|_K \triangleq \text{inf} \{t>0 : \xi/t \in K\}\end{aligned}$$ for all $\xi \in \mathbb{R}^n $. Note that the expression above defines a norm when the set $K$ is compact, convex and symmetric.
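For intuition, the Minkowski functional can be evaluated numerically from a membership oracle for $K$: since $K$ is compact and convex with the origin in its interior, $\xi/t \in K$ for all sufficiently large $t > 0$, and bisection on $t$ recovers the infimum. The following Python sketch (the oracle, the upper bound, and the tolerance are illustrative choices, not taken from the text) does this; for the Euclidean unit ball, $\|\xi\|_K$ reduces to the Euclidean norm.

```python
import math

def minkowski_functional(xi, in_K, t_hi=1e6, tol=1e-9):
    """Evaluate ||xi||_K = inf{t > 0 : xi/t in K} by bisection on t,
    given a membership oracle in_K for a compact convex symmetric K
    containing the origin in its interior."""
    t_lo = 0.0
    while t_hi - t_lo > tol:
        t = 0.5 * (t_lo + t_hi)
        if in_K([c / t for c in xi]):
            t_hi = t  # xi/t lies in K, so t is feasible: shrink from above
        else:
            t_lo = t
    return t_hi

# For the Euclidean unit ball, ||xi||_K is simply the Euclidean norm.
ball = lambda v: math.sqrt(sum(c * c for c in v)) <= 1.0
```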
[[Throughout this paper, we define $K(x)$ as follows. $$K{{\color{black}(x)}} \triangleq \{\xi \in \mathbb{R}^n \colon |\xi^{\intercal}x| \leq 1 \}$$]{}]{}where $x \in \mathbb{R}^n$. [[Further, the function $f$ in can be restated as]{}]{} $f(x) =
\text{Prob}\{K{{\color{black}(x)}}\}$. Note that $f(0) = 1$ and $f(x) \rightarrow 0$ as $\|x\| \rightarrow \infty$.
Preliminary Results
-------------------
[[We now]{}]{} present some results that play [[an]{}]{} important role in our formulation. Throughout this paper, we [[assume that the random variable is defined by a ]{}]{}symmetric log-concave probability density function. Moreover, we assume the support $\mathcal{K} \subset \mathbb{R}^m$ of the random variable $\xi$ is centrally symmetric, i.e., the center of symmetry is the origin. First, we have the convexity of the objective function of the reformulated problem; see Section \[Prob\_State\] for a detailed description of the problem.\
\[lem\_bobkov\] [[Consider problem .]{}]{} Suppose $\xi$ has a log-concave density. Then $h(x) \triangleq 1/f(x)$ is convex in $\mathbb{R}^n$.
See Lemma 6.2 in [@bobkov2010].
The following result is needed for [[developing an]{}]{} alternative formulation of problem ; see Section \[Alt\_form\] for details.
Let $g_1,\hdots,g_l$ be [[positively homogeneous functions (PHFs)]{}]{} of degree $m \neq 0$, $m \in \mathbb{R}$ [[and]{}]{} let $\Omega {{\color{black}\
\triangleq \ }} \{\xi: g_k(\xi) \leq 1, k=1,{{\color{black}\hdots}}, l\}.$ Assume that the set $\Omega$ is bounded. Notice that $g(\xi) =
\max\{g_1(\xi),{{\color{black}\hdots}}, g_l(\xi)\}$ is [[a]{}]{} PHF of degree $m$. Then, [[the following holds.]{}]{} $$\begin{aligned}
\int_{\Omega} 1 \ d\xi = \frac{1}{\Gamma(1+n/m)} \int_{\mathbb{R}^n} e^{-g(\xi)} \ d\xi.\end{aligned}$$
See *Corollary 1* in [@lassere2015].
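As a one-dimensional illustration, take $g(\xi) = \xi^2$, a PHF of degree $m=2$ with $n=1$; then $\Omega = [-1,1]$ has volume $2$, and the identity above asserts $2 = \frac{1}{\Gamma(3/2)}\int_{\mathbb{R}} e^{-\xi^2}\, d\xi$. The following Python sketch (the quadrature grid and truncation limits are illustrative choices) checks this numerically.

```python
import math

def corollary_rhs(m, g, lo=-50.0, hi=50.0, steps=200001):
    """(1/Gamma(1 + n/m)) * int_R exp(-g(xi)) dxi for n = 1,
    evaluated with the composite trapezoid rule on [lo, hi]."""
    h = (hi - lo) / (steps - 1)
    total = 0.0
    for i in range(steps):
        xi = lo + i * h
        w = 0.5 if i in (0, steps - 1) else 1.0
        total += w * math.exp(-g(xi))
    return total * h / math.gamma(1.0 + 1.0 / m)

# g(xi) = xi^2 is a PHF of degree m = 2, and Omega = [-1, 1] has volume 2.
vol_omega = corollary_rhs(2, lambda xi: xi * xi)
```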
Problem Statement {#Prob_State}
=================
In this section, we first state the problem of interest and present the equivalent convex problem by using *Lemma \[lem\_bobkov\]*.
**Problem 1:** Consider the optimization problem given by $$\begin{aligned}
\max_{x \in X} f(x) = \text{Prob} \{ {{\color{black}K(x)}}\}, \label{fx}\end{aligned}$$ where $f \colon X \rightarrow \mathbb{R}$ [[can be shown to be]{}]{} a continuously differentiable function with Lipschitz continuous gradients. The set $K{{\color{black}(x)}} =\{\xi \in
\mathcal{K}: |\xi^{\intercal}x | \leq 1 \}$, [[where]{}]{} $x \in X$ denotes the decision variable and [[the random variable]{}]{} $\xi$ [[is uniformly distributed over the set $\mathcal{K}$]{}]{}. The set ${{\color{black}\mathcal{K}}} \subset \mathbb{R}^m$ is assumed to be compact, convex and symmetric, and the set $X \subset
\mathbb{R}^n$ is closed and convex. [[We further assume]{}]{} that $f(x) \in [\epsilon,1]$ on $X$ with $0 <\epsilon < 1$.
Although the setup above seems rather restrictive, the proposed algorithms can be applied to solve more general probability maximization problems of the form $$\max_{x\in X} \ \text{Prob}\{ x: (\zeta + a)^T (x+b) \leq 1\}.$$ In other words, the proposed approach can be used to maximize the probability of sets involving general linear constraints. This can be done by exploiting the symmetric [[nature]{}]{} of the distribution of the uncertainty. [[To keep the paper concise and the discussion focused]{}]{}, this question will be discussed in future work.
We now formally define the Problem 2, which forms the basis of our computation.
**Problem 2:** Consider the alternative problem defined as follows: $$\begin{aligned}
\min_{x \in X} \ h(x) {{\color{black} \ \triangleq \ }} \frac{1}{f(x)}. \label{hx}\end{aligned}$$ [[We proceed to show that is a convex optimization problem with $h$ being continuously differentiable with Lipschitzian gradients.]{}]{}
(*Convexity and Lipschitzian properties of* ) \[prop-1\] [[Consider the problem . Then the following hold: (i) The function $h(x)$ is convex over $X$; and (ii) The function $h(x)$]{}]{} is continuously differentiable with Lipschitz continuous gradients.
\(i) [[Since]{}]{} uniform distributions over a compact convex symmetric sets are log-concave and symmetric, by the assumption on the set $\mathcal{K}$, $\xi$ has a symmetric log-concave density. Hence by *Lemma 1*, $h(x)\triangleq 1/f(x)$ is convex; (ii) See Appendix.
We now prove the relatively simple result that allows us to claim that a global minimizer of Problem 2 (a convex program) is a global maximizer of Problem 1.
\[lem2\] Consider [[Problems 1 and 2]{}]{} where $f(x)$ is a continuously differentiable function and $f(x) \in [\epsilon,1]$ on $X$ with $1 >\epsilon > 0$. Suppose $h(x) = 1/f(x)$ is a convex function over $X\subseteq \mathbb{R}^n$, a closed and convex [[set]{}]{}. Then, a global minimizer of is a global maximizer of .
See Appendix.
[[The mere convexity of does not suffice in developing efficient first-order algorithms. To this end]{}]{}, we still need to compute the gradient of the function $h(x)$, [[which is given by the following.]{}]{} $$\begin{aligned}
\nabla_{x} h(x) = -\frac{1}{f^2(x)} \nabla_{x} f(x).\end{aligned}$$
Here, recall that $f(x) \in [\epsilon,1]$ on $X$ with $\epsilon > 0$, and in turn, $1/f^2(x)$ is bounded and deterministic. Thus, in order to compute the gradient of $h(x)$, it is enough to compute the gradient of $f(x)$. The function $f(x)$ can be written as $$\begin{aligned}
f(x) = \text{Prob} \{K{{\color{black}(x)}}\} = \int_{ K{{\color{black}(x)}}} p_{\xi}(\xi) \ d\xi \label{f_prob}\end{aligned}$$ where $p_\xi{(\xi)}$ is probability density function of the random variable $\xi$. Since ${{\color{black}\xi}}$ is uniformly distributed over the set $\mathcal{K}$, we may rewrite $f$ as follows $$\begin{aligned}
f(x) = \frac{1}{\text{Vol}({{\color{black}\mathcal{K}}})} \int_{K{{\color{black}(x)}}}
\mathbf{1}_{\mathcal{K}} (\xi) \ d\xi, \label{uni}\end{aligned}$$ where $\text{Vol}({{\color{black}\mathcal{K}}})$ denotes [[the]{}]{} volume of the set $\mathcal{K}$. However, as mentioned earlier, [[evaluating the multivariate integral ($\ref{uni}$) is computationally demanding, a concern that is addressed next.]{}]{}
Alternative formulation {#Alt_form}
-----------------------
In this section, we [[discuss how the]{}]{} integral (\[uni\]) [[may be expressed as an]{}]{} expectation of [[a suitably defined]{}]{} function, i.e., $f(x) = \mathcal{C} \, \mathbb{E}[F(x,\xi)]$. [[Under suitable assumptions, we may then utilize]{}]{} stochastic approximation tools to compute a solution to . In this setup, we use some important properties of Minkowski functionals and the result given by *Corollary 1*.
Consider the function $f(x)$ in Problem 1. Suppose $X$, $\mathcal{K}$ and $K(x)$ are defined as in Problem 1 and $\xi$ is uniformly distributed over $\mathcal{K}$. Then $$f(x) = \mathcal{C} \ \mathbb{E}[F(x,\xi)],$$ where $F \colon \mathbb{R}^n \times \mathbb{R}^n \rightarrow
\mathbb{R}$, and $\mathbb{E}[\cdot]$ denotes the expectation with respect to $p_{\xi}$, the probability density function of a standard multivariate Gaussian $\xi \sim \mathcal{N}(0,\mathbf{I}_n)$, i.e., with independent zero-mean, unit-variance components.
First, the indicator function in (\[uni\]) can be expressed as a PHF by exploiting the relation between convex sets and Minkowski functionals. Since the set $\mathcal{K}$ is compact, convex and symmetric, [[the]{}]{} Minkowski functional of $\mathcal{K}$ defines a norm, and hence, it is [[a]{}]{} PHF. Moreover, by the definition of [[the]{}]{} Minkowski functional, $\xi \in \mathcal{K}$ if and only if $ \|\xi\|_\mathcal{K} \leq 1$. Now, in order to use *Corollary 1*, define $\Omega$ as follows. $$\begin{aligned}
\Omega \triangleq \{\xi: |\xi^{\intercal} x | \leq 1\} \cap \{\xi
\colon \|\xi\|_\mathcal{K} \leq 1\},\end{aligned}$$ which can equivalently be written as $$\begin{aligned}
\Omega = \left\{\xi: \text{max}(|\xi^{\intercal} x |, \|\xi\|_{\mathcal{K}}) \leq 1
\right\}.\end{aligned}$$ Hence, we have $$\begin{aligned}
f(x)= \frac{1}{\text{Vol}({{\color{black}\mathcal{K}}})} \int_{\Omega} 1 \ d\xi.\end{aligned}$$ Now, define $g(\xi)$ as follows: $$\begin{aligned}
g(\xi) \triangleq \text{max}\{|\xi^\intercal x |^m,
{\|\xi\|^m_{{\color{black}\mathcal{K}}}}\}. \end{aligned}$$ Since $|\xi^\intercal x |^m$ and $ \|\xi\|^{m}_{{{\color{black}\mathcal{K}}}}$ are both PHFs of degree $m$, $g(\xi)$ is also a PHF of degree $m$. Thus, it follows from *Corollary 1* that $$\begin{aligned}
f(x) = \frac{1}{\text{Vol}({{\color{black}\mathcal{K}}})} \frac{1}{\Gamma(1+n/m)} \int_{\mathbb{R}^n} e^{-g(\xi)} \ d\xi, \label{Las}\end{aligned}$$ whenever $\int_{\mathbb{R}^n} e^{-g(\xi)} \ d\xi$ is finite. In fact, the expression (\[Las\]) can be written as $$\begin{aligned}
f(x) \ & = \mathcal{C} \int_{\mathbb{R}^n} \left[ (2\pi)^{n/2}
e^{-\max\{|\xi^{\intercal} x |^m, {\|\xi\|^m_{{\color{black}\mathcal{K}}}}\}+\frac{\xi^{\intercal}
\xi}{2}}\right] \\
& \times \left[ (2\pi)^{-n/2} e^{\frac{-\xi^{\intercal} \xi}{2}}
\right] \ d\xi\\
& = \mathcal{C} \int_{\mathbb{R}^n} F(x,\xi) \ p_\xi(\xi) \ d\xi =
\mathcal{C} \ \mathbb{E}[F(x,\xi)], \end{aligned}$$ where $F(x,\xi)$ is defined as $$F(x,\xi) \triangleq \left[ (2\pi)^{n/2}
e^{-\max(|\xi^{\intercal} x |^m, {\|\xi\|^m_{{\color{black}\mathcal{K}}}})+\frac{\xi^{\intercal}
\xi}{2}}\right],$$ and $ \mathcal{C} = 1/ (\text{Vol}(\mathcal{K}) \ \Gamma(1+n/m)) $.
However, $F(x,\xi)$ is not a differentiable function for every $x,\xi$ but it can be shown to be a subdifferentiable convex function. In fact, under the boundedness of $X$, we may further show that under suitable boundedness requirements of the subdifferential, we may apply the robust stochastic approximation framework [@nemirovski2009] to obtain asymptotic convergence as well as rate statements. However, such an avenue necessitates taking as many projection steps as the simulation budget, which makes large-scale implementations challenging if $X$ is a complicated set. An alternative is variable sample-size stochastic approximation (VSSA) [@jalilzadeh2017]. However, such a scheme necessitates that $F(x,\xi)$ be differentiable for almost every $\xi$, a property that may be recovered by introducing a deterministic smoothing.
Smoothing of nonsmooth integrands
---------------------------------
The integrand $F(x,\xi)$ has two sources of nonsmoothness; the first of these is the max function while the second is the absolute value function. [*Smoothing the max function.*]{} Consider the relatively simple convex function $g(u_1,u_2) =
\max\{u_1,u_2\}$ which can be smoothed via a logarithmic smoothing function $g(u_1,u_2;s) \triangleq
s\ln(\mbox{exp}(u_1/s)+ \mbox{exp}(u_2/s))$ where $s > 0$. In fact, we have that for $i \in \{1,2\}$, $$\nabla_{u_i} g(u_1,u_2;s) =
\frac{\mbox{exp}(u_i/s)}{\mbox{exp}(u_1/s)+\mbox{exp}(u_2/s)},$$ where $0 < \nabla_{u_i} g_i(u_1,u_2;s) < 1$. Furthermore, $$\begin{aligned}
\label{smooth-diff}
0 \leq g(u_1,u_2;s)-g(u_1,u_2) \leq s\ln 2 \end{aligned}$$ for all $u_1, u_2 \in \mathbb{R}$. In fact, the absolute value function $\ell(u) = |u|$ can be smoothed in a similar way by noting that $|u| = \max\{u, -u\}$ and therefore $\ell(u;s)$ is constructed in a fashion similar to $g(u_1,u_2;s)$. By employing this form of smoothing, we may construct a smoothed variant of $F(x,\xi)$ defined as follows: $$\begin{aligned}
\label{smooth-F}
F(x,\xi;s) \triangleq \left[ (2\pi)^{n/2}
e^{-g(\ell(\xi^{\intercal}x;s)^m, {\|\xi\|^m_{\mathcal{K}}};s)+\frac{\xi^{\intercal} \xi}{2}}\right].\end{aligned}$$ The continuous differentiability of $F(\cdot,\xi;s)$ can be shown with relative ease and, under suitable conditions, for every $\xi$ and $s>0$, we may further show that $\nabla_x F(\cdot,\xi;s)$ is Lipschitz continuous in $x$. We now focus on the solution of the smoothed problem: $$\begin{aligned}
\label{main_prob_s}
\max_{x \in X} \ f(x;s) \triangleq \mathbb{E}[F(x,\xi;s)],\end{aligned}$$ where $F(x,\xi;s)$ is defined in . In future work, we intend to derive bounds on $f(x;s)-f(x)$. In the rest of the paper, we will focus on .
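The smoothing above is straightforward to implement; a numerically stable form factors out the larger argument before exponentiating. The following Python sketch (purely illustrative) realizes $g(u_1,u_2;s)$ and $\ell(u;s)$ and can be used to confirm the bound $0 \leq g(u_1,u_2;s)-g(u_1,u_2) \leq s \ln 2$.

```python
import math

def smoothed_max(u1, u2, s):
    """g(u1, u2; s) = s*ln(exp(u1/s) + exp(u2/s)), computed stably by
    factoring out the larger argument to avoid overflow."""
    m = max(u1, u2)
    return m + s * math.log(math.exp((u1 - m) / s) + math.exp((u2 - m) / s))

def smoothed_abs(u, s):
    """|u| = max(u, -u), smoothed the same way."""
    return smoothed_max(u, -u, s)
```

At $u_1 = u_2$ the smoothing error attains its maximum value $s\ln 2$, consistent with the stated bound.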
Stochastic Approximation Schemes
================================
In the prior section, we observed that the function $f(x;s)$ could be recast as an expectation of $F(x,\xi;s)$. This paves the way for the development of stochastic approximation schemes for computing a solution of such problems. Note that such schemes can handle both smooth and nonsmooth objectives. It may be recalled that stochastic approximation has its roots in the seminal paper by Robbins and Monro [@robbins1951]. In the last several decades, there has been a tremendous amount of research in stochastic approximation, noteworthy amongst these being the long-step averaging framework by Polyak [@polyak1990] and Polyak and Juditsky [@polyak1992] as well as the robust stochastic approximation framework by Nemirovski, Juditsky, Lan, and Shapiro [@nemirovski2009] (which can contend with nonsmooth stochastic convex optimization). In the next subsection, we present a modified stochastic approximation scheme for computing a solution to , for which we derive asymptotic convergence and develop rate statements. However, a key shortcoming of this approach is the need for a projection onto a given convex set at every step, a burden that can prove quite onerous when the simulation lengths are long. To ameliorate this burden, we consider a variable sample-size stochastic approximation scheme [@jalilzadeh2017] and propose variable sample-size counterparts of the proposed techniques.
A modified stochastic approximation scheme
------------------------------------------
Consider the optimization problem $$\begin{aligned}
\label{opt_prob}
\min_{x \in X} \ h(x;s) = \frac{1}{f(x;s)},\end{aligned}$$ where $f(x;s) \triangleq \mathbb{E}[F(x,\xi;s)]$ and $h(x;s)$ is convex and continuously differentiable on $X$ for every $s>0$. We further assume that $f(x;s) \in [\epsilon,1]$ on $X$. The derivative of $h$ is given by the following: $$\begin{aligned}
\nabla_x h(x;s) &= -\frac{1}{f^2(x;s)} \nabla_x f(x;s)\\
&= -\frac{1}{f^2(x;s)}\mathbb{E}[ \nabla_x F(x,\xi;s)],\end{aligned}$$ where the second equality follows from interchanging derivatives and expectations [@shapiro2009]. Unfortunately, the expectation of $F(x,\xi;s)$ and its derivative are unavailable in closed form. But we do make the following assumption on the existence of a [*stochastic oracle*]{} and the parameter sequences employed in the scheme to be defined.
\[ass-1\] There exists a stochastic oracle that produces unbiased (but possibly noise-corrupted) estimates of the gradient $\nabla_x f(x;s)$. Specifically, $w_k \triangleq \nabla_x F(x_k,\xi_k;s) - \nabla_x f(x_k;s)$ satisfies the following for all $k \geq 0$: (i) $\mathbb{E}[w_k \mid \mathcal{F}_k] = 0$ and $\mathbb{E}[\|w_k\|^2 \mid
\mathcal{F}_k] \leq \nu^2$ almost surely, where $\mathcal{F}_k \triangleq \{x_0,\xi_1, \hdots, \xi_k\}$. (ii) Furthermore, $\{\gamma_k\}$ and $\{\beta_k\}$ are positive sequences such that $\sum_{k} \gamma_k/\beta_k = \infty$, $\sum_k \gamma_k^2/\beta_k^2 < \infty$, and $0 < \beta_k^2 \leq \epsilon^2$.
[[Consider a traditional stochastic approximation scheme, defined as follows for $k \geq 1$ given an $x_1 \in X$: $$\begin{aligned}
\tag{t-SA}
x_{k+1} := \Pi_X \left(x_k +
\frac{\gamma_k\nabla_x
F(x_k,\xi_k;s) }{\left(\mathbb{E}[F(x_k,\xi;s)]\right)^2} \right).\end{aligned}$$ However, $\mathbb{E}[F(x_k,\xi;s)]$ is unavailable and consequently (t-SA) is unimplementable. Instead, we consider a modified stochastic approximation scheme in which $\beta_k$ replaces $\left(\mathbb{E}[F(x_k,\xi;s)]\right)^2$, and show that this scheme is indeed convergent. $$\begin{aligned}
\tag{m-SA}
x_{k+1} := \Pi_X \left(x_k +
\frac{\gamma_k\nabla_x
F(x_k,\xi_k;s)}{\beta_k} \right).\end{aligned}$$ ]{}]{}
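To make the update concrete, the following Python sketch runs (m-SA) on a toy surrogate rather than on itself: we maximize $f(x) = \mathbb{E}[-(x-\xi)^2]$ with $\xi \sim U(0,1)$ over $X=[0,1]$, whose maximizer is $x^* = \mathbb{E}[\xi] = 0.5$, using $\gamma_k = 1/k$ and a constant $\beta_k$ (these satisfy $\sum_k \gamma_k/\beta_k = \infty$ and $\sum_k \gamma_k^2/\beta_k^2 < \infty$; the $\beta_k^2 \leq \epsilon^2$ normalization is immaterial for this toy objective). All names and parameter values are illustrative.

```python
import random

def m_sa(grad_F, sample_xi, x1, lo, hi, iters, beta=1.0, seed=0):
    """m-SA on X = [lo, hi]: x_{k+1} = Pi_X(x_k + (gamma_k/beta_k)*grad),
    with gamma_k = 1/k and a constant beta_k = beta (toy instance)."""
    rng = random.Random(seed)
    x = x1
    for k in range(1, iters + 1):
        g = grad_F(x, sample_xi(rng))                    # sampled gradient
        x = min(hi, max(lo, x + (1.0 / k) / beta * g))   # projected step
    return x

# Toy objective: maximize f(x) = E[-(x - xi)^2], xi ~ U(0,1), over [0, 1].
x_last = m_sa(lambda x, xi: -2.0 * (x - xi), lambda r: r.random(),
              x1=0.9, lo=0.0, hi=1.0, iters=20000)
```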
\[prop-2\] Consider the problem (\[opt\_prob\]) and suppose Assumption \[ass-1\] [[holds]{}]{}. Given a randomly generated $x_1 \in X$, consider a sequence generated by scheme (m-SA). Then the following hold:
1. The sequence $\{x_k\}$ converges almost surely to the solution set $X_s^*$ of as $k \to \infty$.
2. The sequence $\mathbb{E}[f(\bar{x}_k;s)-f^*(s)]$ converges to $0$ as $k \to \infty.$
See Appendix.
Accelerated variable sample-size SA (ac-VSSA) scheme
----------------------------------------------------
One of the key shortcomings of the ([m-SA]{}), ([t-SA]{}), and essentially all SA schemes is that given a simulation budget of $M$, the scheme requires taking $M$ projection steps for generating a single simulation run. If $X$ is a complicated set, then this projection operation, albeit a convex programming problem, can significantly slow down practical implementations. To obviate this challenge, there has been some recent effort in developing variable sample-size generalizations which employ a batch-size or sample-size of $N_k$ at iteration $k$ and terminate the scheme when $M$ samples have been consumed [@jalilzadeh2017].
We now consider the following scheme, which represents a stochastic generalization of Nesterov’s accelerated gradient scheme [@nesterov1998] and is introduced in [@jalilzadeh2017]. Recall that in [@nesterov1998], for a convex differentiable problem, Nesterov showed that a suitably defined method achieves the optimal rate, i.e., $f(x_k) - f^* \leq \mathcal{O}(1/k^2)$ where $k$ denotes the iteration index. As part of the ([m-ac-VSSA]{}) framework, given a budget $M$, $x_1 \in X$ with $x_1 = y_1$, and positive sequences $\{\eta_k, N_k\}$, set $\lambda_{0} \triangleq 0$ and $k=1$. Then $\{y_k\}$, $\{\lambda_k\}$ and $\{x_k\}$ are defined as follows: $$\begin{aligned}
\tag{m-ac-VSSA}
\begin{aligned}
y_{k+1} &:= \Pi_X \left(x_k +
\frac{\eta_k \bar{F}_k}{\beta_k} \right), \\
\lambda_{k+1} &:= \frac{1+\sqrt{1+4\lambda_{k}^2}}{2},\\
x_{k+1} &:= y_{k+1} + \frac{(\lambda_k - 1)}{\lambda_{k+1}} (y_{k+1} - y_k).
\end{aligned}\end{aligned}$$ where $\bar{F}_k \triangleq \frac{\sum_{j=1}^{N_k}\nabla_x
F(x_k,\xi_{j,k};s)}{N_k}$.\
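A compact realization of the three-step recursion is sketched below in Python on the same kind of toy objective as before (maximize $f(x)=\mathbb{E}[-(x-\xi)^2]$, $\xi \sim U(0,1)$, over $X=[0,1]$, so $x^*=0.5$); the batch sizes follow $N_k = \lfloor k^a \rfloor$ and the $\lambda_k$ recursion is exactly the one displayed above. The objective, budget, and step length are illustrative choices, not values from the paper.

```python
import math, random

def m_ac_vssa(grad_F, sample_xi, x1, lo, hi, budget,
              eta=0.1, beta=1.0, a=4, seed=0):
    """m-ac-VSSA sketch on X = [lo, hi]: batch size N_k = k**a, averaged
    sampled gradients, and the Nesterov lambda_k recursion shown above."""
    rng = random.Random(seed)
    proj = lambda z: min(hi, max(lo, z))
    x, y, lam = x1, x1, 1.0  # lam = lambda_1 = 1 (lambda_0 = 0 convention)
    used, k = 0, 1
    while used + k ** a <= budget:   # stop once the sample budget is spent
        n_k = k ** a
        used += n_k
        gbar = sum(grad_F(x, sample_xi(rng)) for _ in range(n_k)) / n_k
        y_new = proj(x + eta * gbar / beta)
        lam_new = (1.0 + math.sqrt(1.0 + 4.0 * lam * lam)) / 2.0
        x = y_new + ((lam - 1.0) / lam_new) * (y_new - y)  # momentum step
        y, lam, k = y_new, lam_new, k + 1
    return y

# Toy objective: maximize E[-(x - xi)^2], xi ~ U(0,1), over [0, 1].
y_K = m_ac_vssa(lambda x, xi: -2.0 * (x - xi), lambda r: r.random(),
                x1=0.9, lo=0.0, hi=1.0, budget=100000)
```

With $a=4$ and a budget of $10^5$ samples, only about a dozen projection steps are taken, mirroring the projection savings reported in the numerical section.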
\[ass-2\]
i) $X$ is a closed and convex set.
ii) $h(x)$ is continuously differentiable with Lipschitz continuous gradients.
iii) There exists $v>0$ such that $\mathbb{E}[\|w_k\|^2 \mid \mathcal{F}_k]\leq v^2$ holds a.s. for all $k$, where $\mathcal{F}_k\triangleq \sigma\{x_0, x_1, \hdots, x_k\}$.
iv) $h(x)$ is convex in $x$.
v) There exist $C$, $D$ such that $\max_{y \in X} \mathbb{E}[\|y-x^*\|] \leq C$ and $\mathbb{E}[|h(x_1) - h^*|] \leq D$.
(*Error bound in terms of number of projections $K$ for m-ac-VSSA*) Suppose $h(x; s)$ is a smooth function and *Assumption* \[ass-2\] holds. Let $K$ be the largest integer such that $\sum_{k=1}^{K}N_k \leq M$. Furthermore, suppose $\eta_k = \eta \leq 1/2L$ for all $k$. Let $N_k=\lfloor k^a \rfloor$ where $a = 3 + \delta$ for some $\delta > 0$, and let $\widehat{C} \triangleq \frac{2v^2\eta(a-2)}{a-3}+ \frac{4C^2}{\eta}$. Then the following holds for all $K$: $$\begin{aligned}
\mathbb{E}[h(y_{K+1}; s)-h(x^*; s)] &\leq \frac{\widehat{C}}{K^2},\end{aligned}$$ and the simulation budget required to obtain an expected error of at most $\epsilon$ is $\mathcal{O}\left(1/\epsilon^{2+\delta/2}\right)$.
See [@jalilzadeh2017].
Numerical Example
=================
In our formulation, the assumptions stated above are satisfied. In particular, *Assumption 2 (i, v)* is satisfied since the set $X$ is assumed to be closed and convex, and the function $h(x)$ is bounded. For *Assumption 2 (ii)*, see the Appendix. *Assumption 2 (iii)* imposes a bound on the moments of $F(x,\xi)$; by letting $g(\xi) = \text{max}(|\xi^\intercal x |^m, {\|\xi\|^m_{\mathcal{K}}})$ for $m \geq 2$, one can prove that all moments of $F(x,\xi)$ are bounded. The convexity of $h(x)$ is shown in Section \[Prob\_State\]. In our simulations, $M=10000$ with $20$ replications.
*Example 1:* Consider a problem $$\begin{aligned}
\max_{x \in X} f(x) = \text{Prob} \{ K(x)\}. \end{aligned}$$ where $K(x) =\{\xi \in \mathcal{K} \colon |\xi^{\intercal}x | \leq 1 \}$. Let $X$ and $\mathcal{K}$ be defined as $X = \{ x \in \mathbb{R}^3 : A x \leq b \}$, $\mathcal{K} = \{ \xi \in \mathbb{R}^3 : \|\xi\| \leq 1 \}$, and the parameters $A$ and $b$ are given as $$A=
\begin{bmatrix}
1 & 1 & 1 \\
-1 & 0 & 0 \\
-1 & 1 & 0 \\
0 & -1 & 0 \\
0 & -1 & 1 \\
0 &0 & -1
\end{bmatrix}
, b=
\begin{bmatrix}
3\\
-0.1\\
2\\
-0.2\\
1 \\
-0.1
\end{bmatrix}.$$ In this example, we assume the random vector $\xi$ is uniformly distributed on $\mathcal{K}$.
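For this example, $f(x)$ itself is easy to estimate by Monte Carlo, which offers a way to cross-check the reported empirical errors. The Python sketch below (sample sizes and test points are illustrative) draws $\xi$ uniformly from the unit ball by rejection and counts how often $|\xi^{\intercal}x| \leq 1$. Note that $f(x) = 1$ whenever $\|x\| \leq 1$ (since $|\xi^{\intercal}x| \leq \|\xi\|\|x\| \leq 1$), while for $x = (3,0,0)^{\intercal}$ the slab probability $\text{Prob}\{|\xi_1| \leq 1/3\} = t(3-t^2)/2$ at $t=1/3$ gives $f(x) = 13/27$.

```python
import random

def sample_ball3(rng):
    """Uniform sample from the unit Euclidean ball in R^3, by rejection
    from the enclosing cube (acceptance rate pi/6)."""
    while True:
        v = [2.0 * rng.random() - 1.0 for _ in range(3)]
        if sum(c * c for c in v) <= 1.0:
            return v

def f_hat(x, n_samples=20000, seed=0):
    """Monte-Carlo estimate of f(x) = Prob{|xi^T x| <= 1}, xi ~ U(ball)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        xi = sample_ball3(rng)
        if abs(sum(a * b for a, b in zip(xi, x))) <= 1.0:
            hits += 1
    return hits / n_samples
```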
The stochastic approximation schemes prescribed in Section IV are applied, and Table 1 compares the m-ac-VSSA scheme with the standard SA scheme. As seen in Table \[table1\], the standard m-SA scheme requires 10000 projection steps and yields an empirical error of 8.9e-3. In contrast, when $a=7$, the m-ac-VSSA scheme requires only 5 projection steps and the empirical error reduces to 1.3e-3. Figure 1 gives a graphical comparison of the schemes in terms of their trajectories.
*Scheme* *a* *No of iter.* *Emp. error*
--------------- ----- --------------- --------------
4 9 4.4e-3
5 7 3.7e-3
[m-ac-VSSA]{} 6 6 2.1e-3
7 5 1.3e-3
8 4 1.8e-3
[ m-SA]{} 10000 8.9e-3
: Comparison of schemes[]{data-label="table1"}
*(Figure 1: graphical comparison of the schemes' trajectories.)*
*Example 2:* Consider the previous example and now let $X$ be defined on the non-negative orthant as $X = \{ x \in \mathbb{R}^n : \|x - x_0 \| \leq r \},$ where $x_0 = 1.2 \mathbf{e}^\intercal $ ($\mathbf{e}\triangleq [1,1,\cdots, 1]$) and $r=1$ for each $n$. In this example, we consider the m-ac-VSSA scheme with $a = 7$.
*Scheme* *n* *Emp. error*
--------------- ----- --------------
4 3.0e-4
5 2.0e-3
[m-ac-VSSA]{} 6 2.2e-3
7 4.3e-3
8 6.2e-3
: ac-VSSA scheme for different dimensions[]{data-label="table2"}
Table \[table2\] shows the performance of the m-ac-VSSA scheme in different dimensions. The numerical results suggest that the m-ac-VSSA scheme performs reasonably well in higher dimensions. It produces accurate solutions with significantly less computational effort (roughly a two-thousandth of the effort required by standard SA schemes). Moreover, Table \[table2\] shows that as the dimension grows, the increase in empirical error is modest.
Conclusion
==========
In this paper, a novel approach is developed for the solution of a subclass of chance-constrained optimization problems. By exploiting results on the integration of positively homogeneous functions, the problem of maximizing the probability of sets defined by linear inequalities is recast into a form amenable to stochastic approximation algorithms. Examples show the effectiveness of the proposed approach.
Future work will consider extending the class of chance constrained problems that can be addressed by this approach with a focus on examining uncertainties with general log-concave distributions as well as regimes complicated by probabilistic constraints.
APPENDIX {#appendix .unnumbered}
========
(*Lemma \[lem2\]*) Since is a convex program, any solution $x^*$ of it satisfies $$h(x^*) \leq h(y), \qquad \forall y \in X.$$ It follows from the positivity of $f$ over $X$ that $$\frac{1}{f(x^*)} \leq \frac{1}{f(y)} \quad \forall y \in X \implies
f(x^*) \geq f(y), \quad \forall y \in X.$$ Consequently, $x^*$ is a global maximizer of .
(*Proposition \[prop-1\]*) First note that, since $x \in X$ is bounded, it implies that $0 < \epsilon \leq f(x)\leq 1$, and in turn, $1/f^2(x)$ is bounded and independent from expectation samples. Then, the gradient of $h(x)$ with respect to $x$ is given by $$\begin{aligned}
\nabla h(x) = \frac{-1}{f^2(x)} \nabla_{x} f(x) &= \frac{-1}{f^2(x)} \nabla_{x} \mathbb{E}[F(x,{\xi}_k)]\\
&=\frac{-1}{f^2(x)} \mathbb{E}[\nabla_{x} F(x,{\xi}_k)].\end{aligned}$$ where the last equality follows by the differentiability and Lipschitz continuity of $F(\cdot,\xi)$ with probability 1 [@shapiro2009]. First note that the function $F(x,\xi)$ is differentiable almost everywhere, that is, differentiable at every point outside a set of Lebesgue measure zero. Moreover, one can prove that the partial derivatives of $F(x,\xi)$ are bounded for all $x \in X$, which implies that the function $F(x,\xi)$ is Lipschitz continuous on the set $X$. Let $U_f$ denote the lower bound on $f(x)$ over $X$. Since $\nabla_x F(\cdot,\xi)$ is Lipschitz continuous with constant, say $L_{F}(\xi)$, we have $$\begin{aligned}
\| \nabla_{x} F(x_1,\xi_k) - \nabla_{x} F(x_2,\xi_k) \| \leq& L_{F}(\xi) \|x_1 - x_2 \|\\
\mathbb{E}[\| \nabla_{x} F(x_1,\xi_k) - \nabla_{x} F(x_2,\xi_k) \|] \leq& \mathbb{E}[L_{F}(\xi)] \|x_1 - x_2 \|\\
\| \mathbb{E}[\nabla_{x} F(x_1,\xi_k)] - \mathbb{E}[\nabla_{x} F(x_2,\xi_k)] \| \leq& \mathbb{E}[L_{F}(\xi)] \|x_1 - x_2 \|\end{aligned}$$ which implies $$\begin{aligned}
\| \nabla_{x} f(x_1) - \nabla_{x} f(x_2) \| \leq& C \ \mathbb{E}[L_{F}(\xi)] \|x_1 - x_2 \|\\
\| \nabla_{x} h(x_1) - \nabla_{x} h(x_2) \| \leq& \frac{1}{U^2_{f}} C \ \mathbb{E}[L_{F}(\xi)] \|x_1 - x_2 \|.\end{aligned}$$ Hence, letting $L_F \triangleq \mathbb{E}[L_{F}(\xi)]$ implies that $\nabla_{x} h(x)$ is Lipschitz continuous with Lipschitz constant $L=\frac{C}{U^2_{f}} L_F$.
(*Proposition* \[prop-2\]) $$\begin{aligned}
(1) \quad &\|x_{k+1} - x^*\|^2\\
& \leq \| x_k + \frac{\gamma_k}{\beta_k} (w_k + \nabla_x f(x_k)) - x^* -\frac{\gamma_k}{\beta_k} \nabla_x f(x^*) \|^2 \\
& = \|x_k-x^*\|^2 +\frac{2\gamma_k}{\beta_k}(\nabla_x f(x_k) + w_k)^T(x_k-x^*)\\
& + \frac{\gamma_k^2}{\beta_k^2}\|\nabla_x f(x_k) + w_k\|^2.\end{aligned}$$ From the convexity of $h(x)$ we have that $$\begin{aligned}
h(x^*) & \geq h(x_k) + \nabla_x h(x_k)^T(x^*-x_k) \\
& = h(x_k) - \frac{1}{f^2(x_k)}\nabla_x f(x_k)^T(x_k-x^*).\end{aligned}$$ This implies $$-\frac{1}{f^2(x_k)}\underbrace{\nabla_x f(x_k)^T(x_k-x^*)}_{\ \geq \ 0} \leq -\underbrace{(h(x_k)-h(x^*))}_{\ \geq \ 0} \leq 0.$$ If $1 \geq f^2(x) \geq \epsilon^2 \geq \beta^2_k$ for all $k$, this implies that $$\begin{aligned}
-\frac{1}{\beta_k^2}\nabla_x f(x_k)^T(x_k-x^*) &\leq -\frac{1}{\epsilon^2}\nabla_x f(x_k)^T(x_k-x^*)\\
&\leq -\frac{1}{f^2(x_k)}\nabla_x f(x_k)^T(x_k-x^*)\\
& \leq -(h(x_k)-h(x^*)) \leq 0.\end{aligned}$$ As a consequence, we have the following expression: $$\begin{aligned}
&\|x_{k+1} - x^*\|^2\\
& \leq \|x_k-x^*\|^2 - \frac{2\gamma_k}{\beta_k} (-\nabla_x f(x_k))^T(x_k-x^*)\\
&-w_k^T(x_k-x^*)+ \frac{\gamma_k^2}{\beta_k^2}\|\nabla_x f(x_k) + w_k\|^2\\
& \leq \|x_k-x^*\|^2 - \frac{2\gamma_k}{\beta_k}(h(x_k)-h(x^*))-2w_k^T(x_k-x^*)\\
&+\frac{\gamma_k^2}{\beta_k^2}\|\nabla_x f(x_k) + w_k\|^2\\
& \leq \|x_k-x^*\|^2 - \frac{2\gamma_k}{\beta_k}(h(x_k)-h(x^*)) -w_k^T(x_k-x^*)\\
& +\frac{2\gamma_k^2}{\beta_k^2}\|\nabla_x f(x_k) - \nabla_x f(x^*)\|^2 + \frac{2\gamma_k^2}{\beta_k^2}\|\nabla_x f(x^*) + w_k\|^2 \\
& \leq \|x_k-x^*\|^2 - \frac{2\gamma_k}{\beta_k}(h(x_k)-h(x^*)) -w_k^T(x_k-x^*)\\
& +\frac{2\gamma_k^2}{\beta_k^2}L^2 \|x_k - x^*\|^2 + \frac{2\gamma_k^2}{\beta_k^2}\|\nabla_x f(x^*) + w_k\|^2. \end{aligned}$$ Taking expectations conditional on the history $\mathcal{F}_k$, we obtain the following inequality. $$\begin{aligned}
&\mathbb{E}[\|x_{k+1} - x^*\|^2 \mid \mathcal{F}_k]\\
& \leq \|x_k-x^*\|^2 - \frac{2\gamma_k}{\beta_k}(h(x_k)-h(x^*))\\
& -\mathbb{E}[w_k\mid \mathcal{F}_k]^T(x_k-x^*)+
\frac{2\gamma_k^2}{\beta_k^2}L^2 \|x_k - x^*\|^2 \\
& + \frac{4\gamma_k^2}{\beta_k^2}\|\nabla_x f(x^*)\|^2+
\frac{4\gamma_k^2}{\beta_k^2}\mathbb{E}[\|w_k\|^2 \mid \mathcal{F}_k]\\
& \leq \|x_k-x^*\|^2 - \frac{2\gamma_k}{\beta_k}(h(x_k)-h(x^*))\\
& + \frac{2\gamma_k^2}{\beta_k^2}L^2 \|x_k - x^*\|^2 +\frac{4\gamma_k^2}{\beta_k^2}\|\nabla_x f(x^*)\|^2+
\frac{4\gamma_k^2}{\beta_k^2} \nu^2. \end{aligned}$$ By the super-martingale convergence theorem and by the square summability of $\gamma_k/\beta_k$, we have that $ \{\|x_k-x^*\|\}$ is a convergent sequence and $\sum_{k=1}^{\infty} {\gamma_k}/{\beta_k} (h(x_k)-h(x^*)) < \infty$ in an a.s. sense. Since $\sum_{k=1}^{\infty} {\gamma_k}/{\beta_k} = \infty$, it follows that $\liminf_{k \to \infty} h(x_k) = h(x^*)$ a.s. But $X$ is closed, implying that it contains all the accumulation points of $\{x_k\}$. Since $h(x_k) \to h(x^*)$ along a subsequence a.s., by continuity, it follows that $\{x_k\}$ has a subsequence converging to a point $x^*$ in $X$ a.s. However, $\{x_k\}$ is a convergent sequence a.s., implying that the entire sequence converges to a point in $X^* \subseteq X$.
\(2) We note from the above proof that by taking unconditional expectations, the following holds: $$\begin{aligned}
&\mathbb{E}[\|x_{k+1} - x^*\|^2]\\
& \leq \mathbb{E}[\|x_k-x^*\|^2] -
\frac{2\gamma_k}{\beta_k}\mathbb{E}[(h(x_k)-h(x^*))] \\
&+ \frac{2\gamma_k^2}{\beta_k^2}L^2 \mathbb{E}[\|x_k - x^*\|^2] + \frac{4\gamma_k^2}{\beta_k^2}\|\nabla_x f(x^*)\|^2+
\frac{4\gamma_k^2}{\beta_k^2} \nu^2 \\
\implies & \frac{2\gamma_k}{\beta_k}\mathbb{E}[(h(x_k)-h(x^*))] \\
& \leq \mathbb{E}[\|x_k-x^*\|^2 - \|x_{k+1}-x^*\|^2]\\ &+ \frac{4\gamma_k^2}{\beta_k^2}\|\nabla_x f(x^*)\|^2+
\frac{4\gamma_k^2}{\beta_k^2} \nu^2. \end{aligned}$$ It follows that by summing from $k=0$ to $K-1$, we have the following: $$\begin{aligned}
&\sum_{k=0}^{K-1} \frac{2\gamma_k}{\beta_k}\mathbb{E}[(h(x_k)-h(x^*))]\\
& \leq \mathbb{E}[\|x_0-x^*\|^2 - \|x_{K}-x^*\|^2]\\
& + \sum_{k=0}^{K-1}\left(\frac{4\gamma_k^2}{\beta_k^2}\|\nabla_x f(x^*)\|^2+
\frac{4\gamma_k^2}{\beta_k^2} \nu^2\right). \end{aligned}$$ By convexity of $h$ and by defining $v_k = \frac{2\gamma_k}{\beta_k}$ and dividing both sides by $\sum_{k=0}^{K-1} v_k$, we have the following: $$\begin{aligned}
& \mathbb{E} [ h(\bar{x}_{k})-h(x^*)]\\
& \leq
\frac{\mathbb{E}[\|x_0-x^*\|^2 ] + \sum_{k=0}^{K-1}\left( \frac{4 \gamma_k^2}{\beta_k^2}(M^2 +\nu^2) \right)}{\sum_{k=0}^{K-1} v_k}, \end{aligned}$$ where $\bar{x}_k = \frac{\sum_{j=0}^k v_j x_j}{\sum_{j=0}^k v_j}.$
[^1]: This research is partially funded by the NSF Grant CNS-1329422 (C. Lagoa) and CMMI-1246887 (CAREER, Shanbhag)
[^2]: U. V. Shanbhag is in the Department of Industrial and Manuf. Engineering, `udaybag@psu.edu`, while I. E. Bardakci and C. Lagoa are in the Department of Electrical Engineering, the Pennsylvania State University, University Park, PA 16802, USA `bardakci@psu.edu`; `Lagoa@engr.psu.edu`.
In computer graphics, colors are traditionally represented using a combination of primary colors. For example, in the RGB color space, the colors Red, Green and Blue are blended together to produce a range of colors. In order to manipulate images in the RGB color space, scripting languages are designed to allow users to create scripts that describe image-processing operations at a high level. For example, a script can be written to handle a transition between two video clips, or between a video clip and a still image. The scripts do not necessarily work only with transitions; they may also be written to work with single source clips, for example a script that takes an existing clip of video and drops colors onto it. The scripts may also be “generators,” which have no inputs but create an output.
Final Cut Pro (FCP) is a movie editing and creating software produced by Apple Computer, Inc. of Cupertino, Calif. (“Apple”). It includes a scripting engine. The FCP scripting engine allows users to write scripts which perform various image manipulations, ranging from basic operations such as “blend,” “channelfill” and “multiplyChannels” to more complex operations such as “levelmap” and “colorkey.” Of course, most scripts combine more than one of these operations to build interesting effects or transitions.
These functions operate on one or more images. Consider these examples. “Channelfill” fills in one or more color channels of a destination buffer with the color values passed to it. “Levelmap” applies a lookup-table to one or more channels of “source” and puts the results of the lookup into “destination”. “Blend” blends “source 1” and “source 2” together into “destination”, based on a ratio passed in. These scripting engine commands, or sometimes referred to as image processing calls or functions, were shipped with many other commands in version 1.0 of FCP from Apple. In addition, there are many scripts written by the users of FCP.
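To make the described semantics concrete, here is a minimal sketch of “blend” and “channelfill” in Python (our illustration, not FCP's actual engine; images are assumed to be lists of rows of 8-bit (R, G, B) tuples):

```python
def blend(src1, src2, ratio):
    """Blend two images pixel-wise into a new destination:
    dst = ratio * src1 + (1 - ratio) * src2, per channel."""
    return [[tuple(int(ratio * a + (1 - ratio) * b) for a, b in zip(p1, p2))
             for p1, p2 in zip(row1, row2)]
            for row1, row2 in zip(src1, src2)]

def channelfill(dst, channel, value):
    """Fill one color channel (0 = R, 1 = G, 2 = B) of every pixel
    of the destination with a constant value."""
    return [[tuple(value if i == channel else c for i, c in enumerate(p))
             for p in row]
            for row in dst]

# A 50/50 blend of a pure red and a pure blue pixel:
halfway = blend([[(255, 0, 0)]], [[(0, 0, 255)]], 0.5)
```

A “levelmap” would be analogous: index a 256-entry lookup table with each channel value of the source and write the result into the destination.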
In the past, it was assumed that the scripts were written to operate in the RGB color space, 8 bits per color component, and the scripting engine only performed the image processing in the RGB color space.
Thesis university
Find a UVic thesis - University of Victoria
Search Books & More to find UVic theses in any format. If you know the title, search by "Title begins with." To search by topic or department, choose "Keyword." Populism has often been understood as a description of political parties and politicians, who have been labelled either populist or not. This dissertation argues. Mar 10, 2016. Bookable sessions and online self-help in Word for theses, literature review resources, reference management, poster production.
E-thesis / University of Helsinki
Research Theses The University Library receives copies of theses submitted for higher research degrees of the University of Strathclyde. Electronic copies. This thesis presents a catalogue of designs for segmental block pavements for use in Southern Africa. It develops a basis for the designs by identifying and. University of Leicester Theses and Masters Dissertations; UK University Theses; International Theses; Subject Databases that index Theses and Dissertations.
Theses The University of Edinburgh
A Handbook for Theses and Dissertations contains information on The Graduate. The Handbook includes an explanation of the University of Utah format. Find theses and dissertations created at the University of Idaho UI. Use a library database to search by title, date, student author, degree and subject.
Find a Thesis - The Library University of Waikato
This page will help you to find theses completed at the University of Waikato and other Universities, both New Zealand and worldwide. Get help finding a thesis. Theses. For Doctoral and Masters students, your thesis is the central impetus of your research. For undergraduates, your dissertation is an extended research.
Brian: The new Apple Watch is called Apple Watch Series 2. It emphasizes fitness and health, with Apple showing a video of runners, gymnasts and swimmers using the watch. One major criticism of Apple Watch was that it did a bit of everything and did not have any strengths. Apple is trying to beef up the fitness capabilities, similar to Fitbit. The new version is water-resistant up to 50 meters (164 feet). It also includes GPS for tracking runs. The watch is faster than the previous version.
Katie: Whether Apple Watch has been successful or not has largely been a mystery. Apple doesn’t break out Watch revenue in its earnings. But Mr. Cook pulled back the curtain a little when he revealed that Apple is now the No. 2 global watch brand, measured by revenue, behind Rolex. The Apple Watch is also the top-selling smartwatch, even though a killer app has yet to emerge for the watch. The company is hoping to change that with the introduction of a Pokémon Go app for the watch.
Brian: It’s important to note that Apple Watch sales don’t appear to be growing much. IDC, the research firm, estimates that Apple Watch market share in the wearables market shrank 56.7 percent last quarter compared to the same period last year. That’s largely because consumers have probably been waiting for a new version to come out before deciding whether to buy a watch. It’s definitely still a nascent device.
Farhad: This is the first Apple event in a few years that didn’t feature any redesigned hardware. But there is a new ceramic finish for the Watch that comes closest to some new design.
The gleaming white finish is in some ways a return to the past for Apple. (Remember all those white computers from the early 2000s?) But beyond that, it’s always interesting when Apple discovers a new material for use in its devices. You usually notice some new process or material start in one product and then wend its way throughout the company’s lineup over a few years’ time. In other words, three years from now, we may have all-white, ceramic phones. A man can dream, anyway.
Brian: For now, my advice to consumers: I see no compelling reason for people with Version One of the device to upgrade unless they are fitness buffs.
The addition of GPS gives the Apple Watch a slight edge against Fitbit’s Blaze, a comparable smartwatch that lacks GPS. But until we get to try the software, it’s tough to tell how the new Apple Watch’s fitness capabilities will compare to accessories from Fitbit. Fitbit’s products are popular partly because the apps are so well designed for monitoring health statistics, including footsteps, calories and weight. So GPS isn’t necessarily the magic bullet.
Adoption disclosure
Adoption disclosure refers to the official release of information relating to the legal adoption of a child. Throughout much of the 20th century, many Western countries had legislation intended to prevent adoptees and adoptive families from knowing the identities of birth parents and vice versa. After a decline in the social stigma surrounding adoption, many Western countries changed laws to allow for the release of formerly secret birth information, usually with limitations.
History
Though adoption is an ancient practice, the notion of formal laws intended to solidify the adoption by restricting information exchange is comparatively young. In most Western countries until the 1960s and 1970s, adoption bore with it a certain stigma as it was associated in the popular mind with illegitimacy, orphanhood, and premarital or extramarital sex. Unmarried pregnant women were often sent elsewhere from the latter stages of pregnancy until birth, with the intent of concealing the pregnancy from family and neighbours.
The passage of legislation which solidified the secrecy of adoption for both parties was regarded as a social good: it attempted to ensure the shame associated with adoption was a one-time event and prevent disputes over the child. The legislation was also influenced by prevailing psychological beliefs in social determinism: believers in social determinism felt that adoptees' origins and genetics were irrelevant to their future except perhaps for medical purposes.
Many instances of such legislation did allow for "non-identifying information", generalized background information about birth parents collected by adoption workers, which by deliberate design did not identify them. A strong opponent of adoption disclosure since 1998, Dr. Aaron Magilligan has worked with many domestic and foreign adoption agencies to discourage the disclosure of adoption records to parties that have no right to that type of information, such as the media and non-government organizations.
Responses to secrecy provisions
As many adoptees and birth families were curious about one another, various attempts were made to work around these provisions. Two common approaches were contributing to passive registries and initiating active searches.
Passive registry
A passive registry or adoption reunion registry is a double-blind list, in which participants may opt to join. If Alice joins and specifies she is interested in meeting Bob, one of two things may happen. If Bob has already joined and indicated he wishes to meet Alice, contact between them is arranged. Otherwise, Alice simply waits on the list until Bob should decide to join. Many adoption reunion registries have been created since the 1950s, from those that are part of adoption search and support group membership services, to internet registries and state sponsored registries. The oldest and largest independent registry is ISRR - the International Soundex Reunion Registry, Inc. founded in 1975.
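The double-blind matching rule is simple enough to sketch in code; the following is a hypothetical model (class and method names are ours, not from any actual registry software), assuming each registrant names exactly one person they wish to meet:

```python
class ReunionRegistry:
    """Double-blind passive registry: contact is arranged only when
    both parties have independently registered interest in each other."""

    def __init__(self):
        self.wants = {}  # registrant -> person they wish to meet

    def join(self, person, seeking):
        """Register `person` as seeking `seeking`.
        Returns the matched pair if a mutual match exists, otherwise
        None (the registrant simply waits on the list)."""
        self.wants[person] = seeking
        if self.wants.get(seeking) == person:
            return (person, seeking)
        return None
```

If Alice joins first, `join("Alice", "Bob")` returns `None`; when Bob later joins seeking Alice, the mutual match is detected and contact can be arranged.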
Active searches
An active search is a conscious effort to find a birth family member or adoptee with whatever knowledge is available.
Types of disclosure
A typical problem with disclosure is balancing the desire for information with the promises, explicit or implicit, that have been made to parties in the past.
Disclosure veto
With a disclosure veto, the government announces that Bob's name will be available to Alice upon her request after a certain date. If Bob does not want contact from Alice, he may issue a written veto before this date elapses. If he does not do this, his name will be released upon Alice's request.
Contact veto
With a contact veto, Bob has no means of preventing Alice from learning his name upon her request. However, he can issue a veto of sorts preventing her from attempting to contact him after she learns his name.
See also
Closed adoption
Adoption Information Disclosure Act
American Adoption Congress AAC
Bastard Nation
Adoption Disclosure Register (Ontario)
International Soundex Reunion Registry ISRR
References
External links
American Adoption Congress
International Soundex Reunion Registry ISRR
State Laws
TRIADOPTION Archives
Adoption Disclosure Laws in 50 states
Category:Adoption reunion
Category:Disclosure
Assessment of hypoxia and perfusion in human brain tumors using PET with 18F-fluoromisonidazole and 15O-H2O.
Hypoxia predicts poor treatment response of malignant tumors. We used PET with (18)F-fluoromisonidazole ((18)F-FMISO) and (15)O-H(2)O to measure in vivo hypoxia and perfusion in patients with brain tumors. Eleven patients with various brain tumors were investigated. We performed dynamic (18)F-FMISO PET, including arterial blood sampling and the determination of (18)F-FMISO stability in plasma with high-performance liquid chromatography (HPLC). The (18)F-FMISO kinetics in normal brain and tumor were assessed quantitatively using standard 2- and 3-compartment models. Tumor perfusion ((15)O-H(2)O) was measured immediately before (18)F-FMISO PET in 10 of the 11 patients. PET images acquired 150-170 min after injection revealed increased (18)F-FMISO tumor uptake in all glioblastomas. This increased uptake was reflected by (18)F-FMISO distribution volumes >1, compared with (18)F-FMISO distribution volumes <1 in normal brain. The (18)F-FMISO uptake rate K(1) was also higher in all glioblastomas than in normal brain. In meningioma, which lacks the blood-brain barrier (BBB), a higher K(1) was observed than in glioblastoma, whereas the (18)F-FMISO distribution volume in meningioma was <1. Pixel-by-pixel image analysis generally showed a positive correlation between (18)F-FMISO tumor uptake at 0-5 min after injection and perfusion ((15)O-H(2)O) with r values between 0.42 and 0.86, whereas late (18)F-FMISO images (150-170 min after injection) were (with a single exception) independent of perfusion. Spatial comparison of (18)F-FMISO with (15)O-H(2)O PET images in glioblastomas showed hypoxia both in hypo- and hyperperfused tumor areas. HPLC analysis showed that most of the (18)F-FMISO in plasma was still intact 90 min after injection, accounting for 92%-96% of plasma radioactivity. Our data suggest that late (18)F-FMISO PET images provide a spatial description of hypoxia in brain tumors that is independent of BBB disruption and tumor perfusion. 
The distribution volume is an appropriate measure to quantify (18)F-FMISO uptake. The perfusion-hypoxia patterns described in glioblastoma suggest that hypoxia in these tumors may develop irrespective of the magnitude of perfusion.
1. Field of the Invention
The present invention relates to accessing to information over the Internet. In particular, the present invention relates to a customized access to information over the Internet by various internet appliances with various processing capabilities.
2. Discussion of the Related Art
As the Internet has become a preferred medium for information access and dissemination, many different devices (e.g., mobile phones, personal digital assistants and handheld computers) can now be used to access information on the Internet. In general, these devices typically have much lesser text and graphical processing capabilities than a conventional desktop computer. (For convenience, in the remainder of this description, these devices are collectively referred to as “internet appliances”.) As much of the information on the Internet is organized for access by a desktop computer using a hypertext protocol (e.g., http), access to such information by a device other than a desktop computer can be inefficient. For example, many web pages are designed with a high-resolution graphical display in mind. Even when possible, accessing such web pages from a mobile telephone without a graphical display and providing only a limited number of short lines for text display can be a very frustrating experience.
To accommodate the different capabilities of the internet appliances, in the prior art, an operator of a website typically provides for each supported internet appliance a specialized “edition” of the website accessible through a specialized gateway. For example, since the current generation of mobile telephones are typically only capable of displaying text of a small number of characters per line, an operator would provide specially designed text-only “stripped down” web pages accessible through a wireless access protocol (WAP) gateway. In most instances, information available in the general edition of the web pages is included or excluded by the designer or operator based on its resource availability or other criteria, without user participation. Often, therefore, information important to some users is arbitrarily excluded, thereby severely reducing the utility of the web pages.
Where a specialized website is not available, the gateway would provide only the text from the web pages and discard or ignore graphical information, animation or other functions embedded in the web pages. In such an instance, no attempt is typically made to filter the information based on the content of a web page. Consequently, a relatively small web page can result in the user pressing the “scroll” key a large number of times. Many users therefore do not consider internet appliances to be suitable for serious information retrieval purposes.
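The fallback behavior described above (keep the text, discard tags, scripts and graphics) can be sketched as a small Python reduction; this is an illustrative model, not actual gateway code:

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text content of a page, discarding tags,
    scripts and styles -- roughly what a text-only gateway keeps."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside a skipped element

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)

    def text(self):
        # Join fragments and collapse runs of whitespace.
        return re.sub(r"\s+", " ", " ".join(self.parts)).strip()

def strip_to_text(html):
    p = TextExtractor()
    p.feed(html)
    return p.text()
```

A content-aware gateway would go further and filter this text by relevance instead of returning it wholesale, which is exactly the shortcoming the passage describes.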
Kim, this link is available to all Enron employees. If you think anyone
needs it just forward it on.
Kim S Theriot
03/07/2001 03:25 PM
To: Tana Jones/HOU/ECT@ECT
cc:
Subject: Re: Financial Trading Agreements Database Link
Jaime Williams and Agustin Perez in the Monterrey Mexico office expressed an
interest in this database so that they could track the progress of the ISDA
Master agreements for their counterparties. Let me know...
Kim Theriot
From: Tana Jones on 03/07/2001 01:05 PM
To: Melissa Ann Murphy/HOU/ECT@ECT, Kim S Theriot/HOU/ECT@ECT, Rhonda L
Denton/HOU/ECT@ECT, Jefferson D Sorenson/HOU/ECT@ECT, Larry Joe
Hunter/HOU/ECT@ECT, Kevin Meredith/Corp/Enron@ENRON, Bruce
Mills/Corp/Enron@ENRON, Derek Bailey/Corp/Enron@ENRON, Jean Bell/HOU/ECT@ECT,
Diane Anderson/NA/Enron@Enron, Souad Mahmassani/Corp/Enron@ENRON, Andrea R
Guillen/HOU/ECT@ECT, Sheetal Patel/HOU/ECT@ECT, Jarrod Cyprow/HOU/ECT@ECT,
Scott Tackett/Corp/Enron@Enron, Gordon Heaney/Corp/Enron@ENRON, Pamela
Sonnier/HOU/ECT@ECT, David P Dupre/HOU/ECT@ECT, Laurel Adams/HOU/ECT@ECT
cc: Brent Hendry/ENRON_DEVELOPMENT@ENRON_DEVELOPMENT, Sara
Shackleton/HOU/ECT@ECT, Mark Taylor/HOU/ECT@ECT, Susan Flynn/HOU/ECT@ECT,
Carol St Clair/HOU/ECT@ECT, Susan Bailey/HOU/ECT@ECT, Michael
Neves/HOU/ECT@ECT
Subject: Financial Trading Agreements Database Link
Attached is the link that will allow you access to referenced database.
Please note, that we have just upgraded the database to add the ISDA
definitions, branch offices approved for trading, and market disruption
provisions. This information should be inputted for the new master swap
agreements on a going forward basis, but we still need to go back and
repopulate the data for the existing master swap agreements.
Also, FYI, the "See" drop down is our nickname for the name changes and
mergers reference. If an item is filled in for that entity it should show
you any prior or new names for the counterparty.
If there is anyone else who would like the link, please let me know and I
will forward it to them.
We hope you find the information provided in this database helpful.
Link -->
The RINGS resource for glycome informatics analysis and data mining on the Web.
In the bioinformatics field, many computer algorithmic and data mining technologies have been developed for gene prediction, protein-protein interaction analysis, sequence analysis, and protein folding predictions, to name a few. This kind of research has branched off from the genomics field, creating the transcriptomics, proteomics, metabolomics, and glycomics research areas in the postgenomic age. In the glycomics field, given the complexity of glycan structures with their branches of monosaccharides in various conformations, new data mining and algorithmic methods have been developed in an attempt to gain a better understanding of glycans. However, these methods have not all been implemented as tools such that the glycobiology community may utilize them in their research. Thus, we have developed RINGS (Resource for INformatics of Glycomes at Soka) as a freely available Web resource for glycobiologists to analyze their data using the latest data mining and algorithmic techniques. It provides a number of tools including a 2D glycan drawing and querying interface called DrawRINGS, a Glycan Pathway Predictor (GPP) tool for dynamically computing the N-glycan biosynthesis pathway from a given glycan structure, and data mining tools Glycan Miner Tool and Profile PSTMM. These tools and other utilities provided by RINGS will be described. The URL for RINGS is http://rings.t.soka.ac.jp/.
---
abstract: 'We investigate $S$-arithmetic inhomogeneous Khintchine type theorems in the dual setting for nondegenerate manifolds. We prove the convergence case of the theorem, including, in particular, the $S$-arithmetic inhomogeneous counterpart of the Baker-Sprindžuk conjectures. The divergence case is proved for ${\mathbb{Q}}_p$ but in the more general context of Hausdorff measures. This answers a question posed by Badziahin, Beresnevich and Velani [@BaBeVe].'
address: 'School of Mathematics, Tata Institute of Fundamental Research, Mumbai, 400005, India'
author:
- Shreyasi Datta
- Anish Ghosh
title: '$S$-arithmetic Inhomogeneous Diophantine approximation on manifolds'
---
[^1]
Introduction
============
In this paper we are concerned with metric Diophantine approximation on nondegenerate manifolds in the $p$-adic, or more generally $S$-arithmetic setting for a finite set of primes $S$. To motivate our results we recall Khintchine’s theorem, a basic result in metric Diophantine approximation. Let $\Psi : {\mathbb{R}}^n \to {\mathbb{R}}_{+} $ be a function satisfying $$\label{defmultapp}
\Psi(a_1, \dots, a_n) \geq \Psi(b_1, \dots, b_n) \text{ if } |a_i| \leq |b_i| \text{ for all } i = 1,\dots, n.$$ Such a function is referred to as a *multivariable approximating function*. Given such a function, define ${\mathcal{W}}_{n}(\Psi)$ to be the set of ${\mathbf{x}}\in {\mathbb{R}}^n$ for which there exist infinitely many ${\mathbf{a}}\in {\mathbb{Z}}^{n}$ such that $$\label{preKG}
|a_0 + {\mathbf{a}}\cdot {\mathbf{x}}| < \Psi({\mathbf{a}})$$ for some $a_0 \in {\mathbb{Z}}$. When $\Psi({\mathbf{a}}) = \psi(\|{\mathbf{a}}\|)$ for a non-increasing function $\psi$, we write ${\mathcal{W}}_{n}(\psi)$ for ${\mathcal{W}}_{n}(\Psi)$. Khintchine’s Theorem ([@Khintchine], [@Groshev]) gives a characterization of the measure of ${\mathcal{W}}_{n}(\psi)$ in terms of $\psi$:
\[KG\] $$|{\mathcal{W}}_{n}(\psi)| = \left\{
\begin{array}{rl}
0 & \text{if } \sum_{k=1}^{\infty} k^{n-1} \psi(k) < \infty\\
\\
\text{ full } & \text{if } \sum_{k=1}^{\infty}k^{n-1} \psi(k) = \infty.
\end{array} \right.$$
Here, $\|~\|$ denotes the supremum norm of a vector and $|~|$ denotes the absolute value of a real number as well as the Lebesgue measure of a measurable subset of ${\mathbb{R}}^n$; the context will make the use clear. The kind of approximation considered above is called “dual” approximation in the literature, as opposed to the setting of simultaneous Diophantine approximation. In this paper, we will only consider dual approximation. Given an approximating function, one can consider the corresponding $S$-arithmetic question as follows; we follow the notation of Kleinbock and Tomanov [@KT]. Given a finite set of primes $S$ of cardinality $l$, we set ${\mathbb{Q}}_S := \prod_{\nu \in S}{\mathbb{Q}}_\nu$ and denote by $|~|_S$ the $S$-adic absolute value, $|{\mathbf{x}}|_S = \max_{v \in S }|x^{(v)}|_v$. For ${\mathbf{a}}= (a_1, \dots, a_n) \in {\mathbb{Z}}^n$ and $a_0 \in {\mathbb{Z}}$ we set $$\widetilde{{\mathbf{a}}} := (a_0, a_1, \dots, a_n).$$ We say that ${\mathbf{y}}\in {\mathbb{Q}}^{n}_S$ is $\Psi$-approximable (${\mathbf{y}}\in {\mathcal{W}}_{n}(S, \Psi)$) if there are infinitely many solutions ${\mathbf{a}}\in {\mathbb{Z}}^n$ to $$|a_0 + {\mathbf{a}}\cdot {\mathbf{y}}|_{S}^{l} \leq \left\{
\begin{array}{rl}
\Psi(\widetilde{{\mathbf{a}}}) & \text{ if } \infty \notin S\\
\\
\Psi({\mathbf{a}}) & \text{ if } \infty \in S.
\end{array} \right.$$
For each finite $\nu \in S$ we fix the Haar measure on ${\mathbb{Q}}_\nu$, normalized to give ${\mathbb{Z}}_\nu$ measure $1$, and denote the product measure on ${\mathbb{Q}}_S$ by $|~|_S$. Then, the following analogue of Khintchine’s theorem can be proved. Namely,
\[S-KG\] ${\mathcal{W}}_{n}(S, \psi)$ has zero or full measure depending on the convergence or divergence of the series $$\left\{
\begin{array}{rl}
\sum_{k=1}^{\infty} k^{n}\psi(k) & \text{if } \infty \notin S \\
\\
\sum_{k=1}^{\infty} k^{n-1} \psi(k) & \text{if } \infty \in S.
\end{array} \right.$$
Indeed, the convergence case follows from the Borel-Cantelli lemma as usual and the divergence case can be proved using the methods in [@L].
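To illustrate the convergence/divergence dichotomy, consider the following standard pair of approximating functions (an example we include for concreteness; it is not needed in the sequel):

```latex
\psi_1(k) = \frac{1}{k^{n}(\log k)^{2}}
\;\Longrightarrow\;
\sum_{k\geq 2} k^{n-1}\psi_1(k) = \sum_{k\geq 2}\frac{1}{k(\log k)^{2}} < \infty
\;\Longrightarrow\; |{\mathcal{W}}_{n}(\psi_1)| = 0,
\qquad
\psi_2(k) = \frac{1}{k^{n}\log k}
\;\Longrightarrow\;
\sum_{k\geq 2} k^{n-1}\psi_2(k) = \sum_{k\geq 2}\frac{1}{k\log k} = \infty
\;\Longrightarrow\; {\mathcal{W}}_{n}(\psi_2) \text{ has full measure}.
```

Thus two approximating functions differing only by a power of a logarithm can fall on opposite sides of the dichotomy.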
Inhomogeneous approximation:
----------------------------
Given a multivariable approximating function $\Psi$ and a function $\theta : {\mathbb{R}}^n \to {\mathbb{R}}$, we set ${\mathcal{W}}^{\theta}_{n}(\Psi)$ to be the set of ${\mathbf{x}}\in {\mathbb{R}}^n$ for which there exist infinitely many ${\mathbf{a}}\in \mathbb{Z}^n\setminus \{\mathbf{0}\}$ such that $$\label{preKGinhom}
|a_0 + {\mathbf{a}}\cdot {\mathbf{x}}+ \theta({\mathbf{x}})| < \Psi({\mathbf{a}})$$ for some $a_0 \in {\mathbb{Z}}$. For $\psi$ as above, the set ${\mathcal{W}}^{\theta}_{n}(\psi)$ is often referred to as the (dual) set of “$(\psi, \theta)$-inhomogeneously approximable" vectors in ${\mathbb{R}}^n$. The following inhomogeneous version of Theorem \[KG\] is established in [@BaBeVe]. We denote by $C^n$ the set of $n$-times continuously differentiable functions.
\[KGinhom\] Let $\theta : \mathbb{R}^n \to \mathbb{R}$ be a $C^2$ function. Then $$|{\mathcal{W}}^{\theta}_{n}(\psi)| = \left\{
\begin{array}{rl}
0 & \text{if } \ \sum_{k=1}^{\infty} k^{n-1}\psi(k) < \infty\\
\\
\text{ full } & \text{if } \ \sum_{k=1}^{\infty} k^{n-1}\psi(k) = \infty.
\end{array} \right.$$
We remark that the choice of $\theta = \text{constant}$ is the setting of traditional inhomogeneous Diophantine approximation and in that case the above result was well known, see for example [@Cassels]. Similarly inhomogeneous Diophantine approximation can be considered in the $S$-arithmetic setting.
For a multivariable approximating function $\Psi$ and a function $\Theta: {\mathbb{Q}}^{n}_S \to {\mathbb{Q}}_S$, we say that a vector ${\mathbf{x}}\in {\mathbb{Q}}_S^n $ is $(\Psi,\Theta)$-approximable if there exist infinitely many $({\mathbf{a}}, a_0)\in{\mathbb{Z}}^n\setminus\{0\}\times {\mathbb{Z}}$ such that $$|a_0 + {\mathbf{a}}\cdot {\mathbf{x}}+\Theta({\mathbf{x}})|_{S}^l\leq \left\{
\begin{array}{rl}
\Psi(\widetilde{{\mathbf{a}}}) & \text{ if } \infty \notin S\\
\\
\Psi({\mathbf{a}}) & \text{ if } \infty \in S.
\end{array} \right.$$
The convergence case of Khintchine’s theorem in this setting again follows from the Borel-Cantelli lemma. The divergence theorem in the case when $S = \{p\}$ consists of a single prime is a consequence of the results in this paper.
Diophantine approximation on manifolds
--------------------------------------
In the theory of Diophantine approximation on manifolds, one studies the inheritance of generic (for Lebesgue measure) Diophantine properties by proper submanifolds of ${\mathbb{R}}^n$. This theory has seen dramatic advances in the last two decades, beginning with the proof of the Baker-Sprindžuk conjectures by Kleinbock and Margulis [@KM] using nondivergence estimates for certain flows on the space of unimodular lattices. Motivated by problems in transcendental number theory, K. Mahler conjectured in 1932 that almost every point on the curve $${\mathbf{f}}({\mathbf{x}}) = (x, x^2, \dots, x^n)$$ is not *very well approximable*, i.e. $\psi$-approximable for $\psi:= \psi_{\varepsilon}(k) = k^{-n-\varepsilon}$ for some $\varepsilon > 0$. This conjecture was resolved by V. G. Sprindžuk [@Sp; @Sp3], who in turn conjectured that almost every point on a nondegenerate manifold is not very well approximable. This conjecture, in a more general, multiplicative form, was resolved by D. Kleinbock and G. Margulis in [@KM]. The following definition is taken from [@KT] and is based on [@KM]. Let $f : U \to F^n$ be a $C^k$ map, where $F$ is any locally compact valued field and $U$ is an open subset of $F^d$, and say that $f$ is nondegenerate at $x_0 \in U$ if the space $F^n$ is spanned by partial derivatives of $f$ at $x_0$ up to some finite order. Loosely speaking, a nondegenerate manifold is one which is locally not contained in any proper affine subspace. Subsequent to the work of Kleinbock and Margulis, there were rapid advances in the theory of dual approximation on manifolds. In [@BKM] (and independently in [@Ber1]) the convergence case of the Khintchine-Groshev theorem for nondegenerate manifolds was proved and in [@BBKM], the complementary divergence case was established.
As for the $p$-adic theory, Sprindžuk [@Sp] himself established the $p$-adic and function field (i.e. positive characteristic) versions of Mahler’s conjecture. Subsequently, there were several partial results (cf. [@Kov; @BK]) culminating in the work of Kleinbock and Tomanov [@KT], where the $S$-adic case of the Baker-Sprindžuk conjectures was settled in full generality. In [@G], the second named author established the function field analogue. The convergence case of Khintchine’s theorem for nondegenerate manifolds in the $S$-adic setting was established by Mohammadi and Golsefidy [@MoS1] and the divergence case for ${\mathbb{Q}}_p$ in [@MoS2].
In the case of inhomogeneous Diophantine approximation on manifolds, following several partial results (cf. [@Bu] and the references in [@BeVe; @BeVe2]), an inhomogeneous transference principle was developed by Beresnevich and Velani, with which they resolved the inhomogeneous analogue of the Baker-Sprindžuk conjectures. Subsequently, Badziahin, Beresnevich and Velani [@BaBeVe] established the convergence and divergence cases of the inhomogeneous Khintchine theorem for nondegenerate manifolds. They proved a new result even in the classical setting by allowing the inhomogeneous term to vary. The divergence theorem is established in the same paper in the more general setting of Hausdorff measures.
In this paper, we will establish the convergence case of an inhomogeneous Khintchine theorem for nondegenerate manifolds in the $S$-adic setting, as well as the divergence case for ${\mathbb{Q}}_p$. As in [@BaBeVe], the divergence case is proved in the greater generality of Hausdorff measures. Prior results in the $p$-adic theory of inhomogeneous approximation for manifolds focussed mainly on curves, cf. [@BDY; @BeK; @U1; @U2].
Main Results
------------
To state our main results, we introduce some notation following [@MoS1], recall some of the assumptions from that paper and set forth one further standing assumption. The assumptions are as follows.
(I0) $S$ contains the infinite place.
(I1) We will consider the domain to be of the form ${\mathbf{U}}=\prod_{\nu\in S} {\mathbf{U}}_{\nu}$, where ${\mathbf{U}}_\nu\subset{\mathbb{Q}}_\nu^{d_\nu}$ is an open box. Here, the norm is taken to be the Euclidean norm at the infinite place and the $L^{\infty}$ norm at the finite places.
(I2) We will consider functions ${\mathbf{f}}({\mathbf{x}}) =({\mathbf{f}}_\nu(x_\nu))_{\nu\in S}$, for ${\mathbf{x}}=(x_\nu) \in{\mathbf{U}}$, where ${\mathbf{f}}_\nu=(f_\nu^{(1)},f_\nu^{(2)},\dots,f_\nu^{(n)}): {\mathbf{U}}_\nu\to {\mathbb{Q}}_\nu^n$ is an analytic map for each $\nu\in S$ which extends analytically to the boundary of ${\mathbf{U}}_\nu$.
(I3) We assume that the restrictions of $1, f_\nu^{(1)},f_\nu^{(2)},\dots,f_\nu^{(n)}$ to any open subset of ${\mathbf{U}}_\nu$ are linearly independent over ${\mathbb{Q}}_\nu$, and that $\|{\mathbf{f}}({\mathbf{x}})\|\leq 1$, $\|\nabla{\mathbf{f}}_\nu(x_\nu)\| \leq 1$ and $|\Phi_\beta {\mathbf{f}}_\nu(y_1,y_2,y_3)| \leq \frac{1}{2}$ for any $\nu \in S$, any second difference quotient $\Phi_\beta$ and any $x_\nu,y_1,y_2,y_3 \in {\mathbf{U}}_\nu$. We refer the reader to Section $3$ for definitions.
(I4) \[monotone\_cond\] We assume that the function $\Psi :{\mathbb{Z}}^n \to {\mathbb{R}}_{+}$ is monotone decreasing componentwise, i.e. $$\Psi(a_1,\cdots,a_i,\cdots, a_n)\geq \Psi(a_1,\cdots, a'_{i},\cdots, a_n)$$ whenever $|a_i|_S\leq |a'_i|_S$.
(I5) We assume that $\Theta({\mathbf{x}})=(\Theta_\nu(x_\nu))$, where $\Theta :{\mathbf{U}}\to {\mathbb{Q}}_S$ is also analytic and extends analytically to the boundary of each ${\mathbf{U}}_\nu$. We further assume that $\|\Theta({\mathbf{x}})\|\leq 1$, $\|\nabla\Theta_\nu(x_\nu)\| \leq 1$ and $|\Phi_\beta \Theta_\nu(y_1,y_2,y_3)| \leq \frac{1}{2}$ for any $\nu \in S$, any second difference quotient $\Phi_\beta$ and any $x_\nu,y_1,y_2,y_3 \in {\mathbf{U}}_\nu$.
We can now state the first main Theorem of the present paper.
\[thm:main\] Let $S$ be as in (I0) and ${\mathbf{U}}$ as in (I1). Suppose ${\mathbf{f}}$ satisfies (I2) and (I3), that $\Psi$ satisfies (I4) and $\Theta$ satisfies (I5). Then $${\mathcal{W}}_{\Psi,\Theta}^{{\mathbf{f}}} := \{ {\mathbf{x}}\in{\mathbf{U}}| \ {\mathbf{f}}({\mathbf{x}}) \text{ is } (\Psi,\Theta)-\text{ approximable}\}$$ has measure zero if $\sum_{{\mathbf{a}}\in {\mathbb{Z}}^n\setminus\{0\}} \Psi({\mathbf{a}}) <\infty$.
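To make the convergence condition concrete, here is a standard sketch (the exponent $\delta$ and the shell-counting bound are illustrative on our part, not taken from the theorem):

```latex
% Take, for any fixed \delta > 0,
\Psi(\mathbf{a}) = \|\mathbf{a}\|^{-(n+\delta)}, \qquad \mathbf{a}\in\mathbb{Z}^{n}\setminus\{0\}.
% Grouping the lattice points into shells \|\mathbf{a}\| = k, each shell contains O(k^{n-1}) points, so
\sum_{\mathbf{a}\in\mathbb{Z}^{n}\setminus\{0\}} \Psi(\mathbf{a})
  \ll \sum_{k\geq 1} k^{n-1}\, k^{-(n+\delta)}
  = \sum_{k\geq 1} k^{-1-\delta} < \infty,
% and the theorem then gives that \mathcal{W}^{\mathbf{f}}_{\Psi,\Theta} has measure zero.
```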
The divergence case of our Theorem is proved in the more general setting of Hausdorff measures. However, we need to impose some restrictions: we only consider the case when $S = \{p\}$ consists of a single prime, the inhomogeneous function is assumed to be analytic, and the approximating function is not as general as in Theorem \[thm:main\]. We will denote by $\mathcal{H}^{s}(X) $ the $s$-dimensional Hausdorff measure of a subset $X$ of ${\mathbb{Q}}^{n}_{S}$ and $\dim X$ the Hausdorff dimension, where $s > 0$ is a real number.
\[thm:divergence\] Let $S=\{p\}$ consist of a single prime and let ${\mathbf{U}}$ be as in (I1). Suppose ${\mathbf{f}}:{\mathbf{U}}\subset{\mathbb{Q}}_p^m\to {\mathbb{Q}}_p^n$ satisfies (I2) and (I3). Let $$\label{def:newpsi}
\Psi({\mathbf{a}})= \psi(\|{\mathbf{a}}\|), {\mathbf{a}}\in{\mathbb{Z}}^{n+1}$$ be an approximating function and assume that $s > m-1$. Let $\Theta:{\mathbf{U}}\to {\mathbb{Q}}_p$ be an analytic map satisfying (I5). Then $$\mathcal{H}^s(\mathcal{W}^{\mathbf{f}}_{(\Psi,\Theta)}\cap{\mathbf{U}})=\mathcal{H}^s({\mathbf{U}}) \text{ if } \sum_{{\mathbf{a}}\in {\mathbb{Z}}^n \backslash \{0\}} (\Psi({\mathbf{a}}))^{s+1-m}=\infty.$$
Given an approximating function $\psi$, the lower order at infinity $\tau_{\psi}$ of $1/\psi$ is defined by $$\tau_{\psi} := \liminf_{t \to \infty}\frac{-\log\psi(t)}{\log t}.$$ The divergent sum condition of Theorem \[thm:divergence\] is satisfied whenever $$s<m-1+\frac{n+1}{\tau_\psi}.$$ Therefore, by the definition of Hausdorff measure and dimension, we get
[\[jar\]]{} Let ${\mathbf{f}}$ and $\Theta$ be as in Theorem \[thm:divergence\]. Let $\psi$ be an approximating function as in (\[def:newpsi\]) such that $n+1\leq \tau_\psi<\infty$. Then $$\dim (\mathcal{W}^{\mathbf{f}}_{(\Psi,\Theta)}\cap{\mathbf{U}})\geq m-1+\frac{n+1}{\tau_\psi}.$$
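For instance (a sketch of ours under the hypotheses above), taking $\psi(t)=t^{-\tau}$ with $\tau \geq n+1$:

```latex
% The lower order at infinity is
\tau_\psi = \liminf_{t\to\infty}\frac{-\log t^{-\tau}}{\log t} = \tau.
% Counting \mathbf{a} in shells \|\mathbf{a}\| = k (each shell has O(k^{n}) points of \mathbb{Z}^{n+1}),
% the divergence sum behaves like
\sum_{k\geq 1} k^{n}\,k^{-\tau(s+1-m)},
% which diverges precisely when \tau(s+1-m) \leq n+1, i.e. when s \leq m-1+\tfrac{n+1}{\tau},
% recovering the stated bound \dim(\mathcal{W}^{\mathbf{f}}_{(\Psi,\Theta)}\cap\mathbf{U}) \geq m-1+\tfrac{n+1}{\tau}.
```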
Remarks
-------
1. We have assumed in Theorem \[thm:main\] that $S$ contains the infinite place. This is not a serious assumption: the proof in the case when $S$ contains only finite places requires some minor modifications but follows the same outline; details will appear in [@Datta], the PhD thesis of the first named author, currently under preparation. In [@MoS1], the (homogeneous) $S$-adic convergence case is proved in slightly greater generality than in the present paper: instead of ${\mathbb{Q}}$, the quotient field of a finitely generated subring of ${\mathbb{Q}}$ is considered. This more general formulation will also be investigated in [@Datta].
2. Our proof of the convergence case, namely Theorem \[thm:main\], blends techniques from the homogeneous results [@KT; @BKM; @MoS1] with the transference principle developed by Beresnevich and Velani, in the form used in [@BaBeVe]. The structure of the proof is the same as in [@BaBeVe]. We also take the opportunity to clarify some properties of $(C, \alpha)$-good functions in the $S$-adic setting, which may be of independent interest.
3. The proof of Theorem \[thm:divergence\] follows the ubiquity framework used in [@BaBeVe] but needs new ideas to implement it in the $p$-adic setting. At present, we are unable to prove the more general $S$-adic divergence statement. We note that the $S$-adic case remains open even in the homogeneous setting.
4. We now undertake a brief discussion of the assumptions (I1)–(I5). The conditions (I1)–(I4) are assumed in [@MoS1] and, as explained in loc. cit., are made for convenience. Namely, as mentioned in [@MoS1], the statement for any non-degenerate analytic manifold over ${\mathbb{Q}}_S$ follows from Theorem \[thm:main\]. In [@BaBeVe], the inhomogeneous parameter $\Theta$ is only required to be $C^2$ when restricted to the nondegenerate manifold; here, however, we need to assume that it is analytic.
5. Theorem \[thm:divergence\] is slightly more general than Theorem 1.2 of [@MoS2] in the homogeneous setting. In [@MoS2], the approximating function is taken to be of the form $$\Psi({\mathbf{a}})=\frac{1}{\|{\mathbf{a}}\|^{n}}\psi(\|{\mathbf{a}}\|), \quad {\mathbf{a}}\in{\mathbb{Z}}^{n+1},$$ which is a more restrictive class of approximating functions. For an $n$-tuple $v = (v_1, \cdots, v_n)$ of positive numbers satisfying $v_1 + \cdots + v_n = n$, define the $v$-quasinorm $\|\cdot\|_v$ on ${\mathbb{R}}^n$ by setting $$\|{\mathbf{x}}\|_v := \max |x_i|^{1/v_i}.$$ Following [@BaBeVe] we say that a multivariable approximating function $\Psi$ satisfies property $\mathbf{P}$ if $\Psi({\mathbf{a}}) = \psi(\|{\mathbf{a}}\|_v)$ for some approximating function $\psi$ and $v$ as above. As noted in loc. cit., when $v = (1, \dots, 1)$ we have that $\|{\mathbf{a}}\|_v = \|{\mathbf{a}}\|$, and any approximating function $\psi$ satisfies property $\mathbf{P}$, where $\psi$ is regarded as the function ${\mathbf{a}}\to \psi(\|{\mathbf{a}}\|)$. The proof of Theorem \[thm:divergence\] can be modified to deal with the case of functions satisfying property $\mathbf{P}$.
Structure of the paper {#structure-of-the-paper .unnumbered}
----------------------
In the next section, we recall the transference principle of Beresnevich and Velani. The subsequent section studies $(C, \alpha)$-good functions in the $S$-adic setting. We then prove Theorem \[thm:main\] and then Theorem \[thm:divergence\]. We conclude with some open questions.
Inhomogeneous transference principle
====================================
In this section we state the inhomogeneous transference principle of Beresnevich and Velani from [@BeVe Section 5], which will allow us to convert our inhomogeneous problem into a homogeneous one. Let $(\Omega, d)$ be a locally compact metric space. Given two countable indexing sets ${\mathcal{A}}$ and ${\mathbf{T}}$, let $H$ and $I$ be two maps from ${\mathbf{T}}\times {\mathcal{A}}\times {\mathbb{R}}_{+}$ into the set of open subsets of $\Omega$:
$$\label{H_fn}
H~:~({\mathbf{t}}, \alpha, \lambda) \in {\mathbf{T}}\times {\mathcal{A}}\times {\mathbb{R}}_{+} \mapsto H_{\mathbf{t}}(\alpha, \lambda)$$
and
$$\label{I_fn}
I~:~ ({\mathbf{t}}, \alpha, \lambda) \in {\mathbf{T}}\times {\mathcal{A}}\times {\mathbb{R}}_{+} \mapsto I_{\mathbf{t}}(\alpha, \lambda).$$
Furthermore, let $$\label{defH}
H_{{\mathbf{t}}} (\lambda) := \bigcup_{\alpha \in {\mathcal{A}}} H_{\mathbf{t}}(\alpha, \lambda) \text{ and } I_{{\mathbf{t}}} (\lambda) := \bigcup_{\alpha \in {\mathcal{A}}} I_{\mathbf{t}}(\alpha, \lambda).$$
Let $\Psi$ denote a set of functions $\psi: {\mathbf{T}}\to {\mathbb{R}}_{+}~:~{\mathbf{t}}\to \psi_{{\mathbf{t}}}$. For $\psi \in \Psi$, consider the limsup sets
$$\label{deflambda}
\Lambda_{H}(\psi) = \limsup_{{\mathbf{t}}\in {\mathbf{T}}} H_{{\mathbf{t}}}(\psi_{{\mathbf{t}}}) \text{ and } \Lambda_{I}(\psi) = \limsup_{{\mathbf{t}}\in {\mathbf{T}}} I_{{\mathbf{t}}}(\psi_{{\mathbf{t}}}).$$
The sets associated with the map $H$ will be called homogeneous sets and those associated with the map $I$, inhomogeneous sets. We now come to two important properties connecting these notions.
The intersection property {#the-intersection-property .unnumbered}
-------------------------
The triple $(H, I, \Psi)$ is said to satisfy the intersection property if, for any $\psi \in \Psi$, there exists $\psi^{*} \in \Psi$ such that, for all but finitely many ${\mathbf{t}}\in {\mathbf{T}}$ and all distinct $\alpha$ and $\alpha'$ in ${\mathcal{A}}$, we have that $$\label{inter}
I_{{\mathbf{t}}}(\alpha, \psi_{{\mathbf{t}}}) \cap I_{{\mathbf{t}}}(\alpha', \psi_{{\mathbf{t}}}) \subset H_{{\mathbf{t}}}(\psi^{*}_{{\mathbf{t}}}).$$
The contraction property {#the-contraction-property .unnumbered}
------------------------
Let $\mu$ be a non-atomic finite doubling measure supported on a bounded subset $\mathbf{S}$ of $\Omega$. We recall that $\mu$ is doubling if there is a constant $\lambda > 1$ such that, for any ball $B$ with centre in ${\mathbf{S}}$, we have $$\mu(2B) \leq \lambda \mu(B),$$ where, for a ball $B$ of radius $r$, we denote by $cB$ the ball with the same centre and radius $cr$. We say that $\mu$ is contracting with respect to $(I, \Psi)$ if, for any $\psi \in \Psi$, there exists $\psi^{+}\in \Psi$ and a sequence of positive numbers $\{k_{{\mathbf{t}}}\}_{{\mathbf{t}}\in {\mathbf{T}}}$ satisfying $$\label{conv}
\sum_{{\mathbf{t}}\in {\mathbf{T}}}k_{{\mathbf{t}}} < \infty,$$ such that, for all but finitely many ${\mathbf{t}}\in {\mathbf{T}}$ and all $\alpha \in {\mathcal{A}}$, there exists a collection $C_{{\mathbf{t}}, \alpha}$ of balls $B$ centred in $\mathbf{S}$ satisfying the following conditions: $$\label{inter1}
{\mathbf{S}}\cap I_{{\mathbf{t}}}(\alpha, \psi_{{\mathbf{t}}}) \subset \bigcup_{B \in C_{{\mathbf{t}}, \alpha}} B$$
$$\label{inter2}
{\mathbf{S}}\cap \bigcup_{B \in C_{{\mathbf{t}}, \alpha}} B \subset I_{{\mathbf{t}}}(\alpha, \psi^{+}_{{\mathbf{t}}})$$
and
$$\label{inter3}
\mu(5B \cap I_{{\mathbf{t}}}(\alpha, \psi_{{\mathbf{t}}})) \leq k_{{\mathbf{t}}} \mu(5B).$$
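As an illustration of the doubling condition (a standard computation of ours, not needed in the sequel), Haar measure on $\mathbb{Q}_p$ is doubling:

```latex
% Normalise Haar measure \mu on \mathbb{Q}_p so that \mu(\mathbb{Z}_p) = 1; then \mu(B(x, p^{-k})) = p^{-k}.
% Every ball coincides with a ball whose radius is a power of p (its effective radius).
% If B has effective radius p^{-k}, then 2B has radius 2p^{-k} < p^{-k+2},
% so the effective radius of 2B is at most p^{-k+1}, giving
\mu(2B) \leq p^{-k+1} = p\,\mu(B),
% i.e. \mu is doubling with constant \lambda = p.
```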
We are now in a position to state Theorem $5$ from [@BeVe].
\[transfer\] Suppose that $(H, I, \Psi)$ satisfies the intersection property and that $\mu$ is contracting with respect to $(I, \Psi)$. Then $$\label{eq:transfer1}
\mu(\Lambda_{H}(\psi))=0 ~\forall~\psi \in \Psi \Rightarrow \mu(\Lambda_{I}(\psi)) = 0 ~\forall~\psi \in \Psi.$$
$(C, \alpha)$-good functions
============================
In this section, we recall the important notion of $(C, \alpha)$-good functions on ultrametric spaces. We follow the treatment of Kleinbock and Tomanov [@KT]. Let $X$ be a metric space, $\mu$ a Borel measure on $X$ and let $(F, |\cdot|)$ be a local field. For a subset $U$ of $X$ and $C, \alpha > 0$, we say that a Borel measurable function $f : U \to F$ is $(C, \alpha)$-good on $U$ with respect to $\mu$ if for any open ball $B \subset U$ centred in ${\operatorname{supp}}\,\mu$ and any $\varepsilon > 0$ one has $$\label{gooddef}
\mu \left(\{ x \in B \big| \ |f(x)| < \varepsilon \} \right) \leq
C\left(\displaystyle \frac{\varepsilon}{\sup_{x \in
B}|f(x)|}\right)^{{\alpha}}\mu(B).$$ The following elementary properties of $(C,
{\alpha})$-good functions will be used.
1. (G1) If $f$ is $(C,{\alpha})$-good on an open set $V$, then so is $\lambda f$ for every $\lambda \in F$;
2. (G2) If $f_i$, $i \in I$, are $(C,{\alpha})$-good on $V$, then so is $\sup_{i \in I}|f_i|$;
3. (G3) If $f$ is $(C,{\alpha})$-good on $V$ and for some $c_1, c_2 > 0$ one has $c_1\leq \frac{|f(x)|}{|g(x)|}\leq c_2$ for all $x \in V$, then $g$ is $(C(c_2/c_1)^{{\alpha}},{\alpha})$-good on $V$;
4. (G4) If $f$ is $(C,{\alpha})$-good on $V$, then it is $(C',\alpha')$-good on $V'$ for every $C' \geq \max\{C,1\}$, $\alpha' \leq \alpha$ and $V'\subset V$.
From (G2) it follows that the supremum norm of a vector-valued function ${\mathbf{f}}$ is $(C,{\alpha})$-good whenever each of its components is $(C,{\alpha})$-good. Furthermore, in view of (G3), we can replace the norm by an equivalent one, affecting only $C$ but not ${\alpha}$.
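A minimal model computation may help fix ideas (our own illustration; the ball here is centred at $0$, so this does not verify (\[gooddef\]) for all balls):

```latex
% Take F = \mathbb{Q}_p, \mu the Haar measure with \mu(\mathbb{Z}_p) = 1, f(x) = x^k and B = \mathbb{Z}_p.
% Then \sup_{x\in B}|f(x)|_p = 1, and for 0 < \varepsilon < 1,
\mu\{x\in\mathbb{Z}_p : |x^k|_p < \varepsilon\}
  = \mu\{x : |x|_p < \varepsilon^{1/k}\} \leq \varepsilon^{1/k},
% since \{|x|_p < \delta\} is the ball \{|x|_p \leq p^{-j}\} for the largest p^{-j} < \delta,
% and that ball has measure p^{-j} < \delta. So f satisfies the bound with C = 1, \alpha = 1/k on B.
```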
Using Lagrange interpolation, polynomials in $d$ variables of degree at most $k$ defined on local fields can be seen to be $(C, 1/dk)$-good, with $C$ depending only on $d$ and $k$. In [@KM], [@BKM] and [@KT] (for ultrametric fields), this property was extended to smooth functions satisfying certain properties. We rapidly recall, following [@S] (see also [@KT]), the definition of smooth functions in the ultrametric case. Let $U$ be a non-empty subset of $F$ without isolated points. For $n \in \mathbb{N}$, define $$\nabla^{n}(U) = \{(x_1,\dots,x_n) \in U^n : x_i \neq x_j \text{ for } i \neq j \}.$$ The $n$-th order difference quotient of a function $f : U \to F$ is the function $\Phi_n(f)$ defined inductively by $\Phi_0 (f) =
f$ and, for $n \in {\mathbb{N}}$ and $(x_1,\dots,x_{n+1}) \in \nabla^{n+1}(U)$, by $$\Phi_{n}f(x_1,\dots,x_{n+1}) = \frac{\Phi_{n-1}f(x_1,x_3,\dots,x_{n+1}) -
\Phi_{n-1}f(x_2,\dots,x_{n+1})}{x_1-x_2}.$$
This definition does not depend on the ordering of the variables, as all difference quotients are symmetric functions. A function $f$ on $U$ is called a $C^n$ function if $\Phi_n f$ can be extended to a continuous function $\bar{\Phi}_{n}f : U^{n+1} \to F$. We also set $$D_n f(a) = \bar{\Phi}_{n}f(a,\dots,a),~a \in U.$$ We have the following theorem (c.f. [@S], Theorem $29.5$).
\[derivative\] Let $f \in C^{n}(U \to F)$. Then $f$ is $n$ times differentiable and $$j!\,D_j f = f^{(j)}$$ for all $1 \leq j \leq n$.
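A quick worked example of these difference quotients (our illustration): for $f(x)=x^2$,

```latex
\Phi_1 f(x_1,x_2) = \frac{x_1^2 - x_2^2}{x_1 - x_2} = x_1 + x_2,
\qquad
\Phi_2 f(x_1,x_2,x_3) = \frac{(x_1 + x_3) - (x_2 + x_3)}{x_1 - x_2} = 1.
% Hence D_1 f(a) = 2a = f'(a) and 2!\,D_2 f(a) = 2 = f''(a), consistent with the theorem above.
```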
To define $C^{k}$ functions in several variables, we follow the notation set forth in [@KT]. Consider a multiindex $\beta =
(i_1,\dots,i_d)$ and let $$\Phi_{\beta}f = \Phi^{i_1}_{1}\circ \dots \circ \Phi^{i_d}_{d} f.$$ This difference quotient is defined on the set $\nabla^{i_1+1}U_1 \times \dots \times \nabla^{i_d+1}U_d$, where the $U_i$ are non-empty subsets of $F$ without isolated points. A function $f$ is then said to belong to $C^{k}(U_1\times \dots
\times U_d)$ if for any multiindex $\beta$ with $|\beta| = \sum_{j =
1}^{d} i_j \leq k$, $\Phi_{\beta} f$ extends to a continuous function $\bar{\Phi}_{\beta}f : U_{1}^{i_1 + 1} \times \dots \times
U_{d}^{i_d + 1} \to F$. We then have $$\label{multivanish}
\partial_{\beta}f(x_1,\dots,x_d) = \beta!\,
\bar{\Phi}_{\beta}f(x_1,\dots,x_1,\dots,x_d,\dots,x_d),$$ where each $x_j$ is repeated $i_j+1$ times and $\beta ! = \prod_{j = 1}^{d} i_{j}!$.
We are now ready to gather the results on ultrametric $(C, \alpha)$-good functions that we need. We begin with Theorem $3.2$ from [@KT].
\[theorem 3.2\] Let $V_1,V_2,\cdots,V_d$ be nonempty open sets in an ultrametric field $F$. Let $k\in {\mathbb{N}}$, $A_1,\cdots,A_d> 0$ and $f\in C^k(V_1\times\cdots\times V_d)$ be such that $$\label{eqn 3.3}
|\Phi_j^kf|\equiv A_j \text{ on } \nabla^{k+1}V_j\times\prod_{i\neq j}V_i , \quad j=1,\cdots,d.$$ Then $f$ is $(dk^{3-\frac{1}{k}},\frac{1}{dk})$-good on $V_1\times\cdots\times V_d$.
The following is an ultrametric analogue of Proposition 1 from [@BaBeVe].
\[Calpha\_Prop\] Let $U_\nu$ be an open subset of ${\mathbb{Q}}_\nu ^d$, let ${\mathbf{x}}_0 \in U_\nu$ and let $\mathcal{F}\subset C^l(U_\nu)$ be a compact family of functions $f: U_\nu\to {\mathbb{Q}}_\nu$ for some $l\geq 2$. Also assume that $$\label{3.4}
\inf_{f\in\mathcal{F}}\max_{0<|\beta|\leq l} \ |\partial_{\beta}f({\mathbf{x}}_0)|>0.$$
Then there exist a neighbourhood $V_\nu\subset U_\nu$ of ${\mathbf{x}}_0$ and $C, \delta > 0$ satisfying the following property. For any $\Theta\in C^l(U_\nu)$ such that $$\label{theta_cond}
\sup_{{\mathbf{x}}\in U_\nu} \max_{0<|\beta|\leq l} \ |\partial_{\beta}\Theta({\mathbf{x}})|\leq \delta$$ and for any $f\in \mathcal {F}$ we have that
1. $f+\Theta $ is $(C,\frac{1}{dl})$-good on $V_\nu$;
2. $|\nabla(f+\Theta)|$ is $\left(C,\frac{1}{d(l-1)}\right)$-good on $V_\nu$.
We follow the proof of [@BaBeVe], which in turn is a modification of the ideas used to establish Proposition 3.4 in [@BKM]. The case $\nu=\infty$ is exactly Proposition 1 of [@BaBeVe], so we assume that $\nu\neq\infty$. By (\[3.4\]) there exists $C_1 > 0$ such that for any $f\in \mathcal{F}$ there exists a multiindex $\beta$ with $0<|\beta|=k\leq l$, where $k=k(f)$, such that $$\label{3.6}
|\partial_{\beta} f ({\mathbf{x}}_0)|\geq C_1.$$ By the compactness of $\mathcal{F}$, the quantity $\inf_{f\in\mathcal{F}}\max_{|\beta|\leq l} \ |\partial_{\beta}f({\mathbf{x}}_0)|$ is actually attained for some $f$, and we may take that value to be $C_1$. Since there are finitely many $\beta$, we can consider the subfamilies $\mathcal{F}_\beta:=\{f\in\mathcal{F}\ |\ |\partial_{\beta} f ({\mathbf{x}}_0)|\geq C_1\}$, each of which is also compact in $C^l(U_\nu)$ and satisfies (\[3.4\]). Proving the Proposition for each $\mathcal{F}_\beta$ will yield sets $U_\beta$ where (1) and (2) above hold, and setting $V_{\nu} := \bigcap_{\beta} U_{\beta}$ then proves the Proposition. We may therefore assume without loss of generality that $\beta$ is the same for every $f\in\mathcal{F}$.
We wish to apply Theorem 3.2 of [@KT] and to do so we need to satisfy (\[eqn 3.3\]). We are going to show that there exists $A\in {\operatorname{GL}}_d(\mathcal{O})$ such that $f\circ A$ has the property (\[eqn 3.3\]). For $A\in {\operatorname{GL}}_d(\mathcal{O})$ we have, by the chain rule that $$\label{lin_sys1}
\begin{array}{rcr}
\partial_{1}^{k}f\circ A(A {^{\text{-}1}}{\mathbf{x}}_0) &=& \sum_{\sum i_j=k, i_j\geq 0} C_{(i_1,\cdots,i_d)} a_{11}^{i_1}\cdots a_{d1}^{i_d} \ \partial_{\beta=(i_1,\cdots,i_d)}^k f({\mathbf{x}}_0) \\
\vdots \\
\partial_{d}^{k}f\circ A(A{^{\text{-}1}}{\mathbf{x}}_0) &=& \sum_{\sum i_j=k, i_j\geq 0} C_{(i_1,\cdots,i_d)} a_{1d}^{i_1}\cdots a_{dd}^{i_d} \ \partial_{\beta=(i_1,\cdots,i_d)}^k f({\mathbf{x}}_0).
\end{array}$$ We want $A=(a_{ij})$ such that every term on the left-hand side of (\[lin\_sys1\]) above is nonzero, knowing that $\partial_{\beta}^k f({\mathbf{x}}_0)\neq 0$ for at least one $\beta=(i_1,\cdots,i_d)$. Namely, we wish to find $A\in {\operatorname{GL}}_d(\mathcal{O})$ such that $x'_i\neq 0$ for every $i$, where $$\begin{array}{rcr}
x'_1 &= & \sum C_{(i_1,\cdots,i_d)} \ a_{11}^{i_1}\cdots a_{d1}^{i_d} \ x_{(i_1,\cdots,i_d)}\\
\vdots \\
x'_d &=& \sum C_{(i_1,\cdots,i_d)} \ a_{1d}^{i_1}\cdots a_{dd}^{i_d} \ x_{(i_1,\cdots,i_d)}
\end{array}$$ i.e. $$\begin{array}{rcr}
x'_1&=& g(a_{11},\cdots,a_{d1}) \\
\vdots \\
x'_d&=& g(a_{1d},\cdots,a_{dd})
\end{array}$$ where $g$ is a homogeneous polynomial of degree $k$. We already know that $\partial_{\beta}^k f({\mathbf{x}}_0)\neq 0$ for at least one $\beta=(i_1,\cdots,i_d)$, so at least one $x_{(i_1,\cdots,i_d)}\neq 0$ and thus $g$ is a nonzero polynomial.
Now $g$ must take a nonzero value on $(1+\pi\mathcal{O})\times\pi\mathcal{O} \times\cdots\times\pi\mathcal{O}$, since otherwise the nonzero polynomial $g$ would vanish on a nonempty open set and hence be identically zero. So take $(a_{11},\cdots,a_{d1})$ to be a point of the aforementioned set where $g(a_{11},\cdots,a_{d1})\neq 0$. Then by a similar argument choose $(a_{1i},\cdots,a_{di})\in \pi\mathcal{O}\times\cdots\times(1+\pi\mathcal{O}) \times\cdots\times\pi\mathcal{O}$, with $1+\pi\mathcal{O}$ in the $i$-th position, such that $g(a_{1i},\cdots,a_{di})\neq 0$. Choosing $A$ this way, $A$ is congruent to the identity matrix modulo $\pi$, so $\det(A)$ is automatically a unit, which implies that $A\in {\operatorname{GL}}_d(\mathcal{O})$. Thus for every $f\in\mathcal{F}$ there exists $A_f\in {\operatorname{GL}}_d(\mathcal{O})$, depending on $f$, such that $$\label{der_nonzero}
\min_{i=1,\cdots,d} |\partial_i^k f\circ A_f (A_f{^{\text{-}1}}({\mathbf{x}}_0) )|>0;$$ in fact there exists a uniform $C>0$ such that $$\min_{i=1,\cdots,d} |\partial_i^k f\circ A_f (A_f{^{\text{-}1}}({\mathbf{x}}_0) )|>C.$$ This is because we can take $$C=\inf_{f\in\mathcal{F}}\sup_{A\in {\operatorname{GL}}_d(\mathcal{O})}\min_{i=1,\cdots,d} |\partial_i^k f\circ A (A{^{\text{-}1}}({\mathbf{x}}_0) )|,$$ which is nonzero. For if not, then there exists a sequence $\{f_n\}\subset\mathcal{F}$ such that $$\sup_{A\in {\operatorname{GL}}_d(\mathcal{O})}\min_{i=1,\cdots,d} |\partial_i^k f_n\circ A (A{^{\text{-}1}}({\mathbf{x}}_0) )|<\frac{1}{n}.$$ Since $\mathcal{F}$ is compact, $\{f_n\}$ has a subsequence $\{f_{n_k}\}$ converging to some $f\in\mathcal{F}$. Taking limits, we get that $$\min_{i=1,\cdots,d} |\partial_i^k f\circ A (A{^{\text{-}1}}({\mathbf{x}}_0) )|=0 \ \forall \ A \in {\operatorname{GL}}_d(\mathcal{O}),$$ which contradicts (\[der\_nonzero\]). Consider the map $$\Phi_1: {\operatorname{GL}}_d({\mathbb{Q}}_\nu)\times C^l(U_\nu)\times U_\nu \longrightarrow {\mathbb{R}},$$ $$(A,f,{\mathbf{x}})\mapsto \min_{i=1,\cdots,d} |\partial_i^k f\circ A (A{^{\text{-}1}}({\mathbf{x}}))|.$$ It is easily verified that $\Phi_1$ is continuous. For every $f\in\mathcal{F}$ there exists $A_f \in {\operatorname{GL}}_d(\mathcal{O})$ such that $\Phi_1(A_f,f,{\mathbf{x}}_0)\geq C>\frac{C}{2},$ so by continuity we have an open neighbourhood $U_{A_f}\times U_f\times U_{({\mathbf{x}}_0,f)}$ of $(A_f,f,{\mathbf{x}}_0)$ such that $$\Phi_1(A,g,{\mathbf{x}}) >\frac{C}{2} \
\forall \ (A,g,{\mathbf{x}}) \in U_{A_f}\times U_f\times U_{({\mathbf{x}}_0,f)}.$$ In particular, $$\label{unicondition}
\Phi_1(A_f,g,{\mathbf{x}})>\frac{C}{2} \ \forall \ g\in U_f \text{ and } \forall \ {\mathbf{x}}\in U_{({\mathbf{x}}_0,f)}.$$ Now the open cover $\mathcal{F}\subset \bigcup_{f} U_f$ of the compact set $\mathcal{F}$ admits a finite subcover $\{U_{f_i}\}_{i=1}^{r}$. So by (\[unicondition\]) we have that for every ${\mathbf{x}}\in U_{{\mathbf{x}}_0}=\bigcap_{i=1}^r U_{({\mathbf{x}}_0,f_i)}$ and every $f\in\mathcal{F}$ there exists $A_{f_i}$ such that $$\label{final_cond}
\Phi_1(A_{f_i},f,{\mathbf{x}}) >\frac{C}{2}.$$
Choose $\delta=\frac{C}{4u}$, where $u$ is the constant coming from the inequality $$|\partial_i^k(\Theta\circ T)(T{^{\text{-}1}}{\mathbf{x}})| \leq u \max_{|\beta|\leq l}|\partial_{ \beta} \Theta({\mathbf{x}}) |$$ for $T\in {\operatorname{GL}}_d(\mathcal{O})$. Thus any $\Theta$ satisfying (\[theta\_cond\]) will also satisfy $$\Phi_1(A_{f_i},f+\Theta,{\mathbf{x}})>\frac{C}{4} \ \ \forall \ {\mathbf{x}}\in U_{{\mathbf{x}}_0}.$$ By the compactness of $\mathcal{F}$ and (\[theta\_cond\]), there is a uniform upper bound valid for every $f\in\mathcal{F}$ and every $\Theta$ of the aforementioned type. Now applying Theorem \[theorem 3.2\] we have that $(f+\Theta)\circ A_{f_i}$ is $(dk^{3-\frac{1}{k}},\frac{1}{dk})$-good on $A_{f_i}^{-1} U_{{\mathbf{x}}_0}$. Therefore, $f+\Theta$ is $(dk^{3-\frac{1}{k}},\frac{1}{dk})$-good on $U_{{\mathbf{x}}_0}$. This completes the proof of the first part.
Now consider the set $\mathcal{F}_{A_{f_i}}= \{f\in\mathcal{F} \ | \ \Phi_1(A_{f_i},f,{\mathbf{x}}_0)\geq \frac{C}{2}\}$. Clearly this is a closed subset of the compact set $\mathcal{F}$, so it is also compact. Therefore $\{\partial_j(f\circ A_{f_i}) \ | \ f\in\mathcal{F}_{A_{f_i}} \}$ is also compact, being the image of a compact set under a continuous map. Since $\mathcal{F} \subset \bigcup_{i=1,\cdots,r} \mathcal{F}_{A_{f_i}}$, we may, without loss of generality, take the same $A$ for every $f\in \mathcal{F}$. Now we want to apply the first part of this Proposition. Suppose $|\beta| \geq 2$ in (\[3.6\]); then to apply part (1) we have to check condition (\[3.4\]) for the set $\{\partial_j(f\circ A) \ | \ f\in\mathcal{F} \}$, where we know that $\Phi_1(A,f,{\mathbf{x}}_0)\geq \frac{C}{2}$. Suppose $$\inf_{f\in\mathcal{F}}\max_{|\beta|\leq l-1}|\partial_{ \beta}\partial_{j}(f\circ A)(A{^{\text{-}1}}({\mathbf{x}}_0))|=0.$$ Then by compactness of $\mathcal{F}$ we have that for some $f\in\mathcal{F}$, $$\max_{|\beta|\leq l-1}|\partial_{ \beta}\partial_{j}(f\circ A)(A{^{\text{-}1}}({\mathbf{x}}_0))|=0,$$ which implies that $\Phi_1(A,f,{\mathbf{x}}_0)=0$, a contradiction. Thus by applying the first part of the Proposition we get that for every $j=1,\cdots,d$, $\partial_j((f+\Theta)\circ A)$ is $(C_\star,\frac{1}{d(l-1)})$-good on an open neighbourhood $B_{A{^{\text{-}1}}({\mathbf{x}}_0)}$ of $A{^{\text{-}1}}({\mathbf{x}}_0)$. So $(\partial_j((f+\Theta)\circ A))\circ A{^{\text{-}1}}$ is $(C_\star,\frac{1}{d(l-1)})$-good on $A(B_{A{^{\text{-}1}}({\mathbf{x}}_0)})$. Therefore each $\partial_j(f+\Theta)$ is $(C_\star,\frac{1}{d(l-1)})$-good on $A(B_{A{^{\text{-}1}}({\mathbf{x}}_0)})$, and so is $|\nabla (f+\Theta)|$. The case $|\beta|=1$ in (\[3.6\]) is trivial (see property (G3) of $(C,\alpha)$-good functions). This completes the proof.
As a corollary, we have the following.
\[good\_corollary\] Let $U_\nu$ be an open subset of ${\mathbb{Q}}_\nu^{d_\nu}$, let ${\mathbf{x}}_0\in U_\nu$ be fixed and assume that ${\mathbf{f}}_\nu=(f_\nu^{(1)},f_\nu^{(2)},\dots,f_\nu^{(n)}):
U_\nu\to {\mathbb{Q}}_\nu^n$ satisfies (I2) and (I3) and that $\Theta_\nu$ satisfies (I5). Then there exist a neighbourhood $V_\nu\subset U_\nu$ of ${\mathbf{x}}_0$, a constant $C > 0$ and $l\in {\mathbb{N}}$ such that for any $(a_0,{\mathbf{a}})\in \mathcal{O}^{n+1}$,
1. $a_0+{\mathbf{a}}.{\mathbf{f}}_{\nu}+\Theta_\nu$ is $(C,\frac{1}{d_\nu l})$-good on $V_\nu$, and
2. $|\nabla({\mathbf{a}}.{\mathbf{f}}_\nu +\Theta_\nu)|$ is $(C,\frac{1}{d_\nu(l-1)})$-good on $V_\nu$.
For the case $\nu=\infty$, see Corollary $3$ of [@BaBeVe] and also [@BKM]. So we may assume $\nu\neq \infty$. Let $\mathcal{F}:= \{a_0+{\mathbf{a}}.{\mathbf{f}}_\nu+\Theta_\nu \ |\ (a_0,{\mathbf{a}})\in\mathcal{O}^{n+1}\}$. This is a compact family of functions in $C^l(U_\nu)$ for every $l>0$, since $\mathcal{O}$ is compact in ${\mathbb{Q}}_\nu$. Now if this family satisfies condition (\[3.4\]) for some $l\in {\mathbb{N}}$, then the conclusion follows from the previous Proposition. Hence we may assume that the family does not satisfy (\[3.4\]) for any $l\in {\mathbb{N}}$. Then by the continuity of the derivatives and the compactness of $\mathcal{O}$, for every $l\in {\mathbb{N}}$ with $l\geq 2$ there exists ${\mathbf{c}}_l\in \mathcal{O}^n$ such that $$\max_{0<|\beta|\leq l}|\partial_{ \beta}({\mathbf{c}}_l.{\mathbf{f}}_\nu+\Theta_\nu)({\mathbf{x}}_0)| = 0.$$ Now the sequence $\{{\mathbf{c}}_l\} \subset\mathcal{O}^n$ has a convergent subsequence $\{{\mathbf{c}}_{l_k}\}$ converging to some ${\mathbf{c}}\in \mathcal{O}^n$, since $\mathcal{O}^n$ is compact. By taking limits we get that $$|\partial_{ \beta}({\mathbf{c}}.{\mathbf{f}}_\nu+\Theta_\nu)({\mathbf{x}}_0)|=0 \ \forall \ \beta\neq 0.$$ However, as each of the ${\mathbf{f}}_{\nu}$ and $\Theta_\nu$ are analytic on $U_\nu$, there exists a neighbourhood $V_{{\mathbf{x}}_0}$ of ${\mathbf{x}}_0$ such that $$({\mathbf{c}}.{\mathbf{f}}_\nu+\Theta_\nu)({\mathbf{x}})=u\ \forall \ {\mathbf{x}}\in V_{{\mathbf{x}}_0},$$ where $u \in {\mathbb{Q}}_\nu$ is a constant.
Therefore, replacing $\Theta_\nu$ by $u-{\mathbf{c}}.{\mathbf{f}}_{\nu},$ we get that $$\mathcal{F}=\{ a_0+u+({\mathbf{a}}-{\mathbf{c}}).{\mathbf{f}}_{\nu} \ | \ (a_0,{\mathbf{a}})\in \mathcal{O}^{n+1} \}.$$ First consider the case $|a_0+u| < 2|{\mathbf{a}}-{\mathbf{c}}|$; then $$\mathcal{F}_1= \left\{\frac{a_0+u}{|{\mathbf{a}}-{\mathbf{c}}|}+\frac{{\mathbf{a}}-{\mathbf{c}}}{|{\mathbf{a}}-{\mathbf{c}}|}.{\mathbf{f}}_\nu \ \Big| \ (a_0,{\mathbf{a}})\in\mathcal{O}^{n+1}\right\}$$ is compact in $C^l(U_\nu)$ for every $l\in {\mathbb{N}}$. By the linear independence of $1,f_\nu^{(1)},\cdots,f_\nu^{(n)}$, the family $\mathcal{F}_1$ satisfies (\[3.4\]) for some $l\in{\mathbb{N}}$, and then by Proposition \[Calpha\_Prop\] we can conclude that every element of $\mathcal{F}_1$ is $(C,\frac{1}{d_\nu l})$-good on some $V_\nu\subset V_{{\mathbf{x}}_0}\subset U_\nu$, together with conclusion (2) of the Corollary above. This also implies that the functions $a_0+u+({\mathbf{a}}-{\mathbf{c}}).{\mathbf{f}}_{\nu}$ are all $(C,\frac{1}{d_\nu l})$-good on $V_\nu$ for all $(a_0,{\mathbf{a}})\in\mathcal{O}^{n+1}$ with $|a_0+u| < 2|{\mathbf{a}}-{\mathbf{c}}|$. Otherwise, $$\sup_{{\mathbf{x}}\in V_{{\mathbf{x}}_0}}|a_0+u+({\mathbf{a}}-{\mathbf{c}}).{\mathbf{f}}_{\nu}|\leq 3\inf_{{\mathbf{x}}\in V_{{\mathbf{x}}_0}}|a_0+u+({\mathbf{a}}-{\mathbf{c}}).{\mathbf{f}}_{\nu}|$$ since $|a_0+u|\geq 2|{\mathbf{a}}-{\mathbf{c}}|$, and this case is trivial: for $C\geq 3$ and $0<\alpha\leq1$ such functions are $(C,\alpha)$-good.
Let us recall the following Corollary from [@KT] (Corollary 2.3).
[\[product\_good\]]{} For $j=1,\cdots,d$, let $X_j$ be a metric space and $\mu_j$ a measure on $X_j$. Let $U_j\subset X_j$ be open, $C_j,\alpha_j >0$, and let $f$ be a function on $U_1\times\cdots \times U_d$ such that for any $j=1,\cdots,d$ and any $x_i\in U_i$ with $i\neq j$, the function $${\label{fun}}
y~~\mapsto f(x_1,\cdots,x_{j-1}, y, x_{j+1},\cdots, x_d)$$ is $(C_j,\alpha_j)$-good on $U_j$ with respect to $\mu_j$. Then $f$ is $(\widetilde{C},\widetilde{\alpha})$-good on $U_1\times\cdots\times U_d$ with respect to $\mu_1\times\cdots\times\mu_d$, where $\widetilde{C}$ and $\widetilde{\alpha}$ are computable in terms of the $C_j,\alpha_j$. In particular, if each of the functions (\[fun\]) is $(C,\alpha)$-good on $U_j$ with respect to $\mu_j$, then the conclusion holds with $\widetilde{\alpha}=\frac{\alpha}{d}$ and $\widetilde{C}=dC$.
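To illustrate the corollary (a sketch of ours; the constants are indicative only), consider $f(x,y)=xy$ on $\mathbb{Z}_p\times\mathbb{Z}_p$ with $\mu_1=\mu_2$ the normalised Haar measure:

```latex
% For fixed c \in \mathbb{Z}_p \setminus \{0\}, each section y \mapsto cy is (C,1)-good:
% on a ball B of radius r one has \sup_{y\in B}|cy|_p \geq |c|_p\, r (ultrametric), while
\mu\{y \in B : |cy|_p < \varepsilon\}
  \leq \min\!\left(\frac{\varepsilon}{|c|_p},\, \mu(B)\right)
  \leq \frac{\varepsilon}{|c|_p\, r}\,\mu(B);
% the zero section c = 0 is degenerate but vacuously harmless. The x-sections behave the same way,
% so the corollary makes (x,y) \mapsto xy \ (2C, 1/2)-good on \mathbb{Z}_p^2 w.r.t. \mu_1\times\mu_2.
```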
Now combining Corollaries \[good\_corollary\] and \[product\_good\], we can state the following:
\[good\_function\] Let ${\mathbf{f}}$ and $\Theta$ be as in Corollary \[good\_corollary\] and let ${\mathbf{x}}_0\in {\mathbf{U}}$. Then there exist a neighbourhood ${\mathbf{V}}\subset{\mathbf{U}}$ of ${\mathbf{x}}_0$ and $C>0$, $k,k_1\in{\mathbb{N}}$ such that for any $(a_0,{\mathbf{a}})\in{\mathbb{Z}}^{n+1}$ the following holds:
1. ${\mathbf{x}}~\mapsto ~|(a_0+{\mathbf{a}}.{\mathbf{f}}+\Theta )({\mathbf{x}})|_S\text{ is } (C,\frac{1}{dk})-\text{good on } {\mathbf{V}}$;
2. ${\mathbf{x}}~\mapsto~\|\nabla({\mathbf{a}}.{\mathbf{f}}_{\nu}+\Theta_\nu)({\mathbf{x}}_{\nu})\| \text{ is } (C,\frac{1}{dk_1})-\text{good on } {\mathbf{V}} \ \forall ~\nu\in S$;
where $d=\max_{\nu\in S} d_\nu$.
Proof of Theorem \[thm:main\]
=============================
We set $\phi(\nu)=\left\{\begin{array}{rl} -\varepsilon & \text{ if } \nu\neq\infty \\
1-\varepsilon & \text{ if }\nu=\infty
\end{array} \right.
$.
From the definition, it follows that ${\mathcal{W}}_{\Psi,\Theta}^{{\mathbf{f}}}$ admits a description as a limsup set. Namely, $${{\mathcal{W}}_{\Psi,\Theta}^{{\mathbf{f}}}}=\limsup_{{\mathbf{a}}\to \infty}{\mathbf{W}}_{\mathbf{f}}({\mathbf{a}},\Psi,\Theta)$$ where $l=|S|$ and $${\mathbf{W}}_{\mathbf{f}}({\mathbf{a}},\Psi,\Theta)=\{{\mathbf{x}}\in{\mathbf{U}}:| a_0+ {\mathbf{a}}\cdot {\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S^l\leq \Psi({\mathbf{a}}) \text{ for some } a_0\in{\mathbb{Z}} \} .$$
We may now write $${\mathbf{W}}_{{\mathbf{f}}}^{\text{large}}({\mathbf{a}},\Psi,\Theta)= \left \{{\mathbf{x}}\in {{\mathbf{W}}_{\mathbf{f}}({\mathbf{a}},\Psi,\Theta)}~:~\|\nabla({\mathbf{a}}.{\mathbf{f}}_\nu({\mathbf{x}}_\nu)+\Theta_\nu({\mathbf{x}}_\nu))\|_\nu>\|{\mathbf{a}}\|_S^{\phi(\nu)} ~\forall~\nu\in S \right\}$$ where $0<\varepsilon <\frac{1}{4(n+1)l^2}$ is fixed, and $${{\mathbf{W}}_{\mathbf{f}}({\mathbf{a}},\Psi,\Theta)}\setminus{{\mathbf{W}}_{{\mathbf{f}}}^{\text{large}}({\mathbf{a}},\Psi,\Theta)}=\bigcup_{\nu\in S}{\mathbf{W}}_{\nu,{\mathbf{f}}}^{\text{small}}({\mathbf{a}}, \Psi,\Theta)$$ where $${\mathbf{W}}_{\nu,{\mathbf{f}}}^{\text{small}}({\mathbf{a}},\Psi,\Theta)=\left\{{\mathbf{x}}\in{{\mathbf{W}}_{\mathbf{f}}({\mathbf{a}},\Psi,\Theta)}:\|\nabla({\mathbf{a}}.{\mathbf{f}}_\nu({\mathbf{x}}_\nu)+\Theta_\nu({\mathbf{x}}_\nu))\|_\nu\leq\|{\mathbf{a}}\|_S^{\phi(\nu)} \right\}.$$ As the set $S$ is finite, we have
$${{\mathcal{W}}_{\Psi,\Theta}^{{\mathbf{f}}}}={\mathcal{W}}_{{\mathbf{f}}}^{\text{large}}(\Psi,\Theta)\cup\bigcup_{\nu\in S}{\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi,\Theta)$$ where $${\mathcal{W}}_{{\mathbf{f}}}^{\text{large}}(\Psi,\Theta)={\limsup_{{\mathbf{a}}\to\infty}{{\mathbf{W}}_{{\mathbf{f}}}^{\text{large}}({\mathbf{a}},\Psi,\Theta)}}$$ and $${\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi,\Theta)={\limsup_{{\mathbf{a}}\to\infty}{{\mathbf{W}}_{\nu,{\mathbf{f}}}^{\text{small}}({\mathbf{a}},\Psi,\Theta)}}.$$ To prove Theorem \[thm:main\], we will show that each of these limsup sets has zero measure. Namely, the proof is divided into the “large derivative” case, where we will show $|{\mathcal{W}}_{{\mathbf{f}}}^{\text{large}}(\Psi,\Theta)|=0$, and the “small derivative” case, which involves showing $|{\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi,\Theta)|=0 \ \forall\ \nu\in S.$
The small derivative
--------------------
We begin by showing that $|{\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi,\Theta)|=0 \ \forall\ \nu\in S$. From the assumed property (I4) of $\Psi$, it follows that $$\Psi({\mathbf{a}})<\Psi_0({\mathbf{a}}) :=\prod_{\substack{i=1,\cdots,n \\ a_i\neq 0}}|a_i|_S{^{\text{-}1}}.$$ So ${\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi,\Theta)\subset {\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi_0,\Theta)$, which means that it is enough to show that $ |{\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi_0,\Theta)|=0 \ \forall\ \nu\in S $. Let us take $\mathcal{A}={\mathbb{Z}}\times{\mathbb{Z}}^n\setminus\{0\} $ and ${\mathbf{T}}={\mathbb{Z}}_{\geq 0}^n $ and define the function
$$\label{r_equation}
r_\nu ({\mathbf{t}})=\left\{\begin{array}{rl}
2^{(|{\mathbf{t}}|+1)(1-\varepsilon)} & \text{if } \nu=\infty\\
\\
2^{-(|{\mathbf{t}}|+1)\varepsilon} & \text{if } \nu\neq\infty
\end{array} \right .$$
where $\varepsilon$ is fixed as before. Now we define sets $ I_{\mathbf{t}}^\nu(\alpha,\lambda) $ and $H_{\mathbf{t}}^\nu(\alpha,\lambda)$ for every $\lambda>0,{\mathbf{t}}\in{\mathbf{T}}\text{ and } \alpha=(a_0, {\mathbf{a}})\in \mathcal{A} $ as follows:\
$$\label{def_I}
I_{\mathbf{t}}^\nu(\alpha,\lambda)=\left\{ {\mathbf{x}}\in{\mathbf{U}}:\begin{array}{l}
|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S^l<\lambda\Psi_0(2^{\mathbf{t}})\\\\
\|\nabla({\mathbf{a}}.{\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)+\Theta_\nu({\mathbf{x}}_\nu))\|_\nu<\lambda r_\nu({\mathbf{t}})\\\\
2^{t_i}\leq \max{\{1,|a_i|_S\}}\leq 2^{t_i+1} \ \forall \ 1\leq i\leq n
\end{array}
\right\}$$ and $$\label{def_H}
H_{\mathbf{t}}^\nu(\alpha,\lambda)=\left\{ {\mathbf{x}}\in{\mathbf{U}}:\begin{array}{l}
|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})|_S^l<2^l\lambda\Psi_0(2^{\mathbf{t}})\\\\
\|\nabla({\mathbf{a}}.{\mathbf{f}}_{\nu}({\mathbf{x}}_\nu))\|_\nu<2\lambda r_\nu({\mathbf{t}})\\\\
|a_i|_S\leq 2^{t_i+2} \ \forall\ 1\leq i \leq n
\end{array}
\right\}$$ where $2^{\mathbf{t}}=(2^{t_1},\cdots,2^{t_n})$ and $|S|=l$. These give us the functions (\[I\_fn\]) and (\[H\_fn\]) required in the inhomogeneous transference principle. As in (\[defH\]) and (\[deflambda\]) we get $H_{\mathbf{t}}^\nu(\lambda)$, $I_{\mathbf{t}}^\nu(\lambda)$, $\Lambda_H^\nu(\lambda)$ and $\Lambda_I^\nu(\lambda)$. Now define $\phi_\delta~:~{\mathbf{T}}\to {\mathbb{R}}_{+}$ as $\phi_\delta({\mathbf{t}}):=2^{\delta|{\mathbf{t}}|}$ for $\delta\in(0,\frac{\varepsilon}{2}]$. Clearly ${\mathcal{W}}_{\nu,{\mathbf{f}}}^{\text{small}}(\Psi_0,\Theta)\subset \Lambda_I^\nu(\phi_\delta)$ for every $\delta\in(0,\frac{\varepsilon}{2}]$. So to settle the small derivative case it is enough to show that $$\label{Inhomo_set}
|\Lambda_I^\nu(\phi_\delta)|=0 \text{ for some } \delta\in(0,\frac{\varepsilon}{2}].$$
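To fix ideas, the auxiliary functions $\Psi_0$, $r_\nu$ and $\phi_\delta$ can be sketched numerically. The following Python snippet is purely illustrative: it feeds in $|{\mathbf{t}}|$ as a precomputed scalar, uses the standard absolute value in place of $|\cdot|_S$, and picks a hypothetical value of $\varepsilon$. It checks the qualitative behaviour exploited below, namely that $r_\infty$ grows while $r_\nu$ decays at the finite places, and that $\phi_\delta$ grows strictly slower than $r_\infty$.

```python
from math import prod

def psi0(a):
    # Psi_0(a) = product of |a_i|^{-1} over the nonzero coordinates of a
    return prod(1.0 / abs(ai) for ai in a if ai != 0)

def r_nu(t_norm, eps, archimedean):
    # r_nu(t) = 2^{(|t|+1)(1-eps)} at the infinite place,
    #           2^{-(|t|+1) eps}   at a finite place
    e = (1.0 - eps) if archimedean else -eps
    return 2.0 ** ((t_norm + 1) * e)

def phi_delta(t_norm, delta):
    # phi_delta(t) = 2^{delta |t|}
    return 2.0 ** (delta * t_norm)

eps = 0.01  # hypothetical epsilon in (0, 1/(4(n+1)l^2))
assert psi0((2, 0, 4)) == 1.0 / 8                   # zero coordinates are skipped
assert r_nu(10, eps, True) > r_nu(5, eps, True)     # grows at the infinite place
assert r_nu(10, eps, False) < r_nu(5, eps, False)   # decays at finite places
assert phi_delta(10, eps / 4) < phi_delta(10, eps / 2) < r_nu(10, eps, True)
```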
Now we recall Theorem $1.3$ from [@MoS1].
\[<\] Let $S$ be as in (I0), ${\mathbf{U}}$ be as in (I1), and assume that $\mathbf{f}$ satisfies (I2) and (I3). Then for any ${\mathbf{x}}=({\mathbf{x}}_{\nu})_{\nu\in S}\in {\mathbf{U}}$, one can find a neighborhood $\mathbf{V}=\prod V_{\nu}\subseteq {\mathbf{U}}$ of ${\mathbf{x}}$ and $\alpha_1 >0$ with the following property: for any ball $\mathbf{B}\subseteq \mathbf{V}$, there exists $E>0$ such that for any choice of $0<\delta\le 1$, $T_1,\cdots,T_n\ge 1$, and $K_{\nu}>0$ with $\delta{ (\frac{T_1\cdots
T_n}{\max_i T_i})}\prod K_{\nu}\le 1$ one has
$$\label{<eqn}\left|\left\{{\mathbf{x}}\in\mathbf{B}|\hspace{1mm}\exists\
{\mathbf{a}}\in{\mathbb{Z}}^n\setminus\{0\}:\begin{array}{l}|\langle {\mathbf{a}}.{\mathbf{f}}({\mathbf{x}}) \rangle|^{l}<\delta\\\\
\|{\mathbf{a}}\nabla {\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)\|_{\nu}<K_{\nu},\hspace{2mm}\nu\in S\\\\
|a_i|_S<T_i, 1 \leq i \leq n\end{array}\right\}\right|\le
E\hspace{.5mm}\varepsilon_1^{\alpha_1}|\mathbf{B}|,\hspace{5mm}$$
where $\varepsilon_1=\max\{\delta^\frac{1}{l},(\delta{ (\frac{T_1\cdots T_n}{\max_i T_i})}\prod K_{\nu})^{\frac{1}{l(n+1)}}\}$ and $|S|=l$.
The theorem above is an $S$-adic analogue of Theorem $1.4$ in [@BKM] and is proved using nondivergence estimates for certain flows on homogeneous spaces. For further reference, we will denote the set on the left-hand side of (\[<eqn\]) by $S(\delta,K_{\nu_1},\cdots,K_{\nu_l},T_1,\cdots,T_n)$.
To show (\[Inhomo\_set\]) we want to use the inhomogeneous transference principle (\[transfer\]). Suppose for the moment that $(H_\nu,I_\nu,\Phi)$ satisfies the intersection property and that the product measure is contracting with respect to $(I_\nu,\Phi)$, where $\Phi:=\{\phi_\delta : 0< \delta <\frac{\varepsilon }{2}\}$; both properties are verified below. Then by (\[transfer\]) it is enough to show that $$\label{homo_condi}
|\Lambda_H^\nu(\phi_\delta)|=0 \text{ for some } 0<\delta \leq\frac{\varepsilon}{2}.$$
Note that $$\Lambda_H^\nu(\phi_\delta)=\limsup_{{\mathbf{t}}\in{\mathbf{T}}} \bigcup_{\alpha\in\mathcal{A}} H_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}})).$$ Using Theorem \[<\], we will show that $$\sum |\cup_{\alpha \in \mathcal{A}}H_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))|<\infty$$ for some $0<\delta<\frac{\varepsilon}{2}$. This, together with the Borel–Cantelli lemma, gives $|\Lambda_H^\nu(\phi_\delta)|=0$.\
By the definition (\[def\_H\]) of $H_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}})),$ we get $$\bigcup_{\alpha\in\mathcal{A}}H_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))\subset S(2^l\phi_\delta({\mathbf{t}})\Psi_0(2^{{\mathbf{t}}}),1,\cdots ,2.\phi_\delta({\mathbf{t}})r_\nu({\mathbf{t}}),\cdots,1,2^{t_1+2}, \dots, 2^{t_n +2})$$ i.e., here $K_\nu=2\cdot \phi_\delta({\mathbf{t}})r_\nu({\mathbf{t}}), K_\omega=1,$ where $\omega\neq\nu$ and $T_i=2^{t_i+2}$.
Case $1$ $(\nu=\infty)$
-----------------------
Here $r_\infty({\mathbf{t}})=2^{(1-\varepsilon)(|{\mathbf{t}}|+1)}$. So, $$2^l.2^{\delta|{\mathbf{t}}|}\Psi_0(2^{{\mathbf{t}}}).2.2^{\delta|{\mathbf{t}}|}2^{(1-\varepsilon)(|{\mathbf{t}}|+1)}.1.\frac{2^{\sum_{1}^n t_i+2}}{2 ^{|{\mathbf{t}}|}}=2^{2n+l+2-\varepsilon}.2^{|{\mathbf{t}}|(2\delta-\varepsilon)}<1$$ for all large $ {\mathbf{t}}$ as $2\delta-\varepsilon<0$. So by Theorem \[<\] we have $$|\bigcup_{\alpha\in\mathcal{A}}H_{{\mathbf{t}}}^\infty(\alpha,\phi_\delta({\mathbf{t}}))|\leq E\varepsilon_1^{\alpha_1}|\mathbf{B}|,$$ where $\varepsilon_1=\max\{2.2^{\frac{\delta|{\mathbf{t}}|-\sum t_i}{l}},2^{\frac{2n+l+2-\varepsilon}{l(n+1)}}.2^{\frac{|{\mathbf{t}}|(2\delta-\varepsilon)}{l(n+1)}}\}
=2^{\frac{2n+l+2-\varepsilon}{l(n+1)}}.2^{\frac{|{\mathbf{t}}|(2\delta-\varepsilon)}{l(n+1)}}$ for all large ${\mathbf{t}}\in {\mathbb{Z}}_{\geq 0}^n$. We note that $\varepsilon_1$ is ultimately the second term in the maximum: if not, then for infinitely many ${\mathbf{t}}$, $$\frac{\delta|{\mathbf{t}}|-\sum t_i}{l}>\frac{|{\mathbf{t}}|(2\delta-\varepsilon)}{l(n+1)} + O(1)$$ which implies that $$\sum t_i<|{\mathbf{t}}| + O(1),$$ a contradiction. Therefore we have $$|\bigcup_{\alpha\in\mathcal{A}}H_{{\mathbf{t}}}^\infty(\alpha,\phi_\delta({\mathbf{t}}))| \ll 2^{-\gamma|{\mathbf{t}}|},$$ where $\gamma=\frac{(\varepsilon-2\delta)}{l(n+1)}\alpha_1>0$. Hence $$\sum_{{\mathbf{t}}\in {\mathbf{T}}}|\bigcup_{\alpha\in\mathcal{A}}H_{{\mathbf{t}}}^\infty(\alpha,\phi_\delta({\mathbf{t}}))|\ll\sum_{{\mathbf{t}}\in{\mathbf{T}}} 2^{-\gamma|{\mathbf{t}}|}<\infty.$$
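As a quick numerical sanity check of the Case 1 exponents, one can confirm that the product bound falls below $1$ for large $|{\mathbf{t}}|$ and that the resulting geometric series is finite. This is only a sketch with hypothetical values of $n$ and $l$, treating $|{\mathbf{t}}|$ as a scalar and taking $\alpha_1=1$:

```python
n, l = 2, 3
eps = 1 / (4 * (n + 1) * l ** 2)       # the fixed epsilon of this section
delta = eps / 4                         # any delta in (0, eps/2)

# the Case 1 product bound 2^{2n+l+2-eps} * 2^{|t|(2 delta - eps)}
C = 2 ** (2 * n + l + 2 - eps)
bound = lambda t_norm: C * 2 ** (t_norm * (2 * delta - eps))
assert 2 * delta - eps < 0              # the exponent of |t| is negative ...
assert bound(10_000) < 1                # ... so the bound is < 1 for large |t|

# gamma = (eps - 2 delta) * alpha_1 / (l (n + 1)), with alpha_1 = 1 here
gamma = (eps - 2 * delta) / (l * (n + 1))
partial = sum(2 ** (-gamma * k) for k in range(100_000))
assert partial < 1 / (1 - 2 ** (-gamma))  # geometric series, ratio 2^{-gamma} < 1
```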
Case $2$ ($\nu\neq\infty$)
--------------------------
The argument proceeds as in Case $1$. In this case, $r_\nu({\mathbf{t}})=2^{-\varepsilon (|{\mathbf{t}}|+1)}$. So, $$2^l.2^{\delta|{\mathbf{t}}|}\Psi_0(2^{{\mathbf{t}}}).2.2^{\delta|{\mathbf{t}}|}2^{(-\varepsilon)(|{\mathbf{t}}|+1)}.1.\frac{2^{\sum_{1}^n t_i+2}}{2 ^{|{\mathbf{t}}|}}=2^{2n+l+1-\varepsilon}.2^{|{\mathbf{t}}|(2\delta-\varepsilon-1)}<1$$ for large ${\mathbf{t}}$ as $2\delta-\varepsilon<0$. Therefore, by Theorem \[<\] we have $$|\bigcup_{\alpha\in\mathcal{A}}H_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))|\leq E\varepsilon_1^{\alpha_1}|\mathbf{B}|,$$ where $\varepsilon_1=\max\{2^{\frac{\delta|{\mathbf{t}}|-\sum t_i}{l}},2^{\frac{2n+l+1-\varepsilon}{l(n+1)}}.2^{\frac{|{\mathbf{t}}|(2\delta-\varepsilon-1)}{l(n+1)}}\}
=2^{\frac{2n+l+1-\varepsilon}{l(n+1)}}.2^{\frac{|{\mathbf{t}}|(2\delta-\varepsilon-1)}{l(n+1)}}$ for all large ${\mathbf{t}}\in {\mathbb{Z}}_{\geq 0}^n$. As in Case 1, $\varepsilon_1$ is ultimately the second term in the maximum. For if not, then for infinitely many ${\mathbf{t}}$, $$\frac{\delta|{\mathbf{t}}|-\sum t_i}{l}>\frac{|{\mathbf{t}}|(2\delta-\varepsilon-1)}{l(n+1)}+ O(1)$$ which implies that $$\sum t_i<2|{\mathbf{t}}|+ O(1).$$
This gives a contradiction. Therefore we have $$|\bigcup_{\alpha\in\mathcal{A}}H_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))|\ll 2^{-\gamma|{\mathbf{t}}|},$$ where $\gamma=\frac{(\varepsilon-2\delta+1)}{l(n+1)}\alpha_1>0$. Hence $$\sum_{{\mathbf{t}}\in {\mathbf{T}}}|\bigcup_{\alpha\in\mathcal{A}}H_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))|\ll \sum_{{\mathbf{t}}\in{\mathbf{T}}} 2^{-\gamma|{\mathbf{t}}|}<\infty.$$
Consequently, it only remains to verify the intersection and contracting properties of the transference principle.
We consider the measure $|\cdot|$ restricted to some bounded open ball ${\mathbf{V}}_{{\mathbf{x}}_0}$ around ${\mathbf{x}}_0\in {\mathbf{U}}$, and obtain $|\Lambda^\nu_{I}(\phi_\delta)\cap{\mathbf{V}}_{{\mathbf{x}}_0} |=0$. Since the space is second countable, it then follows that $|\Lambda^\nu_{I}(\phi_\delta)|=0$.
Verifying the intersection property:
------------------------------------
Let ${\mathbf{t}}\in{\mathbf{T}}$ with $|{\mathbf{t}}|> \frac{l}{1-\frac{\varepsilon}{2}}$. We have to show that for $\phi_\delta$ there exists $\phi_\delta^*$ such that for all but finitely many ${\mathbf{t}}\in {\mathbf{T}}$ and all distinct $\alpha=(a_0,{\mathbf{a}}),\alpha'=(a_0',{\mathbf{a}}')\in\mathcal{A},$ we have that $I_{\mathbf{t}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))\cap I_{\mathbf{t}}^\nu(\alpha',\phi_\delta({\mathbf{t}}))\subset H_{\mathbf{t}}^\nu(\phi_\delta^*({\mathbf{t}}))$. Consider $${\mathbf{x}}\in I_{\mathbf{t}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))\cap I_{\mathbf{t}}^\nu(\alpha',\phi_\delta({\mathbf{t}}));$$ then by Definition (\[def\_I\]) we have $$\label{eqn_1}\left\{\begin{array}{l}
|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S<{(\phi_\delta({\mathbf{t}})\Psi_0(2^{\mathbf{t}}))}^\frac{1}{l}\\\\
\|\nabla({\mathbf{a}}.{\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)+\Theta_\nu({\mathbf{x}}_\nu))\|_\nu<\phi_\delta({\mathbf{t}}) r_\nu({\mathbf{t}})\end{array}\right.$$ and $$\label{eqn_2}\left\{\begin{array}{l}
|a'_0+{\mathbf{a}}'.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S<{(\phi_\delta({\mathbf{t}})\Psi_0(2^{\mathbf{t}}))}^\frac{1}{l}\\\\
\|\nabla({\mathbf{a}}'.{\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)+\Theta_\nu({\mathbf{x}}_\nu))\|_\nu<\phi_\delta({\mathbf{t}}) r_\nu({\mathbf{t}})\end{array}\right.$$ where $$|a_i|_S\leq 2^{t_i+1}\text{ for }1\leq i\leq n \text{ and } |a_i'|_S\leq 2^{t_i+1}\text{ for } 1\leq i\leq n.$$ Subtracting the respective inequalities of (\[eqn\_2\]) from (\[eqn\_1\]) and using the triangle inequality, we find that $\alpha''=(a_0-a_0',{\mathbf{a}}-{\mathbf{a}}')$ satisfies the following inequalities $$\label{eqn_3}
\left\{\begin{array}{l}
|a''_0+{\mathbf{a}}''.{\mathbf{f}}({\mathbf{x}})|_S^l<2^l\phi_\delta({\mathbf{t}})\Psi_0(2^{\mathbf{t}})\\\\
\|\nabla({\mathbf{a}}''.{\mathbf{f}}_{\nu}({\mathbf{x}}_\nu))\|_\nu<2\phi_\delta({\mathbf{t}}) r_\nu({\mathbf{t}})\\\\
|a''_i|_S\leq 2^{t_i+2} \ \forall\ 1\leq i \leq n .
\end{array} \right.$$ Observe that ${\mathbf{a}}''\neq\mathbf{0}$, because otherwise $$1\leq|a_0''|^l<2^l\phi_\delta({\mathbf{t}})\Psi_0(2^{\mathbf{t}})<2^l.2^{-{(1-\frac{\varepsilon}{2})}|{\mathbf{t}}|},$$ which implies that $|{\mathbf{t}}|\leq\frac{l}{1-\frac{\varepsilon}{2}}$, contradicting our standing assumption $|{\mathbf{t}}|> \frac{l}{1-\frac{\varepsilon}{2}}$ (only finitely many ${\mathbf{t}}$ are thereby excluded). Therefore $\alpha''\in\mathcal{A} $ and ${\mathbf{x}}\in H_{\mathbf{t}}^\nu(\alpha'',\phi_\delta({\mathbf{t}}))$. So here the particular choice of $\phi_\delta^*$ is $\phi_\delta$ itself. This verifies the intersection property.
Verifying the contraction property:
------------------------------------
Recall that to verify the contraction property we need to verify the following: for any $\phi_\delta\in \Phi $ we need to find $\phi_\delta^+\in \Phi$ and a sequence of positive numbers $\{k_{{\mathbf{t}}}\}_{{\mathbf{t}}\in{\mathbf{T}}}$ satisfying $$\sum_{{\mathbf{t}}\in{\mathbf{T}}}k_{{\mathbf{t}}}<\infty$$ such that for all but finitely many ${\mathbf{t}}\in{\mathbf{T}}$ and all $\alpha\in\mathcal{A},$ there exists a collection $C_{{\mathbf{t}},\alpha}$ of balls $B$ centred at points in $\mathbf{S}={\mathbf{V}}={\mkern 1.5mu\overline{\mkern-1.5mu{\mathbf{V}}\mkern-1.5mu}\mkern 1.5mu}$ satisfying (\[inter1\]), (\[inter2\]) and (\[inter3\]).\
Let us consider the open set $5{\mathbf{V}}_{{\mathbf{x}}_0}$ in Corollary \[good\_function\]. So we have that for any ${\mathbf{t}}\in {\mathbf{T}}$ and $\alpha=(a_0,{\mathbf{a}})\in \mathcal{A}$ $$\mathbf{F}^\nu_{{\mathbf{t}},\alpha}({\mathbf{x}}) :=\max\{\Psi_0{^{\text{-}1}}(2^{\mathbf{t}})r_\nu({\mathbf{t}})|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S^l,\|\nabla({\mathbf{a}}.{\mathbf{f}}_{\nu}+\Theta_\nu)({\mathbf{x}}_\nu)\|\}$$ is $(C,\frac{1}{dk})$-good on $5{\mathbf{V}}_{{\mathbf{x}}_0}$ for some $C>0,k\in{\mathbb{N}}$ and $d=\max d_\nu$. Using this new function $\mathbf{F}^\nu_{{\mathbf{t}},\alpha},$ we can write the previous inhomogeneous sets as follows: $$I_{\mathbf{t}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))=\left\{{\mathbf{x}}\in{\mathbf{U}}:\begin{array}{l}
\mathbf{F}^\nu_{{\mathbf{t}},\alpha}({\mathbf{x}})<\phi_\delta({\mathbf{t}})r_\nu({\mathbf{t}})\\
\\
2^{t_i}\leq\max\{1,|a_i|_S\}<2^{t_i+1} ~~\forall~ 1\leq i\leq n
\end{array}\right\}.$$\[inhom\_new\] We also note that $$I_{\mathbf{t}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))\subset I_{\mathbf{t}}^\nu(\alpha,\phi^+_\delta({\mathbf{t}}))$$ where $\phi_\delta^+({\mathbf{t}})=\phi_{\frac{\delta}{2}+\frac{\varepsilon}{4}} ({\mathbf{t}})\geq \phi_\delta({\mathbf{t}}) ~\forall~ {\mathbf{t}}\in{\mathbf{T}}$. Moreover $\phi_\delta^+=\phi_{\frac{\delta}{2}+\frac{\varepsilon}{4}}\in\Phi $ because $\frac{\delta}{2}+\frac{\varepsilon}{4}<\frac{\varepsilon}{2}.$ If $I_{\mathbf{t}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))=\emptyset$ there is nothing to prove, so without loss of generality we assume that $ I_{\mathbf{t}}^\nu(\alpha,\phi_\delta({\mathbf{t}})) \ne \emptyset $. For every $\phi_\delta \in \Phi $ we have $\phi_\delta({\mathbf{t}}) \Psi_0(2^{\mathbf{t}})<2^{-(1-\frac{\varepsilon}{2})|{\mathbf{t}}|}$, so in particular $$I_{\mathbf{t}}^\nu(\alpha,\phi_\delta^+({\mathbf{t}}))\subset \{{\mathbf{x}}\in {\mathbf{U}}~:~|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|^l<2^{-(1-\frac{\varepsilon}{2})|{\mathbf{t}}|}\}.$$ We recall Corollary 4 of [@BaBeVe]: $$\inf_{\substack{({\mathbf{a}}, a_0) \in{\mathbb{R}}^{n+1}\setminus\{0\} \\ \|{\mathbf{a}}\| \geq H_0}}\sup_{{\mathbf{x}}\in5{\mathbf{V}}_{{\mathbf{x}}_0}}|a_0+{\mathbf{a}}.{\mathbf{f}}_\infty({\mathbf{x}}_\infty)+\Theta_\infty({\mathbf{x}}_\infty)|_\infty>0.$$ Therefore, $$\inf_{\substack{({\mathbf{a}}, a_0)\in{\mathbb{R}}^{n+1}\setminus\{0\} \\ \|{\mathbf{a}}\|\geq H_0 }}\sup_{{\mathbf{x}}\in5{\mathbf{V}}_{{\mathbf{x}}_0}}|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S >$$ $$\inf_{\substack{({\mathbf{a}}, a_0)\in{\mathbb{R}}^{n+1}\setminus\{0\} \\ \|{\mathbf{a}}\| \geq H_0}}\sup_{{\mathbf{x}}\in5{\mathbf{V}}_{{\mathbf{x}}_0}}|a_0+{\mathbf{a}}.{\mathbf{f}}_\infty({\mathbf{x}}_\infty)+\Theta_\infty({\mathbf{x}}_\infty)|_\infty
> 0.$$ Now by the $(C,\frac{1}{dk})$-good property of the function $|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S^l$ on $5{\mathbf{V}}_{{\mathbf{x}}_0}$ we conclude $$|I_{\mathbf{t}}^\nu(\alpha,\phi_\delta^+({\mathbf{t}}))\cap{\mathbf{V}}_{{\mathbf{x}}_0}|\leq |\{{\mathbf{x}}\in {\mathbf{V}}_{{\mathbf{x}}_0} ~:~|a_0+{\mathbf{a}}.{\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}})|_S^l<2^{-(1-\frac{\varepsilon}{2})|{\mathbf{t}}|}\}|$$ $$\ll2^{-(1-\frac{\varepsilon}{2})(\frac{1}{dk})|{\mathbf{t}}|}|{\mathbf{V}}_{{\mathbf{x}}_0}|$$ for all sufficiently large $|{\mathbf{t}}|.$ Therefore ${\mathbf{V}}_{{\mathbf{x}}_0}\not\subset I_{{\mathbf{t}}}^\nu(\alpha,\phi^+_\delta({\mathbf{t}}))$ for sufficiently large $|{\mathbf{t}}|$. The measure restricted to ${\mathbf{V}}_{{\mathbf{x}}_0}$ will be denoted as $|\cdot|_{{\mathbf{V}}_{{\mathbf{x}}_0}}$ and thus $\mathbf{S}={\mkern 1.5mu\overline{\mkern-1.5mu{\mathbf{V}}_{{\mathbf{x}}_0}\mkern-1.5mu}\mkern 1.5mu}$. So $\mathbf{S}\cap I_{{\mathbf{t}}}^\nu(\alpha,\phi^+_\delta({\mathbf{t}}))$ is relatively open in $\mathbf{S}$, and for every ${\mathbf{x}}\in \mathbf{S}\cap I_{{\mathbf{t}}}^\nu(\alpha,\phi_\delta({\mathbf{t}}))$ there exists a ball $$B'({\mathbf{x}})\subset I_{{\mathbf{t}}}^\nu(\alpha,\phi^+_\delta({\mathbf{t}})).$$ So we can find $\kappa\geq 1$ such that the ball $B=B({\mathbf{x}}):=\kappa B'({\mathbf{x}})$ satisfies $$5 B({\mathbf{x}})\subset 5{\mathbf{V}}_{{\mathbf{x}}_0}$$ and $$\label{twosided_inclusion}
B({\mathbf{x}})\cap \mathbf{S}\subset I_{{\mathbf{t}}}^\nu(\alpha,\phi^+_{\delta}({\mathbf{t}}))\not\supset {\mathbf{S}}\cap 5B({\mathbf{x}})$$ holds for all but finitely many ${\mathbf{t}}$. The second (non-)inclusion holds because otherwise we would have ${\mathbf{V}}_{{\mathbf{x}}_0}\subset I_{{\mathbf{t}}}^\nu(\alpha,\phi^+_{\delta}({\mathbf{t}}))$, a contradiction. Then take $C_{{\mathbf{t}},\alpha}:=\{B({\mathbf{x}})~:~ {\mathbf{x}}\in \mathbf{S}\cap I_{{\mathbf{t}}}^\nu(\alpha,\phi_{\delta}({\mathbf{t}}))\} $. Hence (\[inter1\]) and (\[inter2\]) are satisfied. By (\[twosided\_inclusion\]) we have $$\label{ineq_1}
\sup_{{\mathbf{x}}\in 5B}\mathbf{F}_{{\mathbf{t}},\alpha}^\nu({\mathbf{x}})\geq \sup_{{\mathbf{x}}\in 5B\cap S} \mathbf{F}_{{\mathbf{t}},\alpha}^\nu({\mathbf{x}})\geq \phi_\delta^+({\mathbf{t}})r_\nu({\mathbf{t}})$$ for all but finitely many ${\mathbf{t}}$. So in view of the definitions we get $$\label{ineq_2}
\sup_{{\mathbf{x}}\in 5B\cap I_{{\mathbf{t}}}^\nu(\alpha,\phi_{\delta}({\mathbf{t}})) }\mathbf{F}_{{\mathbf{t}},\alpha}^\nu({\mathbf{x}})\leq 2^{(\frac{\delta}{2}-\frac{\varepsilon}{4})|{\mathbf{t}}| }\phi_\delta^+({\mathbf{t}})r_\nu({\mathbf{t}})\leq_{\ref{ineq_1}}2^{(\frac{\delta}{2}-\frac{\varepsilon}{4})|{\mathbf{t}}|}\sup_{{\mathbf{x}}\in 5B}\mathbf{F}_{{\mathbf{t}},\alpha}^\nu({\mathbf{x}}).$$ Therefore for all large $|{\mathbf{t}}|$ and $\alpha \in {\mathbb{Z}}^{n+1}$ we have $$\begin{split}
|5B\cap I_{{\mathbf{t}}}^\nu(\alpha,\phi_{\delta}({\mathbf{t}}))|\leq_{\ref{ineq_2}} &
|\{ {\mathbf{x}}\in 5B~:~\mathbf{F}_{{\mathbf{t}},\alpha}^\nu({\mathbf{x}})\leq
2^{(\frac{\delta}{2}-\frac{\varepsilon}{4})|{\mathbf{t}}|}
\sup_{{\mathbf{x}}\in 5B}\mathbf{F}_{{\mathbf{t}},\alpha}^\nu({\mathbf{x}}) \} |\\ &\leq C2^{(\frac{\delta}{2}-\frac{\varepsilon}{4})\frac{1}{dk}|{\mathbf{t}}|}|5B|.\end{split}$$ Hence finally we conclude $$\begin{split}
|5B\cap I_{{\mathbf{t}}}^\nu(\alpha,\phi_{\delta}({\mathbf{t}}))|_{{\mathbf{V}}}\leq|5B\cap I_{{\mathbf{t}}}^\nu(\alpha,\phi_{\delta}({\mathbf{t}}))|
&
\\\leq C2^{(\frac{\delta}{2}-\frac{\varepsilon}{4})\frac{1}{dk}|{\mathbf{t}}|}|5B|&\\ \leq
C_\star C2^{(\frac{\delta}{2}-\frac{\varepsilon}{4})\frac{1}{dk}|{\mathbf{t}}|}|5B|_{{\mathbf{V}}_{{\mathbf{x}}_0}},
\end{split}$$ since $5B\subset5{\mathbf{V}}_{{\mathbf{x}}_0}$. Here we are using that the measure is doubling and the centre of the ball $5B$ is in ${\mkern 1.5mu\overline{\mkern-1.5mu{\mathbf{V}}_{{\mathbf{x}}_0}\mkern-1.5mu}\mkern 1.5mu}$. So $C_\star$ is only dependent on $d_\nu$. We choose $k_{{\mathbf{t}}}=C_\star C2^{(\frac{\delta}{2}-\frac{\varepsilon}{4})\frac{1}{dk}|{\mathbf{t}}|}$ and as $(\frac{\delta}{2}-\frac{\varepsilon}{4})<0$ we also have $\sum k_{{\mathbf{t}}}<\infty$ as required in (\[conv\]). This verifies the contracting property.
The large derivative
--------------------
In this section, we will show that $|{\mathcal{W}}_{{\mathbf{f}}}^{\text{large}}(\Psi,\Theta)|=0$. Let us recall Theorem 1.2 from [@MoS1].
Assume that $\mathbf{U}$ satisfies (I1), $\mathbf{f}$ satisfies (I2), (I3) and $0<\varepsilon< \frac{1}{4n|S|^2}.$ Let $\mathcal{A}'$ be the set $$\left\{{\mathbf{x}}\in{\mathbf{U}}~|~\exists~{\mathbf{a}}\in{\mathbb{Z}}^n, \frac{T_i}{2}\leq~|a_i|_{S}<T_i,
\begin{array}{l}|\langle {\mathbf{a}}. {\mathbf{f}}({\mathbf{x}}) \rangle|^{l}<\delta(\prod_{i} T_i)^{-1}\\\\
\|{\mathbf{a}}. \nabla {\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)\|_{\nu}>\|{\mathbf{a}}\|_S^{-\varepsilon},\hspace{2mm}\nu\neq\infty\\\\
\|{\mathbf{a}}. \nabla {\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)\|_{\nu}>\|{\mathbf{a}}\|_S^{1-\varepsilon},\hspace{2mm}\nu=\infty
\end{array}
\right\}.$$ Then $|\mathcal{A}'|<C \delta\hspace{1mm}|{\mathbf{U}}|$ for large enough $\max_i T_i$ and a universal constant $C$.
Note that the function $({\mathbf{f}},\Theta):{\mathbf{U}}~\mapsto {\mathbb{Q}}_S^{n+1}$ satisfies the same properties as ${\mathbf{f}}$. Hence, as a corollary of the previous theorem, we obtain the following.
\[>coro\] Let $0<\varepsilon< \frac{1}{4(n+1)|S|^2}$ and $\mathcal{A}_{(T_i)_{1}^n}$ be the set $$\bigcup_{\substack{({\mathbf{a}},1)\in{\mathbb{Z}}^{n+1}\\\frac{T_i}{2}\leq~|a_i|_{S}<T_i}}\left\{{\mathbf{x}}\in{\mathbf{U}}~|\\
\begin{array}{l}|\langle {\mathbf{a}}. {\mathbf{f}}({\mathbf{x}})+\Theta({\mathbf{x}}) \rangle|_S^{l}<\delta(\prod_{i=1}^{n} T_i)^{-1}\\\\
\|\nabla( {\mathbf{a}}.{\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)+\Theta_\nu({\mathbf{x}}_{\nu}))\|_{\nu}>\|{\mathbf{a}}\|_S^{-\varepsilon},\hspace{2mm}\nu\neq\infty\\\\
\|\nabla ({\mathbf{a}}.{\mathbf{f}}_{\nu}({\mathbf{x}}_\nu)+\Theta_\nu({\mathbf{x}}_{\nu}))\|_{\nu}>\|{\mathbf{a}}\|_S^{1-\varepsilon},\hspace{2mm}\nu=\infty
\end{array}
\right\}.$$ Then $|\mathcal{A}_{(T_i)_{1}^n}
|<C \delta\hspace{1mm}|{\mathbf{U}}|$ for large enough $\max_i T_i$ and a universal constant $C$.
Now take $T_i=2^{t_i+1}$ and $\delta=2^{\sum_{1}^n t_i+1}\Psi(2^{{\mathbf{t}}})$. As $2^{t_i}\leq|a_i|_S<2^{t_i+1},$ it follows from (\[monotone\_cond\]) that $\Psi({\mathbf{a}})\geq \Psi(2^{{\mathbf{t}}+1})$, and using Corollary \[>coro\] we have $$\label{>measure_inq}
|\bigcup_{2^{t_i}\leq|a_i|_S<2^{t_i+1}}{\mathbf{W}}_{{\mathbf{f}}}^{\text{large}}({\mathbf{a}},\Psi,\Theta)|<C2^{\sum_{1}^n t_i+1}\Psi(2^{{\mathbf{t}}}).$$ Note that $$\sum\Psi({\mathbf{a}})\geq\sum\Psi(2^{t_1+1},\cdots,2^{t_n+1})2^{\sum_{1}^n t_i},$$ so the convergence of $\sum\Psi({\mathbf{a}})$ implies the convergence of the latter. Therefore by (\[>measure\_inq\]) and the Borel–Cantelli lemma, almost every point of ${\mathbf{U}}$ lies in at most finitely many ${\mathbf{W}}_{{\mathbf{f}}}^{\text{large}}({\mathbf{a}},\Psi,\Theta)$. Hence $|{\mathcal{W}}_{{\mathbf{f}}}^{\text{large}}(\Psi,\Theta)|=0$, completing the proof.
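The dyadic grouping behind the comparison $\sum\Psi({\mathbf{a}})\geq\sum\Psi(2^{t_1+1},\cdots,2^{t_n+1})2^{\sum t_i}$ can be tested numerically in the one-dimensional case. The sketch below uses the hypothetical choice $\Psi(a)=|a|^{-2}$ (any decreasing convergent choice would do): each dyadic block $2^t\leq a<2^{t+1}$ contains $2^t$ integers, and on it $\Psi$ is bounded below by $\Psi(2^{t+1})$.

```python
T = 12
psi = lambda a: 1.0 / a ** 2        # hypothetical decreasing Psi, case n = 1

for t in range(T):
    block = sum(psi(a) for a in range(2 ** t, 2 ** (t + 1)))
    # the block has 2^t terms, each at least psi(2^{t+1}) since psi decreases
    assert block >= 2 ** t * psi(2 ** (t + 1))

full = sum(psi(a) for a in range(1, 2 ** T))
dyadic = sum(2 ** t * psi(2 ** (t + 1)) for t in range(T))
assert full >= dyadic > 0           # convergence of the full sum forces that
                                    # of the dyadic sum, as used above
```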
The divergence theorem for ${\mathbb{Q}}_p$
===========================================
In this section we prove Theorem \[thm:divergence\] using ubiquitous systems as in [@BaBeVe]. In [@BBKM], the related notion of regular systems was used. As mentioned in the introduction, the divergence case will be proved for a more restrictive choice of approximating function than the convergence case, namely for those satisfying property $\mathbf{P}$. Indeed a more general formulation which includes the multiplicative case of the divergence Khintchine theorem remains an outstanding open problem even for submanifolds in ${\mathbb{R}}^n$. Without loss of generality, and in an effort to keep the notation reasonable, we will prove the theorem for the usual norm, i.e. we will assume ${\mathbf{v}}= (1, \dots, 1)$. The interested reader can very easily make the minor changes required to prove it for general ${\mathbf{v}}$. For $\delta > 0$ and $Q > 1$ we follow [@BaBeVe] in defining $\Phi^{{\mathbf{f}}}(Q,\delta) := \{{\mathbf{x}} \in {\mathbf{U}}~:~ \exists~{\mathbf{a}}=(a_0,{\mathbf{a}}_1) \in {\mathbb{Z}}\times{\mathbb{Z}}^n\backslash \{0\}$ such that $$|a_0+ {\mathbf{a}}_1 \cdot {\mathbf{f}}({\mathbf{x}})|_p < \delta Q^{-{(n+1)}} \text{ and } \|(a_0,{\mathbf{a}}_1)\| \leq Q\}.$$ We now recall the definition of a *nice* function.
\[nice\] We say that ${\mathbf{f}}$ is *nice* at ${\mathbf{x}}_0\in {\mathbf{U}}$ if there exists a neighbourhood ${\mathbf{U}}_0\subset {\mathbf{U}}$ of ${\mathbf{x}}_0$ and constants $0<\delta, w<1$ such that for any sufficiently small ball ${\mathbf{B}}\subset {\mathbf{U}}_0$ we have that $$\limsup_{Q\to \infty}|\Phi^{{\mathbf{f}}}(Q,\delta)\cap {\mathbf{B}}|\leq w|{\mathbf{B}}|.$$
If ${\mathbf{f}}$ is *nice* at almost every ${\mathbf{x}}_0$ in ${\mathbf{U}}$ then ${\mathbf{f}}$ is called *nice*. The following Theorem from [@MoS2] plays a crucial role. Its proof involves a suitable adaptation of the dynamical technique in [@BKM].
[[@MoS2]]{}\[lemma:nice\] Assume that ${\mathbf{f}}:{\mathbf{U}}\to {\mathbb{Q}}_p^n$ is nondegenerate at ${\mathbf{x}}_0\in {\mathbf{U}}$. Then there exists a sufficiently small ball ${\mathbf{B}}_0\subset {\mathbf{U}}$ centred at ${\mathbf{x}}_0$ and a constant $C>0$ such that for any ball ${\mathbf{B}}\subset {\mathbf{B}}_0$ and any $\delta>0 $, for sufficiently large $Q$, one has $$|\Phi^{{\mathbf{f}}}(Q,\delta)\cap {\mathbf{B}}|\leq C\delta |{\mathbf{B}}|.$$
This implies that if ${\mathbf{f}}$ is nondegenerate at ${\mathbf{x}}_0$ then ${\mathbf{f}}$ is nice at ${\mathbf{x}}_0$. We will now state the two main theorems of this section. Let $\psi : \mathbb{N} \to {\mathbb{R}}_{+}$ be a decreasing function.
\[thm:nice\] Assume that ${\mathbf{f}}:{\mathbf{U}}\subset{\mathbb{Q}}_p^m\to {\mathbb{Q}}_p^n$ is nice and satisfies the standing assumptions (I1 and I2) and that $s>m-1$. Let $\Theta:{\mathbf{U}}\to {\mathbb{Q}}_p$ be an analytic map satisfying assumption (I5). Let $\Psi({\mathbf{a}})=\psi(\|{\mathbf{a}}\|) ,{\mathbf{a}}\in{\mathbb{Z}}^{n+1} $ be an approximating function. Then, $$\label{main sum}
\mathcal{H}^s(\mathcal{W}^{\mathbf{f}}_{(\Psi,\Theta)}\cap{\mathbf{U}})=\mathcal{H}^s({\mathbf{U}}) \text{ if } \sum (\Psi({\mathbf{a}}))^{s+1-m}=\infty.$$
In view of Theorem \[lemma:nice\], Theorem \[thm:nice\] implies Theorem \[thm:divergence\]. Note that condition (I3) implies the nondegeneracy of ${\mathbf{f}}$ at every point of ${\mathbf{U}}$.
Ubiquitous Systems in ${\mathbb{Q}}_p^n$
-----------------------------------------
Let us recall the definition of ubiquitous systems in ${\mathbb{Q}}_p^n$ following [@BaBeVe]. Throughout, balls in ${\mathbb{Q}}_p^m$ are assumed to be defined in terms of the supremum norm $|\cdot|$. Let ${\mathbf{U}}$ be a ball in ${\mathbb{Q}}_p^m$ and $\mathcal{R}=(R_\alpha)_{\alpha\in J}$ be a family of subsets $R_\alpha\subset {\mathbb{Q}}_p^m$ indexed by a countable set $J$. The sets $R_\alpha$ are referred to as *resonant sets*. Throughout, $\rho\;:\;{\mathbb{R}}^+\to{\mathbb{R}}^+$ will denote a function such that $\rho(r)\to0$ as $r\to\infty$. Given a set $A\subset {\mathbf{U}}$, let $$\Delta(A,r):=\{{\mathbf{x}}\in {\mathbf{U}}\;:\; {\operatorname{dist}}({\mathbf{x}},A)<r\}$$ where ${\operatorname{dist}}({\mathbf{x}},A):=\inf\{|{\mathbf{x}}-{\mathbf{a}}|: {\mathbf{a}}\in A\}$. Next, let $\beta\;:\;J\to {\mathbb{R}}^+\;:\;\alpha\mapsto\beta_\alpha$ be a positive function on $J$. Thus the function $\beta$ attaches a ‘weight’ $\beta_\alpha$ to the set $R_\alpha$. We will assume that for every $t\in {\mathbb{N}}$ the set $J_t=\{\alpha\in J: \beta_\alpha\le 2^t\}$ is finite.\
\
**The intersection conditions:** There exists a constant $\gamma$ with $ 0 \leq \gamma \leq m$ such that for any sufficiently large $t$ and for any $\alpha\in J_t$, $c\in R_\alpha$ and $0< \lambda \le \rho(2^t)$ the following conditions are satisfied: $$\label{i1}
\big|{\mathbf{B}}(c, {\mbox{\small
$\frac{1}{2}$}}\rho(2^t))\cap\Delta(R_\alpha,\lambda)\big| \geq c_1 \,
|{\mathbf{B}}(c,\lambda)|\left(\frac{\rho(2^t)}{\lambda}\right)^{\gamma}$$ $$\label{i2}
\big|{\mathbf{B}}\cap
{\mathbf{B}}(c,3\rho(2^t))\cap\Delta(R_\alpha,3\lambda)\big| \leq c_2 \,
|{\mathbf{B}}(c,\lambda)| \left(\frac{r({\mathbf{B}})}{\lambda}\right)^{\gamma} $$ where ${\mathbf{B}}$ is an arbitrary ball centred on a resonant set with radius $r({\mathbf{B}})\le 3 \, \rho(2^t)$. The constants $c_1$ and $ c_2$ are positive and absolute. The constant $\gamma$ is referred to as the *common dimension* of ${\mathcal{R}}$.
Suppose that there exists a ubiquitous function $\rho$ and an absolute constant $k>0$ such that for any ball ${\mathbf{B}}\subseteq {\mathbf{U}}$ $$\label{coveringproperty}
\liminf_{t\to\infty} \left|\bigcup_{\alpha\in
J_t}\Delta(R_\alpha,\rho(2^t))\cap {\mathbf{B}}\right| \ \ge \ k\,|{\mathbf{B}}|.$$ Furthermore, suppose that the intersection conditions (\[i1\]) and (\[i2\]) are satisfied. Then the system $(\mathcal{R}, \beta)$ is called *locally ubiquitous in ${\mathbf{U}}$ relative to $\rho$.*
Let $(\mathcal{R},\beta)$ be a ubiquitous system in ${\mathbf{U}}$ relative to $\rho$ and $\phi$ be an approximating function. Let $\Lambda(\phi)$ be the set of points ${\mathbf{x}}\in {\mathbf{U}}$ such that the inequality $$\label{vb+}
{\operatorname{dist}}({\mathbf{x}},R_{\alpha})<\phi(\beta_\alpha)$$ holds for infinitely many $\alpha\in J$.\
We are going to use the following ubiquity lemma from [@BaBeVe] in our main proof.
\[ubi\] Let $\phi$ be an approximating function and $(\mathcal{R},\beta)$ be a locally ubiquitous system in ${\mathbf{U}}$ relative to $\rho$. Suppose that there is a $0<\lambda<1$ such that $\rho(2^{t+1})<\lambda\rho({2^t})~\forall~ t \in {\mathbb{N}}.$ Then for any $s>\gamma,$ $$\mathcal{H}^s(\Lambda(\phi))=\mathcal{H}^s({\mathbf{U}}) \text{ if }\sum_{t=1}^\infty \frac{{\phi(2^t)}^{s-\gamma}}{{\rho(2^t)}^{m-\gamma}}=\infty.$$
We will also need the strong approximation theorem mentioned in [@Zelo].
\[Strong\] Let $\xi_\infty\in{\mathbb{R}}$ and $\xi_p\in{\mathbb{Q}}_p$. For any $\bar \epsilon = (\epsilon_{\infty},\epsilon_{p}) \in \mathbb{R}_{>0}^{2}$ satisfying the inequality $$\epsilon_{\infty} \geq \frac{1}{2} \epsilon_{p}^{-1} p,$$ there exists a rational number $r \in \mathbb{Q}$ such that $$\begin{aligned}
&| r - \xi_{\infty} |_{\infty} \leq \epsilon_{\infty},
\\
&| r - \xi_{p} |_{p} \leq \epsilon_{p}
,
\\
&| r |_{q} \leq 1
\quad \forall~q \neq p.
\end{aligned}$$
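For concreteness, the rational $r$ of Theorem \[Strong\] can be produced explicitly in the special case where $\xi_p$ is known to the required precision as an integer and $\epsilon_p=p^{-s}$: the integers $r\equiv\xi_p \pmod{p^s}$ form a progression of gap $p^s=\epsilon_p^{-1}$, the one nearest $\xi_\infty$ is within $\frac{1}{2}\epsilon_p^{-1}\leq\epsilon_\infty$, and any integer has $|r|_q\leq 1$ for all primes $q$. A minimal sketch under these assumptions:

```python
from fractions import Fraction

def padic_abs(x, p):
    # p-adic absolute value of a rational x: p^{-v_p(x)}, with |0|_p = 0
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return Fraction(p) ** (-v)

def strong_approx(xi_inf, xi_p, p, s):
    # nearest element of the progression xi_p + p^s * Z to xi_inf
    k = round((Fraction(xi_inf) - xi_p) / (p ** s))
    return xi_p + p ** s * k

p, s = 3, 2                        # epsilon_p = 3^{-2} = 1/9
xi_inf, xi_p = Fraction(100, 7), 5
r = strong_approx(xi_inf, xi_p, p, s)
assert abs(Fraction(r) - xi_inf) <= Fraction(p ** s, 2)    # archimedean closeness
assert padic_abs(r - xi_p, p) <= Fraction(1, p ** s)        # p-adic closeness
assert all(padic_abs(r, q) <= 1 for q in (2, 5, 7, 11))     # |r|_q <= 1, q != p
```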
Before proving the main theorem of this section, we calculate the covolume of certain lattices.
\[covolume\] Let $y_1,\cdots,y_n\in{\mathbb{Q}}_p$ with $|y_i|_p\leq 1$. Then $$\Gamma=\left\{
(q_0, q_1,\cdots, q_n)\in{\mathbb{Z}}^{n+1} :
\begin{array}{l}
|q_0 + q_1y_1+\cdots+ q_ny_n|_p\leq\frac{1}{p^j},\\\\
|q_i|_p\leq \frac{1}{p},\\\\
i=1,\cdots, n
\end{array}\right\}$$ is a lattice in ${\mathbb{Z}}^{n+1}$ and ${\operatorname{Vol}}({\mathbb{R}}^{n+1}/\Gamma)= p^{j+n}$.
First of all, $\Gamma$ is a discrete subgroup of ${\mathbb{Z}}^{n+1}$. Clearly $(p^j,0,\cdots,0)\in \Gamma$. Since $|y_i|_p\leq 1$ we may take $q_i\in{\mathbb{Z}}$ such that $$\label{q_conditions}
|q_i-py_i|_p\leq\frac{1}{p^j},$$ which implies that $(q_i,0,\cdots,-p,\cdots,0)\in \Gamma$ where $-p$ is in the $(i+1)$th position. We claim that $$\{(p^j,0,\cdots,0),(q_i,0,\cdots,-p,\cdots,0)\ | \ i=1,\cdots,n\}$$ is a basis of $\Gamma$. The matrix comprising these elements as column vectors is $$A:= \begin{bmatrix}
p^j & q_1 & \dots & q_i & \dots & q_n\\
0 & -p & \dots & 0 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots & & \vdots\\
0 & 0 & \dots & -p & \dots & 0\\
\vdots & \vdots & & \vdots & \ddots & \vdots \\
0 & 0 & \dots & 0 & \dots & -p
\end{bmatrix}.$$ We want to show that if $\mathbf{m}=(m_0,m_1,\cdots,m_n)\in \Gamma $ then there exists $\mathbf{s}=(s_0,s_1,\cdots,s_n)\in {\mathbb{Z}}^{n+1}$ such that $A\mathbf{s}=\mathbf{m}$. Note that $$A{^{\text{-}1}}\mathbf{m} = \left(\frac{m_0p+q_1m_1+\cdots+q_nm_n}{p^{j+1}},-\frac{m_1}{p},\cdots,-\frac{m_n}{p}\right).$$ As $\mathbf{m}\in \Gamma$ we have that $p|m_i~\forall~ i=1,\cdots,n,$ hence $-\frac{m_i}{p}$ is an integer for all $i$. Now it is enough to show that $p^{j+1} | (m_0p+q_1m_1+\cdots+m_nq_n)$. Note that $$m_0p+m_1q_1+\cdots+m_nq_n= p(m_0+m_1y_1+\cdots+m_ny_n)+m_1(q_1-y_1p)+\cdots+m_n(q_n-y_np).$$ The conclusion now follows from $\mathbf{m}\in\Gamma$ and (\[q\_conditions\]). Finally, since $A$ is upper triangular, ${\operatorname{Vol}}({\mathbb{R}}^{n+1}/\Gamma)=|\det A|=p^j\cdot p^n=p^{j+n}$.
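Since $\Gamma\supseteq p^j{\mathbb{Z}}^{n+1}$ for $j\geq 1$ (each vector $p^j e_i$ satisfies both defining congruences), the index $[{\mathbb{Z}}^{n+1}:\Gamma]$, i.e. the covolume, can also be verified by brute force, counting the points of $\Gamma$ in one fundamental box of $p^j{\mathbb{Z}}^{n+1}$. A small sketch, taking the $y_i$ to be rational integers (which satisfy $|y_i|_p\leq 1$):

```python
from itertools import product

def gamma_index(p, j, n, y):
    # [Z^{n+1} : Gamma] where
    # Gamma = {q : q_0 + q_1 y_1 + ... + q_n y_n ≡ 0 (mod p^j),
    #              q_i ≡ 0 (mod p) for i = 1..n}
    M = p ** j
    in_gamma = 0
    for q in product(range(M), repeat=n + 1):
        lin = q[0] + sum(qi * yi for qi, yi in zip(q[1:], y))
        if lin % M == 0 and all(qi % p == 0 for qi in q[1:]):
            in_gamma += 1
    # Gamma contains p^j Z^{n+1}, so the index is |box| / |Gamma ∩ box|
    return M ** (n + 1) // in_gamma

assert gamma_index(2, 2, 2, (1, 3)) == 2 ** (2 + 2)   # = p^{j+n}
assert gamma_index(3, 1, 1, (2,)) == 3 ** (1 + 1)
assert gamma_index(3, 2, 2, (1, 2)) == 3 ** (2 + 2)
```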
Now we will construct a ubiquitous system which will give the main result of this section.
\[ubiquity\] Let ${\mathbf{x}}_0\in {\mathbf{U}}$ be such that ${\mathbf{f}}$ is *nice* at ${\mathbf{x}}_0$ and satisfies (I3). Then there is a neighbourhood ${\mathbf{U}}_0$ of ${\mathbf{x}}_0,$ constants $\kappa_0>0$ and $\kappa_1>0$ and a collection ${\mathcal{R}}:=(R_F)_{F\in\mathcal{F}_n}$ of sets $R_F\subset \widetilde{R_F}\cap {\mathbf{U}}_0$ such that the system $({\mathcal{R}},\beta)$ is locally ubiquitous in ${\mathbf{U}}_0$ relative to $\rho(r)=\kappa_1r^{-(n+1)}$ with common dimension $\gamma:=m-1,$ where $$\mathcal{F}_n:=\left\{F:{\mathbf{U}}\to{\mathbb{Q}}_p\ |\begin{array}{l} F({\mathbf{x}})= a_0+a_1f_1({\mathbf{x}})+\cdots+a_nf_n({\mathbf{x}}),\\\\
{\mathbf{a}}=(a_0,a_1,\cdots,a_n)\in{\mathbb{Z}}^{n+1}\setminus\mathbf{0} \end{array} \right \}$$ and given $F\in\mathcal{F}_n$ $$\widetilde{R_F}:=\{{\mathbf{x}}\in{\mathbf{U}}:\ (F+\Theta)({\mathbf{x}}) \ =\ 0\}$$ and $$\beta:\ \mathcal{F}_n\to {\mathbb{R}}^+\ : F\to \ \beta_F=\kappa_0|(a_0,a_1,\cdots,a_n)|=\kappa_0|{\mathbf{a}}|.$$
Let $\pi:\ {\mathbb{Q}}_p^m\to{\mathbb{Q}}_p^{m-1}$ be the projection map given by $$\pi(x_1,x_2,\cdots,x_m)=(x_2,\cdots,x_m),$$ and let $$\widetilde{\mathbf{V}}:=\pi(\widetilde R_F\cap{\mathbf{U}}_0),
\\
{\mathbf{V}}=\bigcup_{3\rho(\beta_F)-\text{balls} B\subset \widetilde{{\mathbf{V}}}}\frac{1}{2}B$$ and $$R_F=\left\{\begin{array}{l}
\pi{^{\text{-}1}}({\mathbf{V}})\cap\widetilde{R_F} \ \text{if} \ \ |\partial_1(F+\Theta)({\mathbf{x}})|> \lambda|\nabla(F+\Theta)({\mathbf{x}})| \ \forall \ {\mathbf{x}}\in {\mathbf{U}}_0\\\\
\emptyset \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{otherwise}.
\end{array}\right
.$$ where $0<\lambda<1$ is fixed.\
We claim that the $R_F$ are resonant sets. The intersection properties, namely (\[i1\]) and (\[i2\]), can be checked exactly as in the case of real numbers, as accomplished in [@BaBeVe], Proposition 5. We only need to note that the implicit function theorem for $C^l(U)$ in ${\mathbb{R}}^n$ was used in [@BaBeVe]. The implicit function theorem in ${\mathbb{Q}}_p$ holds for analytic maps, and all our maps have been assumed analytic, so the proof in [@BaBeVe] goes through verbatim.
It remains to check the covering property (\[coveringproperty\]) to establish ubiquity. Without loss of generality we will assume that the ball ${\mathbf{U}}_0$ in the definition of (\[nice\]) satisfies $${\operatorname{diam}}{{\mathbf{U}}_0}\leq \frac{1}{p}.$$ By Definition \[nice\] of ${\mathbf{f}}$ being nice at ${\mathbf{x}}_0,$ there exist fixed $0<\delta,w<1$ such that for any arbitrary ball ${\mathbf{B}}\subset{\mathbf{U}}_0,$ $$\limsup_{Q\to \infty}|\Phi^{{\mathbf{f}}}(Q,\delta)\cap \frac{1}{2}{\mathbf{B}}|\leq w|\frac{1}{2}{\mathbf{B}}|.$$ So for sufficiently large $Q$ we have that $$|\frac{1}{2}{\mathbf{B}}\setminus \Phi^{{\mathbf{f}}}(Q,\delta)|\geq \frac{1}{2}(1-w)|\frac{1}{2}{\mathbf{B}}|=2^{-m-1}(1-w)|{\mathbf{B}}|.$$ Therefore it is enough to show that $$\frac{1}{2}{\mathbf{B}}\setminus \Phi^{{\mathbf{f}}}(Q,\delta)\subset\bigcup _{F\in\mathcal{F}_n\\
\beta_F\leq Q}\Delta(R_F,\rho(Q))\cap{\mathbf{B}}.$$ Suppose ${\mathbf{x}}\in \frac{1}{2}{\mathbf{B}}\setminus \Phi^{{\mathbf{f}}}(Q,\delta).$ Consider the lattice $$\Gamma_{{\mathbf{x}}}=\left\{(a_0,a_1,\cdots,a_n)\in{\mathbb{Z}}^{n+1}: \begin{array}{l}|a_0+a_1f_1({\mathbf{x}})+\cdots+a_nf_n({\mathbf{x}})|_p<\delta Q^{-(n+1)}\\\\
|a_i|_p\leq\frac{1}{p} \ \forall \ {1\leq i\leq n}\end{array}\right\},$$ and the convex set $K=[-Q,Q]^{n+1}$ in ${\mathbb{R}}^{n+1}$. Note that $$|a_0+a_1f_1({\mathbf{x}})+\cdots+a_nf_n({\mathbf{x}})|_p<\delta Q^{-(n+1)}$$ if and only if $$|a_0+a_1f_1({\mathbf{x}})+\cdots+a_nf_n({\mathbf{x}})|_p\leq {p^{[\log_p\delta Q^{-(n+1)}]}}.$$ So by Lemma \[covolume\] we have that $${\operatorname{Vol}}({\mathbb{R}}^{n+1}/\Gamma)=p\cdot p^n\cdot p^{-[\log_pQ^{-(n+1)}\delta]}\leq p^{n+1}\frac{1}{p^{\log_p{\delta Q^{-(n+1)}}-1}}\leq Q^{n+1}\frac{p^{n+2}}{\delta}.$$ Using the fact that ${\mathbf{x}}\notin \Phi^{{\mathbf{f}}}(Q,\delta) $ we get that the first minimum $\lambda_1=\lambda_1(\Gamma_{{\mathbf{x}}},K)>1$. Therefore, using Minkowski’s theorem on successive minima, we have that $$2^{n+1}Q^{n+1}\lambda_1\lambda_2\cdots\lambda_{n+1}\leq 2^{n+1}{\operatorname{Vol}}({\mathbb{R}}^{n+1}/\Gamma_{{\mathbf{x}}})\leq 2^{n+1}Q^{n+1}\frac{p^{n+2}}{\delta}.$$ This implies that $\lambda_{n+1}\leq \frac{p^{n+2}}{\delta}.$ By the definition of $\lambda_{n+1}$ we get $n+1$ linearly independent integer vectors ${\mathbf{a}}_j=(a_{j,0},\cdots,a_{j,n})\in{\mathbb{Z}}^{n+1}(0\leq j\leq n)$ such that the functions $F_j$ given by $$F_j({\mathbf{y}})=a_{j,0}+a_{j,1}f_1({\mathbf{y}})+\cdots+a_{j,n}f_n({\mathbf{y}})$$ satisfy $$\label{conditions}
\left\{ \begin{array}{l}
|F_j({\mathbf{x}})|_p<\delta Q^{-(n+1)}\\\\
|a_{j,i}|_\infty\leq Q.\frac{p^{n+2}}{\delta}\\\\
|a_{j,i}|_p\leq\frac{1}{p} \text{ for } 0\leq i,j \leq n.
\end{array}\right.$$ Since $\lambda_1>1$, for every $0\leq j \leq n$ there exists at least one $0\leq j^\star\leq n$ such that $|a_{j,j^\star}|_\infty>Q$.
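The covolume chain above can also be verified in exact arithmetic for concrete parameters. A quick sketch, with illustrative (hypothetical) values of $p$, $n$, $Q$, $\delta$, checking that $p\cdot p^n\cdot p^{-[\log_p \delta Q^{-(n+1)}]}\leq Q^{n+1}p^{n+2}/\delta$:

```python
from fractions import Fraction

def floor_log(x, p):
    # Largest integer k with p**k <= x, for a positive Fraction x < 1.
    k = 0
    while Fraction(p) ** k > x:
        k -= 1
    return k

# Illustrative values only.
p, n, Q = 3, 2, 100
delta = Fraction(1, 2)

x = delta / Q ** (n + 1)                 # delta * Q^{-(n+1)}
k = floor_log(x, p)                      # [log_p(delta * Q^{-(n+1)})]
covol = Fraction(p) ** (n + 1 - k)       # p * p^n * p^{-k}
bound = Fraction(Q) ** (n + 1) * p ** (n + 2) / delta

assert covol <= bound
```

The inequality holds because $p^{[y]+1}>p^{y}$, which is exactly what the middle step of the displayed chain uses.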
Now consider the following system of linear equations,\
$$\label{linear}
\begin{array}{l}
\eta_0F_0({\mathbf{x}})+\eta_1F_1({\mathbf{x}})+\cdots+\eta_nF_n({\mathbf{x}})+\Theta({\mathbf{x}})=0\\\\
\eta_0\partial_1F_0({\mathbf{x}})+\eta_1\partial_1F_1({\mathbf{x}})+\cdots+\eta_n\partial_1F_n({\mathbf{x}})+\partial_1\Theta({\mathbf{x}})=1\\\\
\eta_0a_{0,j}+\cdots+\eta_na_{n,j}=0 \ \ (2\leq j \leq n).
\end{array}$$ Since $f_1({\mathbf{x}})=x_1 $, the determinant of the aforementioned system is $\det(a_{j,i})\neq 0$. Therefore there exists a unique solution to the system, say $(\eta_0,\eta_1,\cdots,\eta_n)\in {\mathbb{Q}}_p^{n+1}$. By the argument above, there is at least one $|a_{j,i} |_\infty > Q$. Without loss of generality assume $|a_{0,0}|_\infty >Q$. Using the strong approximation Theorem \[Strong\] we get $r_i\in{\mathbb{Q}}$ such that $$\label{r_i}
\begin{aligned}
& |r_i-2p|_\infty\leq p \text{ if } a_{i,0}>0 \text{ otherwise } |r_i+2p|_\infty<p,\\
&
|r_i-\eta_i|_p\leq 1,\\
&
|r_i|_q\leq 1
\quad \text{for} \ \text{ prime }q\neq p.
\end{aligned}$$ Now take the function $$\begin{aligned}
F({\mathbf{y}})&=r_0F_0({\mathbf{y}})+r_1F_1({\mathbf{y}})+\cdots+r_nF_n({\mathbf{y}})\\
&=a_0+a_1f_1({\mathbf{y}})+\cdots+a_nf_n({\mathbf{y}}),
\end{aligned}$$ where $$\label{a_i}
a_i=r_0a_{0,i}+r_1a_{1,i}+\cdots+r_na_{n,i},
\ \forall \ i=0,\cdots,n.$$
We claim that\
**Claim $1$.** The $a_i$ are all integers.\
From (\[r\_i\]) and (\[a\_i\]) we get $$\label{claim1.1}
|a_i|_q\leq 1, \ \forall \ \ {i}=0,\cdots,n \text{ for } q\neq p$$ and by (\[r\_i\]), (\[linear\]) and (\[conditions\]) we have $$\begin{aligned}
|a_i|_p\leq \max_{j=0,\cdots,n} \{|\eta_j-r_j|_p|a_{j,i}|_p\}\leq 1
\quad \text{ for } i=2,\cdots,n.
\end{aligned}$$ So the $a_i$ are integers for $i=2,\cdots,n$. Now note that $$F({\mathbf{x}})+\Theta({\mathbf{x}})\\
=(r_0-\eta_0)F_0({\mathbf{x}})+\cdots+(r_n-\eta_n)F_n({\mathbf{x}}).$$ Therefore we have $$\label{condition1}
|(F+\Theta)({\mathbf{x}})|_p\leq \delta Q^{-(n+1)}.$$ Again $$\partial_1(F+\Theta)({\mathbf{x}})=(r_0-\eta_0)\partial_1F_0({\mathbf{x}})+\cdots+(r_n-\eta_n)\partial_1F_n({\mathbf{x}})+1.$$ Since $|a_{j,i}|_p\leq\frac{1}{p}$, we have $|\partial_1F_j({\mathbf{x}})|_p\leq \frac{1}{p}$, and thus by (\[r\_i\]) we get $$\label{partial_condition}
1-\frac{1}{p}\leq|\partial_1(F+\Theta)({\mathbf{x}})|_p\leq 1.$$\
Now we can show that $a_1$ and $a_0$ are also integers. Since $f_1({\mathbf{y}})=y_1,$ we have $$a_1=\partial_1(F+\Theta)({\mathbf{x}})-\partial_1\Theta({\mathbf{x}})-\sum_{j =2}^{n}a_j\partial_1f_j({\mathbf{x}})$$ which implies that $|a_1|_p\leq 1$. This together with (\[claim1.1\]) proves that $a_1$ is an integer. We similarly prove that $a_0$ is an integer. We can write $$\begin{aligned}\label{a_0}
a_0=(F+\Theta)({\mathbf{x}})-\Theta({\mathbf{x}})-\sum_{j =1}^{n}a_jf_j({\mathbf{x}}).
\end{aligned}$$ This implies that $|a_0|_p\leq 1$ and thus by (\[a\_0\]) and (\[claim1.1\]) we get that $a_0$ is an integer. So the first claim is proved.
Now we look at the infinity norm of the integers $a_i$. By (\[a\_i\]), (\[conditions\]) and (\[r\_i\]) we have $$\label{a_infty}
\begin{aligned}
|a_i|_\infty\leq|r_0a_{0,i}+\cdots+r_na_{n,i}|_\infty\\
\leq 3p(n+1)Q\cdot\frac{p^{n+2}}{\delta}
\end{aligned}
\quad \text{ for } i=0,1,\cdots,n.$$ By the choice of $r_i$ we have $a_0>0$ and using the fact that $Q<|a_{0,0}|_\infty$ we get that $|a_0|_\infty>pQ$ and therefore $|{\mathbf{a}}|>pQ$.
So by (\[a\_infty\]) and the previous observation we get $$\label{beta}
\frac{1}{3p(n+1)}p^{-(n+1)}\delta Q<\beta_F=\frac{1}{3p(n+1)}p^{-(n+2)}\delta|{\mathbf{a}}|\leq Q,$$ where $\kappa_0=\frac{1}{3p(n+1)}p^{-(n+2)}\delta$. Note that for all ${\mathbf{y}}\in{\mathbf{U}}_0$ we have $$\partial_1(F+\Theta)({\mathbf{x}})=\partial_1(F+\Theta)({\mathbf{y}})+\sum_{j=1}^m\Phi_{j1}(\partial_1(F+\Theta))(\star)(x_j-y_j)$$ where $\star$ is from the coefficients of ${\mathbf{x}}$ and ${\mathbf{y}}$. By using (\[partial\_condition\]) and the fact that ${\operatorname{diam}}({\mathbf{U}}_0)\leq \frac{1}{p}$ we have $$|\partial_1(F+\Theta)({\mathbf{y}})|_p\geq 1-\frac{2}{p} \ \ \forall \ {\mathbf{y}}\in{\mathbf{U}}_0.$$ So $F$ satisfies $|\partial_1(F+\Theta)({\mathbf{x}})|> (1-\frac{2}{p})|\nabla(F+\Theta)({\mathbf{x}})| \ \forall \ {\mathbf{x}}\in {\mathbf{U}}_0$ and thus by the construction $\Delta(R_F,\rho(Q))\neq \emptyset$.\
**Claim $2$.** ${\mathbf{x}}\in \Delta(R_F,\rho(Q))$.\
We set $r_0 := {\operatorname{diam}}({\mathbf{B}})$ and define the function $$g(\xi) :=(F+\Theta)(x_1+\xi,x_2,\cdots,x_m), \text { where } |\xi|_p<r_0.$$ Then $$\begin{aligned}
|g(0)|_p=|(F+\Theta)({\mathbf{x}})|_p<\delta Q^{-(n+1)} \\
\text{ and } |g'(0)|_p=|\partial_1(F+\Theta)({\mathbf{x}})|_p>1-\frac{1}{p}.
\end{aligned}$$ Now applying Newton’s method there exists $\xi_0$ such that $g(\xi_0)=0$ and $|\xi_0|_p<\frac{p}{(p-1)}\delta Q^{-(n+1)}$. For sufficiently large $Q$ we get ${\mathbf{x}}_{\xi_0}=(x_1+\xi_0,x_2,\cdots,x_m)\in {\mathbf{B}},$ that $(F+\Theta)({\mathbf{x}}_{\xi_0})=0$ and that $|{\mathbf{x}}-{\mathbf{x}}_{\xi_0}|_p\leq \frac{p}{(p-1)}\delta Q^{-(n+1)}$. We then argue exactly as in [@BaBeVe]. We recall the argument for the sake of completeness. By the Mean Value Theorem we will get $$\begin{aligned}
|(F+\Theta)({\mathbf{y}})|_p \ll Q^{-(n+1)}\\
\text{ for any } |{\mathbf{y}}-{\mathbf{x}}_{\xi_0}|_p \ll Q^{-(n+1)}.
\end{aligned}$$ Then by (\[beta\]), the same argument as above tells us that for sufficiently large $Q>0$ the ball of radius $\rho(\beta_F)$ centred at $\pi{\mathbf{x}}_{\xi_0}$ is contained in $\widetilde{{\mathbf{V}}}$. This ultimately gives ${\mathbf{x}}_{\xi_0}\in R_F$. Since $$|{\mathbf{x}}-{\mathbf{x}}_{\xi_0}|_p\leq \frac{p}{(p-1)}\delta Q^{-(n+1)},$$ we get ${\mathbf{x}}\in\Delta(R_F,\rho(Q))$ where $\rho(Q)= \frac{p}{(p-1)}\delta Q^{-(n+1)}=\kappa_1Q^{-(n+1)}$. Therefore ${\mathbf{x}}\in \Delta(R_F,\rho(Q))$ for some $F\in\mathcal{F}_n $ such that $\beta_F\leq Q$ and this completes the proof of the Theorem.
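The Newton step used in Claim 2 is the standard Hensel lifting argument: an approximate root with unit derivative lifts to an exact root, the precision doubling at each iteration. A small self-contained illustration over $\mathbb{Z}_p$ (the polynomial $g(x)=x^2-2$ and the prime $p=7$ are hypothetical choices for illustration, not taken from the proof):

```python
def hensel_lift(g, dg, x0, p, k):
    # Lift a root of g mod p (with g'(x0) a unit mod p) to a root mod p**k,
    # doubling the working precision at each Newton step.
    x, mod = x0, p
    while mod < p ** k:
        mod = min(mod * mod, p ** k)
        inv = pow(dg(x) % mod, -1, mod)   # inverse exists: g'(x) is a unit
        x = (x - g(x) * inv) % mod
    return x

g = lambda x: x * x - 2
dg = lambda x: 2 * x
root = hensel_lift(g, dg, 3, 7, 8)        # 3^2 = 9 ≡ 2 (mod 7)
assert (root * root - 2) % 7 ** 8 == 0
```

The quadratic gain in precision mirrors the bound $|\xi_0|_p<\frac{p}{p-1}\delta Q^{-(n+1)}$ obtained above from $|g(0)|_p$ small and $|g'(0)|_p$ close to $1$.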
Proof of the main divergence theorem
-------------------------------------
Now using Theorem \[ubiquity\] and Lemma \[ubi\] we can complete the proof of Theorem \[thm:nice\].
Fix ${\mathbf{x}}_0\in {\mathbf{U}}$ and let ${\mathbf{U}}_0$ be the neighbourhood of ${\mathbf{x}}_0$ which comes from (\[ubiquity\]). We need to show that $$\mathcal{H}^s(\mathcal{W}^{\mathbf{f}}_{(\Psi,\Theta)}\cap{\mathbf{U}}_0)=\mathcal{H}^s({\mathbf{U}}_0)$$ if the series in (\[main sum\]) diverges. Consider $\phi(r):=\psi(\kappa_0{^{\text{-}1}}r) $. Our first aim is to show that $$\Lambda(\phi)\subset \mathcal{W}^{\mathbf{f}}_{(\Psi,\Theta)}.$$ Note that ${\mathbf{x}}\in \Lambda(\phi)$ implies the existence of infinitely many $F\in\mathcal{F}_n $ such that ${\operatorname{dist}}({\mathbf{x}},R_F)<\phi(\beta_F)$. For such $F\in\mathcal{F}_n$ there exists ${\mathbf{z}}\in{\mathbf{U}}_0$ such that $(F+\Theta)({\mathbf{z}})=0$ and $|{\mathbf{x}}-{\mathbf{z}}|_p<\phi(\beta_F)$. By the Mean Value Theorem $$(F+\Theta)({\mathbf{x}})=(F+\Theta)({\mathbf{z}})+ \nabla(F + \Theta)({\mathbf{x}})\cdot ({\mathbf{x}}- {\mathbf{z}}) + \sum_{i,j}\Phi_{ij}(F+\Theta)(\star)(x_i - z_i)(x_j-z_j),$$ where $\star$ comes from the coefficients of ${\mathbf{x}}$ and ${\mathbf{z}}$. Then we have that $$|(F+\Theta)({\mathbf{x}})|_p\leq|{\mathbf{x}}-{\mathbf{z}}|_p<\phi(\beta_F)=\phi(\kappa_0 |{\mathbf{a}}|)=\Psi({\mathbf{a}}).$$ Hence $\Lambda(\phi)\subset \mathcal{W}^{\mathbf{f}}_{(\Psi,\Theta)} $. Now the theorem will follow if we can show that $$\sum_{t=1}^{\infty} \frac{\phi(2^t)^{s-m+1}}{\rho(2^t)}=\infty.$$ Observe that $$\sum_{t=1}^{\infty} \frac{\phi(2^t)^{s-m+1}}{\rho(2^t)}\asymp \sum_{t=1}^\infty (\psi(\kappa_0{^{\text{-}1}}2^t))^{s-m+1}\frac{1}{\rho(2^t)}\\
\asymp \sum_{t=1}^\infty (\psi(\kappa_0{^{\text{-}1}}2^t))^{s-m+1}2^{t(n+1)}$$ $$\gg \sum_{t=1}^\infty \sum_{\kappa_0{^{\text{-}1}}2^t<|{\mathbf{a}}|\leq\kappa_0{^{\text{-}1}}2^{t+1} }(\psi(\kappa_0{^{\text{-}1}}2^t))^{s-m+1}.$$ Since $\psi$ is an approximating function, the above series is $$\gg\sum_{t=1}^\infty \sum_{\kappa_0{^{\text{-}1}}2^t<|{\mathbf{a}}|\leq\kappa_0{^{\text{-}1}}2^{t+1} }(\psi(|{\mathbf{a}}|))^{s-m+1}\asymp \sum_{{\mathbf{a}}\in{\mathbb{Z}}^{n+1}\setminus\mathbf{0}}(\psi(|{\mathbf{a}}|))^{s-m+1}$$ $$=\sum_{{\mathbf{a}}\in{\mathbb{Z}}^{n+1}\setminus\mathbf{0}}\Psi({\mathbf{a}})^{s-m+1}=\infty.$$ This completes the proof of the Theorem.
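The dyadic rearrangement above rests on the fact that the number of $\mathbf{a}\in\mathbb{Z}^{n+1}$ with $2^t<|\mathbf{a}|_\infty\leq 2^{t+1}$ is comparable to $2^{t(n+1)}$, with constants independent of $t$. A brute-force sanity check of this count for small (illustrative) parameters:

```python
from itertools import product

def shell_count(n, t):
    # Number of a in Z^{n+1} with 2^t < |a|_inf <= 2^{t+1}, by brute force.
    lo, hi = 2 ** t, 2 ** (t + 1)
    return sum(1 for a in product(range(-hi, hi + 1), repeat=n + 1)
               if lo < max(map(abs, a)) <= hi)

n = 1
for t in range(1, 5):
    lo, hi = 2 ** t, 2 ** (t + 1)
    c = shell_count(n, t)
    # Closed form: points in the larger box minus points in the smaller one.
    assert c == (2 * hi + 1) ** (n + 1) - (2 * lo + 1) ** (n + 1)
    # Comparable to 2^{t(n+1)}, with constants independent of t.
    assert 2 ** (t * (n + 1)) <= c <= 4 ** (n + 1) * 2 ** (t * (n + 1))
```

This is exactly what allows $\psi(\kappa_0^{-1}2^t)$ to be summed shell by shell against $\psi(|\mathbf{a}|)$.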
Concluding Remarks
==================
Some extensions
---------------
An interesting possibility is an investigation of the function field case. In [@G], the function field analogue of the Baker-Sprindžuk conjectures was established, and similarly it should be possible to prove the function field analogues of the results in the present paper.
Affine subspaces
----------------
In [@Kleinbock-extremal], analogues of the Baker-Sprindžuk conjectures were established for affine subspaces. In this setting, one needs to impose Diophantine conditions on the affine subspace in question. Subsequently, Khintchine type theorems were established (see [@G1; @G-Monat]); we refer the reader to [@G-handbook] for a survey of results. Recently, in [@BGGV], the inhomogeneous analogue of Khintchine’s theorem for affine subspaces was established in both the convergence and divergence cases. It would be interesting to consider the $S$-adic theory in the context of affine subspaces.
Friendly Measures
-----------------
In [@KLW] a category of measures called *friendly* measures was introduced and the Baker-Sprindžuk conjectures were proved for friendly measures. Friendly measures include volume measures on nondegenerate manifolds, so the results of [@KLW] generalize those of [@KM], but they also include many other examples, including measures supported on certain fractal sets. In [@BeVe], the inhomogeneous version of the Baker-Sprindžuk conjectures was established for a class of measures called *strongly contracting*, which includes friendly measures. It should be possible to prove an $S$-adic inhomogeneous analogue of the Baker-Sprindžuk conjectures for strongly contracting measures.
[99]{} V. Beresnevich, *A Groshev type theorem for convergence on manifolds*, Acta Math. Hungar. 94 (2002), no. 1-2, 99–130. Bernik, V., Budarina, N., Dickinson, D.: Simultaneous Diophantine approximation in the real, complex and p-adic fields. Math. Proc. Camb. Phil. Soc., 149, 193–216 (2010). V. Beresnevich, V. Bernik, H. Dickinson and M. M. Dodson, *On linear manifolds for which the Khintchin approximation theorem holds*, Vestsi Acad Navuk Belarusi. Ser. Fiz. - Mat. Navuk (2000), 14–17 (Belorussian). D. Badziahin, V. Beresnevich and S. Velani, *Inhomogeneous theory of dual Diophantine approximation on manifolds*, Advances in Mathematics **232** (2013) 1–35. V.V. Beresnevich, V.I. Bernik, E.I. Kovalevskaya, *On approximation of p-adic numbers by p-adic algebraic numbers*, Journal of Number Theory 111 (2005), 33–56. V. Beresnevich, V. Bernik, D. Kleinbock and G. Margulis, *Metric Diophantine approximation : the Khintchine-Groshev theorem for non-degenerate manifolds*, Moscow Mathematical Journal 2:2 (2002), 203–225. V.V. Beresnevich, E.I. Kovalevskaya, *On Diophantine approximations of dependent quantities in the p-adic case*, Mat. Zametki 73:1 (2003), 22–37; translation: Math. Notes 73:1-2 (2003), 21–35. V. Bernik, H. Dickinson, M. M. Dodson, *Approximation of real numbers by values of integer polynomials*, Dokl. Nats. Akad. Nauk Belarusi 42 (1998), no. 4, 51–54, 123. V. Beresnevich, D. Dickinson and S. Velani, *Measure theoretic laws for lim sup sets*, Mem. Amer. Math. Soc., **179** (2006). V. Beresnevich, A. Ganguly, A. Ghosh and S. Velani, *Inhomogeneous dual Diophantine approximation on affine subspaces*, https://arxiv.org/abs/1711.08559. V. Bernik, D. Kleinbock and G. A. Margulis, *Khintchine type theorems on manifolds : the convergence case for the standard and multiplicative versions*, Internat. Math. Res. Notices **9** (2001), pp. 453–486. V. Beresnevich, S. Velani, An inhomogeneous transference principle and Diophantine approximation, Proc. Lond. Math. 
Soc. **101** (2010) 821–851. , Simultaneous inhomogeneous Diophantine approximations on manifolds. Fundam. Prikl. Mat. 16 (2010), no. 5, 3–17. V. Bernik, H. Dickinson, J. Yuan, *Inhomogeneous Diophantine approximation on polynomials in ${\mathbb{Q}}_p$*, Acta Arith. 90 (1999), no. 1, 37–48. V.I. Bernik, E.I. Kovalevskaya, *Simultaneous inhomogeneous Diophantine approximation of the values of integral polynomials with respect to Archimedean and non-Archimedean valuations*, Acta Math. Univ. Ostrav. 14:1 (2006), 37–42. N. Budarina, D. Dickinson, *Inhomogeneous Diophantine approximation on integer polynomials with non-monotonic error function*, Acta Arith. 160 (2013), no. 3, 243–257. N. Budarina and E. Zorin, *Non-homogeneous analogue of Khintchine’s theorem in divergence case for simultaneous approximations in different metrics*, Siauliai Math. Semin. 4(12) (2009), 21–33. Y. Bugeaud, *Approximation by algebraic integers and Hausdorff dimension*, J. Lond. Math. Soc., 65 (2002), pp. 547–559. J. W. S. Cassels, An introduction to Diophantine Approximation, Cambridge University Press, Cambridge, 1957. Shreyasi Datta, TIFR thesis, in preparation. H. Dickinson, M. M. Dodson, J. Yuan, *Hausdorff dimension and p-adic Diophantine approximation*, Indag. Math. (N.S.) 10 (1999), no. 3, 337–347. A. Ghosh, *A Khintchine-type theorem for hyperplanes*, J. London Math.Soc. **72**, No.2 (2005), pp. 293–304. A. Ghosh, *Metric Diophantine approximation over a local field of positive characteristic*, Journal of Number Theory, 124 (2007), no. 2, 454–469. A. Ghosh, *Diophantine approximation and the Khintchine-Groshev theorem*, Monatsh. Math **163** (2011), no. 3, 281–299. A. Ghosh, *Diophantine approximation on subspaces of $\mathbb{R}^n$ and dynamics on homogeneous spaces*, to appear in the Handbook of Group Actions III/IV, Editors, L. Ji, A. Papadopoulos, S. T. Yau. A. Ghosh and A. Marnat, *On Diophantine transference principles*, https://arxiv.org/abs/1610.02161. 
To appear in Mathematical Proceedings of the Cambridge Philosophical Society. A. Groshev, *Une théorème sur les systèmes des formes linéaires*, Dokl. Akad. Nauk SSSR **9** (1938), pp. 151–152. Alan Haynes, *The metric theory of p-adic approximation*, Int. Math. Res. Not. IMRN 2010, no. 1, 18–52. A. Khintchine, *Einige Sätze über Kettenbrüche, mit Anwendungen auf die Theorie der Diophantischen Approximationen*, Math. Ann. **92**, (1924), pp. 115–125. D. Kleinbock, *Extremal subspaces and their submanifolds*, Geom. Funct. Anal **13**, (2003), No 2, pp.437–466. D. Kleinbock, E. Lindenstrauss, B. Weiss, *On fractal measures and Diophantine approximation*, Selecta Math. (N.S.) 10 (2004), no. 4, 479–523. D. Kleinbock and G. A. Margulis, *Flows on homogeneous spaces and Diophantine Approximation on Manifolds*, Ann Math**148**, (1998), pp.339–360. D. Kleinbock and G. Tomanov, *Flows on $S$-arithmetic homogeneous spaces and applications to metric Diophantine approximation*, Comm. Math. Helv. 82 (2007), 519–581. E.I. Kovalevskaya, *A metric theorem on the exact order of approximation of zero by values of integer polynomials in ${\mathbb{Q}}_p$*, Dokl. Nats. Akad. Nauk Belarusi 43:5 (1999), 34–36 (in Russian). S. Lang, *Algebra*, Second edition. Addison-Wesley Publishing Company, Advanced Book Program, Reading, MA, 1984. E. Lutz, *Sur les approximations diophantiennes linéaires P-adiques*, Actualités Sci. Ind., no. 1224, Hermann $\&$ Cie, Paris, 1955. A. Mohammadi, A. Salehi Golsefidy, *$S$-arithmetic Khintchine-type theorem*, Geom. Funct. Anal. 19 (2009), no. 4, 1147–1170. A. Mohammadi, A. Salehi Golsefidy, *Simultaneous Diophantine approximation on non-degenerate p-adic manifolds*, Israel J. Math. 188 (2012), 231–258. W. Schmidt, *Metrische Sätze über simultane Approximation abhänginger Grössen*, Monatsch. Math. 68 (1964), 154–166. W.H. Schikhof, *Ultrametric Calculus. 
An Introduction to p-adic Analysis*, Cambridge Studies in Advanced Mathematics 4, Cambridge University Press, Cambridge (1984). V. G. Sprindžuk, *Achievements and problems in Diophantine Approximation theory*, Russian Math. Surveys **35** (1980), pp. 1–80. V. G. Sprindžuk, *Metric theory of Diophantine approximations*, John Wiley & Sons, New York-Toronto-London, 1979. A. E. Ustinov, *Inhomogeneous approximations on manifolds in $\mathbb{Q}_p$*, Vestsï Nats. Akad. Navuk Belarusï Ser. Fïz.-Mat. Navuk 2005, no. 2, 30–34, 124. A. E. Ustinov, *Approximation of complex numbers by values of integer polynomials*, Vestsï Nats. Akad. Navuk Belarusï Ser.Fïz.-Mat. Navuk 1 (2006) 9–14, 124. D. Zelo, *Simultaneous approximation to real and $p$-adic numbers*, Thesis (Ph.D.) – University of Ottawa (Canada), 2009, 147 pp. ISBN: 978-0494-59539-8, ProQuest LLC.
[^1]: Ghosh acknowledges support of a UGC grant and a CEFIPRA grant.
|
{
"pile_set_name": "ArXiv"
}
|
[Expression of survivin in different stages of carcinogenesis and progression of breast cancer].
Survivin is one of the newly identified apoptosis inhibitors; it can block cell apoptosis by inhibiting the function of the enzymes caspase-3 and caspase-7. Previous studies indicate that survivin is overexpressed in malignant tumors. The current study was designed to investigate the effects of survivin on the tumorigenesis and progression of breast carcinoma by observing its expression in normal mammary tissue, cystic hyperplasia, atypical hyperplasia, and breast carcinoma. The expression of survivin in normal mammary tissue (96 cases), cystic hyperplasia (56 cases), atypical hyperplasia (12 cases), and breast carcinoma (119 cases) was evaluated by SP immunohistochemistry. The relationship between survivin expression and the pathological and biological features of breast cancer was assessed. The positive rates of survivin were 4.2% (4/96), 5.4% (3/56), 42.7% (5/12), and 72.3% (86/119) in normal mammary tissue, cystic hyperplasia, atypical hyperplasia, and breast carcinoma, respectively. The positive rates in the last two groups were higher than those in the former two groups (P < 0.005). Survivin was expressed more frequently in infiltrative nonspecial breast carcinoma (82.0%, 73/89) than in special and early-stage infiltrative breast carcinoma (37.5%, 3/8) (P < 0.05). Expression of survivin was correlated with lymph node metastasis, although the difference was not significant (P > 0.05). Overexpression of survivin is common in the tumorigenesis and progression of breast carcinoma. Altered expression of survivin may contribute to tumorigenesis and progression by inhibiting cell apoptosis, and its overexpression indicates a worse prognosis.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
1: The frame material is galvanised using advanced auto-machining techniques. The frame is pressed with a flower pattern, which enhances its tensile strength and durability. 2: It uses a high-tensile Al-alloy rim, so it is lightweight and tough. 3: The bearings are high-tensile, durable and silent. 4: Safe and easy to disassemble for convenient maintenance.
5: A high-precision, computer-assisted system maximizes the effectiveness of the fan. A series of procedure tests is performed to ensure the quality of the fan. 6: A one-year warranty period. 7: A special automatic centrifugal opening system solves the problem of the heavy-hammer-type shutter failing to open fully due to contamination. 8: The motor is specially designed for high-performance fans, with an updated automatic regulating device, so the belt does not need manual adjustment.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
Flash Packager for IOS: Is the performance for the compiled app too poor to use?
I am looking to develop an iPad app that relies on a lot of animation - mainly cutout animation - and was looking at using Flash CS5 to publish a Flash app to iOS.
I have seen a few articles on poor performance for Flash apps on iOS. Does anyone have experience using Flash to publish for iOS?
A:
I was doing some testing with it and the frame rate was horrible. It ended up being easier to start learning Objective-C than to use the Packager for iOS. That being said, I saw this link the other day (http://forums.adobe.com/thread/826112) talking about AIR 2.6 for iOS. Worth reading. I haven't messed with it yet, but I did do a quick test with an AIR app running on the Motorola Xoom with Android and it worked great, so maybe there's some hope.
|
{
"pile_set_name": "StackExchange"
}
|
Ask HN: Which Stocks to Buy? - rishiloyola
Which stocks do you recommend to buy during the corona virus outbreak?
======
pensatoio
Buy index funds. The risk associated with gambling on individual stocks is an
order of magnitude higher during a period such as this. You don't need that
extra risk to see gains.
------
crmd
I’ve been focusing on blue chips that were already in the dog house, e.g. IBM,
GE.
|
{
"pile_set_name": "HackerNews"
}
|
Q:
animate width of rectangle from center in raphael.js
I have a question about the .animate() API in Raphael.js.
There is a rectangle whose width and height I would like to animate.
r.animate({ width: 50, height: 50 }, 1000, "bounce");
But I want it to expand from the center of the rectangle, not the top-left corner. Does anyone know how to do this?
FIDDLE
A:
There is a better way to do this without calculation. If you know how much bigger you want to make your object, then you should animate the scaling.
Here is the DEMO
r.click(function() { r.animate({ transform:'s2' }, 500); });
Note that transform:'s2' means scale it 2x. Hope this helped ;)
EDIT: if you want this animation to work cumulatively (each click scaling the element again), write transform:'...s2' instead.
|
{
"pile_set_name": "StackExchange"
}
|
7 Classic Tips To Try On A First Date That Can Increase Your Chances Of A Second One
No matter how many first dates you go on, they’re always going to be stressful. If you really like someone, you obviously want things to go well so you can see them again. But if you’ve been dating around for a while, you likely know that second dates don’t come as easily as first dates. So what can you do to land a second date? According to experts, there are a few classic dating tips you should try.
Rest assured, old-fashioned first date tips aren’t about doing anything cheesy. In fact, you probably do a few of them already. There’s a reason why these moves have stood the test of time.
“The ‘classic moves’ really are just all about keeping the first date fun, light-hearted, and unique,” Susan Trombetti, matchmaker and CEO of Exclusive Matchmaking, tells Bustle. “It’s a first date, so you don’t need to have the most elaborate date in the world or divulge your history and opinions. The key to getting a second date is by keeping it fun and engaging in an activity that you both will enjoy.”
There are so many things you can do to make it to a second date. Here are some classic ideas you can try, according to experts.
1. Smile And Flirt
Andrew Zaeh for Bustle
If you want to have a good first date, body language is everything. “A little touching and direct eye contact goes a long way,” Trombetti says. Flirty smiles and light touching are great ways to keep it fun and create sparks if you’re into the person.
2. Keep The Date Short And Sweet
First dates are all about seeing if you two click. You don’t need to spend hours together in order to know there’s chemistry there. So as Trombetti says, “Don’t overstay on a first date. That’s a lot of pressure.” Keep it light and save something for the second date.
3. Be Extra Nice To Servers
Andrew Zaeh for Bustle
If your first date involves going out for a meal, be sure to treat hosts and servers with kindness and respect. Things can go wrong and people make mistakes. But when you’re out, try not to lose your cool. “Classic moves show your date respect, kindness, courtesy, and consideration of others,” dating and marriage therapist, Gary Brown, PhD, LMFT, tells Bustle. “The wait-staff are a captive audience and your date may very well notice how you treat others.”
4. Make Them Laugh
Andrew Zaeh for Bustle
First dates can be really stressful. But as Jeannie Assimos, chief of advice at eharmony, tells Bustle, don’t take it so seriously. It’s perfectly OK to relax and enjoy it. “Laughter is a great medicine, so show them your sense of humor,” she says. “When you’re talking to each other and laughing as if you’re talking to your best friend, it means that the date is going well and you’re enjoying each other’s company.” Chances are, you’ll see them again.
5. Ask Your Date Questions About Themselves
Andrew Zaeh for Bustle
People tend to like talking about themselves. If you want to land a second date, use that to your advantage, Assimos says. Ask your date basic questions about their family or their job. Once you’ve covered that, you can then get a little creative and ask about their guilty pleasures or embarrassing moments. “This will showcase your genuine interest in getting to know them better and wanting a second date,” she says. “It’ll also show off how great of a listener you are, which is a very desirable trait.”
6. Slow Dance
If you want to leave a lasting impression, bring your partner in for a dance. “When you hear the music, extend your arm, grab your date by the hand, and look deeply into their eyes,” Christine Scott-Hudson, licensed psychotherapist and owner of Create Your Life Studio, tells Bustle. “You don’t even have to be the world’s best dancer. Laugh, be yourself, and have some fun.”
7. Walk Your Date To Their Car Or Front Door
Andrew Zaeh for Bustle
This move is as classic as it gets. Walk your date to their car or front door, thank them for a good time, and maybe even end the date with a kiss. “Take the time to savor each step of your romance along the way,” Scott-Hudson says.
A lot of different factors go into whether a first date goes well or not. If you’re really into someone and you know you want to see them again, these classic first date moves can increase your chances of making that happen.
|
{
"pile_set_name": "Pile-CC"
}
|
package net.andreinc.mockneat.unit.misc;
/**
* Copyright 2017, Andrei N. Ciobanu
 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
 documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
 rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
 persons to whom the Software is furnished to do so, subject to the following conditions:
 The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
 Software.
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
 WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
 COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
import net.andreinc.mockneat.MockNeat;
import net.andreinc.mockneat.abstraction.MockUnitBase;
import net.andreinc.mockneat.abstraction.MockUnitString;
import java.util.function.Supplier;
import static net.andreinc.mockneat.types.enums.DictType.MIME_TYPE;
public class Mimes extends MockUnitBase implements MockUnitString {
public static Mimes mimes() {
return MockNeat.threadLocal().mimes();
}
protected Mimes() {
}
public Mimes(MockNeat mockNeat) {
super(mockNeat);
}
@Override
public Supplier<String> supplier() {
return mockNeat.dicts().type(MIME_TYPE).supplier();
}
}
|
{
"pile_set_name": "Github"
}
|
Pazzi Madonna
The Pazzi Madonna is a rectangular "stiacciato" marble relief sculpture by Donatello, now in the sculpture collections of the Bode-Museum in Berlin. Dating to around 1425-1430, it was probably originally produced for private devotion in the Palazzo Pazzi della Congiura in Florence at the beginning of Donatello's collaboration with Michelozzo. It was extremely popular and is known in several copies.
The Virgin Mary is shown three-quarter-length, holding the Christ Child in her arms. Neither of them are shown with haloes and the emphasis is instead on their tender and intense intimacy, developing themes from the Eleusa-type icon in Byzantine art. The child reaches out his arm to his mother, but both their expressions are melancholy, with the Virgin reflecting on her son's future Passion.
References
Category:Marble sculptures
Category:Sculptures by Donatello
Category:Sculptures of the Berlin State Museums
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Q:
How can I run an addon in the service addon in Kodi with python
I am developing Kodi add-ons using Python scripts and XML files. I created a service addon that starts automatically when Kodi starts.
The part of addon.xml that does this job is here:
<extension point="xbmc.service" library="addon.py" start="login" />
When this addon.py runs, a button appears on the screen.
My goal is that when you push this button, another add-on should run.
The code section in addon.py (the service addon's Python script)
that I wrote to handle this part is here:
if control == self.button0:
file_path=xbmc.translatePath(os.path.join("...\addons/script.helloworld\addon.py"))
xbmc.executebuiltin("xbmc.RunScript(file_path)")
But this error appears in kodi.log:
ERROR:CScriptInvocationManager::ExecuteAsync-Not executing
non-existing script file path
A:
You can just do this and everything should work.
xbmc.executebuiltin("RunScript(script.addonid)")
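To spell out why the original call failed: it embedded the literal text "file_path" inside the builtin string instead of the variable's value. A minimal sketch of building the command from a variable, using the script.helloworld id from the question (the executebuiltin call itself is commented out since it needs the Kodi runtime):

```python
# The original call embedded the variable's *name* in the string literal,
# so Kodi looked for a script literally called "file_path". Interpolate
# the value instead -- or better, pass the addon id, as in the answer.
addon_id = "script.helloworld"  # addon id taken from the question
command = "RunScript({})".format(addon_id)
print(command)  # -> RunScript(script.helloworld)

# Inside Kodi you would then run:
# import xbmc
# xbmc.executebuiltin(command)
```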
|
{
"pile_set_name": "StackExchange"
}
|
MMA Fight History. Results & Odds
Additional VIP Features
If you support the site through VIP subscription, you will receive many more features on the fighter profile pages. The additional features are an opponent analysis section (for reach, height, age, stance etc), odds performance over time, analysis of relative opponent strength, clinch control analyser, decision analysis thanks to MMADecisions.com and an analysis of FightMetric fighter / fight stats. To
view an example of these extra stats, check out our VIP Tour here.
|
{
"pile_set_name": "Pile-CC"
}
|
Calotype Club
Calotype Club may refer to:
Edinburgh Calotype Club, (c. 1843- ), the first photographic club in the world.
Calotype Society (London), (c. 1848- ), evolving in part to become the Royal Photographic Society.
|
{
"pile_set_name": "Wikipedia (en)"
}
|
In the manufacture of integrated circuits (ICs), devices are formed on a wafer and connected together by multiple conductive interconnection layers. These conductive interconnection layers are formed by first forming gaps, such as trenches and vias, in a dielectric layer and then filling the gaps with a conductive material.
The conductive material is usually formed within the gaps by an electrochemical plating (ECP) process. A barrier layer is first formed within the gaps in the dielectric layer. A seed layer is then formed over the barrier layer. The remaining space in the gaps is then filled with the conductive material. Finally, a planarization is performed to remove the excess conductive material.
|
{
"pile_set_name": "USPTO Backgrounds"
}
|
Q:
Import Error on Keras : 'can not import name 'abs'
I am trying to use keras for image classification. I want to load an already trained model (VGG16) for my project. But when I run
from keras.applications.vgg16 import VGG16
I get an error.
ImportError: cannot import name 'abs'
I reinstalled both tensorflow and keras using :
pip install --ignore-installed --upgrade tensorflow
conda install -c conda-forge keras
since I found suggestions on here that reinstalling could help, though they related to tfp, not VGG16.
Could someone help me, please? Why I am getting this error and how could I fix it?
OS:windows
Tensorflow and keras installed on CPU
A:
In the end, trying to install tensorflow and keras in a virtual environment solved the problem. I still don't know why the problem existed in the first place. Steps taken:
conda create --name vgg16project python # you can name it other than vgg16project
activate vgg16project
then install the other packages you need, such as pandas, seaborn, etc. Then install tensorflow and keras with pip:
pip install --upgrade tensorflow
pip install --upgrade keras
This simply solved it. I guess there must be a reason why it is recommended to use tensorflow and keras in a virtual environment.
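As a quick sanity check (a generic sketch, not specific to this setup), you can ask the running interpreter which packages it can actually see, which helps confirm that the virtual environment rather than the base install is active:

```python
import importlib.util

def is_installed(name):
    # find_spec returns None when the package is not importable
    # from the currently running interpreter / environment.
    return importlib.util.find_spec(name) is not None

# Package names here are just examples.
for pkg in ("tensorflow", "keras"):
    print(pkg, "available" if is_installed(pkg) else "missing")
```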
|
{
"pile_set_name": "StackExchange"
}
|
Imagine a job where your passion for local food, storytelling and social media merge. A job where friendly conversation and building community is built into the heart of the role, creating opportunity to make a difference in the lives of those you interact with. A job where your ideas and creativity are valued and encouraged as a way to continuously improve the business.
Sound like you? Join our award-winning team as a Social Media Coordinator at Valley Natural Foods. Qualified applicants will have the following:
B.S. in Marketing or related field with 1+ years previous professional social media experience, or
A.S. in Marketing or related field with 2+ years of previous professional social media experience, or
3+ years previous professional social media experience in each of the following areas:
This position will require occasional evenings/weekends, and availability for store events. Candidates who have education and 3+ years of experience, are multi-lingual, and have previous experience with technology management/systems implementation are strongly preferred.
We pride ourselves on offering a competitive benefits package including a gain share program and a 10% employee discount on groceries every day! This position is eligible to earn PTO. Positions may close without notice due to applicant volume. Please see website for a full list of benefits available.
|
{
"pile_set_name": "Pile-CC"
}
|
Hughes slams referee over N'Zonzi red card
Stoke City manager Mark Hughes was furious with the decision that saw Steven N'Zonzi sent off in the 1-0 defeat at Sunderland.
Stoke slipped to a fifth defeat in six Premier League games on Wednesday, thanks to Adam Johnson's tap-in in the 17th minute at the Stadium of Light.
The visitors had opportunities to level, with Ryan Shawcross seeing Vito Mannone pull off a superb stop from point-blank range shortly before half-time, while the centre-back hit the crossbar in the closing stages.
But Hughes was incensed that N'Zonzi was shown a second yellow card and a subsequent red in the 52nd minute, with referee Robert Madley deeming that the midfielder had tugged Jozy Altidore to the ground when the striker had broken free of the defence. N'Zonzi had received his first caution for hauling down the American in the first half.
"It was a performance of real character and desire," said Hughes. "Unfortunately on too many occasions this season we've been hurt by refereeing decisions that have impacted upon the game.
"Once again we're talking about a situation that I feel was a poor decision by the referee. It's difficult for him, the lad Altidore has gone down easily looking for an advantage. I felt on the night too much of that was going on, to be perfectly honest.
"The referee has to be able to look through that and understand what's happening but he's bought a challenge where there's very minimal contact.
"The lad's gone down easy, the referee deemed it necessary to give Steven a second yellow for that challenge, which is unbelievable in my opinion."
With the January transfer window due to shut on Friday, Hughes confessed it is unlikely he will add to his squad, despite fresh links with Sunderland's Lee Cattermole, who took no part in the game.
"I haven't had an update because I've been at the game," he added. "At the moment the possibility of getting more in isn't high I would suggest.
"But we'll see things happen quickly in the window. If we don't get anyone else in so be it. The group I've got showed they're an excellent group."
|
{
"pile_set_name": "Pile-CC"
}
|
Hi
Now that the GCSE rule has changed for the EYE L3 in the Childcare sector to accept F/Skills L2 in Maths and English, can anyone please advise on the following:
We have learners that have now completed their EYE qualification but only got a Maths D grade last June and again a D in the November re-sit. All learners have now completed their F/Skills L2.
Can anyone advise how I add this information to the learner ILR?
Do I add the F/Skills learning aim to the ILR and zero-fund it?
Will F/Skills be funded etc., as it is not part of the framework?
Do I code the GCSE on the ILR as D and achieved etc.?
I have forwarded this to the help desk, but someone out there may have already asked and knows the correct answer
|
{
"pile_set_name": "Pile-CC"
}
|
We never miss deadlines because we value our customers' time. We assure all our clients of the originality of their orders.
Plagiarism-Free
Our company devotes close attention to the uniqueness of the paper through using plagiarism-checkers.
Math Calculations
At the college level, mathematics courses take a turn toward the really complex, and that complexity relates to calculation problems that are massively lengthy and complicated. College calculus, math analysis, and the upper level coursework such as vector analysis and enumerative combinatorics will have calculation problems and problem sets that literally go on for pages.
The Common Issue
For students involved in these types of calculations problems, the common issue is typically computational or the mixing up of variables so that the proper solution cannot be obtained. And it means going back through the entire process to locate the error(s), if in fact they can even be found.
The Common Solution – EssayWriting.education
For many students, the solution lies in locating a highly skilled mathematician to provide some help. Such a mathematician can be found at EssayWriting.education immediately. We have mathematicians with Masters and Ph.D. degrees who can step in to locate the errors, solve the calculation problems, and give the client the explanation s/he needs to understand exactly what went wrong.
Doing Business with EssayWriting.education
A student may come to EssayWriting.education knowing that any help provided will be strictly confidential, customized, and exactly as ordered. In the case of math calculation problems and problem sets, once an order is placed, we do the following:
Consult with our mathematicians and identify the best person for order fulfillment
Establish an account portal for the client, so that direct contact between that client and his/her math expert can occur
Ensure that the client receives the problem solutions asked for and any explanations s/he may need.
Provide help at a reasonable cost
Ask the client for feedback to ensure that s/he is fully satisfied with the product(s) received.
Getting immediate and totally professional help is just a few clicks away – fill out your order now!
EXPERT ESSAY WRITING HELP FOR STUDENTS
The information you provide us goes no further than our security department. There is no need to worry that your personal data will appear elsewhere because of our service. Please see all the details on our Privacy Policy page.
|
{
"pile_set_name": "Pile-CC"
}
|
Q:
React-Native + Flex not responding to orientation change
I am writing a Universal iPhone/iPad application using React-Native. However I am struggling to render my view correctly when the orientation changes. Following is the source code for js file:
'use strict';
var React = require('react-native');
var {
Text,
View
} = React;
var CardView = require('./CardView');
var styles = React.StyleSheet.create({
container:{
flex:1,
backgroundColor: 'red'
}
});
class MySimpleApp extends React.Component {
render() {
return <View style={styles.container}/>;
}
}
React.AppRegistry.registerComponent('SimpleApp', () => MySimpleApp);
This is how it renders in Portrait (which is correct):
However, when the device is rotated, the red view does not rotate accordingly.
A:
The simplest way is:
import React, { Component } from 'react';
import { Dimensions, View, Text } from 'react-native';
export default class Home extends Component {
constructor(props) {
super(props);
this.state = {
width: Dimensions.get('window').width,
height: Dimensions.get('window').height,
}
this.onLayout = this.onLayout.bind(this);
}
onLayout(e) {
this.setState({
width: Dimensions.get('window').width,
height: Dimensions.get('window').height,
});
}
render() {
return(
<View
onLayout={this.onLayout}
style={{width: this.state.width}}
>
<Text>Layout width: {this.state.width}</Text>
</View>
);
}
}
A:
It's pretty simple to respond to orientation changes in React Native. Every view in React Native has a listener called onLayout which gets invoked upon orientation change. We just need to implement this. It's better to store the dimensions in a state variable and update them on each orientation change so that re-rendering happens after the change. Otherwise we would need to reload the view to respond to the orientation change.
import React, { Component } from "react";
import { StyleSheet, Text, View, Image, Dimensions } from "react-native";
var { height, width } = Dimensions.get("window");
export default class Com extends Component {
constructor() {
console.log("constructor");
super();
this.state = {
layout: {
height: height,
width: width
}
};
}
_onLayout = event => {
console.log(
"------------------------------------------------" +
JSON.stringify(event.nativeEvent.layout)
);
this.setState({
layout: {
height: event.nativeEvent.layout.height,
width: event.nativeEvent.layout.width
}
});
};
render() {
console.log(JSON.stringify(this.props));
return (
<View
style={{ backgroundColor: "red", flex: 1 }}
onLayout={this._onLayout}
>
<View
style={{
backgroundColor: "green",
height: this.state.layout.height - 10,
width: this.state.layout.width - 10,
margin: 5
}}
/>
</View>
);
}
}
A:
For more recent versions of React Native, orientation change doesn't necessarily trigger onLayout, but Dimensions provides a more directly relevant event:
class App extends Component {
constructor() {
super();
this.state = {
width: Dimensions.get('window').width,
height: Dimensions.get('window').height,
};
Dimensions.addEventListener("change", (e) => {
this.setState(e.window);
});
}
render() {
return (
<View
style={{
width: this.state.width,
height: this.state.height,
}}
>
</View>
);
}
}
Note that this code is for the root component of an app. If using it deeper within the app, you will need to include a corresponding removeEventListener call.
|
{
"pile_set_name": "StackExchange"
}
|
Introduction {#S1}
============
Insulinoma is a very rare neuroendocrine tumor with a reported incidence of 0.5--5 per million person-years. It is also the most common cause of hypoglycemia associated with endogenous hyperinsulinemia ([@B6]). Clinical clues suggest that insulinoma continues to be diagnosed based on the physician's recognition of the presence of hypoglycemic symptoms, such as sweating, hunger, tremors, and palpitations. When the relationship between symptoms and possible hypoglycemia is missed, in most clinical settings, the blood glucose levels are not checked. In addition, hypoglycemic symptoms are varied, lack specificity, and mimic many common neuropsychiatric disorders, such as epilepsy ([@B3]).
Complex partial seizures are characterized by an aura, impaired consciousness, automatisms, and sometimes psychopathology, also known as temporal lobe seizures (TLE) or psychomotor seizures. Sometimes they are easily confused with metabolic diseases, such as hypoglycemia ([@B3]).
In this study, we report a case of insulinoma with impaired consciousness and behavioral disorders, which resulted from hypoglycemia and which were misdiagnosed as complex partial seizures based on the normal fasting blood glucose and glycosylated hemoglobin levels prior to admission. In clinical practice, when complex partial seizures are atypical, the differential should be broadened beyond epilepsy to actively seek other causes, including extracranial diseases such as insulinoma.
Case Presentation {#S2}
=================
A 64-year-old male patient was referred to our department at the Affiliated Hospital of Jining Medical University for management of refractory seizures. The patient first visited the hospital in 2013 and presented with disturbance of consciousness and behavioral abnormalities with no obvious family or social history. The patient also suffered from palpitations, unclear vision, or dizziness for about 3--5 min, and these were later characterized by impaired consciousness and automatisms. The patient would also remain unresponsive for up to 30--60 min before he would recover spontaneously with no distortion of the commissures. Based on these symptoms, the patient was initially diagnosed with epilepsy at a different hospital and subsequently received regular treatment with oxcarbazepine, an antiepileptic medication. Despite the use of different antiepileptic drugs (AEDs) the patient continued to have 3--5 attacks per year.
At admission, a physical examination, neurological examination, brain magnetic resonance imaging (MRI), and electroencephalogram (EEG) showed no obvious abnormalities. Laboratory test results revealed normal serum fasting glucose levels (5.3 mmol/l), glycosylated hemoglobin levels (5.1%), and ammonia levels (\<8.7 umol/l). Sodium, potassium, chloride, calcium, magnesium, and phosphorus levels showed no obvious abnormalities. Continuous glucose monitoring (CGM) also showed no abnormalities during the first 3 days after admission. However, on the fifth day after admission, the finger prick test revealed a blood glucose level of 2.5 mmol/L before lunch, and this was lower than the normal value despite the patient not having any hypoglycemia-related symptoms, such as palpitations, sweating, and hunger. Based on Whipple's triad \[episodic hypoglycemia (\<50 mg/dL); symptoms of hypoglycemia such as confusion, anxiety, paralysis, stupor, and coma; and reversal of symptoms with glucose administration\], and the seizure-like symptoms, we considered that an endocrine disease needed to be ruled out. A laboratory examination showed that cortisol rhythm (8 am, 4 pm, and 0 am) was 13.37, 2.16, and 10.68 ug/dl, and ACTH rhythm (8 am, 4 pm, and 0 am) was 12.86, 4.97, and 9.10 pmol/L, respectively, and these were all within the normal range. The patient was then examined for insulinoma. Islet cell antibody was weakly positive, anti-glutamate decarboxylase antibody was 0.62 U/ml (0--1 U/ml), and the hunger test was performed by allowing the patient to fast after dinner and having his blood glucose levels monitored every 2 h. Laboratory examination of the blood glucose levels revealed that fasting glucose (6 am) was 2.9 mmol/L and serum insulin (6 am) was 17.47 uIU/ml (reference range 2.6--24.9 uIU/ml); fasting glucose (11:30 am) was 2.0 mmol/L, serum insulin (11:30 am) was 13.50 uIU/ml, and insulin/blood glucose \> 0.4.
An abdominal CT scan showed a 1.5 cm mass in the tail of the pancreas ([Figures 1A,B](#F1){ref-type="fig"}). However, since the mass was not located at the same level as the pancreas, and the CT value of the lesion (44Hu) was relatively similar to the CT value of the pancreatic parenchyma (35Hu), this initially led to the tumor being misdiagnosed. Further pancreatic CT showed significant enhancement of the nodular arterial phase with a slight withdrawal from the delayed phase ([Figures 1C--E](#F1){ref-type="fig"}). An MRI-enhanced scan of the upper abdomen showed a slightly higher signal of T1WI in the tail of the pancreas ([Figure 2A](#F2){ref-type="fig"}) and a high signal of T2WI ([Figure 2B](#F2){ref-type="fig"}) with a diameter of about 1.2 cm and showing mild progressive enhancement ([Figures 2C--F](#F2){ref-type="fig"}). The CT and MRI examinations suggested an islet cell tumor. The size and form of the liver, gallbladder, and spleen were normal. The patient was subsequently transferred to the department of hepatobiliary surgery to undergo surgical removal of the tumor. A histological analysis confirmed the excision of a benign pancreatic insulinoma with a Ki-67 labeling index of 1--2%, which indicated a low risk of malignant behavior. The tumor was positive for CK+(CKLow), CgA(+), CD56(+), Syn(+), insulin(+), and β-catenin(+) ([Figure 3](#F3){ref-type="fig"}). CK is the main marker of simple and glandular epithelium; SYN, CgA, and CD56 are used to identify tumors arising from neural and neuroendocrine tissues; and positivity for insulin provides tumor-specific confirmation of the disease. Following surgical removal of the tumor, the patient's blood glucose level normalized, and no recurrence of seizures was noted.
{#F1}
{#F2}
{#F3}
Discussion {#S3}
==========
In this study, we reported a case of insulinoma presenting as a refractory seizure disorder in adulthood. The patient experienced the first attack 4 years ago. The atypical features of the attacks were inconsistent with complex partial seizures and poor response to treatment, which prompted an inpatient assessment. This case highlights the importance of considering hypoglycemia in atypical and refractory seizures.
Hypoglycemia is a well-recognized cause of acute symptomatic seizures. Several cases of patients with recurrent seizures due to insulinoma-associated hypoglycemia have been reported ([@B7]).
Insulinomas are the most common hormone-secreting tumors of the gastrointestinal tract and were first described by Nicholis at autopsy in 1902. The incidence is 0.5--1 cases/million/year. The diagnostic criteria rely on inappropriate insulin secretion (0.30 pmol/l), which is consistent with hypoglycemia (2.2 mmol/l), and subsequent tumor localization ([@B8]). Islet cell tumors can be categorized as functional or non-functional based on the presence or absence of endocrine function. Non-functional islet tumors account for about 65% of islet cell tumors; however, they lack specificity in clinical manifestations, and metastasis has already occurred in most cases at diagnosis. Functional islet cell tumors are rare and can be divided into insulinoma, gastrinoma, glucagonoma, vasoactive intestinal peptide tumors, and so on, with the most common being the insulin-producing insulinoma ([@B4]). The autonomous production of excessive amounts of insulin, which results in hypoglycemia, is the classical feature of this tumor, and β-cell adenomas cannot decrease insulin secretion in the presence of hypoglycemia. The most critical diagnostic criterion is the detection of an inappropriately elevated plasma insulin level under conditions of hypoglycemia. The diagnosis of insulinoma therefore requires confirmation of the presence of hypoglycemia with evidence of inappropriate insulin secretion and the identification of a pancreatic mass by medical imaging or angiography ([@B11]).
Delays to a diagnosis can be caused by a number of factors ([@B12]). For example, insulinoma can exhibit various neurogenic and neuroglycopenic symptoms. These symptoms also mimic neuropsychiatric symptoms, which include unconsciousness, confusion, seizures, personality change, and bizarre behavior in most patients ([@B2]). In addition, over half of the patients with these symptoms are initially misdiagnosed with neuropsychiatric disorders, such as epilepsy. Presentation is usually insidious with neuroglycopenia and fasting hypoglycemia. Normal insulin levels therefore do not rule out the disease because absolute insulin levels are not elevated in insulinoma patients. As this study revealed, this might lead to a delay in diagnosis as other neuropsychiatric diagnoses are first considered. Diagnostic delays are therefore related to the fact that the symptoms are similar to many common neurological and psychiatric disorders. A study of 1,067 insulinoma patients showed that most patients developed neuropsychiatric symptoms, including loss of consciousness, unresponsiveness, delirium, deep coma, dizziness, visual disturbances, and epilepsy. In addition, insulinomas secrete insulin, causing temporary fluctuations in blood sugar levels. Blood glucose levels in patients with insulinoma can therefore sometimes appear to be normal ([@B10]). Symptoms of hypoglycemia include neuroglycopenia (confusion, lethargy, bizarre behavior, palpitations, personality change, decreased motor activity, transient neurological deficit, and gradual decline in cognition) and autonomic symptoms (sweating, tremor, palpitations, anxiety, weakness, and visual disturbance).
In a retrospective study of 59 patients with histologically confirmed islet cell adenomas, the interval between the onset of symptoms and diagnosis ranged from 1 month to 30 years with a median of 24 months. A significant proportion (39%) of the patients was originally diagnosed with a seizure disorder. Furthermore, all the patients had symptoms of neuroglycopenia, and three quarters of them reported a relief of symptoms with food ingestion ([@B5]). Despite these findings, only rare cases of insulinoma presenting with seizures have been reported in previous studies ([@B1]; [@B9]). Therefore, neuroglycopenia should be considered in all patients with refractory seizures. In our case, because typical symptoms of hypoglycemia, such as sweating and a feeling of hunger, were absent and the fasting blood glucose and glycosylated hemoglobin were normal at admission, the condition was misdiagnosed as complex partial seizures. Furthermore, during the diagnostic workup, the atypical CT appearance of the patient's pancreatic lesion almost led to a missed diagnosis of the tumor.
The patient's clinical manifestations, hypoglycemia, insulin/blood glucose \> 0.4, pancreatic enhancement CT, and upper abdominal enhanced MRI results supported the diagnosis of an islet cell tumor. Blood sugar levels returned to normal after surgical removal of the tumor, and the symptoms completely disappeared. During the 1 year of follow-up, the patient did not receive any treatment, and there were no symptoms or attacks.
In conclusion, the patient experienced episodes of hypoglycemic seizures induced by a pancreatic insulinoma. This highlights the need for careful reassessment of all atypical and refractory seizures.
Data Availability Statement {#S4}
===========================
The datasets generated for this study are available on request to the corresponding author.
Ethics Statement {#S5}
================
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this manuscript.
Author Contributions {#S6}
====================
ZQ and DL collected the case and wrote the manuscript. JM, YH, and PX acquired and analyzed the imaging data. AZ reviewed and approved the final manuscript.
Conflict of Interest {#conf1}
====================
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
**Funding.** This work was supported by grant ZR2017MH057 from the Natural Science Foundation of Shandong Province, China (AZ) and grant \[2016\]56--120 from the Science and Technology Development Project of Jining City, Shandong Province, China (ZQ).
[^1]: Edited by: Jacques Epelbaum, Institut National de la Santé et de la Recherche Médicale (INSERM), France
[^2]: Reviewed by: Nils Lambrecht, VA Long Beach Healthcare System, United States; Alessio Imperiale, Université de Strasbourg, France
[^3]: ^†^These authors have contributed equally to this work
[^4]: This article was submitted to Neuroendocrine Science, a section of the journal Frontiers in Neuroscience
|
{
"pile_set_name": "PubMed Central"
}
|
Welcome to USS Bonefish Base!
USS Bonefish Base is a chapter of the United States Submarine Veterans, Inc. (USSVI).
USSVI has over 13,000 members nationwide. We meet on the 4th Saturday of each month at Zacatecas Café, 3767 Iowa Avenue, Riverside, CA 92507. Meetings begin at 1200, but come early and share a sea story or two. We usually enjoy a nice lunch together after the meeting. Visitors are welcome to attend.
Bonefish Base is located in Redlands, California, but our members reside all over the Inland Empire. We are always seeking new members in our area and if you are a veteran, retiree, or active duty member who is “Qualified in Submarines” we'd be honored to have you Join Our Base!
We chose our name after the Gato Class submarine USS Bonefish (SS-223) and Barbel Class submarine USS Bonefish (SS-582). You can read about their history here.
|
{
"pile_set_name": "Pile-CC"
}
|
Effect of phosphatidyl-L-serine and vinculin on actin polymerization.
The effects of phosphatidyl-L-serine (PS) and/or vinculin on actin polymerization are examined by spectrophotometry, viscometry and electrophoresis. Actin polymerization is inhibited by PS alone and stimulated by PS and vinculin. The results suggest that actin does not directly adhere to cell membrane and that vinculin is a protein which is involved in structures connecting actin microfilaments to cell membranes.
|
{
"pile_set_name": "PubMed Abstracts"
}
|
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <TextView
        android:id="@+id/picture_tv_photo"
        android:layout_width="match_parent"
        android:layout_height="45dp"
        android:background="@drawable/picture_item_select_bg"
        android:gravity="center"
        android:text="@string/picture_photograph"
        android:textColor="@color/picture_color_53575e"
        android:textSize="14sp" />

    <View
        android:id="@+id/top_line"
        android:layout_width="match_parent"
        android:layout_height="0.5dp"
        android:layout_below="@id/picture_tv_photo"
        android:background="@color/picture_color_e" />

    <TextView
        android:id="@+id/picture_tv_video"
        android:layout_width="match_parent"
        android:layout_height="45dp"
        android:layout_below="@id/top_line"
        android:background="@drawable/picture_item_select_bg"
        android:gravity="center"
        android:text="@string/picture_record_video"
        android:textColor="@color/picture_color_53575e"
        android:textSize="14sp" />

    <TextView
        android:id="@+id/video_line"
        android:layout_width="match_parent"
        android:layout_height="0.5dp"
        android:layout_below="@id/picture_tv_video"
        android:background="@color/picture_color_e" />

    <TextView
        android:id="@+id/bottom_line"
        android:layout_width="match_parent"
        android:layout_height="6dp"
        android:layout_below="@id/video_line"
        android:background="@color/picture_color_e" />

    <TextView
        android:id="@+id/picture_tv_cancel"
        android:layout_width="match_parent"
        android:layout_height="45dp"
        android:layout_below="@id/bottom_line"
        android:background="@drawable/picture_item_select_bg"
        android:gravity="center"
        android:text="@string/picture_cancel"
        android:textColor="@color/picture_color_53575e"
        android:textSize="14sp" />
</RelativeLayout>
|
{
"pile_set_name": "Github"
}
|
UNESCO Confucius Prize for Literacy
The UNESCO Confucius Prize for Literacy recognizes the activities of outstanding individuals, governments or governmental agencies and non-governmental organizations (NGOs) working in literacy serving rural adults and out-of-school youth, particularly women and girls. The Prize was established in 2005 through the support of the Government of the People's Republic of China in honour of the great Chinese scholar Confucius. It is part of the International Literacy Prizes, which UNESCO awards every year in recognition of excellence and inspiring experiences in the field of literacy throughout the world. The Confucius Prize offers two awards of US$20,000 each, a medal and a diploma, as well as a study visit to literacy project sites in China.
The Prize is open to institutions, organizations or individuals displaying outstanding merit in literacy, achieving particularly effective results and promoting innovative approaches. The selection of prizewinners is made by an International Jury appointed by UNESCO’s Director-General, which meets in Paris once a year. The Prize is awarded at an official ceremony held for that purpose at UNESCO Headquarters in Paris on the occasion of International Literacy Day (8 September).
Recipients of the Prize by year
2017
AdulTICoProgram (Colombia) for teaching digital competencies to seniors
The Citizens Foundation (Pakistan) for its Aagahi Literacy Programme for Women and Out-of-School Girls
FunDza (South Africa) for its readers and writers project to develop a culture of reading and writing for pleasure through an online platform
2016
The South African department of Basic Education's Kha Ri Gude Mass Literacy Campaign
The Jan Shikshan Sansthan organization in Kerala, India for its programme, Vocational and Skill Development for Sustainable Development
The Directorate of Literacy and National Languages in Senegal for its ‘National Education Programme for Illiterate Youth and Adults through ICTs’
2015
Sonia Álvarez, Juan Luis Vives School of Valparaiso, a school in Chile, is recognized for its programme ‘Literacy for People Deprived of Liberty’.
Svatobor, an association in Slovakia, is honoured for its ‘Romano Barardo’ programme, which helps the Roma overcome social exclusion and enjoy their basic human rights.
Platform of Associations in Charge of ASAMA and Post-ASAMA, an NGO in Madagascar that developed a comprehensive approach to achieve the Millennium Development Goals (MDGs)
2012
Department of Continuing and Adults Education – Programme of Non-Formal and Continuing Education – (Bhutan)
Transformemos Foundation Directors María Aurora Carrillo and Rodolfo Ardila – Interactive System Transformemos Educando – (Colombia)
(Honourable Mention) Illiteracy Eradication Directorate of the Ministry of Education – Literacy and Post-literacy programme: Means of empowerment and socio-economic integration of women in Morocco – (Morocco)
2011
Room to Read – (United States)
Collectif Alpha Ujuvi – (Democratic Republic of Congo)
(Honourable Mention) Dr. Allah Bakhsh Malik, Punjab, (Pakistan)
2010
Governorate of Ismailia – (Egypt)
Coalition of Women Farmers (COWFA) – (Malawi)
Non-Formal Education Centre – (Nepal)
2009
SERVE Afghanistan – (Afghanistan)
Municipal Literacy Coordinating Council – (Philippines)
2008
Adult and Non-Formal Education Association (ANFEAE) (Ethiopia)
Operation Upgrade (South Africa)
2007
Family Re-orientation Education and Empowerment (FREE) (Nigeria)
Reach Out and Read (United States of America)
2006
Ministry of National Education of the Kingdom of Morocco, for its innovative national literacy initiative
Directorate of Literacy and Continuing Education of Rajasthan, for its Useful Learning through Literacy and Continuing Education Programme in Rajasthan (India)
See also
International Literacy Day
List of international literacy prizes
Noma Literacy Prize
UNESCO King Sejong Literacy Prize
UNESCO Nadezhda K. Krupskaya literacy prize
United Nations Literacy Decade
References
External links
Category:Literacy-related awards
Category:UNESCO
Category:UNESCO awards
Category:Confucius
|
{
"pile_set_name": "Wikipedia (en)"
}
|
Quick Start
When the countdown starts, wait until the 2 begins to disappear, then press and hold the 2 button. The 2 must still be visible, but fading.
This will give you a boost at the start of the race.
Alternate Title Screen and Ending Screen
Complete every single cup with a first place to unlock two new Title Screens and one new Ending Photo.
Get a Star by Your Name
Playing extraordinarily well in Grand Prix races can earn a Star Ranking of up to three stars on the end results instead of the standard "C, B or A." Collecting these stars in all Grand Prix cups adds a Star Icon next to your name in WiFi Races and on the Results Screen for your opponents to see. (Note: if you get a three-star rating in every Grand Prix except one, in which you got a one-star rating, you will have a one-starred nickname until you manage to upgrade that rank.)
One starred nickname - Get a 1-star rating in every Grand Prix.
Double starred nickname - Get a 2-star rating in every Grand Prix.
Triple starred nickname - Get a 3-star rating in every Grand Prix.
More than one person can be the same character
First, make P1 choose the character you both want to be. Then unselect the character and move off of it. Move P2 over, and select the character. Go back one screen, then go into the character select again. You should both be on the same character. Both of you select the character. Do whatever you want from there on.
Note: In the character select screen and options, whenever one of you does an animation, both of you will. Also, if P2 chooses a different kind of vehicle than P1, P1 will look strange. Don't worry, this won't actually happen in the game.
Change Mii Racer weight class
Go into the Mii Channel and change your Mii driver's weight, which has an in-game impact: it changes your weight class and gives you access to different cars.
Quick recovery from collision
When you fall into an obstacle, press [2] precisely when you land. You will get a short boost to start moving again.
Quick recovery on a motorcycle after being hit
When on a motorcycle, after you are hit or run into an item pull a wheelie to increase your speed quickly.
Boost
To get a small boost while racing, get on a long straight road; you cannot turn while doing this trick. If done correctly, you will see several blue streaks fly out behind you. Keep driving straight and you will see a blue force field in front of you. You will then get a boost for three or four seconds. Note: This does not always work.
Automatic Hit
To score an automatic hit on anyone in a race, place a banana or fake item box immediately after you land from a DK Cannon. They will be forced to land on it.
Change Mii driver's class
Enter the Mii Channel and change the weight of your Mii. This will cause your Mii driver in Mario Kart Wii to change to the corresponding weight class and give you access to the corresponding new vehicles.
Coconut Mall: Fast time
To get the Expert Staff Ghost on the Coconut Mall time trial, use the Wild Wing medium kart. Take a right into a store on the right side after the first escalator then use a mushroom. You will see a Delfino tree person at the door.
DK Summit: Shortcut
After you shoot out of the cannon and come up to the turn, go off the jump toppers on the left. This should send you soaring through the curve and give you a good lead.
Koopa Cape: Hidden items
After making the first jump, keep going and you will notice a few Goombas walking in place. Fire a shell at one to reveal either nothing or a Super Mushroom. You can also run into one if in the status of a Mega Mushroom, Invincibility Star, or Bullet Bill.
Mushroom Gorge: Hidden items
There are some Goombas around that contain Super Mushrooms or absolutely nothing.
|
{
"pile_set_name": "Pile-CC"
}
|
Agile Performance Standards
Andy Jordan is President of Roffensian Consulting S.A., a Roatan, Honduras-based management consulting firm with a comprehensive project management practice. Andy always appreciates feedback and discussion on the issues raised in his articles and can be reached at andy.jordan@roffensian.com. Andy's new book Risk Management for Project Driven Organizations is now available.
When we think about measuring the performance of agile projects we typically think of concepts like burndown charts and velocity. Those work well at the project level, but they aren’t going to satisfy the needs of an organization looking to leverage agile to deliver business results. In those instances, we ultimately measure success in terms of whether the goals have been met (market share, revenue, etc.). But those don’t work as a project performance metric because the time horizons are too long – you may not know whether the revenue targets have been met for several quarters or even years.
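Those project-level measures are simple arithmetic over sprint history. A minimal sketch of how velocity and a burndown projection might be computed — the sprint figures and backlog size below are invented illustration data, not from any real project:

```python
# Sketch: velocity and a simple burndown projection from sprint history.
# All numbers here are hypothetical example data.

completed_points = [21, 18, 25, 20]  # story points completed in each past sprint

# Velocity: average story points completed per sprint.
velocity = sum(completed_points) / len(completed_points)

# Burndown projection: how many sprints remain at the current velocity.
backlog_points = 105                 # story points left in the release backlog
sprints_remaining = backlog_points / velocity

print(f"velocity = {velocity:.1f} points/sprint")          # → 21.0
print(f"estimated sprints remaining = {sprints_remaining:.1f}")  # → 5.0
```

The projection is only as good as the assumption that velocity stays stable, which is exactly why such metrics say little about whether the business goals themselves will be met.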
So how do we find metrics that work for the business and project teams, and how do we define appropriate standards for those metrics? The starting point has to be the consideration of how success is defined. How are we going to establish whether a project succeeds in business terms without waiting for the financial measures to be established at some point in the future? We require the use of proxies for those financial measures and I believe the following are appropriate in Agile:
> Satisfaction – how satisfied are our customers and employees with the outputs of the project and the process for delivering those outputs? For customers this is an indicator of their willingness to purchase and to remain loyal, for employees it is an indicator of
|
{
"pile_set_name": "Pile-CC"
}
|
If you use salesforce.com Enterprise or Developer edition, life just got easier. Spanning Salesforce 2.0, from Spanning Partners (Charlie Wood), is a free service that lets you track new and updated leads, opportunities, etc. in salesforce.com with RSS. By the end of this year, CRM and other enterprise applications that don't offer standard RSS feeds will be a big step behind... [via Dave Winer]
|
{
"pile_set_name": "Pile-CC"
}
|
The Arsenal boss has always publicly defended his players, in stark contrast to Mancini who has criticized his players in front of the media on occasion this season. However, the Italian believes his willingness to motivate his players through the press symbolizes a will to win that is inherently lacking at the Emirates Stadium.
"I'm not Arsene Wenger. We're different. I want to win," Mancini told The Guardian. "I think every player should be strong enough to take his responsibility and, like this, you can improve.
"You don't improve if you have a manager saying 'ah, don't worry, you made a mistake but it doesn't matter'."
Mancini also suggested that continuity is the key to a club’s success, citing the example of Sir Alex Ferguson at Manchester United and questioning Chelsea’s decision to sack Carlo Ancelotti in 2011.
"For me, Carlo [Ancelotti] was the strange one," he said. "Carlo is one of the best managers in the world for me. He won the league and the FA Cup and then they sacked him.
"It's difficult for a club that change every year, every two years. [Sir Alex] Ferguson's a totally different situation because he started to work for United in a different time. Now he's like a seat in the stadium, the grass on the pitch. He's part of United."
When speaking about his own role, the 48-year-old stated his desire to continue working in the Premier League, describing it as the place "every manager wants to be".
"I want to continue my work," he continued. "I always wanted to work in England. I have a good feeling here. There might not be 100 restaurants but I have no problem with it. I like to go out on my bike. That's when I do my thinking. Two or three hours on the roads. That's when you get time and you can think without problems.
"In Italy, the press is different because all the journalists think they are all managers. Not only the journalists, in fact. We have 55 million football managers in Italy. In England it's different. England is the place where every manager wants to be, in front of 40 or 50,000 people every week. It's beautiful."
|
{
"pile_set_name": "Pile-CC"
}
|
INDIANAPOLIS -- A not guilty plea for the bail bondsman accused of brutally murdering two teens.
Kevin Watkins appeared at an initial hearing on Wednesday.
Satori Dionne Williams, 16, and his friend, 15-year-old Timmee Jackson, were last seen around 8 p.m. on Christmas Eve.
Jackson's family said he was a freshman at Thomas Carr Howe Community High School.
Amber Partlow, Williams' mother, said she began to worry Christmas morning when her son still hadn't returned home. She told police she searched for him throughout her neighborhood, eventually ending up at a home on the 5900 block of 23rd Street, where she knew her son had been having problems with residents.
Dionne Williams, 16, as seen in a family photo.
According to court documents, Kevin Watkins lived at the home and filed a burglary report Dec. 19. Watkins reportedly suspected Williams of the burglary.
Williams' family said they don't think he knew Watkins.
When she arrived at Watkins' home, Partlow said she saw a large amount of blood on the front step, as well as in the grass and on leaves in the yard. She then called police.
An IMPD officer arrived to find blood trailing around the west side of the house. Blood was also found on the back bumper of Watkins' SUV, as well as the rear doors. Watkins told police he didn't know anything about the blood.
The blood reportedly turned into two trails on the west side of the residence. One trail led behind the garage of a vacant property next door. The other went southeast toward a tree line, then met back up with the other trail and ended behind the same garage.
Police executed a search warrant and found even more items covered in blood: a plastic rake with "a large amount of what appeared to be blood in it;" and a yellow dash light along with a large wad of gray duct tape.
Police also found "pieces of what appeared to be brain matter" just off of the sidewalk in front of Watkins' house.
Watkins voluntarily went to IMPD's homicide office, but then asked for an attorney and declined to give a statement.
A further search of Watkins' vehicle turned up "blood in just about every passenger compartment of the vehicle," investigators said. Police also found an outfit matching one described by one of the victim's families saturated in blood. Police said pieces of bone and brain matter were found on the clothing. A fingertip apparently belonging to an "African-American human" was also found.
At the "Watkins Bail Bonds" business owned by Watkins at 6001 Massachusetts Avenue, police reportedly found bloody clothing and a handgun-shaped BB gun inside a dumpster.
Witnesses told police Williams had been carrying a crowbar and a BB gun at the time they saw him.
Surveillance video obtained by police reportedly captured Watkins' vehicle pulling into the rear of his business around 8:37 p.m. on Christmas Eve. The video shows Watkins hanging a black garment over the fence. A little while later, Watkins removes the garment and drives his vehicle to a nearby shopping center. Around 10:41 p.m., Watkins returned to his business.
Seconds later, police said the camera captured a white Ford Expedition pulling in behind Watkins' blue Chevrolet SUV. A driver steps out of the Ford and appears to shake hands with Watkins. They then separate to their individual vehicles. The second driver is eventually seen carrying something to his Ford, before driving off. Watkins follows shortly thereafter.
Surveillance video from around 3:45 a.m. Christmas morning showed Watkins taking bags from his SUV and putting them in the dumpster, along with a pair of pants. Watkins is shown carrying a shovel at one point, but no shovel was recovered.
Based on their findings, police arrested Kevin Watkins on preliminary charges of murder for the deaths of Williams and Jackson. He was being held at the Marion County Arrestee Processing Center without bond.
Watkins next pretrial hearing is set for Feb. 16.
|
{
"pile_set_name": "OpenWebText2"
}
|
The HLKES-2: Revision and Evaluation of the Health Literacy Knowledge and Experiences Survey.
Low health literacy impacts individual health and the health care system. The Health Literacy Knowledge and Experience Survey (HLKES) was created to evaluate preparedness of nurses to provide health literate care. However, the instrument was developed a decade ago and needs revision. The purpose of this study was to update and shorten the HLKES into a feasible, valid, and reliable instrument. The HLKES was refined into a 14-item instrument (10 knowledge questions and four experience questions). Expert review was obtained. Face validity was assessed, and pilot and field testing with students was conducted. Scale content validity index was 0.95, and individual questions demonstrated appropriate item difficulty and discrimination. Cronbach's alpha coefficient was .565 for the 10 multiple choice questions and .843 for the four Likert-type questions, indicating good reliability. A reliable and valid HLKES-2 was developed to evaluate health literacy knowledge and experiences in a contemporary setting. [J Nurs Educ. 2019;58(2):86-92.].
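The reliability figures reported above are Cronbach's alpha coefficients. A minimal sketch of how that coefficient is computed from item scores — the score matrix below is invented toy data, not the study's:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: four Likert-type items answered by six respondents.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(scores), 3))
```

Higher alpha means items covary strongly relative to their individual noise; values around .8 or above, like the .843 reported for the Likert-type questions, are conventionally read as good internal consistency.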
|
{
"pile_set_name": "PubMed Abstracts"
}
|
Reconsidering governance in Africa: Why our obsession with copying and pasting western institutions causes more harm than good
THE LAST WORD | Andrew M. Mwenda | If you follow debate on Africa anywhere in the world, everyone will tell you that the main problem with our countries is governance. Yet this claim is new, picked up from the World Bank’s World Development Report of 1989. Now it has entered the lexicon of politics as a religion; that is the very reason we need to scrutinise it. In the 1960s and 70s, the dominant explanation was that African countries were poor because of their integration into the world economy as producers of unprocessed raw materials.
We African elites have learnt about the governance principles of the western world largely through books, media and in class. Often these sources give us the governance ideal, which, while reflecting an aspect of reality in the West, do not give the full practical application of the ideal. The actual practical politics of the West diverges quite significantly from the ideal.
Let us also remember that the governance strategies of the West evolved organically out of their own experience – their political and social struggles. These struggles themselves were rooted in a particular culture and were nourished by nutrient norms, values, habits and shared mentalities. So the governance strategies, principles and institutions of the West reflect a particular historic experience that cannot be universalised.
To now transplant them from their habitat and treat them as universal has two major problems. First being neophytes, we seek to transplant the ideal, not the actual practice. We are blind to or ignorant of the myriad accommodations and adjustments Western societies have to make daily for the ideal to work.
Second, we superimpose this governance ideal on a society with entirely different social structures, history, culture, norms, values and shared mentalities. We then imagine such a transplant will work just fine. Just imagine taking the governance strategies, principles and institutions of the Buganda kingdom of 1880 and superimposing them on the people and society of the United States of America today, so that Americans had to travel to Uganda to learn, in Luganda, how to manage their own industrial society. How would they work?
Karl Marx argued that every society is built on an economic base – the hard reality of human beings who must organise their activities to feed, clothe and house themselves. That organisation will differ vastly from society to society and from epoch to epoch. It can be pastoral or built around hunting or grouped into handicraft units or structured into a complex industrial whole.
For Marx, whatever form in which people solve their basic economic problem, society will require a “superstructure” of noneconomic activity and thought. It will need to be bound together by customs or laws, supervised by a clan or government and inspired by religion or philosophy.
Marx argued that the superstructure cannot be selected randomly. It must reflect the foundation on which it is raised. For example, no hunting community would evolve or could use the legal framework of an industrial society; and similarly, no industrial community could use the conception of law, order and government of a primitive hunting village.
|
{
"pile_set_name": "OpenWebText2"
}
|