text | meta |
|---|---|
After exercise, the 12 healthy men participating in the study were given a cup of a sweet glucose drink and tests were done to see how quickly their bodies moved this glucose from the blood to be stored in other parts of the body, like the muscle.
Those who ate breakfast before exercise cleared glucose from their blood quickest, the team found. The researchers discovered that eating breakfast before exercising not only increased the rate at which the body burned carbohydrates during exercise, but also increased the body's ability to digest and metabolise carbohydrates eaten throughout the day.
The study also showed the carbs burned were not only coming from breakfast, but from the glycogen stored in their muscles.
The researchers thought breakfast followed by exercise may kick-start the body's carbohydrate processing and "prime" the body for quicker storage of nutrients from the meals that follow.
While the research team is continuing to study the impact of eating before exercise, for now it's interesting to know that eating a bowl of cereal before you work out may be a great way to kick-start your metabolism for the day.
Why not try these three pre-workout breakfast ideas:
Wholegrains and milk: Just like in the study, why not fuel up on a wholegrain porridge made with milk or a plant-based milk? This combination is a great source of fibre and is low GI for slow-release energy. The milk or a fortified plant milk can also be a handy source of calcium and vitamin B12. As the weather warms up, overnight oats topped with fruit and a splash of milk is a tasty option too. Wholegrain cereal and milk is also a breakfast winner.
Wholegrain toast with nut butter: This is a high fibre start to the day. Nut butters are a good source of protein and healthy fats, which can lower the overall GI of a meal to keep you feeling fuller for longer. Choose a nut butter with no added sugar or salt.
Fruit smoothie: Blend up your favourite whole fruits, along with milk or plant milk. If you're not a morning person, you can prep the night before by pre-chopping your fruit so your brekkie drink is ready to blend and go in the morning. | {
"pile_set_name": "Pile-CC"
} |
77Se solid-state NMR of As2Se3, As4Se4 and As4Se3 crystals: a combined experimental and computational study.
(77)Se NMR parameters for three prototypical crystalline compounds (As2Se3, As4Se4 and As4Se3) have been determined from solid-state NMR spectra in the framework of an investigation concerning AsxSe(1-x) glass structure understanding. Density functional NMR calculations using the gauge including projector augmented wave methodology have been performed on X-ray and optimized crystal structures for a set of selenium-based crystals. These theoretical results have been combined with the experimental data in order to achieve a precise assignment of the spectral lines. This work and the high sensitivity of solid-state NMR to local order show that the structure of As4Se3 should be reinvestigated using state-of-the-art diffraction techniques. Calculations performed on several molecules derived from the crystal structures have demonstrated the limited effect of interlayer or intermolecular interactions on the isotropic chemical shifts. These interactions are therefore not responsible for the unexpected large chemical shift difference observed between these three systems that could mostly be attributed to the presence of short rings. | {
"pile_set_name": "PubMed Abstracts"
} |
Q:
CSS corner ribbon without rotate
There are many tutorials for making a corner ribbon, and all of them use transform/rotate 45deg. That makes the content inside the div (the font) rotate too, which I don't want. I want to make something like the picture below, where the font/symbol still stands upright.
I tried to make a triangle background, but I can't get it to look like what I want.
A:
@Dedi Ananto : Please take note of the following code:
<div class="arrow-right"></div>
.arrow-right {
width: 0px;
height: 0px;
border-top: 0px solid transparent;
border-bottom: 70px solid transparent;
border-left: 60px solid red;
}
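To get the upright symbol from the question on top of that triangle, one sketch (class names are mine, not from any library) is to make the triangle's container a positioning context and absolutely position the glyph over the red area; no transform/rotate is involved, so the symbol stays horizontal:

```html
<!-- Sketch: border-drawn triangle with an upright glyph on top. -->
<div class="ribbon"><span class="ribbon-label">&#9733;</span></div>

<style>
.ribbon {
  position: relative;      /* anchor for the absolutely positioned label */
  width: 0;
  height: 0;
  border-bottom: 70px solid transparent;
  border-left: 60px solid red;
}
.ribbon-label {
  position: absolute;
  left: -55px;             /* pull the label back over the red triangle */
  top: 5px;
  color: #fff;
}
</style>
```

The exact `left`/`top` offsets are guesses to be tuned to your triangle size and glyph.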
Hope this Helps..
Regards,
Karan
| {
"pile_set_name": "StackExchange"
} |
Advances in drug metabolism screening.
Developments in automation, analytical technologies and molecular biology are being exploited by drug metabolism scientists in order to provide enhanced in vitro systems for the study of the metabolic disposition of potential drug candidates. Routine investigation of factors such as metabolic stability and induction and inhibition of drug metabolizing enzymes is now preferred in the early stages of drug discovery. This, in turn, should provide a greater understanding of the underlying principles governing these processes and allow a greater role for drug metabolism in the design of new drug molecules. | {
"pile_set_name": "PubMed Abstracts"
} |
1. Field of the Invention
The invention relates to a shift control apparatus and shift control method for a vehicular automatic transmission, which enables suppression of a drop in rotational speed of an input shaft which occurs during a clutch-to-clutch downshift executed while a vehicle is decelerating.
2. Description of the Related Art
A shift control apparatus for a vehicular automatic transmission is known which, when executing a clutch-to-clutch downshift, executes shift hydraulic pressure control so as to reduce an apply pressure of a hydraulic friction device to be released, which was applied in order to achieve a predetermined speed before the downshift, while increasing an apply pressure of a hydraulic friction device to be applied in order to achieve a predetermined speed after the downshift. According to JP(A) 11-287318, for example, during the clutch-to-clutch downshift, feedback control is performed on the apply pressure of the hydraulic friction device to be applied so that a transmitted torque capacity of the hydraulic friction device to be applied becomes constant, i.e., so that a rotational speed of an input shaft of the automatic transmission increases at a constant rate.
In the aforementioned shift control apparatus for a vehicular automatic transmission, the engine speed drops during the clutch-to-clutch downshift when the vehicle is decelerating, and then increases again when the hydraulic friction device to be applied is applied. This combination of a drop followed by an increase in engine speed results in shift shock or a delay in the shift time. Also, fuel efficiency may be reduced if the drop in engine speed is large enough to require that the fuel supply be restarted.
To address this, it is conceivable to automatically suppress the drop in engine speed during the clutch-to-clutch downshift while the vehicle is decelerating, and thereby appropriately reduce or eliminate the shift shock or shift-time delay caused by that drop. It is also possible to appropriately reduce the adverse effect on fuel efficiency caused by fuel being supplied to the engine again due to a further drop in engine speed. However, doing so may result in shift shock occurring when there is little or no drop in engine speed. | {
"pile_set_name": "USPTO Backgrounds"
} |
AIRCRAFT ENGINEER AND AIRSHIPS
FIRST AERONAUTICAL WEEKLY IN THE WORLD. FOUNDED 1909
Editor
M. POULSEN
Managing Editor
G. GEOFFREY SMITH
Chief Photographer
JOHN YOXALL
Editorial, Advertising and Publishing Offices: DORSET HOUSE, STAMFORD STREET, LONDON, S.E.1
Telegrams: Truditur, Sefflit, London. Telephone: Waterloo 3333 (50 lines).
HERTFORD ST., COVENTRY.
Telegrams: Autocar, Coventry. Telephone: Coventry 5210.
GUILDHALL BUILDINGS, NAVIGATION ST., BIRMINGHAM, 2.
Telegrams: Autopress, Birmingham. Telephone: Midland 2971.
860, DEANSGATE, MANCHESTER, 3.
Telegrams: Iliffe, Manchester. Telephone: Blackfriars 4412.
26B, RENFIELD ST., GLASGOW, C.2.
Telegrams: Iliffe, Glasgow. Telephone: Central 4857.
SUBSCRIPTION RATES: Home and Canada: Year, £1 13 0; 6 months, 16s. 6d.; 3 months, 8s. 6d. Other Countries: Year, £1 16 0; 6 months, 18s. 0d.; 3 months, 9s. 0d.
No. 1515. Vol. XXXIII. JANUARY 6, 1938. Thursdays, Price 6d.
The Outlook
Resolutions
THIS is the time of year for good resolutions, but before they can be formulated it is usually necessary to take stock in order to see where one has failed in the past, no less than to derive from previous successes encouragement for further effort. At the present moment the expansion and re-equipment of the R.A.F. is the most vital preoccupation of the British aircraft industry. One may, perhaps, say that it is going on as well as could be expected, although the fact that Germany is believed to have produced 7,000 aero engines in 1937 and to be producing aircraft of all types at the rate of 400 per month gives no cause for complacency. At any rate, there is obviously nothing that can be done about it.
If one turns from military to civil aviation, there is good cause to be alarmed. We publish in this issue an article in which an American correspondent lifts the veil and reveals an activity on the other side of the Atlantic which makes our own efforts look lethargic. And last week Capt. Wilcockson, Imperial Airways' well-known pilot, expressed the view that in the matter of training of personnel, operating experience and instrument development and use, this country is three years behind the United States. Taken together, the two give one something to think about.
Losing a Lead
FROM the point of view of technical progress, military and civil aviation cannot be altogether segregated, developments in one field having their immediate repercussions in the other, although possibly with modifications and adaptations. It is now a good many years ago that Great Britain established a lead in flying-boat design and construction. That lead appears to be in very great danger of being lost, not because of inability among our designing staffs to produce the world's best, but through lack of support. Until Imperial Airways placed the order for 28 Empire flying boats with Short Brothers, no British firm had had any great encouragement to develop civil flying boats. The Air Ministry ordered a few boats at a time, but could not seem to make up its mind about what it wanted, and changes in policy occurred which did not help towards continuity of effort.
Without regarding as other than ambitious projects which may or may not be built some day, the two American designs for a 550-ft.-span flying boat and a 500-passenger one, there is sufficient actual planning, building and testing going on in America to show how much in earnest are our cousins on the other side. Glenn Martin is developing a 100-passenger boat of 188 ft. span, intended for transoceanic commercial work; his firm has already built and flown the 157-ft.-span boat for Russia described and illustrated in Flight recently. Martin, Boeing and Sikorsky are building 60-tonners for the U.S. Navy, and Sikorsky is reported, in addition, to be developing a 50-tonner for the Atlantic service which will carry 36 passengers when operating Atlantic ranges.
Some warning that America intended to apply herself seriously to the flying-boat problem has been evident for some years from the fact that her research establishments have carried out very extensive tests on hull forms. Thus, when the Government purse-strings were loosened, American constructors had ready to hand a wealth of information upon which to base their designs.
Although it is known that Shorts have on the drawing board an improved version of the Empire boat, and although there are rumours of a very large machine, one cannot quite feel that encouragement sufficient to enable one firm to cope with the competition from three or four American companies is being given British firms.
Experience
IN the meantime, as Capt. Wilcockson pointed out, the Americans have been steadily accumulating experience in long-range transoceanic flying-boat operation. Although weather conditions over the Pacific are very different from those over the Atlantic, the operation of the Pacific route has enabled Pan-American Airways to train their crews in long-distance navigation, and when the Americans come to operate an Atlantic service, they will start with a very great advantage compared with us. Navigational equipment also has been greatly developed and thoroughly tested in actual operational conditions, compared with which the very limited, although very promising, experience which Imperial Airways had an opportunity to accumulate during ten crossings of the Atlantic is insignificant. Operation of the Empire routes will help to fill the gap, but is not strictly applicable to Atlantic conditions. | {
"pile_set_name": "Pile-CC"
} |
Q:
What approach should I use to do client side filtering?
I am making the front end of an ASP.NET MVC3 web application. A controller action sends a database-driven list to a view model, which then populates a series of divs. I have a filtering section above the div list. I am not sure which approach to take to implement the filter. I have considered rolling my own (I always keep this option on the table), using jQuery's .filter(), or finding some JavaScript functionality to use.
What is the standard way to filter client side with JavaScript (or a js derived library)?
EDIT
For gdoron's lack of context:
js
var gdoronArray = [];
for(var i = 0; i < 10000; i++){
gdoronArray.push("text" + i + " " + (i*10));
}
Is there a standard library to pull only the items in gdoronArray which contain "ext5" or is this just a roll your own situation?
A:
var filtered = gdoronArray.filter(function(v){
    return v.indexOf("ext5") !== -1; // indexOf returns -1 when "ext5" is absent
});
https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array/filter
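`filter` returns a new array rather than modifying the original, so its result needs to be kept. A self-contained version (rebuilding the array exactly as in the question) is:

```javascript
// Rebuild the sample array from the question.
var gdoronArray = [];
for (var i = 0; i < 10000; i++) {
  gdoronArray.push("text" + i + " " + (i * 10));
}

// Array.prototype.filter is the standard built-in: it returns a NEW array
// containing the elements for which the callback returns a truthy value,
// and leaves the original array untouched.
var matches = gdoronArray.filter(function (v) {
  return v.indexOf("ext5") !== -1; // -1 means "substring not found"
});

// "ext5" can only occur inside "text5...", i.e. when i's decimal form
// starts with 5 (i = 5, 50-59, 500-599, 5000-5999): 1111 items in all.
```

No extra library is needed; `Array.prototype.filter` is part of ES5 and supported natively.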
| {
"pile_set_name": "StackExchange"
} |
The week you spend in Japan will certainly be memorable!!!
Get a taste of the foreign travel offered by printed guidebooks. Enjoy a tour around Japan from the comfort of your own home!
As the game's protagonist, you'll travel to famous Japanese sightseeing spots in the company of a pair of beautiful young sisters. The girls will describe the spots, take you out to dinner, and grow closer to you throughout your travels.
The thrill of taking a trip through Japan with pretty girls is the experience offered by this "Guide Game" – a guidebook in game format!
Beautiful Tour Guides!! Famous Japanese Sightseeing Spots!! Delicious Japanese Dishes!! | {
"pile_set_name": "OpenWebText2"
} |
Italian ship Cassiopea
Cassiopea has been borne by at least three ships of the Italian Navy and may refer to:
, a launched in 1906 and discarded in 1927.
, a launched in 1936 and stricken in 1959.
, a launched in 1988.
Category:Italian Navy ship names | {
"pile_set_name": "Wikipedia (en)"
} |
/* (No Commment) */
"CFBundleName" = "é€å
";
| {
"pile_set_name": "Github"
} |
Introduction {#sec1}
============
In contrast to bulk silver, nanometric silver materials exhibit many extraordinary properties such as a high surface-to-volume ratio,^[@ref1],[@ref2]^ quantum tunneling effects,^[@ref3],[@ref4]^ an abundance of free electrons,^[@ref5]^ surface plasmon resonance,^[@ref2],[@ref6]–[@ref8]^ and antibacterial behaviors.^[@ref9],[@ref10]^ Because of these unique properties, noble silver nanomaterials are widely applied in diverse areas, including thermotherapy,^[@ref5],[@ref11]^ medicine,^[@ref2],[@ref5],[@ref12]^ sensors,^[@ref13]–[@ref16]^ surface-enhanced spectroscopy,^[@ref6],[@ref17]–[@ref20]^ biology,^[@ref2]^ catalysis,^[@ref21]–[@ref28]^ and electronics.^[@ref29]–[@ref31]^ Among these many applications, the catalysis of the reduction of nitroarenes to aromatic amines is increasingly attracting attention because of pharmaceutical needs and the importance of this industry.^[@ref25],[@ref26],[@ref32]–[@ref38]^ Various strategies have been proposed to reduce nitroarenes more efficiently and more rapidly using silver nanomaterials. These strategies include depositing silver nanoparticles (AgNPs) on supports used as heterogeneous catalysts,^[@ref24],[@ref26],[@ref39]–[@ref42]^ combining AgNPs with reduced graphene oxide or graphene oxide as catalysts,^[@ref43]–[@ref47]^ and using silver nanocolloids as a quasi-homogeneous nanocatalyst.^[@ref25],[@ref33],[@ref48]–[@ref54]^ All of the aforementioned catalytic approaches are efficient and selective. However, the reaction rate for heterogeneous catalysis is rather low and quasi-homogeneous catalysis suffers from possible aggregation of the nanocatalyst. Furthermore, the procedures for preparing such nanocatalysts are somewhat complex and time-consuming.
Applications of magnetic nanoparticles have also been extensively investigated in recent years because they feature many notable characteristics, such as a high surface-to-volume ratio, easy attraction and redispersion, and paramagnetism. Because of these crucial properties and advantages, the combination of AgNPs and magnetic nanoparticles has become one of the most favorable approaches for the catalytic reduction of nitroarenes.^[@ref55]–[@ref60]^ In this study, a simple but facile method was applied in a single step to prepare silver-doped magnetic nanoparticles (AgMNPs) for the catalytic reduction of nitroarenes through spontaneous oxidation--reduction and coprecipitation. When mixing Fe^2+^ with Ag^+^, a spontaneous reaction is caused by the difference in standard reduction potential between the ionic species. When Ag^+^ is reduced to Ag^0^, an equivalent number of moles of Fe^2+^ ions are simultaneously oxidized to Fe^3+^. After the addition of precipitation agents, AgNPs were coprecipitated with iron oxide magnetic nanoparticles, which led to the formation of AgMNPs. The proposed preparation can be achieved in a single step, and the prepared AgMNPs can subsequently be utilized as nanocatalysts for the reduction of *o*-nitroaniline (*o*-NA). The parameters (pH, temperature, and amount of nanocatalyst) that affect the morphology and composition of the prepared AgMNPs and efficiencies of the catalytic reduction were systematically studied to gain a greater understanding of the characteristics of the AgMNPs prepared using the method proposed in this study. Additionally, the catalytic activity of the AgMNPs prepared for the reduction of other nitroarenes and their recyclability were investigated to fully evaluate their potential for practical applications.
Results and Discussion {#sec2}
======================
Effect of Oxidation--Reduction Time on AgMNP Preparation {#sec2.1}
--------------------------------------------------------
[Figure S1](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf) shows the typical measured hysteresis loops of the prepared AgMNPs, which confirmed that the prepared AgMNPs were paramagnetic and usable for further applications. [Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"} depicts the transmission electron microscopy (TEM) images of AgMNPs obtained using various reaction times. From the images, dark-sphere-like Ag nanoparticles were mixed with light-colored Fe~3~O~4~ NPs because Ag has a higher electron density that allows fewer electrons to transmit.^[@ref61],[@ref62]^ The AgNPs formed after a 10 min reaction time were larger than those formed after a 2 min reaction time. Notably, the size of the Fe~3~O~4~ NPs was mostly unaffected by the reaction time.
![TEM images of prepared AgMNPs with oxidation--reduction times of (a) 2 min, (b) 8 min, and (c)--(f) 10 min, where the \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ ratios are (a)--(c) 3:1, (d) 2:1, (e) 4:1, and (f) 6:1. \[Fe^2+^\]~0~ are all 12 mM, and the magnifications of the images are all 100,000×. The yellow arrows indicate the examples of AgNPs for each sample.](ao-2017-019876_0008){#fig1}
[Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a shows the evolution of the UV--vis spectra for the reduction of *o*-NA catalyzed by AgMNPs over time. The absorbance peak at 412 nm, which corresponds to the characteristic *o*-NA peak,^[@ref63]^ decreased as the reaction proceeded. The variations of the spectra indicated that *o*-NA was reduced to 1,2-phenylenediamine (1,2-PPD).^[@ref41],[@ref64]^ The relative concentration (*C*~t~/*C*~0~) of *o*-NA was obtained by dividing the absorbance recorded at 412 nm at the specified time (*C*~t~) by the absorbance at 412 nm before the addition of AgMNPs (*C*~0~). The results in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}b were plotted using various AgMNPs prepared using various reduction durations. In [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}b, the catalytic efficiency of the AgMNPs shows no significant difference between when the reaction time was 4--10 min and when more than 95% of *o*-NA was reduced within 240 s. This finding suggests remarkable catalytic activity. By contrast, achieving the same conversion percentage required more than 500 s when the reaction time to prepare the AgMNPs was 2 or 12 min. Because the catalytic efficiency of nanocatalysts depends on their size and the amount of catalyst loaded,^[@ref26]^ we concluded that the reaction time to achieve the optimal morphology and catalyst loading was 10 min. For comparison, the results of an experiment conducted in parallel, where AgMNPs were replaced with Fe~3~O~4~ NPs, are also plotted in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}b, showing that the reduction of *o*-NA can proceed only when AgNPs are doped. In the absence of AgNPs, the reduction of *o*-NA is suspended.
![(a) Evolution of the UV--vis spectra over time for the catalytic reduction of *o*-NA by AgMNPs prepared as in [Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"}c. (b) *C*~t~/*C*~0~ of 1 mM *o*-NA (412 nm) versus the catalytic reduction time in the presence of 30 mM NaBH~4~ and AgMNPs prepared with (■) 2 min, (●) 4 min, (▲) 6 min, (▼) 8 min, (◆) 10 min, and (★) 12 min of oxidation--reduction time during the preparation, where the other conditions are the same as described in (a). The parallel experiment (□) uses 20 mg of Fe~3~O~4~ NPs as nanocatalysts, where the other conditions are the same as described in (a).](ao-2017-019876_0001){#fig2}
Effect of Fe^2+^/Ag^+^ on AgMNP Preparation {#sec2.2}
-------------------------------------------
As described in the previous section, catalytic efficiency is related to the size and amount of doped AgNPs. In addition, we expected the morphology and amount of doped AgNPs to be affected by the ratio of initial concentration of Fe^2+^ to Ag^+^ because AgNO~3~ acts as the oxidation agent in the formation of AgMNPs. [Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"} shows the TEM images of the AgMNPs with different ratios of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~. The images reveal that the morphologies of the AgNPs are similar. [Figure S2](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf) shows the typical X-ray diffraction (XRD) spectra of the prepared AgMNPs and illustrates that the locations and intensity of the diffraction peaks were consistent with the standard patterns for JCPDS card no. (79-0417) magnetite and JCPDS card no. (4-0783) standard Ag crystal. The size of AgNPs can be estimated using the Scherrer equation, and all AgNPs are approximately 20 nm.^[@ref10]^ To further explore the effect of the initial Fe^2+^ to Ag^+^ concentration ratio, energy-dispersive X-ray spectroscopy (EDS) analysis was performed and the typical spectra for the AgMNPs are shown in [Figure S3A](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf). The atomic percentages of Ag, Fe, and O for various initial concentration ratios of Fe^2+^ to Ag^+^ obtained through EDS are plotted in [Figure S3B](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf), which shows that the atomic percentage of Ag increased as the initial concentration ratio of Fe^2+^ to Ag^+^ decreased. When the initial concentration ratio of Fe^2+^ to Ag^+^ was 2.0, the atomic percentage of Ag in the AgMNPs was 8.23%. The atomic percentage decreased to 0.76% when the initial concentration ratio of Fe^2+^ to Ag^+^ was 6.0.
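As a quick check of the ~20 nm Scherrer estimate mentioned above, the equation D = Kλ/(β cos θ) can be evaluated directly. The numbers below are illustrative only (Cu Kα wavelength, a typical Ag(111) peak position, and an assumed peak width), not values reported in the paper:

```python
import math

def scherrer_size(wavelength_nm, fwhm_rad, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), in nm."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle, half of 2-theta
    return k * wavelength_nm / (fwhm_rad * math.cos(theta))

# Illustrative inputs: Cu K-alpha (0.15406 nm), Ag(111) near 2-theta = 38.1 deg,
# and an assumed FWHM of 0.42 deg. This combination yields roughly 20 nm.
size_nm = scherrer_size(0.15406, math.radians(0.42), 38.1)
```

A broader peak (larger β) would give a proportionally smaller crystallite size.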
[Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"} plots the *C*~t~/*C*~0~ of *o*-NA as a function of the reduction time in the presence of AgMNPs prepared with various ratios of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~. The figure shows that the AgMNPs prepared with a smaller ratio of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ had a higher catalytic efficiency. Through correlation with the EDS results, we concluded that the highest catalytic efficiency was obtained for the highest AgNP loading. Notably, although the catalytic efficiency of the AgMNPs was even higher when the initial concentration ratio of Fe^2+^ to Ag^+^ was 1.5, the yield of AgMNPs was very low because of weak magnetization and most particles formed after the coprecipitation stage could not be collected by the magnet. Similar results were observed when the initial concentration ratio of Fe^2+^ to Ag^+^ was 6.0. Therefore, we can conclude that the initial concentration ratio of Fe^2+^ to Ag^+^ should be between 2 and 3 to ensure that a sufficient amount of Fe^2+^ is oxidized so that the ratio of Fe^2+^ to Fe^3+^ is close to 2 before the coprecipitation agent is added to form magnetic Fe~3~O~4~ NPs.^[@ref65]^
![*C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time in the presence of 30 mM NaBH~4~ and AgMNPs prepared with \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ ratios of (■) 6:1, (●) 4:1, (▲) 3:1, (▼) 2:1, and (◆) 1.5:1, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0002){#fig3}
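The stoichiometric argument above can be made concrete in a few lines. Since one Fe^2+^ is oxidized to Fe^3+^ for every Ag^+^ reduced, the post-reaction Fe^2+^:Fe^3+^ ratio follows directly from the starting ratio (a sketch; the millimolar values are the paper's nominal concentrations):

```python
def fe_ratio_after_reduction(fe2_initial, ag_initial):
    """Return the Fe2+:Fe3+ mole ratio after all Ag+ has been reduced.

    One Fe2+ is oxidized to Fe3+ for each Ag+ reduced to Ag0; the reaction
    is spontaneous because E(Ag+/Ag) exceeds E(Fe3+/Fe2+).
    """
    fe3 = ag_initial                 # every reduced Ag+ produces one Fe3+
    fe2 = fe2_initial - ag_initial   # Fe2+ remaining after oxidation
    return fe2 / fe3

# With the paper's optimal 3:1 starting ratio ([Fe2+]0 = 12 mM, [Ag+]0 = 4 mM),
# the post-reaction ratio is exactly the 2:1 needed to coprecipitate Fe3O4.
ratio = fe_ratio_after_reduction(12.0, 4.0)
```

A 2:1 start gives a 1:1 post-reaction mixture and a 6:1 start gives 5:1, which is consistent with the 2--3 window identified in the text.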
Effects of pH and Temperature on the Catalytic Reaction {#sec2.3}
-------------------------------------------------------
The acceleration of the reduction reaction by AgMNPs originates in a relay between the nucleophile and electrophile.^[@ref39]^ Therefore, the catalyzed reduction rate is affected by the abundance of electrons in the reaction system. To observe the relationship between the electron abundance and reaction rate, the catalysis reaction was performed at various pH values. The relative *o*-NA concentration (*C*~t~/*C*~0~) was recorded as a function of reaction time at various pH values, and the results are plotted in [Figure [4](#fig4){ref-type="fig"}](#fig4){ref-type="fig"}a. The pH played a key role in the catalysis reaction; when the pH was 9.8, the reduction of *o*-NA was completed within 4 min. The reduction rate decreased significantly when the pH was set to 8.8, and less than 85% of *o*-NA was reduced after approximately 5 min of reaction. When the pH was lower than 8, almost no conversion of *o*-NA could be observed. Because of this relay mechanism, alkaline conditions enriched the electron densities on the AgNP surfaces by adsorbing more OH^--^, which promoted the reduction of *o*-NA. When the pH was set to 10.8, the reduction of *o*-NA decelerated, possibly because of the formation of yellow-colored 2,3-diaminophenazine under extremely alkaline conditions.^[@ref66]^
![(a) *C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time at various pH values, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a. (b) *C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time in the presence of 30 mM NaBH~4~ under different temperatures of (■) 0 °C, (●) 25 °C, and (▲) 40 °C, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0003){#fig4}
The reaction rate was affected by the temperature of the reaction system because the reactants had more kinetic energy at higher temperatures and were able to surmount the activation barrier more easily. [Figure [4](#fig4){ref-type="fig"}](#fig4){ref-type="fig"}b depicts the relationship between *C*~t~/*C*~0~ and reaction time at various reaction temperatures. As shown in [Figure [4](#fig4){ref-type="fig"}](#fig4){ref-type="fig"}b, the higher the temperature of the reaction system, the faster was the observed reduction of *o*-NA. Furthermore, when the temperature was 0 °C, the reduction rate of *o*-NA was similar to that at 25 °C in the first minute, but *C*~t~/*C*~0~ was almost unchanged thereafter, suggesting that the reduction was almost interrupted after 1 min at 0 °C.
Effect of the Catalyst Amount on the Catalytic Reaction {#sec2.4}
-------------------------------------------------------
As described in the previous sections, the conversion efficiency is related to the amount of AgNP loading. Accordingly, the conversion efficiency can also be related to the amount of AgMNPs used per experiment. [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"} plots the relationship between the *C*~t~/*C*~0~ of *o*-NA and reduction time when various amounts of AgMNPs were used. As shown in [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}, when 1 mg of AgMNPs was used per experiment, approximately 20% of *o*-NA was reduced after 5 min of conversion and the conversion efficiency increased with the amount of AgMNPs used. When 20 mg of AgMNPs was used per experiment, the conversion of *o*-NA was almost 100% within 250 s at room temperature, which is promising for further applications.
![*C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time with various amounts of AgMNPs, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0004){#fig5}
Activity of AgMNPs for the Catalytic Reduction of Other Nitroarenes {#sec2.5}
-------------------------------------------------------------------
After studying the characteristics and catalytic properties of the prepared AgMNPs, we investigated the catalytic reduction of other nitroarenes, including *m*-NA, *p*-NA, and *p*-NP, through the same procedures to investigate the ability of the AgMNPs to accelerate the reduction of other nitroarenes. [Figure S4](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf) plots the evolution of the UV--vis spectra for reducing *m*-NA, *p*-NA, and *p*-NP in the presence of AgMNPs over time. The absorption maxima, which were located at 360 nm for *m*-NA, 380 nm for *p*-NA, and 400 nm for *p*-NP, decreased as the catalytic reduction proceeded. The relationship between *C*~t~/*C*~0~, where *C*~0~ is the absorbance at the initial time and *C*~t~ is the absorbance after the specific reaction time, for the nitroarenes tested and the reaction time under optimal conditions is plotted in [Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}. As indicated in the figure, the reductions of the four nitroarenes examined in this study were all completed within 4 min, which suggested that the AgMNPs prepared in this study were capable of catalytically reducing various nitroarenes.
![*C*~t~/*C*~0~ versus catalytic reduction time for the four nitroarenes examined in this study, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0005){#fig6}
Furthermore, as reported in other works, the catalytic reduction of nitroarenes follows pseudo-first-order reaction kinetics.^[@ref26],[@ref43],[@ref51]^ The linear relations between ln(*C*~0~/*C*~t~) of the nitroarenes examined in this study and reaction time were obtained, and the rate constant (*k*) can be estimated by calculating the slopes of the obtained lines. The calculated *k* values at room temperature are 0.0192, 0.0145, 0.0185, and 0.0196 s^--1^ for the catalytic reduction of *o*-NA, *m*-NA, *p*-NA, and *p*-NP, respectively. To compare the results obtained in this study to those reported recently, [Table [1](#tbl1){ref-type="other"}](#tbl1){ref-type="other"} tabulates the catalytic activities of various AgNP-based catalytic systems. As can be observed in the table, the rate constants of the AgMNPs prepared in this study for the reduction of nitroarenes are as good as those of other reported nanocatalysts. Moreover, because the effects of temperature have also been studied in the previous sections, the thermodynamic parameters for the catalytic reduction of *o*-NA by the AgMNPs prepared in this study can be calculated by following the Arrhenius and Eyring equations.^[@ref67],[@ref68]^ The calculated activation energy (*E*~a~) is 39.88 kJ/mol, the activation enthalpy (Δ*H*) is 37.34 kJ/mol, and the activation entropy (Δ*S*) is −123.29 J/(mol K). These results suggest that the AgMNPs prepared in this study are excellent nanocatalysts for the reduction of nitroarenes. The catalytic reduction of nitroarenes by metal nanoparticles is generally explained by the Langmuir--Hinshelwood mechanism, where both reactants are adsorbed on the nanocatalyst surfaces and the reaction occurs after adsorption.^[@ref67],[@ref68]^ As a result of conversion, products are formed and then desorb from the nanocatalyst surfaces.
According to the Langmuir--Hinshelwood mechanism, the rate of catalytic reduction depends on the surface coverage of the reducing agent and the nitroarene molecules.^[@ref69]--[@ref71]^ This mechanism rationalizes the large rate constant and small activation energy obtained in this study because the high surface-to-volume ratio and the quasi-homogeneous reaction conditions significantly increase the surface coverage of the reactants.
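As a hedged illustration of the analysis above (synthetic data generated from the reported values, not the measured spectra), the pseudo-first-order rate constant is the slope of ln(*C*~0~/*C*~t~) against time, and the activation energy follows from an Arrhenius fit of ln *k* against 1/*T*:

```python
import math

def slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

def rate_constant(times, absorbances):
    """Pseudo-first-order k: slope of ln(C0/Ct) versus t."""
    c0 = absorbances[0]
    return slope(times, [math.log(c0 / c) for c in absorbances])

def activation_energy(temps_kelvin, ks):
    """Arrhenius: ln k = ln A - Ea/(R*T), so Ea = -R * slope(1/T, ln k)."""
    R = 8.314  # gas constant, J/(mol K)
    return -R * slope([1.0 / t for t in temps_kelvin],
                      [math.log(k) for k in ks])

# Synthetic decay generated with the o-NA rate constant reported above
times = [0.0, 30.0, 60.0, 90.0, 120.0]            # s
absorbances = [math.exp(-0.0192 * t) for t in times]
k_est = rate_constant(times, absorbances)          # recovers ~0.0192 s^-1
```

In practice the absorbance trace extracted from the UV--vis spectra would replace the synthetic decay.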
###### Comparison of Catalytic Activities of Several AgNP-Based Systems
catalysts nitroarene (final concentration; mM) concentration of NaBH~4~ (mM) temperature apparent rate constant (s^--1^) ref
------------------------------------------ -------------------------------------- ------------------------------- ------------------------------------ --------------------------------- ------------
AgNPs/polydopamine/anodic aluminum oxide *o*-NA (1.13) 400 RT[a](#t1fn1){ref-type="table-fn"} 0.0013 ([@ref40])
biogenic AgNPs *p*-NP (0.20) 10 [b](#t1fn2){ref-type="table-fn"} 0.00406 ([@ref41])
AgNPs/partially reduced graphene oxide *p*-NP (0.10) 13 RT 0.0374 ([@ref43])
AgNPs on porous glass filters *o*-NA (1.00) 30 50 °C 0.0094 ([@ref26])
 *p*-NA (1.00) 30 50 °C 0.0071 ([@ref26])
AgNPs in microgels *p*-NP (0.08) 24 22 °C 0.0153 ([@ref50])
AgNPs in microgels *o*-NA (0.09) 18 22 °C 0.0067 ([@ref51])
 *p*-NA (0.09) 18 22 °C 0.0101 ([@ref51])
 *p*-NP (0.09) 18 22 °C 0.0052 ([@ref51])
AgNPs on fibrous nanosilica *o*-NA (0.17) 22 RT 0.0043 ([@ref52])
 *p*-NP (0.099) 83 RT 0.01 ([@ref52])
Fe~3~O~4~\@SiO~2~/Ag nanocomposite *p*-NP (0.06) 6 25 °C 0.00767 ([@ref59])
AgNPs/HLaNb~2~O~7~ *p*-NP (0.091) 18 [b](#t1fn2){ref-type="table-fn"} 0.00301 ([@ref53])
this study *o*-NA (1.00) 30 RT 0.0192
 *p*-NA (1.00) 30 RT 0.0185
 *p*-NP (1.00) 30 RT 0.0196
Room temperature.
Not mentioned.
Recyclability of the Ag Nanocatalysts {#sec2.6}
-------------------------------------
The recyclability of the AgMNPs prepared in this study was evaluated by consecutively reusing the nanocatalysts for the catalytic reduction of *o*-NA. As shown in [Figure [7](#fig7){ref-type="fig"}](#fig7){ref-type="fig"}, only approximately 40% of *o*-NA was reduced to 1,2-PPD after 10 min of reaction in the second consecutive test. This is possibly related to the adsorption of *o*-NA or 1,2-PPD on the surface of the silver nanocatalysts, which consequently reduced the electron transferability.^[@ref26]^ To reactivate the silver nanocatalysts, we soaked the used AgMNPs in an aqueous solution at pH 3 for 20 min and rinsed them with neutral water before the next use. As shown in [Figure [7](#fig7){ref-type="fig"}](#fig7){ref-type="fig"}, more than 90% of the *o*-NA was reduced within 5 min and more than 95% was reduced within 8 min. Furthermore, after treating the reused AgMNPs with an aqueous acidic solution, the AgMNPs could be recycled and their performance was similar to that in the previous run. Therefore, the AgMNPs prepared in this study can be recycled after treatment with an acidic solution, which significantly extends the practical applicability of these silver nanocatalysts.
![Figure 7.](ao-2017-019876_0006){#fig7}
Conclusions {#sec3}
===========
In this study, a simple and effective approach to prepare silver-doped magnetic hybrid nanoparticles was proposed based on a chemical reduction and coprecipitation method. The nanoparticles prepared through this method were used as nanocatalysts for the reduction of *o*-NA. Using the AgMNPs prepared in this study as nanocatalysts exploits the advantages of quasi-homogeneous reaction conditions and enables the easy removal of nanocatalysts from the solution with a magnet. The results indicated that the composition of the AgMNPs prepared can be tuned by adjusting the ratio of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ and the chemical reduction time during the production of AgMNPs. During the catalytic reduction of *o*-NA, the pH and temperature of the system affect the reduction rate, which is also affected by the amount of nanocatalyst used in the reaction. Furthermore, the prepared AgMNPs were applicable to the catalytic reduction of other nitroarenes. Finally, the silver-doped magnetic nanocatalysts proposed in this study have several advantages, namely, easy preparation, significant catalytic activity at room temperature, high conversion ability, and recyclability, all of which enhance their usefulness for real applications.
Experimental Section {#sec4}
====================
Materials {#sec4.1}
---------
Ferrous sulfate and ferric chloride were obtained from Showa Chemical (Tokyo, Japan). Silver nitrate, *o*-NA, *m*-nitroaniline (*m*-NA), *p*-nitroaniline (*p*-NA), and *p*-nitrophenol (*p*-NP) were purchased from Alfa Aesar (Ward Hill, MA). Sodium borohydride was obtained from Acros Organics (Geel, Belgium). Ammonium hydroxide (28--30%, v/v) and nitric acid were purchased from Fisher Scientific (Hampton, NH). All chemicals were of reagent grade and used as received without further purification. Deionized Milli-Q water (Simplicity, Millipore, Burlington, MA) was used throughout this study.
Preparation of AgMNPs {#sec4.2}
---------------------
The preparation of AgMNPs was based on a chemical reduction and coprecipitation method. Briefly, 100 mL of 12 mM ferrous aqueous solution was mixed with various volumes of 200 mM silver nitrate aqueous solution under vigorous stirring for a specified amount of time. During stirring, a spontaneous oxidation--reduction reaction occurred between Ag^+^ and Fe^2+^. Ag^+^ was reduced to Ag^0^ and an equivalent number of moles of Fe^2+^ ions were oxidized to Fe^3+^. After the specified reaction time, 50 mL of 1.44 M ammonia solution, which acted as the precipitating agent, was rapidly added to the solution under vigorous stirring for 10 min to complete the coprecipitation process. After 3 h in storage, the formed nanoparticles were collected with a magnet and washed three times with distilled water and ethanol. Finally, the washed AgMNPs were dried in an oven at 140 °C for 8 h before further use. As an alternative for comparison, magnetite nanoparticles (Fe~3~O~4~ NPs) without silver doping were prepared following previous reports.^[@ref72]^ We conducted transmission electron microscopy (TEM) with a Hitachi HT-7700 microscope operated at 100 kV, energy-dispersive X-ray spectroscopy (EDS) analysis with a Hitachi SU-8010 microscope at an accelerating voltage of 15.0 kV, and powder X-ray diffraction (PXRD) with a Siemens D5000 XRD system to characterize the morphologies and compositions of the prepared AgMNPs. Hysteresis loops of the prepared AgMNPs were recorded at room temperature with a Quantum Design MPMS 3 SQUID vibrating sample magnetometer system.
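Since the redox step consumes Ag^+^ and Fe^2+^ one-to-one, the volume of silver nitrate stock corresponding to a chosen \[Fe^2+^\]~0~/\[Ag^+^\]~0~ ratio is simple bookkeeping. The sketch below uses the stock concentrations stated above; the example ratio of 4 is a placeholder, not a value from this study:

```python
def agno3_volume_ml(ratio, fe_volume_ml=100.0, fe_mM=12.0, ag_mM=200.0):
    """mL of AgNO3 stock giving [Fe2+]0/[Ag+]0 == ratio.

    Millimolar bookkeeping: mL x mM = micromoles.
    """
    fe_umol = fe_volume_ml * fe_mM   # 100 mL x 12 mM = 1200 umol Fe2+
    ag_umol = fe_umol / ratio        # Ag+ needed for the target ratio
    return ag_umol / ag_mM           # volume of the 200 mM AgNO3 stock

# e.g. a 4:1 ratio calls for 1.5 mL of the 200 mM stock
```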
Reduction of Nitroaniline Catalyzed by AgMNPs {#sec4.3}
---------------------------------------------
The catalytic efficiency of the AgMNPs was evaluated using the nanoparticles as prepared for the catalytic reduction of *o*-NA. A specific amount of AgMNPs was mixed with 15 mL of an aqueous solution consisting of 1 mM *o*-NA and 30 mM NaBH~4~ at room temperature. Ultraviolet--visible (UV--vis) spectra of the solution were recorded at chosen intervals. All UV--vis spectra in this study were measured using a Thermo Fisher Scientific Genesys 10S Bio UV--Vis spectrometer with a 1 nm resolution. The spectra were recorded within a wavelength range of 250--550 nm. The optical path of the UV--vis cell was 3 mm.
The Supporting Information is available free of charge on the [ACS Publications website](http://pubs.acs.org) at DOI: [10.1021/acsomega.7b01987](http://pubs.acs.org/doi/abs/10.1021/acsomega.7b01987).Hysteresis loops, XRD spectrum, and EDS analysis results of the AgMNPs, and the UV--vis spectra of the reduction of *m*-NA, *p*-NA, and *p*-NP in the presence of AgMNPs with increasing times ([PDF](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf))
Supplementary Material
======================
######
ao7b01987_si_001.pdf
The authors declare no competing financial interest.
The authors acknowledge the financial support from the Taiwan Ministry of Science and Technology under grant MOST106-2113-M-037-016. This work was also supported by the Kaohsiung Medical University Research Foundation under grant KMU-M106016.
| {
"pile_set_name": "PubMed Central"
} |
Q:
How can I generate all possible IPs from a list of ip ranges in Python?
Let's say I have a text file contains a bunch of ip ranges like this:
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x is start value and y.y.y.y is end value of range.
How can I convert these ip ranges to all possible IPs in a new text file in python?
PS: This question is not the same as any of my previous questions. In my previous question I asked "how to generate all possible IPs from CIDR notations"; here I ask "how to generate from an IP range list". These are different things.
A:
This function returns all IP addresses from start up to, but not including, end:
def ips(start, end):
    import socket, struct
    start = struct.unpack('>I', socket.inet_aton(start))[0]
    end = struct.unpack('>I', socket.inet_aton(end))[0]
    return [socket.inet_ntoa(struct.pack('>I', i)) for i in range(start, end)]
These are the building blocks to build it on your own:
>>> import socket, struct
>>> ip = '0.0.0.5'
>>> i = struct.unpack('>I', socket.inet_aton(ip))[0]
>>> i
5
>>> i += 1
>>> socket.inet_ntoa(struct.pack('>I', i))
'0.0.0.6'
Example:
ips('1.2.3.4', '1.2.4.5')
['1.2.3.4', '1.2.3.5', '1.2.3.6', '1.2.3.7', ..., '1.2.3.253', '1.2.3.254', '1.2.3.255', '1.2.4.0', '1.2.4.1', '1.2.4.2', '1.2.4.3', '1.2.4.4']
Read from file
In your case you can read from a file like this:
with open('file') as f:
    for line in f:
        start, end = line.strip().split('-')
        # ....
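Putting the pieces together, a minimal end-to-end script might look like this (the file names are placeholders; as above, the end address of each range is excluded):

```python
import socket
import struct

def ips(start, end):
    """All IPv4 addresses from start up to, but not including, end."""
    s = struct.unpack('>I', socket.inet_aton(start))[0]
    e = struct.unpack('>I', socket.inet_aton(end))[0]
    return [socket.inet_ntoa(struct.pack('>I', i)) for i in range(s, e)]

def expand_ranges(src_path, dst_path):
    """Read 'a.b.c.d-w.x.y.z' lines from src_path; write one IP per line."""
    with open(src_path) as src, open(dst_path, 'w') as dst:
        for line in src:
            line = line.strip()
            if not line:
                continue                    # skip blank lines
            start, end = line.split('-')
            for ip in ips(start, end):
                dst.write(ip + '\n')
```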
A:
Python 3 only, for IPv4; same idea as @User's answer, but using the Python 3 standard library module ipaddress.
IPv4 addresses are represented by 4 bytes, so the next IP is simply the next number, and a range of IPs can be represented as a range of integers.
0.0.0.1 is 1
0.0.0.2 is 2
...
0.0.0.255 is 255
0.0.1.0 is 256
0.0.1.1 is 257
By code (ignore the In []: and Out []:)
In [68]: from ipaddress import ip_address
In [69]: ip_address('0.0.0.1')
Out[69]: IPv4Address('0.0.0.1')
In [70]: ip_address('0.0.0.1').packed
Out[70]: b'\x00\x00\x00\x01'
In [71]: int(ip_address('0.0.0.1').packed.hex(), 16)
Out[71]: 1
In [72]: int(ip_address('0.0.1.0').packed.hex(), 16)
Out[72]: 256
In [73]: int(ip_address('0.0.1.1').packed.hex(), 16)
Out[73]: 257
ip.packed.hex() returns the hexadecimal form of the 4 bytes; hexadecimal is shorter (e.g., 0xff hex == 255 decimal == 0b11111111 binary) and thus often used for representing bytes. int(hex_str, 16) returns the integer value
corresponding to that hex string, which is more human friendly and can be used as input for ip_address.
from ipaddress import ip_address
def ips(start, end):
    '''Return IPs in the IPv4 range; the end address itself is excluded.'''
    start_int = int(ip_address(start).packed.hex(), 16)
    end_int = int(ip_address(end).packed.hex(), 16)
    return [ip_address(ip).exploded for ip in range(start_int, end_int)]
ips('192.168.1.240', '192.168.2.5')
Returns:
['192.168.1.240',
'192.168.1.241',
'192.168.1.242',
'192.168.1.243',
'192.168.1.244',
'192.168.1.245',
'192.168.1.246',
'192.168.1.247',
'192.168.1.248',
'192.168.1.249',
'192.168.1.250',
'192.168.1.251',
'192.168.1.252',
'192.168.1.253',
'192.168.1.254',
'192.168.1.255',
'192.168.2.0',
'192.168.2.1',
'192.168.2.2',
'192.168.2.3',
'192.168.2.4']
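A further shortcut worth noting: IPv4Address converts to int directly, so the .packed.hex() detour is avoidable, and summarize_address_range from the same module covers the range inclusively via CIDR networks:

```python
from ipaddress import ip_address, summarize_address_range

def ips_inclusive(start, end):
    """IPv4 addresses from start to end, both endpoints included."""
    return [str(ip_address(i))
            for i in range(int(ip_address(start)), int(ip_address(end)) + 1)]

def ips_via_networks(start, end):
    """Same result, letting the stdlib split the range into CIDR networks."""
    nets = summarize_address_range(ip_address(start), ip_address(end))
    return [str(host) for net in nets for host in net]
```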
| {
"pile_set_name": "StackExchange"
} |
---------------------- Forwarded by Mark Taylor/HOU/ECT on 01/17/2000 11:18
AM ---------------------------
Nony Flores
01/14/2000 06:34 PM
To: Mark Taylor/HOU/ECT@ECT
cc: Dale Neuner/HOU/ECT@ECT, John L Nowlan/HOU/ECT@ECT, Steven M
Elliott/HOU/ECT@ECT, Alan Aronowitz/HOU/ECT@ECT, Harry M Collins/HOU/ECT@ECT,
Janice R Moore/HOU/ECT@ECT, Michael A Robison/HOU/ECT@ECT, Lynn E
Shivers/HOU/ECT@ECT
Subject: Re: On-line GTC's for Enron Clean Fuels Company (ECFC)
Mark -
Mike Robison and I have finalized the form of GTC's for ECFC and are
forwarding them to you for further distribution. Thanks.
John -
The only modifications to ECFC's online GTC's are as follows:
Contract Formation In order to maintain consistency with the other product
lines online GTCs, we've modified this section to reflect the same
introduction as reflected in ELFI, EGLI, ERAC and EPC.
Damages and Limitation We've added a 2 line bilateral provision regarding
payment.
Financial Information We've added the two paragraphs re credit provisions as
reflected in the confirmations.
Compliance with U.S. Laws We've added the buyer/seller trade sanctions as
reflected in the confirmations.
Law and Jurisdiction We replaced this section in its entirety with the new
policy in effect as of June 1998.
Appendix II We've added the Marine Provisions Addendum.
Appendix III We've added the Transportation Addendum.
Thanks.
Nony Flores
Ext. 37541
Dale Neuner on 01/12/2000 05:24:45 PM
To: Nony Flores/HOU/ECT@ECT, Mark Taylor/HOU/ECT@ECT
cc: Dforster@enron.com, Bob Shults/HOU/ECT, Steven M Elliott/HOU/ECT@ECT
Subject: Re: On-line GTC's for Enron Petrochemicals Company (EPC)
Mark -
Although these products were not on the release list for the Jan 5 release,
we have been actively pursuing their completion and posting to EOL. Since our
reviews of the products are complete, the Petrochems guys are extremely eager
to have their products posted. Please offer your approval of the attached
ASAP so I can get this and the product onto EOL.
Nony -
We were expecting delivery of the Methanol GTC today. Please be aware that
the uploading of GTC's can sometimes be laborious, and since the Petrochem
guys are eager to get this out, time is of the essence. Please let me know
when you can expect delivery of the Methanol GTC to Mark Taylor.
Dale
Nony Flores
01/11/2000 06:09 PM
To: Mark Taylor/HOU/ECT@ECT
cc: Alan Aronowitz/HOU/ECT@ECT, Harry M Collins/HOU/ECT@ECT, Janice R
Moore/HOU/ECT@ECT, Michael A Robison/HOU/ECT@ECT, Dale Neuner/HOU/ECT@ECT,
Steven M Elliott/HOU/ECT@ECT
Subject: On-line GTC's for Enron Petrochemicals Company (EPC)
Mark -
Last week Dale Neuner was requested to add four new commodities to on-line
trading and Mike Robison and Steve Elliott have reviewed and approved the
long descriptions for benzene, mixed xylene, toluene and methanol. We have
also finalized the form of GTC's for EPC (who trades benzene, xylene and
toluene) and are forwarding them to you for further distribution.
Note: GTC's for Enron Clean Fuels Company (who trades methanol) will follow
on Wednesday.
Thanks.
Nony Flores
Ext. 3-7541 | {
"pile_set_name": "Enron Emails"
} |
ST. PETERSBURG, Russia, March 7 (UPI) -- Officials in the Russian city of St. Petersburg said street cleaners discovered a baby African crocodile at a construction site.
Workers at a St. Petersburg public utility office said the street cleaners discovered the croc, which measures about 20 inches long and is an estimated four days old, at a construction site and brought it to the office, where it was placed in an aquarium and given fresh beef to eat, RIA Novosti reported Friday.
The utility office workers said they asked the St. Petersburg Zoo to take custody of the reptile, but zoo officials said they do not have the resources to care for an African crocodile, which can grow up to 16 feet long.
Officials said they do not know how the baby crocodile ended up at the construction site. | {
"pile_set_name": "OpenWebText2"
} |
The root of words like electricity, electronic, and electrostatic is the ancient Greek word elektron, meaning amber.
The Greeks used pieces of amber in jewelry. They noticed that when amber was rubbed with cloth, it attracted light objects such as hair. The amber could pick these things up off the ground,
despite the gravitational forces pulling them down. It seemed natural to attribute this behavior to the amber. Subsequently, anything that displayed similar behavior was likened to amber.
Amber's electrostatic charging comes from the triboelectric effect.
To understand electrostatic charge, you need to think about the atoms that make up everything we can see. All matter is made up of atoms, which are themselves made up of charged particles. Atoms have a nucleus consisting of neutrons and protons. They also have a surrounding "shell" that is made up of electrons. Typically, matter is neutrally charged, meaning that the number of electrons and protons are the same. If an atom has more electrons than protons, it is negatively charged. If it has more protons than electrons, it is positively charged.
Static charge is most often generated by contact and separation, or "triboelectrification." When any two materials touch and then break contact, there is an exchange of electrons; one material will be left with an excess, and the other a loss, of electrons.
Some atoms hold on to their electrons more tightly than others do. How strongly matter holds on to its electrons determines its place in the triboelectric series. If a material is more apt to give up electrons when in contact with another material, it is more positive in the triboelectric series. If a material is more apt to "capture" electrons when in contact with another material, it is more negative in the triboelectric series.
The triboelectric series is a list that ranks various materials according to their tendency to gain or lose electrons. It usually lists materials in order of decreasing tendency to charge positively (lose electrons), and increasing tendency to charge negatively (gain electrons). Somewhere in the middle of the list are materials that do not show a strong tendency to behave either way. Note that the tendency of a material to become positive or negative after triboelectric charging has nothing to do with the level of conductivity (or ability to discharge) of the material.
The following list shows the triboelectric series for many materials you find around the house. Positive items in the series are at the top, and negative items are at the bottom (a compilation of a few Internet sources):
Air: very positive
Human hands (usually too moist, though): very positive
Leather
Rabbit Fur
Glass
Human hair
Nylon
Wool
Fur
Lead
Silk
Aluminum
Paper
Cotton: neutral
Steel: neutral
Wood
Amber
Hard rubber
Nickel, Copper
Brass, Silver
Gold, Platinum
Polyester
Styrene (Styrofoam)
Saran Wrap
Polyurethane
Polyethylene (like Scotch Tape)
Polypropylene
Vinyl (PVC)
Silicon
Teflon: very negative
The relative position of two substances in the triboelectric series tells you how they will act when brought into contact. Glass rubbed by silk causes a charge separation because they are several positions apart in the table. The same applies for amber and wool. The farther the separation in the table, the greater the effect.
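To make that rule concrete, here is a toy lookup over a simplified subset of the series above (ordering only; it says nothing about real charge magnitudes):

```python
# Most positive first, following the ordering of the series above.
SERIES = ["glass", "human hair", "nylon", "wool", "fur", "silk",
          "paper", "cotton", "amber", "polyester", "teflon"]

def rub(a, b):
    """Return (material that charges positive, separation in the series)."""
    ia, ib = SERIES.index(a), SERIES.index(b)
    return (a if ia < ib else b), abs(ia - ib)

# rub("glass", "silk"): glass charges positive, 5 positions apart
```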
Practical conclusions
A common complaint people have about static electricity is that they shoot sparks when touching objects. This typically happens because they have dry skin, which can become highly positive (+) in charge, especially when the clothes they wear are made of polyester, which can become negative (-) in charge.
People who build up static charges due to dry skin are advised to wear all-cotton clothes, which are neutral. Also, moist skin reduces the collection of charges.
Neutral materials, or those close to neutral, are probably the best for people's living environment. The goal is to avoid creating or being exposed to a large electrostatic charge.
Amber Jewelry generally close to neutral
Despite the fact that it was amber that gave its Greek name to electricity, amber sits quite close to the middle of the table relative to the synthetic negative materials, not far from the neutral ones. Amber jewelry, compared with other materials used in the living environment, can be considered slightly negative or close to neutral. Also, the active surface of amber jewelry is small or very small and cannot be considered significant. | {
"pile_set_name": "Pile-CC"
} |
"What's Your Constituency?"
Community Grade
(13 Users)
Your Grade
Welcome back to the TV Club coverage of Top Chef, which marks the fourth straight season covered by yours truly. I have to confess upfront that I'm feeling some trepidation at the start here: Partly because I felt the quality and chemistry of the competitors on Season Six set the bar awfully high for future seasons, partly because I found myself checking out early on the second run of Top Chef Masters: Season Two (which Emily Withrow covered), and partly because the beginning of every season offers a daunting 17 contestants, the vast majority of whom don't stand a chance of winning the thing. That means a good 10 weeks of waiting for wheat to be separated from chaff, and trying to summon up outrage over, say, the 9th best chef getting bounced before the 13th best chef.
But while the grind of the early episodes is inevitable -- aided by a reality-show formula that certainly isn't getting any fresher -- Top Chef is usually fun in its particulars and Season Seven brings us one major reason for optimism: Eric Ripert, genial culinary superstar and frequent guest in past seasons, has replaced Toby Young as the permanent fourth judge. I'm sure there must be someone who will miss Young's canned quips and facile observations -- and I'll definitely miss the simmering contempt on Tom's face whenever Toby opened his mouth -- but I think most will agree this is an upgrade. My one concern about Ripert may be the "genial" part: Toby was not afraid to be candid, sometimes to his detriment, and it would be a shame if Ripert became the Paula Abdul of the panel, given more to gentle advice than slash-and-burn critiques. I'll reserve judgment, but based on tonight's debut, Ripert at least revealed some sharp, specific insights into how various dishes succeeded or failed, and that may be enough to compensate for his lack of tartness.
After the standard introduction atop a balcony with Congress as a backdrop, the D.C. Top Chef puts the popular mise en place Quickfire challenge front and center. Had everyone been asked to cook, we'd have gotten the usual mélange of dishes, with some overachievers underachieving and vice versa, and no real sense of who the contenders are this season. Granted, no challenge will tell us everything about how the season is going to shape up, but it stands to reason that those with refined knife skills are very likely to be refined in other areas. And lo, a pair of tough-talking, "alpha-male" types emerge as the persons to beat: Kenny, who zips through three of the four stages in first place (including breaking down four whole chickens at a rate of 10 seconds per), and Angelo, who ultimately wins the Quickfire (and $20,000) with his roasted wing & thigh, curry onion jam, and potato noodles.
Playing the role of hubristic culinary hotshot to the hilt, Angelo also emerges as the early villain of Season Seven, announcing his absurd ambition "to be the first contestant to win every single challenge" and likening his cooking to "an orchestra with flavors." Still, I prefer brash, overconfident chefs who can cook to the brash, overconfident chefs who don't know what they're doing, and there's the expected abundance of the latter early on. The Elimination challenge asks the chefs to divide into four groups and create individual dishes that most strongly represent where they're from -- all for the 300 or so partygoers (including The Bachelor from The Bachelor!) celebrating cherry blossom season. With the best and worst from each group up for victory and elimination, respectively, the two top Quickfire competitors, Angelo and Kenny, are given the advantage of choosing someone weak to square off against. Tactically, it's only a minor edge, since nobody knows for sure how good or bad anyone else is yet; on a more personal level, it's likely to cause friction between the Quickfire champs and the chefs they assume suck the most.
Surprise, surprise, the chefs who make head-slappingly stupid decisions are punished for it later. You had to know Jacqueline was in trouble when she talked about her desire to prove that self-taught cooks who cater can win Top Chef. Then she opted to serve a non-fat (or very low-fat) chicken liver mousse, which evinced mystified "Why bother?" looks from the judges even before they discovered she hadn't strained the dish to cut back on the unpleasantly grainy texture. She was fortunate that bandana-donning wild man John was around to do something even dumber: Cook dessert, which has been such an Achilles' heel for contestants in the past that Top Chef has created a spin-off show strictly for pastry maestros. I don't care what deliciousness is oozing out of trees in Michigan: Maple mousse on puff pastry sounds sickly sweet in the best of circumstances. Add to that a pre-formed pastry -- which, as Gail notes, represents a full third of his three-component dish -- and John was doomed for an early exit.
As for the winners, Angelo so far makes good on his goal to win every challenge, orchestrating a bite of arctic char (with, sigh, a bacon foam) that wows the judges, and there's Kenny again playing bridesmaid with a nicely layered trout with a cinnamon coffee rub. Kevin, who had made the final four in the Quickfire, too, acquitted himself well enough to be in contention with his lamb sous vide. Of the four, I'm thinking Alex might emerge as a wild card possibility: Tom is usually quick to hate deconstructed anything, but Alex's dubious-sounding short rib/borscht combination allayed even his doubts. Too early to tell how the gelatinous blob that is the beginning of all Top Chef seasons will shape up, but at least there's some heat at the top.
Stray observations:
• Because Bravo sends out the first episode of every season to critics, this review was posted right after airing. Won't happen again. Future recaps will most likely be posted 90 minutes or so later.
• "Everything I do will be outstanding" -- Angelo, asking for it.
• Proof that Kevin must be an excellent chef: His Jersey restaurant is called Rat's, and it's still in business.
• As noted in one of last season's posts, I had the pleasure of dining at Seablue, where Stephen Hopcraft works as executive chef. (It's one of two Michael Mina restaurants at MGM Grand.) Stephen wound up on the bottom four for his deep-fried ribeye, but I suspect he won't reside there all season. Seablue offers a "fried" quartet of appetizers that includes a tasty "lobster corn dog," but what works for lobster apparently doesn't for beef.
• Great tweet from Washington Post political columnist and all-around handsome bastard Ezra Klein on the irony of the Elimination challenge: "I feel like a challenge based on 'representation' is a needlessly cruel way to start Top Chef dc. #novote" | {
"pile_set_name": "Pile-CC"
} |
Digital Learning
ACG School Jakarta students are well prepared to face the ever-changing world that we live in today. Information Technology is an integral part of day-to-day living; therefore, students need to be able to understand and utilise these tools effectively. Students will acquire the skills necessary to select and manage digital tools that will empower them in all phases of the learning process, including research, critical thinking, creative thinking,communication, self-management and collaboration. ACG School Jakarta is a centre of leadership and educational excellence with a strong vision for learning with technology.
iPads in Primary School
BYOD (Bring Your Own Device) in Secondary School
Digital Citizenship
ACG School Jakarta believes in a Digital Citizenship model for supporting safe and responsible use of the Internet in teaching and learning. An important part of this is that we are able to show others what responsible use looks like while we are using technology in our learning. We think a good digital citizen is someone who:
Is a confident and capable user of ICT.
Will use ICT for learning as well as other activities.
Will think carefully about whether the information they see online is true.
Will be able to speak the language of digital technologies.
Understands that they may experience problems when using technology but can deal with them.
Will always use ICT to communicate with others in positive ways.
Will be honest and fair in all their actions using ICT.
Will always respect people's privacy and freedom of speech online.
Will help others to become a better digital citizen.
Because we know this is important for us all, we ask everyone, including the staff, students, and volunteers working at the school to agree to use the Internet and other technologies in a safe and responsible way by following the rules laid out in the Responsible Use Agreement.
Managebac
ManageBac is an online planning, assessment, and reporting platform. At ACG School Jakarta, we use ManageBac to take attendance, record assessment data, plan our units, and report on student progress. Parents can login to ManageBac, where they can access student reports and attendance for their child(ren). Parent guidance to ManageBac is provided by the school. Speak to the Front Desk, DP Coordinator, or PYP Coordinator if you need support.
Students who enroll at ACG School Jakarta are eligible for Office 365 Education for free, including Word, Excel, PowerPoint, OneNote, and Microsoft Teams, plus additional classroom tools. Students use Office 365 in the classroom and learn a suite of skills and applications that employers value most. Whether it's Outlook, Word, PowerPoint, Access or OneNote, students will be prepared for their futures today.
A record of student learning is kept on the online Seesaw application. The student portfolio includes the evidence of learning in the form of photographs, videos, audio recordings and text documents. The emphasis is on the process of learning and will therefore include ongoing records, not just the final product.
Teachers post a weekly blog to Seesaw about the learning happening in the classroom. We encourage parents to view this weekly to see what has been taught as well as take note of any upcoming events.
Access for parents is available via the Seesaw Family application. For information about how you can access your child's portfolio, please contact their class teacher. Teachers and parents may also communicate via the Seesaw chat feature built into the application. | {
"pile_set_name": "Pile-CC"
} |
## run-all-benchmarks.pkg
# Compiled by:
# src/app/benchmarks/benchmarks.lib
stipulate
package bj = benchmark_junk; # benchmark_junk is from src/app/benchmarks/benchmark-junk.pkg
herein
package run_all_benchmarks {
#
fun run_all_benchmarks ()
=
{
r = ([]: List( bj::Benchmark_Result ));
# Linux times seem accurate to about 4ms
# so I tweak these to run about 400ms each
# to give us times accurate to roughly +-1%:
r = tagged_int_loop::run_benchmark 200000000 ! r; # tagged_int_loop is from src/app/benchmarks/tagged-int-loop.pkg
r = one_word_int_loop::run_benchmark 200000000 ! r; # one_word_int_loop is from src/app/benchmarks/one-word-int-loop.pkg
r = tagged_int_loops::run_benchmark 200 ! r; # tagged_int_loops is from src/app/benchmarks/tagged-int-loops.pkg
r = one_word_int_loops::run_benchmark 200 ! r; # one_word_int_loops is from src/app/benchmarks/one-word-int-loops.pkg
r = tagged_int_loop_with_overflow_trapping::run_benchmark 200000000 ! r; # tagged_int_loop is from src/app/benchmarks/tagged-int-loop.pkg
r = one_word_int_loop_with_overflow_trapping::run_benchmark 200000000 ! r; # one_word_int_loop is from src/app/benchmarks/one-word-int-loop.pkg
r = tagged_int_loops_with_overflow_trapping::run_benchmark 200 ! r; # tagged_int_loops is from src/app/benchmarks/tagged-int-loops.pkg
r = one_word_int_loops_with_overflow_trapping::run_benchmark 200 ! r; # one_word_int_loops is from src/app/benchmarks/one-word-int-loops.pkg
r = tagged_int_shellsort::run_benchmark 15000000 ! r; # tagged_int_shellsort is from src/app/benchmarks/tagged-int-shellsort.pkg
r = tagged_int_shellsort_no_bounds_checking::run_benchmark 15000000 ! r; # tagged_int_shellsort_no_bounds_checking is from src/app/benchmarks/tagged-int-shellsort.pkg
bj::summarize_all_benchmarks (reverse r);
};
my _ =
run_all_benchmarks ();
};
end;
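The calibration comment in the package above (tune each benchmark to run about 400ms so a ~4ms timer resolution contributes roughly ±1% error) is a general technique. Below is a hedged Python sketch of the same idea; the function name and thresholds are our own illustration, not part of this package:

```python
import time

def calibrate(f, target_s=0.4, timer_res_s=0.004):
    """Double the iteration count until one timed run of `f` takes at
    least target_s, so timer resolution contributes at most about
    timer_res_s / target_s relative error (~1% for 4ms / 400ms)."""
    n = 1
    while True:
        t0 = time.perf_counter()
        for _ in range(n):
            f()
        elapsed = time.perf_counter() - t0
        if elapsed >= target_s:
            # return the iteration count and a relative-error bound
            return n, timer_res_s / elapsed
        n *= 2

iters, err = calibrate(lambda: sum(range(100)))
print(err <= 0.01)  # True: elapsed >= 0.4s, so 0.004/elapsed <= 1%
```

The same pattern is what the hand-tuned constants (200000000, 200, 15000000) in the Mythryl source encode statically.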
No one wants open borders, right? Well, not exactly — and this USA Today column provides evidence that it's not entirely easy to pigeonhole its support. Jeffrey Miron, director of economic studies at the libertarian Cato Institute, argues that the ills of illegal immigration can all be solved by simply eliminating border enforcement altogether:
The solution to America's immigration problems is open borders, under which the United States imposes no immigration restrictions at all. If the U.S. adopts this policy, the benefits will far outweigh the costs. Illegal immigration will disappear, by definition. Much commentary on immigration — Trump and fellow travelers aside — suggests that legal immigration is good and that illegal immigration is bad. So, legalize all immigration. Government will then have no need to define or interpret rules about asylum, economic hardship, family reunification, family separation, DACA (Deferred Action for Childhood Arrivals) and so on. When all immigration is legal, these issues are irrelevant.
This position doesn't exactly come out of left field, pardon the pun, for libertarians at Cato or in other places. They tend to see most issues in terms of markets and economic outcomes. Most of Miron's argument follows that pattern, albeit in ambiguous broad strokes that never get much support.
For instance, Miron argues that open borders would "plausibly" generate more higher-skilled immigration, on the tenuous idea that backups in H-1B visas indicate a throttled demand. It might boost skilled immigration somewhat, but an open border on the south would likely incentivize many more to flood the border to escape the poverty and violence in Mexico and Central America, too. The drug cartels operating in those regions would be the first to take advantage of open borders too, an obvious point that Miron never bothers to address. Instead, he argues that the elimination of border enforcement would incentivize everyone to obey the law, "because they have shown respect for the law by not immigrating illegally." If there's no law to respect for immigration, how exactly does crossing the border show respect for it?
Speaking of the law, Miron says it's not worth even screening for terrorists:
Terrorists could well enter via open borders, but they do so now illicitly. Little evidence suggests that our immigration restrictions prevent terrorist attacks.
Actually, we do know that a lack of enforcement on tourist and business visas allowed some of the 9/11 terrorists to remain in the US while they plotted the murder of thousands. A lack of enforcement on student visas allowed at least one of the people charged as an accessory to the Boston Marathon bombing to stay in place. But this argument is nonsensical in two ways. First, how do we know that some turned away for security reasons weren't intending on terrorism? How do you prove that negative? Mostly, though, the argument that terrorists can enter illicitly is no more an argument for an end to enforcement than would be an argument to stop enforcing speed limits because people tend to break them, or to stop responding to domestic violence complaints because it doesn't stop people from reoffending.
Miron's argument takes a sneering turn when he dismisses the impact on American culture. In essence, he wonders why we bother saving it at all, emphasis mine:
U.S. culture will not change dramatically. America's immigrants have a long history of assimilation, and most have at least some affinity for American values. Indeed, the world is already more "Americanized" than ever. Even if values and culture change, so what? That happens in free societies. Who says America's current values — some of them deeply evil — are the right ones?
Yikes. Maybe this is a winning argument in think-tank circles, but most Americans like their culture and the shared values we have, among them the rule of law. One doesn't have to believe a culture is perfect to value it, after all, and at least our system of governance allows for those values to get debated and changed through the difficult but liberating process of self-governance. In that one sentence, Miron affirms what most people believe about the intent of the open-borders project — to fundamentally transform America into something very, very different.
Besides, which values does Miron want replaced, and by what? If Miron's selling open borders on the basis of replacing current American values, that's a legit question — and one has to wonder why a Cato Institute scholar seems so sanguine about importing the cultural values of those most likely to freely flow into the country, even apart from the obvious issues like drug cartels and multinational gangs. There aren't many libertarian bastions to our south, or for that matter to our east, west, or north either. His theory in practice would result in a field day for Democratic Socialists, for instance, but libertarians might regret the outcome of this policy, especially those concerned about "deeply evil" American values from a libertarian point of view.
Can we recalculate our immigration policy to make it more consistent, effective, and supportive of the rule of law? Of course we can, but the US already has one of the more generous immigration policies in the world, to our credit. However, most people want that generosity to be accessed properly within the law, as our previous election demonstrated — and most Americans are getting pretty tired of hearing about how their values are "deeply evil" in the context of people demanding to participate in them.
Glenea anticepunctata
Glenea anticepunctata is a species of beetle in the family Cerambycidae. It was described by James Thomson in 1857. It is known from Borneo, Sumatra, India and Malaysia.
Varietas
Glenea anticepunctata var. janthoides Breuning, 1956
Glenea anticepunctata var. mediovitticollis Breuning, 1956
Glenea anticepunctata var. obsoletepunctata (Thomson, 1857)
References
Category:Glenea
Category:Beetles described in 1857
Alfredo Romano is the first to admit a soul-crushing process that careens dangerously close to failure and burns years off your life can sometimes have a happy ending. At least that's his conclusion after overcoming one obstacle after another, in a journey at times resembling the Labours of Hercules, to build a new community in Toronto's old Junction Triangle.
"I'm the first to admit we achieved a better result from all the years of hard negotiating," says Mr. Romano, president of Castlepoint Numa, a Toronto real estate developer which also has large holdings on the city's waterfront.
When it's complete, Castlepoint's eight-acre site, called Lower JCT, is expected to be a vibrant, mixed-use community with 1.1 million square feet of office, retail, residential and cultural space. There will be a total of seven buildings, 32 townhomes, parks, affordable housing, a daycare and bike paths — all contained on a parcel of former industrial land sandwiched between GO train tracks and within walking distance of two subway stations and streetcar lines in the west-end Bloor Street-Dundas Street corridor.
Lower JCT is a joint venture between Castlepoint and Toronto-based private equity firm Greybrook Realty Partners, which has placed more than $1-billion of investments over the past several years, mainly in Toronto and Miami. Equitable Bank is providing construction financing for the first two phases of Lower JCT.
The Lower JCT project is located west of Toronto's core in the Bloor Street-Dundas Street corridor. (Erik Heinrich/The Globe and Mail)
The Tower Automotive Building, a 10-storey heritage structure erected in 1919, will remain the neighbourhood's signature motif and tallest building. It will be anchored by the Museum of Contemporary Art Toronto Canada (MOCA), which is relocating here at the end of the year into the first five floors (50,000 square feet) from a much smaller space on Queen Street West.
Mr. Romano envisages that a large number of his commercial tenants will be companies in the arts and digital media, drawn by the pull of MOCA. "I would like to say this was all inspired by a grand design, but that wasn't the case," says Mr. Romano, whose own career in real estate can be summed up with exactly the same words.
After finishing his master's degree in history and theology at Harvard, Mr. Romano was accepted at McMaster University in Hamilton to complete a multidisciplinary PhD in 1985. He never started because he had a life-changing epiphany while filing his tax return. He counted five T4 slips from as many teaching institutions, with no full-time position as a professor in sight. So at 31, Mr. Romano took a job on Bay Street and soon started Castlepoint with his cousin, Mario Romano.
By 2008, Castlepoint was a successful developer but on the losing end of a bid to construct film studios in the Port Lands on Toronto's eastern waterfront. It rebounded by buying the Junction property in the west end with plans to build rival studios. However, the Port Lands were soon in financial distress, so Castlepoint partnered with Paul Bronfman (whose Montreal family is famous for creating the Seagram liquor brand) and ROI Capital to buy out the owners. Today Pinewood Toronto Studios is a Hollywood-calibre facility with 10 sound stages and special effects capabilities.
The Tower Automotive Building, a 10-storey heritage structure erected in 1919, has been renamed Auto BLDG and will remain the signature motif of the site. (Erik Heinrich/The Globe and Mail)
With the Port Lands once again in play, Castlepoint had to find an alternate plan for its South Junction ambitions, and in 2011 submitted a blueprint to the City of Toronto. Nestlé Canada immediately objected to residential housing being built close to its chocolate factory on nearby Sterling Road. The city's planning staff agreed and Castlepoint's proposal was quashed almost as quickly as it had been unveiled. "We were in a pitched battle with Nestlé over the future direction of the area," Mr. Romano remembers. "It got nasty."
Resident groups supported the idea of revitalizing a desolate brownfield site that had been home to an Alcan aluminum factory for about 80 years, but certain conditions had to be met. At least a dozen meetings with stakeholders followed. Finally Castlepoint took its plan for the South Junction to the Ontario Municipal Board (OMB), which has the power to overrule municipalities in development matters.
All the stakeholders, including Nestlé, sat down at the same table and hashed out their differences. The key sticking points centred around the Tower Automotive Building. The city insisted no structure on the site exceed its height, and Nestlé wanted it rezoned to commercial from residential. Castlepoint conceded both points.
"The Automotive Building is an iconic structure and we've restored it to the highest standards," says Mr. Romano. "With so much character it's important that it maintain its prominence. That's the smart approach."
Simplify (((z**9/z)/z**(-17))**(4/5))**21 assuming z is positive.
z**420
Simplify (a*(a*a**8)/a*a**(1/9))/(((a*(a**(-11)*a)/a*a)/a)/a**5) assuming a is positive.
a**(217/9)
Simplify (u/(u/(u/((u/(u/(u*u/(u*u*u**(-2)*u))))/u))*u))**(-4)/((u**11*u)/u**(2/23)) assuming u is positive.
u**(-274/23)
Simplify ((z*z**(-18)/z)/z*z)/(z*z**(5/9)*z)*z**(-2/11)/z**(7/3) assuming z is positive.
z**(-2284/99)
Simplify ((p*p*(p*p**8)/p)/p*p*p/p**3*(p**(1/3))**(6/25))**14 assuming p is positive.
p**(2828/25)
Simplify (a*a**(-24/5))**(-14)/(a**(-2/3))**(2/111) assuming a is positive.
a**(88598/1665)
Simplify (((m/(m/m**(1/13)))/(m/(m**(2/33)*m)))**(-40))**27 assuming m is positive.
m**(-21240/143)
Simplify (z/z**(-6))**(-1/14)/(z**(1/2)/z**(-13)) assuming z is positive.
z**(-14)
Simplify (n*n/n**13*n*n**(8/9))/(n/(((n*n**(-1))/n)/n*n*n*n))**(-8) assuming n is positive.
n**(-82/9)
Simplify (d**(8/9)/(d*d**10*d))/(d**(-10)/(d/(d*d*d/(d*d**(3/11))))) assuming d is positive.
d**(-182/99)
Simplify (v**4)**(4/11)/((v/v**(2/17))/(v/v**16)) assuming v is positive.
v**(-2698/187)
Simplify ((i**(-5)*i*i)/i)**29*((i/i**5*i)/i)/i**17 assuming i is positive.
i**(-137)
Simplify (g/((g*g**7)/g))**(-10/9)*g**(-4)*g/((g*g**15)/g) assuming g is positive.
g**(-34/3)
Simplify (((l*l/(l/l**3))/l)/(l**(-1/4)*l)*((l/(l**2/l))/l)/(((l*l*l**(-4/9)*l)/l)/l))**(1/2) assuming l is positive.
l**(25/72)
Simplify ((d**(-3))**(-24)/(d**(-12)/(d/d**(1/4))))**(-1/5) assuming d is positive.
d**(-339/20)
Simplify (d**14/d*d)/(d**11*d*d)*d*(d/d**(-1/4))/d*d*d*d**8 assuming d is positive.
d**(49/4)
Simplify ((j*j**(-13/3)/j)/j**20)/(j/(j**(-7)*j)*(j*j*j**(-1/18)*j*j)/j) assuming j is positive.
j**(-617/18)
Simplify ((y*y**1)**(2/119)*y*y**(-6)*y*(y*y**(-3/5)/y)/y)**(1/3) assuming y is positive.
y**(-1104/595)
Simplify (t**(1/4))**(-1/15)*(t**(-3/8))**(-5) assuming t is positive.
t**(223/120)
Simplify (p*p*p/p**(7/2))**(1/31)/((p/(p*p**3/p))/p)**(35/2) assuming p is positive.
p**(1627/31)
Simplify ((((w**(1/3)*w*w)/w)**(-6/29))**(-16/7))**(3/8) assuming w is positive.
w**(48/203)
Simplify ((p**(-1/19)*p*p)/(p/((p*(p/(p/((p**(-10/3)/p)/p)))/p)/p)*p))/(p*p**(1/4)*p*p**(-23)) assuming p is positive.
p**(3275/228)
Simplify ((q**11)**(2/17))**(1/44) assuming q is positive.
q**(1/34)
Simplify ((o**(1/4)/o*o)/o)**(-38)*(o*o**(-13))/(o/(o/(o**(-2/63)*o))) assuming o is positive.
o**(1957/126)
Simplify ((w*((w**(-3/7)/w)/w*w)/w)**44)**30 assuming w is positive.
w**(-13200/7)
Simplify ((l*l**(-1/2))**(2/25)/(l**(-1)/(l*l/l**(1/2)*l)))**(-14/5) assuming l is positive.
l**(-1239/125)
Simplify (d**1*d**(-1/3)/d*d**5/d**(-6))**33 assuming d is positive.
d**352
Simplify (t**11/t*t**(1/4)*t)/(((t*t/t**(-2/31))/t)/t*t*((t*t*t*t**5*t*t)/t*t)/t) assuming t is positive.
t**(147/124)
Simplify (d**(-2/7))**(-20/9)*(d**(-1/2))**28 assuming d is positive.
d**(-842/63)
Simplify (d*d/(d**(9/5)*d))/d**6*d/d**(-3/13)*d/d**9 assuming d is positive.
d**(-882/65)
Simplify (((d*((d**(-9/4)*d)/d)/d)/(d**(-2)/d))**(-4/5))**(-10) assuming d is positive.
d**6
Simplify (r*r/(r**10*r))/r**(-1)*r**9/(r*r**6*r*r) assuming r is positive.
r**(-8)
Simplify u**(-2/15)*u/u**20*u**8*(u/u**(-1/2))/u assuming u is positive.
u**(-319/30)
Simplify s*s*s/s**14*s*s/(s/(s*(s**13*s*s)/s))*s*s*(s**4)**(-43) assuming s is positive.
s**(-165)
Simplify ((g**(-2)*g*g*g)**(-2/17)*((g*g**2)/g*g)**(-9))**(-1/7) assuming g is positive.
g**(461/119)
Simplify (g/(g*g*g**5*g))/g*g*(g**12*g)/g*(g/(g**(2/3)*g))/(g**(-6)/g) assuming g is positive.
g**(34/3)
Simplify (r*r**(-3)*r)**16*r*((r/(r*r*r**8*r*r)*r)/r)/r*r**(-1) assuming r is positive.
r**(-28)
Simplify (x*x**(-11))/(x/x**(-17/2)*x*x*x*x)*x**(-23)/x*x*x**24/x assuming x is positive.
x**(-47/2)
Simplify (x**(-1/6))**(9/7)*(x/x**0)/x**(3/2) assuming x is positive.
x**(-5/7)
Simplify (b**(-4)*b/b**(1/10)*(b**(2/3)*b)**(-35))**(29/3) assuming b is positive.
b**(-53447/90)
Simplify ((n/((n*n*n**(-1/2))/n)*n)/n**6)/((n*n*n**8)/(n**(-1/4)/n)) assuming n is positive.
n**(-63/4)
Simplify (m/m**(-2/47)*m)/((m/m**31)/m)*(m**(-5/12)/m)**(-8) assuming m is positive.
m**(6257/141)
Simplify (q*q**(-7)/q)**(-1)/(q*q**(-1/15)*q*q**(-1/21)*q) assuming q is positive.
q**(144/35)
Simplify n**(-4)*n*n**17*n*n*(n*n*(n*n**(-11)/n)/n)/n**(-2/11) assuming n is positive.
n**(68/11)
Simplify i**13*i*i*i**(-2/9)*i*i*(i/(i/i**(-3/14)))/(i*i**11) assuming i is positive.
i**(575/126)
Simplify ((g**(-1/2))**(-9/7)*g/(g/(g*g**(-7)/g))*g*(g/(g*(g/((g/g**(1/7))/g))/g))/g*g)**(13/2) assuming g is positive.
g**(-143/4)
Simplify (y**(10/3)/y*y**(-29))**(-5) assuming y is positive.
y**(400/3)
Simplify t**(-3/11)*t**(-6/5)*(t*t**(-1/10))/(t/(t**(-2/7)*t)) assuming t is positive.
t**(-661/770)
Simplify (((l**(-1/32)/l)/l)/l**9)/((l*l*l/l**(-14))/(l/(l*(l/(l/l**(1/11))*l)/l)*l)) assuming l is positive.
l**(-9547/352)
Simplify (c*c/(c*c**(1/5))*c)**35*c**(5/4)/((c**(11/5)*c)/c) assuming c is positive.
c**(1241/20)
Simplify x**5/(x**(7/3)*x)*(x/(x**(-2/5)/x)*x)**(5/2) assuming x is positive.
x**(61/6)
Simplify j**1*j*j**(-2/9)*j/((j/(j*j**(-1/3)))/j*j)*j/j**(2/5)*j assuming j is positive.
j**(182/45)
Simplify h*h/(h/(h**14/h)*h*h)*h/(h/(h/h**12))*h**(-11)/(((h**(-4)*h)/h)/h) assuming h is positive.
h**(-5)
Simplify (k/(k*k**1)*k/(k/(k/(k/k**21))))**(5/16) assuming k is positive.
k**(25/4)
Simplify ((n**(-3/5))**(-1/4))**(-1/19) assuming n is positive.
n**(-3/380)
Simplify (j/j**(-2/41))/j**(-20)*(j**(2/19))**(-36) assuming j is positive.
j**(13445/779)
Simplify (k**(-33)*k**(-2/15))/((k**(-1/27)/k)/k**(-32)) assuming k is positive.
k**(-8653/135)
Simplify (((a*a/(a**(-2)*a))/a)/(((a/((a**(-3/4)/a)/a))/a)/a))/(a*a*a*a**(-6)*a)**(-2/47) assuming a is positive.
a**(31/188)
Simplify (q**(-3/2)/(q/q**(1/11))*q*q/(q/(q/(q/(q*q*q**(-2/3)/q))))*q/(q/((q*q**(-2/7))/q))*q)**29 assuming q is positive.
q**(-4843/462)
Simplify (o**(2/5)/o*o*o)**(-13)/((o**(-4/5)*o*o)/o**11) assuming o is positive.
o**(-42/5)
Simplify ((c*c/(c**(-31)/c)*c)/c**(2/7))**(1/4) assuming c is positive.
c**(243/28)
Simplify h/(h/(h*h*h**6/h))*h*h**(1/3)*h**(2/37)*h*h*h**(-4)*h*h*h*h assuming h is positive.
h**(1153/111)
Simplify ((v/v**(2/5))**(1/56)/(v*v*v/(v**(-3)*v)*v*v**(-5/4)*v*v))**41 assuming v is positive.
v**(-77367/280)
Simplify ((t**(-2/5)*t*t/(t*t**(2/5)*t))/(t**(-1))**(12/7))**(-10) assuming t is positive.
t**(-64/7)
Simplify ((f**(3/5))**(3/5))**(1/10) assuming f is positive.
f**(9/250)
Simplify (r**(1/4)/r)**(-2/13)*(r**(2/9))**(-2/107) assuming r is positive.
r**(2785/25038)
Simplify (b**(-2/5)*b*b**(-2/9))/(b*b**(-1/33))**(-2/11) assuming b is positive.
b**(3017/5445)
Simplify ((y**(2/5)/y)/y*y)**(3/7)/((y**(2/27)/y)/(y**(1/3)*y)) assuming y is positive.
y**(1892/945)
Simplify (d**(-1/13))**2/(d**(2/21))**(7/5) assuming d is positive.
d**(-56/195)
Simplify o*o/o**(-16)*o**(-2)*(o/(((o*o**(1/2))/o)/o))**(-23/2) assuming o is positive.
o**(-5/4)
Simplify (t**(-2/11)*t/(t/(t*(t**(-24)*t)/t)))/(t*t**(-1/2)*t*t**(-8)*t) assuming t is positive.
t**(-389/22)
Simplify (g*g**(-2/21)/g)/(g*g**8*g*g)*(((g*g/(g*g*g**(-2/35)*g))/g)/g*g)/(g/g**(2/25)) assuming g is positive.
g**(-7328/525)
Simplify ((i**(-11/6)*i*i)/(i/i**(-11)))/(i*i**1*i*i/(i/i**(2/57))) assuming i is positive.
i**(-565/38)
Simplify d**(2/9)*(d/d**(11/3))/d*d*((d*d*d/(d/d**(-11)))/d)/(d**(4/5)/d*d) assuming d is positive.
d**(-596/45)
Simplify (t**(-5/8)*t**8)/(((t*t**(9/5))/t)/t*t*t**(-23)) assuming t is positive.
t**(1143/40)
Simplify ((w**(-11)*w)/w*w*w**3/w)**(4/11) assuming w is positive.
w**(-32/11)
Simplify ((h/((h/(h**(-1/2)/h))/h))**20*(h*h**(-3))/((h/(h*h**(-2)*h))/h))**(-26) assuming h is positive.
h**312
Simplify ((r*r*r/(r/(r/(r/r**(-2/91))*r)))/r*r**(-2/21))**(-2/97) assuming r is positive.
r**(-1028/26481)
Simplify y**(-3)/y**(2/9)*(y*(y/y**(3/7))/y)**(-5/9) assuming y is positive.
y**(-223/63)
Simplify (w/(w*(w**7/w)/w*w))**20/(w**(-2/9))**(-3/10) assuming w is positive.
w**(-1801/15)
Simplify ((b*b**21*b)/b*b**(-9/4))/(b/b**(7/3))**(2/21) assuming b is positive.
b**(5009/252)
Simplify ((t/(t*t**(-2/13)))**(3/10)/(t**(2/13))**(3/4))**12 assuming t is positive.
t**(-54/65)
Simplify ((j/(j/(j/j**17)))**(-5/2))**(-5) assuming j is positive.
j**(-200)
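Exercises of this kind reduce to exact rational arithmetic on the exponents. As an illustrative sketch, the first problem in this batch — (((z**9/z)/z**(-17))**(4/5))**21 — collapses to a single chain of Fraction operations:

```python
from fractions import Fraction as F

# (((z**9/z)/z**(-17))**(4/5))**21 -> track the exponent of z
e = F(9) - F(1)    # z**9 / z
e = e - F(-17)     # ... / z**(-17)
e = e * F(4, 5)    # (...)**(4/5)
e = e * F(21)      # (...)**21
print(e)           # 420, i.e. the answer z**420
```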
Simplify (i**(8/3)*i**(3/23))/(i**16)**34 assuming i is positive.
i**(-37343/69)
---
abstract: 'We give a detailed analysis of the proportion of elements in the symmetric group on $n$ points whose order divides $m$, for $n$ sufficiently large and $m\geq n$ with $m=O(n)$.'
address: |
School of Mathematics and Statistics,\
University of Western Australia,\
Nedlands, WA 6907\
Australia.
author:
- 'Alice C. Niemeyer'
- 'Cheryl E. Praeger'
date: '31 March 2006.'
title: On Permutations of Order Dividing a Given Integer
---
Introduction
============
The study of orders of elements in finite symmetric groups goes back at least to the work of Landau [@Landau09 p. 222] who proved that the maximum order of an element of the symmetric group $S_n$ on $n$ points is $e^{(1+o(1))(n\log n)^{1/2}}$. Erdős and Turán took a probabilistic approach in their seminal work in the area, proving in [@ErdosTuran65; @ErdosTuran67] that, for a uniformly distributed random element $g\in S_n$, the random variable $\log|g|$ is normally distributed with mean $(1/2) \log^2n$ and standard deviation $\frac{1}{\sqrt{3}} \log^{3/2}(n)$. Thus most permutations in $S_n$ have order considerably larger than $O(n)$. Nevertheless, permutations of order $O(n)$, that is, of order at most $cn$ for some constant $c$, have received some attention in the literature. Let $P(n,m)$ denote the proportion of permutations $g\in S_n$ which satisfy $g^m = 1$, that is to say, $|g|$ divides $m$. In 1952 Chowla, Herstein and Scott [@Chowlaetal52] found a generating function and some recurrence relations for $P(n,m)$ for $m$ fixed, and asked for its asymptotic behaviour for large $n$. Several years later, Moser and Wyman [@MoserWyman55; @MoserWyman56] derived an asymptotic for $P(n,m)$, for a fixed prime number $m$, expressing it as a contour integral. Then in 1986, Wilf [@Wilf86] obtained explicitly the limiting value of $P(n,m)$ for an arbitrary fixed value of $m$ as $n\rightarrow\infty$, see also the paper [@Volynets] of Volynets. Other authors have considered equations $g^m=h$, for a fixed integer $m$ and $h\in S_n$, see [@BouwerChernoff85; @GaoZha; @MineevPavlov76a; @MineevPavlov76b].
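For modest $n$, the quantity $P(n,m)$ can be computed exactly by conditioning on the length $d$ of the cycle containing a fixed point, which gives the recurrence $a(n)=\sum_{d\mid m,\, d\le n}\frac{(n-1)!}{(n-d)!}\,a(n-d)$ with $a(0)=1$ and $P(n,m)=a(n)/n!$. The following Python sketch is our own illustration of this standard recurrence (the function names are not from the literature):

```python
from fractions import Fraction
from math import factorial

def count_order_dividing(n, m):
    """a(n): the number of g in S_n with g^m = 1.  The point n lies in a
    cycle of length d dividing m, chosen in C(n-1, d-1)*(d-1)! ways,
    i.e. (n-1)!/(n-d)! ways; the rest contributes a(n-d)."""
    divisors = [d for d in range(1, m + 1) if m % d == 0]
    a = [1]  # a(0) = 1: the empty permutation
    for k in range(1, n + 1):
        a.append(sum(factorial(k - 1) // factorial(k - d) * a[k - d]
                     for d in divisors if d <= k))
    return a[n]

def P(n, m):
    return Fraction(count_order_dividing(n, m), factorial(n))

print(count_order_dividing(7, 7))  # 721: the identity plus the 6! seven-cycles
```

For instance $P(7,7)=721/7!$, matching the fact that the elements of order dividing $7$ in $S_7$ are the identity and the $7$-cycles.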
However in many applications, for example in [@Bealsetal03], the parameters $n$ and $m$ are linearly related, so that $m$ is unbounded as $n$ increases. For the special case where $m=n$, Warlimont [@Warlimont78] showed in 1978 that most elements $g\in S_n$ satisfying $g^n=1$ are $n$-cycles, namely he proved that $P(n,n)$, for $n$ sufficiently large, satisfies $$\frac{1}{n} + \frac{2c}{n^2} \le P(n,n) \le \frac{1}{n} + \frac{2c}{n^2} +
O\left(\frac{1}{n^{3-o(1)}}\right)$$ where $c =1$ if $n$ is even and $c=0$ if $n$ is odd. Note that the proportion of $n$-cycles in $S_n$ is $1/n$ and, if $n$ is even, the proportion of elements that are a product of two cycles of length $n/2$ is $2/n^2$. Warlimont's result proves in particular that most permutations satisfying $g^n=1$ are $n$-cycles. More precisely it implies that the conditional probability that a random element $g\in S_n$ is an $n$-cycle, given that $g^n
=1$, lies between $1-2c n^{-1} - O(n^{-2+o(1)})$ and $1-2c n^{-1} +
O(n^{-2})$.
The main results of this paper, Theorems \[leadingterms\] and \[bounds\], generalise Warlimont's result, giving a detailed analysis of $P(n,m)$ for large $n$, where $m=O(n)$ and $m\geq n$. For this range of values of $n$ and $m$, we have $rn\leq m<(r+1)n$ for some positive integer $r$, and we analyse $P(n,m)$ for $m$ in this range, for a fixed value of $r$ and $n\rightarrow\infty$. It turns out that the kinds of elements that make the largest contribution to $P(n,m)$ depend heavily on the arithmetic nature of $m$, for example, on whether $m$ is divisible by $n$ or by $r+1$. We separate out several cases in the statement of our results. Theorem \[leadingterms\] deals with two cases for which we give asymptotic expressions for $P(n,m)$. The first of these reduces in the case $m=n$ to Warlimont's theorem [@Warlimont78] (modulo a small discrepancy in the error term). For other values of $m$ lying strictly between $rn$ and $(r+1)n$ we obtain in Theorem \[bounds\] only an upper bound for $P(n,m)$, since the exact value depends on both the arithmetic nature and the size of $m$ (see also Remark \[remark:leadinterms\]).
\[leadingterms\] Let $n$ and $r$ be positive integers. Then for a fixed value of $r$ and sufficiently large $n$, the following hold.
1. $\displaystyle{
P(n,rn)=\frac{1}{n}+\frac{c(r)}{n^2}
+O\left(\frac{1}{n^{2.5-o(1)}}\right)
}$ where $c(r)=\sum
(1+\frac{i+j}{2r})$ and the sum is over all pairs $(i,j)$ such that $1\leq i,j\leq r^2,
ij =r^2,$ and both $r+i, r+j$ divide $rn$. In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even.
2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then $$P(n,m)=\frac{1}{n}+\frac{t+c'(r)}{n^2}+O\left(\frac{1}{n^{2.5-o(1)}}
\right)$$ where $c'(r)=\sum(1+\frac{i+j-2}{2(r+1)})$ and the sum is over all pairs $(i,j)$ such that $1< i,j\leq (r+1)^2,
(i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j$ divide $m$.
\[bounds\] Let $n,m,r$ be positive integers such that $rn< m<(r+1)n$, and ${{\delta}}$ a real number such that $0<{{\delta}}\leq 1/4$. Then for a fixed value of $r$ and sufficiently large $n$, $$P(n,m)\leq \frac{\alpha.(r+1)}{m}+\frac{k(r)}
{n^2}+ O\left(\frac{1}{n^{2.5-2{{\delta}}}}\right)$$where $k(r) = \frac{4(r+3)^4}{r^2}$ and $$\alpha=\left\{\begin{array}{ll}
1&\mbox{if $r+1$ divides $m$ and $n-\frac{m}{r+1}
< \frac{m}{2(r+1)(r+2)-1}$}\\
0&\mbox{otherwise.}
\end{array}\right.$$
\[remark:leadinterms\]
\(a) In Theorem \[leadingterms\](a), the leading term $1/n$ is the proportion of $n$-cycles, while the proportion of permutations containing an $(n-t)$-cycle is $\frac{1}{n-t} = \frac{1}{n} +
\frac{t}{n^2} + O(\frac{1}{n^3})$, which contributes to the first two terms in Theorem \[leadingterms\](b). The terms $\frac{c(r)}{n^2}$ and $\frac{c'(r)}{n^2}$ correspond to permutations in $S_n$ that have two long cycles, and these have lengths $\frac{m}
{r+i}$ and $\frac{m}{r+j}$, for some $(i,j)$ satisfying the conditions in Theorem \[leadingterms\] (a) or (b) respectively, (where $m=rn$ in part (a)).
\(b) In Theorem \[bounds\], if $r+1$ divides $m$ and $n-m/(r+1)<\frac{m}{2(r+1)(r+2)-1}$, then the term $(r+1)/m$ comes from elements containing a cycle of length $m/(r+1)$. The term $\frac{k(r)}{n^2}$ corresponds to permutations with exactly two "large" cycles. More details are given in Remark \[rem:general\].
Our interest in $P(n,m)$ arose from algorithmic applications concerning finite symmetric groups. For example, $n$-cycles in $S_n$ satisfy the equation $g^n=1$, while elements whose cycle structure consists of a 2-cycle and a single additional cycle of odd length $n-t$, where $t = 2$ or $3$, satisfy the equation $g^{2(n-t)} =1$. For an element $g$ of the latter type we can construct a transposition by forming the power $g^{n-t}$. In many cases the group $S_n$ is not given as a permutation group in its natural representation, and, while it is possible to test whether an element $g$ satisfies one of these equations, it is often impossible to determine its cycle structure with certainty. It is therefore important to have lower bounds on the conditional probability that a random element $g$ has a desired cycle structure, given that it satisfies an appropriate equation. Using Theorem \[leadingterms\], we obtained the following estimates of various conditional probabilities.
\[cdnlprobs1\] Let $r, n$ be positive integers and let $g$ be a uniformly distributed random element of $S_n$. Then for a fixed value of $r$ and sufficiently large $n$, the following hold, where $c(r)$ and $c'(r)$ are as in Theorem $\ref{leadingterms}$.
1. The conditional probability $P$ that $g$ is an $n$-cycle, given that $|g|$ divides $rn$, satisfies $$\begin{aligned}
1-\frac{c(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& P
\leq 1-\frac{c(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\\\end{aligned}$$
2. If $r=t!-1$, then the conditional probability $P$ that $g$ contains an $(n-t)$-cycle, given that $|g|$ divides $t!(n-t)$, satisfies $$\begin{aligned}
1-\frac{c'(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& P
\leq 1-\frac{c'(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\\\end{aligned}$$
We note that Theorem \[leadingterms\] improves the upper bound of $(1+o(1))/n$ obtained in [@Bealsetal03 Theorem 3.7], while Corollary \[cdnlprobs1\] improves the corresponding lower bound of $1-o(1)$ of [@Bealsetal03 Theorem 1.3(a)]. These results have been developed and refined further in [@NiemeyerPraeger05b] to derive explicit "non-asymptotic" bounds that hold for all $n$ and can be applied directly to improve the recognition algorithms for $S_n$ and $A_n$ in [@Bealsetal03].
**Commentary on our approach**
Warlimont's proof in [@Warlimont78] of an upper bound for $P(n,n)$ and the proof of [@Bealsetal03 Theorem 3.7] by Beals and Seress of an upper bound for $P(n,m)$ for certain values of $m$, rely on dividing the elements of $S_n$ into disjoint unions of smaller sets. Warlimont divides the elements according to how many "large" cycles a permutation contains. Fix a real number $s$ such that $1/2 < s <
1$. We say that a cycle of a permutation in $S_n$ is *$s$-small* if its length is strictly less than $n^s$, and is *$s$-large* otherwise. Beals and Seress divide the elements according to the number of cycles in which three specified points lie. Both strategies are sufficient to prove Warlimont's result or the slightly more general results of [@Bealsetal03 Theorem 3.7]. However, neither is sufficient to prove the general results in this paper. In particular, Warlimont's approach breaks down when trying to estimate the proportion of elements with no or only one large cycle, which is perhaps why no progress has been made since his paper [@Warlimont78] towards answering Chowla, Herstein and Scott's original question about the asymptotic behaviour of $P(n,m)$ for large $n$. One of the key ideas that allowed us to generalise Warlimont's work is the insight that the number of permutations which contain no $s$-large cycles can be estimated by considering their behaviour on three specified points. Another important strategy is our careful analysis of elements containing only one large cycle by separating out divisors of $m$ which are very close to $n$.
We regard Theorem \[lem:props\] below as the main outcome of the first stage of our analysis. It is used in the proof of Theorem \[leadingterms\]. The statement of Theorem \[lem:props\] involves the number $d(m)$ of positive divisors of $m$, and the fact that $d(m)=m^{o(1)}$, see Notation \[notation\] (c). It estimates the proportion $P_0(n,m)$ of elements of $S_n$ of order dividing $m$ and having no $s$-large cycles.
\[lem:props\] Let $n,m$ be positive integers such that $m\geq n$, and let $s$ be a positive real number such that $1/2<s<1$. Then, with $P_0(n,m)$ as defined above, there is a constant $c$ such that $$P_0(n,m)<\frac{c d(m)m^{2s}}{n^3}=O\left(\frac{m^{2s+o(1)}}{n^3}\right).$$
Theorem \[lem:props\] is proved in Section \[sec:proportions\] and the other results are proved in Section \[sec:stheo\].
Proof of Theorem \[lem:props\] {#sec:proportions}
==============================
In this section we introduce some notation that will be used throughout the paper, and we prove Theorem \[lem:props\]. Note that the order $|g|$ of a permutation $g \in S_n$ divides $m$ if and only if the length of each cycle of $g$ divides $m$. Thus $P(n,m)$ is the proportion of elements in $S_n$ all of whose cycle lengths divide $m$. As indicated in the introduction, we estimate $P(n,m)$ by partitioning this proportion in various ways. Sometimes the partition is according to the number of large cycle lengths, and at other times it is defined in terms of the cycles containing certain points. We specify these partitions, and give some other notation, below.
\[notation\]
The numbers $n,m$ are positive integers, and the symmetric group $S_n$ acts naturally on the set $\Omega=\{1,2,\dots,n\}$.
1. $s$ is a real number such that $1/2 < s < 1$. A divisor $d$ of $m$ is said to be $s$-*large* or $s$-*small* if $d \geq m^{s}$ or $d < m^s$, respectively; $D_\ell$ and $D_s$ denote the sets of all $s$-large and $s$-small divisors $d$ of $m$, respectively, such that $d \le n$.
2. For $g\in S_n$ with order dividing $m$, a $g$-cycle of length $d$ is called $s$-*large* or $s$-*small* according as $d$ is an $s$-large or $s$-small divisor of $m$.
3. $d(m)$ denotes the number of positive divisors of $m$ and $\delta$ and $c_\delta$ are positive real numbers such that $\delta < s$ and $d(m) \le c_\delta m^{\delta}$ for all $m \in {\bf{N}}$.
4. The following functions of $n$ and $m$ denote the proportions of elements $g\in S_n$ of order dividing $m$ and satisfying the additional properties given in the last column of the table below.
--------------------- ---------------------------------------------
$P_0(n,m)$ all $g$-cycles are $s$-small
${P_0^{(1)}}(n,m)$ all $g$-cycles are $s$-small and
$1,2,3$ lie in the same $g$-cycle,
${P_0^{(2)}}(n,m)$ all $g$-cycles are $s$-small and
$1,2,3$ lie in exactly two $g$-cycles
${P_0^{(3)}}(n,m)$ all $g$-cycles are $s$-small and
$1,2,3$ lie in three different $g$-cycles
$P_1(n,m)$ $g$ contains exactly one $s$-large cycle
$P_2(n,m)$ $g$ contains exactly two $s$-large cycles
$P_3(n,m)$ $g$ contains exactly three $s$-large cycles
${P_{\geq 4}}(n,m)$ $g$ contains at least four $s$-large cycles
--------------------- ---------------------------------------------
With respect to part (c) we note (see [@NivenZuckermanetal91 pp. 395-396]) that for each $\delta >
0$ there exists a constant $c_\delta > 0$ such that $d(m) \le c_\delta
m^\delta$ for all $m \in {\bf{N}}$. Thus the parameter $\delta$ may be taken to be any positive real number, and in particular $d(m) = m^{o(1)}.$
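The divisor function is cheap to compute, so the slow growth of $d(m)$ is easy to observe empirically. The sketch below (ours; the paper itself uses only the analytic bound) counts divisors by trial division up to $\sqrt{m}$.

```python
def num_divisors(m):
    """d(m): the number of positive divisors of m, counted by trial
    division up to sqrt(m); divisors below and above sqrt(m) pair up."""
    count, i = 0, 1
    while i * i <= m:
        if m % i == 0:
            count += 2 if i * i < m else 1
        i += 1
    return count
```

For example $d(12)=6$ and $d(36)=9$, while the maximum of $d(m)$ over $m\le 1000$ is only $32$ (attained at $m=840=2^3\cdot3\cdot5\cdot7$), illustrating that $d(m)$ grows far more slowly than any fixed power of $m$.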
Note that $$\label{eq-pi}
P_0(n,m) = {P_0^{(1)}}(n,m) + {P_0^{(2)}}(n,m) + {P_0^{(3)}}(n,m)$$ and $$\label{eq-qi}
P(n,m) = P_0(n,m) + P_1(n,m) + P_2(n,m) + P_3(n,m)+{P_{\geq 4}}(n,m).$$ We begin by deriving recursive expressions for the $P_0^{(i)}(n,m)$.
\[lem:theps\] Using Notation $\ref{notation}$, the following hold, where we take $P_0(0,m) = 1.$
1. $\displaystyle{{P_0^{(1)}}(n,m) = \frac{(n-3)!}{n!}
\sum_{d \in D_s,\ d\ge 3}{(d-1)(d-2)}P_0(n-d,m),}$
2. $\displaystyle{
{P_0^{(2)}}(n,m) = \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s }{2\le
d_2,\ d_1+d_2\le n}} (d_2-1)P_0(n-d_1-d_2,m)}$,
3. $\displaystyle{
{P_0^{(3)}}(n,m) = \frac{(n-3)!}{n!} \sum_{\stackrel{d_1,d_2,d_3\in D_s
}{d_1+d_2+d_3 \le n}}
P_0(n-d_1-d_2 -d_3,m)}$.
We first compute ${P_0^{(1)}}(n,m)$, the proportion of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which the points $1, 2, 3$ are contained in one $g$-cycle, $C$ say, of length $d$ with $d \in D_s$ and $d\geq 3.$ We can choose the remainder of the support set of $C$ in $\binom{n-3}{d-3}$ ways and then the cycle $C$ in $(d-1)!$ ways. The rest of the permutation $g$ can be chosen in $P_0(n-d,m)(n-d)!$ ways. Thus, for a given $d$, the number of such elements is $(n-3)!(d-1)(d-2)P_0(n-d,m)$. We obtain the proportion ${P_0^{(1)}}(n,m)$ by summing over all $d\in D_s$ with $d\geq3$, and then dividing by $n!$, so part (a) is proved.
Next we determine the proportion ${P_0^{(2)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which one of the points $1, 2, 3$ is contained in a $g$-cycle $C_1$, and the other two of these points are contained in a different $g$-cycle $C_2$. Let $d_1$ and $d_2$ denote the lengths of the cycles $C_1$ and $C_2$, respectively, so $d_1, d_2\in D_s$ and $d_2 \ge 2.$ Firstly we choose the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways. Secondly we choose the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-2}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways. Finally, the rest of the permutation $g$ is chosen in $P_0(n-d_1
-d_2,m)(n-d_1-d_2)!$ ways. Thus, for a given pair $d_1, d_2$, the number of these elements is $(n-3)!(d_2-1)P_0(n-d_1-d_2,m)$. Since there are three choices for $C_1\cap\{ 1, 2, 3\}$, we have $$\begin{aligned}
{P_0^{(2)}}(n,m) & = & \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s}{2\le
d_2,\ d_1+d_2 \le n}} (d_2-1) P_0(n-d_1-d_2,m). \\ \end{aligned}$$ Finally we consider the proportion ${P_0^{(3)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which each one of the points $1, 2, 3$ is contained in a separate $g$-cycle, say $C_i$ contains $i$ and $C_i$ has length $d_i \in D_s$. We can choose, in order, the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways, the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-1}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways, the support set of $C_3$ in $\binom{n-d_1 -d_2 -1}{d_3-1}$ ways and the cycle $C_3$ in $(d_3-1)!$ ways, and the rest of the permutation in $P_0(n-d_1-d_2-d_3,m)(n-d_1-d_2-d_3)!$ ways. The expression for ${P_0^{(3)}}(n,m)$ in part (c) now follows.
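The recursion in part (a) can be verified numerically for small parameters. The sketch below (our check, not part of the proof) computes both sides of Lemma \[lem:theps\](a) by brute force: the left side by enumerating $S_n$ directly, the right side from the stated formula with $P_0$ evaluated by enumeration.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def cycles_of(p):
    """Cycles of a 0-indexed permutation tuple, as lists of points."""
    seen, cycles = set(), []
    for i in range(len(p)):
        if i in seen:
            continue
        cyc, j = [], i
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = p[j]
        cycles.append(cyc)
    return cycles

def small_divs(m, s):
    """The s-small divisors of m: divisors d with d < m**s."""
    return [d for d in range(1, m + 1) if m % d == 0 and d < m ** s]

def P0_brute(n, m, s):
    """P_0(n,m): proportion of g in S_n with |g| dividing m and all
    cycles s-small, by enumeration (P_0(0,m) = 1 falls out for free)."""
    D = set(small_divs(m, s))
    count = sum(1 for p in permutations(range(n))
                if all(len(c) in D for c in cycles_of(p)))
    return Fraction(count, factorial(n))

def P01_brute(n, m, s):
    """P_0^{(1)}(n,m): as P_0, but with points 0,1,2 in one g-cycle."""
    D = set(small_divs(m, s))
    count = 0
    for p in permutations(range(n)):
        cyc = cycles_of(p)
        if all(len(c) in D for c in cyc) and \
                any({0, 1, 2} <= set(c) for c in cyc):
            count += 1
    return Fraction(count, factorial(n))

def P01_formula(n, m, s):
    """Right-hand side of Lemma [lem:theps](a)."""
    total = sum((d - 1) * (d - 2) * P0_brute(n - d, m, s)
                for d in small_divs(m, s) if 3 <= d <= n)
    return Fraction(factorial(n - 3), factorial(n)) * total
```

With $n=5$, $m=6$, $s=3/4$ (so the $s$-small divisors of $6$ are $1,2,3$), both sides equal $1/30$: the cycle through $1,2,3$ must be one of the two $3$-cycles on those points, and the remaining two points carry either element of $S_2$.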
Next we derive expressions for the $P_i(n,m)$ and ${P_{\geq 4}}(n,m)$.
\[lem:qi\] Using Notation $\ref{notation}$, and writing $P_0(0,m)=1$,
1. ${\displaystyle P_0(n,m) = \frac{1}{n}\sum_{d\in D_s}
P_0(n-d, m),}$
2. ${\displaystyle P_1(n,m) = \sum_{d\in D_\ell }
\frac{1}{d} P_0(n-d, m)},$
3. ${\displaystyle P_{2}(n,m) = \frac{1}{2} \sum_{d_1, d_2\in D_\ell }
\frac{1}{d_1d_2} P_0(n-d_1-d_2, m)},$ where the sum is over all ordered pairs $(d_1, d_2)$ with $d_1 + d_2
\le n$.
4. ${\displaystyle P_3(n,m) = \frac{1}{6}\sum_{d_1, d_2, d_3
\in D_\ell}
\frac{1}{d_1d_2d_3} P_0(n-d_1-d_2 - d_3, m)}$, where the sum is over all ordered triples $(d_1,d_2,d_3)$ with $d_1 + d_2 + d_3 \le n$.
5. ${\displaystyle {P_{\geq 4}}(n,m) \leq
\frac{1}{24}\sum_{d_1, d_2, d_3,d_4 \in D_\ell}
\frac{1}{d_1d_2d_3d_4} P(n-d_1-d_2 - d_3-d_4, m)}$, where the sum is over all ordered $4$-tuples $(d_1,d_2,d_3,d_4)$ with $d_1 + d_2 + d_3+d_4 \le n$.
For each permutation in $S_n$ of order dividing $m$ and all cycles $s$-small, the point 1 lies in a cycle of length $d$, for some $d\in D_s$. For this value of $d$ there are $\binom{n-1}
{d-1}(d-1)!$ choices of $d$-cycles containing 1, and $P_0(n-d,m)(n-d)!$ choices for the rest of the permutation. Summing over all $d\in D_s$ yields part (a).
The proportion of permutations in $S_n$ of order dividing $m$ and having exactly one $s$-large cycle of length $d$ is $\binom{n}{d}(d-1)! P_0(n-d,m)
(n-d)!/n!$. Summing over all $d\in D_\ell$ yields part (b).
In order to find the proportion of elements in $S_n$ of order dividing $m$ and having exactly two $s$-large cycles we count triples $(C_1, C_2, g)$, where $C_1$ and $C_2$ are cycles of lengths $d_1$ and $d_2$ respectively, $d_1, d_2\in D_\ell$, $g\in S_n$ has order dividing $m$, $g$ contains $C_1$ and $C_2$ in its disjoint cycle representation, and all other $g$-cycles are $s$-small. For a given $d_1, d_2$, we have $\binom{n}{d_1}(d_1-1)!$ choices for $C_1$, then $\binom{n-d_1}{d_2}(d_2-1)!$ choices for $C_2$, and then the rest of the element $g$ containing $C_1$ and $C_2$ can be chosen in $P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ ways. Thus the ordered pair $(d_1,d_2)$ contributes $\frac{n!}{d_1d_2}P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ triples, and each element $g$ with the properties required for part (c) contributes exactly two of these triples. Hence, summing over ordered pairs $d_1, d_2\in D_\ell$ yields (c).
Similar counts are used for parts (d) and (e). For $P_3(n,m), {P_{\geq 4}}(n,m)$ we count 4-tuples $(C_1, C_2,C_3, g)$ and $5$-tuples $(C_1,C_2,C_3,C_4,g)$ respectively, such that, for each $i$, $C_i$ is a cycle of length $d_i$ for some $d_i\in D_\ell$, $g\in S_n$ has order dividing $m$, and $g$ contains all the cycles $C_i$ in its disjoint cycle representation. The reason we have an inequality for ${P_{\geq 4}}(n,m)$ is that in this case each $g$ occurring has at least four $s$-large cycles and hence occurs in at least 24 of the 5-tuples, but possibly more.
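The first-point argument of part (a) applies verbatim without the $s$-small restriction, giving the classical recursion $P(n,m)=\frac{1}{n}\sum_{d\mid m,\ d\le n}P(n-d,m)$ with $P(0,m)=1$. A short sketch (ours) implements it with exact rational arithmetic:

```python
from fractions import Fraction
from functools import lru_cache

def P_recursive(n, m):
    """P(n, m) via the first-point recursion
    P(k, m) = (1/k) * sum_{d | m, d <= k} P(k - d, m),  P(0, m) = 1,
    the analogue of Lemma [lem:qi](a) without the s-small restriction."""
    divs = [d for d in range(1, m + 1) if m % d == 0]

    @lru_cache(maxsize=None)
    def P(k):
        if k == 0:
            return Fraction(1)
        return Fraction(1, k) * sum(P(k - d) for d in divs if d <= k)

    return P(n)
```

For example $P(4,2)$ unwinds as $P(4)=\frac14(P(3)+P(2))=\frac14(\frac23+1)=\frac{5}{12}$, matching the direct count in $S_4$ (identity, six transpositions, three double transpositions).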
We complete this section by giving a proof of Theorem \[lem:props\]. The ideas for its proof were developed from arguments in Warlimont's paper [@Warlimont78].
\[newPs\] Let $m\geq n\geq3$, and let $s, {{\delta}}$ be as in Notation [\[notation\]]{}. Then $$P_0(n,m) < \frac{(1 + 3c_\delta + c_\delta^2)d(m)m^{2s}}{n(n-1)(n-2)}<
\frac{c'd(m)m^{2s}}{n^3}= O\left(\frac{m^{2s+\delta}}{n^3}\right)$$ where, if $n\geq6$, we may take $$c'=\left\{\begin{array}{ll}
2(1 + 3c_\delta + c_\delta^2)&\mbox{for any $m\geq n$}\\
10&\mbox{if $m\geq c_\delta^{1/(s-\delta)}$.}
\end{array}\right.$$ In particular Theorem [\[lem:props\]]{} is true. Moreover, if in addition $n\geq m^s+cn^a$ for some positive constants $a,c$ with $a\leq 1$, then $P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^{1+3a}}\right)$.
First assume only that $m\geq n\geq3$. Let $D_s$, and $P_0^{(i)}(n,m)$, for $i = 1, 2, 3$, be as in Notation \[notation\]. By (\[eq-pi\]), $P_0(n,m)$ is the sum of the $P_0^{(i)}(n,m)$. We first estimate ${P_0^{(1)}}(n,m).$ By Lemma \[lem:theps\] (a), and using the fact that $d<m^s$ for all $d\in D_s$, $${P_0^{(1)}}(n,m) \le\frac{(n-3)!}{n!}
\sum_{\stackrel{d \in D_s}{d\ge 3}}{(d-1)(d-2)}<
\frac{d(m) m^{2s}}{n(n-1)(n-2)}.$$ Similarly, by Lemma \[lem:theps\] (b), $$\begin{aligned}
{P_0^{(2)}}(n,m) & < & \frac{3(n-3)!}{n!}\sum_{d_1, d_2 \in D_s} (d_2-1)
\le \frac{3d(m)^2m^{s}}{n(n-1)(n-2)}\end{aligned}$$ and by Lemma \[lem:theps\] (c), $$\begin{aligned}
{P_0^{(3)}}(n,m) &<& \frac{(n-3)!}{n!} \sum_{d_1,d_2,d_3\in D_s} 1
\le \frac{d(m)^3}{n(n-1)(n-2)}.\\\end{aligned}$$
Thus, using the fact noted in Notation \[notation\] that $d(m) \le c_\delta m^\delta$, $$\begin{aligned}
P_0(n,m) & \le &
\frac{d(m) \left( m^{2s} +3d(m)m^{s} + d(m)^2\right)
}{n(n-1)(n-2)} \\
&\le&\frac{d(m)m^{2s}\left( 1 +3c_\delta m^{\delta-s} + (c_\delta m^{\delta-s})^2\right)}{
n(n-1)(n-2)}< \frac{c'd(m) m^{2s}}{n^3}.\end{aligned}$$ To estimate $c'$ note first that, for $n\geq6$, $n(n-1)(n-2)> n^3/2$. Thus if $n\geq6$ then, for any $m\geq n$ we may take $c'= 2(1 + 3c_\delta + c_\delta^2).$ If $m\geq c_\delta^{1/(s-\delta)}$, then $c_\delta m^{\delta-s}\leq 1$ and so we may take $c'=10$. Theorem \[lem:props\] now follows since $d(m)=m^{o(1)}$. Now assume that $n\geq m^s+cn^a$ for some positive constants $c$ and $a$. By Lemma \[lem:qi\], $$P_0(n,m)= \frac{1}{n}\sum_{d\in D_s}P_0(n-d, m).$$ For each $d\in D_s$ we have $m>n-d\geq n-m^s\geq cn^a$, and hence applying Theorem \[lem:props\] (which we have just proved), $$P_0(n-d,m) < \frac{c'd(m)m^{2s}}{(n-d)^3}
\leq \frac{c'd(m) m^{2s}}{c^3 n^{3a}}.$$ Thus, $P_0(n,m) \leq \frac{d(m)}{n} \left(\frac{c'd(m)m^{2s}}{c^3n^{3a}}
\right)\le \frac{c'c_\delta^2m^{2s + 2\delta}}{c^3n^{1+3a}}$.
Proof of Theorem \[leadingterms\] {#sec:stheo}
=================================
First we determine the "very large" divisors of $m$ that are at most $n$.
\[lem:divat\] Let $r, m$ and $n$ be positive integers such that $rn\le m < (r+1)n$.
1. If $d$ is a divisor of $m$ such that $d \le n$, then one of the following holds:
1. $d=n = \frac{m}{r}$,
2. $d = \frac{m}{r+1}$ so that $\frac{r}{r+1}n \le d < n$,
3. $d \le \frac{m}{r+2}<\frac{r+1}{r+2}n$.
2. Moreover, if $d_1, d_2$ are divisors of $m$ for which $$d_1\le d_2 \le \frac{m}{r+1}\quad \mbox{and}\quad
n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)},$$ then $d_1=\frac{m}{c_1}, d_2=
\frac{m}{c_2}$, where $c_1, c_2$ divide $m$, and satisfy $c_2 \le 2r+3$, and either $r+2\leq c_2 \le c_1 < 2(r+1)(r+2)$, or $c_2=r+1$, $c_1\geq r(r+1)$.
As $d$ is a divisor of $m$ there is a positive integer $t$ such that $d = \frac{m}{t}$. Now $\frac{m}{t} \le n \le \frac{m}{r}$ and therefore $r \le t.$ If $r = t$ then $r$ divides $m$ and $d = \frac{m}{r} \le n$, and since also $rn \le m$ it follows that $d = \frac{m}{r}=n$ and (i) holds. If $t \ge r+2$ then (iii) holds. Finally, if $t=r+1$, then $d = \frac{m}{r+1}$ and $\frac{r}{r+1}n \le \frac{m}{r+1} < n$ and hence (ii) holds.
Now we prove the last assertion. Suppose that $d_1, d_2$ are divisors of $m$ which are at most $ \frac{m}{r+1}$, and such that $d_1\leq d_2$ and $n\geq d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$. Then, as $d_1,
d_2$ divide $m$, there are integers $c_1, c_2$ such that $d_1 = m/c_1$ and $d_2 = m/c_2.$ Since $d_i \le m/(r+1)$ we have $c_i \ge r+1$ for $i = 1,2$, and since $d_1\le d_2$ we have $c_1\ge c_2$. Now $m/r \ge n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$, and hence $1/r \ge 1/c_1 + 1/c_2 > \frac{2r+3}{2(r+1)(r+2)}$. If $c_2 \ge 2(r+2)$ then, as $c_1\ge c_2$, we would have $1/c_1 + 1/c_2 \le 1/(r+2)$, which is not the case. Thus $r+1 \le c_2 \le 2r+3.$ If $c_2\geq r+2$, then $$\frac{1}{c_1}> \frac{2r+3}{2(r+1)(r+2)} - \frac{1}{c_2} \ge
\frac{2r+3}{2(r+1)(r+2)} - \frac{1}{r+2} =
\frac{1}{2(r+1)(r+2)}$$ and hence $c_1 < 2(r+1)(r+2)$ as in the statement. On the other hand, if $c_2=r+1$, then $$\frac{1}{c_1}\leq \frac{n}{m}-\frac{1}{c_2}\leq \frac{1}{r}-\frac{1}{r+1}=\frac{1}{r(r+1)}$$ so $c_1\geq r(r+1)$.
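Part (b) is also amenable to an exhaustive check for given $r,m,n$. The sketch below (our verification, not part of the proof) runs over all divisor pairs satisfying the hypotheses of Lemma \[lem:divat\](b) and confirms the stated constraints on the cofactors $c_1=m/d_1$ and $c_2=m/d_2$.

```python
def check_divat_b(r, m, n):
    """Exhaustively verify Lemma [lem:divat](b): for every pair of
    divisors d1 <= d2 <= m/(r+1) of m with
    n >= d1 + d2 > m(2r+3)/(2(r+1)(r+2)),
    the cofactors c_i = m // d_i satisfy the stated bounds."""
    assert r * n <= m < (r + 1) * n
    divs = [d for d in range(1, m + 1) if m % d == 0]
    ok = True
    for d1 in divs:
        for d2 in divs:
            if not (d1 <= d2 and (r + 1) * d2 <= m):
                continue
            if not (d1 + d2 <= n
                    and 2 * (r + 1) * (r + 2) * (d1 + d2) > m * (2 * r + 3)):
                continue
            c1, c2 = m // d1, m // d2
            ok = ok and c2 <= 2 * r + 3 and (
                (r + 2 <= c2 and c1 < 2 * (r + 1) * (r + 2))
                or (c2 == r + 1 and c1 >= r * (r + 1)))
    return ok
```

For instance, with $r=1$, $n=10$, the check passes for every $m$ with $10\le m<20$; with $m=15$ the qualifying pairs are $(3,5)$ and $(5,5)$, whose cofactors $(5,3)$ and $(3,3)$ indeed satisfy $r+2\le c_2\le c_1<2(r+1)(r+2)$.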
The next result gives our first estimate of an upper bound for the proportion $P(n,m)$ of elements in $S_n$ of order dividing $m$. Recall our observation that the parameter $\delta$ in Notation \[notation\](c) can be any positive real number; in Proposition \[prop:general\] we will restrict to $\delta \le s-\frac{1}{2}.$ Note that the requirement $rn\leq m<(r+1)n$ implies that $\frac{n}{r+1}\leq n-\frac{m}{r+1}\leq \frac{m}{r(r+1)}$; the first case of Definition \[def:kr\] (b) below requires an upper bound of approximately half this quantity.
\[def:kr\] Let $r,\, m,\, n$ be positive integers such that $rn\le m < (r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-\frac{1}{2}$.
- Let $\alpha = \begin{cases} 1 & \mbox{if\ } m=rn,\\
0 & \mbox{otherwise.}
\end{cases}$
- Let $\alpha' = \begin{cases} 1 & \mbox{if\ } (r+1) \mbox{\
divides\ } m \
\mbox{and\ }n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}, \\
0 & \mbox{otherwise.}
\end{cases}$
- Let $t(r,m,n)$ denote the number of divisors $d$ of $m$ with $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$ such that there exists a divisor $d_0$ of $m$ satisfying
- $d+d_0\leq n$ and
- $\frac{m}{2(r+1)(r+2)}< d_0\leq d$.
- Let $k(r,m,n)=t(r,m,n)\frac{2(r+1)(r+2)(2r+3)}{r^2}.$
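The quantities in Definition \[def:kr\] are all effectively computable; the sketch below (ours) evaluates them literally from the stated conditions, which is convenient for spot-checking the bounds $t(r,m,n)\le r+3$ and $k(r,m,n)\le 4(r+3)^4/r^2$ asserted in Proposition \[prop:general\].

```python
from fractions import Fraction

def def_kr(r, m, n):
    """alpha, alpha', t(r,m,n), k(r,m,n) from Definition [def:kr],
    computed directly from the stated conditions."""
    assert r * n <= m < (r + 1) * n
    alpha = 1 if m == r * n else 0
    A = 2 * (r + 1) * (r + 2)
    # alpha' = 1 iff (r+1) | m and n - m/(r+1) < m/(2(r+1)(r+2) - 1)
    alpha_p = 1 if (m % (r + 1) == 0
                    and (A - 1) * (n - m // (r + 1)) < m) else 0
    divs = [d for d in range(1, m + 1) if m % d == 0]
    # t counts divisors d with m/(2r+3) <= d <= m/(r+1) admitting a
    # divisor d0 with d + d0 <= n and m/(2(r+1)(r+2)) < d0 <= d
    t = sum(1 for d in divs
            if (2 * r + 3) * d >= m and (r + 1) * d <= m
            and any(d + d0 <= n and A * d0 > m and d0 <= d for d0 in divs))
    k = Fraction(t * A * (2 * r + 3), r * r)
    return alpha, alpha_p, t, k
```

For example, with $r=1$, $m=12$, $n=10$ the divisors counted by $t$ are $3,4,6$, so $t=3$ and $k=3\cdot\frac{2\cdot2\cdot3\cdot5}{1}=180$.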
\[prop:general\] Let $r,\, m,\, n, s$ and $\delta$ be as in Definition [\[def:kr\]]{}. Then, for a fixed value of $r$ and sufficiently large $n$, $$P(n,m) \le \frac{\alpha}{n}+\frac{\alpha'(r+1)}{m}+\frac{k(r,m,n)}{n^2}+
O\left(\frac{1}{n^{1+2s-2{{\delta}}}}
\right),$$ where $\alpha, \alpha', t(r, m, n)$ and $k(r, m, n)$ are as in Definition $\ref{def:kr}.$ Moreover, $t(r,m,n) \le r+3$ and $k(r,m,n) \le
\frac{4(r+3)^4}{r^2} $.
\[rem:general\]
\(a) The term $\frac{1}{n}$, which occurs if and only if $m=rn$, corresponds to the $n$-cycles in $S_n$, and is the exact proportion of these elements. We refine the estimate for $P(n,rn)$ in Theorem \[rn\] below.
\(b) The term $\frac{r+1}{m}$, which occurs only if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$, corresponds to permutations with order dividing $m$ and having either one or two $s$-large cycles, with one (the larger in the case of two cycles) of length $\frac{m}{r+1}$. The proportion of elements of $S_n$ containing a cycle of length $\frac{m}{r+1}$ is $\frac{r+1}{m}$, and if there exists a positive integer $d\leq n-\frac{m}{r+1}$ such that $d$ does not divide $m$, then some of these elements have a $d$-cycle and hence do not have order dividing $m$. Thus $\frac{r+1}{m}$ may be an over-estimate for the proportion of elements in $S_n$ (where $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$) having order dividing $m$, having exactly one $s$-large cycle of length $\frac{m}{r+1}$, and possibly one additional $s$-large cycle of length dividing $m$. However it is difficult to make a more precise estimate for this term that holds for all sufficiently large $m,n$. In Theorem \[rn\] we treat some special cases where this term either does not arise, or can be determined precisely.
\(c) The term $\frac{k(r,m,n)}{n^2}$ arises as follows from permutations that have exactly two $s$-large cycles of lengths dividing $m$. For each of the $t(r,m,n)$ divisors $d$ of $m$ as in Definition \[def:kr\](c), let $d_0(d)$ be the largest of the divisors $d_0$ satisfying Definition \[def:kr\](c)(i),(ii). Note that $d_0(d)$ depends on $d$. Then $k(r,m,n)/n^2$ is an upper bound for the proportion of permutations of order dividing $m$ and having two $s$-large cycles of lengths $d$ and $d_0(d)$, for some $d$ satisfying $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$. As in (b) this term may be an over-estimate, not only for the reason given there, but also because lower bounds for the cycle lengths $d, d_0(d)$ were used to define $k(r,m,n)$. Indeed in the case $m=rn$ we are able to obtain the exact value of the coefficient of the $\frac{1}{n^2}$ summand.
We divide the estimation of $P(n,m)$ into five subcases. Recall that, by (\[eq-qi\]), $P(n,m)$ is the sum of ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$, for $i=0,1,2,3$, where these are as defined in Notation \[notation\]. We will use the recursive formulae for ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$ in Lemma \[lem:qi\], together with the expressions for $P_0(n,m)$ in Theorem \[lem:props\] and Lemma \[newPs\], to estimate these five quantities. Summing these estimates will give, by (\[eq-qi\]), our estimate for $P(n,m)$. We also use the information about divisors of $m$ in Lemma \[lem:divat\].
First we deal with $P_0(n,m)$. Since $r$ is fixed, it follows that, for sufficiently large $n$ (and hence sufficiently large $m$), we have $m^s
\leq \frac{m}{r+2}$, which is less than $\frac{(r+1)n}{r+2}=n-\frac{n}{r+2}$. Thus $n>m^s+\frac{n}{r+2}$, and applying Lemma \[newPs\] with $a=1, c=\frac{1}{r+2}$, it follows that $$P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^4}\right)=O\left(\frac{1}{n^{4-2s-
2{{\delta}}}}\right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$$ since $4-2s-2{{\delta}}\geq 1+2s-2{{\delta}}$ when $s\leq 3/4$.
Next we estimate $P_3(n,m)$ and ${P_{\geq 4}}(n,m)$. By Lemma \[lem:qi\], the latter satisfies ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\sum\frac{1}{d_1d_2d_3d_4}$, where the summation is over all ordered 4-tuples of $s$-large divisors of $m$ whose sum is at most $n$. Thus ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\,\frac{d(m)^4}{m^{4s}}=
O\left(\frac{1}{n^{4s-4{{\delta}}}}\right)$. Also $$P_3(n,m)= \frac{1}{6}\sum
\frac{1}{d_1d_2d_3}P_0(n-d_1-d_2-d_3,m),$$ where the summation is over all ordered triples of $s$-large divisors of $m$ whose sum is at most $n$. For such a triple $(d_1,d_2,d_3)$, if each $d_i\leq\frac{m}
{4(r+1)}$, then $n-\sum d_i\geq n-\frac{3m}{4(r+1)}>\frac{n}{4}$, and so by Lemma \[newPs\], $P_0(n-\sum d_i,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3}}
\right)$. Thus the contribution of triples of this type to $P_3(n,m)$ is at most $O\left(\frac{d(m)^3m^{2s+{{\delta}}}}{m^{3s}n^3}
\right)=O\left(\frac{1}{n^{3+s-4{{\delta}}}}\right)$. For each of the remaining triples, the maximum $d_i$ is greater than $\frac{m}{4(r+1)}$ and in particular there is a bounded number of choices for the maximum $d_i$. Thus the contribution of the remaining triples to $P_3(n,m)$ is at most $O\left(\frac{d(m)^2}{m^{1+2s}}
\right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. It follows that $$P_3(n,m)+{P_{\geq 4}}(n,m)=O\left(\frac{1}{n^{x_3}}\right),$$ where $x_3=\min\{4s-4{{\delta}},3+s-4{{\delta}},1+2s-2{{\delta}}\}=1+2s-2{{\delta}}$ (using the fact that ${{\delta}}\leq s-\frac{1}{2}\leq \frac{1}{4}$).
Now we estimate $P_2(n,m)$. By Lemma \[lem:qi\], $$P_{2}(n,m)= \frac{1}{2}\sum
\frac{1}{d_1d_2}P_0(n-d_1-d_2,m),$$ where the summation is over all ordered pairs of $s$-large divisors of $m$ whose sum is at most $n$. We divide these pairs $(d_1,d_2)$ into two subsets. The first subset consists of those for which $n- d_1-d_2\geq n^\nu$, where $\nu=(1+2s+{{\delta}})/3$. Note that $\nu<1$ since $\nu\leq s -\frac{1}{6}<1$ (because ${{\delta}}\leq s-\frac{1}{2}$ and $s\leq \frac{3}{4}$). For a pair $(d_1,d_2)$ such that $n- d_1-d_2\geq n^\nu$, by Lemma \[newPs\], $P_0(n-d_1-d_2,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3\nu}}
\right)$. Thus the total contribution to $P_{2}(n,m)$ from pairs of this type is at most $O\left(\frac{d(m)^2m^{2s+{{\delta}}}}{m^{2s}n^{3\nu}}
\right)=O\left(\frac{1}{n^{3\nu-3{{\delta}}}}\right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}}
\right)$.
Now consider pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$. Since each $d_i<n\leq m/r$, it follows that each $d_i\leq m/(r+1)$. Since $\nu<1$, for sufficiently large $n$ (and hence sufficiently large $m$) we have $n^\nu\leq \left(\frac{m}{r}
\right)^\nu<\frac{m}{2(r+1)(r+2)}$. Thus, for each of the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$, we have $d_1+d_2>n-n^\nu>\frac{m}{r+1}-
\frac{m}{2(r+1)(r+2)}=\frac{m(2r+3)}{2(r+1)(r+2)}$, and hence one of $(d_1,d_2)$, $(d_2,d_1)$ (or both if $d_1=d_2$) satisfies the conditions of Lemma \[lem:divat\] (b). Thus, by Lemma \[lem:divat\] (b), it follows that if $d_1 \le d_2$, then either $(d_0,d):=(d_1, d_2)$ satisfies the conditions of Definition \[def:kr\](c), or $d_2=\frac{m}{r+1}$ and $d_1\leq
\frac{m}{2(r+1)(r+2)}$. Let $P_2'(n,m)$ denote the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ where $\{d_1,d_2\}=\{
\frac{m}{r+1},d_0\}$ and $d_0 \leq \frac{m}{2(r+1)(r+2)}$.
For the other pairs, we note that there are $t(r,m,n) \le r+3$ choices for the larger divisor $d$. Consider a fixed $d\leq \frac{m}{r+1}$, say $d = \frac{m}{c}.$ Then each divisor $d_0$ of $m$, such that $\frac{m}{2(r+1)(r+2)} < d_0 \le d$ and $d + d_0 \le
n$, is equal to $\frac{m}{c_0}$ for some $c_0$ such that $c \le c_0 < 2(r+1)(r+2)$. Let $d_0(d) = \frac{m}{c_0}$ be the largest of these divisors $d_0.$ By Lemma \[lem:divat\](b), the combined contribution to $P_2(n,m)$ from the ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ is (since $d$ and $d_0(d)$ may be equal) at most $$\frac{1}{dd_0(d)} < \frac{2r+3}{m} \cdot \frac{2(r+1)(r+2)}{m} =
\frac{2(r+1)(r+2)(2r+3)}{m^2}.$$ (Note that $\frac{1}{dd_0(d)} \ge \frac{(r+1)^2}{m^2} > \frac{1}{n^2}$.) If $d_0=\frac{m}{c'}$ is any other divisor of this type and $d_0 < d_0(d)$, then $c_0+1 \le c' < 2(r+1)(r+2)$, and so $n-d-d_0=(n-d-d_0(d))+d_0(d)-d_0$ is at least $$d_0(d)-d_0=\frac{m}{c_0} - \frac{m}{c'} \ge\frac{m}{c_0} - \frac{m}{c_0+1}=
\frac{m}{c_0(c_0+1)} > \frac{m}{4(r+1)^2(r+2)^2}.$$ By Lemma \[newPs\], the contribution to $P_2(n,m)$ from the pairs $(d,d_0)$ and $(d_0,d)$ is $O( \frac{1}{m^2}\cdot
\frac{m^{2s+\delta}}{m^3}) = O(\frac{1}{n^{5-2s-\delta}})$. Since there are $t(r,m,n) \le r+3$ choices for $d$, and a bounded number of divisors $d_0$ for a given $d$, the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$ is at most $$P_2'(n,m) + t(r,m,n) \frac{2(r+1)(r+2)(2r+3)}{n^2r^2}+ O\left(\frac{1}{n^{5-2s-{{\delta}}}}
\right).$$ Thus $$\begin{aligned}
P_2(n,m)&\le& P_2'(n,m) + \frac{2t(r,m,n)(r+1)(r+2)(2r+3)}{n^2r^2}+
O\left(\frac{1}{n^{x_2}}\right) \\
&=& P_2'(n,m) +\frac{k(r,m,n)}{n^2} + O\left(\frac{1}{n^{x_2}}\right)\end{aligned}$$ with $x_2=\min\{1+2s-2{{\delta}},5-2s-{{\delta}}\}=1+2s-2{{\delta}}$. Note that $$k(r,m,n)\leq (r+3)
\frac{2(r+1)(r+2)(2r+3)}{r^2}=4r^2+30r+80+\frac{90}{r}+\frac{36}{r^2}$$ which is less than $\frac{4(r+3)^4}{r^2}$.
Finally we estimate $P_1(n,m)+P'_2(n,m)$. By Lemma \[lem:qi\], $P_1(n,m)=
\sum \frac{1}{d}P_0(n-d,m)$, where the summation is over all $s$-large divisors $d$ of $m$ such that $d\leq n$, and we take $P_0(0,m)=1$. Note that $d\leq n\leq \frac{m}{r}$, so each divisor $d=\frac{m}{c}$ for some $c\geq r$. In the case where $m=rn$, that is, the case where $n$ divides $m$ (and only in this case), we have a contribution to $P_1(n,m)$ of $\frac{1}{n}$ due to $n$-cycles. If $d<n$ then $d=\frac{m}{c}$ with $c\geq r+1$.
Next we consider all divisors $d$ of $m$ such that $d\leq \frac{m}{r+2}$. For each of these divisors, $n-d\geq n - \frac{m}{r+2}\ge n-\frac{(r+1)n}{r+2}
=\frac{n}{r+2}$. Thus by Lemma \[newPs\], $P_0(n-d,m)
= O\left(\frac{m^{2s + \delta}}{n^{3}}\right)
= O\left(\frac{1}{n^{3-2s-\delta}}\right)$. The number of $d$ satisfying $d\geq \frac{m}{2(r+1)}$ is bounded in terms of $r$ (which is fixed), and hence the contribution to $P_1(n,m)$ from all the divisors $d$ satisfying $\frac{m}{2(r+1)}\leq d\leq \frac{m}{r+2}$ is at most $O\left(\frac{1}{m}\,\frac{1}{n^{3-2s-\delta}}\right)=O\left(
\frac{1}{n^{4-2s-\delta}}\right)$. On the other hand, if $m^s\leq d
<\frac{m}{2(r+1)}$, then $n-d>n - \frac{(r+1)n}{2(r+1)} =\frac{n}{2}$. Now since $r$ is fixed and $s<1$, for sufficiently large $n$, we have $m^s<\frac{n}
{4}$, and so $n-d> m^s +\frac{n}{4}$. Then, by Lemma \[newPs\] (applied with $a=1$ and $c=\frac{1}{4}$), $P_0(n-d,m)= O\left(\frac{m^{2s + 2\delta}}{(n-d)^{4}}\right)
= O\left(\frac{1}{n^{4-2s-2\delta}}\right)$, and the contribution to $P_1(n,m)$ from all $s$-large divisors $d< \frac{m}{2(r+1)}$ is at most $\frac{d(m)}{m^s}O\left(\frac{1}{n^{4-2s-2\delta}}\right)=
O\left(\frac{1}{n^{4-s-3\delta}}\right)$. Thus, noting that $\min\{4-2s-{{\delta}},
4-s-3{{\delta}}\}\geq 1+2s-2{{\delta}}$, the contribution to $P_1(n,m)$ from all $s$-large divisors $d$ of $m$ such that $d\leq\frac{m}{r+2}$ is $O\left(\frac{1}{n^{1+2s-2\delta}}\right)$.
By Lemma \[lem:divat\], the only divisor not yet considered is $d=\frac{m} {r+1}$ and this case of course arises only when $r+1$ divides $m$. Suppose then that $r+1$ divides $m$. We must estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from elements containing a cycle of length $d=\frac{m}{r+1}$. The contribution to $P_1(n,m)+P'_2(n,m)$ due to the divisor $d=\frac{m}{r+1}$ is $\frac{r+1}{m}P_0(n-\frac{m}{r+1},m)+\frac{r+1}{m}\sum_{d_0}\frac{1}{d_0}
P_0(n-\frac{m}{r+1}-d_0,m)$, where the summation is over all $s$-large $d_0\leq
\frac{m}{2(r+1)(r+2)}$. Suppose first that $n-\frac{m}{r+1}\geq \frac{m}{2(r+1)(r+2)-1}$, so that for each $d_0$, $n-\frac{m}{r+1}-d_0>\frac{m}{4(r+1)^2(r+2)^2}$. Then, by Lemma \[newPs\], the contribution to $P_1(n,m)+P'_2(n,m)$ is at most $$O\left(\frac{1}{m}\cdot\frac{m^{2s+{{\delta}}}}{m^{3}}\right)
+d(m) O\left(\frac{1}{m^{1+s}}\cdot\frac{m^{2s+{{\delta}}}}{m^{3}}\right)
=O\left(\frac{1}{n^{4-2s-{{\delta}}}}\right)$$ and this is $O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$ since $4-2s-{{\delta}}\geq 1+2s-2{{\delta}}$. Finally suppose that $n-\frac{m}{r+1} < \frac{m}{2(r+1)(r+2)-1}$. In this case we estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from $d=\frac{m}{r+1}$ by the proportion $\frac{1}{d}=\frac{r+1}{m}$ of elements of $S_n$ containing a $d$-cycle (recognising that this is usually an over-estimate). Putting these estimates together we have $$P_1(n,m)+P'_2(n,m)\leq\frac{\alpha}{n}+\frac{\alpha'(r+1)}{m}+
O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right),$$ where $\alpha=1$ if $m=rn$ and is $0$ otherwise, and $\alpha'=1$ if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$, and is 0 otherwise. The result now follows using (\[eq-qi\]) and the estimates we have obtained for each of the summands.
It is sometimes useful to separate out the results of Proposition \[prop:general\] according to the values of $m,n$. We do this in the theorem below, and also obtain in parts (a) and (b) exact asymptotic expressions for $P(n,rn)$ and $P(n,t!(n-t))$ where $r, t$ are bounded and $n$ is sufficiently large. For this it is convenient to define two sets of integer pairs.
\[T\][For positive integers $r$ and $m$, define the following sets of integer pairs: $$\mathcal{T}(r)=\{(i,j)\,|\, 1\leq i,j\leq r^2, ij
=r^2,\ \mbox{and both}\ r+i, r+j\ \mbox{divide}\ m\}$$ and $\mathcal{T}'(r)=\{(i,j)\,|\, 1< i,j\leq (r+1)^2,
(i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j\ \mbox{divide}\ m\}.
$ ]{}
\[rn\] Let $n,m,r$ be positive integers such that $rn\leq m<(r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-1/2$. Then, the following hold for $r$ fixed and sufficiently large $n$ (where the sets $\mathcal{T}(r)$ and $\mathcal{T}'(r)$ are as in Definition [\[T\]]{}).
1. If $m=rn$, then ${\displaystyle P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2}
+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where\
${\displaystyle
c(r)=\sum_{(i,j)\in\mathcal{T}(r)}(1+\frac{i+j}{2r}).}
$ In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even.
2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then\
${\displaystyle
P(n,m)=\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}
\right)},$ where\
${\displaystyle c'(r)=\sum_{(i,j)\in\mathcal{T}'(r)}(1+\frac{i+j-2}{2(r+1)})}$.
3. If $rn<m$, then ${\displaystyle P(n,m)\leq \frac{\alpha'(r+1)}{m}+\frac{k(r,m,n)}
{n^2}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where $\alpha'$ and $k(r,m,n)$ are as in Definition [\[def:kr\]]{}.
Part (c) follows immediately from Proposition \[prop:general\]. Next we prove part (a). Suppose that $m=rn$. If $r+1$ divides $m$ then we have $n-\frac{m}{r+1}=
\frac{m}{r(r+1)}>\frac{m}{2(r+1)(r+2)-1}$. It follows from Proposition \[prop:general\] that $P(n,m)\leq\frac{1}{n}+\frac{k(r,m,n)}
{n^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. To complete the proof we refine the argument given in the proof of Proposition \[prop:general\] for $P_2(n,m)$ which gave rise to the term $\frac{k(r,m,n)}{n^2}$. The elements contributing to this term were those with exactly two $s$-large cycles, where one of these cycles had length $d=\frac{m}{r+i}$ for some $i$ such that $1\leq
i\leq r+3$, and the other had length $d_0(d)=\frac{m}{r+j}$ for some $j$ such that $r+i\leq r+j <
2(r+1)(r+2)$ and $d + d_0(d) \le n.$ Moreover, for a given value of $d$, the value of $d_0(d)$ was the largest integer with these properties. Since we now assume that $m=rn$ we have $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r}$$ that is, $r(2r+i+j)\leq(r+i)(r+j)$, which is equivalent to $r^2\leq ij$. If $d+d_0(d)$ is strictly less than $n$, that is to say, if $r^2<ij$, and thus $ij-r^2\geq1$, then $$n-d-d_0(d)=n-\frac{rn(2r+i+j)}{(r+i)(r+j)}=\frac{n(ij-r^2)}{(r+i)(r+j)}\geq
\frac{n}{(r+i)(r+j)},$$ and since $i\leq r+3$ and $r+j<2(r+1)(r+2)$ we have $\frac{n}{(r+i)(r+j)}
\geq \frac{n}{2(r+1)(r+2)(2r+3)}$. It now follows from Lemma \[newPs\] that the contribution to $P_2(n,m)$ from all ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ with $d,d_0(d)$ as above and $n>d+d_0(d)$ is $O\left(
\frac{1}{n^2}\,\frac{m^{2s+{{\delta}}}}{n^3}\right)=O\left(\frac{1}{n^{5-2s-{{\delta}}}}
\right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus when $m=rn$, the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(\frac{m}{r+i},\frac{m}{r+j})$ such that $r^2=ij$ and $1\leq i,j\leq
r^2$. (Note that we no longer assume $i\leq j$.) These are precisely the pairs $(i,j)\in\mathcal{T}(r)$. For such a pair $(\frac{m}{r+i},\frac{m}{r+j})$, the contribution to $P_2(n,m)$ is $$\frac{1}{2}\cdot\frac{r+i}{m}\cdot\frac{r+j}{m}=
\frac{r^2+r(i+j)+ij}{2n^2r^2}=\frac{1}{n^2}(1+\frac{i+j}{2r})$$ (since $ij=r^2$). Thus $P(n,m)\leq\frac{1}{n}+\frac{c(r)}{n^2}
+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Moreover, for each $(i,j)\in\mathcal{T}(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$ is a permutation of order dividing $m$. Thus $P(n,rn)\geq \frac{1}{n}+\frac{c(r)}{n^2}$, and the main assertion of part (a) is proved. Finally we note that, if $r=1$ then the only possible pair in $\mathcal{T}(1)$ is $(1,1)$, and for this pair to lie in the set we require that $r+1=2$ divides $m=n$. Thus $c(1)$ is 0 if $n$ is odd, and is 2 if $n$ is even.
Finally we prove part (b) where we have $r=t!-1$ and $m=t!(n-t)$. Then $rn=(t!-1)n=m+t\cdot t!-n$ which is less than $m$ if $n>t\cdot t!$. Also $(r+1)n=t!\,n>m$. Thus, for sufficiently large $n$, we have $rn<m<(r+1)n$. Moreover, $r+1$ divides $m$ and $n-\frac{m}{r+1}=n-(n-t)=t$, which for sufficiently large $n$ is less than $\frac{n-t}{3t!}<\frac{m}{2(r+1)(r+2)-1}$. It now follows from part (c) that $P(n,t!(n-t))\leq \frac{1}{n-t}+\frac{k(r,m,n)}{n^2}+O\left(\frac{1}
{n^{1+2s-2{{\delta}}}}\right)$. Our next task is to improve the coefficient of the $O(\frac{1}{n^2})$ term using a similar argument to the proof of part (a). The elements contributing to this term have exactly two $s$-large cycles of lengths $d=\frac{m}{r+i}$ and $d_0(d)=\frac{m}{r+j}$, with $r+i,r+j\leq (r+1)(r+2)$ and $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r+1}+t.$$ This is equivalent to $(r+1)(2r+i+j)\leq(r+i)(r+j)+\frac{t(r+1)(r+i)(r+j)}{m}$, and hence, for sufficiently large $n$ (and hence sufficiently large $m$), $(r+1)(2r+i+j)\leq (r+i)(r+j)$. This is equivalent to $(i-1)(j-1)\geq (r+1)^2$. If $(i-1)(j-1)> (r+1)^2$, then $$\begin{aligned}
n-d-d_0(d)&=&(t+\frac{m}{r+1}) - \frac{m(2r+i+j)}{(r+i)(r+j)}\\
&=&t+\frac{m((i-1)(j-1)-(r+1)^2)}{(r+1)(r+i)(r+j)}\\
&>&\frac{rn}{(r+1)^3(r+2)^2}.\end{aligned}$$ As for part (a), the contribution to $P_2(n,m)$ from all pairs $(\frac{m}{r+i},\frac{m}{r+j})$ with $(i-1)(j-1)> (r+1)^2$ is $O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(d,d_0(d))=(\frac{m}{r+i},\frac{m}{r+j})$ such that $(r+1)^2=(i-1)(j-1)$ and $1\leq i,j\leq (r+1)^2$. These are precisely the pairs $(i,j)\in\mathcal{T}'(r)$. For each of these pairs we have $r^2+2r=ij-i-j$ and the contribution to $P_2(n,m)$ is $$\begin{aligned}
\frac{1}{2dd_0(d)}&=&\frac{(r+i)(r+j)}{2m^2}=
\frac{r^2+r(i+j)+ij}{2(r+1)^2(n-t)^2}\\
&=&\frac{(r+1)(2r+i+j)}{2(r+1)^2(n-t)^2}=
\frac{1}{(n-t)^2}\left(1+\frac{i+j-2}{2(r+1)}\right).\end{aligned}$$ Thus $P(n,m)\leq\frac{1}{n-t}+\frac{c'(r)}{n^2}
+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. On the other hand, each permutation in $S_n$ that contains an $(n-t)$-cycle has order dividing $t!(n-t)=m$, and the proportion of these elements is $\frac{1}{n-t}$. Also, for each $(i,j)\in\mathcal{T}'(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$, and inducing any permutation on the remaining $n-\frac{m}{r+i}-\frac{m}{r+j}=t$ points, is a permutation of order dividing $m=t!(n-t)$, and the proportion of all such elements is $\frac{c'(r)}{(n-t)^2}$. Thus $P(n,m)\geq \frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}$, and the assertion of part (b) is proved.
It is a simple matter now to prove Theorems \[leadingterms\] and \[bounds\].
The first theorem follows from Theorem \[rn\] (a) and (b) on setting $s=3/4$ and allowing $\delta \rightarrow 0$. Note that $\frac{1}{n-t} = \frac{1}{n} + \frac{t}{n^2} + O(\frac{1}{n^3})$ and $\frac{1}{(n-t)^2} = \frac{1}{n^2} + O(\frac{1}{n^3})$. For the second theorem, again we set $s=3/4$ in Theorem \[rn\](c). By Proposition \[prop:general\] we have $k(r,m,n) \le \frac{4(r+3)^4}{r^2}$. If we define $k(r) = \frac{4(r+3)^4}{r^2}$ the result follows.
Finally we derive the conditional probabilities in Corollary \[cdnlprobs1\].
Let $r,\, n$ be positive integers with $r$ fixed and $n$ "sufficiently large", and let $g$ be a uniformly distributed random element of $S_n$. First set $m = rn$. Let $A$ denote the event that $g$ is an $n$-cycle, and let $B$ denote the event that $g$ has order dividing $m$, so that the probability ${{\rm{Prob}}}(B)$ is $P(n,m)$. Then, by elementary probability theory, we have $$\begin{aligned}
{{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )}
{{{\rm{Prob}}}(B)}
= \frac{\frac{1}{n}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[leadingterms\], $\frac{1}{n}+\frac{c(r)}{n^2}<P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2}+O\left(\frac{1}
{n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned}
1-\frac{c(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B)
\leq 1-\frac{c(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\\\end{aligned}$$
Now suppose that $r=t!-1$ for some integer $t\geq2$, and let $A$ denote the event that $g$ contains an $(n-t)$-cycle, so that ${{\rm{Prob}}}(A)=\frac{1}{n-t}$. Then, with $B$ as above for the integer $m:=t!(n-t)$, we have $$\begin{aligned}
{{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )}
{{{\rm{Prob}}}(B)}
= \frac{\frac{1}{n-t}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[rn\](b), $\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}<P(n,m)=\frac{1}{n-t}+
\frac{c'(r)}{(n-t)^2}+O\left(\frac{1} {n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned}
1-\frac{c'(r)}{n}-O\left(\frac{1}
{n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B)
\leq 1-\frac{c'(r)}{n}+O\left(\frac{1}
{n^{2}}\right).\end{aligned}$$
This research was supported by ARC Discovery Grants DP0209706 and DP0557587. The authors thank the referee for carefully reading the submitted version and for advice on the paper.
Analysis of the mechanism of the vasodepressor effect of urocortin in anesthetized rats.
The aim was to examine if the depressor effect of urocortin involves activation of the nitric oxide (NO)/L-arginine pathway, production of prostanoids or opening of K(+)-channels. I.v. bolus urocortin (0.1-3 nmol/kg) dose-dependently decreased mean arterial pressure in thiobutabarbital-anesthetized rats. The depressor effect of urocortin was unaffected by pretreatment with N(G)-nitro-L-arginine methyl ester (L-NAME, inhibitor of NO synthase, i.v. bolus) or noradrenaline (i.v. infusion), which increased arterial pressure to a similar level as that produced by L-NAME. In addition, methylene blue (inhibitor of soluble guanylyl cyclase, i.v. infusion), indomethacin (cyclooxygenase inhibitor, i.v. bolus), glibenclamide (blocker of ATP-sensitive K(+)-channels, i.v. bolus) and tetraethylammonium (a non-specific K(+)-channel blocker, i.v. bolus) did not affect the depressor effect of urocortin. In conclusion, the depressor effect of urocortin in anesthetized rats is not mediated via the NO/L-arginine pathway, activation of soluble guanylyl cyclase, production of prostanoids, opening of TEA-sensitive K(+)-channels or opening of ATP-sensitive K(+)-channels.
Checklist for Fiction Readers
It can be hard to get useful feedback from friends and family. The key is to ask specific questions – questions that will get them to notice their own reactions as an ordinary reader. Honest answers to these questions will prove invaluable to any fiction writer.
1) Is there anything you didn't understand?
• Was anything confusing or hard to follow?
• Was it ever hard to figure out what was going on?
• Did anything not make sense to you?
• Were the events in the story believable?
• Were you caught up in the story? Or were there things that interrupted the flow and reminded you that you were reading a made-up story?
• Did you ever get characters confused, or forget who a character was?
2) Were you ever bored?
• Were there points where your interest flagged?
• Were there parts that felt extraneous or unnecessary?
3) Did you want to know more?
• Were there characters you wanted to know more about?
• Were you left wondering "what happened then?" at any point?
• Was there anything else – setting, theme, messages – that you wish had been explored more?
4) Did the characters seem real?
• Were the characters believable? (Were they ever unbelievable?)
• Did you care about the characters? (Did you ever not care?)
• Which characters did you like? Which characters did you hate? Which characters left you indifferent?
5) Did anything seem like a cliché?
• Did you ever roll your eyes at a plot twist?
• Did any of the descriptions or metaphors feel tired and unoriginal?
With inspiration from Orson Scott Card's concept of the "Wise Reader," from pp. 121-124 of his book How to Write Science Fiction and Fantasy (Writer's Digest Books, 1990).
Kim Kardashian will give a talk on the objectification of women in the media.
The selfie queen herself, 34, who recently "wrote" a book detailing her life in selfies, will speak at the Paramount Theatre in Oakland, California, later this month.
Kardashian, worth a reported $45 million as of last year, is perhaps uniquely placed to discuss the media's attitude towards and portrayal of the female form.
Billed as "the first lady of #fame," Kardashian will also give her opinion on her new book Selfish, the Kardashian and Jenner clan and the "business of millennial culture".
Still, hearing Mrs Kardashian West – currently expecting her second child with husband Kanye West – speak does not come cheap.
Ticket prices for the 30 June event range from $300 for the VIP Party (including a copy of her book and the chance to have it signed, along with a cocktail reception) to the lesser $150 – perks include a chance to get your book signed, but no cocktails.
---
abstract: 'This paper introduces a novel feature detector based only on information embedded inside a CNN trained on standard tasks (e.g. classification). While previous works already show that the features of a trained CNN are suitable descriptors, we show here how to extract the feature locations from the network to build a detector. This information is computed from the gradient of the feature map with respect to the input image. This provides a saliency map with local maxima on relevant keypoint locations. Contrary to recent CNN-based detectors, this method requires neither supervised training nor finetuning. We evaluate how repeatable and how "matchable" the detected keypoints are with the repeatability and matching scores. Matchability is measured with a simple descriptor introduced for the sake of the evaluation. This novel detector reaches similar performances on the standard evaluation HPatches dataset, as well as comparable robustness against illumination and viewpoint changes on Webcam and photo-tourism images. These results show that a CNN trained on a standard task embeds feature location information that is as relevant as when the CNN is specifically trained for feature detection.'
author:
- Assia Benbihi
- Matthieu Geist
- Cédric Pradalier
bibliography:
- '../egbib.bib'
title: 'ELF: Embedded Localisation of Features in pre-trained CNN'
---
Introduction
============
![(1-6) Embedded Detector: Given a CNN trained on a standard vision task (classification), we backpropagate the feature map back to the image space to compute a saliency map. It is thresholded to keep only the most informative signal and keypoints are the local maxima. (7-8): simple-descriptor.[]{data-label="fig:pipeline"}](img.png){width="\linewidth"}
Feature extraction, description and matching is a recurrent problem in vision tasks such as Structure from Motion (SfM), visual SLAM, scene recognition and image retrieval. The extraction consists in detecting image keypoints, then the matching pairs the nearest keypoints based on their descriptor distance. Even though hand-crafted solutions, such as SIFT [@lowe2004distinctive], prove to be successful, recent breakthroughs on local feature detection and description rely on supervised deep-learning methods [@detone18superpoint; @ono2018lf; @yi2016lift]. They detect keypoints on saliency maps learned by a Convolutional Neural Network (CNN), then compute descriptors using another CNN or a separate branch of it. They all require strong supervision and complex training procedures: [@yi2016lift] requires ground-truth matching keypoints to initiate the training, [@ono2018lf] needs the ground-truth camera pose and depth maps of the images, and [@detone18superpoint] circumvents the need for ground-truth data by using synthetic data but requires a heavy domain adaptation to transfer the training to realistic images. All these methods require a significant learning effort. In this paper, we show that a trained network already embeds enough information to build a State-of-the-Art (SoA) detector and descriptor.
The proposed method for local feature detection needs only a CNN trained on standard task, such as ImageNet [@deng2009imagenet] classification, and no further training. The detector, dubbed ELF, relies on the features learned by such a CNN and extract their locations from the feature map gradients. Previous work already highlights that trained CNN features are relevant descriptors [@fischer2014descriptor] and recent works [@balntas2016learning; @han2015matchnet; @simo2015discriminative] specifically train CNN to produce features suitable for keypoint description. However, no existing approach uses a pre-trained CNN for feature detection.
ELF computes the gradient of a trained CNN feature map with respect to (*w.r.t*) the image: this outputs a saliency map with local maxima on keypoint positions. Trained detectors learn this saliency map with a CNN, whereas we extract it with gradient computations. This approach is inspired by [@simonyan2013deep], which observes that the gradient of classification scores *w.r.t* the image is similar to the image saliency map. ELF differs in that it takes the gradient of feature maps and not the classification score, contrary to existing work exploiting CNN gradients [@selvaraju2017grad; @smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic]. These previous works aim at visualising the learning signal for classification specifically, whereas ELF extracts the feature locations. The extracted saliency map is then thresholded to keep only the most relevant locations and standard Non-Maxima Suppression (NMS) extracts the final keypoints (Figure \[fig:heatmap\_coco\]).
![ Saliency maps thresholding to keep only the most informative location. Top: original image. (Left-Right: Webcam [@verdie2015tilde], HPatches [@balntas2017hpatches], COCO[@lin2014microsoft]) Middle: blurred saliency maps. Bottom: saliency map after threshold. (Better seen on a computer.) []{data-label="fig:heatmap_coco"}](fig3_heatmap.png){width="\linewidth"}
ELF relies only on six parameters: 2$\times$2 Gaussian blur parameters for the automatic threshold level estimation and for the saliency map denoising; and two parameters for the (NMS) window and the border to ignore. Detection only requires one forward and one backward passes and takes $\sim$0.2s per image on a simple Quadro M2200, which makes it suitable for real-time applications.
ELF is compared to individual detectors with the standard *repeatability* metric [@mikolajczyk2005comparison], but results show that this metric is not discriminative enough. Most of the existing detectors can extract keypoints repeated across images with similar repeatability scores. Also, this metric does not express how "useful" the detected keypoints are: if we sample all pixels as keypoints, we reach 100% repeatability, but the matching may not be perfect if many areas look alike. Therefore, the detected keypoints are also evaluated on how "matchable" they are with the *matching score* [@mikolajczyk2005comparison]. This metric requires describing the keypoints, so we define a simple descriptor: it is based on the interpolation of a CNN feature map on the detected keypoints, as in [@detone18superpoint]. This avoids biasing the performance by choosing an existing competitive descriptor. Experiments show that even this simple descriptor reaches competitive results, which supports the observation of [@fischer2014descriptor] on the relevance of CNN features as descriptors. More details are provided in Section 4.1.
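For concreteness, repeatability between two images related by a homography can be sketched as follows (our simplified illustration; the standard protocol of [@mikolajczyk2005comparison] additionally masks keypoints outside the shared view and uses region overlap rather than a pure distance test):

```python
import numpy as np

def repeatability(kp1, kp2, H, eps=3.0):
    """Fraction of keypoints repeated between two views related by homography H.

    kp1, kp2: (N, 2) arrays of (x, y) pixel coordinates.
    H: 3x3 homography mapping image-1 coordinates to image-2 coordinates.
    eps: distance threshold in pixels.
    """
    pts = np.hstack([kp1, np.ones((len(kp1), 1))])   # to homogeneous coordinates
    warped = (H @ pts.T).T
    warped = warped[:, :2] / warped[:, 2:3]          # back to inhomogeneous (x, y)
    # distance from every warped image-1 keypoint to every image-2 keypoint
    d = np.linalg.norm(warped[:, None, :] - kp2[None, :, :], axis=2)
    repeated = (d.min(axis=1) < eps).sum()
    return repeated / min(len(kp1), len(kp2))
```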
ELF is tested on five architectures: three classification networks trained on ImageNet classification: AlexNet, VGG and Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception], as well as the SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf] descriptor networks. Although outside the scope of this paper, this comparison provides preliminary results on the influence of the network architecture, task and training data on ELF's performance. Metrics are computed on HPatches [@balntas2017hpatches] for generic performances. We derive two auxiliary datasets from HPatches to study scale and rotation robustness. Light and 3D viewpoint robustness analyses are run on the Strecha and Webcam datasets [@strecha2008benchmarking; @verdie2015tilde]. These extensive experiments show that ELF is on par with other sparse detectors, which suggests that the feature representation and location information learnt by a CNN to complete a vision task is as relevant as when the CNN is specifically trained for feature detection. We additionally test ELF's robustness on 3D reconstruction from images in the context of the CVPR 2019 Image Matching challenge [@cvpr19challenge]. Once again, ELF is on par with other sparse methods even though denser methods, e.g. [@detone18superpoint], are more appropriate for such a task. Our contributions are the following:
- We show that a CNN trained on a standard vision task embeds feature location in the feature gradients. This information is as relevant for feature detection as when a CNN is specifically trained for it.
- We define a systematic method for local feature detection. Extensive experiments show that ELF is on par with other SoA deep trained detectors. They also update the previous result from [@fischer2014descriptor]: self-taught CNN features provide SoA descriptors in spite of recent improvements in CNN descriptors [@choy2016universal].
- We release the python-based evaluation code to ease future comparison together with ELF code[^1]. The introduced robustness datasets are also made public [^2].
Related work
============
Early methods rely on hand-crafted detection and description : SIFT [@lowe2004distinctive] detects 3D spatial-scale keypoints on difference of gaussians and describes them with a 3D Histogram Of Gradients (HOG). SURF [@bay2006surf] uses image integral to speed up the previous detection and uses a sum of Haar wavelet responses for description. KAZE [@alcantarilla2012kaze] extends the previous multi-scale approach by detecting features in non-linear scale spaces instead of the classic Gaussian ones. ORB [@rublee2011orb] combines the FAST [@rosten2006machine] detection, the BRIEF [@calonder2010brief] description, and improves them to make the pipeline scale and rotation invariant. MSER-based detector hand-crafts desired invariance properties for keypoints, and designs a fast algorithm to detect them [@matas2004robust]. Even though these hand-crafted methods have proven to be successful and to reach state-of-the-art performance for some applications, recent research focus on learning-based methods.
One of the first learned detectors is TILDE [@verdie2015tilde], trained under drastic changes of light and weather on the Webcam dataset. They use supervision to learn saliency maps whose maxima are keypoint locations. Ground-truth saliency maps are generated with "good keypoints": they use SIFT and filter out keypoints that are not repeated in more than 100 images. One drawback of this method is the need for supervision that relies on another detector. However, there is no universal explicit definition of what a good keypoint is. This lack of specification inspires Quad-Networks [@savinov2017quad] to adopt an unsupervised approach: they train a neural network to rank keypoints according to their robustness to random hand-crafted transformations. They keep the top/bottom quantile of the ranking as keypoints. ELF is similar in that it does not require supervision, but differs in that it does not need to further train the CNN.
Other learned detectors are trained within full detection/description pipelines such as LIFT [@yi2016lift], SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf]. LIFT contribution lies in their original training method of three CNNs. The detector CNN learns a saliency map where the most salient points are keypoints. They then crop patches around these keypoints, compute their orientations and descriptors with two other CNNs. They first train the descriptor with patches around ground-truth matching points with contrastive loss, then the orientation CNN together with the descriptor and finally with the detector. One drawback of this method is the need for ground-truth matching keypoints to initiate the training. In [@detone18superpoint], the problem is avoided by pre-training the detector on a synthetic geometric dataset made of polygons on which they detect mostly corners. The detector is then finetuned during the descriptor training on image pairs from COCO [@lin2014microsoft] with synthetic homographies and the correspondence contrastive loss introduced in [@choy2016universal]. LF-Net relies on another type of supervision: it uses ground-truth camera poses and image depth maps that are easier to compute with laser or standard SfM than ground-truth matching keypoints. Its training pipeline builds over LIFT and employs the projective camera model to project detected keypoints from one image to the other. These keypoint pairs form the ground-truth matching points to train the network. ELF differs in that the CNN model is already trained on a standard task. It then extracts the relevant information embedded inside the network for local feature detection, which requires no training nor supervision.
The detection method of this paper is mainly inspired from the initial observation in [@simonyan2013deep]: given a CNN trained for classification, the gradient of a class score *w.r.t* the image is the saliency map of the class object in the input image. A line of works aims at visualizing the CNN representation by inverting it into the image space through optimization [@mahendran2015understanding; @gatys2016image]. Our work differs in that we backpropagate the feature map itself and not a feature loss. Following works use these saliency maps to better understand the CNN training process and justify the CNN outputs. Efforts mostly focus on the gradient definitions [@smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic; @zeiler2014visualizing]. They differ in the way they handle the backpropagation of the non-linear units such as Relu. Grad-CAM [@selvaraju2017grad] introduces a variant where they fuse several gradients of the classification score *w.r.t* feature maps and not the image space. Instead, ELF computes the gradient of the feature map, and not a classification score, *w.r.t* the image. Also we run simple backpropagation which differs in the non-linearity handling: all the signal is backpropagated no matter whether the feature maps or the gradients are positive or not. Finally, as far as we know, this is the first work to exploit the localisation information present in these gradients for feature detection.
The simple descriptor introduced for the sake of the matchability evaluation is taken from UCN [@choy2016universal]. Given a feature map and the keypoints to describe, it interpolates the feature map at the keypoint locations. Using a trained CNN for feature description is one of the early applications of CNN [@fischer2014descriptor]. Later, research focused on specifically training the CNN to generate features suitable for keypoint matching, either with patch-based approaches, among which [@simo2015discriminative; @melekhov2016siamese; @han2015matchnet; @zagoruyko2015learning], or image-based approaches [@taira2018inloc; @choy2016universal]. We choose the description method from UCN [@choy2016universal], also used by SuperPoint, for its complexity is only $O(1)$ compared to patch-based approaches that are $O(N)$, with $N$ the number of keypoints. We favor UCN over InLoc [@taira2018inloc] as it is simpler to compute. The motivation here is only to get a simple descriptor that is easy to integrate with all detectors for a fair comparison of the *detector* matching performances. So we overlook the description performance.
Method
======
This section defines ELF, a detection method valid for any trained CNN. Keypoints are local maxima of a saliency map computed as the feature gradient *w.r.t* the image. We use the data adaptive Kapur method [@kapur1985new] to automatically threshold the saliency map and keep only the most salient locations, then run NMS for local maxima detection.
![(Bigger version Figure \[fig:big\_saliency\_coco\].) Saliency maps computed from the feature map gradient $\left| {}^tF^l(\mathbf{I}) \cdot \nabla_I F^l \right|$. Enhanced image contrast for better visualisation. Top row: gradients of VGG $pool_2$ and $pool_3$ show a loss of resolution from $pool_2$ to $pool_3$. Bottom: $(pool_i)_{i \in [1,2,5]}$ of VGG on Webcam, HPatches and Coco images. Low level saliency maps activate accurately whereas higher saliency maps are blurred.[]{data-label="fig:saliency_coco"}](fig2_saliency_bis.png){width="\linewidth"}
Feature Specific Saliency
-------------------------
We generate a saliency map that activates on the most informative image region for a specific CNN feature level $l$. Let $\mathbf{I}$ be a vector image of dimension $D_I = H_I \cdot W_I \cdot C_I$. Let $F^l$ be a vectorized feature map of dimension $D_F= H_l \cdot W_l \cdot C_l$. The saliency map $S^l$, of dimension $D_I$, is $S^l(\mathbf{I})=\left| ^tF^l(\mathbf{I}) \cdot \nabla_I F^l \right|$, with $\nabla_I F^l$ a $D_F \times D_I$ matrix.
The saliency activates on the image regions that contribute the most to the feature representation $F^l(\mathbf{I})$. The term $\nabla_I F^l$ explicits the correlation between the feature space of $F^l$ and the image space in general. The multiplication by $F^l(\mathbf{I})$ applies the correlation to the features $F^l(\mathbf{I})$ specifically and generate a visualisation in image space $S^l(\mathbf{I})$. From a geometrical point of view, this operation can be seen as the projection $\nabla_I F^l$ of a feature signal $F^l(\mathbf{I})$ into the image space. From a signal processing approach, $F^l(\mathbf{I})$ is an input signal filtered through $\nabla_I F^l$ into the image space. If $C_I>1$, $S^l$ is converted into a grayscale image by averaging it across channels.
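In an autograd framework this saliency needs no explicit Jacobian: since $\nabla_I(\frac{1}{2}\|F^l\|^2) = {}^t(\nabla_I F^l)\, F^l$, backpropagating the squared feature norm yields exactly the signal ${}^tF^l \cdot \nabla_I F^l$ in one backward pass. A minimal PyTorch sketch of this computation (ours, not the released code; a small random CNN stands in for a trained network, and the layer sizes are illustrative):

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained CNN up to feature level l (e.g. VGG pool2);
# random weights here purely for illustration -- ELF assumes a network
# already trained on a standard task.
cnn = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

def elf_saliency(image):
    """S^l(I) = | tF^l(I) . grad_I F^l |, via one forward and one backward pass.

    Backpropagating 0.5 * ||F^l||^2 gives (grad_I F^l)^T F^l, i.e. the same
    vector; plain backprop is used, with no clamping at the non-linearities.
    """
    img = image.clone().requires_grad_(True)
    feat = cnn(img)
    (0.5 * (feat ** 2).sum()).backward()
    return img.grad.abs().mean(dim=1)  # average over colour channels

saliency = elf_saliency(torch.rand(1, 3, 64, 64))
```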
Feature Map Selection
---------------------
We provide visual guidelines to choose the feature level $l$ so that $F^l$ still holds high resolution localisation information while providing a useful high-level representation.
CNN operations such as convolution and pooling increase the receptive field of feature maps while reducing their spatial dimensions. This means that $F^{l}$ has less spatial resolution than $F^{l-1}$ and the backpropagated signal $S^l$ ends up more spread than $S^{l-1}$. This is similar to when an image is over-enlarged, and it can be observed in Figure \[fig:saliency\_coco\], which shows the gradients of the VGG feature maps. On the top row, $pool_2$'s gradient (left) better captures the location details of the dome whereas $pool_3$'s gradient (right) is more spread. On the bottom rows, the images lose their resolution as we go higher in the network. Another consequence of this resolution loss is that small features are not embedded in $F^l$ if $l$ is too high. This would reduce the space of potential keypoints to only large features, which would hinder the method. This observation motivates us to favor low-level feature maps for feature detection. We choose the final $F^l$ by taking the highest $l$ that still provides accurate localisation. This is visually observable as sparse high-intensity signal, in contrast to the blurry aspect of higher layers.
Automatic Data-Adaptive Thresholding
------------------------------------
The threshold is automatic and adapts to the saliency map distribution to keep only the most informative regions. Figure \[fig:heatmap\_coco\] shows saliency maps before and after thresholding using Kapur's method [@kapur1985new], which we briefly recall below. It chooses the threshold to maximize the information between the image background and foreground, *i.e.* the pixel distributions below and above the threshold. This method is especially relevant in this case as it aims at maintaining as much information on the distribution above the threshold as possible. This distribution describes the set of local maxima among which we choose our keypoints. More formally, for an image $\mathbf{I}$ of $N$ pixels with $n$ sorted gray levels and $(f_i)_{i \in n}$ the corresponding histogram, $p_i=\frac{f_i}{N}$ is the empirical probability of a pixel to hold the value $f_i$. Let $s \in n$ be a threshold level and $A,B$ the empirical background and foreground distributions: $A = \left(\frac{p_i}{\sum_{i<s}p_i}\right)_{i<s}$ and $B = \left(\frac{p_i}{\sum_{i\geq s}p_i}\right)_{i\geq s}$. The level $s$ is chosen to maximize the information between $A$ and $B$, and the threshold value is set to $f_s$. For better results, we blur the image with a Gaussian of parameters $(\mu_{thr}, \sigma_{thr})$ before computing the threshold level.
Once the threshold is set, we denoise the image with a second Gaussian blur of parameters $(\mu_{noise}, \sigma_{noise})$ and run standard NMS (the same as for SuperPoint) where we iteratively select decreasing global maxima while ensuring that their nearest neighbor distance is higher than the window $w_{\textrm{NMS}} \in \mathbb{N}$. Also we ignore the $b_{\textrm{NMS}} \in \mathbb{N}$ pixels around the image border.
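The two post-processing steps can be sketched as follows (our illustration; the two Gaussian blurs are omitted and the default parameter values are arbitrary stand-ins for $w_{\textrm{NMS}}$ and $b_{\textrm{NMS}}$):

```python
import numpy as np

def kapur_threshold(saliency, bins=256):
    """Kapur's method: pick the level maximising the summed entropies of the
    background (below s) and foreground (above s) distributions."""
    hist, edges = np.histogram(saliency, bins=bins)
    p = hist / hist.sum()
    best_s, best_h = 1, -np.inf
    for s in range(1, bins):
        pa, pb = p[:s].sum(), p[s:].sum()
        if pa == 0 or pb == 0:
            continue
        a = p[:s][p[:s] > 0] / pa            # background distribution A
        b = p[s:][p[s:] > 0] / pb            # foreground distribution B
        h = -(a * np.log(a)).sum() - (b * np.log(b)).sum()
        if h > best_h:
            best_h, best_s = h, s
    return edges[best_s]

def nms(saliency, w_nms=5, b_nms=4):
    """Greedy NMS: take decreasing maxima whose pairwise distance exceeds
    w_nms, ignoring a border of b_nms pixels (zeroed pixels are skipped)."""
    h, w = saliency.shape
    ys, xs = np.unravel_index(np.argsort(saliency, axis=None)[::-1], (h, w))
    kept = []
    for y, x in zip(ys, xs):
        if saliency[y, x] <= 0:
            break  # remaining candidates were removed by the threshold
        if not (b_nms <= y < h - b_nms and b_nms <= x < w - b_nms):
            continue
        if all((y - ky) ** 2 + (x - kx) ** 2 > w_nms ** 2 for ky, kx in kept):
            kept.append((y, x))
    return np.array(kept)
```

Keypoints would then be `nms(np.where(S > kapur_threshold(S), S, 0))` for a saliency map `S`.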
Simple descriptor
-----------------
As mentioned in the introduction, the repeatability score no longer discriminates among detectors, so they are also evaluated on how 'matchable' their detected keypoints are with the matching score. To do so, the ELF detector is completed with a simple descriptor inspired by SuperPoint's: we interpolate a CNN feature map at the detected keypoints. Using this simple descriptor rather than an existing competitive one avoids unfairly boosting ELF's performance. Although basic, experiments show that this descriptor turns ELF into a competitive feature detection/description method.
The feature map used for description may differ from the one used for detection. High-level feature maps have a wider receptive field and thus take more context into account when describing a pixel location. This leads to more informative descriptors, which motivates us to favor higher-level maps. However, we are also constrained by the loss of resolution described previously: if the feature map level is too high, interpolating the descriptors generates vectors that are too similar to each other. For example, the VGG $pool_4$ layer produces more discriminative descriptors than $pool_5$, even though $pool_5$ embeds information more relevant for classification. Empirically, we observe that there exists a layer level $l'$ above which the description performance stagnates and then decreases. This is measured with the matching score metric introduced in [@mikolajczyk2005comparison]. We make the final choice of feature map by testing layers $l'>l$ and selecting the lowest one before the descriptor performance stagnates.
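The interpolation-based description can be sketched as follows (a minimal NumPy bilinear sampler; the stride value, e.g. 8 for a pool3-like map, is an assumption for illustration):

```python
import numpy as np

def describe(fmap, kps, stride=8):
    """Bilinearly sample an (Hc, Wc, C) feature map at (x, y) image
    coordinates, then L2-normalise each sampled vector."""
    Hc, Wc, C = fmap.shape
    out = []
    for x, y in kps:
        fx, fy = x / stride, y / stride          # image -> feature coords
        x0 = int(min(np.floor(fx), Wc - 2)); x1 = x0 + 1
        y0 = int(min(np.floor(fy), Hc - 2)); y1 = y0 + 1
        ax, ay = fx - x0, fy - y0
        d = ((1 - ax) * (1 - ay) * fmap[y0, x0] + ax * (1 - ay) * fmap[y0, x1]
             + (1 - ax) * ay * fmap[y1, x0] + ax * ay * fmap[y1, x1])
        n = np.linalg.norm(d)
        out.append(d / n if n > 0 else d)
    return np.stack(out)
```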
The compared detectors are evaluated with both their original descriptor and this simple one. The motivation behind this choice is as follows: detectors may be biased to sample keypoints that their respective descriptor can describe 'well' [@yi2016lift], so it is fair to compute the matching score with the original detector/descriptor pairs. However, a detector can sample 'useless' points (e.g. sky pixels for 3D reconstruction) that its descriptor nonetheless characterises 'well'; in this case, the descriptor hides the detector's flaw. This motivates pairing all detectors with a common, independent descriptor for evaluation. We run both approaches since each is as fair as the other.
Experiments
===========
This section describes the evaluation metrics and datasets as well as the method's tuning. Our method is compared to detectors with available public code: the fully hand-crafted SIFT [@lowe2004distinctive], SURF [@bay2006surf], ORB [@rublee2011orb], KAZE [@alcantarilla2012kaze]; the learning-based LIFT [@yi2016lift], SuperPoint [@detone18superpoint], LF-Net [@ono2018lf]; and the individual detectors TILDE [@verdie2015tilde] and MSER [@matas2004robust].
Metrics
-------
We follow the standard validation guidelines [@mikolajczyk2005comparison], which evaluate detection performance with *repeatability (rep)*: the percentage of keypoints common to both images. We also compute the *matching score (ms)* as an additional *detector* metric. It captures the percentage of keypoint pairs that are nearest neighbours in both image space and descriptor space, i.e. the ratio of correctly matched keypoints. For completeness, the mathematical definitions of the metrics are provided in the Appendix and their implementation in the soon-to-be-released code.
A way to reach perfect *rep* is to sample all the pixels, or to sample them with a spatial frequency higher than the distance threshold $\epsilon_{kp}$ of the metric. Limiting the number of keypoints prevents the first flaw but not the second. Since detectors are always used together with descriptors, another way to frame detector evaluation is: *'a good keypoint is one that can be discriminatively described and matched'*. One could object that such a metric can be corrupted by the descriptor, but two guidelines ensure that a detector flaw cannot be hidden by a high-performing descriptor. First, one experiment must evaluate all detectors with one fixed descriptor (the simple one defined in 3.4). Second, *ms* can never be higher than *rep*, so a detector with a poor *rep* leads to a poor *ms*.
Here the number of detected keypoints is limited to 500 for all methods. As done in [@detone18superpoint; @ono2018lf], we replace the overlap score of [@mikolajczyk2005comparison] with a 5-pixel distance threshold when computing correspondences. Following [@yi2016lift], we also modify the matching score definition of [@mikolajczyk2005comparison] to run a greedy bipartite-graph matching on all descriptors, rather than only on the descriptor pairs whose distance is below an arbitrary threshold. This allows us to compare all state-of-the-art methods even when their descriptor dimensions and ranges vary significantly. (More details in the Appendix.)
Datasets
--------
All images are resized to 480$\times$640 pixels and the image pair transformations are rectified accordingly.
![Left-Right: HPatches: planar viewpoint. Webcam: light. HPatches: rotation. HPatches: scale. Strecha: 3D viewpoint.[]{data-label="fig:datasets"}](fig13.png){width="\linewidth"}
**General performances.** The HPatches dataset [@balntas2017hpatches] gathers a subset of standard evaluation images such as DTU and OxfordAffine [@aanaes2012interesting; @mikolajczyk2005performance]: it provides a total of 696 images, 6 images for each of 116 scenes, and the corresponding homographies between the images of a same scene. For 57 of these scenes, the main changes are photometric; the remaining 59 show significant geometric deformations due to viewpoint changes on planar scenes.
**Illumination Robustness.** The Webcam dataset [@verdie2015tilde] gathers static outdoor scenes with drastic natural light changes contrary to HPatches which mostly holds artificial light changes in indoor scenes.
**Rotation and Scale Robustness.** We derive two datasets from HPatches. For each of the 116 scenes, we keep the first image and rotate it with angles from $0^{\circ}$ to $210^{\circ}$ with an interval of $40^{\circ}$. Four zoomed-in versions of each image are generated with scales $[1.25, 1.5, 1.75, 2]$. We release these two datasets together with their ground-truth homographies for future comparisons.
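The ground-truth homography for a rotation about the image center is a composition of translations and a rotation (a minimal NumPy sketch; a zoom by factor $s$ is the analogue with $\mathrm{diag}(s, s, 1)$ in place of the rotation):

```python
import numpy as np

def rotation_homography(angle_deg, w, h):
    """3x3 homography rotating an image by angle_deg about its center."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    to_center = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1.0]])
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
    back = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1.0]])
    return back @ rot @ to_center
```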
**3D Viewpoint Robustness.** We use three Strecha scenes [@strecha2008benchmarking] with increasing viewpoint changes: *Fountain, Castle entry, Herzjesu-P8*. The viewpoint changes proposed by HPatches are limited to planar scenes which does not reflect the complexity of 3D structures. Since the ground-truth depths are not available anymore, we use COLMAP [@schonberger2016structure] 3D reconstruction to obtain ground-truth scaleless depth. We release the obtained depth maps and camera poses together with the evaluation code. ELF robustness is additionally tested in the CVPR19 Image Matching Challenge [@cvpr19challenge] (see results sections).
Baselines
---------
We describe the rationale behind the evaluation. The tests run on a QuadroM2200 with Tensorflow 1.4, Cuda8, Cudnn6 and Opencv3.4. We use the OpenCV implementations of SIFT, SURF, ORB, KAZE and MSER with the default parameters, and the authors' code for TILDE, LIFT, SuperPoint and LF-Net with the provided models and parameters. When comparing detectors in the feature matching pipeline, we measure their matching score with both their original descriptor and the ELF simple descriptor. For MSER and TILDE, we use the VGG simple descriptor.
**Architecture influence.** ELF is tested on five networks: three classification networks trained on ImageNet (AlexNet, VGG, Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception]) as well as the trained SuperPoint and LF-Net descriptor networks. We name each variant by prefixing the network's name with ELF, as in ELF-VGG. The paper compares the influence of i) the architecture for a fixed task (ELF-AlexNet [@krizhevsky2012imagenet] *vs.* ELF-VGG [@simonyan2014very] *vs.* ELF-Xception [@chollet17xception]), ii) the task (ELF-VGG *vs.* ELF-SuperPoint (SP) descriptor), iii) the training dataset (ELF-LFNet trained on phototourism *vs.* ELF-SP trained on MS-COCO). This study is being refined with more independent comparisons of tasks, datasets and architectures, soon available in a journal extension.
We use the authors' code and pre-trained models, which we convert to Tensorflow [@abadi2016tensorflow] except for LF-Net. We search the blurring parameters $(\mu_{thr}, \sigma_{thr})$, $(\mu_{noise}, \sigma_{noise})$ in the range $[\![3,21]\!]^2$ and the NMS parameters $(w_{NMS}, b_{NMS})$ in $[\![4,13]\!]^2$.
**Individual components comparison.** Individual detectors are compared through the matchability of their detections when paired with the simple VGG-pool3 descriptor. This way, the *ms* only depends on the detection performance since the description is fixed for all detectors. The comparison between ELF and recent deep methods raises the question of whether triplet-like losses are relevant to train CNN descriptors. Indeed, these losses constrain the CNN features directly so that matching keypoints are near each other in descriptor space. Simpler losses, such as cross-entropy for classification, only constrain the CNN output on the task while leaving the internal representation up to the CNN.
The ELF-VGG detector is also integrated with existing descriptors. This evaluates how the CNN's self-learned feature localisation compares with hand-crafted and learned ones.
**Gradient Baseline.** Visually, the feature gradient map is reminiscent of the image gradients computed with the Sobel or Laplacian operators. We run two variants of our pipeline where we replace the feature gradient with them. This aims at showing whether CNN feature gradients embed more information than image intensity gradients.
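A minimal NumPy sketch of such an intensity-gradient baseline (Sobel magnitude; the kernel and edge-padding choices are illustrative):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def sobel_saliency(img):
    """Gradient-magnitude map |(gx, gy)| from horizontal and vertical
    Sobel responses; replaces the CNN feature gradient in the baseline."""
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * SOBEL_X)
            gy[i, j] = np.sum(win * SOBEL_X.T)
    return np.sqrt(gx ** 2 + gy ** 2)
```

The rest of the pipeline (thresholding, NMS) is unchanged; only the saliency map is swapped.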
Results
=======
Experiments show that ELF compares with the state-of-the-art on HPatches and demonstrates robustness properties similar to recent learned methods. It generates saliency maps visually akin to a Laplacian on very structured images (HPatches) but proves more robust on outdoor scenes with natural conditions (Webcam). When integrated with existing feature descriptors, ELF boosts their matching score; even integrating the simple ELF descriptor improves it, with the exception of SuperPoint, for which results are equivalent. This sheds new light on the representations learnt by CNNs and suggests that deep description methods may underexploit the information embedded in their trained networks. It may also be that the current metrics are no longer relevant for deep learning methods: all of them detect repeatable keypoints with roughly the same performance. Even though the matchability of the points (*ms*) is a bit more discriminative, neither metric expresses how 'useful' the keypoints are for the end-goal task. One way to do so is to evaluate an end-goal task (*e.g.* Structure-from-Motion), but for such an evaluation to be rigorous all the other pipeline steps must be fixed across papers. Recently, the CVPR19 Image Matching workshop proposed such an evaluation, though it is not fully automatic yet. These results also question whether current descriptor-training losses are a strong enough signal to constrain CNN features better than a simple cross-entropy.
The tabular version of the following results is provided in the Appendix. The graphs are better viewed in color on a computer screen. Unless mentioned otherwise, we compute repeatability for each detector, and the matching score of detectors with their respective descriptors when they have one. We use the ELF-VGG-$pool_4$ descriptor for TILDE, MSER, ELF-VGG, ELF-SuperPoint, and ELF-LFNet, and AlexNet and Xception feature maps to build their respective simple descriptors. The meta-parameters of each variant are provided in the Appendix.
**General performances.** Figure \[fig:hpatch\_gle\_perf\] (top) shows that the *rep* variance is low across detectors whereas *ms* is more discriminative, hence the validation method (Section 4.1). On HPatches, SuperPoint (SP) reaches the best *rep*-*ms* \[68.6, 57.1\], closely followed by ELF (e.g. ELF-VGG: \[63.8, 51.8\]) and TILDE \[66.0, 46.7\]. In general, we observe that learning-based methods all outperform hand-crafted ones. Still, LF-Net and LIFT curiously underperform on HPatches: one reason may be that their training data differs too much from this dataset. LIFT is trained on outdoor images only and LF-Net on either indoor or outdoor datasets, whereas HPatches mixes both. We compute metrics for both LF-Net models and report the highest one (indoor). Even though LF-Net and LIFT fall behind the top learned methods, they still outperform hand-crafted ones, which suggests that their frameworks learn feature-specific information that hand-crafted methods cannot capture. This supports the recent shift towards trained detectors and descriptors.
**Light Robustness.** Again, *ms* is a better discriminant on Webcam than *rep* (Figure \[fig:hpatch\_gle\_perf\] bottom). ELF-VGG reaches the top *rep*-*ms* \[53.2, 43.7\], closely followed by TILDE \[52.5, 34.7\], previously the state-of-the-art detector.
Overall, there is a performance degradation ($\sim$20%) from HPatches to Webcam. HPatches holds images with standard features such as corners, which state-of-the-art methods are built to recognise either by definition or by supervision. There are fewer such features in the Webcam dataset because natural lighting blurs them, and its strong intensity variations are not handled well by these models. One explanation could be that the learning-based methods never saw such lighting variations in their training sets, but this assumption is rejected by the observation that even SuperPoint, which is trained on COCO images, outperforms LIFT and LF-Net, which are trained on outdoor images. Another explanation is that what matters most is the pixel distribution the network is trained on, rather than the image content. The top methods are the classifier-based ELF variants and SuperPoint: the former are trained on the huge ImageNet dataset and benefit from heavy data augmentation, and SuperPoint also employs a considerable data augmentation strategy. These networks may therefore cover a much wider pixel distribution, which would explain their robustness to pixel distribution changes such as light modifications.
**Architecture influence.** ELF is tested on three classification networks as well as the descriptor networks of SuperPoint and LF-Net (Figure \[fig:hpatch\_gle\_perf\], bars under 'ELF').
For a fixed training task (classification) on a fixed dataset (ImageNet), VGG, AlexNet and Xception are compared. As could be expected, the network architecture has a critical impact on detection, and ELF-VGG outperforms the other variants. The *rep* gap can be explained by the fact that AlexNet uses wider convolutions than VGG, which induces a higher loss of resolution when computing the gradient. As for *ms*, the higher-dimensional representation space of VGG may help build more informative features, which provide a stronger signal to backpropagate. This could also explain why ELF-VGG outperforms ELF-Xception, which has fewer parameters. Another explanation is that ELF-Xception's gradient maps appear smoother: salient locations are less emphasized, which makes keypoint detection harder. One could point to the depth-wise convolutions to explain this visual aspect, but we could not find an experimental way to verify it. Surprisingly, ELF-LFNet outperforms the original LF-Net on both HPatches and Webcam, and the ELF-SuperPoint variant reaches results similar to the original.
![HPatches scale. Left-Right: rep, ms.[]{data-label="fig:robust_scale"}](fig7_scale.png){width="\linewidth"}
**Scale Robustness.** ELF-VGG is compared with state-of-the-art detectors and their respective descriptors (Figure \[fig:robust\_scale\]). Repeatability is mostly stable for all methods: SIFT and SuperPoint are the most invariant, whereas ELF follows the same variations as LIFT and LF-Net. Once again, *ms* better assesses the detectors' performance: SuperPoint is the most robust to scale changes, followed by LIFT and SIFT. ELF and LF-Net lose 50% of their matching score as the scale increases. It is surprising that LIFT is more scale-robust than LF-Net when the latter's global performance is higher. A reasonable explanation is that LIFT detects keypoints at 21 scales of the same image whereas LF-Net only runs its detector CNN on 5 scales. Nonetheless, ELF outperforms LF-Net without any manual multi-scale processing.
![HPatches rotation. Left-Right: rep, ms.[]{data-label="fig:robust_rotation"}](fig7_angle.png){width="\linewidth"}
**Rotation Robustness.** Even though *rep* shows little variation (Figure \[fig:robust\_rotation\]), the *ms* of all learned methods collapses while only SIFT survives the rotation changes. This can be explained by SIFT's explicit rotation estimation step. However, LIFT and LF-Net also run such a computation, which suggests that either SIFT's hand-crafted orientation estimation is more accurate or that HOG features are more rotation invariant than learned ones. LF-Net still performs better than LIFT: this may be because it learns the keypoint orientation from the keypoint's feature representation rather than from the keypoint pixels, as done in LIFT. Not surprisingly, the ELF simple descriptor is not rotation invariant, as the convolutions that make up the CNN are not; this also explains why SuperPoint fails in a similar manner. These results suggest that the orientation learning step in LIFT and LF-Net is needed but that its robustness could be improved.
![Robustness analysis: 3D viewpoint.[]{data-label="fig:robust_strecha"}](fig7_strecha.png){width="\linewidth"}
**3D Viewpoint Robustness.** While SIFT shows a clear advantage in pure-rotation robustness, it displays the same degradation as other methods under realistic rotation-and-translation over 3D structures. Figure \[fig:robust\_strecha\] shows that all methods degrade uniformly. One could argue that this small data sample is not representative enough for such a robustness analysis; we rather read these results as suggesting that all methods share the same robustness to 3D viewpoint changes. Even though the previous analyses allow ranking the different feature matching pipelines, each has advantages over the others in certain situations: ELF or SuperPoint for general homography matches, SIFT for rotation robustness. This is why this paper only aims at showing that ELF reaches the same performance and shares similar properties with existing methods, as there is no generic ranking criterion. The recent evaluation run by the CVPR19 Image Matching Challenge [@cvpr19challenge] supports these conclusions.
![Left-Middle-Right bars: original method, integration of ELF detection, integration of ELF description.[]{data-label="fig:ind_component"}](fig11.png){width="\linewidth"}
**Individual components performance.** First, all methods' descriptors are replaced with the simple ELF-VGG-$pool_3$ one. We then compute their new *ms* and compare it to ELF-VGG on HPatches and Webcam (Figure \[fig:ind\_component\], stripes). The description is based on $pool_3$ instead of $pool_4$ here because it produces better results for the other methods while preserving ours. ELF reaches a higher *ms* \[51.3\] for all methods except SuperPoint \[53.7\], for which it is comparable. This shows that ELF is as relevant, if not more, than previous hand-crafted or learned detectors. This naturally leads to the question: *'What kind of keypoints does ELF detect?'* There is currently no answer, as it is complex to explicitly characterize the properties of the pixel areas around keypoints; hence the open question *'What makes a good keypoint?'* mentioned at the beginning of the paper. Still, we observe that ELF activates mostly on high-intensity-gradient areas, although not all of them. One explanation is that, being trained on a vision task, the CNN learns to ignore image regions useless for that task, which suppresses the gradient signal in areas that may be unsuited for matching.
Another surprising observation regards CNN descriptors: SuperPoint (SP) keypoints are described with the SP descriptor on the one hand and with the simple ELF-VGG one on the other. Comparing the two resulting matching scores is one way to compare the SP and ELF descriptors. Both approaches lead to similar *ms*. This is surprising because SP specifically trains a description CNN so that its feature map is suitable for keypoint description [@choy2016universal], whereas VGG training imposes no explicit constraint on the features beyond the cross-entropy loss. Still, both feature maps reach similar numerical description performance. This raises the question of whether contrastive-like losses, whose inputs are CNN features, constrain the CNN representation any better than simpler losses, such as cross-entropy, whose inputs are classification logits. It also shows that there is more to CNNs than the task they are trained on: they embed information that can prove useful for unrelated tasks. Although the simple descriptor was defined for evaluation purposes, these results demonstrate that it can serve as a description baseline for feature extraction.
The integration of ELF detection with the other methods' descriptors (Figure \[fig:ind\_component\], circle) boosts their *ms*. [@yi2016lift] previously suggested that there may be a correlation between the detector and the descriptor within a same method, i.e. the LIFT descriptor is trained to describe only the keypoints output by its detector. However, these results show that ELF can easily be integrated into existing pipelines and even boost their performance.
**Gradient Baseline.** The saliency map used in ELF is replaced with simple Sobel or Laplacian gradient maps; the rest of the detection pipeline stays the same. We compute their performance (Figure \[fig:gradient\_perf\] left) and complete them with the simple ELF descriptors from the VGG, AlexNet and Xception networks. These new hybrids are then compared to their respective ELF variant (right). Results show that these simpler gradients can detect systematic keypoints with comparable *rep* on very structured images such as HPatches. However, the ELF detector better overcomes light changes (Webcam). On HPatches, the Laplacian variant reaches an *ms* similar to ELF-VGG (55 *vs.* 56) and outperforms ELF-AlexNet and ELF-Xception. These scores can be explained by the image structure: on heavily textured images, high-intensity-gradient locations are relevant enough keypoints. On Webcam, however, all ELF detectors outperform the Laplacian and Sobel baselines by a factor of two, showing that ELF is more robust. Also, the feature gradient is a sparse signal that is better suited for local maxima detection than the much smoother Laplacian (Figure \[fig:sobel\_visu\]).
![Feature gradient (right) provides a sparser signal than Laplacian (middle) which is more selective of salient areas.[]{data-label="fig:sobel_visu"}](fig5_sobel_similar_ter.png){height="3cm"}
**Qualitative results.** Green lines show putative matches based only on nearest-neighbour matching of descriptors. More qualitative results are available in the video [^3].
![Green lines show putative matches of the simple descriptor before RANSAC-based homography estimation.[]{data-label="fig:matching_pic"}](fig6_matching_ter.png){width="\linewidth"}
**CVPR19 Image Matching Challenge [@cvpr19challenge]** This challenge evaluates detection/description methods on two standard tasks: 1) wide stereo matching and 2) structure from motion from small image sets. The *matching score* evaluates the first task, and the camera pose estimation is used for both tasks. Both applications are evaluated on the photo-tourism image collections of popular landmarks [@thomee59yfcc100m; @heinly2015reconstructing]. More details on the metrics definition are available on the challenge website [@cvpr19challenge].
*Wide stereo matching.* Task 1 matches image pairs across wide baselines. It is evaluated with the keypoint *ms* and the relative camera pose estimation between two images. The evaluators run COLMAP to reconstruct dense 'ground-truth' depth, which they use to translate keypoints from one image to the other and compute the matching score. They use the RANSAC inliers to estimate the camera pose and measure performance with the 'angular difference between the estimated and ground-truth vectors for both rotation and translation. To reduce this to one value, they use a variable threshold to determine each pose as correct or not, then compute the area under the curve up to the angular threshold. This value is thus the mean average precision up to x, or mAPx. They consider 5, 10, 15, 20, and 25 degrees' [@cvpr19challenge]. Submissions can contain up to 8000 keypoints; we submitted entries to the sparse category, i.e. methods with up to 512 keypoints.
![*Wide stereo matching.* Left: matching score (%) of sparse methods (up to 512 keypoints) on photo-tourism. Right: Evolution of mAP of camera pose for increasing tolerance threshold (degrees).[]{data-label="fig:cvpr19_task1"}](fig14.png){width="\linewidth"}
Figure \[fig:cvpr19\_task1\] (left) shows the *ms* (%) of the submitted sparse methods. It compares ELF-VGG detection with DELF [@noh2017largescale] and SuperPoint, where ELF is completed either with the simple descriptor from $pool_3$ (dubbed ELF-256) or from $pool_4$ (ELF-512), or with SIFT (ELF-SIFT). This allows a simple comparison of descriptor performance between the simple descriptor and standard SIFT.
As previously observed on HPatches and Webcam, ELF and SuperPoint reach similar scores on Photo-Tourism. ELF's performance increases slightly from 25% to 26.4% when switching descriptors from VGG-$pool_3$ to VGG-$pool_4$. One explanation is that the descriptor dimension doubles from the first to the second, which would allow the $pool_4$ descriptors to be more discriminative. However, the 1.4% gain may not be worth the additional memory use. Overall, the results show that ELF compares with the state-of-the-art on this additional dataset, which exhibits more illumination and viewpoint changes than HPatches and Webcam.
This observation is reinforced by the camera pose evaluation (Figure \[fig:cvpr19\_task1\] right). SuperPoint shows a slight advantage over the others that increases from 1% to 5% across the error tolerance thresholds, whereas ELF-256 exhibits a minor under-performance. Still, these results show that ELF compares with state-of-the-art performance even though it is not explicitly trained for detection/description.
![*SfM from small subsets*. Evolution of mAP of camera pose for increasing tolerance threshold.[]{data-label="fig:cvpr19_task2"}](fig15.png){width="0.7\linewidth"}
*Structure-from-Motion from small subsets.* Task 2 'proposes to build SfM reconstructions from small (3, 5, 10, 25) subsets of images and use the poses obtained from the entire (much larger) set as ground truth' [@cvpr19challenge].
Figure \[fig:cvpr19\_task2\] shows that SuperPoint reaches performance twice as high as the next best method, ELF-SIFT. This suggests that when few images are available, SuperPoint performs better than the other approaches. One explanation is that even in 'sparse mode', *i.e.* when the number of keypoints is restricted to 512, SuperPoint samples points more densely than the others ($\sim$383 *vs.* $\sim$210). SuperPoint thus provides more keypoints to triangulate, i.e. more 2D-3D correspondences for camera pose estimation. This suggests that high keypoint density is a crucial property of a detection method for Structure-from-Motion; in this regard, ELF still has room for improvement.
Conclusion
==========
We have introduced ELF, a novel method that extracts feature locations from pre-trained CNNs with no further training. Extensive experiments show that it performs as well as state-of-the-art detectors. It can easily be integrated into existing matching pipelines and proves to boost their matching performance. Even when completed with a simple feature-map-based descriptor, it turns into a competitive feature matching pipeline. These results shed new light on the information embedded inside trained CNNs. This work also raises questions on the descriptor training of deep learning approaches: whether their losses actually constrain the CNN to learn better features than the ones it would learn on its own for a vision task. Preliminary results show that the CNN architecture, the training task and the dataset all have a substantial impact on detector performance. A further analysis of these correlations is the object of future work.
{width="\linewidth"}
Metrics definition
==================
We detail the repeatability and matching score definitions introduced in [@mikolajczyk2005comparison] and our adaptations, using the following notations: let $(\mathbf{I}^1, \mathbf{I}^2)$ be a pair of images and $\mathcal{KP}^i = (kp_j^i)_{j<N_i}$ the set of $N_i$ keypoints in image $\mathbf{I}^i$. Both metrics lie in the range $[0,1]$ but we express them as percentages for better readability.
#### Repeatability
Repeatability measures the percentage of keypoints common to both images. We first warp $\mathcal{KP}^1$ to $\mathbf{I}^2$ and note $\mathcal{KP}^{1,w}$ the result. A naive definition of repeatability is to count the number of pairs $(kp^{1,w}, kp^2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2$ such that $\|kp^{1,w}-kp^2\|_2 < \epsilon_{kp}$, with $\epsilon_{kp}$ a distance threshold. As pointed out by [@verdie2015tilde], this definition overestimates the detection performance for two reasons: a keypoint close to several projections can be counted several times, and with a large enough number of keypoints, even simple random sampling achieves high repeatability as the keypoint density becomes high.
We instead use the definition implemented in VLBench [@lenc12vlbenchmarks]: we define a weighted graph $(V,E)$ where the edges are all the possible keypoint pairs between $\mathcal{KP}^{1,w}$ and $\mathcal{KP}^2$ and the weights are the euclidean distance between keypoints. $$\label{eq: graph_dfn}
\begin{split}
V &= (kp^{1,w} \in \mathcal{KP}^{1,w}) \cup (kp^2 \in \mathcal{KP}^2) \\
E &= (kp^{1,w}, kp^2, \|kp^{1,w} - kp^2\|_2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2
\end{split}$$
We run a greedy bipartite matching on the graph and count the matches with a distance less than $\epsilon_{kp}$. Let $\mathcal{M}$ be the resulting set of matches:
$$\label{rep_dfn}
repeatability = \frac{|\mathcal{M}|}{\min(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$
We set the distance threshold $\epsilon_{kp}=5$ as is done in LIFT [@yi2016lift] and LF-Net [@ono2018lf].
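This greedy matching and the repeatability computation can be sketched as follows (a minimal NumPy version, not the VLBench implementation):

```python
import numpy as np

def greedy_match(a, b):
    """Greedy one-to-one matching of rows of a and b by increasing
    euclidean distance; returns (index_a, index_b, distance) triples."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    pairs = np.dstack(np.unravel_index(np.argsort(d, axis=None), d.shape))[0]
    used_a, used_b, matches = set(), set(), []
    for i, j in pairs:
        if i not in used_a and j not in used_b:
            used_a.add(i); used_b.add(j)
            matches.append((int(i), int(j), d[i, j]))
    return matches

def repeatability(kp1_warped, kp2, eps_kp=5.0):
    """|matches closer than eps_kp| / min(N1, N2)."""
    m = [p for p in greedy_match(kp1_warped, kp2) if p[2] < eps_kp]
    return len(m) / min(len(kp1_warped), len(kp2))
```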
#### Matching score
The matching score definition introduced in [@mikolajczyk2005comparison] captures the percentage of keypoint pairs that are nearest neighbours both in image space and in descriptor space, and for which these two distances are below their respective thresholds $\epsilon_{kp}$ and $\epsilon_{d}$. Let $\mathcal{M}$ be defined as in the previous paragraph and $\mathcal{M}_d$ be the analog of $\mathcal{M}$ when the graph weights are descriptor distances instead of keypoint euclidean distances. We delete all pairs with a distance above $\epsilon_{kp}$ in $\mathcal{M}$ and above $\epsilon_d$ in $\mathcal{M}_d$, then count the pairs that are nearest neighbours in both image space and descriptor space, i.e. the intersection of $\mathcal{M}$ and $\mathcal{M}_d$:
$$\label{MS}
matching \; score = \frac{|\mathcal{M} \cap \mathcal{M}_d|}{\min(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$
One drawback of this definition is that there is no unique descriptor distance threshold $\epsilon_d$ valid for all methods. For example, the SIFT descriptor as computed by OpenCV is a $[0,255]^{128}$ vector for better computational precision, the SuperPoint descriptor is a $[0,1]^{256}$ vector and the ORB descriptor is a 32-byte binary vector. Not only are the vectors not defined over the same normed space, but their ranges vary significantly. To avoid introducing human bias by setting a descriptor distance threshold $\epsilon_d$ for each method, we choose to set $\epsilon_d = \infty$ and compute the matching score as in [@mikolajczyk2005comparison]. This means that we consider any descriptor match valid as long as it links corresponding keypoints, even when the descriptor distance is high.
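Once $\mathcal{M}$ and $\mathcal{M}_d$ are available as sets of index pairs, the matching score reduces to a set intersection. A hypothetical helper (the names are ours, for illustration only):

```python
def matching_score(matches_kp, matches_desc, n_kp1, n_kp2):
    # matches_kp: set of (i, j) pairs matched in image space (distance < eps_kp)
    # matches_desc: set of (i, j) pairs matched in descriptor space
    # A pair counts only if it survives in both match sets.
    return len(matches_kp & matches_desc) / min(n_kp1, n_kp2)
```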
Tabular results
===============
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
[@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde]
ELF-VGG 63.81 53.23 51.84 43.73
ELF-AlexNet 51.30 38.54 35.21 31.92
ELF-Xception 48.06 49.84 29.81 35.48
ELF-SuperPoint 59.7 46.29 44.32 18.11
ELF-LFNet 60.1 41.90 44.56 33.43
LF-Net 61.16 48.27 34.19 18.10
SuperPoint 46.35 32.44
LIFT 54.66 42.21 34.02 17.83
SURF 54.51 33.93 26.10 10.13
SIFT 51.19 28.25 24.58 8.30
ORB 53.44 31.56 14.76 1.28
KAZE 56.88 41.04 29.81 13.88
TILDE 52.53 46.71 34.67
MSER 47.82 52.23 21.08 6.14
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
: Generic performances on HPatches [@balntas2017hpatches]. Robustness to light (Webcam [@verdie2015tilde]). (Fig. 5).[]{data-label="tab:whole_pipeline"}
-- ----------- ----------- ----------- ----------- ----------- -----------
34.19 **57.11** 34.02 24.58 26.10 14.76
**44.19** 53.71 **39.48** **27.03** **34.97** **20.04**
18.10 32.44 17.83 10.13 8.30 1.28
**30.71** **34.60** **26.84** **13.21** **21.43** **13.91**
-- ----------- ----------- ----------- ----------- ----------- -----------
: Individual component performance (Fig. \[fig:ind\_component\]-stripes). Matching score for the integration of the VGG $pool_3$ simple-descriptor with others' detection. Top: Original description. Bottom: Integration of simple-descriptor. HPatches: [@balntas2017hpatches]. Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_des"}
-- ----------- ----------- ----------- ----------- ----------- -----------
34.19 **57.11** 34.02 24.58 26.10 14.76
**39.16** 54.44 **42.48** **50.63** **30.91** **36.96**
18.10 32.44 17.83 10.13 8.30 1.28
**26.70** **39.55** **30.82** **36.83** **19.14** **6.60**
-- ----------- ----------- ----------- ----------- ----------- -----------
: Individual component performance (Fig. \[fig:ind\_component\]-circle). Matching score for the integration of ELF-VGG (on $pool_2$) with others' descriptor. Top: Original detection. Bottom: Integration of ELF. HPatches: [@balntas2017hpatches]. Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_det"}
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
[@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde]
Sobel-VGG 56.99 33.74 42.11 20.99
Lapl.-VGG **65.45** 33.74 **55.25** 22.79
VGG 63.81 **53.23** 51.84 **43.73**
Sobel-AlexNet 56.44 33.74 30.57 15.42
Lapl.-AlexNet **65.93** 33.74 **40.92** 15.42
AlexNet 51.30 **38.54** 35.21 **31.92**
Sobel-Xception 56.44 33.74 34.14 16.86
Lapl.-Xception **65.93** 33.74 **42.52** 16.86
Xception 48.06 **49.84** 29.81 **35.48**
---------------- ------------------------ -------------------- ------------------------ -------------------- -- --
: Gradient baseline on HPatches [@balntas2017hpatches] and Webcam [@verdie2015tilde] (Fig. \[fig:gradient\_perf\] ).[]{data-label="tab:cmp_sobel"}
ELF Meta Parameters
===================
This section specifies the meta-parameter values for the ELF variants. For all methods, $(w_{NMS}, b_{NMS})=(10,10)$.
- Denoise: $(\mu_{noise}, \sigma_{noise})$.
- Threshold: $(\mu_{thr}, \sigma_{thr})$.
- $F^l$: the feature map whose gradient is used for detection.
- simple-des: the feature map used for simple-description. Unless mentioned otherwise, the feature map is taken from the same network as the detection feature map $F^l$.
Nets Denoise Threshold $F^l$ simple-desc
------------ --------- ----------- -------------- ------------- --
VGG (5,5) (5,4) pool2 pool4
Alexnet (5,5) (5,4) pool1 pool2
Xception (9,3) (5,4) block2-conv1 block4-pool
SuperPoint (7,2) (17,6) conv1a VGG-pool3
LF-Net (5,5) (5,4) block2-BN VGG-pool3
: Generic performances on HPatches (Fig. \[fig:hpatch\_gle\_perf\]). (BN: Batch Norm)[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------------ --------- ----------- -------------- ------------- --
VGG (5,5) (5,4) pool2 pool4
Alexnet (5,5) (5,4) pool1 pool2
Xception (9,9) (5,4) block2-conv1 block4-pool
SuperPoint (7,2) (17,6) conv1a VGG-pool3
LF-Net (5,5) (5,4) block2-conv VGG-pool3
: Robustness to light on Webcam (Fig. \[fig:hpatch\_gle\_perf\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,2) (17,6) pool2 pool4
: Robustness to scale on HPatches (Fig. \[fig:robust\_scale\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,2) (17,6) pool2 pool4
: Robustness to rotation on HPatches (Fig. \[fig:robust\_rotation\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,2) (17,6) pool2 pool4
: Robustness to 3D viewpoint on Strecha (Fig. \[fig:robust\_strecha\]).[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
------ --------- ----------- ------- ------------- --
VGG (5,5) (5,5) pool2 pool3
: Individual component analysis (Fig. \[fig:ind\_component\])[]{data-label="tab:meta_params"}
Nets Denoise Threshold $F^l$ simple-desc
----------- --------- ----------- ------- ------------- --
VGG (5,5) (5,4) pool2 pool4
Sobel (9,9) (5,4) - pool4
Laplacian (9,9) (5,4) - pool4
: Gradient baseline on HPatches and Webcam (Fig. \[fig:gradient\_perf\]).[]{data-label="tab:meta_params"}
[^1]: ELF code:<https://github.com/ELF-det/elf>
[^2]: Rotation and scale dataset: <https://bit.ly/31RAh1S>
[^3]: <https://youtu.be/oxbG5162yDs>
| {
"pile_set_name": "ArXiv"
} |
[The Etest--an alternative to the NCCLS standard for susceptibility testing of yeasts?].
The Etest and the NCCLS method do not differ much in their reproducibility. Only isolated observations exist on the clinical correlation of the two tests. The correlation between the two tests is known to depend on the yeast species and antifungals used. Due to its easy and simple handling, the Etest is attractive for routine laboratories. However, the Etest has to be evaluated further before it can be generally recommended. The NCCLS method has also not been validated so far. Different test methods should be compared thoroughly with the above-mentioned standard.
"pile_set_name": "PubMed Abstracts"
} |
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build amd64,linux
// +build !gccgo
package unix
import "syscall"
//go:noescape
func gettimeofday(tv *Timeval) (err syscall.Errno)
| {
"pile_set_name": "Github"
} |
Federation Space
Federation Space is a Star Trek-based space naval combat board game published by Task Force Games in 1981.
Design and gameplay
Federation Space was designed by Stephen Wilcox as a strategic companion to the more tactical Star Fleet Battles system.
The game components are:
20" x 24" hex grid map
432 back-printed die-cut cardboard counters
fleet organization charts
combat results table
16-page rulebook
The map covers the large extent of space and many of the space-faring races described in the Star Trek TV series. This included the Federation, Klingons, Romulans, Gorns, Tholians and Hydrans. The Kzinti, a race mentioned in the Ringworld novels of Larry Niven, is also featured.
This is a game of naval fleet action involving dozens (or more) ships. Nearly all ships move at the same speed (Warp Factor 6). With so many ships involved, combat relies on a simple two-step reduction system to determine damage. Each ship only has two statuses: normal (counter face up); and damaged (counter face-down). Taking any damage results in a ship receiving the "Damaged" status. Taking any subsequent damage destroys the ship. Battles continue until one side is either destroyed or withdraws.
Since this is a strategic-level game, some scenarios describe the capture of base stations, starbases or even planets.
There is also a campaign game for three or more players (each playing a different race) in which the players use diplomatic alliances and multiple navies to achieve their strategic goals.
Reception
In the March 1982 edition of The Space Gamer (Issue No. 49), William A. Barton recommended the game, saying, "Federation Space succeeds in its purpose to present a relatively simple, playable Star Trek game which can serve as a strategic module for Star Fleet Battles. Recommended to Trek gamers everywhere."
In the August 1983 edition of Dragon (Issue 76), Tony Watson liked a number of things, including its simplicity of rules and combat, the fleet organization charts, the simple step-reduction damage system, and the colourful components. Watson criticized the size of the map, which although large compared to other combat games, was too small and restrictive for entire fleets. Watson also thought the game did not reward clever fleet maneuvers, relying instead on massed fleets simply engaging head on. He concluded, "Federation Space has much to recommend itself. Both those who play Starfleet Battles and those interested in a fast-moving, action-oriented strategic space game should find this title to their liking."
References
Category:Board games based on Star Trek
Category:Board games introduced in 1981
Category:Star Fleet Battles | {
"pile_set_name": "Wikipedia (en)"
} |
Putative risk factors for postoperative pneumonia which affects poor prognosis in patients with gastric cancer.
Several recent studies identified that postoperative infectious complications contribute to recurrence and poor outcome in patients with gastric cancer. This study was designed to investigate the prognostic impact of postoperative pneumonia, and to identify the putative risk factors for its occurrence. We retrospectively analyzed 1,415 consecutive patients who underwent curative gastrectomy for gastric cancer between 1997 and 2013. A total of 31 (2.2 %) patients developed postoperative pneumonia (Clavien-Dindo classification ≥II). Patients with postoperative pneumonia showed a significantly poorer prognosis than patients without (P < 0.001). Concerning the occurrence of postoperative pneumonia, univariate and multivariate analyses identified older age (≥65 years; P = 0.010; odds ratio [OR] 3.59), lower nutritional status (albumin <3.0; P = 0.029; OR 4.51), advanced stage (pStage ≥II; P = 0.045; OR 2.35), concurrent hypertension (P = 0.042; OR 2.21) and total gastrectomy (P = 0.026; OR 2.42) as independent risk factors. Postoperative pneumonia was shown to be associated with poor long-term outcome in patients with gastric cancer. Care should be taken for patients with clinical factors such as older age, lower nutritional status, advanced stage, concurrent hypertension, and total gastrectomy.
"pile_set_name": "PubMed Abstracts"
} |
Q:
jsf dynamic component that restores state
I am trying to display HtmlInputText dynamically in a JSF page. However, I am getting
javax.faces.FacesException: Cannot add the same component twice: j_idt10:hitDyn
During the first request to the page the input text renders well. That exception happens during postback of the page, when I enter some text in the input component and press Enter.
In the .xhtml page, I have the following code:
<h:form>
<h:outputLabel value="Welcome!"></h:outputLabel>
<f:metadata>
<f:event type="preRenderView" listener="#{dynamicBacking.addDynComp}" />
</f:metadata>
<h:panelGroup id="dynOuter"></h:panelGroup>
</h:form>
In the backing bean, I have the following code:
@ManagedBean(name="dynamicBacking")
public class DynamicBacking {
public void addDynComp() {
Application app = FacesContext.getCurrentInstance().getApplication();
HtmlInputText hit = (HtmlInputText)app.createComponent(HtmlInputText.COMPONENT_TYPE);
hit.setId("hitDyn");
UIComponent parent = findComponent("dynOuter");
if( parent != null ) {
parent.getChildren().add(hit);
}
}
public UIComponent findComponent(final String id) {
FacesContext context = FacesContext.getCurrentInstance();
UIViewRoot root = context.getViewRoot();
final UIComponent[] found = new UIComponent[1];
root.visitTree(new FullVisitContext(context), new VisitCallback() {
@Override
public VisitResult visit(VisitContext context, UIComponent component) {
if(component.getId().equals(id)){
found[0] = component;
return VisitResult.COMPLETE;
}
return VisitResult.ACCEPT;
}
});
return found[0];
}
}
I guess that there is some problem with restoring the state of the dynamic component in a postback. Am I adding the dynamic component too late in the lifecycle of the JSF page? I know that in ASP.NET I could add a dynamic control during Page.Load phase. But I can't so far figure out how to achieve the same in JSF. Please, help!
A:
The exception appears because the component is added in the tree on the initial page load. When performing a postback your listener gets called again and it tries to add another component with the same id and this causes the exception. A solution of the issue is to check if the request is NOT a postback when adding the component. The following code shows how to check for postback:
if (FacesContext.getCurrentInstance().isPostback()) {
    return; // skip re-adding the component on postbacks
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Pandas Data Frame not Appending
I am trying to append dataframes via for loop.
CODE
def redshift_to_pandas(sql_query,**kwargs):
# pass a sql query and return a pandas dataframe
cur.execute(sql_query)
columns_list = [desc[0] for desc in cur.description]
data = pd.DataFrame(cur.fetchall(),columns=columns_list)
return data
Input -
all_schema = [('backup')]
Loop -
try:
if len(all_schema) == 0:
raise inputError("The Input has no schema selected. EXITING")
else:
modified_schemadf=pd.DataFrame(columns=['columns_name','status'])
for i in range(len(all_schema)):
#print (redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
print (modified_schemadf)
except inputError as e:
print(e.message)
logger.error("UNEXPECTED INPUT FOUND, Please check the I/P List . EXITING")
print (modified_schemadf)
I feel the issue is very obvious but I don't seem to find it.
Here is the o/p -
So the the first print ( commented out ), does return me the correct result.
the next steps, i.e. appending the result to the declared dataframe (name - modified_schemadf), are the problem area. When I print its value, it still shows an empty dataframe. For some reason the appending isn't happening.
When the code enters else, i.e. when the input is legit, an empty dataframe called modified_schemadf is created. To this empty dataframe, there will be as many appends as there are inputs.
Thanks in Advance.
Please don't mind the indentation; copying might have affected it.
A:
Isn't the issue just that you don't assign the appended dataframe? Try changing this line
modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
to this line
modified_schemadf = modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
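For reference, here is a minimal standalone demonstration of the underlying behavior. Note that `DataFrame.append` has since been removed in pandas 2.0, so this sketch uses `pd.concat`, which behaves the same way: it returns a new DataFrame and leaves the original untouched unless you reassign the result.

```python
import pandas as pd

df = pd.DataFrame(columns=['columns_name', 'status'])
row = pd.DataFrame([{'columns_name': 'backup', 'status': True}])

# concat (like the old append) returns a NEW frame; the original
# stays empty unless the result is assigned back.
result = pd.concat([df, row], ignore_index=True)

print(len(df))      # 0 -- original unchanged
print(len(result))  # 1 -- reassigned frame holds the row
```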
| {
"pile_set_name": "StackExchange"
} |
Interview with a quality leader: Dr. John Combes on boards and governance.
Dr. Combes is senior vice president at the American Hospital Association (AHA) and president and COO of the Center for Healthcare Governance. The Interview with Dr. John Combes on Boards and Governance provides a perspective on key changes, issues, competencies, and metrics that hospital boards must address. The role of quality professionals to be effective with boards is also described. | {
"pile_set_name": "PubMed Abstracts"
} |
{% load static %}
<script src="{% static 'fiber/js/markitup-1.1.10/jquery.markitup.js' %}" type="text/javascript"></script>
<script src="{% static 'fiber/js/markitup-1.1.10/sets/textile_fiber/set.js' %}" type="text/javascript"></script>
<script src="{% static 'fiber/js/fiber.markitup.js' %}" type="text/javascript"></script>
| {
"pile_set_name": "Github"
} |
2010 Vuelta a España – Stage 18 – Mean Green
Mark Cavendish has his third stage win of the Vuelta a España, holding off a surging Haedo who nearly came over the top of Cav on the slightly uphill finish. The nervous finish caused a dramatic set-up to the final sprint, with Farrar getting lost, Quickstep dropping their sprinter and Matthew Goss going deep to launch Cavendish. No change in the GC.
"pile_set_name": "Pile-CC"
} |
[Assessment of the new TNM classification for resected lung cancer].
To evaluate the revised TNM classification, we investigated the prognoses of 552 consecutive patients who had resection of non-small-cell lung cancer between April 1982 and March 1996. According to the new classification, the 5-year survival rate was 76.9% for stage IA, 57.2% for stage IB (IA versus IB, p < 0.0005), 47.7% for stage IIA, 49.8% for stage IIB, 18.6% for stage IIIA (IIB versus IIIA, p = 0.005), 16.7% for stage IIIB, and 7.9% for stage IV (IIIB versus IV, p = 0.02). Especially for patients in stage IA, there was a significant difference in survival between patients with tumors up to 1.5 cm and those with larger tumors. The survival rate for T3N0M0 patients was significantly better than that for T3N1-2M0, but there was no significant difference between patients with T3N0M0 disease and those with T2N1M0 disease. Concerning the pm1 patients, the survival rate was significantly better than for other stage IIIB patients. Our results supported the revision for dividing stage I and putting T3N0M0 into stage IIB. However, the classification is controversial in dividing stage II and classifying pm1 as T4 disease. Furthermore, subgrouping of T1N0M0 disease by tumor size and of T3 by the invaded organ will be necessary in the next revisions.
"pile_set_name": "PubMed Abstracts"
} |
Q:
pandas - binning data and getting 2 columns
I have a very simple dataframe. There are 2 columns, day_created (int, could change to datetime) and suspended (int, could change to boolean). I can change the data if it makes it easier to work with.
Day created Suspended
0 12 0
1 6 1
2 24 0
3 8 0
4 100 1
5 30 0
6 1 1
7 6 0
The day_created column is the integer of the day the account was created (from a start date), starting at 1 and increasing. The suspended column is a 1 for suspension and a 0 for no suspension.
What I would like to do is bin these accounts into groups of 30 days or months, but from each bin get a total number of accounts for that month and the number of accounts suspended that were created in that month. I then plan on creating a bar graph with 2 bars for each month.
How should I go about this? I don't use pandas often. I assume I need to do some tricks with resample and count.
A:
Use
df.index = start_date + pd.to_timedelta(df['Day created'], unit='D')
to give the DataFrame an index of Timestamps representing when the accounts were created.
Then you can use
result = df.groupby(pd.TimeGrouper(freq='M')).agg(['count', 'sum'])
to group the rows of the DataFrame (by months) according to the Timestamps in the index.
.agg(['count', 'sum']) computes the number of accounts (the count) and the number of suspended accounts for each group.
Then result.plot(kind='bar', ax=ax) plots the bar graph:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame(
{'Day created': [12, 6, 24, 8, 100, 30, 1, 6],
'Suspended': [0, 1, 0, 0, 1, 0, 1, 0]})
start_date = pd.Timestamp('2016-01-01')
df.index = start_date + pd.to_timedelta(df['Day created'], unit='D')
result = df.groupby(pd.TimeGrouper(freq='M'))['Suspended'].agg(['count', 'sum'])
result = result.rename(columns={'sum':'suspended'})
fig, ax = plt.subplots()
result.plot(kind='bar', ax=ax)
locs, labels = plt.xticks()
plt.xticks(locs, result.index.strftime('%Y-%m-%d'))
fig.autofmt_xdate()
plt.show()
yields
| {
"pile_set_name": "StackExchange"
} |
Q:
Display difference between volatile and usual variable in Java
I am trying to create an example to display the difference between volatile and usual variables like:
package main;
public class TestVolatile extends Thread {
public int l = 5;
public volatile int m = -1;
public TestVolatile(String str) {
super(str);
}
public void run() {
int i = 0;
while ((l > 1) && (l < 10)) {
if (m >= 0) {
m++;
}
i++;
l = 5;
System.out.println("5=" + i + " m=" + m);
}
}
public static void main(String[] args) throws InterruptedException {
TestVolatile tva = new TestVolatile("ThreadA");
tva.start();
sleep(5);
synchronized (tva) {
tva.m = 5;
tva.l = 10;
}
}
}
So m is volatile, l is not. I suppose that exiting from the while loop depends on the value of l.
Because the value of l is not volatile - m will be incremented at least 1 time after l has been assigned 5. But I have run the code 10 times and always m==5.
So I suppose that I am wrong. How to fix this problem? Thank you.
Thanks for answers, but not all run well.
I set like:
volatile int x = 0;
volatile int y = 0;
So now the variables have to be the same! But that is not the case.
x: 346946234 y: 346946250
x: 346946418 y: 346946422
x: 346946579 y: 346946582
x: 346946742 y: 346946745
x: 346946911 y: 346946912
A:
You are synchronizing the main thread and your test thread. Therefore Java guarantees that any changes performed by the other thread are made visible.
Btw, it is impossible to construct an example which deterministically shows a difference between volatile and non-volatile. The best you can hope for is a program which shows the difference with a quite high probability. If the threads run interleaved on the same core, you won't be able to show any difference at all.
The following program shows on my computer the difference between volatile and non-volatile variables.
public class ShowVolatile {
final static int NUM_THREADS = 1;
int x = 0;
volatile int y = 0;
public static void main(String... args) {
final ShowVolatile sv = new ShowVolatile();
for (int i=0; i< NUM_THREADS; i++) {
new Thread(new Runnable() {
public void run() {
while (true) {
sv.x += 1;
sv.y += 1;
}
}
}).start();
}
while (true) {
System.out.println("x: " + sv.x + " y: " + sv.y);
}
}
}
If you increase the number of threads you will see additional synchronization misses. But a thread count of 1 is enough. At least on my hardware a Quad-Core i7.
| {
"pile_set_name": "StackExchange"
} |
Q:
systemd service script for libreoffice/openoffice
I'm trying to setup correctly a headless libreoffice/openoffice server on a debian jessie. I created a script named /etc/systemd/system/openoffice.service with the following content
[Unit]
Description=OpenOffice service
After=syslog.target
[Service]
ExecStart=/usr/bin/soffice '--accept=socket,host=localhost,port=8101;urp;StarOffice.ServiceManager' --headless --nofirststartwizard --nologo
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
User=www-data
[Install]
WantedBy=multi-user.target
And I enabled it via:
systemctl enable openoffice.service
I'm in a situation that is only partially working:
it correctly starts on boot
if queried status systemctl status openoffice.service it clams it is still activating
If I try to start it it just hangs
I haven't been able to find a working example, I'd also like to understand how to create the debian /etc/init.d script that uses systems...
A:
You set Type=notify in your service. This is meant to be used only for specific services which are designed to notify systemd when they have finished starting up. At the moment, these are rather uncommon, and I don't think LibreOffice is among them.
You should most likely be using Type=simple instead.
| {
"pile_set_name": "StackExchange"
} |
The European Parliament Committee on Economic and Monetary Affairs (ECON) published a report this July that concluded that centralised digital currencies could be the answer to the lack of any real competition in the crypto arena. According to the report, central bank-issued digital currencies (CBDC) also have the potential to be used as an alternative to fiat currencies. In a previous article on Crypthor, we discussed the rise of cryptocurrencies and their lack of regulation. As such, the aim of the EU Central Bank (ECB) is to have a regulated, centralised currency that it can use for trade.
The main theme of the report was that ECB issued digital currencies have the potential to create a more stable financial system. Factors based on the law of supply and demand as well as the dependence on goods and foreign currencies, create the volatility of current cryptocurrencies. Therefore, not being backed by any monetary authority, they are not a viable alternative to any fiat currency. The centralised nature of ECB issued digital currencies, however, backed by a trusted central party, has the potential to complement or substitute the current banking infrastructure and pave the way to a more stable financial system.
In the European financial context, the report also highlighted how the financial technology (Fintech) behind cryptocurrencies can be a disruptive and innovative service provider. It described how Fintech provides lower cross-border barriers and facilitates financial services between EU states by providing financing and investment alternatives to European businesses and private individuals. Moreover, cryptocurrencies and Fintech can contribute to EU objectives like the Digital Single Market and the Capital Markets Union. These innovations have the potential to bring benefits like cost reduction, better efficiency, improved transparency and ultimately contribute to the goal of financial inclusion in the European market.
The Eurozone market, therefore, is a uniquely crucial region for crypto markets, as events that affect one country can greatly affect the rest in terms of trade. As such, digital currency traders and investors will do well to keep an eye on world events that can impact their trade and affect the already volatile crypto markets. The Economic Calendar on FXCM shows trends and volatility within the EU, with the current consumer price index showing a bullish reading for the Euro. It also notes European Central Bank (ECB) policies and how they have the potential to influence currencies in the Eurozone. This gives crypto traders and investors a tool for analysis of market.
The ECON report also mentions that CBDCs will have an impact in the Eurozone by reshaping the current competition level in the cryptocurrency market. It defined competition as inter-cryptocurrency, between digital coins and intra-cryptocurrency, and between service providers like wallets, exchanges and payment providers. This competition has the potential to create asymmetric effects, including barriers, formation of cartels and collusive agreements. It can also lead to situations where service providers could create incongruent policies that could keep other competitors out of the market, leading to an unfair system. The ECB believes that CBDCs will be able to resolve these issues.
Now, with new EU directives for the Eurozone that came in mid-July, which set stricter rules for digital currencies in order to protect against terrorist financing and money laundering, it seems an opportune time for the ECB to forge forward with its CBDC. Coin Telegraph remarks that the closed cryptocurrency systems require a supervisory authority, and are based on a bilateral settlement with trusted party, unlike decentralized digital currencies which are not regulated. By adopting a digital currency, the ECB, according to the report, embraced the disruptive change that has the potential to avoid the recurrent instability of the current banking system. | {
"pile_set_name": "OpenWebText2"
} |
The backwoods-gothic terrain may be familiar, but the jolts are doled out with an expert hand in "Blue Ruin," a lean and suspenseful genre piece that follows a bloody trail of vengeance to its cruel, absurd and logical conclusion. Writer-director Jeremy Saulnier shows impressive progress from his funny-scary 2007 debut, "Murder Party," with this tense, stripped-down tale of a Virginia drifter who finds himself in way over his head when he tries to exact payback for his parents' deaths.
Saulnier cleverly establishes a man-on-the-run theme in his opening shot, before the action proper has even started. Thereafter the camera practically stays glued to Dwight Evans (Macon Blair), a quiet vagrant who gets by sifting through Dumpsters and sleeping in his beat-up blue Pontiac. Yet his seemingly pointless existence is marked by curious flashes of daring and resourcefulness, if not exactly great intelligence, and he suddenly snaps into action and returns to his rural Virginia hometown upon learning that one Will Cleland has been released from prison.
Making it clear there's a score to settle without immediately disclosing the gruesome details, the script lures the viewer into an unnerving sense of complicity as Dwight follows Will and his folks to a bar and, armed with a small knife, initiates the first of several brutal setpieces. The filmmaking is clean and efficient but the killing isn't, and in the course of his clumsy, foolhardy getaway, Dwight ends up putting Will's entire gun-toting redneck family on his tail. In a twist that streamlines the narrative considerably, the Clelands opt not to inform the police of the attack, choosing instead to keep things "in-house."
While Dwight's not-so-bright actions generate some darkly humorous beats (none grislier than when he tries, and fails, to clean a nasty arrow wound), Saulnier resists turning his protagonist into an object of outright ridicule, never compromising the audience's intense identification with this reluctant renegade. In name and appearance, Dwight is the sort of pudgy, clean-shaven Everyman more suited to an office cubicle than a shootout, and even as the arguable aggressor in this scenario, he seems to act more out of fear and protectiveness than out of a real desire for retribution.
Blair's engaging, soulful-eyed performance succeeds by locating the sweet spot between idiot and amateur, predator and prey. Repeatedly, Dwight plans ahead, takes calculated risks and still messes up, and much of the film's tension derives from his very fallibility, as well as his increasing awareness that none of this can possibly end well. If the climax goes inevitably over-the-top, it's nonetheless the sort of gruesome finish the story's steady, merciless buildup demands.
Carefully exploiting the audience's fear of what it can't (or can only partially) see, Saulnier's shallow-focus widescreen compositions amp up the suspense at key intervals, as do Julia Bloch's crisp editing, Matt Snedecor and Dan Flosdorf's meticulously layered sound design, and Brooke and Will Blair's ominous synth score. While Dwight is often the camera's sole focus, warm character notes are provided by Amy Hargreaves as Dwight's sister, who is at once grateful for and angered by his reckless actions, and Devin Ratray as an old high-school friend whom Dwight enlists to help, in one of his smarter decisions.
Litia Cakobau
Adi Litia Qalirea Cakobau (c. 1941 – 8 October 2019) was a Fijian high chief of Bau and political leader. Cakobau, the daughter of Ratu Sir George Cakobau, who was Fiji's Governor-General from 1973 to 1983, was appointed to the Senate in 2001 as one of nine nominees of the Fijian government. She held this post until 2006, when her elder sister, Adi Samanunu Cakobau-Talakuli, was appointed to the Senate.
Prior to her appointment to the Senate, she held Cabinet office as Minister for Women, a post to which she was appointed in 1987.
Her brother, Ratu George Cakobau, was also a Senator from 2001 to 2006, but was nominated by the Great Council of Chiefs rather than by the government, as she was. She died at her home in Lautoka in October 2019 at the age of 78.
References
Category:1940s births
Category:2019 deaths
Category:Fijian chiefs
Category:I-Taukei Fijian members of the Senate (Fiji)
Category:Tui Kaba
Category:Soqosoqo Duavata ni Lewenivanua politicians
Category:Soqosoqo ni Vakavulewa ni Taukei politicians
Category:Politicians from Bau (island)
Q:
How do you equally space out elements in a Row?
Row {
    width: parent.width
    spacing: ????
    Checkbox {}
    Checkbox {}
    Checkbox {}
    Checkbox {}
}
So just to be clear, the checkboxes should be spaced such that, however wide the row is, the spacing expands or compresses to match.
A:
The simplest solution would be to set width: parent.width/4 for each of the checkboxes. If you want to keep the checkbox width set at some known value, you could instead set spacing: (parent.width - 4 * checkboxwidth)/3 on the Row. Note that this will cause the elements to overlap when the parent is narrow.
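As a quick sanity check of that second formula, here it is generalized to n children (plain JavaScript; the widths are made-up example values):

```javascript
// Spacing so that n fixed-width children plus (n - 1) equal gaps
// exactly fill parentWidth -- the formula quoted above with n = 4.
function rowSpacing(parentWidth, childWidth, n) {
  return (parentWidth - n * childWidth) / (n - 1);
}

console.log(rowSpacing(500, 80, 4)); // 60 -> three 60px gaps between four 80px boxes
console.log(rowSpacing(300, 80, 4)); // negative -> the overlap case for a narrow parent
```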
If you're targeting Qt 5.1 or higher, you may want a RowLayout. I'm still on 5.0, though, so I can't help you there.
Yet another way to do this would to be to put each CheckBox in an Item. Each Item would have width: parent.width/4, and each CheckBox would have anchors.centerIn: parent. This would give a half-width margin on the far left and far right, which may or may not be desired.
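Sketched out, that Item-wrapper approach looks roughly like this (untested; assumes a CheckBox type is in scope, e.g. from QtQuick.Controls, and uses a Repeater instead of four literal children):

```qml
Row {
    width: parent.width
    Repeater {
        model: 4
        // Each quarter-width Item centers one CheckBox, leaving a
        // half-width margin at the far left and far right.
        Item {
            width: parent.width / 4
            height: childrenRect.height
            CheckBox { anchors.centerIn: parent }
        }
    }
}
```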
Event Description: You'll meet our team of obstetricians, certified nurse midwives, and family medicine physicians, and you can tour our recently renovated Labor & Delivery unit and in-hospital Birth Center. You can also attend an optional tour before or after the event (5 p.m. or 7 p.m.). Please plan on arriving 15 minutes before the event to allow time to sign in. If you are planning on attending the 5 p.m. tour, please arrive at 4:45 p.m.
If you are unable to register online, please call 619-543-3168.
jsToolBar.strings = {};
jsToolBar.strings['Strong'] = 'Gras';
jsToolBar.strings['Italic'] = 'Italique';
jsToolBar.strings['Underline'] = 'Souligné';
jsToolBar.strings['Deleted'] = 'Rayé';
jsToolBar.strings['Code'] = 'Code en ligne';
jsToolBar.strings['Heading 1'] = 'Titre niveau 1';
jsToolBar.strings['Heading 2'] = 'Titre niveau 2';
jsToolBar.strings['Heading 3'] = 'Titre niveau 3';
jsToolBar.strings['Unordered list'] = 'Liste à puces';
jsToolBar.strings['Ordered list'] = 'Liste numérotée';
jsToolBar.strings['Quote'] = 'Citer';
jsToolBar.strings['Unquote'] = 'Supprimer citation';
jsToolBar.strings['Preformatted text'] = 'Texte préformaté';
jsToolBar.strings['Wiki link'] = 'Lien vers une page Wiki';
jsToolBar.strings['Image'] = 'Image';
// Copyright (c) 2006 Foundation for Research and Technology-Hellas (Greece).
// All rights reserved.
//
// This file is part of CGAL (www.cgal.org).
// You can redistribute it and/or modify it under the terms of the GNU
// General Public License as published by the Free Software Foundation,
// either version 3 of the License, or (at your option) any later version.
//
// Licensees holding a valid commercial license may use this file in
// accordance with the commercial license agreement provided with the software.
//
// This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
// WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
//
// $URL$
// $Id$
//
//
// Author(s) : Menelaos Karavelas <mkaravel@iacm.forth.gr>
#ifndef CGAL_VORONOI_DIAGRAM_2_VALIDITY_TESTERS_H
#define CGAL_VORONOI_DIAGRAM_2_VALIDITY_TESTERS_H 1
#include <CGAL/license/Voronoi_diagram_2.h>
#include <CGAL/Voronoi_diagram_2/basic.h>
#include <algorithm>
#include <CGAL/Triangulation_utils_2.h>
#include <CGAL/Voronoi_diagram_2/Finder_classes.h>
namespace CGAL {
namespace VoronoiDiagram_2 { namespace Internal {
//=========================================================================
//=========================================================================
template<class VDA, class Base_it>
class Edge_validity_tester
{
// tests whether a halfedge has as face a face with zero area.
private:
const VDA* vda_;
private:
typedef Triangulation_cw_ccw_2 CW_CCW_2;
// Base_it is essentially VDA::Edges_iterator_base
typedef Base_it Edges_iterator_base;
typedef typename VDA::Halfedge_handle Halfedge_handle;
typedef typename VDA::Delaunay_graph::Vertex_handle Delaunay_vertex_handle;
public:
Edge_validity_tester(const VDA* vda = NULL) : vda_(vda) {}
bool operator()(const Edges_iterator_base& eit) const {
CGAL_assertion( !vda_->edge_rejector()(vda_->dual(), eit->dual()) );
int cw_i = CW_CCW_2::cw( eit->dual().second );
CGAL_assertion_code( int ccw_i = CW_CCW_2::ccw( eit->dual().second ); )
CGAL_assertion_code(Delaunay_vertex_handle v_ccw_i = eit->dual().first->vertex(ccw_i);)
CGAL_assertion( !vda_->face_rejector()(vda_->dual(), v_ccw_i) );
Delaunay_vertex_handle v_cw_i = eit->dual().first->vertex(cw_i);
if ( !vda_->face_rejector()(vda_->dual(), v_cw_i) ) {
return false;
}
Halfedge_handle he(eit);
Halfedge_handle he_opp = eit->opposite();
CGAL_assertion( he_opp->opposite() == he );
return he->face()->dual() < he_opp->face()->dual();
}
};
//=========================================================================
//=========================================================================
template<class VDA>
class Vertex_validity_tester
{
private:
const VDA* vda_;
private:
typedef typename VDA::Delaunay_graph::Face_handle Delaunay_face_handle;
typedef typename VDA::Delaunay_graph::Finite_faces_iterator
Delaunay_faces_iterator;
public:
Vertex_validity_tester(const VDA* vda = NULL) : vda_(vda) {}
bool operator()(const Delaunay_faces_iterator& fit) const {
Delaunay_face_handle f(fit);
Delaunay_face_handle fvalid = Find_valid_vertex<VDA>()(vda_,f);
return f != fvalid;
}
};
//=========================================================================
//=========================================================================
} } //namespace VoronoiDiagram_2::Internal
} //namespace CGAL
#endif // CGAL_VORONOI_DIAGRAM_2_VALIDITY_TESTERS_H
Q:
Why is a JavaScript reserved keyword allowed as a variable name?
We know that let is a reserved keyword that defines a variable in JavaScript.
var let = 2;
console.log(let); // return 2
So why is this not an error?
A:
let is only a reserved word in strict mode:
'use strict';
var let = 5;
Uncaught SyntaxError: Unexpected strict mode reserved word
This is because browsers generally prioritize backwards compatibility above all else. Although let was introduced in ES2015 (and its use was foreseen sometime before then), prior scripts which used let as a variable name would continue to work as desired. For example, if your script was written in 2008:
var let = 2;
console.log(let);
Then it would continue to work in 2020 as well.
For very similar reasons, async and await are also permitted as variable names.
As for why the use of let errors in strict mode - strict mode was introduced in ES5, in 2009. Back then, the language designers saw that the use of new keyword(s) to declare variables was a possibility in the future, but it wasn't set in stone yet, and ES6 was still a long ways off. Once ES5 came out, script writers could opt-in to strict mode to make code less confusing, and change silent errors to explicit errors. Although let wasn't usable for variable declaration yet, prohibiting it as a variable name in strict mode improved the readability of future scripts which opted into strict mode, while also not breaking any existing scripts.
A:
let and some other words act as reserved words only in strict mode. The spec says:
Disallowed in strict mode: Those that are contextually disallowed as identifiers, in strict mode code: let, static, implements, interface, package, private, protected, and public;
You can see let in the list of words that are disallowed only in strict mode. If you want an error to be thrown for using let as a variable name, use strict mode:
"use strict";
var let = 3
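A runnable way to confirm both behaviors from a single Node.js script is indirect eval, which always evaluates its argument as sloppy (non-strict) global code unless the evaluated source itself opts into strict mode:

```javascript
const globalEval = eval; // indirect eval: runs source as non-strict global code

// Returns false only if the source fails to parse (SyntaxError).
function parses(src) {
  try {
    globalEval(src);
    return true;
  } catch (e) {
    return !(e instanceof SyntaxError);
  }
}

console.log(parses("var let = 2;"));                // true: sloppy mode allows it
console.log(parses("'use strict'; var let = 2;"));  // false: reserved in strict mode
```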
165 Pa. Commonwealth Ct. 573 (1994)
645 A.2d 474
BOROUGH OF KENNETT SQUARE
v.
Amrit LAL, Appellant.
Commonwealth Court of Pennsylvania.
Submitted on Briefs June 6, 1994.
Decided July 8, 1994.
Reargument Denied August 17, 1994.
*577 Thomas R. Kellogg, for appellant.
John L. Hall, for appellee.
Before COLINS and PELLEGRINI, JJ., and NARICK, Senior Judge.
NARICK, Senior Judge.
Appellant, Amrit Lal, appeals from an order of the Court of Common Pleas of Chester County, sitting in equity, ordering injunctive relief and appointing an agent for Appellant to manage Appellant's apartment complex known as "Scarlett Manor Apartments," in order to bring it into compliance with the Borough of Kennett Square's (Borough) housing and building codes.
This matter commenced in February, 1993 when the Borough filed this action in an effort to bring an end to almost five years of continuous litigation with Appellant. From the time Appellant purchased Scarlett Manor Apartments, in March 1988, when they were apparently in a good state of repair and free of any Housing Code (Code) violations, until December 1993, Appellant was cited for more than 160 Code violations, and the Borough had spent more than $40,000.00 in attorney's fees in this effort to compel Appellant's compliance. As noted by the trial court judge, The Honorable Thomas J. Gavin, who had personally heard more than one hundred (100) cases involving Appellant's rental real estate, "[Appellant] is the *578 single most litigious person in the history of Chester County."[1] (T.C. Opinion at 5, December 8, 1993.)
According to the trial court, Appellant's obstructive conduct usually conformed to the following pattern:
Following inspections, the borough would communicate deficiencies to the defendant and suggest that he contact the borough regarding the resolution of same. Several months would pass, often with another intervening inspection, but no corrective action taken. More letters would be generated and ultimately defendant would agree to remedy the deficiencies by a date certain. It is important to note that the defendant was always given leeway to select a date by which the repairs, or deficiencies, would be corrected. The corrective date would come and go with no action by defendant, whereupon the borough would issue citations. Hearings would then be scheduled before the district court (District Court 15-3-04) where the defendant would or would not appear to defend. Whether found guilty by the District Justice or in absentia, an automatic appeal would be taken to the Court of Common Pleas. By the time the cases found their way to my courtroom the deficiencies, which would now be months if not years old, remained uncorrected. In each case numerous pre-trial motions would be filed, often on the day scheduled for trial, asserting that the court lacked jurisdiction, was biased against the defendant, that the borough was discriminating against defendant because of his third world origin, etc. etc. Ultimately the cases would be heard, appropriate verdicts rendered and plaintiff advised that if the repairs were corrected pre-imposition of sentence, nominal fines would be imposed. Invariably, post-verdict motions would be filed, no corrections made and the defendant sentenced accordingly. Thereafter, motions to vacate sentence and/or appeals would be filed with the deficiencies still uncorrected. 
The deficiencies *579 cited by the borough, by way of example and not limitation, run the gamut from countless vectors (a polite euphemism for cockroaches) scurrying about the apartments to defective and/or leaking and/or missing plumbing fixtures, lighting fixtures that do not work, windows with broken or missing panes, or screens, loose or missing balcony railings, leaking roofs and trash strewn about the properties.
(T.C. Opinion at 2-3, December 8, 1993).
As a result of this delay and vexatious conduct, the Borough filed a complaint asking for the extraordinary remedy of appointment of an agent to manage the apartments and correct the Code violations. In response, Appellant filed preliminary objections which were denied, and Appellant was given leave to file an answer to the Borough's complaint within twenty (20) days. Appellant failed to file an answer within the time allowed, and instead appealed the trial court's denial of his preliminary objections, via a petition for review, to this court. Appellant's petition to vacate Judge Gavin's order denying his preliminary objections was denied by Judge MacElree of the Chester County Court of Common Pleas. Appellant was notified that a default judgment would be taken if he did not file an answer within ten (10) days, and when such answer was not filed, a default judgment was entered and a final hearing to frame an appropriate final decree was scheduled. The final hearing concluded on November 19, 1993, which resulted in the appointment of an agent to manage the apartment buildings in order to correct the problems and bring them into compliance with the Borough's ordinances. Meanwhile, Appellant's legal maneuverings continued with, inter alia, a petition to quash the Borough's request for a final hearing to fashion an appropriate final decree, a petition for recusal of Judge Gavin or transfer to another county, continuing requests for production of documents after a protective order had been granted, and a motion to disqualify the court's appointed agent.
On appeal to this Court, Appellant raises eleven issues for our review, three of which have been waived by failure to *580 raise them in post-trial motions.[2] Pa.R.C.P. No. 227.1(b)(2); Estate of Hall, 517 Pa. 115, 535 A.2d 47 (1987); Borough Council for Borough of Millbourne v. Bargaining Committee of Millbourne Borough Police, 109 Pa.Commonwealth Ct. 474, 531 A.2d 565 (1987). We will therefore consider the remaining issues on their merits.
First, Appellant argues that Judge Gavin should have recused because of animosity to Appellant. Judge Gavin denied the motion stating that his actions do not evince any bias towards Appellant. (T.C. Opinion at 2, December 17, 1993.) Like his post-trial motions, Appellant's brief on appeal contains repetitive, generalized, boilerplate allegations of bias and prejudice, but he only indicates one instance which he believes shows the court's animosity, Judge Gavin's threat to hold Appellant in contempt for continuing to cross-examine a witness, Mr. Marguriet, on irrelevant matters. (R. at 50a.) The record indicates that the first questions Appellant asked on cross-examination of Mr. Marguriet, the Manager and Code Enforcement Officer of the Borough, concerned the deeds of properties owned by other landowners, and a case pending against a property owner in the Borough. These questions were clearly irrelevant, and such was Judge Gavin's ruling. Yet, Appellant continued to ask irrelevant questions, until he repeated some he had attempted to ask earlier, and at that point, Judge Gavin warned Appellant to cross-examine only on relevant issues, or risk a contempt citation. (R. 40a-50a.) Judge Gavin remained remarkably patient while Appellant asked one irrelevant question after another, but his repeated rulings were ignored. His warning was therefore warranted, and his threatened use of his contempt powers was entirely proper.
*581 In this jurisdiction, it is presumed that a trial judge is capable of recognizing in himself/herself the symptoms of bias and prejudice. If the judge believes that he or she can hear and dispose of the case without partiality, then that decision will not be overturned, absent an abuse of discretion. Reilly by Reilly v. Southeastern Pennsylvania Transportation Authority, 507 Pa. 204, 489 A.2d 1291 (1985); Commonwealth v. Knight, 421 Pa.Superior Ct. 485, 618 A.2d 442 (1992). Here, there was no abuse of discretion in warning Appellant that he would be in contempt of court if he continued to ask totally irrelevant questions. Therefore, there is no merit to Appellant's claim that Judge Gavin should have recused.
Next, Appellant claims that the enforcement of the Borough's ordinances was discriminatory against the low income groups which reside in Scarlett Manor and against Appellant, who claims to be a "member of a minority group, being an Asiatic Indian." Appellant not only failed to prove that there was discriminatory enforcement of the Building Code, but has failed to allege any facts which, if true, would support this claim. Township of Ridley v. Pronesti, 431 Pa. 34, 244 A.2d 719 (1968); Harasty v. Borough of West Brownsville, 50 Pa.Commonwealth Ct. 186, 412 A.2d 688 (1980). We will not recapitulate the facts of this case, but suffice to say that the Borough's ordinances were enforced against Appellant because Appellant had violated these ordinances numerous times, there had been numerous complaints by the tenants and others, and because Appellant engaged in every delaying tactic he knew, including abusing his legal rights, to resist abating the conditions for which he was cited and avoid complying with minimal standards of habitability. (Plaintiff's Exhibit 1 and 3.) Therefore, we find no merit to this argument.
Next, Appellant claims that the court erred in entering a final order in this case before the receipt and consideration of post-trial motions. Pa.R.C.P. No. 227.1. Appellant relies on Reading Anthracite Co. v. Rich, 525 Pa. 118, 577 A.2d 881 (1990), where an adjudication and decree nisi were entered which ordered the convening of a meeting within ten (10) days *582 of entry of the final order, and also invited the parties to submit post-verdict motions within ten (10) days. The petitioners filed their motions on the tenth day, but prior to their receipt the chancellor disposed of the ultimate issue in the case. The Supreme Court held that the petitioners were denied due process when they were denied the right to file exceptions or post-trial motions.
This case is readily distinguishable from Reading Anthracite. First, although the December 8, 1993 filing was labeled an opinion and order, Appellant had the opportunity and did file post-trial motions which were thoroughly addressed and considered by the trial court. Moreover, the court was not required to enter a decree nisi because a judgment by default had already been entered. Panther Valley Television Co. v. Borough of Summit Hill, 372 Pa. 524, 94 A.2d 735 (1953). Thus, Reading Anthracite is inapplicable here, and there is no merit to this argument.
Next, Appellant claims that the trial court erred in failing to insure that the party intending to purchase the Scarlett Manor was represented at the hearing, and that the tenants were joined as parties. The issue of the necessity of joining the tenants has been waived because it was not raised at the hearing or in post-trial motions. Moreover, notwithstanding Appellant's allegations regarding a potential buyer for his property, the buyer remains unnamed and unproven in the record. When Appellant's attorney, Mr. Kalmbach, was asked about this buyer, he responded that the Borough was more involved in the negotiations for sale than he was. (R. 84.) However, the attorney for the Borough stated that the Borough knew very little about the potential sale, apart from what was told them by Appellant months before. The Borough never saw a copy of the alleged agreement of sale and never knew the name of the alleged, potential buyers. (R. 83-85.) Appellant did not offer any more specific information about the buyers during the hearing, although he was given every opportunity to do so. Instead, he continued to rely on his and his attorney's assertions that the sale was imminent. *583 (R. 85.) The court cannot join a person or persons in a proceeding when it has not been given information as to the identity of such persons, and when it has received no evidence confirming an interest in the property which would be affected by the court's proceedings. Therefore, there is no merit to Appellant's allegation of trial court error on this issue.
Next, Appellant claims that the relief ordered was not a proper exercise of the equitable powers of the court because he made substantial efforts to provide decent housing for the tenants and to comply with the general intent of the ordinance. Appellant cites his own testimony and that of his manager, Mr. Ayra, both of whom the court specifically found not credible. As an example of Mr. Ayra's testimony, he made the incredible statement that perhaps he failed to notice certain Code violations because he made inspections only in the evening. (R. 167a). Although he claimed to have called repair persons to correct the problems, he did not produce a single receipt or cancelled check to prove that they had undertaken the repairs they claimed to have accomplished. As factfinder in the evidentiary hearing, the trial court was free to disregard Appellant's testimony and make findings as to credibility. Commonwealth v. Nunez, 312 Pa.Superior. Ct. 584, 459 A.2d 376 (1983). In no uncertain terms, the trial court found Appellant incredible and disregarded the testimony he and Mr. Ayra offered.
Next, Appellant claims that the trial court had an adequate remedy at law, and therefore equitable relief should not have been granted. Citing School District of West Homestead v. Allegheny County Board of School Directors, 440 Pa. 113, 269 A.2d 904 (1970), Appellant argues that the trial court had no jurisdiction to consider this action in equity because: (1) there is a constitutionally valid statute, the Borough's citation procedures, which provide an explicit and exclusive administrative remedial process, with review by the Court of Common Pleas; and (2) the statutory remedy is adequate and compliance with the statutory remedy will not cause irremedial harm.
*584 The Borough Code provides that boroughs are specifically vested with the power to enforce housing ordinances by instituting appropriate actions or proceedings in law or in equity. The Borough Code, Act of February 1, 1966, P.L. (1965) 1656, as amended, 53 P.S. § 46202(24). Although there are explicit legal and administrative procedures for serving citations for violations of the Borough's housing ordinances, under the Borough Code, they are not the exclusive remedies available to the authorities, and the courts may proceed in equity.
The trial court held that the inadequacy of the available legal remedies was proven by evidence that Code violations continue unabated, notwithstanding the filing of multiple actions by the Borough against Appellant. (T.C. Opinion at 10, December 8, 1993). In explaining this holding the trial court stated, "The borough has tried amicably and legally for five years to compel defendant to meet those minimum standards its other citizens are required to adhere to . . . If equitable relief is not granted, the borough will continue to be frustrated in its legitimate efforts to enforce its housing codes." (T.C. Opinion at 8-9, December 8, 1993).
Equity has jurisdiction notwithstanding a failure to pursue an available statutory remedy if that remedy is inadequate. While this Court is reluctant to favor equity over administrative remedies, it is appropriate to take equity jurisdiction to avoid a multiplicity of actions. Temple University v. Department of Public Welfare, 30 Pa.Commonwealth Ct. 595, 374 A.2d 991 (1977). We hold that the remedies at law, the hundreds of citations for violations of the housing code received by Appellant, have been inadequate to insure their enforcement. Thus, the safety and the habitability of the premises can not be guaranteed, and the health and welfare of the tenants residing in Appellant's apartment complex is endangered. Therefore, it was perfectly appropriate for the court to provide equitable relief in the form of appointment of an agent to manage the Scarlett Manor apartments.
*585 Appellant next claims that the court erred in failing to open the default judgment. The decision to open a default judgment is left to the sound discretion of the trial court, which must determine that: (1) the petition to open was promptly filed; (2) there was a reasonable excuse for failure to respond; and (3) a meritorious defense must be shown. Southeastern Pennsylvania Transportation Authority v. DiAntonio, 152 Pa.Commonwealth Ct. 237, 618 A.2d 1182 (1992). Appellant cannot meet any part of this test.
First, judgment by default for failure to answer the Borough's complaint was entered on September 10, 1993. Appellant did not file his petition to open until December 13, 1993. Although he states he relied on his petition for review of the trial court's dismissal of his preliminary objections, which were filed in this court, to stay the proceedings on the default judgment so that he did not need to file an answer, such reliance was misplaced. Pa.R.A.P. 1701(b)(6) provides that the trial court may proceed further in any matter in which a nonappealable interlocutory order has been entered, notwithstanding the filing of a notice of appeal or a petition for review. Here, Appellant attempted to appeal a nonappealable interlocutory order, which this court dismissed on two different occasions, September 15, 1993 and October 29, 1993, and therefore the trial court properly continued to proceed in this matter while the appeals were pending. Even if we were to accept that Appellant were relying on his petitions for review to stay the proceeding, he still waited for over a month to file his petition to open after his petitions to this Court were dismissed. Thus, the petition to open was not promptly filed, and there is no reasonable excuse for Appellant's failure to respond to the trial court's order to file an answer to the Borough's complaint.
Moreover, as we have discussed, no meritorious defense, which has been defined as a defense sufficient to justify relief if proven, Id., was offered. As discussed above, all Appellant's defenses are without merit and are therefore insufficient to justify relief. Therefore, the trial court did not err in refusing to open the default judgement.
*586 Finally, Appellant claims that the decree should be vacated because the court did not require that the agent appointed by the court to manage Appellant's property post bond pursuant to Pa.R.C.P. No. 1533(d). Rule 1533(d) provides that a "receiver" must give security for the faithful performance of his duty as the court shall direct, and shall not act until the security is paid.
Here, however, an "agent" was appointed, similar to the agent required by the Borough Code, Section 8-107, which requires an owner of any apartment building to register a person to serve as a responsible local agent. Traditionally a person seeking a receiver does so to protect property in which he or she has an interest. Levin v. Barish, 505 Pa. 514, 481 A.2d 1183 (1984); Northampton National Bank of Easton v. Piscanio, 475 Pa. 57, 379 A.2d 870 (1977).[3] Here, the agent was not appointed to protect the assets of a party which has a property interest in Appellant's property; he was only appointed to manage the property in compliance with local ordinances, as would a responsible local agent. Moreover, the appointment does not divest Appellant of his interest in the property; he retains the power to repair and maintain his property if he so chooses. Therefore, we find no merit to Appellant's claim that the trial court erred in failing to require the agent to post security.
We believe that the repetitious and frivolous nature of this appeal entitles the Borough to the award of reasonable counsel fees pursuant to Section 2503(7) of the Judicial Code, 42 Pa.C.S. § 2503(7) and Pa.R.A.P. 2744(1). Gossman v. Lower Chanceford Township Board of Supervisors, 503 Pa. 392, 469 A.2d 996 (1983). Moreover, in In the Matter of Appeal of Richard Michael George, 101 Pa.Commonwealth Ct. 241, 515 A.2d 1047 (1986), and Patel v. Workmen's Compensation *587 Appeal Board (Sauquoit Fibers Co.), 103 Pa.Commonwealth Ct. 290, 520 A.2d 525, appeal denied, 515 Pa. 616, 530 A.2d 869 (1987), we held that we are clearly authorized, under Pa.R.A.P. 2744, to sua sponte impose on the appellant the sanction of paying the reasonable counsel fees of the appellee, and while we did not at that time award fees, we held that such abuse of this Court's appeals process may in the future result in the imposition of such sanctions.
Here, Appellant has so clearly abused the legal process that we now impose on Appellant, sua sponte, the sanction of paying reasonable counsel fees. This appeal was a result of Appellant's refusal to respond to the Borough's complaint in equity, and instead twice appealing the trial court's preliminary rulings. Then, when Appellant suffered a judgement by default, brought on by his own intentional conduct, he filed this appeal, raising numerous, frivolous issues designed to obstruct and delay the equitable relief ordered by the trial court.
Accordingly, we affirm the trial court's order in its entirety, and remand to the trial court for the calculation of reasonable fees incurred by the Borough in this appeal, to be paid by Appellant.
ORDER
AND NOW, this 8th day of July, 1994, the order of the Court of Common Pleas of Chester County in the above-captioned matter is affirmed. Further, the case is remanded to the trial court for calculation of reasonable attorney's fees incurred by the Borough in this appeal, to be paid by Appellant.
Jurisdiction relinquished.
NOTES
[1] The trial court noted that Appellant has a Ph.D. and a law degree, and the level of sophistication of his pleadings and his ability to manipulate the rules shows he was not a typical pro se litigator. Rather, he was more an unlicensed lawyer of considerable skill. (T.C. Opinion at 6, December 8, 1993.)
[2] Although Appellant raised more than fifty (50) issues in the trial court he still has waived the following issues: (1) The Building Code of the Borough of Kennett Square was unconstitutional as it bears no reasonable relationship to the health, safety, morals or general welfare of the community; (2) There is no basis for equity jurisdiction because by its inaction the Borough brought about the conditions of which it complains; (3) The Court was in error to exclude certain evidence.
[3] It was held in DeAngelis v. Commonwealth Land Title Insurance Co., 467 Pa. 410, 358 A.2d 53 (1976), that it was improper to appoint a receiver when the party petitioning for such an appointment does not have a lien on the property in question, and only has contract rights to the property which have not been reduced to judgment. Thus, the petitioning party did not have sufficient property rights in the disputed property to force the appointment of a receiver.
And check out the kohlrabi slaw and Glada's garbanzo zucchini salad in the recipe sheet!
Also, FFS will have its own filming taking place tomorrow at pick up: Kate Perkins of Perkins Films will be shooting the whole day, from the fruit delivery at 8am to the arrival of the truck from the Farm at Miller's Crossing in the middle of the day to our 5th distribution. So, just a heads up. And thank you Kate!
Q:
How to have simple google apps script send mails from Sheets from owner account regardless of who's accessing file
I've clicked around for the past few days trying to find an answer but can't seem to find one that makes sense to me (forgive me, I'm fairly new to GAS). I am trying to set up a Fantasy Golf Draft sheet to be used by about 12 users, over half of whom don't have, or aren't willing to use, a Gmail address. Getting access to the file is no problem; where I am running into an issue is trying to run a script where, when a button/shape is clicked, it sends an automated email to the next person whose turn it is to pick. The script works when the click comes from me or from someone with a Google account who can authorize the script, etc. I run into trouble when it's someone without a Google account.
My question - how can I set the script to ONLY send from my email, or the Sheet/Script Owner's email - regardless of who is modifying/clicking the button? I see links about creating the script as a webapp to do this, but I get lost quickly.
Here's a link to my sheet:
https://docs.google.com/spreadsheets/d/16AppcmrcuhatnzcEs7eIQyD_p1swbRimRZZ4FdbhBKI/edit?usp=sharing
And here is my send mail code:
function sendAlertEmails() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("Send Mails"));
var sheet = SpreadsheetApp.getActiveSheet();
var dataRange = sheet.getRange("A2:f2");
var data = dataRange.getValues();
for (i in data) {
var rowData = data[i];
var emailAddress = rowData[1];
var recipient = rowData[0];
var message1 = rowData[2];
var message2 = rowData[3];
var message3 = rowData[4];
var message4 = rowData[5];
var message = 'Hey ' + recipient + ',\n\n' + message1 + '\n\n' + ' The last player picked was ' + message2 + '\n\n' + message3 +'\n\n' + message4;
var subject = '*GOLF DRAFT 2018* - YOU ARE ON THE CLOCK';
MailApp.sendEmail(emailAddress, subject, message);
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("DRAFT"));
}
}
Any help would be greatly appreciated!
A:
I was interested in this issue and worked on it a bit more. I changed the approach from a GET request to a POST request.
Here is what I have in the Google sheet.
function sendAlertEmails() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("Send Mails"));
var sheet = SpreadsheetApp.getActiveSheet();
var dataRange = sheet.getRange("A2:f2");
var data = dataRange.getValues();
for (i in data) {
var rowData = data[i];
var emailAddress = rowData[1];
var recipient = rowData[0];
var message1 = rowData[2];
var message2 = rowData[3];
var message3 = rowData[4];
var message4 = rowData[5];
var message = 'Hey ' + recipient + ',\n\n' + message1 + '\n\n' + ' The last player picked was ' + message2 + '\n\n' + message3 +'\n\n' + message4;
var subject = '*GOLF DRAFT 2018* - YOU ARE ON THE CLOCK';
var payload = {  // renamed from `data` so the loop's `data` array isn't clobbered
'name': 'Bob Smith',
'email': 'a@b.com',
'message': message,
'subject': subject,
};
var options = {
'method' : 'post',
'contentType': 'application/json',
'payload' : payload
};
var secondScriptID = 'STANDALONE_SCRIPT_ID'
var response = UrlFetchApp.fetch("https://script.google.com/macros/s/" + secondScriptID + "/exec", options);
Logger.log(response) // Expected to see sent data sent back
var ss = SpreadsheetApp.getActiveSpreadsheet();
ss.setActiveSheet(ss.getSheetByName("DRAFT"));
// Browser.msgbox("Your Pick Has Been Made");
}
}
Below is what I have in the standalone script. There are some provisos on the standalone script working:
It needs to be published under "Deploy as web app".
Access should be set to "Anyone, even anonymous".
Every time you make a change to the standalone script, publish it again and set the project version to "New", so that the call from the first sheet reaches the latest code.
Standalone Script
function convertURItoObject(url){
url = url.replace(/\+/g,' ')
url = decodeURIComponent(url)
var parts = url.split("&");
var paramsObj = {};
parts.forEach(function(item){
var keyAndValue = item.split("=");
paramsObj[keyAndValue[0]] = keyAndValue[1]
})
return paramsObj; // here's your object
}
function doPost(e) {
var data = e.postData.contents;
data = convertURItoObject(data)
var recipient = data.email;
var body = data.message;
var subject = data.subject;
try {
MailApp.sendEmail(recipient, subject, body)
}
catch(e){
Logger.log(e)
}
return ContentService.createTextOutput(JSON.stringify(e));
}
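The decoding helper in the web app above can be exercised outside Apps Script as well. Here is a minimal Node.js sketch of the same logic; the sample payload string is illustrative (not taken from a real request) but has the same form-encoded shape that `UrlFetchApp` produces when `payload` is a plain object:

```javascript
// Same logic as the answer's convertURItoObject(), runnable in plain Node.js.
// Caveat: decoding the whole string before splitting on '&' would mis-parse a
// value that itself contains an encoded '&' (%26) -- fine for simple payloads.
function convertURItoObject(url) {
  url = url.replace(/\+/g, ' ');      // '+' encodes spaces in form data
  url = decodeURIComponent(url);      // undo percent-encoding
  var paramsObj = {};
  url.split('&').forEach(function (item) {
    var keyAndValue = item.split('=');
    paramsObj[keyAndValue[0]] = keyAndValue[1];
  });
  return paramsObj;
}

// A hypothetical body like the one UrlFetchApp would send for the payload above
var parsed = convertURItoObject('name=Bob+Smith&email=a%40b.com&subject=GOLF%20DRAFT');
console.log(parsed.name);   // 'Bob Smith'
console.log(parsed.email);  // 'a@b.com'
```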
Hello, everyone. PlayerUnknown's Battlegrounds Mobile (PUBGM) by Tencent Games received the 0.9.0 update for its mobile app today on both the Android and iOS platforms. In this article I will go into detail about the new PUBG Mobile 0.9.0 update: Night Mode, the QBU sniper rifle, the major patch notes, and how to download it. Below is the tentative live countdown to this major update. Subscribe to our Reddit community for more updates.
PUBG MOBILE LIVE COUNTDOWN TO 0.9.0 UPDATE
Official Version is now available on Play Store and App Store
Status = Released, Check your app updates.
Download here:
Apple App Store Link. Google Play Store Link.
This update includes the famous Night Mode (appearing randomly on Erangel), new guns and vehicles, a new competitive mode, new in-game improvements, and major bug fixes. PUBG Mobile version 0.9.0 was released today. Watch the official teaser below:
One piece of sad news: the Beryl M762 won't be in this update, though it should arrive in 0.10.0. Instead, the QBU sniper rifle will be added to the Sanhok map, where it replaces the Mini 14. For anyone asking, the maintenance beforehand was a preliminary process to get ready for the update on the 25th.
Here is the detailed link/source to the official Google Document from Tencent for anyone who wants an overview of this update. All of the data in it belongs to Tencent Games; it is reproduced here for informational purposes under fair use.
The update mainly consists of new weather, map improvements, festival effects, matchmaking improvements, an enemy/friend spectator mode, new in-game purchases, crew challenges, changes to custom rooms, character action improvements, new settings and improvements, projectile improvements, and bug fixes.
The invention relates to methods and equipment for establishing data security in an e-mail service between an e-mail server and a mobile terminal.
Data security in an e-mail service is achieved by using cryptographic techniques in which traffic in a potentially insecure channel is encrypted using cryptographic information, commonly called encryption keys. A problem underlying the invention relates to distributing such encryption information. Prior art techniques for distributing the encryption information are commonly based on public key encryption techniques, such as Diffie-Hellman. A problem with this approach is that the parties have to trust the underlying mobile network and its operator, which they are surprisingly reluctant to do. Another problem is that mobile terminals tend to have small and restricted user interfaces.
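The background names Diffie-Hellman as a representative public-key technique for agreeing on encryption keys over an insecure channel. As a toy illustration only (the tiny prime and fixed secrets below are illustrative, not from the patent, and far too small for real use), the key-agreement idea can be sketched in a few lines of JavaScript:

```javascript
// Square-and-multiply modular exponentiation over BigInt.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Toy public parameters: real systems use large standardized groups.
const p = 0xFFFFFFFBn; // 2^32 - 5, a prime (far too small for real use)
const g = 2n;

const a = 123456789n;      // Alice's secret exponent
const b = 987654321n;      // Bob's secret exponent
const A = modPow(g, a, p); // Alice sends A over the insecure channel
const B = modPow(g, b, p); // Bob sends B over the insecure channel

// Each side combines its own secret with the other's public value.
const sharedA = modPow(B, a, p);
const sharedB = modPow(A, b, p);
console.log(sharedA === sharedB); // true: both derive g^(ab) mod p
```

The patent's complaint is precisely that this exchange, by itself, does not authenticate the parties, so in practice they must trust some intermediary such as the mobile network operator.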
Your Collection: Sticker Collection
Here are some pictures of my sticker collection from the late '70s forward. I worked for Cycle Five Kawasaki in Calvert County, Maryland, owned by Dale and Ann Norfolk, the parents of Skip, Shawn, and Scott. Two of these names are household names in the MX/SX world. The other one is smart. We all collected and traded stickers and beer cans. I never bought any of these; companies either gave them out or I traded for them. Someday I am going to restore some old motocross bikes and then I will stick them.
ATTENTION READERS: WE NEED YOUR COLLECTIONS!
Do you have something cool you'd like to show off? Submit a piece from your collection as well as your name and mailing address to madavis@graphicslab.com and win Throttle Jockey stickers. You will be notified via e-mail if you are the winner!
*Please note that while international readers may submit their Collections, we are only able to award and ship prizes to winners within the United States.
Start Date: 4/12/01; HourAhead hour: 14; No ancillary schedules awarded.
Variances detected.
Variances detected in Energy Import/Export schedule.
LOG MESSAGES:
PARSING FILE -->> O:\Portland\WestDesk\California Scheduling\ISO Final
Schedules\2001041214.txt
---- Energy Import/Export Schedule ----
$$$ Variance found in table tblINTCHG_IMPEXP.
Details: (Hour: 14 / Preferred: 20.00 / Final: 19.97)
TRANS_TYPE: FINAL
SC_ID: ECTRT
MKT_TYPE: 2
TRANS_DATE: 4/12/01
TIE_POINT: MEAD_2_WALC
INTERCHG_ID: EPMI_CISO_BERT
ENGY_TYPE: FIRM
244 F.Supp.2d 1250 (2003)
HORIZON HOLDINGS, L.L.C. f/k/a Horizon Marine L.C.; Geoffrey Pepper; Cassandra O'Tool; and John O'Tool; Plaintiffs,
v.
GENMAR HOLDINGS, INC.; Genmar Industries, Inc.; and Genmar Manufacturing of Kansas, L.L.C, Defendants.
No. 01-2193-JWL.
United States District Court, D. Kansas.
February 11, 2003.
*1255 Floyd R. Finch, Jr., Blackwell Sanders Peper Martin LLP, George A. Hanson, Stueve Helder Siegel LLP, Kansas City, MO, Nicole T. Bock, Blackwell Sanders Peper Martin LLP, Omaha, NE, Todd M. McGuire, Stueve Helder Siegal LLP, Kansas City, MO, for Plaintiffs.
Harlan D. Burkhead, Lathrop & Gage L.C., Kansas City, MO, Holly S.A. Eng, Judith Williams-Killackey, Thomas Tinkham, Dorsey & Whitney LLP, Minneapolis, MN, Rosalee M. McNamara, Tedrick A. Housh, III, Timothy K. McNamara, Lathrop & Gage L.C., Kansas City, MO, for Defendants.
MEMORANDUM & ORDER
LUNGSTRUM, District Judge.
Plaintiffs filed suit against defendants asserting various claims arising out of defendants' acquisition of plaintiff Horizon Marine LC, an aluminum boat manufacturing company. Specifically, plaintiffs Horizon Holdings, LLC f/k/a Horizon Marine LC (hereinafter "Horizon") and Geoffrey Pepper claimed that defendants breached both the express terms of the purchase agreement entered into between the parties and the duty of good faith and fair dealing implied in the purchase agreement. Plaintiffs Horizon and Mr. Pepper further claimed that defendants made a variety of fraudulent misrepresentations to them for the purpose of inducing plaintiffs to enter into the purchase agreement. In addition, plaintiffs Cassandra O'Tool and John O'Tool alleged that defendants breached the employment agreements signed by them. Ms. O'Tool further alleged that defendants discriminated against her on the basis of her pregnancy when they denied her a raise and when they terminated her employment. Finally, Ms. O'Tool and Mr. Pepper claimed that defendants unlawfully terminated their employment in retaliation for Ms. O'Tool's and Mr. Pepper's complaints of pregnancy discrimination. For a more thorough understanding of the facts of this case, please see the court's order resolving defendants' motions for summary judgment, Horizon Holdings, L.L.C. v. Genmar Holdings, Inc., 241 F.Supp.2d 1123 (D.Kan.2002).
In November 2002, plaintiffs' claims were tried to a jury and, at the conclusion of the trial, the jury returned a verdict in favor of plaintiffs Horizon and Mr. Pepper on their breach of contract claim in the amount of $2,500,000. The jury also found in favor of the O'Tools on their claims that defendants breached the O'Tools' employment contracts and awarded Ms. O'Tool the sum of $63,200 and Mr. O'Tool the sum of $20,313. The jury found in favor of defendants on all other claims.
This matter is presently before the court on three post-trial motions-plaintiffs' motion to alter or amend the judgment (doc. # 197); plaintiffs' motion for attorneys' fees, costs and expenses (doc. # 198); and defendants' renewed motion for judgment as a matter of law pursuant to Rule 50(b) or, in the alternative, motion for remittitur *1256 and/or new trial pursuant to Rule 59 (doc. # 199). As set forth in more detail below, plaintiffs' motion to alter or amend the judgment is granted only to the extent that a typographical error in the judgment will be corrected and is otherwise denied; plaintiffs' motion for attorneys' fees, costs and expenses is granted in part and denied in part; and defendants' renewed motion for judgment as a matter of law, for remittitur and/or for a new trial is denied.
I. Defendants' Renewed Motion for Judgment as a Matter of Law, for Remittitur and/or for New Trial
Defendants seek post-trial relief on all aspects of the jury's verdict that are favorable to plaintiffs. The primary thrust of defendants' post-trial motion concerns the jury's verdict of $2.5 million in favor of Horizon and Mr. Pepper on the breach of contract claim. According to defendants, this award constitutes a windfall unsupported by the facts or the law. Defendants urge that plaintiffs, as a matter of law, are not entitled to recover any damages in the form of lost earn-out. In the alternative, defendants contend that the award must be remitted or a new trial must be granted on lost earn-out damages. Defendants also seek judgment as a matter of law on the jury's liability finding on the breach of contract claim, asserting that plaintiffs failed to present legally sufficient evidence that defendants breached the express or implied terms of the purchase agreement. Similarly, defendants move for judgment as a matter of law on the O'Tools' claims for breach of their respective employment agreements or for a remittitur of those verdicts. Finally, defendants assert that they are entitled to a new trial because the court erroneously admitted parol evidence and erroneously instructed the jury on the duty of good faith and fair dealing.
A. The Jury's Verdict in favor of Plaintiffs Horizon and Geoff Pepper on their Breach of Contract Claim
The court first addresses defendants' argument that they are entitled to judgment as a matter of law on the jury's liability finding with respect to Horizon and Mr. Pepper's breach of contract claim. Judgment as a matter of law under Rule 50(b) "should be cautiously and sparingly granted," Black v. M & W Gear Co., 269 F.3d 1220, 1238 (10th Cir.2001), and is appropriate only if the evidence, viewed in the light most favorable to the nonmoving party, "points but one way and is susceptible to no reasonable inferences supporting the party opposing the motion." Sanjuan v. IBP, Inc., 275 F.3d 1290, 1293 (10th Cir.2002). In determining whether judgment as a matter of law is proper, the court may not weigh the evidence, consider the credibility of witnesses, or substitute its judgment for that of the jury. See Turnbull v. Topeka State Hosp., 255 F.3d 1238, 1241 (10th Cir.2001).
In essence, the court must affirm the jury verdict if, viewing the record in the light most favorable to the nonmoving party, it contains evidence upon which the jury could properly return a verdict for the nonmoving party. See Roberts v. Progressive Independence, Inc., 183 F.3d 1215, 1219-20 (10th Cir.1999) (citing Harolds Stores, Inc. v. Dillard Dep't Stores, Inc., 82 F.3d 1533, 1546 (10th Cir.1996)). Conversely, the court must enter judgment as a matter of law in favor of the moving party if "there is no legally sufficient evidentiary basis ... with respect to a claim or defense ... under the controlling law." Deters v. Equifax Credit Information Servs., Inc., 202 F.3d 1262, 1268 (10th Cir. 2000) (quoting Harolds, 82 F.3d at 1546-47).
In their papers, defendants assert that, as a matter of law, they did not breach the express terms of the purchase *1257 agreement or the implied terms of the purchase agreement. The jury was instructed that they could find in favor of plaintiffs on plaintiffs' breach of contract claim if they found that plaintiffs had proved a breach of one or more express terms or a breach of the implied duty of good faith and fair dealing. See Jury Instruction 12. Because the court concludes that there was ample evidence presented at trial to support a finding that defendants breached the implied covenant of good faith and fair dealing, the court declines to address defendants' arguments concerning whether the evidence was sufficient to support a finding that defendants had breached any express terms of the purchase agreement.
According to defendants, plaintiffs' claim for breach of the implied covenant of good faith and fair dealing fails as a matter of law because it purports to "add wholly new terms to the contract" and "requires the court to rewrite or supply omitted provisions to the purchase agreement in contravention of Delaware law." [1] This is, of course, an accurate statement of Delaware law. See, e.g., Cincinnati SMS A Limited Partnership v. Cincinnati Bell Cellular Systems Co., 708 A.2d 989, 992 (Del. 1998) ("Delaware observes the wellestablished general principle that ... it is not the proper role of a court to rewrite or supply omitted provisions to a written agreement."). Nonetheless, principles of good faith and fair dealing permit a court to imply certain terms in an agreement so as to honor the parties' reasonable expectations when those obligations were omitted, in the literal sense, from the text of the written agreement but can be understood from the text of the agreement. Id. In determining whether to imply terms in an agreement, the proper focus is on "what the parties likely would have done if they had considered the issue involved." Id.
Nothing in this court's instructions to the jury would have permitted the jury to "rewrite" the purchase agreement or to inject into that agreement wholly new terms. In fact, the jury was instructed, entirely consistent with Delaware law, that they should consider "whether it is clear from what was expressly agreed upon by the parties that the parties would have agreed to prohibit the conduct complained of as a breach of the agreement had they thought to negotiate with respect to that matter." See Jury Instruction 12. Defendants argue in their papers that Mr. Pepper did not demonstrate at trial that the parties would have agreed to prohibit the challenged conduct if they had thought to negotiate about such conduct. Of course, defendants also made this argument to the jury. The jury rejected the argument and there was more than sufficient evidence presented at trial to support that conclusion.
For example, the jury could have readily concluded that, in light of the express agreement that plaintiffs would have an opportunity to realize up to $5.2 million in earn-out consideration (defined in the agreement itself as part of the "purchase price"), that the parties would have agreed, had they thought about it, that defendants would not be permitted to undermine Mr. Pepper's authority as president of Genmar Kansas; to abandon the Horizon brand name entirely; to mandate production of Ranger and Crestliner brands at the Genmar Kansas facility to the detriment of the Horizon brand; or to reimburse Genmar Kansas at only "standard cost"[2] for the manufacture of Ranger *1258 and Crestliner boats thereby impairing realization of the earn-out. If the jury concluded that defendants had engaged in such conduct (and there was sufficient evidence to draw such a conclusion), then the jury was free to conclude that such conduct was inconsistent with the spirit of the agreement concerning the earn-out consideration and that such conduct constituted a breach of the implied covenant of good faith and fair dealing. In short, there is evidence in the record upon which a jury could properly return a verdict for Horizon and Mr. Pepper on their breach of contract claim. Judgment as a matter of law, then, is not appropriate.
Defendants also assert that they are entitled to judgment as a matter of law on Horizon and Mr. Pepper's breach of contract claim because plaintiffs failed to present evidence upon which a reasonable jury could have concluded that defendants acted in bad faith. In support of this argument, defendants point to a Delaware Supreme Court decision defining "bad faith" as "the conscious doing of a wrong because of a dishonest purpose or moral obliquity; it is different from the negative idea of negligence in that it contemplates a state of mind affirmatively operating with furtive design or ill will." See Desert Equities, Inc. v. Morgan Stanley Leveraged Equity Fund II, L.P., 624 A.2d 1199, 1209 n. 16 (Del. 1993). According to defendants, the evidence concerning defendants' course of conduct demonstrates only that defendants were attempting to make a profit and that no evidence was presented that defendants were acting with any furtive design or ill will.
As an initial matter, the jury was instructed that a "violation of the implied covenant of good faith and fair dealing implicitly indicates bad faith conduct." See Jury Instruction 12. Thus, the court's instruction certainly requires that defendants' conduct reflect some element of bad faith. While the jury was not required to find specifically that defendants acted with furtive design or ill will in order to find that defendants had breached the covenant of good faith and fair dealing, defendants have not directed the court to any cases suggesting that proof of a breach of the duty of good faith and fair dealing is inadequate in the absence of proof of some furtive design or ill will. Certainly, the Desert Equities case does not suggest such a conclusion. There, the court defined "bad faith" only for purposes of contrasting the nature of that claim with a fraud claim in explaining why it was rejecting the defendants' argument that a plaintiff must plead with particularity under Rule 9(b) a claim of bad faith. See 624 A.2d at 1208. The court, then, rejects defendants' suggestion that evidence of some furtive design or ill will was necessary for a finding of liability on plaintiffs' claim that defendants breached the covenant of good faith and fair dealing. See True North Composites, LLC v. Trinity Indus., Inc., 191 F.Supp.2d 484, 517-18 (D.Del.2002) (rejecting argument that claimant must prove that the other party acted "with furtive design or ill will" in order to prove a breach of the covenant of good faith and fair dealing).
In any event, even assuming that plaintiffs were required to prove that defendants acted with furtive design or ill will *1259 in order to prove a breach of the covenant of good faith and fair dealing, copious evidence was presented at trial demonstrating that defendants acted with the requisite "dishonest purpose" or "furtive design." There was ample evidence, for example, that defendants had ulterior motives for acquiring Horizon Marine, including the desire to remove a potentially significant competitor from the market and the desire to obtain a facility in the "southern" market dedicated primarily to the production of Ranger boats. There was also substantial evidence demonstrating that defendants' course of conduct was intended to benefit defendants' bottom line to the financial detriment of Mr. Pepper.
In that regard, the jury could reasonably have concluded that defendants' efforts to undermine Mr. Pepper's authority as president of Genmar Kansas and their decisions to abandon the Horizon brand name entirely, to mandate the production of Ranger and Crestliner brands at the Genmar Kansas facility and to reimburse Genmar Kansas at only "standard cost" for the manufacture of Ranger and Crestliner boats were all designed to either force Mr. Pepper to quit his employment (thereby extinguishing Mr. Pepper's right to collect any earn-out) or prevent Mr. Pepper from achieving the profit margins necessary to realize his earn-out (because the formula pursuant to which the earn-out was calculated was weighted heavily in favor of the production of Horizon boats). While defendants urge that such a characterization of the evidence simply makes no sense because defendants themselves made no money on the Horizon Marine acquisition (an argument that defendants presented at length to the jury), the evidence was sufficient to support the conclusion that defendants believed (but were ultimately incorrect) that they could still turn a profit through the production of Ranger and Crestliner boats at Genmar Kansas while simultaneously preventing Mr. Pepper from realizing any earn-out by stifling the production of Horizon boats and reimbursing Genmar Kansas only at standard cost for the production of other boats. Simply put, ample evidence was presented from which the jury could reasonably conclude that defendants' conduct, taken as a whole, was in "bad faith," regardless of how that phrase is defined.
In sum, the evidence presented at trial was more than adequate for the jury to conclude that defendants breached the implied covenant of good faith and fair dealing. Defendants' motion on this issue is denied.
B. The Jury's Award of $2.5 Million for Lost Earn-Out Consideration
Defendants contend that they are entitled to judgment as a matter of law on Horizon and Mr. Pepper's claim for damages for two separate but related reasons. First, defendants assert that plaintiffs presented no evidence whatsoever for the jury to ascertain what position plaintiffs would have been in if the purchase agreement had been properly performed. Second, defendants assert that Delaware law precludes any recovery because Genmar Kansas was a new business with no profit history and no evidence was presented from which the jury could conclude that Genmar Kansas was reasonably certain to realize the gross profit margins necessary to achieve any earn-out under the agreement. In the alternative, defendants seek an order remitting the award to nominal damages of one dollar or a new trial on the issue of damages.
1. Judgment as a Matter of Law
The jury was instructed that if they found that defendants had breached the purchase agreement and that plaintiffs sustained damages as a result of that *1260 breach, then Horizon and Mr. Pepper were entitled to compensation "in an amount that [would] place them in the same position they would have been in if the purchase agreement had been properly performed." See Jury Instruction 13. According to defendants, plaintiffs made no effort to explain to the jury how, assuming defendants had performed their contractual obligations in good faith, Genmar Kansas would have ever met the requisite gross profit margins or generated the gross revenues necessary to entitle them to substantial earn-out payments. Stated another way, defendants urge that there was simply no evidence presented at trial that Genmar Kansas would have been profitable absent defendants' breach of the purchase agreement.
The evidence presented at trial, however, was more than sufficient to permit the jury to conclude that Genmar Kansas would have been profitable absent defendants' breach. Mr. Pepper, for example, testified on the second day of his direct examination that, in his mind, the requisite 13 percent gross profit margin was reasonable and obtainable based on his prior experience with other industry boat companies. According to Mr. Pepper, he had worked for other companies where the gross profit margins ranged from 15 percent to 30 percent, so the 13 percent figure seemed "low" to him. Mr. Pepper further testified that during the time that he was responsible for directing Lowe's manufacturing operations,[3] Lowe achieved gross profit percentages in the range of 30 percent. Mr. Pepper cautioned, however, that he needed a certain level of autonomy with respect to the management of Genmar Kansas to ensure that Genmar Kansas would realize the profits and revenues necessary for Mr. Pepper to obtain the earnout. Specifically, Mr. Pepper testified on the first day of his direct examination that he sought (and received) assurances from Mr. Oppegaard and Mr. Cloutier that they would "allow [him] to do what is necessary in managing the company to obtain that earn-out." According to Mr. Pepper, Mr. Oppegaard further assured him that he would be in control of Genmar Kansas' operations and that he would be able to make the "operation decisions necessary" to obtain the earn-out.
The evidence presented at trial was also sufficient from which the jury could conclude that Horizon Marine, just prior to defendants' acquisition, was about to "break into the black" and turn a profit. Mr. Pepper, for example, testified on the first day of his direct examination that Horizon Marine was enjoying significant progress in late 1997 and the first six months of 1998. Mr. Pepper fully expected Horizon Marine to start making a profit in 1998. Indeed, the opinions and perspectives of other people associated with the acquisition lent additional credence to Mr. Pepper's beliefs. Mr. Pepper testified on direct examination, for example, that Bill Ek, a consultant for defendants who visited the Horizon Marine facility in November 1997, was "amazed" at "how far [Horizon Marine] had come in such a short period of time." Mr. Oppegaard testified on cross-examination that Mr. Ek had advised him that Mr. Pepper was "the best product development person in the industry." Similarly, the jury heard testimony on the first day of Mr. Pepper's direct examination that Mr. Oppegaard was impressed and excited about what Mr. Pepper had been able to accomplish with Horizon Marine in a short period of time. In fact, Mr. Oppegaard, after meeting Mr. Pepper and visiting Horizon Marine for the first time, sent an internal memorandum *1261 to his executive team in which he described Mr. Pepper and the Horizon product as "a major competitor if left alone to grow." Mr. Oppegaard also testified on cross-examination that he anticipated that Horizon Marine would grow very fast.
From this evidence, a reasonable jury could infer that if defendants had allowed Mr. Pepper to direct the daily operations of Genmar Kansas, then Mr. Pepper would have been able to achieve the requisite gross profit margins to realize the earnout. See Harrington v. Hollingsworth, 1992 WL 91165, at *4 (Del.Super.Ct. Apr. 15, 1992) (in breach of contract case, lost income damages not speculative where commercial fisherman testified that had the defendant constructed his larger commercial fishing boat on time, he would have been able to catch more sea bass and double his annual income; fisherman's testimony was sufficient to establish damages with reasonable probability where his projections were based on bass fishing industry, an industry with which plaintiff was familiar and in which he had participated for 20 years).
Moreover, defendants attempted to demonstrate at trial-through both argument and the examination of witnessesthat plaintiffs' claim for damages based on the earn-out was unreasonable because it was uncertain whether the company would have been able to meet the requisite profit margins and revenues. Defendants' efforts in that regard apparently had some impact-the jury awarded only half of the total earn-out consideration. Presumably, then, the jury concluded that plaintiffs had not proved loss of the total earn-out amount with reasonable certainty. Finally, any doubt concerning the amount of damages sustained by plaintiffs is resolved against defendants. As the breaching party, defendants "should not be permitted to reap advantage from [their] own wrong by insisting on proof which by reason of [their] breach is unobtainable." See E. Allan Farnsworth, Contracts § 12.15 at 922 (2d ed.1990); accord Restatement (Second) of Contracts § 352 cmt. a (Any doubts in the proof of damages are resolved against the party in breach because "[a] party who has, by his breach, forced the injured party to seek compensation in damages should not be allowed to profit from his breach where it is established that a significant loss has occurred.").
In a related argument, defendants contend that they are entitled to judgment as a matter of law on plaintiffs' claim for damages because, under Delaware law, "a new business with no profit history cannot obtain lost profit damages." See Defs. Br. at 7. On its face, then, defendants' argument is premised on the idea that plaintiffs' damages for lost earn-out consideration is the equivalent of an award for damages based on lost profits. Given the nature of the earn-out consideration at issue in this case, however, it is simply not appropriate to subject plaintiffs' claim for damages to a traditional lost profits analysis.
To be sure, Genmar Kansas' profitability was an important component of the earn-out formula. However, unlike those cases in which one party seeks to recover lost profits when the issue of whether that party could reasonably expect such profits is in dispute, the parties here agreed at the outset of their relationship that it was reasonable for Mr. Pepper to expect an additional $5.2 million in earn-out consideration pursuant to a formula developed by defendants. Indeed, the parties agreed that the earn-out consideration was part of the total purchase price for the acquisition, an agreement that is reflected in Article 2 of the contract, which states that the "Cash Consideration and the Earn-Out Consideration described in Section 2.2 below are referred to in this Agreement in *1262 the aggregate as the `Purchase Price.'" See Trial Ex. 227a § 2.1. As Mr. Pepper explained on the second day of his direct examination, defendants initially proposed the earn-out consideration as "more of an incentive-type thing" separate and apart from the purchase price. However, after multiple discussions during which Mr. Pepper, Mr. Oppegaard and Mr. Cloutier all agreed that the earn-out was obtainable and that Mr. Pepper would be given the requisite autonomy to obtain the earn-out, defendants ultimately agreed to include the earn-out as part of the purchase price.
While both parties agreed at trial that the earn-out was not a "guarantee," ample evidence was presented that all parties believed there to be "reasonable probability" that Mr. Pepper would realize the full amount of the earn-out. Indeed, on his direct examination, Mr. Pepper testified that both Mr. Cloutier and Mr. Oppegaard assured him that the earn-out was obtainable. On his cross-examination, Mr. Pepper testified that he advised his investors in writing that "the management of Horizon believes there is a reasonable probability that ... the earn-out consideration will be achieved." Similarly, Mr. Cloutier testified on direct examination that he believed at the time of the transaction that Mr. Pepper had a "very realistic" opportunity to achieve the earn-out. Moreover, on cross-examination, Mr. Cloutier testified that he believed that the earn-out portion of the purchase agreement was achievable based in part on defendants' own internal projections.
In their papers, defendants now characterize their assurances and beliefs that the earn-out was obtainable as mere "pre-contractual guesswork" and contend that to permit plaintiffs to recover damages based on such guesswork without considering Genmar Kansas' "actual performance" is to provide plaintiffs with an "unwarranted windfall." This argument, however, ignores the significance of the jury's implicit finding: that Genmar Kansas' actual performance would have been different (indeed, it would have been profitable) had defendants performed their obligations under the purchase agreement consistent with plaintiffs' reasonable expectations. In other words, the jury apparently found that defendants' conduct, including undermining Mr. Pepper's managerial authority and requiring increased production of multiple models of Ranger boats, had the effect of rendering Mr. Pepper unable to perform as he had planned, unable to operate Genmar Kansas appropriately and ultimately unable to succeed in achieving any earn-out consideration. For these reasons, defendants' reliance on the actual performance of Genmar Kansas as a basis for judgment as a matter of law is misplaced.
In sum, the court rejects defendants' attempt to analyze plaintiffs' claim for damages as one for lost profits. The jury's award of $2.5 million is not speculative and is supported by evidence that Genmar Kansas would have been profitable and that the earn-out would have been obtainable if defendants had performed in good faith their obligations under the purchase agreement.
2. Remittitur
As an alternative to their argument that they are entitled to judgment as a matter of law on plaintiffs' claim for damages in the form of lost earn-out, defendants maintain that this court should enter a remittitur reducing the $2.5 million verdict to nominal damages of one dollar in light of the "utterly speculative nature" of the lost earn-out damages. Of course, the court has already concluded that the jury's award of $2.5 million was not speculative, so the motion for remittitur is denied. In any event, under Delaware law, the court may order a remittitur only if the verdict *1263 "is so grossly out of proportion as to shock the Court's conscience." See Gillenardo v. Connor Broadcasting Delaware Co., 2002 WL 991110, at *10 (Del.Super.Ct. Apr. 30, 2002) (citing Mills v. Telenczak, 345 A.2d 424, 426 (Del.1975)); see also Century 21 Real Estate Corp. v. Meraj Int'l Investment Corp., 315 F.3d 1271, 1281 (10th Cir.2003) (in assessing measure of damages awarded pursuant to contract containing choice of law provision, district court must follow chosen state's law, absent any argument that the choice of law provision is unenforceable, including that state's law concerning remittitur).
Again, the jury had before it sufficient evidence to conclude that plaintiffs would have realized a significant portion of the earn-out consideration had defendants performed in good faith their obligations under the contract. The $2.5 million verdict represents exactly half of the entire earn-out portion of the purchase agreement and exactly half of what the plaintiffs sought to recover on their breach of contract claim. The award is not excessive, it is not unreasonable, it does not shock the court's conscience and, thus, it will not be remitted. See id. at 1282-83 (affirming district court's refusal to remit $700,000 verdict on breach of contract claim, despite concerns about reliability of testimony concerning lost profits and "unrealistic" projections; district court reviewed award under "shock the conscience" standard).
3. New Trial
Defendants' final argument with respect to the jury's verdict on plaintiffs' breach of contract claim is that they are entitled to a new trial because the verdict is against the weight of the evidence and the result of passion and prejudice. Delaware law permits a district court to set aside a verdict and order a new trial only if "the evidence preponderates so heavily against the jury verdict that a reasonable jury could not have reached the result." See Gannett Co. v. Re, 496 A.2d 553, 558 (Del. 1985). For the reasons set forth above in connection with defendants' motion for judgment as a matter of law, the court concludes that evidence presented at trial was sufficient for the jury to have reached the result that it did. Similarly, for the reasons explained above, the court cannot conclude that the verdict is so clearly excessive as to indicate that it was the result of passion or prejudice. See Yankanwich v. Wharton, 460 A.2d 1326, 1332 (Del.1983) ("A verdict will not be disturbed as excessive unless it is so clearly so as to indicate that it was the result of passion, prejudice, partiality, or corruption; or that it was manifestly the result of disregard of the evidence or applicable rules of law."). The jury's verdict of $2.5 million on plaintiffs' breach of contract claim will stand.
C. The Jury's Verdicts in favor of Cassandra O'Tool and John O'Tool
The jury also found in favor of Cassandra O'Tool and John O'Tool on their claims that defendants breached the O'Tools' employment contracts. The jury awarded Ms. O'Tool the sum of $63,200 and Mr. O'Tool the sum of $20,313. Defendants assert that they are entitled to judgment as a matter of law on the O'Tools' claims for breach of their employment contracts or, in the alternative, that they are entitled to a remittitur reducing the damages awarded to the O'Tools. For the reasons explained below, defendants' motion is denied.
1. Judgment as a Matter of Law
At trial, Cassandra and John O'Tool argued that defendants breached the express terms of their respective employment agreements. Specifically, the O'Tools maintained that, pursuant to the express language of their employment agreements, defendants could not discharge Mr. or Ms. O'Tool prior to the end *1264 of an initial three-year employment period except in four narrow circumstances and that they were not discharged for any of those four reasons. In support of their argument, the O'Tools highlighted for the jury section 3 and section 7 of their employment agreements:
3. Term of Employment. This Agreement shall have a term of three (3) years, subject to earlier termination pursuant to the provisions of Section 7 hereof.
* * * * * *
7. Termination and Severance.
(a) This Agreement may be terminated prior to the end of the three (3) year term by Genmar Kansas for (i) cause, (ii) lack of adequate job performance as determined by Genmar Kansas' President and the President of Genmar Holdings, (iii) death of Employee, or (iv) disability of Employee.
(b) In the event Genmar Kansas terminates Employee's employment for any reason other than termination for cause, death or disability Employee shall be entitled to six (6) months of severance pay at the base salary Employee is earning on the date of such termination.
Defendants attempted to convince the jury, and now the court, that the O'Tools were terminated for "lack of adequate job performance" consistent with section 7 of their employment contracts. The jury clearly rejected defendants' argument and, in finding that defendants breached the O'Tools' employment contracts, concluded that the O'Tools were not terminated for inadequate job performance or any other reason set forth in section 7. Indeed, ample evidence was presented at trial to support the jury's conclusion. In that regard, the jury could have concluded (and presumably did conclude) that the O'Tools were terminated not because of any performance issues but because of their familial ties with Geoff Pepper, the key individual with whom defendants were attempting to sever their relationship. In other words, the jury could have easily concluded from the evidence presented at trial that defendants terminated Mr. and Mrs. O'Tool because defendants believed it would be awkward to retain the O'Tools after terminating Geoff Pepper.
Another possibility, equally supported by the evidence, is that the jury concluded that the O'Tools were terminated for inadequate job performance but that the assessment of their job performance was not, as required by section 7, "determined by Genmar Kansas' President and the President of Genmar Holdings." Specifically, the jury could have concluded that Mr. Pepper was still serving as the president of Genmar Kansas during the relevant time period and that Mr. Pepper had not determined that his daughter and son-in-law were performing inadequately. Moreover, the jury could have concluded from the evidence presented at trial that Mr. Oppegaard, the president of Genmar Holdings, had simply not made an assessment of the O'Tools' job performance. In fact, Mr. Oppegaard testified at trial that he had never discussed with Mr. Pepper the adequacy of the O'Tools' job performance and that he did not make the decision to terminate the O'Tools.
Defendants also reiterate their argument (made at the summary judgment stage, to the court at the close of plaintiffs' case and to the jury throughout the trial) that Section 12 of the O'Tools' employment agreements eviscerates any notion that the O'Tools were guaranteed employment for a three-year term.[4] Section 12 of the *1265 agreement, entitled "Miscellaneous," contains the following sentence: "This Agreement shall not give Employee any right to be employed for any specific time or otherwise limit Genmar Kansas' right to terminate Employee's employment at any time with or without cause." As the court noted in its summary judgment order, however, any ambiguity created when sections 3 and 7 are read together with section 12 was for the jury to resolve and defendants certainly are not entitled to judgment as a matter of law on the O'Tools' breach of contract claims based on the language of section 12. See Horizon Holdings, L.L.C. v. Genmar Holdings, Inc., 241 F.Supp.2d 1123, 1146 (D.Kan.2002). Moreover, the jury could have concluded that section 12, read literally, gives only Genmar Kansas the right to terminate an employee for any reason whatsoever and that, in contrast, Genmar Holdings and Genmar Industries are bound by the language of sections 3 and 7.
In sum, the court certainly cannot conclude as a matter of law that the O'Tools were terminated for lack of adequate job performance consistent with section 7 of their employment agreements or that the O'Tools were not guaranteed any specific term of employment. The record contains more than sufficient evidence upon which the jury could properly return a verdict for the O'Tools on their breach of contract claims.
2. Remittitur
In the alternative, defendants urge that the damages awarded by the jury to the O'Tools are excessive and against the weight of the evidence and, as a result, they ask the court to enter an order of remittitur reducing the awards. The court begins with defendants' arguments concerning the jury's award of $63,200 to Ms. O'Tool. According to defendants, Ms. O'Tool's lost wages for the relevant time period were only $52,000 and thus, the jury must have awarded Ms. O'Tool more than $11,000 in lost MIP earnings (a bonus pursuant to defendants' Management Incentive Program). Defendants urge that the $52,000 in lost wages must be reduced because the jury failed to deduct from this amount any wages that Ms. O'Tool could have earned if she had made reasonable efforts to obtain other employment.
Of course, the burden was on defendants to prove that Ms. O'Tool failed to mitigate her damages. See Leavenworth Plaza Assocs., L.P. v. L.A.G. Enterprises, 28 Kan.App.2d 269, 272, 16 P.3d 314 (2000) (citing Kelty v. Best Cabs, Inc., 206 Kan. 654, 659, 481 P.2d 980 (1971); Rockey v. Bacon, 205 Kan. 578, 583, 470 P.2d 804 (1970)).[5] Defendants spent very little time on this issue at trial. They presented no evidence regarding any specific jobs that might have been available to Ms. O'Tool and, in contrast, plaintiffs presented evidence reflecting that Ms. O'Tool did, in fact, attempt to find alternative employment but was unsuccessful. Ultimately, defendants simply failed to carry their burden on the mitigation issue.
Defendants further contend that the jury's calculation of Ms. O'Tool's lost MIP earnings was inaccurate. Consistent with the evidence presented by plaintiffs at trial, the jury apparently awarded Ms. O'Tool approximately $11,000 in lost MIP earnings, *1266 representing 20 percent of Ms. O'Tool's salary. Significantly, defendants do not contest that Ms. O'Tool's employment agreement provided that her MIP compensation would be 20 percent of her salary assuming that both Genmar Holdings and Genmar Kansas met their operating profit goals. Moreover, defendants do not contest that 20 percent of Ms. O'Tool's salary over the relevant 15-month period at issue (the time of her termination through the time when Ms. O'Tool's employment contract would have expired) would be roughly $11,000.[6] Rather, defendants urge that the jury incorrectly assumed that both Genmar Holdings and Genmar Kansas would have met their operating profit goals during the relevant time frame, an assumption that defendants characterize as "clearly erroneous" in light of the fact that Genmar Kansas never reached the operating profits necessary to generate MIP payments.
Similarly, defendants contend that the jury improperly calculated Mr. O'Tool's lost MIP earnings when it awarded him $20,313. In that regard, the jury's verdict represents only lost MIP earnings as it was undisputed that Mr. O'Tool earned more money in his subsequent job than he would have earned if he had stayed at Genmar Kansas. Defendants do not dispute that Mr. O'Tool's employment contract provided that his MIP compensation would be 25 percent of his salary (assuming that both Genmar Holdings and Genmar Kansas met their operating profit goals). Defendants also do not dispute that the jury's verdict of $20,313 represents almost to the penny 25 percent of Mr. O'Tool's annual salary of $65,000 over the course of 15 months.[7] Again, defendants maintain only that the jury incorrectly assumed (or wildly speculated) that both Genmar Holdings and Genmar Kansas would have met their operating profit goals during the relevant time frame and that, in fact, Genmar Kansas never met the requisite profit goals.
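The MIP figures the court discusses above can be verified with simple arithmetic. The sketch below reproduces them using only the numbers stated in the opinion (Mr. O'Tool's $65,000 annual salary and 25 percent MIP rate over 15 months; Ms. O'Tool's $63,200 award and $52,000 in lost wages); no other facts are assumed.

```python
# Arithmetic check of the MIP components described in the opinion.
# All inputs are figures stated in the text; nothing else is assumed.

MONTHS = 15  # termination through expiration of the 3-year employment term

# Mr. O'Tool: 25% MIP rate applied to a $65,000 annual salary over 15 months
mr_mip = 65_000 * 0.25 * MONTHS / 12
print(mr_mip)  # 20312.5 -- within a dollar of the $20,313 verdict

# Ms. O'Tool: the implied MIP component is the total award minus lost wages
ms_mip = 63_200 - 52_000
print(ms_mip)  # 11200 -- the "approximately $11,000" the court describes
```

This confirms the court's observation that the $20,313 verdict tracks "almost to the penny" a 25 percent MIP over 15 months, and that the balance of Ms. O'Tool's award above her $52,000 in lost wages is a little over $11,000.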
Of course, defendants had the opportunity to make this argument to the jury and did, in fact, make this argument to the jury. The jury, as it was entitled to do, rejected this argument and plainly adopted plaintiffs' theory, thoroughly developed at trial, that Genmar Kansas would have reached its operating profit goals but for defendants' breach of their obligations under the purchase agreement, including their duty of good faith and fair dealing. In short, the jury's award of $63,200 to Ms. O'Tool and $20,313 to Mr. O'Tool does not shock the conscience of this court and, thus, no remittitur will be issued. See Dougan v. Rossville Drainage Dist., 270 Kan. 468, 486, 15 P.3d 338 (2000) (court has the power to issue a remittitur where a verdict is so manifestly excessive that it shocks the conscience of the court); see also Century 21 Real Estate Corp. v. Meraj Int'l Investment Corp., 315 F.3d 1271, 1281 (10th Cir.2003) (in assessing measure of damages awarded pursuant to contract containing choice of law provision, district court must follow chosen state's law, absent any argument that the choice of law provision is unenforceable, including that state's law concerning remittitur).
*1267 D. Remaining Arguments in Support of New Trial
Finally, defendants assert that they are entitled to a new trial pursuant to Federal Rule of Civil Procedure 59(a) in light of two "substantial errors of law" committed by the court. Specifically, defendants contend that the court erred in admitting parol evidence of the parties' negotiations prior to the execution of the purchase agreement and that the court erred in its instruction to the jury regarding the appropriate standard for determining whether defendants breached the implied covenant of good faith and fair dealing. The court addresses each of these arguments in turn and, as explained below, rejects both arguments.
1. Admission of Parol Evidence
In their motion, defendants initially argue that the court erred when it admitted, over defendants' objection, parol evidence of the parties' negotiations to support plaintiffs' claim that they were fraudulently induced into executing the purchase agreement. Curiously, defendants concede (in the same paragraph) that the law permits such evidence to prove fraudulent inducement. What defendants are really arguing is that parol evidence is inadmissible to prove bad faith in a breach of contract claim and that the jury should not have been permitted to consider evidence of the parties' negotiations (and, more specifically, oral assurances made to plaintiffs by defendants prior to the execution of the agreement) in connection with plaintiffs' claim that defendants breached the implied duty of good faith and fair dealing.[8]
While defendants objected at trial to the admission of parol evidence concerning the parties' negotiations, they did not, once the court ruled that such evidence was clearly admissible with respect to plaintiffs' fraud claim, request a limiting instruction or even raise the issue of whether such evidence was admissible with respect to plaintiffs' breach of contract claim. In fact, defendants concede, as they must, that they failed to request a limiting instruction. Defendants, however, urge that parol evidence is a rule of substantive law that is not waived by the failure to object to its admission. See Carey v. Shellburne, Inc., 224 A.2d 400, 402 (Del.1966). While this is certainly true, there is nonetheless an evidentiary objection, relevance under Federal Rules of Evidence 401 and 402, that defendants should have made (and did not) if they desired to preclude the jury from considering such evidence with respect to plaintiffs' breach of contract claim. Because defendants failed to raise a timely objection to the admission of such evidence on that basis and request a limiting instruction, the court reviews the admission of the evidence under the "plain error" standard. See Fed.R.Evid. 103(d).
The court readily concludes that the admission of evidence concerning the parties' negotiations prior to executing the purchase agreement was not plain error. In fact, the point largely is moot because the court, even if defendants had brought the issue to the court's attention at trial, would have permitted the jury to consider such evidence in connection with plaintiffs' claim that defendants breached the implied covenant of good faith and fair dealing. In other words, the court would have overruled any objection that defendants might have made in this regard.
*1268 The parol evidence rule requires the court to exclude "extraneous evidence that varies or contradicts the terms of a unified written instrument." True North Composites, LLC v. Trinity Indus., Inc., 191 F.Supp.2d 484, 514 (D.Del.2002) (citation omitted). Because defendants have not shown (much less argued) that the evidence presented at trial concerning the parties' negotiations varied or contradicted the terms of the purchase agreement, such evidence simply does not require invocation of the parol evidence rule. Moreover, because the purchase agreement was silent with respect to the majority of the issues discussed by the parties prior to the execution of the agreement (e.g., the number of Ranger boats that Genmar Kansas would be expected to produce or whether Genmar Kansas would be expected to produce any sister-brand boats at all), evidence concerning the parties' pre-acquisition negotiations is entirely appropriate to provide context for plaintiffs' claim that defendants breached their duty of good faith and fair dealing. See id. at 514-15 (denying motion for new trial based on court's alleged error in admitting parol evidence of transaction underlying written agreement because evidence provided context to good-faith-and-fair-dealing claims and testimony did not vary or contradict the terms of the agreement).
In other words, evidence concerning what the parties discussed prior to executing the agreement, to the extent such evidence, as here, does not contradict the agreement, is entirely relevant to whether defendants breached the covenant of good faith and fair dealing because the parties' reasonable expectations at the time of the contract formation determine the reasonableness of the challenged conduct. See id. at 516 (evidence concerning course of dealings between the parties prior to execution of agreement was relevant to claim that party breached the covenant of good faith and fair dealing because such evidence illuminated the parties' expectations of each other at the time of contract formation).
To conclude, then, defendants have not shown that the parol evidence rule required exclusion, at least for purposes of plaintiffs' breach of contract claim, of evidence concerning the parties' negotiations prior to the execution of the purchase agreement. The court rejects defendants' contention that it erred by allowing the jury to consider such evidence.
2. The Good Faith and Fair Dealing Instruction
Defendants' final argument in support of their motion for a new trial is that the court erred in its instruction to the jury concerning the duty of good faith and fair dealing. In its instructions, the court explained the duty, under Delaware law, as follows:
[T]he law imposes a duty of good faith and fair dealing in every contract. This duty is a contract term implied by courts to prevent one party from unfairly taking advantage of the other party. This duty includes a requirement that a party avoid hindering or preventing the other party's performance. The implied covenant of good faith and fair dealing emphasizes faithfulness to an agreed common purpose and consistency with the justified expectations of the other party. The parties' reasonable expectations at the time of the contract formation determine the reasonableness of the challenged conduct. A violation of the implied covenant of good faith and fair dealing implicitly indicates bad faith conduct.
In determining whether defendants breached the implied covenant of good faith and fair dealing, you may consider whether it is clear from what was expressly agreed upon by the parties that *1269 the parties would have agreed to prohibit the conduct complained of as a breach of the agreement had they thought to negotiate with respect to that matter.
See Jury Instruction 12. The court's instruction, in large part, was based on an instruction given by another federal court applying Delaware law concerning the duty of good faith and fair dealing, True North Composites, LLC v. Trinity Indus., Inc., 191 F.Supp.2d 484 (D.Del.2002). In True North, the court, faced with a motion for a new trial based on alleged errors in the good faith and fair dealing instruction, reviewed its instruction and found it to be "consonant with Delaware law." Id. at 517-18. Specifically, the court noted that its instruction "tracks the language of § 205(a) of the Restatement (Second) of Contracts (1979), which has been used by Delaware courts to explain the duty of good faith." Id. at 518.[9] In short, the court readily concluded that its instruction on the duty of good faith and fair dealing was not in error. Id.
Defendants urge, as they did at the instruction conference, that any proper instruction on the duty of good faith and fair dealing under Delaware law must require a finding that the conduct at issue involved "fraud, deceit or misrepresentation." Defendants' proposed instruction, for example, contained the following sentence that the court expressly rejected: "To prove defendants breached the implied duty of good faith and fair dealing in the Purchase Agreement, plaintiffs must demonstrate that defendants engaged in conduct of fraud, deceit or misrepresentation." See Def. Proposed Instruction 5. This proffered language is derived from Corporate Property Associates 6 v. Hallwood Group Inc., 792 A.2d 993 (Del.Ch.2002), a trial court decision from the Court of Chancery in Delaware. In that case, a commercial dispute, the Vice Chancellor stated that a claimant seeking to prove a breach of the implied covenant of good faith and fair dealing "must also demonstrate that the conduct at issue involved `an aspect of fraud, deceit or misrepresentation.'" Id. at 1003. At the instruction conference, defendants relied solely on the Corporate Property case to support their proffered instruction. Indeed, defendants did not direct the court to any other Delaware case-much less a Delaware Supreme Court case or a federal case interpreting Delaware law-in which a court required a finding of fraud, deceit or misrepresentation to support a breach of the covenant of good faith and fair dealing in the context of a commercial transaction.
As the court explained at the conference, the trial court in Corporate Property cites only to Merrill v. Crothall-American, Inc., 606 A.2d 96, 101 (Del.1992) in support of the "fraud, deceit or misrepresentation" language. The Merrill case involved an employment-at-will contract and the court held that when the conduct of an employer in the employment-at-will context rises to the level of fraud, deceit or misrepresentation, then the employer will have violated the implied covenant of good faith and fair dealing. Id. Interestingly, the Merrill court, in turn, relies on two cases from two other state courts in support of its conclusion that an element of fraud, deceit or misrepresentation must be present before an employer violates the covenant of good faith and fair dealing. Id. Those cases, Magnan v. Anaconda Indus., Inc., 37 Conn.Supp. 38, 429 A.2d 492 (1980) and A. John Cohen Ins. v. Middlesex Ins. Co., 8 Mass.App.Ct. 178, 392 N.E.2d 862 (1979), *1270 both arise in the employment-at-will context.
In the limited and unique context of employment-at-will, requiring an employee to prove that his or her employer's conduct amounted to fraud in order to show a breach of the duty of good faith and fair dealing is entirely consistent with the notion of an at-will employment relationship. For in the absence of a showing of fraud, the covenant of good faith and fair dealing could not operate in the employment-at-will context without wholly defeating the benefit for which the parties bargained: the employer's ability to discharge the employee and the employee's ability to quit his or her employment for good reason, bad reason or no reason at all. Stated another way, parties to an at-will employment relationship are generally not subjected to any good faith standard.[10] On the other hand, in the context of a commercial transaction like the one presented here, the implied covenant of good faith and fair dealing, as it is typically applied (i.e., without a requirement of fraud), does not conflict with the benefit for which parties to a commercial transaction generally bargain. For these reasons, the court reiterates its belief that the trial court in Corporate Property incorrectly incorporated into the commercial context the "fraud, deceit or misrepresentation" language from the employment-at-will context of Merrill.[11]

Defendants, for the first time, now also cite to a Delaware Supreme Court case that they assert rejects the distinction that this court has drawn between the commercial context and employment-at-will context. Specifically, defendants rely on Cincinnati SMSA Limited Partnership v. Cincinnati Bell Cellular Systems Co., 708 A.2d 989 (Del.1998) and contend that in Cincinnati Bell the Delaware Supreme Court "made clear that the same standard applied by the Delaware court in Merrill should also be applied in the commercial contract context."
Defendants' characterization of the Cincinnati Bell case is simply inaccurate; in fact, that case supports this court's conclusion that any requirement that a party prove fraudulent conduct to demonstrate a violation of the duty of good faith and fair dealing is limited to the employment-at-will context.
In Cincinnati Bell, the Delaware Supreme Court reviewed a decision by the Court of Chancery dismissing, pursuant to Rule 12(b)(6), a good faith and fair dealing claim arising in the context of a limited partnership agreement. Id. at 990. Specifically, the Delaware Supreme Court affirmed the lower court's conclusion that the implied covenant of good faith and fair dealing could not provide a basis for implying additional noncompete obligations in a limited partnership agreement where the agreement's noncompete clause was unambiguous. Id. at 993-94. In so holding, the Cincinnati Bell court emphasized that "implying obligations based on the covenant of good faith and fair dealing is a cautious enterprise." Id. at 992.
*1271 Tracing the development of the implied covenant under Delaware law, the court in Cincinnati Bell noted that Merrill was the case in which the court "first recognized the limited application of the covenant to inducement representations in at-will employment contracts." Id. The Cincinnati Bell court further noted that in Merrill, the court "was careful to heed the legal right of employers to pursue a certain amount of self-interest in the creation of contractual relationships" and "held that, to plead properly a claim for breach of an implied covenant of good faith and fair dealing in the inducement of employment, a plaintiff must allege `an aspect of fraud, deceit or misrepresentation.'" Id. at 992-93 (quoting Merrill, 606 A.2d at 101-02). The court in the Cincinnati Bell case then stated, "[t]his Court should be no less cautious or exacting when asked to imply contractual obligations from the written text of a limited partnership agreement." Id. at 993. Defendants argue that this single sentence clearly illustrates an intent by the Delaware Supreme Court to incorporate the fraud standard of the employment-at-will context into the commercial transaction context. A full reading of Cincinnati Bell, however, indicates that the court was simply stressing the narrow scope of the implied covenant and that application of the covenant is a "cautious enterprise." Id. at 992-93. There is no indication in Cincinnati Bell that the court utilized the fraud standard of Merrill in resolving the appeal. In short, Cincinnati Bell in no way suggests that the jury in this case should have been instructed that plaintiffs were required to prove that defendants acted fraudulently in order to prove a breach of the implied covenant and, more importantly, the court believes that the Delaware Supreme Court, if faced with the issue, would refuse to adopt such a requirement.
Moreover, defendants' construction of Delaware law on good faith and fair dealing is illogical as it would render a good faith and fair dealing claim entirely duplicative of a fraud claim. In fact, defendants essentially contend that plaintiffs' good faith and fair dealing claim should be converted into one of fraud. Under defendants' theory, then, plaintiffs could not prevail on their good faith and fair dealing claim without also prevailing on their fraud claim. Any distinction, then, between the two claims would be lost. Such a result would be untenable, as the Delaware Supreme Court obviously recognizes a distinction between the two claims. See Desert Equities, Inc. v. Morgan Stanley Leveraged Equity Fund, II, L.P., 624 A.2d 1199, 1207-08 (Del.1993) (distinguishing claim of fraud from allegations of bad faith).
Finally, defendants contend that the court's instruction on the duty of good faith and fair dealing was erroneous because it failed to inform the jury that plaintiffs were required to show affirmative acts of bad faith on the part of defendants. The court's instruction advised the jury that a violation of the implied covenant of good faith and fair dealing "implicitly indicates bad faith conduct." While defendants may have preferred different language concerning bad faith, they have not identified how the court's instruction departs from or incompletely portrays Delaware law. Moreover, defendants have not demonstrated why plaintiffs' proof of a breach of the duty of good faith and fair dealing is inadequate without further proof of affirmative acts of bad faith conduct. The court, then, rejects defendants' argument that the instruction was erroneous. See True North, 191 F.Supp.2d at 517-18 (rejecting argument that instruction was erroneous because it failed to advise that the claimant must prove that the other party acted in bad faith where movant failed to show how the court's instruction was inconsistent with Delaware law).
*1272 11. Plaintiffs' Motion to Alter or Amend the Judgment
The judgment entered on November 21, 2002 states that plaintiffs Horizon and Mr. Pepper shall recover on their breach of contract claim "the sum of $2,500,000.00, with interest thereon at the rate of 1.46 percent per annum as provided by law." Plaintiffs move to alter or amend the judgment to reflect the parties' contractually agreed interest rate of 2 percent per month.[12] In that regard, the relevant section of the purchase agreement executed by the parties states as follows:
In the event that the Non-Defaulting Party is entitled to receive an amount of money by reason of the Defaulting Party's default hereunder, then, in addition to such amount of money, the Defaulting Party shall promptly pay to the Non-Defaulting Party a sum equal to interest on such amount of money accruing at the rate of 2% per month (but if such rate is not permitted under the laws of the State of Delaware, then at the highest rate which is permitted to be paid under the laws of the State of Delaware) during the period between the date such payment should have been made hereunder and the date of the actual payment thereof.
See Purchase Agreement, Section 13.2(b) (Trial Exhibit 227a). Defendants oppose plaintiffs' motion for three reasons. According to defendants, the contractual rate of interest specified in the purchase agreement is preempted by the standard rate contained in 28 U.S.C. § 1961; plaintiffs have waived their right to have the judgment accrue interest at the parties' contractually agreed rate; and the contractually agreed rate is not permitted under Delaware law. As set forth below, the court concludes that parties are free to contract for a rate other than that specified in 28 U.S.C. § 1961 and, thus, the federal statute does not supersede the parties' agreement. Nonetheless, because the court concludes that plaintiffs have waived their right to assert the rate set forth in the purchase agreement by not preserving their claim of entitlement to such rate in the pretrial order and by failing to raise the issue until after the entry of judgment, the court denies plaintiffs' motion to alter or amend the judgment to the extent plaintiffs seek to enforce the rate established in the purchase agreement.
A. Whether Section 1961 Supersedes the Contractually Agreed Rate
Defendants contend that 28 U.S.C. § 1961, the federal statute governing post-judgment interest, must govern the award of post-judgment interest in this case despite the parties' contractual agreement for a different rate. Section 1961 states, in relevant part, that "[i]nterest shall be allowed on any money judgment in a civil case recovered in district court" and that "[s]uch interest shall be calculated from the date of the entry of the judgment, at a rate equal to the coupon issue yield equivalent (as determined by the Secretary of the Treasury) of the average accepted auction price for the last auction of the fifty-two week United States Treasury bills settled immediately prior to the date of the judgment." 28 U.S.C. § 1961(a).
In support of their argument, defendants direct the court to Wilmington Trust Co. v. Aerovias de Mexico, S.A. de C.V., 893 F.Supp. 215, 220 (S.D.N.Y.1995), *1273 where the court calculated post-judgment interest at the section 1961 rate despite a contractual agreement providing for a higher rate. In that case, the district court simply stated that the language of section 1961(a) is mandatory and must govern the interest rate on any judgment debt:
The language of [section 1961(a) ] is mandatory: once a claim is reduced to judgment, the original claim is extinguished, and a new claim, called a judgment debt, arises. Section 1961(a) governs the interest rate on this judgment debt. Carte Blanche (Singapore) v. Carte Blanche (Int.), 888 F.2d 260 (2d Cir.1989), citing Kotsopoulos v. Asturia Shipping Co., 467 F.2d 91 (2d Cir.1972).
Id. at 220-21. The Wilmington case, however, is not entirely helpful for purposes of this court's analysis of whether parties can contract for a rate of interest different from the rate set forth in section 1961(a). In that regard, the district court in Wilmington did not expressly address whether the parties could contract around the federal statute. Rather, the court seemed to assume that the parties would not be permitted to do so under Second Circuit precedent. However, Carte Blanche and Kotsopoulos, the Second Circuit cases upon which the Wilmington court relies, do not stand for the proposition that parties cannot contract for a different rate of interest. In Kotsopoulos, a maritime case, the issue before the Second Circuit was only whether state law or federal law would determine the appropriate rate of post-judgment interest in admiralty and maritime cases. See 467 F.2d at 94-95. Similarly, the Second Circuit in Carte Blanche did not address whether parties to a contract could provide for a rate different than the standard rate set forth in section 1961(a). There, the Circuit held that an arbitrator could not impose a postjudgment interest rate different than the rate established in section 1961(a). See 888 F.2d at 268-69 (district court judgment affirming an arbitration award is governed by section 1961(a) rather than rate set forth in arbitration award).
Plaintiffs, on the other hand, urge that nearly every Circuit Court of Appeals to have addressed this issue has concluded that the parties can agree to an interest rate other than the standard one contained in 28 U.S.C. § 1961. For example, the Seventh Circuit in Central States, Southeast & Southwest Areas Pension Fund v. Bomar National, Inc., 253 F.3d 1011 (7th Cir.2001), affirmed a district court's award of post-judgment interest pursuant to the rate agreed upon in a pension trust agreement rather than the standard rate contained in section 1961(a). In so doing, the Seventh Circuit stated that "[i]t is well established that parties can agree to an interest rate other than the standard one contained in 28 U.S.C. § 1961." Id. at 1020. In support of its statement, the Seventh Circuit cites to the Fifth Circuit's decision in Hymel v. UNC, Inc., 994 F.2d 260, 265 (5th Cir.1993).
In Hymel, the Fifth Circuit "noted" that the district court was correct when it awarded post-judgment interest at a rate of 9 percent per annum pursuant to express language contained in a promissory note executed by the parties. Id. at 265-66. The Circuit summarily rejected the argument that section 1961 applies in every case without exception and, in doing so, cited to another Fifth Circuit case, In re Lift & Equipment Service, Inc., 816 F.2d 1013 (5th Cir.1987). See id. In In re Lift, a case arising out of the bankruptcy court, the parties disputed whether the creditor was entitled to post-judgment interest under Louisiana law or under section 1961(a). 816 F.2d at 1018. The Fifth Circuit, however, rejected both arguments and, embracing a view that none of the parties had espoused, applied the interest *1274 rate set forth in the written assignment of accounts receivable. Id. In so doing, the Circuit stated, "While 28 U.S.C. § 1961 provides a standard rate of post-judgment interest, the parties are free to stipulate a different rate, consistent with state usury and other applicable laws." Id.
While the Fifth Circuit in In re Lift offered no explanation for its conclusion, it cited to a Ninth Circuit decision, Investment Service Co. v. Allied Equities Corp., 519 F.2d 508 (9th Cir.1975). In that case, the district court judge applied the interest rate agreed upon by the parties in a promissory note. Id. at 511. The guarantor of the loan argued that the assignee of the note was only entitled to the legal rate of interest under Oregon state law. See id. The Ninth Circuit rejected the argument:
It is true that the contractual duty here is discharged by merger once the judgment is entered on the note. Restatement of Contracts § 444. However, upon entry of the judgment the legal rate of interest applicable should apply unless the parties have agreed in the note that some other rate of interest shall apply. Corbin on Contracts § 1045 (1962).
Id. The court's reliance on Corbin, however, is somewhat puzzling in that Corbin does not purport to draw any conclusion about the effect of a judgment on the parties' contractual agreement to a different rate and it does not address a contractual agreement for post-judgment interest; rather, the section cited by the Ninth Circuit deals only with the payment of interest as "agreed compensation" for a breach of the contract. See Arthur Linton Corbin, Corbin on Contracts § 1045 (Interim ed. 2002) (expressly stating that section 1045 addresses neither a contract right to interest nor statutory rights thereto, but only interest recoverable as compensatory damages for a breach of contract). In any event, the court ultimately applied Oregon's legal-rate-of-interest statute, which specifically provides that parties to a contract can agree to a higher rate of interest provided that such rate does not exceed the maximum rate allowed by law. See id.
The court concedes at the outset that the cases relied upon by plaintiffs, to the extent those cases purport to stand for a well-recognized rule that parties are free to contract for an interest rate other than the rate established in section 1961(a), are problematic in certain respects. In large part, the cases offer very little analysis as to why parties would be able to contract around the seemingly mandatory language of section 1961(a). Moreover, in several of the cases, the precise issue was not one that the court had to decide and, thus, any conclusions about the issue would be mere dicta. Nonetheless, it is clear that the Seventh, Fifth and Ninth Circuits consider it beyond dispute that parties are free to contract for whatever post-judgment interest rate they choose. In addition, the Fourth Circuit, albeit in an unpublished decision, expressly adopted the Fifth Circuit's Hymel decision in affirming a district court's award of post-judgment interest at a rate set forth in a stock redemption agreement as opposed to the rate set forth in section 1961(a). See Carolina Pizza Huts, Inc. v. Woodward, 67 F.3d 294, 1995 WL 572902, at *3 (4th Cir.1995). Moreover, at least one district court has declined to award post-judgment interest at the section 1961(a) rate where the parties stipulated to the entry of a judgment which provided for interest at a higher rate. See In re Connaught Properties, Inc., 176 B.R. 678, 684-85 (Bankr.D.Conn.1995).
In the end, the court is called upon to resolve a difficult legal issue on which the Tenth Circuit has not been called to opine, an issue that is rendered that much more difficult in light of the dearth of on-point *1275 analysis by other courts. After carefully weighing both sides of the issue, the court ultimately believes that the Tenth Circuit would likely concur with those Circuits that have held that parties should be and are able to contract for a rate other than the rate set forth in section 1961(a). While section 1961 without a doubt uses mandatory language, the court concludes that Congress intended it to be mandatory in the sense that a district court or other third party (e.g., an arbitrator) has no discretion to award a different rate of interest or to decline to award post-judgment interest. See, e.g., Bell, Boyd & Lloyd v. Tapy, 896 F.2d 1101, 1104 (7th Cir.1990) (section 1961(a) allows the judge no discretion to deny the interest authorized by that section); Carte Blanche, 888 F.2d at 269 (the language of section 1961 is mandatory and its terms do not permit the exercise of judicial discretion in its application). The court, however, can discern no sound reason why Congress would have intended that parties themselves could not agree to a different rate. Thus, the court rejects defendants' contention that section 1961(a) supersedes the rate agreed upon by the parties in the purchase agreement.
B. Whether Plaintiffs Waived the Right to Assert the Contractually Agreed Rate
Defendants also oppose plaintiffs' motion to alter or amend on the grounds that plaintiffs waived the right to assert the 2% per month rate by failing to include that rate in the pretrial order. Plaintiffs concede that they did not articulate in the pretrial order their claim of entitlement to a higher rate of post-judgment interest. Nonetheless, plaintiffs contend that no such claim needed to be asserted in the pretrial order. As explained below, the court disagrees with plaintiffs on this point.
In their papers, plaintiffs rely to a large extent on the legal principles that an award of post-judgment interest is mandatory, see Bancamerica Commercial Corp. v. Mosher Steel of Kansas, Inc., 103 F.3d 80, 81 (10th Cir.1996), and, as such, must be made regardless of what was demanded in the complaint or stated in the pretrial order. See Bell, Boyd & Lloyd v. Tapy, 896 F.2d 1101, 1104 (7th Cir.1990); 10 Charles Alan Wright, Arthur R. Miller & Mary Kay Kane, Federal Practice and Procedure § 2664 at 186-87 (1998). However, the issue is not whether plaintiffs were required to request post-judgment interest in the pretrial order to receive an award of post-judgment interest. The law is clear (and defendants do not dispute) that plaintiffs are entitled to post-judgment interest, at least at the rate established in 28 U.S.C. § 1961(a), despite their failure to request such an award in the pretrial order. The issue as this court sees it is whether plaintiffs are entitled to an award of post-judgment interest at the higher rate of interest specified in the purchase agreement when no such request was made in the pretrial order.
It is axiomatic that a Rule 59(e) motion cannot be used to raise a new issue that could have been raised prior to judgment. See Steele v. Young, 11 F.3d 1518, 1520 n. 1 (10th Cir.1993); 11 Charles Alan Wright, Arthur R. Miller & Mary Kay Kane, Federal Practice and Procedure § 2810.1 (2d ed.1995). In other words, Rule 59(e) is "aimed at reconsideration, not initial consideration" and, thus, a party may not rely on Rule 59(e) to raise an argument which could, and should, have been made before judgment issued. United States ex rel. Noyes v. Kimberly Constr., Inc., 43 Fed.Appx. 283, 286-87 (10th Cir.2002) (emphasis in original). Despite plaintiffs' insistence that they did not need to raise the issue prior to judgment, *1276 it is beyond dispute that plaintiffs could have raised the issue prior to judgment. Unlike an award of postjudgment interest pursuant to 28 U.S.C. § 1961, the award sought by plaintiffs here was not necessarily a "given." In that regard, while defendants assert only legal arguments in opposition to plaintiffs' claim of entitlement to the higher rate of interest, it is possible that defendants could have sought to raise factual arguments in opposition to the claim. For example, defendants could have asserted that section 13.2(b) was altered by plaintiffs after the contract was signed.[13] Had defendants so asserted, then they would have been entitled to have the jury resolve that dispute. Because a court is not permitted to give relief under Rule 59(e) "if this would defeat a party's right to jury trial on an issue," see Wright, Miller & Kane, supra, § 2810.1, the fact that a party in defendants' position might have had fact-based defenses available renders plaintiffs' request for an award of postjudgment interest pursuant to the purchase agreement the type of request that cannot be raised for the first time pursuant to Rule 59(e).
According to plaintiffs, defendants were nonetheless on notice that plaintiffs would assert a claim of entitlement to an award of postjudgment interest at the higher rate because defendants executed the purchase agreement and are charged with knowledge of the contents of that agreement. The court finds this argument disingenuous as it is clear that plaintiffs themselves did not remember (or perhaps even recognize) that the purchase agreement provided for a higher rate of interest until very late in the litigation process. Indeed, section 13.2(b) provides not only for postjudgment interest but also for prejudgment interest, a remedy that plaintiffs failed to request at any time during the course of the litigation (and a remedy that plaintiffs acknowledge they cannot now seek). Plaintiffs' failure in that regard demonstrates to the court that they were not aware of or did not remember the contents of section 13.2(b). Moreover, while section 13.2(a) provides for a prevailing party to recover reasonable attorneys fees, plaintiffs did not assert a claim for fees in the pretrial order. This also demonstrates to the court the likelihood that plaintiffs had not considered the contents of section 13.2 in connection with this case at any time prior to entry of the pretrial order. Only after defendants asserted in the pretrial order a right to recover fees did plaintiffs scour the purchase agreement looking for the source of defendants' claim. At that point, after the entry of the pretrial order, plaintiffs moved to amend the pretrial order to assert a claim for fees. The court granted that motion because defendants, who had asserted a claim for the recovery of fees pursuant to the purchase agreement, were not prejudiced by the addition of that claim in that they clearly had knowledge of that portion of the contract and they had not demonstrated that plaintiffs' right to recover fees would affect the trial of the case in any way.
The court concludes that defendants were entitled to notice from plaintiffs, prior to trial and, hopefully, at least by the date of entry of the final pretrial order, that plaintiffs intended to seek postjudgment interest at the contractual rate. Such notice would have enabled defendants to ascertain *1277 whether they had any good faith factual arguments to raise in the face of section 13.2(b), factual arguments that could have been presented to the jury. Moreover, such notice would have permitted defendants to assess fully the risk of bringing this case to trial. More specifically, defendants would have been able to ascertain the total potential exposure that they might face if the jury, as it did, returned a verdict in favor of plaintiffs. Indeed, the interest rate set forth in the contract (2 percent per month) would expose defendants to an additional $600,000 per year in indebtedness to plaintiffs on a verdict of $2.5 million, assuming the jury's verdict is upheld on appeal. In short, the court believes that defendants were entitled to actual notice that plaintiffs' recovery might encompass this significant amount.
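The $600,000-per-year exposure figure follows from simple arithmetic. As an illustrative sketch only (the $2.5 million verdict and the 2 percent monthly rate are taken from the opinion; simple, non-compounding interest is assumed):

```python
# Illustrative check of the annual exposure figure cited in the opinion.
# Assumes simple (non-compounding) interest; the 2% per month rate and
# the $2.5 million verdict come from the text above.

verdict = 2_500_000      # jury verdict, in dollars
monthly_rate = 0.02      # contractual rate: 2 percent per month

annual_interest = round(verdict * monthly_rate * 12, 2)
print(annual_interest)   # 600000.0 in additional indebtedness per year
```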
In sum, plaintiffs' motion to alter or amend the judgment is denied to the extent plaintiffs seek an award of post-judgment interest pursuant to the interest rate set forth in the parties' purchase agreement.
C. Whether Delaware Law Prohibits Application of the Contractually Agreed Rate
Because the court denies plaintiffs' motion on the grounds that plaintiffs waived their right to assert the higher interest rate found in the purchase agreement, the court need not address defendants' argument that the higher rate is not permitted under Delaware law. Nonetheless, in the interest of judicial economy in the event the parties appeal this court's decision to the Tenth Circuit, the court notes, without elaborating in full detail, that it would conclude that the higher rate established in the contract is permissible under Delaware law.
The Delaware law governing post-judgment interest is codified at section 2301 of Title 6 of the Delaware Code and states, in relevant part, as follows:
Any lender may charge and collect from a borrower interest at any rate agreed upon in writing not in excess of 5% over the Federal Reserve discount rate including any surcharge thereon, and judgments entered after May 13, 1980, shall bear interest at the rate in the contract sued upon. Where there is no expressed contract rate, the legal rate of interest shall be 5% over the Federal Reserve discount rate including any surcharge as of the time from which interest is due; provided, that where the time from which interest is due predates April 18, 1980, the legal rate shall remain as it was at such time.
Id. § 2301(a). The court agrees with defendants that section 2301(a) clearly provides that no interest rate can exceed 5% over the federal discount rate and rejects plaintiffs' argument that because the judgment in this case was entered after May 13, 1980, section 2301(a) permits interest to accrue at a contractually agreed rate.
However, as plaintiffs highlight in their papers, section 2301(c) expressly provides that there is "no limitation on the rate of interest which may be legally charged for the loan or use of money, where the amount of money loaned or used exceeds $100,000, and where repayment thereof is not secured by a mortgage against the principal residence of any borrower." While defendants urge that this provision does not apply because it is limited to the context of an unsecured loan between a lender and a borrower, section 2301(a) on its face would also appear to apply only to lenders and borrowers. Thus, if subsection (a) applies to the purchase agreement (as defendants urge that it does), then subsection (c) would have to apply as well. In any event, defendants are precluded under Delaware law from challenging the *1278 contractual rate as usurious. See Del. Code tit. 6, § 2306 ("No corporation ... or limited liability company ... shall interpose the defense of usury in any action.").
For these reasons, the court would conclude that the rate of interest agreed upon by the parties in the purchase agreement is not prohibited by Delaware law.
III. Plaintiffs' Motion for Attorneys' Fees, Costs and Expenses
The purchase agreement executed by the parties provides that the prevailing party shall be entitled to recover from the defaulting party all costs and expenses, including reasonable attorneys' fees, incurred in connection with enforcing the terms of the purchase agreement. See Purchase Agreement, Section 13.2(a) (Trial Exhibit 227a). Pursuant to this provision of the contract, and having prevailed on their breach of contract claim, plaintiffs Horizon and Mr. Pepper seek attorneys' fees and expenses totaling $846,740.35.[14] As set forth below, with the exception of a few minor adjustments, the court grants plaintiffs' motion.[15]
The parties have stipulated to the reasonableness of all billing rates and, thus, the court need not address that issue. To the extent defendants do oppose plaintiffs' fee request, that opposition is both exceedingly narrow and easily resolved. Defendants assert that plaintiffs' request is simply too exorbitant because of the "limited success" achieved by plaintiffs at trial. To be clear, defendants have not articulated any objections to any specific portion of the fee request or plaintiffs' billing records and they do not contest any specific time entries. Instead, defendants assert only a general objection to the fee request as unreasonable. Indeed, in the face of a request for nearly $850,000 in fees and expenses, defendants have submitted a brief that is less than 9 pages in length.
Defendants suggest in their papers that they are relieved of the burden of objecting to specific portions of plaintiffs' fee request because, according to defendants, plaintiffs have failed to meet their burden of showing that the request is reasonable. The court disagrees. To meet their burden of proving the number of hours reasonably spent on the litigation, plaintiffs "must submit meticulous, contemporaneous time records that reveal, for each lawyer for whom fees are sought, all hours for which compensation is requested and how those hours were allotted to specific tasks." United Phosphorus, Ltd. v. *1279 Midland Fumigant, Inc., 205 F.3d 1219, 1233 (10th Cir.2000) (citing Case v. Unified Sch. Dist. No. 233, 157 F.3d 1243, 1249-50 (10th Cir.1998)). The district court, then, may reduce the number of hours when the time records provided to the court are inadequate. Id. at 1233-34. The court has reviewed the billing records submitted by plaintiffs and those records are more than adequate to meet plaintiffs' burden.
Defendants also invite the court to dissect plaintiffs' billing records in an effort to determine or "approximate" those fees that are attributable to the breach of contract claim and those fees that are attributable to the unsuccessful claims. The court, however, is not obligated to comb the record to ferret out deficiencies in plaintiffs' submission. It is defendants' obligation to direct the court to such deficiencies if they believe such deficiencies exist. See Public Serv. Co. of Colorado v. Continental Casualty Co., 26 F.3d 1508, 1521 (10th Cir.1994) ("We do not feel that the trial judge was obligated to comb the evidence before him-consisting of voluminous attorney billing records-to ferret out gaps or inconsistencies in the evidence presented on the fees."); see also United States ex rel. C.J.C., Inc. v. Western States Mechanical Contractors, Inc., 834 F.2d 1533, 1549 (10th Cir.1987) ("[T]he trial court is not responsible for independently calculating a `reasonable' fee."). Nonetheless, the court has reviewed the billing records and, in large part, concludes that plaintiffs' fee request is a reasonable one. The court will, however, deduct from plaintiffs' request fees of $67.50 for work performed by attorney Norman Siegel on April 15, 2002 and fees of $585.00 for work performed by attorney Amy Baumann on August 14, 2002. It is apparent from plaintiffs' papers that they intended to deduct these fees from their request (and to request fees for attorney time only to the extent work was done by the two primary lawyers involved in the case-George Hanson and Todd McGuire) but, presumably by oversight, neglected to do so. Similarly, the court will deduct fees of $3,195.00 incurred during July 2002 in connection with plaintiffs' motion to compel discovery.
Again, plaintiffs' papers indicate that they intended to deduct these fees from their request, having already recovered this sum from defendants by virtue of this court's July 25, 2002 order, but the billing records indicate that this deduction was not, in fact, made.
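The net effect of the three deductions described above is straightforward arithmetic. As a sketch only (the $846,740.35 request and the three deduction amounts are taken from the opinion; the resulting net figure is computed here and is not stated by the court):

```python
# Net fee award implied by the deductions described in the opinion.
# The request total and the three deductions come from the text above;
# the net figure below is an illustrative computation, not a court finding.

requested = 846_740.35
deductions = [67.50, 585.00, 3_195.00]  # Siegel entry, Baumann entry, motion-to-compel fees

net_award = round(requested - sum(deductions), 2)
print(net_award)  # 842892.85
```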
To reiterate, then, aside from these minor deductions, the court has reviewed the billing records and, in the absence of any specific objection to plaintiffs' request and in the absence of any evidence that the hours claimed by plaintiffs are unreasonable, concludes that plaintiffs' fee request is a reasonable one. See Robinson v. City of Edmond, 160 F.3d 1275, 1279, 1285-86 (10th Cir.1998) (plaintiffs requested $186,000 in fees and defendants generally objected to this request as unreasonable but specifically articulated objections to only $43,000 of the request, leaving $142,000 in requested attorney's fees "not separately contested;" district court abused its discretion in reducing fee award in part because the end result was a fee award that was below the "unrebutted," "unchallenged," and "uncontested" amount of the fee request); Sheets v. Salt Lake County, 45 F.3d 1383 (10th Cir.1995) (affirming trial court's fee award in part because defendants failed to proffer any evidence that the hours claimed were unreasonable and, instead, simply made unsubstantiated allegations that the fees were duplicative and exorbitant in nature).
Defendants' general objection to plaintiffs' request is that the request is simply unreasonable in light of plaintiffs' "limited success"-plaintiffs prevailed only on their "relatively simple" contract claim. In the context of this litigation, however, a verdict *1280 of $2.5 million is a substantial victory for plaintiffs and there was nothing "simple" about the contract claim. Rather, the case presented complex commercial issues and plaintiffs' counsel successfully developed those issues at trial. Indeed, Mr. Pepper and Horizon's breach of contract claim-the claim on which plaintiffs ultimately succeeded-encompassed a claim that defendants had breached the implied covenant of good faith and fair dealing, a claim that is often difficult for judges and lawyers to comprehend let alone lay persons on a jury. To prove plaintiffs' claim at trial, plaintiffs' counsel could not rely on an express term of the contract and could not point to one specific act that constituted defendants' breach. Instead, counsel was required to convey to the jury that defendants' entire course of conduct (conduct that spanned over 18 months) breached an "implied" duty to act in "good faith." Despite the sheer volume of evidence needed to describe and place in context defendants' course of conduct, coupled with the need to fit that evidence into amorphous concepts like "good faith" and "implied duty," plaintiffs' counsel achieved a multimillion dollar verdict for his clients. For these reasons, the court readily concludes (and defendants cannot seriously dispute) that plaintiffs obtained excellent results at trial. See Hampton v. Dillard Dep't Stores, Inc., 247 F.3d 1091, 1120 (10th Cir.2001) (proper focus is on the overall relief obtained). No blanket reduction is warranted and plaintiffs' counsel is deserving of a fully compensatory fee. See Hensley v. Eckerhart, 461 U.S. 424, 433-35, 103 S.Ct. 1933, 76 L.Ed.2d 40 (1983).
In a related vein, defendants contend that plaintiffs are only permitted to recover those reasonable fees and expenses incurred in connection with the pursuit of their contract claim. Defendants contend that plaintiffs are improperly attempting to recover fees and expenses associated with the numerous claims on which plaintiffs did not prevail at trial and that the time and labor required to present evidence to the jury that defendants breached the purchase agreement was "only a small part of that actually expended by plaintiffs' counsel." The court rejects this argument, too. As an initial matter, plaintiffs' papers demonstrate that plaintiffs' counsel have already excluded from their request those hours associated with discrete research and other work related to plaintiffs' statutory discrimination claims, including hours spent working with plaintiffs' expert witness concerning plaintiffs' potential damages under Title VII. See Robinson, 160 F.3d at 1281 (prevailing party must make a good faith effort to exclude from request those hours that are excessive, redundant or otherwise unnecessary).
In any event, in light of the fact that most, if not all, of the unsuccessful claims were intertwined with the successful breach of contract claim through a common core of fact or related legal theories, any reduction of fees would be inappropriate. See id. at 1283 (reversing district court's reduction of fee award on the grounds that plaintiffs achieved only partial success where all unsuccessful claims were intertwined with the successful claims). The law is clear that when a lawsuit consists of related claims, a plaintiff who has won substantial relief should not have his attorney's fee reduced simply because the court or jury did not adopt each contention raised. See Hampton, 247 F.3d at 1120 (citing Jane L. v. Bangerter, 61 F.3d 1505, 1512 (10th Cir.1995)) (affirming district court's refusal to reduce fee award based on alleged limited success; all of the claims were similar and stemmed from the same set of facts). Indeed, the Supreme Court has cautioned that a court should exclude an unsuccessful claim from a fee award only if that claim is "distinct in *1281 all respects" from the successful claim. See Hensley, 461 U.S. at 440, 103 S.Ct. 1933.
Utilizing this standard (a standard that defendants do not even reference in their papers), the court simply cannot conclude that any of plaintiffs' unsuccessful claims are unrelated to the pursuit of the ultimate result achieved. Indeed, any attempt to divide the hours expended in this case on a claim-by-claim basis would be difficult and unjust. Nearly all of the claims pursued by plaintiffs-particularly plaintiffs' fraud and breach of contract claims-centered on the same core of facts. Any investigation or development of the fraud claim would necessarily have encompassed plaintiffs' breach of contract claim (and vice versa) as both claims required careful scrutiny of the parties' pre-contractual negotiations and the parties' conduct throughout the course of the contractual relationship. Thus, it is not surprising to this court that the billing records of plaintiffs' counsel, in large part, do not distinguish between claims. See id. at 435, 103 S.Ct. 1933 ("Much of counsel's time will be devoted generally to the litigation as a whole, making it difficult to divide the hours expended on a claim-by-claim basis."). Moreover, the Tenth Circuit has emphasized the importance of allowing litigants the "breathing room" necessary to raise alternative legal grounds that seek the same result and, thus, focusing on the actual result of the trial rather than dividing attorneys' fees by the number of successful claims. See Robinson, 160 F.3d at 1283.
For the foregoing reasons, the court rejects defendants' contention that a blanket reduction of fees is warranted and, with the exception of the minor adjustments noted above, grants plaintiffs' motion for fees and costs and expenses.
IT IS THEREFORE ORDERED BY THE COURT THAT plaintiffs' motion to alter or amend the judgment (doc. # 197) is granted in part and denied in part. Specifically, the motion is granted to the extent that a typographical error in the judgment shall be corrected and is otherwise denied; plaintiffs' motion for attorneys' fees, costs and expenses (doc. # 198) is granted in part and denied in part and the court awards plaintiffs fees, costs and expenses in the amount of $842,892.85; and defendants' renewed motion for judgment as a matter of law pursuant to Rule 50(b) or, in the alternative, motion for remittitur and/or new trial pursuant to Rule 59 (doc. # 199) is denied.
IT IS FURTHER ORDERED BY THE COURT THAT the clerk of the court shall amend the judgment to reflect this court's award of $842,892.85 in attorneys' fees, costs and expenses. The amended judgment should also be corrected to reflect that the jury returned a verdict on November 21, 2002 as opposed to November 12, 2002.
IT IS SO ORDERED.
NOTES
[1] The parties do not dispute that Delaware law governs plaintiffs' claim that defendants breached the terms of the purchase agreement, as that agreement contains an express choice-of-law provision.
[2] The undisputed evidence at trial was that "standard cost" was the amount that it actually cost Genmar Kansas to build the boat in terms of labor, material and overhead. In other words, Genmar Kansas was not making any profit on Ranger or Crestliner boats and, in most instances, was actually losing money on these boats because Genmar Kansas was not operating at maximum efficiency. Profits on these boats that were built on the production line in the Genmar Kansas facility were earned by Ranger and Crestliner when they in turn sold the boats to their dealer network.
[3] Lowe is another aluminum boat manufacturing company. Mr. Pepper worked for Lowe for nearly ten years; ultimately Lowe was purchased by defendants.
[4] In their papers, defendants also assert that section 4 of the employment agreements supports their argument that the O'Tools were not guaranteed a specific term of employment. Defendants, however, have not mentioned section 4 at any time prior to filing their renewed motion and certainly did not highlight this section for the jury.
[5] The parties do not dispute that Kansas law governs the O'Tools' breach of contract claims as the O'Tools' employment contracts contained a provision identifying Kansas law as the parties' choice of law.
[6] In their papers, defendants assert that 20 percent of Ms. O'Tool's salary is only $8320. That figure, however, is based on Ms. O'Tool's annual salary of $41,600 instead of the total salary that Ms. O'Tool would have earned over the relevant 15-month period.
[7] When Mr. O'Tool's annual salary is translated into a monthly salary, and that monthly salary is multiplied by 15 months (measured from the time of Mr. O'Tool's discharge through the time when Mr. O'Tool's employment contract would have expired), his total lost salary is $81,249.90 (65,000/12 = $5,416.66 per month x 15). Twenty-five percent of $81,249.90 is $20,312.47.
[8] This argument presupposes that the jury considered such evidence in connection with plaintiffs' breach of contract claim. Defendants, of course, have no way of knowing that the jury did, in fact, consider such evidence in its assessment of the breach of contract claim.
[9] While the court in True North referenced § 205(a) of the Restatement (Second), that Restatement does not contain a § 205(a); the court intended to reference comment a of § 205.
[10] For this reason, many states, including Kansas, have held that there is simply no implied covenant of good faith and fair dealing in the employment-at-will context. See, e.g., St. Catherine Hosp. of Garden City v. Rodriguez, 25 Kan.App.2d 763, 765, 971 P.2d 754 (1998) (Kansas does not recognize any good faith obligation in the employment-at-will context) (citing cases).
[11] It may be that the court in Corporate Property was simply using the fraud language as a short-hand for the concept of bad faith. The point, however, is that the court fails to explain why it is utilizing that language and fails to provide any insight into the significance, if any, of that language, such as whether a party bringing a good faith and fair dealing claim would be held to proving the elements of fraud (e.g., false representation, scienter and reliance) in order to prevail.
[12] In their motion to alter or amend, plaintiffs also point out that the judgment entered on November 21, 2002 contains a typographical error in that the judgment states that the verdict was returned by the jury on November 12, 2002. The jury, however, returned its verdict on November 21, 2002. The judgment will be corrected, and plaintiffs' motion will be granted, in this respect.
[13] No one, of course, is suggesting that plaintiffs did so; the court is simply posing a hypothetical for illustrative purposes to demonstrate that there might have been fact-based defenses available to defendants had the issue been raised by plaintiffs. Thus, because plaintiffs were not necessarily automatically entitled to the higher rate, the court rejects plaintiffs' contention that Federal Rule of Civil Procedure 54(c) requires an award of post-judgment interest at the higher rate irrespective of the contents of the pretrial order.
[14] Plaintiffs' fee request covers the time period ending December 31, 2002. To the extent plaintiffs intend to recover fees, costs and expenses incurred in January 2003 in connection with responding to defendants' motion for judgment as a matter of law and filing their initial fee application, plaintiffs must file a motion for a supplemental award of fees, as those figures are not presently before the court. To the extent plaintiffs intend to seek fees in connection with defending an appeal filed by defendants, plaintiffs must direct such a request to the Tenth Circuit. See, e.g., San Juan Prods., Inc. v. San Juan Pools of Kansas, Inc., 849 F.2d 468, 477 (10th Cir.1988).
[15] Because plaintiffs' fee request stems from a contractual fee provision, plaintiffs' request is subject to far less scrutiny than a request made pursuant to a fee-shifting statute and the court does not possess the same degree of equitable discretion to deny such fees as it has when applying a statute providing for a discretionary award. See United States ex rel. C.J.C, Inc. v. Western States Mechanical Contractors, Inc., 834 F.2d 1533, 1547-50 (10th Cir.1987) (remanding claim for attorneys' fees made pursuant to contractual fee provision where district court reduced the fee and, in doing so, applied the wrong standard and scrutinized the fee request too closely). In such cases, fees are "routinely awarded" unless the trial court determines that an award consistent with the request would be inequitable or unreasonable. Id. at 1548.
Q:
Can polyalloy (plastic) pex fittings be used with both styles of attachment rings?
Some pex fittings are made of a type of durable plastic known as polyalloy. Examples:
These fittings appear to be plastic equivalents of their brass counterparts.
In general is it permissible to use either the copper crimp rings OR the stainless steel cinch / pinch clamps with this type of fitting?
Notes:
This wasn't addressed in What is the advantage of PEX pinch clamp vs. crimp rings?
A:
TLDR: Yes, either type of attachment ring can be used. At least in the USA, these products have to conform to standards which make this so. Additionally, some manufacturers specifically state this is the case.
Details:
Primarily, this seems to come down to manufacturing standards.
In the USA, "PolyAlloy" fittings are governed by standard ASTM F2159 Standard Specification for Plastic Insert Fittings Utilizing a Copper Crimp Ring... which states:
This specification establishes requirements for sulfone plastic
insert fittings utilizing a copper crimp ring for [PEX] tubing...
Based on that alone, it would seem that these plastic fittings can only accept the copper crimp rings, not the Oetiker-style stainless steel cinch rings.
However, cinch rings are governed by ASTM F2098 Standard Specification for Stainless Steel Clamps for Securing [PEX] Tubing to Metal Insert and Plastic Insert Fittings which states
This specification covers stainless steel clamps ... that comply with F1807 or F2159, and
cross-linked polyethylene (PEX) plastic tubing ...
Therefore by reference to F2159 it seems that the cinch clamps are effectively retconned into acceptability for use with polyalloy fittings.
For example, Everflow fittings are documented to be "Certified to ASTM F2159". And here's a marking on a Vanguard Apollo package indicating the same:
Update: I contacted Apollo about this and they wrote:
You may use pinch rings, copper crimp rings, pro crimp rings, and
stainless steel sleeves with any of our Poly Alloy fittings.
So at least as far as their products go, there should be no problem. And I think therefore that any of these fittings made to the same standard ought to be fine also.
Nine things to know about elicitins.
SUMMARY: Elicitins are structurally conserved extracellular proteins in Phytophthora and Pythium oomycete pathogen species. They were first described in the late 1980s as abundant proteins in Phytophthora culture filtrates that have the capacity to elicit hypersensitive (HR) cell death and disease resistance in tobacco. Later, they became well-established as having features of microbe-associated molecular patterns (MAMPs) and to elicit defences in a variety of plant species. Research on elicitins culminated in the recent cloning of the elicitin response (ELR) cell surface receptor-like protein, from the wild potato Solanum microdontum, which mediates response to a broad range of elicitins. In this review, we provide an overview on elicitins and the plant responses they elicit. We summarize the state of the art by describing what we consider to be the nine most important features of elicitin biology.
Q:
XML parsers used in iphone sdk
I am quite new to iphone development. I was going through tutorials on XML parsing for which NSXMLParser is used. Are there other parsers we can use for parsing XML. How do we decide which parser to use?
Regards,
Stone
A:
Standard parsers are NSXMLParser or the C-based libxml. But there are plenty of 3rd party parsers available. Check this blog post where some of the most popular parsers are reviewed and compared.
Mono/poly relationships, i.e., a relationship between a partner who is monogamous and one who is polyamorous, are notoriously difficult. Traditional poly blogger wisdom points the finger at both parties having to compromise and feeling somewhat shortchanged. The difference between the relationship structures is to blame for the trouble, they write. But I think it's even simpler than that. I think the problem isn't poly, mono, or some clash of cross-purposes. The problem lies with toxic monogamy culture.
Toxic monogamy is basically the worst.
Monogamy in and of itself has so many good qualities. Sexual exclusivity in particular has a large upside. When practiced perfectly (although not always the case, even when it's meant to be), it carries a lower STI risk. I've previously written that I could easily be sexually monogamous, if I could still have emotional connections with more than one person.
However, many people in long-term monogamous relationships become emotionally and socially isolated in a profound way. This is because a number of socially connected behaviors are perceived as infidelities. Toxic monogamy culturally trains us to be on high alert to detect cheating in our own relationships, and in the ones of those around us. This makes us overly sensitive to prosocial acts that could signal something insidious lurking beneath the surface.
Notably, I recall a conversation I overheard between people who agreed that posting pictures with members of the opposite sex on Facebook was in fact cheating on your significant other. Even setting aside the fact that I'm not straight, this idea perplexed me.
As Noah Brand writes:
Hegemonic heterosexuality is the model for straight relationships that carries as many damaging, ridiculous, impossible assumptions and requirements as does hegemonic masculinity. Shall we list a few?
Relationships are about finding The One you'll spend the rest of your life with. Naturally, a jealous and possessive form of monogamy is a strict requirement. It is necessary to hate all of one's exes, because they were not The One, and one must also be jealous of all one's partner's exes, because they touched your property before you even got there.
It's not that mono/poly is unworkable. It's that the beliefs that accompany toxic monogamy will consistently torture a person in a polyamorous environment.
How to proceed given this?
Challenge the underlying assumptions of toxic monogamy:
Affection is zero sum. When you care for someone, that leaves less caring to give to others.
One person must meet every possible emotional and social need that we have.
We must do whatever is needed to protect The Relationship, a simultaneously fragile and all-important entity. If this involves complete isolation, then so be it.
If a love is true and valid, we will never, ever be attracted to anyone else. Ever.
If the intensity of that love changes, there is something wrong.
If we are attracted to someone else, this means that our love isn't true. Or we're a horrible person. Or both. Probably both.
Jealousy is the best indicator of love.
Commitment is chiefly about exclusivity and forsaking all others (and not followthrough).
How much your romantic partner values you should be a large part of your self-worth.
Even poly folks can struggle with some of this. These beliefs linger as nagging doubts. Even though we have actively rejected monogamy as a relationship style, we were raised in the same world. Toxic monogamy was modeled for us over and over again (through media, the relationships of others, etc).
Whether you're poly, mono, or poly/mono, one thing is true: Toxic monogamy is terrible for you.
Counter to what one might think, acting as though love is scarce is an easy way to lose it. Worrying you'll lose someone can drive them away. At the very least it can drive a wedge between you.
*
Note: "Toxic monogamy culture" is a phrase that refers to a specific kind of socially isolated, maladaptive monogamy. "Toxic" is a modifier. Much in the same way that one can decry "abusive relationships" and not be a fan of them, while not thinking or asserting that all relationships are abusive, referring to "toxic monogamy culture" doesn't mean that all monogamy is toxic. That's an absurd notion.
*
Fiction by Page Turner:
Psychic City, a slipstream mystery
Non-Fiction:
Dealing with Difficult Metamours
A Geek's Guide to Unicorn Ranching
Poly Land: My Brutally Honest Adventures in Polyamory
"pile_set_name": "OpenWebText2"
} |
[Nutritional status of native and non-native population of Russia's Extreme North and Far East].
Daily diets studied in various groups of native and non-native population living in the Far North and East of Russia (Kamchatka, Chukotka, Sakhalin) are shown to be inadequate for the adult capable population. Chemical composition of food provides insufficient supply of vegetable oil, calcium, vitamins, magnesium.
Arachidonic acid (all-cis-5,8,11,14-eicosatetraenoic acid) is a polyunsaturated fatty acid (PUFA) containing 20 carbon atoms with four double bonds. The double bonds are arranged with the last one located six carbon atoms from the methyl end of the chain. Therefore, arachidonic acid is referred to as an omega-6 fatty acid. Arachidonic acid is one of the most abundant C.sub.20 PUFA's in the human body. It is particularly prevalent in organ, muscle and blood tissues. Arachidonic acid is a direct precursor for a number of circulating eicosanoids, such as prostaglandins, thromboxanes, leukotrienes and prostacyclins, which are important biological regulators. These eicosanoids exhibit regulatory effects on lipoprotein metabolism, blood rheology, vascular tone, leukocyte function, platelet activation and cell growth. The application of arachidonic acid to an infant's diet is particularly important due to the rapid body growth of an infant. Arachidonic acid is an important precursor to many of the eicosanoids which regulate cellular metabolism and growth in infants. It is found naturally in human breast milk but not in most infant formula. In an effort to have infant formula match the long chain fatty acid profile found in breast milk, scientific and food regulatory bodies have recommended that arachidonic acid be added to infant formula, especially in formula utilized for premature infants.
In particular, it is preferable that arachidonic acid containing oil produced for use with infant formula contain little or no other long chain highly unsaturated fatty acids (e.g., eicosapentanoic acid). Such other long chain highly unsaturated fatty acids are not preferred because some of these fatty acids can interfere with the utilization of arachidonic acid by the infant, and/or can inhibit blending of the arachidonic acid-containing oil with other oils to achieve the appropriate ratio of fatty acids matching breast milk or other desired applications. Highly unsaturated fatty acids are defined as fatty acids containing 4 or more double bonds.
Traditional sources of arachidonic acid include poultry eggs, bovine brain tissue, pig adrenal gland, pig liver and sardines. The yield of arachidonic acid, however, is usually less than 0.2% on a dry weight basis. The use of microorganisms capable of producing arachidonic acid de novo has been suggested by various investigators, including Kyle, PCT Publication No. WO 92/13086, published Aug. 6, 1992; Shinmen et al., U.S. Pat. No. 5,204,250, issued Apr. 20, 1993; Shinmen et al., pp. 11-16, 1989, Appl. Microbiol. Biotechnol., vol. 31; Totani et al., pp. 1060-1062, 1987, LIPIDS, vol. 22; Shimizu et al., pp. 509-512, 1992, LIPIDS, vol. 27; Shimizu et al., pp. 342-347, 1989, JAOCS, vol. 66; Shimizu et al., pp. 1455-1459, 1988, JAOCS, vol. 65; Shimizu et al., pp. 254-258, 1991, JAOCS, vol. 68; Sajbidor et al., pp. 455-456, 1990, Biotechnology Letters, vol. 12; Bajpai et al., pp. 1255-1258, 1991, Appl. Environ. Microbiol., vol. 57; Bajpai, pp. 775-780, 1991, JAOCS, vol. 68; and Gandhi et al., pp. 1825-1830, 1991, J. Gen. Microbiol., vol. 137. The arachidonic acid productivity by the microorganisms disclosed by prior investigators, however, is less than 0.67 grams per liter per day. Such amounts are significantly less than the amounts of arachidonic acid produced by the microorganisms of the present invention. These lower productivity values are the result of employing strains: (1) with slow growth or lipid production rates leading to long fermentation times (i.e., greater than 2-3 days) (Kyle, 1992, ibid.; Shinmen et al., 1993, ibid.; Shinmen et al., 1989, ibid.; Bajpai et al., 1991, ibid.; Bajpai, ibid.; and Gandhi et al., ibid.); and/or (2) that contain low arachidonic acid contents (expressed as % fatty acids) in the final oil produced (Shinmen et al., 1993, ibid.; Shimizu et al., 1989, ibid.; and Kendrick and Ratledge, 1992, pp. 15-20, Lipids, vol. 27); and/or (3) which require long periods of stress (i.e., aging a biomass for 6-28 days) to achieve high levels of arachidonic acid in a biomass (Bajpai et al., 1991, ibid. and Shinmen et al., 1989, ibid.); and/or (4) that only exhibit high arachidonic acid content in non-commercial growth conditions (e.g., malt agar plates) (Totani and Oba, 1987, pp. 1060-1062, Lipids, vol. 22).
Thus, there remains a need for an economical, commercially feasible method for producing arachidonic acid. The present invention satisfies that need. There also remains a need for an economical, commercially feasible food product for the introduction of arachidonic acid produced according to the present invention into the diet of human infants.
Q:
hook-length formula: "Fibonaccized": Part II
This is a natural follow-up to my previous MO question, which I share with Brian Hopkins.
Consider the Young diagram of a partition $\lambda = (\lambda_1,\ldots,\lambda_k)$. For a square $(i,j) \in \lambda$, define the hook numbers $h_{(i,j)} = \lambda_i + \lambda_j' -i - j +1$ where $\lambda'$ is the conjugate of $\lambda$.
The hook-length formula shows that if $\lambda\vdash n$ then
$$n!\prod_{\square\,\in\,\lambda}\frac1{h_{\square}}$$
counts standard Young tableaux whose shape is the Young diagram of $\lambda$.
Recall the Fibonacci numbers $F(0)=0, \, F(1)=1$ with $F(n)=F(n-1)+F(n-2)$. Define $[0]!_F=1$ and $[n]!_F=F(1)\cdot F(2)\cdots F(n)$ for $n\geq1$.
QUESTION. What do these integers count?
$$[n]!_F\prod_{\square\,\in\,\lambda}\frac1{F(h_{\square})}.$$
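For concreteness, here is a small sanity-check script (a sketch; the helper names and the weakly decreasing tuple convention for shapes are my own) that evaluates $[n]!_F\prod_{\square\,\in\,\lambda}1/F(h_{\square})$ directly and confirms integrality for small shapes:

```python
def fib(n):
    # Iterative Fibonacci: F(0)=0, F(1)=1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def hooks(shape):
    # Hook lengths of a partition given as a weakly decreasing tuple;
    # in 0-based indexing, h(i,j) = lambda_i + lambda'_j - i - j - 1.
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [shape[i] + conj[j] - i - j - 1
            for i in range(len(shape))
            for j in range(shape[i])]

def fib_hook_number(shape):
    # [n]!_F divided by the product of F(h) over all hook lengths h.
    n = sum(shape)
    num = 1
    for k in range(1, n + 1):
        num *= fib(k)
    den = 1
    for h in hooks(shape):
        den *= fib(h)
    assert num % den == 0, "not an integer for this shape"
    return num // den
```

For instance, this gives 1 for $\lambda=(2,1)$ and 3 for $\lambda=(2,2)$, whereas the ordinary hook-length formula counts 2 standard Young tableaux for each of those shapes.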
A:
This is my answer to the original question (https://mathoverflow.net/a/327022/50244) on whether these numbers are integers to begin with; it gives some combinatorial meaning as well:
Use the formulas
$F(n) = \frac{\varphi^n -\psi^n}{\sqrt{5}}$, $\varphi =\frac{1+\sqrt{5}}{2}, \psi = \frac{1-\sqrt{5}}{2}$. Let $q=\frac{\psi}{\varphi} = \frac{\sqrt{5}-3}{2}$, so that
$F(n) = \frac{\varphi^n}{\sqrt{5}} (1-q^n)$
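This factorization is easy to verify numerically (a quick floating-point check; the variable names are mine):

```python
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2   # golden ratio
psi = (1 - sqrt(5)) / 2
q = psi / phi             # equals (sqrt(5) - 3) / 2, about -0.382

# Reference Fibonacci numbers F(0), F(1), ...
fibs = [0, 1]
for _ in range(30):
    fibs.append(fibs[-1] + fibs[-2])

# Binet's formula and the factored form agree with F(n).
for n in range(1, 25):
    binet = (phi**n - psi**n) / sqrt(5)
    factored = phi**n / sqrt(5) * (1 - q**n)
    assert isclose(binet, fibs[n])
    assert isclose(factored, fibs[n])
```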
Then the Fibonacci hook-length formula becomes:
\begin{align*}
f^{\lambda}_F:= \frac{[n]!_F}{\prod_{u\in \lambda}F(h(u))} = \frac{ \varphi^{ \binom{n+1}{2} } [n]!_q }{ \varphi^{\sum_{u \in \lambda} h(u)} \prod_{u \in \lambda} (1-q^{h(u)})}
\end{align*}
So we have an ordinary $q$-analogue of the hook-length formula. Note that
$$\sum_{u \in \lambda} h(u) = \sum_{i} \binom{\lambda_i}{2} + \sum_{j}\binom{\lambda'_j}{2} + |\lambda| = b(\lambda) +b(\lambda') +n$$
Using the $q$-analogue hook-length formula via major index (EC2, Chapter 21) we have
\begin{align*}
f^\lambda_F = \varphi^{ \binom{n}{2} -b(\lambda)-b(\lambda')} q^{-b(\lambda)} \sum_{T\in SYT(\lambda)} q^{maj(T)} = (-q)^{\frac12( -\binom{n}{2} +b(\lambda') -b(\lambda))}\sum_T q^{maj(T)}
\end{align*}
Now, it is clear from the q-HLF formula that $\sum_T q^{maj(T)}$ is a symmetric polynomial, with lowest degree term $b(\lambda)$ and maximal degree $b(\lambda) + \binom{n+1}{2} - n -b(\lambda) -b(\lambda') =\binom{n}{2} - b(\lambda')$, so the median degree term is
$$M=\frac12 \left(b(\lambda) +\binom{n}{2} - b(\lambda')\right)$$
which cancels with the factor of $q$ in $f^{\lambda}_F$, so the resulting polynomial is of the form
\begin{align*}
f^{\lambda}_F = (-1)^{M} \sum_{T: maj(T) \leq M } (q^{M-maj(T)} + q^{maj(T)-M}) \\
= (-1)^{M} \sum_{T} (-1)^{M-maj(T)}( \varphi^{2(M-maj(T))} + \psi^{2(M-maj(T))}) =
\sum_T (-1)^{maj(T)} L(2(M-maj(T)))
\end{align*}
where $L$ are the Lucas numbers.
Remark. This is a byproduct of collaboration with A. Morales and I. Pak.
Oct 2, 2004
London Calling
And I...live by the river! etc.
Anyhoo, we're back in London, and on our last frantic day of shopping and meeting people before we fly out tomorrow night. Our sojourn is almost over!
The good news first is I checked my bank accounts and I got a nice whack of interest come through on one account which means I have cash! I can now buy my brother all the Von Dutch T-Shirts he's been barracking for! ;)
I will talk a bit about Budapest, considering I got cut off so quickly when I was in a net cafe over there.
We got up at 4am to leave Glasgow on a 6am flight. Greg, Debbie (his sister) and Julie (her Scottish flatmate) and I were all a bit bleary-eyed, and I was still a bit worried about my labyrinthitis. Luckily I was OK, just the normal sore head I get with airports and planes.
We flew down to London Luton, from where we caught our flight to Budapest. We had to wait a few hours at the airport, and Greg and I actually left a bag in storage so we didn't have to lug it around Budapest! At every castle we visited, Greg bought the guidebook, so consequently we have one very heavy bag!
Budapest was lovely. Quite a big city, with a population just under 2 million, about one-fifth of Hungary's overall people-count. It's very very smoggy though, with many older buildings black with pollution. Budapest was formed in 1873 when the towns of Buda and Pest, on either side of the Danube, merged to form one city. Most of the sight-seeing stuff is still on the Pest side.
The first day we climbed Mt Gillert on the Buda side. Gillert was a priest made a saint for trying to convert the pagan Hungarians around 1000. They thanked him by putting him in a barrel and rolling him down the hill! Hence the hill of Gillert. It was a big climb up, but we were rewarded with good views of the city and of the Statue of Liberty on top. It's a giant bronze woman holding a palm frond. It was put up by the Soviets to celebrate their liberation of Budapest from the Nazis. After the communists left Budapest in 1991, they thought about tearing it down. But they changed their minds and left it there to now symbolise Budapest's freedom from the Soviets!
On Tuesday we headed out first to the Synagogue. Over 600 000 Hungarian Jews died during WWII, most in concentration camps. They have a memorial statue out the back of a steel willow tree, with the names of victims on the leaves. It was actually paid for by the American actor Tony Curtis, whose dad was a Hungarian Jew who died in the Holocaust.
We needed some cheering up after that, and boy did we get it. We walked along the Danube on the Pest side, past their impressive Parliament building, currently being cleaned, and across to Margaret island. It's a long island in the middle of the river. The first thing we came across were converted golf buggy-type mini cars, available to rent. It only cost about £3 each so bugger it, let's do it!
We rocketed around (well, as much as you can rocket at about 7 miles an hour) that island for an hour. It was the best fun. We've got some great bits on video, including when Debbie, Greg and I clambered into the buggy and took off, leaving a hapless Julie chasing us! It was great. I don't remember much else about the island but the buggy sure as hell will stay in my memory! My parents are actually going to Budapest in about a fortnight and I strongly suggest a buggy ride!
Tuesday we also decided to visit the baths. Budapest is famous for its hot springs, and there are numerous baths around town. It was a bit of a debacle because no one spoke English at the baths, and we were trying to rent towels and lockers with hand gestures! We also discovered we needed bathing caps to swim, so we had to buy them in the end, as it was cheaper than renting them? Bizarre.
The baths were lovely, and we felt refreshed. So we took off and grabbed the funicular up to the castle district on the Buda side. We went into the labyrinths underneath - there are miles of caverns naturally carved through the hill by hot water thousands of years ago, and then used and maintained by locals. It was fun, because we had to take a gas lamp with us to see by - as in the evening they turn the lights off down there! They'd also put in, for some reason, a fountain that flowed with wine! I think it was to signify the rich history of the early Hungarians. Or something.
Wednesday we took ourselves off on a 'Hammer and Sickle' Tour. It was about 4 hours of communist talk. Our guide, Czaba (pronounced 'Chubba'), was really interesting. He was about 30, and had been lucky enough to visit Australia when he was 15, while Communism was still in place. It made him a bit of a rebel at school, following his father's footsteps. His father had been a sportsman so had been able to travel the world and see the cool capitalist stuff the Hungarians weren't getting. He was very against it, and it caused clashes with Czaba's grandfather, who was a very committed communist party member and local leader.
Anyhoo, we visited a communist era flat - just a sitting room, bedroom, small kitchen and bathroom, and narrow balcony. There are loads of big ugly concrete blocks still all over Budapest.
We also visited statue park, where famous soviet statues are now kept. It was funny seeing the statues with no artistic value - just propaganda!
Crap! My net time's out again...will write soon!
Hi to everyone, and sorry Clare and Briony for missing your birthdays! I'll try and pick you up something festive!
3 comments:
Hi Natz - glad to hear about your visit to Budapest and looking forward to seeing the video. Got your email re arrival - we will check it out and be there to meet you. Regards to all. See you soon - mum.
Thanks for your post on our website. Sorry to hear you have been sick. If it makes you feel any better, Mark and I have both been sick with head colds. Mark was kind enough to share with me. I am on the end of it though, luckily, before we hit Rome. Happy Birthday Greg!! Talk soon, Alisha & Mark
Integration host factor (IHF) modulates the expression of the pyrimidine-specific promoter of the carAB operons of Escherichia coli K12 and Salmonella typhimurium LT2.
We report the identification of Integration Host Factor (IHF) as a new element involved in modulation of P1, the upstream pyrimidine-specific promoter of the Escherichia coli K12 and Salmonella typhimurium carAB operons. Band-shift assays, performed with S-30 extracts of the wild type and a himA, hip double mutant or with purified IHF demonstrate that, in vitro, this factor binds to a region 300 bp upstream of the transcription initiation site of P1 in both organisms. This was confirmed by deletion analysis of the target site. DNase I, hydroxyl radical and dimethylsulphate footprinting experiments allowed us to allocate the IHF binding site to a 38 bp, highly A+T-rich stretch, centred around nucleotide -305 upstream of the transcription initiation site. Protein-DNA contacts are apparently spread over a large number of bases and are mainly located in the minor groove of the helix. Measurements of carbamoyl-phosphate synthetase (CPSase) and beta-galactosidase specific activities from car-lacZ fusion constructs of wild type or IHF target site mutants introduced into several genetic backgrounds affected in the himA gene or in the pyrimidine-mediated control of P1 (carP6 or pyrH+/-), or in both, indicate that, in vivo, IHF influences P1 activity as well as its control by pyrimidines. IHF stimulates P1 promoter activity in minimal medium, but increases the repressibility of this promoter by pyrimidines. These antagonistic effects result in a two- to threefold reduction in the repressibility of promoter P1 by pyrimidines in the absence of IHF binding. IHF thus appears to be required for maximal expression as well as for establishment of full repression. IHF could exert this function by modulating the binding of a pyrimidine-specific regulatory molecule.
Tallmadge Township, Michigan
Tallmadge Charter Township is a charter township of Ottawa County in the U.S. state of Michigan. The population was 7,575 at the 2010 census.
Communities
Finnasey was a rural post office in Tallmadge Township from 1882 until 1883.
Lamont is a village on the north side of the Grand River at . It was founded in 1833 by Harry and Zine Steele, and was known for many years as Steele's Landing. The Steele's Landing post office was established January 9, 1851. In the same year, the Steeles had the village platted as "Middleville", due to being located midway between Grand Rapids and Grand Haven, although the post office remained Steele's Landing. In 1855 Lamont Chubb, of Grand Rapids, offered a road scraper to the village in exchange for the community taking on his name. The post office was duly renamed as Lamont on July 2, 1856. The Lamont ZIP code 49430 provides P.O. Box-only service.
Grand Valley is an unincorporated community on M-45 just east of the Grand River.
Tallmadge is an unincorporated community near the center of the township at .
The city of Coopersville is to the northwest, and the Coopersville ZIP code 49404 serves areas in the northwest part of Tallmadge Township.
Marne is a village along the northern boundary with Wright Township. The Marne ZIP code 49435 also serves areas in the central part of Tallmadge Township.
The city of Walker is to the east, and the Walker/Grand Rapids ZIP code 49544 serves the eastern parts of Tallmadge Township.
Geography
According to the United States Census Bureau, the township has a total area of , of which is land and , or 1.76%, is water.
Demographics
As of the census of 2000, there were 6,881 people, 2,283 households, and 1,869 families residing in the township. The population density was 212.3 per square mile (81.9/km²). There were 2,369 housing units at an average density of 73.1 per square mile (28.2/km²). The racial makeup of the township was 97.83% White, 0.31% African American, 0.32% Native American, 0.31% Asian, 0.44% from other races, and 0.80% from two or more races. Hispanic or Latino of any race were 0.92% of the population.
There were 2,283 households out of which 40.4% had children under the age of 18 living with them, 74.0% were married couples living together, 5.3% had a female householder with no husband present, and 18.1% were non-families. 14.3% of all households were made up of individuals and 4.4% had someone living alone who was 65 years of age or older. The average household size was 2.97 and the average family size was 3.32.
In the township the population was spread out with 29.6% under the age of 18, 8.4% from 18 to 24, 28.5% from 25 to 44, 24.3% from 45 to 64, and 9.2% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 103.8 males. For every 100 females age 18 and over, there were 104.3 males.
The median income for a household in the township was $59,205, and the median income for a family was $65,086. Males had a median income of $45,847 versus $29,434 for females. The per capita income for the township was $23,957. About 3.4% of families and 5.0% of the population were below the poverty line, including 6.2% of those under age 18 and 4.8% of those age 65 or over.
References
External links
Tallmadge Charter Township
Category:Townships in Ottawa County, Michigan
Category:Charter townships in Michigan
Sturgeon aims for "coalition" to stop May's Brexit deal

Scotland's first minister holds talks with Jeremy Corbyn and targets multiparty alliance.
LONDON — Pro-EU Conservative MPs can be part of a cross-party "coalition" against Theresa May's Brexit deal, and against a no-deal exit, Scotland's First Minister Nicola Sturgeon said Tuesday.

Sturgeon was speaking in Westminster following talks with Jeremy Corbyn, leader of the U.K.'s opposition Labour party. Sturgeon, whose Scottish National Party is the third largest party in the House of Commons, said those opposed to the binary choice of May's deal or no deal should unite behind a single alternative strategy — and that Conservative MPs "have a role to play" in the interests of "building a coalition" in Westminster.

"The next stage of these discussions has to then look at what option can the opposition coalesce around," Sturgeon said, adding that options included a second referendum, a permanent customs union and single market arrangement.

May's government has said that the only options available to MPs when they vote — most likely before Christmas — on her deal are the existing agreement or no deal. Labour has said that if MPs reject May's deal then a no-deal exit must be prevented, and says it wants a general election. While not ruling out a second referendum, Labour has prioritized a motion of no confidence in the government that could topple May's government and force an election.
Sturgeon is in favor of another referendum that could reverse Brexit altogether. When asked about the possibility of a confidence motion, she said it was "important now ... that we focus on where we can build majorities."
In an apparent acknowledgement that the Conservative rebels required to give opposition parties a majority in parliament would not vote against their own government in a confidence motion, she added: "We can all table different things but if they get voted down because we haven't done the work to build the majority behind them it's not going to take us very far."
Sturgeon, who is meeting May herself on Tuesday evening, said that she had held a "worthwhile" discussion with Corbyn. The two sides had agreed to stay in close dialogue, she said, with their Brexit spokespeople, Keir Starmer and Stephen Gethins, speaking on an "ongoing basis."
"The option that would stop Brexit is another vote and Labour's official position is not for that at the moment, but hopefully over the course of the next few weeks as these discussions develop we'll see movement on all sorts of positions," she said.
A Labour party spokesperson said that talks had been "constructive."
"They discussed their common opposition to Theresa May's botched Brexit deal and determination to work across parliament to prevent a disastrous no deal outcome," the spokesperson said.
Q:
SQL get unique month year combos
SELECT MONTH(sessionStart) AS Expr1, YEAR(sessionStart) AS Expr2
FROM tblStatSessions
WHERE (projectID = 187)
GROUP BY sessionStart
This returns:
11 | 2010
11 | 2010
11 | 2010
12 | 2010
12 | 2010
But I need it to only return each instance once, IE:
11 | 2010
12 | 2010
If that makes sense!
A:
The following should be what you want:
SELECT MONTH(sessionStart) AS Expr1, YEAR(sessionStart) AS Expr2
FROM tblStatSessions
WHERE (projectID = 187)
GROUP BY MONTH(sessionStart), YEAR(sessionStart)
In general, you need to group by every non-aggregate column that you are selecting. Some DBMSs, such as Oracle, enforce this, i.e. not doing so results in an error rather than 'strange' query execution.
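If it helps to see it run end to end, here is a small sketch using Python's built-in sqlite3 module. Note that SQLite has no MONTH()/YEAR() functions, so strftime() stands in for them; the table and column names are taken from the question, and the sample dates are made up:

```python
import sqlite3

# In-memory demo of the grouped query. SQLite has no MONTH()/YEAR(),
# so strftime() stands in for them; the grouping logic is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblStatSessions (projectID INTEGER, sessionStart TEXT)")
conn.executemany(
    "INSERT INTO tblStatSessions VALUES (?, ?)",
    [(187, "2010-11-03"), (187, "2010-11-15"), (187, "2010-11-28"),
     (187, "2010-12-01"), (187, "2010-12-09"), (42, "2010-11-05")],
)

rows = conn.execute("""
    SELECT strftime('%m', sessionStart) AS Expr1,
           strftime('%Y', sessionStart) AS Expr2
    FROM tblStatSessions
    WHERE projectID = 187
    GROUP BY strftime('%m', sessionStart), strftime('%Y', sessionStart)
""").fetchall()

print(sorted(rows))  # [('11', '2010'), ('12', '2010')]
```

Each month/year combination now appears exactly once, because the GROUP BY key is the month/year pair rather than the raw timestamp.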
Doubling up on employees may seem like a luxury, but when the pressure's on, you'll be glad you did. Just as Noah filled the Ark with animals two by two before the flood, you have to hire staff in twos. The alternative is that you risk being stuck at the most awkward and stressful time without someone to whom you can delegate a critical function of your gym's operation. If you own or manage a small fitness center you always need contingency plans, especially when it comes to staffing. Large companies have all kinds of luxuries that come with the [...]
Tony Christian starts up Joneskogs 738 Camaro
First start-up in Bradenton, FL, 2009-12-04.
Peter Rosenqvist is starting his Pro Mod Camaro
Peter Rosenqvist in Sweden is starting his Pro Mod Camaro for the first time after
he bought it from Adam Flamholc.
Rickard Asp is helping him from team Top Mod Viper.
Filmed with Canon 5D Mark II with EF 17-40 lens.
Codecs don't need to be hard. No, really, they don't.
By the end of this article, you will be able to pick the best codec for you on each project. My goal is to empower you to make your own informed decisions about codecs, instead of relying on what worked for someone else.
I'm going to walk you through every step in the process of making a video. Click on a heading to jump to that section. I'll cover:
At each stage, I'll explain which factors you should be considering as you choose a codec, and I'll give you some examples of the most commonly-used codecs for that stage.
Along the way, we'll cover why low-end codecs and high-end codecs can each slow down your editing, the reasons for a proxy/offline edit, a real-world project walkthrough, some storage-saving strategies, and an explanation for why transcoding cannot improve your image quality.
The benefits of optimizing your codecs can be huge. The right codec will preserve your images in the highest quality, help you work faster, and it will also enable you to take the best advantage of your computer and storage. You'll be able to work faster on a laptop than many can on a high-end tower.
What a Codec Does
A codec is a method for making video files smaller, usually by carefully throwing away data that we probably don't really need, and they're pretty smart about how they do that. A few years ago, I created a video that covers the main compression techniques that many codecs use. It's not required viewing to understand this article, but it certainly won't hurt.
How Codecs Work — Tutorial.
If you're skipping the video, here are some very basic explanations:
Chroma subsampling: Throws away some color data (4:4:4 is no chroma subsampling, 4:2:2 is some chroma subsampling, 4:2:0 is lots of chroma subsampling). Bad if you're doing color-correction. Really bad if you're doing green screen or VFX work.
Macro-blocking: Finds blocks (of varying size) of similar colors and makes them all the same color. Bad for VFX and color-correction. Almost all codecs use this to some degree, and the amount tends to vary with the bitrate.
Temporal compression: Uses previous frames (and sometimes following frames) to calculate the current frame. Bad for editing.
Bit depth: The number of possible colors. Deeper bit-depth (larger numbers) is good for color-correction and VFX.
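To put rough numbers on two of these ideas, here is a back-of-the-envelope sketch. It is not tied to any particular codec; the J:a:b counting is simply the standard way chroma subsampling schemes are described:

```python
# Back-of-the-envelope numbers for bit depth and chroma subsampling.

# Bit depth: number of distinct values per channel.
for bits in (8, 10, 12):
    print(f"{bits}-bit: {2**bits} levels per channel")  # 256, 1024, 4096

# Chroma subsampling: samples kept per Jx2 block of pixels.
def samples_per_block(scheme):
    j, a, b = scheme          # e.g. (4, 2, 0)
    luma = 2 * j              # two rows of luma samples
    chroma = 2 * (a + b)      # Cb and Cr samples across both rows
    return luma + chroma

full = samples_per_block((4, 4, 4))
for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    kept = samples_per_block(scheme) / full
    print(scheme, f"keeps {kept:.0%} of the data")
```

This reproduces the familiar rule of thumb: 4:2:2 keeps about two-thirds of the full-color data, and 4:2:0 keeps half.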
Codec Comparison Table
I've also pulled together a list of all of the most common codecs used in the postproduction world. This list can help you compare different codecs against each other and make the best decision for your project.
There are many different codecs that can be used in the editing process, but the ones I've included are by far the most common. There is a significant advantage to using popular codecs — they are more likely to work on your system, your client's system, your system-in-five-years, etc. And it's easier to find help if something goes wrong.
Open the table in a new tab, and think about which codecs might be a good fit for you as you read through the article.
Check out the table
Lossyness
One of the columns in the table is "lossyness," which is an important concept with codecs. When I'm talking about lossyness, I don't necessarily mean what your eye sees. I mean the amount of data that is retained by the codec, only some of which you can see. The question is: If I had an uncompressed image, and then I compressed it with this codec, how similar would the new image be to the old image? How much information is lost in the transcode? If the two images are very similar, then the codec is not very lossy, and if they're pretty different, then it's more lossy.
The lossyness is a combination of the techniques that the particular codec uses and its bitrate. A more lossy codec is not necessarily "bad." In some cases (when viewing online, for instance), it's really not necessary to retain 100% of the original image. Using a more lossy codec can be a really smart move because of how much space it saves.
If the image looks just as good to my eye, then why should I care if it's technically "lossy"?
You should care because you may want to change the image. If you are doing any sort of color correction, then you will be changing the image, allowing you to see elements of the image that weren't visible (or prominent) when you captured it.
For example here is an image that was captured raw.
Here is a screengrab of it compressed with H.264, using standard YouTube-recommended settings.
And then compressed with DNxHD 350x:
They all look pretty much the same, don't they? The visual quality is just about the same, and the H.264 file is a fraction of the size of the DNxHD file. This is why it's the recommended setting for YouTube. It looks just about as good to the eye, and the file is much easier to upload to the internet.
The trouble with the H.264 version, however, comes when you try to make changes to the image. What if you wanted to increase the exposure?
Now we can see where the highly-compressed image falls apart. Her hair and shirt look terrible in the H.264 image, and the buildings by the river look all mushy.
This is why you really want a high-quality codec when you capture the image — because you will probably want to make changes later on, but you don't know yet what those changes might be. You'll want to tweak the color and contrast, maybe tweak the speed, maybe add some VFX. A highly-compressed file doesn't allow for those changes without breaking down.
This is why it's a good idea to capture your footage in 10-bit even if you may be outputting an 8-bit file in the end — you don't know, when you shoot, which bits you're going to want.
The Codec Journey
Now that we've gotten some of the foundational ideas out of the way, it's time to walk through the different stages that you'll encounter in each project.
Every project starts with a codec that you capture in the camera, and it ends with a codec that you export (the delivery codec) and hand off to your client or upload to the web. In the simplest case, you do all of your editing and color-correction right on the camera files and then export to your delivery codec, so you're only ever using two codecs.
But most of the time it gets a little bit more complicated. You might transcode to a different codec for editing, and potentially for color-correction, and definitely for VFX. But it all starts with…
The Codec You Shoot With
(back to the index)
This is your capture codec (also called the "camera native codec" or "acquisition codec").
Generally speaking, you should aim for the highest-quality codec that your camera (or your budget) can capture. When I say "highest quality," I mean that you want to capture as much information as possible, so you want less-lossy codecs: less compression, higher bit-depth, and less chroma subsampling. The more information you have when you capture, the more flexibility you will have later, especially in color-correction and VFX (if you're doing that).
Of course, you also have to consider a lot of other, practical factors in this decision, otherwise we would always be shooting 8K raw, right?
Cost
The first consideration is obviously cost. Generally speaking, the more expensive the camera, the higher the quality of the codecs available on it. I say "generally" because there are some "sweet spot" cameras that can offer excellent codecs at a reasonable price. Panasonic's GH series (especially in the early days when the GH2 was hacked) was known for offering better codecs than the other cameras in its price range.
Tip: Better Codecs with External Recorders
One way that people (myself included) have found to capture higher-quality codecs on cheaper cameras is to use an external recorder.
These devices (many of which can double as external monitors) take an uncompressed signal from the camera, via HDMI or SDI, and compress it separately. So you end up with two copies of your footage â one copy heavily compressed on the camera, and a second copy lightly compressed on the external recorder. The key thing here is that the camera sends the signal out to the recorder before compressing it.
One important note here is that many cheaper cameras only output 8-bit, and often not in 4:4:4. An external recorder might be able to compress to a 12-bit codec, but if the camera is only sending 8 bits, the recorder can only record 8 bits. Some cheaper cameras may also not output a "clean" HDMI signal that is suitable for recording. We call an output signal "clean" when it's just the pure image with no camera interface overlays.
Storage
The second factor to consider is storage space. High-quality codecs tend to be higher bit-rate, which means that the files are larger. You need to be prepared to store and back up all of those files as you're shooting, and you may also have to upgrade your memory cards in order to be able to record the high-bitrate data. If you're shooting solo, then you may end up choosing a lower-quality codec because it allows you to change memory cards less often and focus on the story instead.
Finishing
Another factor to consider is how much color-correction and VFX (collectively referred to as finishing) you plan to do. If you're going to be doing very minimal color-correction and no VFX, then you can probably get away with the lower bit-depth, chroma subsampling, and macro-blocking that come with lower-quality capture codecs.
Editing Hardware
The last factor to consider is your editing machine, because most capture codecs are not well suited to editing without a high-performance computer. H.264 and some raw files require a powerful CPU/GPU to edit smoothly, and very-high-bitrate codecs may require high-speed hard drives or data servers. Unless you happen to be shooting an edit-friendly codec, you may have to transcode your files to another codec before editing, which can take time. For most people, transcoding the footage isn't a huge issue because it can be done overnight or on a spare computer. If you're working on very tight turn-around times, however, you may choose a codec that will allow you to start editing immediately after a shoot, even if that means a higher cost or a sacrifice in image quality. I explain which codecs are best for editing in the next section.
The Codec You Edit With
(back to the index)
Alright, you've shot your film, and you've got all of your files onto your computer. Now you need to decide whether you're going to edit with these files, or whether you want to transcode into another format.
Why should I transcode before editing? Can't I just edit the files that came out of the camera?
Well, it depends. Pretty much all of the major software packages can now edit any codec that your camera creates (unless you're a badass shooting on a brand-new camera with brand-new technology). But while it's almost always possible to edit the codecs that your camera shot, it's not always the best idea.
If you are lucky enough to be shooting on a codec that is great for editing (see the codec chart), then you can skip this step.
For many of us, however, the capture codec isn't going to be optimized for editing. There are two main factors you need to consider when choosing your edit codec: compression type and bit rate.
Highly-Compressed codecs can Slow Down Your Editing
(back to the index)
Most lower- to mid-range cameras record with codecs that use temporal compression, also known as long-GOP compression. I will give you a simple explanation here, but if you're interested in learning in more detail, check out my codecs video, starting at 19:00.
The simple explanation of a long-GOP is that, for each frame, the codec only captures what has changed between this frame and the previous frame. If the video doesn't include a lot of motion, then this means that the new file can be a LOT smaller than the original. The difference between this frame and the last frame is just a few pixels, so all you need to store is a few pixels. That's great!
The issue, however, is that these codecs tend only to work well when played forward. (If you're curious why, take a look at the video.) That's great for viewing on YouTube and your DVD player, but it's not great for editing, because when you're editing you're often jumping around, or playing a clip backward. It takes a lot more processing power to do those things quickly with a long-GOP codec. A high-end computer might have no trouble, but even a mid-range computer will lag and stutter when you skim through the footage quickly or jump around.
Codecs that aren't long-GOP (a.k.a. intra-frame codecs), however, can play backwards just as easily as forwards, and even a mid-range computer can skip around very smoothly. If you've only ever edited clips straight from the camera, you might not realize what you're missing!
The other thing that can cause issues with playback is raw video. Raw video needs to be converted before it can be displayed (sort of like a codec does), and some computers can't decode the raw file fast enough, especially if it's 4K. Ironically, both the low-end cameras and the highest-end cameras produce files that are hard to edit!
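Here is a toy model of that behavior. It is purely illustrative — real long-GOP codecs store motion vectors and residuals, not dictionaries of changed pixels — but it shows why random access gets expensive:

```python
# Toy long-GOP stream: a keyframe, then delta frames that store only
# the pixels that changed. Decoding frame N means replaying every
# delta since the last keyframe — which is why scrubbing around in
# long-GOP footage costs more than in an intra-frame codec.

keyframe = {(x, y): 0 for x in range(4) for y in range(4)}  # tiny 4x4 "image"
deltas = [
    {(0, 0): 255},             # frame 1: one pixel changed
    {(1, 1): 128},             # frame 2
    {(0, 0): 64, (2, 2): 32},  # frame 3
]

def decode(frame_index):
    """Rebuild a frame by applying all deltas since the keyframe."""
    frame = dict(keyframe)
    for delta in deltas[:frame_index]:
        frame.update(delta)
    return frame

# Decoding frame 3 touches every prior delta; an intra-frame codec
# would decode it directly from its own data.
f3 = decode(3)
print(f3[(0, 0)], f3[(2, 2)])  # 64 32
```

Note that jumping straight to frame 3 still forced us to apply frames 1 and 2 first — exactly the extra work your CPU does when you scrub backwards through long-GOP footage.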
High-Bitrate codecs can Slow Down Your Editing
(back to the index)
For low- to mid-range codecs, you don't have to worry about the bitrates at all. Once you start moving up the ladder, however, high-bitrate codecs can cause issues with editing, especially if you're working on everyday computers.
The reason is that your computer needs to be able to read the data from your hard drive at a bitrate that is at least as high as your codec's bitrate. It makes sense — if your codec is 50Mb/s (fifty megabits per second), then your computer needs to be able to read that file from your hard drive at 50Mb/s or else it'll fall behind and stutter.
(Note that Mb/s stands for megabits per second, while MB/s stands for megabytes per second. There are eight bits in a byte, so you need to multiply by 8 when converting from MB/s to Mb/s.)
The good news is that hard drives are getting faster every day, so 50Mb/s is never going to cause any problems. But what if you're editing ProRes 422HQ at 4K, which is 734Mb/s? The average external hard drive is only just barely fast enough to play that back, and some cheaper hard drives won't manage it. And then, what if you're editing a multicam with three cameras? Suddenly you need 3x that data rate: 2,202Mb/s! At that point, you're going to need to invest in some high-performance hard drives or RAIDs.
Here are some rough guidelines for common data storage speeds, though of course there will always be certain models that underperform or overperform.
Standard spinning drive: 100-120MB/s
Professional spinning drive: 150-200MB/s
Standard SSD: 400-500 MB/s
Low-end RAID: 200-300 MB/s
High-end RAID: 1000-2000 MB/s
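That arithmetic is easy to script. A quick sketch follows — the drive speeds are midpoints of the rough figures above, and the bitrate is the ProRes 422HQ 4K example:

```python
# Will a drive keep up with N simultaneous streams of a given codec?
# Remember: drive specs are MB/s (megabytes), codec bitrates are
# usually quoted in Mb/s (megabits) — divide by 8 to compare.

def required_mb_per_s(codec_mbps, num_streams):
    return codec_mbps * num_streams / 8  # Mb/s -> MB/s

drives = {
    "standard spinning drive": 110,
    "standard SSD": 450,
    "high-end RAID": 1500,
}

# Three streams of ProRes 422HQ at 4K (~734 Mb/s each, as above).
need = required_mb_per_s(734, 3)
print(f"need ~{need:.0f} MB/s")  # need ~275 MB/s
for name, speed in drives.items():
    verdict = "ok" if speed >= need else "too slow"
    print(f"{name}: {speed} MB/s -> {verdict}")
```

So the three-camera multicam (2,202 Mb/s) works out to about 275 MB/s of sustained reads: beyond a single spinning drive, but comfortably within reach of an SSD or RAID.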
Shooting in log can slow down your editing
Shooting in log is a way of preserving as much of your dynamic range as possible, allowing you to capture a scene that has bright highlights and dark shadows without blowing out the highlights or crushing the blacks. Blown-out highlights are a particularly nasty side-effect of shooting on video instead of film, and so shooting in log can help make your footage feel more cinematic. Now that log profiles are available even on most prosumer cameras, it's an extremely popular way to work.
The downside is that the image that comes out of the camera doesn't look so great, so you need to add in a bunch of contrast and saturation in order to get even close to the final image. The most common way to do that is to add a LUT to your footage, which is essentially a simple preset color correction that brings your footage back to a "normal" look.
If you're shooting in a log color space, then you need to apply a LUT to your footage in order to preview it with normal color and contrast. This means that your editor will need to apply the appropriate LUT to all of the clips when editing. This can be annoying to manage, and it can also slow down the computer a bit, because it needs to first decode each frame and then apply the LUT before displaying it. It's certainly possible to edit the log footage without any LUT, but it's not ideal. The color of two shots may influence how you intercut them.
If you're going to transcode your files before editing them, then you can apply the LUT during the transcode process. That way, the editor is always working with footage that has good contrast and color and never has to bother with LUTs. Note that you should only do this if you are using a proxy workflow, not a Direct Intermediate workflow (described below).
Consider time spent encoding
The main downside of transcoding your footage before editing is simply the time it takes to do the transcode. If you have a lot of footage to go through, and your computer isn't particularly fast, it may take a long time. If you're not in a big hurry, you can let the transcode run overnight, potentially on multiple computers if you have access to them, but that's not always ideal.
When I worked at Khan Academy, our founder would regularly record short video messages to send out to people, sometimes on very tight schedules. I would usually shoot in 4K in a long-GOP log format, and edit on a MacBook Pro. Editing 4K long-GOP with a LUT (to correct for the log footage) on a laptop meant I could play the video back just fine in Premiere Pro, but I couldn't zoom around the timeline as fast as I wanted without stuttering. But that didn't bother me too much because the edit was extremely simple — just a few cuts, maybe some music, a title, and I was done. Even though my editing speed wasn't ideal, I would have spent more time on the transcode than I would have saved in editing speed, so I just used the original files.
If I were editing a longer piece with the same setup, however, I would transcode to DNxHD or ProRes. Generally, I would do most of the transcoding overnight, often with multiple machines running at the same time.
Proxy Edit
(back to the index)
If you're going to transcode the native camera files before you edit them, then you'll use an "intermediate" codec. It's called intermediate because it comes between the capture codec and the export codec. There are two common ways of working with intermediate codecs:
The first is the "proxy" workflow or "offline edit." This means that you are transcoding your captured footage into an intermediate format, editing with that format, and then re-linking back to the original camera files before exporting. Because you will use the camera files to export and not the proxy files, you don't need to worry so much about picking a proxy codec with great image quality — lossy codecs are fine. You can optimize for editing speed and storage convenience instead.
The proxy workflow is so common that many high-end cameras record a high-end raw file *and* a ProRes or DNxHD proxy file at the same time. After the shoot, the raw files are backed up and put in storage, while the proxy files are sent off to the editors and to the director/producers for dailies.
When choosing a proxy codec, you want to go for one that does not use temporal compression (aka inter-frame compression or long-GOP compression), and you want to pick one that has a lower bitrate. The low bitrate means that the files are much smaller, so you can use fewer/smaller/cheaper hard drives, simplifying your workflow. Woot!
While the proxy files are great for editing, you shouldn't do more than basic color-correction with proxy files. If you are going to do all of your color-correction inside of your editing software, then it's best to re-link back to your camera files, because your proxy files may have lower color quality.
The good news is that most editing software today can switch between the camera files and the proxy files in just a couple clicks, so you can even go back and forth if you need to.
We've published detailed guides for proxy workflows in each of the major NLEs:
Some good choices for proxy codecs
By far the most common proxy codecs are DNxHD/DNxHR and ProRes. They have both been around for years, so they're very widely supported. Everyone knows how to handle them. They are both very well suited to a proxy workflow (ProRes even has a preset called "proxy"), and are nearly interchangeable when used for proxies.
Since DNxHD is made by Avid, and ProRes is made by Apple, it makes sense that DNxHD would work better in Media Composer and ProRes would work better in Final Cut Pro X. That certainly used to be true, but nowadays both codecs work very smoothly in all modern editors (including Premiere Pro). There may be a slight speed increase in using the codec that was designed for the system, but it's very slight.
The only significant difference between the two for a proxy workflow is the fact that you may have trouble creating ProRes on a PC, while DNxHD is very easy to create cross-platform. The only officially supported way to create ProRes on a PC is with Assimilate Scratch. There are some other, unsupported methods for creating ProRes files on a PC, but they're not always reliable. PCs can easily play back and edit ProRes files, but you can't encode new ProRes files on a PC as easily as DNxHD, so some editors prefer a DNxHD workflow for that reason.
Regardless of which of the two codecs you pick, you also have to pick which flavor you want. This is really going to depend on your storage constraints: it's a tradeoff between image quality and file size. The good news is that you don't need tip-top image quality when you're editing, so you can choose a low-bitrate codec.
Start off with the smallest ProRes or DNx flavor in the same resolution as your capture codec. Look at the GB/hr column and multiply it by the number of hours of footage you have. If you have enough storage space, then you're good: use that codec. If you have lots of extra storage space, think about using the next largest flavor.
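That storage check is simple arithmetic: convert each flavor's bitrate to GB per hour, multiply by your hours of footage, and keep the largest flavor that still fits your drive. A minimal sketch (the Mb/s figures below are illustrative 4K values, not official spec numbers):

```python
# Pick the largest proxy flavor whose total size fits the available storage.
# Bitrates (Mb/s) are illustrative 4K figures, not official spec values.

def gb_per_hour(mbps):
    """Megabits/second -> decimal gigabytes per hour."""
    return mbps * 3600 / 8 / 1000

def pick_flavor(flavors, hours, storage_gb):
    """flavors: dict of name -> bitrate in Mb/s. Returns (name, total_gb) of
    the largest flavor that fits, or None if even the smallest doesn't fit."""
    best = None
    for name, mbps in sorted(flavors.items(), key=lambda kv: kv[1]):
        total = gb_per_hour(mbps) * hours
        if total <= storage_gb:
            best = (name, round(total))
    return best

flavors = {
    "ProRes 422 Proxy": 145,  # assumed 4K bitrates
    "ProRes 422 LT": 328,
    "ProRes 422": 471,
    "ProRes 422 HQ": 707,
}
print(pick_flavor(flavors, hours=40, storage_gb=4000))  # 4 TB drive
# → ('ProRes 422 Proxy', 2610)
```

With 40 hours of footage on a 4TB drive, only the Proxy flavor fits; with a bigger drive the loop would step up to LT or beyond.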
If you don't have enough storage space, or if you're on an underpowered machine, then take the resolution down a notch. A lot of huge-budget Hollywood films were edited in 480p just a few years ago, so don't sweat it if you need to lower your resolution from 4K down to 720p for the edit.
Direct Intermediate
The other type of intermediate workflow is something that I'm calling "Direct Intermediate." This means that you transcode your camera files into a codec that is both good for editing and very high-quality (not very lossy). Because the codec is very high quality, almost all of the original information from the camera files has been preserved, so it's not necessary to re-link back to the camera files: you can just export directly from the intermediate files. There will be some theoretical loss of information when you transcode, but if you pick a good enough intermediate codec, it'll be small enough that you don't need to worry about it.
(Note: I'm calling this process "Direct Intermediate" because there isn't a common name for this workflow. People usually just call this "intermediate," but that can be confusing because proxy workflows are also a kind of intermediate workflow. Some people will also call this an "online" workflow, but that is also confusing, because that term was created to describe a workflow that includes an offline and an online edit, not a workflow that's online from start to finish.)
The key to picking a good Direct Intermediate codec is to make sure that you are preserving all of the information from your capture codec. An intermediate codec will never make your images better (more detailed explanation below), but it can definitely make them worse if you choose the wrong codec. The important thing is to understand the details of your original footage and make sure that your intermediate codec is at least as good as your capture codec in each area. If you capture your footage on a DSLR like a Sony A7S II at 4K, then you will be recording in a 4:2:0, 8-bit, long-GOP codec at 100Mbps. You want an intermediate codec that is at least 4:2:0 and 8-bit. Going beyond these values (e.g. to 4:4:4 and 12-bit) won't hurt, but it also won't help at all, so it's probably not worth the extra storage space.
Let's say, for example, that we want to go with a ProRes codec. We have four options to choose from that are 4:2:2 and 10-bit:
145Mb/s ProRes 422 Proxy
328Mb/s ProRes 422 LT
471Mb/s ProRes 422
707Mb/s ProRes 422 HQ
You might think that all you need to do is match the camera bitrate (100Mbps), but you actually need to greatly exceed it. This is because h.264 is a much more efficient codec than ProRes. Because h.264 uses long-GOP compression, it can pack a lot more information into those 100 megabits than ProRes can. In order for ProRes to match the image quality of h.264, you need a much higher bitrate. I would recommend only using ProRes 422 or ProRes 422 HQ if you're starting with a 100Mbps h.264 codec. ProRes 422 will probably do just fine, but if you have lots of storage space, then going up to ProRes 422 HQ will have a slight edge.
While it's fine to simply match the bit-depth and color sampling when choosing an intermediate, you should always increase the bitrate at least a little. If you're going from a long-GOP codec to a non-long-GOP codec, then you should increase the bitrate a lot.
Side note: if you wanted to go with DNxHD instead of ProRes, you have similar options, except that DNxHD also offers an 8-bit version for the lower-end flavors. Since our footage is 8-bit to start with, that won't hurt us at all.
The proxy workflow sounded pretty good. Why use Direct Intermediate?
Part of the reason why the Direct Intermediate workflow is common is that it used to be a lot harder to use a proxy workflow. Some of the major software providers didn't make it particularly easy to relink back to the original camera files, and so people would choose a Direct Intermediate workflow. Nowadays, however, it's pretty easy to do in any editing package. The main exception is when you have a lot of mixed footage types. If you have multiple frame rates and frame sizes in the same project, switching back and forth from the proxies to the capture codecs can be a headache.
If you are using third-party tools to help prep and organize your footage before you start cutting, those can also make the relinking process trickier. One common example is software that automatically syncs audio tracks or multicam shoots.
Another reason why you might want to use a Direct Intermediate workflow is that you can move right on to the color-correction and VFX ("finishing") process without swapping around any files. Keep reading, and I'll explain more about why that's convenient in the Color-Correction and VFX sections.
One downside, however, is that you can't "bake in" the LUTs for your editor; you're going to need to apply a LUT via a color-correction effect in your editing software. If you were to include the LUT in your transcode for a Direct Intermediate workflow, you would lose all of the benefits of recording in log in the first place.
The other obvious downside is that you need to store all of these (much larger) files.
An intermediate codec will never make your images better
This is very important, because it is very commonly misunderstood, and there is a lot of misinformation online. Transcoding your footage before you edit will never increase the quality of the output. There are some extra operations that you could do in the transcode process (such as using sophisticated up-res tools) that could increase the image quality in some cases, but a new codec by itself will never increase the quality of your image.
If you choose the right codec, you can avoid hurting your image, but you can never improve it.
That includes going from h.264 to DNxHD or ProRes. That includes going from 8-bit to 10-bit. That includes going from 4:2:0 to 4:4:4.
Here is an illustration that can help you understand this concept:
This is a photo of a rose reflected in a water droplet. It's 4 megapixels, and it looks pretty nice on my 27-inch monitor.
Now what if I take a photo of my monitor with a Red Helium 8K camera? This is a beast of a camera. I shot the photo of the rose a few years ago with a cheapo Canon Rebel DSLR, worth about $250 today. The Red Helium setup costs about $50,000; it's 35 megapixels, it's raw, and it has one of the best camera sensors ever produced.
Which will be a better image: the 4-megapixel photo, or the 35-megapixel photo?
The Red camera has more megapixels, right? It's raw, and it has all of the Red digital magic, right? But since I'm using my high-resolution camera to take a photo of the photo, not a photo of the rose, my fancy new image will never be better than the first one. I have a file that is technically higher-resolution, but it does not capture any more of my subject (the rose) than the first one did.
This is what you're doing when you're transcoding. You are making a copy of a copy, taking a photo of a photo. If you use a fancy high-resolution camera to take a photo of a photo, you will be able to preserve pretty much all of the information in the original image, but you won't be able to add anything more.
The big caveat is that if you are doing any processing, any transformation of the image (adding a LUT, for instance), then you definitely do want to transcode into a higher-quality codec, which will retain the new information. But if you're not altering the image, then transcoding will not make your image somehow "better."
I'll explain this in some more detail in another post, coming soon.
A Real-World Example
Let's say you're editing a documentary that captured 4K footage on a Sony A7S II, recording in the long-GOP version of XAVC-S. Not ideal for editing. If they shot 40 hours of footage for your feature-length documentary, you'd end up with about 2.7TB of camera files, which can easily fit on one hard drive (though you've made other, separate backups, of course!).
You could convert that to a high-quality, not-very-lossy codec for a Direct Intermediate workflow, maybe ProRes 422 HQ in 4K.
The downside is that you would need about 12.7TB to store all of those ProRes files. You would have to use an expensive RAID setup in order to have easy access to all of that footage in one project, at least $1,000. Peanuts for a big facility, but a significant investment for a solo editor.
So you might decide to use a proxy workflow instead and transcode your files to ProRes 422 Proxy in 4K. Then your footage would only take up 2.8TB, just barely more than your captured footage. You can then easily edit off of a single hard drive, and your workflow gets a lot simpler. (For instructions on how to calculate bitrates and file sizes, check out this article: The Simple Formula to Calculate Video Bitrates.)
Let's say that you're working with another editor who's on the other side of the country. You might decide to transcode the footage even further down, to ProRes 422 Proxy in HD, which would shrink your footage to just 640GB. That becomes feasible to send over the Internet if you have a fast connection (about 18 hours to download on an 80Mbps connection).
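The storage and transfer figures in this example are easy to sanity-check. A rough sketch, using the bitrates from the ProRes flavor list earlier in the article plus an assumed ~36 Mb/s for ProRes 422 Proxy at HD/24fps (an assumption, not an official figure):

```python
def gb_per_hour(mbps):
    # megabits/second -> decimal gigabytes per hour
    return mbps * 3600 / 8 / 1000

HOURS = 40  # hours of documentary footage

# ProRes 422 HQ in 4K at ~707 Mb/s (the Direct Intermediate option)
print(round(gb_per_hour(707) * HOURS / 1000, 1), "TB")  # → 12.7 TB

# ProRes 422 Proxy in 4K at ~145 Mb/s
print(round(gb_per_hour(145) * HOURS / 1000, 1), "TB")  # → 2.6 TB

# ProRes 422 Proxy in HD at an assumed ~36 Mb/s
proxy_hd_gb = gb_per_hour(36) * HOURS
print(round(proxy_hd_gb), "GB")                         # → 648 GB

# Download time for that HD proxy set on an 80 Mb/s connection
seconds = proxy_hd_gb * 1000 * 8 / 80  # GB -> megabits, then / (Mb/s)
print(round(seconds / 3600), "hours")                   # → 18 hours
```

The results land right on the article's figures: roughly 12.7TB for the intermediate, under 3TB for the 4K proxies, and an overnight download for the HD proxies.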
When the edit is all done, you just re-link your project back to the original camera files and export. Even though you and your remote editor have been working in a pretty lossy codec, the final export bypasses it, so you don't lose any quality.
The Codec you Color-Correct
OK, now you've got your video edited, and it's time for color-correction. Everything we're talking about here will apply whether you are color-correcting inside your editing application or sending your edit to dedicated color-correction software.
The big question at this point is whether you want to color-correct straight on the original camera files, or whether you want to transcode. If you did a proxy/offline edit, then you definitely don't want to color-correct the proxy files, because they have lower image quality. In order to make good decisions about color, you need the highest-quality image you have available, because you need to be able to see exactly what you have to work with.
So we need to work with high-quality images, and we have a few different options:
A. Grade the camera files
This is certainly a simple option. If you did a proxy edit, you can relink to the camera files for the finishing process and go to town. This will give you maximum image quality, but remember how the camera files can be slow to work with? The camera files may slow down the process a little, but depending on the software you use and the amount of work you need to do, you might decide that the simplicity is worth a little bit of potential slowdown. If you have a short edit without a lot of complexity, then this can be a great and easy workflow.
Let's assume that the color-correction slowdown bothers you, so you need a codec that is easier to work with. You could transcode all of your footage to a high-image-quality codec, link to those files, and then start doing your color-correction. But… that kind of defeats the purpose of a proxy workflow, doesn't it? We used proxies because we didn't want to deal with the large files that a full-quality transcode would create. Fortunately, there is another option.
B. Consolidate and Transcode
If you used a proxy/offline workflow for the edit but don't want to color-correct the camera files, one good option is to relink to the camera files, consolidate your project, and then transcode to a high-end codec.
When you consolidate a project, your editing software makes a copy of your project along with a copy of the media, but only the particular files that you actually used in your sequence. So if you shot 7 takes but only used one of them in the edit, it'll only copy that one take. This cuts down on the storage a lot, which comes in handy at this stage. You can also consolidate down even further so that you only keep the specific portions of each take that you actually used in the edit, discarding the rest. In this case, the software will usually include a few seconds before and after each take (called "handles"), in case you want to add a fade or motion tracking.
Now you can take this new consolidated project (after relinking to the originals), transcode all of these files to a very high-quality, high-bitrate codec, and start color-correcting. This is different from the Direct Intermediate workflow because you are not transcoding all of your footage, just the footage that made it into the final edit, which might be 1/20th or 1/50th the length of the footage that you originally shot. Now it doesn't sound so bad to transcode to a high-bitrate codec, because you don't have to store so much of it. Even at ProRes 4444 in 4K, a full-length feature film will only be about 2TB. Pretty manageable.
Now you can finish your film with top-quality images and speedy processing, on a hard drive that fits in your pocket. Woot!
C. Carry on the Direct Intermediate
The third option is to go with the Direct Intermediate editing workflow, in which case you're good to go. You already transcoded all of your files to a high-quality codec before you started editing, so you can just carry on with those same files for color-correction. That is also convenient because those files are good for editing, for color-correcting, and for VFX (see below).
If you are handing off the project to an external colorist or VFX person, then you can either give them all of your high-quality footage (potentially annoying because of the size), or you can use the same consolidation tip that we used above. Handing off the consolidated project can help you move faster and save your colorist's time as well.
In addition to the simplicity of the Direct Intermediate workflow (you use only one set of files), you have one other advantage: going back and forth between editing and color-correcting is simpler.
Imagine you've finished your proxy edit: you consolidate and transcode, send it off to your colorist, and then decide that you need to make some changes to the edit. Now you've got to go back to the proxies to make the edit, then re-consolidate and re-send the footage. The mechanics of that can get pretty messy. In a high-end post-production workflow, there is usually a "lock" on the edit so that the finishing processes can start. This means that (unless bad things happen) you will try very hard not to go back and make changes to the edit. But hey, bad things happen, so it's best to be prepared.
And now we find another good reason for a Direct Intermediate edit. If you are going to do some of your color work and your editing work simultaneously, or at least are going to go back and forth a couple times, then it can be simpler to use one codec for both. This is especially convenient if you are doing your editing and finishing in the same software package (or set of packages, e.g. Creative Cloud).
The Codec you send to VFX
If you're doing any VFX work, then you're probably going to need to send files to another program (potentially on another machine, for another artist). If you're doing all of your VFX work in your editor (which is becoming more and more viable for simple jobs), then you can skip this section. Just use the same codec as you used for your color-correction.
For most of us, however, we need to set up a "round-trip" process that sends clips from the editor to the VFX software and then back again when they're finished. This happens on a shot-by-shot basis, so you're not sending the entire sequence to VFX like you probably did for color grading. When in the process you send your shots to VFX depends very much on the particular workflow. Some people will send to VFX after the edit is locked and color-correction is finished, but time pressure can force you to start sending off shots before then.
If you're editing in Premiere Pro and doing mild VFX in After Effects with Dynamic Link, then you can also skip this section; Dynamic Link automatically does the round-tripping for you. If you're doing a lot of VFX work, you may still want to use the techniques in this section, because Dynamic Link can be a little finicky with too many projects. Adobe is always working on those bugs, however, so it's partly up to personal taste.
Go big or go home
In the VFX process, you tend to use very high-end (high-bitrate) codecs for two main reasons. The first is simply that VFX artists need all the information you can give them in order to do their job well. VFX artists are some of the pickiest people when it comes to codecs, and for good reason. Everyone wants high-quality images, but image issues can often pose more of a problem for VFX than they do for editing, color-correction, and final export.
Many tasks in VFX work require very detailed analysis of the image on a pixel-by-pixel level, which most editors never need to do. For instance, if you're doing a green-screen extraction, you want the edge between your character and the greenscreen to be as clean as possible. We've all seen awful greenscreen shots where the edges of the character are all choppy or blurred out. These problems often arise because of image compression artifacts that are invisible to the naked eye. 4:2:2 or 4:2:0 color subsampling, for instance, has almost no visible impact on the image. The human eye cares mainly about contrast and seldom notices low color resolution, but the greenscreen extraction process relies primarily on color values. If the codec has thrown away a large portion of the color values by using 4:2:0 chroma subsampling, a good color key may be impossible.
The second reason why you want to use high-end codecs is generation loss. In the VFX process, your file will probably be compressed multiple times. You will compress it once when you send it off. And then, if it needs to pass between multiple specialists, it may be compressed two or three more times before it comes back. When a file is compressed multiple times, we call that generation loss. If you are using a low-end codec, the image will get progressively worse each time you re-compress it. One of the great things about the really high-quality codecs is that you can compress them a couple of times without losing much quality. While it's always better to avoid compressing a video multiple times, if you're using very high-quality codecs, you're usually fine.
Some high-end VFX workflows will only use lossless compression for this reason. The good news is that your VFX shots are usually only a few seconds per clip, which means your file sizes will be small even with high-end codecs. So go big! If you captured 4:4:4 in the camera, then definitely send 4:4:4 to VFX. Otherwise, I would pick a top-of-the-line 4:2:2 codec (ProRes 422 HQ or DNxHR HQX).
And of course, you should always communicate beforehand with VFX about what codec to send. If you think they're making a bad choice, send them this article.
The Codec You Export
Now you've finished the editing, the color, and the VFX: you're ready to export. You will usually do the final export from the software that you used for color-correction, using the codec that you used in the color-correction process.
If your client is in the media business, they should know what codec they want, so you can skip the rest of this section!
If your client is not a video expert, they may not know what they want, so you need to make some decisions for them. Most of the time, your client is going to want a video to upload to YouTube and/or other social media sites. You may be tempted to choose a codec that is good for streaming on the Internet. But you would be wrong! The reason: these sites do not stream the same file that you upload; they compress the file *again* before streaming it to your viewers, and you have absolutely no control over the settings they use. This means that if you upload a low-quality codec, we get the scenario we talked about earlier, where we're taking a low-quality photo of a low-quality photo. Bad! Avoid!
As a general rule, if you want the best-quality result, you should upload the best-quality source. They're going to compress it again anyway, so giving them more data to work with can't hurt, right? If you have a fast enough connection, you could upload a ProRes 422 file. Some people have reported slightly (only slightly) better results when uploading ProRes instead of the recommended h.264. If you are delivering a file to a client for them to upload to YouTube, then I would not give them ProRes, since you don't know what kind of bandwidth they're going to have. Fortunately, these sites tend to publish recommended upload specs (just Google it). I personally take whatever bitrate they recommend and multiply it by about 1.5x to 2x.
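As a sketch of that multiply-the-recommendation rule (the 45 Mb/s "recommended" figure here is hypothetical, not a quoted platform spec): scale the published bitrate by 1.5x to 2x, then check what that does to the file size before deciding whether the upload is realistic.

```python
def upload_size_gb(mbps, minutes):
    """Decimal GB for a file at a constant bitrate and duration."""
    return mbps * 1_000_000 / 8 * minutes * 60 / 1e9

recommended_mbps = 45  # hypothetical platform recommendation for 4K
target_mbps = recommended_mbps * 1.5
print(target_mbps, "Mb/s")                    # → 67.5 Mb/s
print(upload_size_gb(target_mbps, 10), "GB")  # → 5.0625 GB for a 10-min video
```

Five gigabytes for ten minutes is fine on a fast office connection, but it explains why you might not hand that same file to a client on hotel Wi-Fi.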
Your client may also want a file that they can embed directly into their website (though I would dissuade them, if you can). Generally speaking, you want a very heavily compressed h.264. If you're curious what a good bitrate is, my reasoning is that if anyone knows the sweet-spot bitrate, it's YouTube. I periodically download a video from YouTube, check its bitrate, and use that as a benchmark.
If the video is not public, they may also want a small file that they can email or link directly for their own clients to download. In these cases, it may be appropriate to deliver two separate files, especially if it's a long video; the file they should upload to YouTube will be too large to email conveniently. In this case, I will usually down-res the file and compress it very heavily. You also have to be realistic and decide whether your client will actually understand the difference between the two files.
If I need to deliver more than one file, I will usually put "HD" in one filename and "small" or "not HD" in the other. If you try to describe the different codecs to them, I can almost guarantee they'll have forgotten the difference by next week, but they'll probably remember what HD and "not HD" mean.
The Codec You Archive
You've delivered the file(s) to your client, so now you can sit back and relax… almost.
As any working professional in this industry knows, the day when you deliver the finished product to your client is often not the last time you touch a project. Sometimes a client wants to go back and change something weeks later, or they want a higher-quality codec, or maybe you want to add it to your personal reel. In any of these cases, you may have moved on to a different machine or to different software, making it a headache to open up the original project and re-export.
This is where it comes in handy to have a great archive of the finished project in an extremely high-quality codec. If your client requested a very high-quality codec for delivery, then you're generally set: just keep a copy of that file and you're good. If they need a delivery codec that's not tip-top quality, however, it's always good to do your own export with a codec that is lossless, or as close to lossless as you can afford given the space it will take. I will generally export to a very high-bitrate 4:4:4 codec, either DNxHD/HR or ProRes.
Got questions? Comment away.
I actually read every comment. This article is a work in progress, and I'll be updating it with more explanations and examples based on your feedback. If you have some private feedback or questions, shoot me an email: david at frame dot io.
Want to write for the Frame.io Blog? Email me: blog at frame dot io.
Many thanks to Larry Jordan, Shane Ross, and Philip Hodgetts for their input on this article!
Q:
Where can I get a proper hot chocolate in Firenze-Venezia-Trieste?
I am right now in Firenze but will spend two days in Venice and two days in Trieste and I'd like to drink a proper, thick, tasty hot chocolate but everyone says they don't make it in the summer. Any ideas? In Firenze I have a one week bus pass so I'm not limited to any area.
A:
Hot Chocolate and the Italian Summer
As many, many waiters must have told you, hot chocolate is not exactly a summer drink. I do understand that those same establishments probably serve hot coffee and tea in the summer, however tea is somewhat of a more multi-season drink whereas coffee is a daily drink for most Italians.
In my opinion, if you wish to maximise the likelihood of finding hot chocolate, you should target specialised establishments, or Cioccolaterie (literally chocolate-places in Italian). Your search keywords should be something like cioccolata calda XXX or cioccolateria XXX, where XXX is the city you wish to search in.
Hot Chocolate in Firenze
Searching around on the internet for cioccolata calda Firenze yields many results (see here and here for two sample reviews in Italian). The consensus however seems to point towards Rivoire which is known for making their own chocolate, as well as serving thick hot chocolate beverages. Another option could be Cioccolateria Hemingway. None of these specify if they serve hot chocolate in the summer. Nevertheless it might be worth trying them since, being Cioccolaterie, they are definitely more likely to have hot chocolate on their menus.
A:
There are quite a few places to try in Trieste.
I suggest trying Chocolat first. From there you can walk towards Piazza Unità and check out a local Torrefazione (they serve tea and coffee as well). Behind Piazza Unità there's Gelato Marco, a gelateria (ice cream place) where they serve ice cream covered in hot chocolate!
After visiting the old city centre, walk along viale XX settembre to try Madison. This is a rather long pedestrian area filled with restaurants, bars, and gelaterie. In the late afternoon it gets crowded for aperitivo.
Everything I linked is within walking distance.
A:
I just learned of the existence of VizioVirtù in Venice. Look at this:
And the text suggests it's served in every season:
Each season has its chocolate drink. True, as this drink is exquisite also if served cold. Is your mouth watering? Try the milkless and sugarless one.
BS Beaver Creek Plush Queen 12\
Building a family area that feels warm and relaxed is important. A welcoming living room with Beaver Creek Furniture can make friends, guests, or relatives who come to visit feel at home. Wouldn't it be pleasant to spend time talking together in this room? You can start planning the interior design of your living room by choosing the right seating.
Choosing a suitable couch that you love will support the look of the living room. The chair design you choose should correspond with the theme of the house itself. Beaver Creek Furniture might seem odd in a modern living room filled with minimalist, contemporary chairs. A contemporary feel comes through more strongly if you choose a chair with simple details rather than heavy carving.
Besides being used for entertaining guests, a living room is often where you read books or simply relax on a Sunday. A chair with a good design can support the overall appearance of the room. However, the style has to go hand in hand with comfort. We recommend that you avoid sacrificing comfort just to get the design you like.
There are many sophisticated designs that also offer comfort, so don't settle for the first option, and don't buy a couch for its looks alone. Besides the look, Beaver Creek Furniture seating should satisfy your comfort needs first.
If your home is small and the room doubles as a family room, you should consider whether the piece is durable enough to be used all the time. Once your practical needs are met, you can look at the design and style. It is advisable to select a design that is not tied to a particular era, so that even when trends change, your guest chairs won't look dated or become boring.
There are many material options you can choose from, from solid wood to a timber or metal frame covered with fabric and foam. A classic look is reinforced if the piece is placed in a room with a modern classic style, while wood in a minimalist modern room can add a cozy, natural atmosphere.
Q:
GetType returns different information than the is operator uses
Cannot explain what is going on in the following program. GetType is returning the type I want to return and not the original one. Does that mean we cannot rely on GetType? The is operator is right, though. Can anybody please explain it in detail?
using System;

namespace ConsoleApplication2
{
    public class MyClass
    {
        public Type GetType()
        {
            return typeof(Program);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            MyClass mc = new MyClass();
            if (mc.GetType() == typeof(Program))
            {
                Console.WriteLine("Confused.");
            }
            if (mc is Program)
            {
                Console.WriteLine(mc.GetType()); // Don't get inside the if. Why?
            }
        }
    }
}
Update: I am reading the book CLR via C# 3rd edition. In chapter 4 (2nd page) when it explains different methods in System.Object it says
"The GetType method is nonvirtual,
which prevents a class overriding this
method and lying about its type"
While I agree with the first statement, I am lying about MyClass's type, aren't I?
A:
Please take care of warnings, as they exist for a reason. Your code compiles with the following warning:
Warning 1 'ConsoleApplication2.MyClass.GetType()' hides inherited member 'object.GetType()'. Use the new keyword if hiding was intended.
which means GetType() is non-virtual and you are writing a new, unrelated GetType() method that the CLR will never call.
A:
The is operator is implemented in terms of the as operator and ultimately uses the isinst IL instruction. And of course this instruction doesn't know about the non-virtual GetType method that you defined in some class in your inheritance hierarchy.
To understand this "confusing" behavior, let's "implement" our own version of the is operator:
public class MyClass
{
    public Type GetType()
    {
        return typeof(Program);
    }
}

class Program
{
    // This is an oversimplified implementation,
    // but I want to show the main differences.
    public static bool IsInstOf(object o, Type t)
    {
        // Calling GetType on System.Object:
        // "o is T" succeeds when t is assignable from o's runtime type.
        return t.IsAssignableFrom(o.GetType());
    }

    static void Main(string[] args)
    {
        MyClass mc = new MyClass();

        // Calling MyClass's non-virtual version of GetType
        if (mc.GetType() == typeof(Program))
        {
            // Yep, this condition is true
            Console.WriteLine("Not surprised!");
        }

        // Calling System.Object's non-virtual version of GetType
        if (IsInstOf(mc, typeof(Program)))
        {
            // Nope, this condition isn't met,
            // because mc.GetType() != ((object)mc).GetType()!
        }
        Console.ReadLine();
    }
}
A:
Object.GetType is not a virtual method. So "mc is Program" effectively uses the runtime type that Object.GetType reports, and not your method.
Goddess worship
Goddess worship may refer to:
the worship of any goddess in polytheistic religions
worship of a Great Goddess on a henotheistic or monotheistic or duotheistic basis
Hindu Shaktism
the neopagan Goddess movement
Wicca
Dianic Wicca | {
"pile_set_name": "Wikipedia (en)"
} |
Member
I had a series 1 Watch in 38mm and went yesterday and got the series 3 in 42mm and I'm unsure if it's too big on me. They didn't have a 42mm on display at AT&T so I didn't realize how much bigger it was. What do you all think? Should I go down to the 38mm and pay a $45 restocking fee or does the 42mm look ok? Thanks in advance for any help!
Attachments
Genius
I had a series 1 Watch in 38mm and went yesterday and got the series 3 in 42mm and I'm unsure if it's too big on me. They didn't have a 42mm on display at AT&T so I didn't realize how much bigger it was. What do you all think? Should I go down to the 38mm and pay a $45 restocking fee or does the 42mm look ok? Thanks in advance for any help!
"pile_set_name": "Pile-CC"
} |
Are you close to banging your head on the wall because your printer is not working like it's supposed to? I know how frustrating it can be when, all of a sudden, your reliable printer stops working the way it did before.
This error code is generally associated with the Dell 922 printer. Usually you might see the cartridge moving back and forth and slamming itself against the printer. You need to tighten the strings on the back of your printer, after which it should work fine.
After checking all the physical parts of the printer, check whether it is connected to your PC. To make sure the printer is connected properly, follow the USB cable from the back of the printer to the back of the computer.
Paper jams. This is by far the most common problem encountered when printing. Causes include using crumpled paper and printer roller problems. When paper is jammed, stop the printing operation, turn off your printer and pull the paper out in the direction of the printing path; pulling it backwards may damage your printer badly. Make sure there aren't any pieces of paper left inside the printer, and turn it back on. It should then work properly.
If you have a driver that previously worked properly but suddenly has a problem, you may need to update your printer driver. You can also fix most driver problems this way. To update the driver, go to the Windows Update website or the printer manufacturer's website and check whether there is an updated driver; if there is, download it.
If the USB driver is corrupt or outdated, your system will have trouble detecting the printer and no icon will show on the taskbar. This is one of the common causes of the "Printer Not Responding" problem. In this case, to fix the problem, you only need to update your USB driver.
This is an error message asking you to restart the printer. You need to switch off the printer and then connect it via the USB cord. Press and hold the cancel button, then power on the printer again. On the LCD display you should see a message saying 'ready for download mode'. You may then need to initialize the firmware update process.
We've seen that if you have a version of Word, there are free tools that can help you convert a document into a PDF ebook. On all other platforms, and even on Windows, you can use LibreOffice, which will do the job perfectly well.
"pile_set_name": "Pile-CC"
} |
The 747 airplane from Burning Man is about to move off of public land -- as soon as it gets over one more hurdle: soft playa.
"We're part way through the move," said Big Imagination Camp CEO Ken Feldman. "Sometimes things take longer than expected. We're doing our very best. Just trying to get to that final spot.
"We started this, we're going to finish it," he added.
Almost three weeks after Burning Man 2018, a partially reconstructed 747 that was converted into a nightclub art car was still on the playa due to a series of logistical breakdowns.
But there is a saying at Burning Man that "the playa provides." It means that when someone needs something, the temporary community of 70,000 people mysteriously provides exactly what that person requires to survive.
So, despite widespread condemnation of the situation on social media, volunteers from unconnected camps returned to the Black Rock Desert two hours north of Reno to help drag the multi-ton aircraft off public land and onto a parcel of private land.
"It will get done," said a member of the Distrikt camp who goes by Zikae Hellfire at Burning Man.
Members of the Mayan Temple art car camp flew in from Mexico City, while other members of the Distrikt theme camp drove nine hours from Oregon to help. Members of Sunset Lounge camp also came. Burning Man organization's Department of Public Works took a break from deconstructing the temporary Black Rock City to help also.
"I know it sounds strange to go through that trouble but you start caring for people," Hellfire said. "I would hope if I needed help to get through something people would come."
Volunteers such as Hellfire, who were not part of the original 500 volunteers who built and shipped the airplane onto the playa, helped lay down rubber mats meant to protect the desert floor.
But they ran into a problem Friday afternoon, the day the U.S. Bureau of Land Management required the airplane to be moved. The rubber mats were too thin to hold the airplane.
At the edge of the dry lake bed, the hard-packed desert floor turns into softer, wet sand. The mats collapsed and the front landing gear started to dig in, dropping the airplane into the ground. The team spent several hours digging it out, then used two semi-trucks, a six-wheel drive military truck and Hellfire's pick-up to pull the airplane back onto solid ground.
"And this is another challenge," Feldman said. "Itâs taken us four years to get a 747 500 miles across the desert. Weâre not going to lose on this last half mile. Weâre never walking away."
Feldman expected the move to take six hours, but now realizes it'll take several days.
"We got into a predicament but we were all really determined not to leave the airplane like that," Hellfire said. "Now itâs off that spot."
The volunteers filled the holes in the playa created by the landing gear, flattened the dirt and raked it.
"We're working toward leaving no trace," Feldman said.
At the end of the day Friday, Feldman and the volunteers were exhausted. They realized they had to use larger, heavier rubber mats to prevent the airplane from sinking, but that will take longer and require more equipment.
Feldman said he will spend Saturday working with a contractor to rent gear before continuing work on Sunday through early next week.
"This has been maybe one of the most difficult things I've ever done in my life," Feldman said. "I really miss my girlfriend. I miss all my friends. And I'm fighting pneumonia. I missed my mom's birthday. But you know what, sometimes you gotta get 'er done. You gotta suck it up and do it."
Big Imagination Camp posted an apology to the community on its website last week during all the uproar.
"We acknowledge that our exodus plan has not gone as expected and we apologize for any undue stress this has caused to the greater burner community, for the Burning Man Org, and especially the BLM with whom weâve always sought good relations and whom we are working closely with to resolve this situation as rapidly as possible," according to the front page of their website.
Regardless, some people questioned whether the airplane should be allowed back to Burning Man in the future. Burning Man had no comment on any rumors or the 747 situation, though Feldman said the organization has been supportive during all the chaos.
But some people felt otherwise.
While the airplane sat waiting for moving permits from the BLM last week, people tagged it with "S.S. Hubris" and "Ur MOOP as (expletive) yo!" The acronym "MOOP" means "Matter Out Of Place," a moniker for litter on the playa.
Feldman maintains the 747 is not MOOP because Big Imagination always had a plan to move it off public land. But he decided to keep that part of the graffiti for a little tongue-in-cheek humor.
"This is not an invitation to vandalize our property," Feldman said. "We made a conscious decision to leave that on because of the irony."
Mike Higdon is the city life reporter and temporary Burning Man 2018 reporter, at the RGJ and can be found on Instagram @MillennialMike, on Facebook at Mike Higdon, Reno Life and on Twitter @MikeHigdon. | {
"pile_set_name": "OpenWebText2"
} |
Cytotoxic analog of somatostatin containing methotrexate inhibits growth of MIA PaCa-2 human pancreatic cancer xenografts in nude mice.
Nude mice bearing xenografts of MIA PaCa-2 human pancreatic cancer cell line were treated for 4 weeks with AN-51, a somatostatin octapeptide analog D-Phe-Cys-Tyr-D-Trp-Lys-Val-Cys-Thr-NH2 (RC-121) containing methotrexate attached to the alpha-amino group of D-Phe in position 1. Control groups of mice received saline, RC-121 or methotrexate. Drugs were given in equimolar doses by daily s.c. injections. After 7 days of treatment with 25 micrograms/day of AN-51, tumor growth was completely inhibited although the treatment had to be suspended because of toxic side effects, especially on the gastrointestinal tract, accompanied by major weight loss of the animals. Mice were allowed to recover for 1 week and treatment was continued with 12.5 micrograms/day AN-51. After 2 weeks of additional therapy, tumor volume, percentage change in tumor volume, and tumor weights were significantly decreased, compared with controls, only in the group treated with AN-51. Methotrexate and RC-121 also inhibited tumor growth, but their effects were not statistically significant. AN-51 retained its hormonal activity and decreased serum growth hormone levels in mice. Binding affinity of AN-51 for somatostatin receptors on MIA PaCa-2 cells was found to be 2.5-times lower than that of parent compound RC-121. This is the first report on inhibition of human pancreatic cancer growth in vivo by somatostatin analogs carrying cytotoxic radicals. | {
"pile_set_name": "PubMed Abstracts"
} |
INTRODUCTION {#s1}
============
Whilst oncogenesis is driven by a multitude of complex, non-programmed molecular events, there are a number of key features of this process, not least of which is the aberrant activation of genes that would normally be silenced in a given tissue context \[[@R1]\]. The so called cancer/testis (CT) or cancer germline (CG) genes are one such group of genes that are frequently activated in a range of different human cancer types \[[@R2]-[@R4]\]. These genes have expression normally restricted to the human germline, many being testis-specific \[[@R2]-[@R4]\]. They have come under intense scrutiny since their original identification as the immunological privilege of their normal germline setting means that the proteins they encode can elicit an immunological response when aberrantly produced in cancers and so have exceptional potential in immunotherapeutics \[[@R5]\]; for example, the *NY-ESO-1* gene product has been successfully targeted in an adoptive therapeutic approach to melanoma therapy \[[@R6]\].
Despite this interest, remarkably little is known about the normal germline function of most CT genes. Moreover, it has been demonstrated that germline genes in *Drosophila melanogaster* are required for the oncogenic process and that the human orthologues of these *Drosophila* genes have up-regulated expression in a range of human cancers, although the functional implications for oncogenesis of this up-regulation remains unclear \[[@R7],[@R8]\]. Interestingly, down-regulation of a number of CT genes in human cancer cells results in perturbation of cellular proliferative potential \[for example, see [@R9],[@R10]\]. These findings open up the exciting possibility that CT genes might encode functions that are required for tumour homeostasis and it has recently been proposed that tumours become 'addicted' to these germline factors \[[@R11],[@R12]\], and recently, meiotic factors have been shown to contribute to telomere maintenance in cancer cells via the ALT pathway \[[@R13], [@R14]\]. The full extent of germline gene requirement is unclear, but these findings expose a new therapeutic opportunity by directly targeting the tumour-associated function of the CT gene products. Additionally, a number of studies have revealed another clinically important feature of CT genes; their expression appears to drive drug resistance as depletion of the gene products results in enhanced sensitization to anti-cancer drugs \[for example, see [@R15]\] expanding the therapeutic potential of this important class of cancer genes.
Germline gene expression profiling has also recently been demonstrated to have applications in prognostics and patient stratification. In a seminal study, Rousseaux and co-workers demonstrated that expression of a sub-set of germ line genes in some lung cancers delineated patients with aggressive, metastasis prone tumours with poor prognosis \[[@R16]\]; they extended this by indicating that this cohort of patients might benefit from a drug therapeutic regime that had previously been dismissed for more general use in lung cancer patients, indicating that profiling patients for expression of a small sub-set of germline genes could be used in therapeutic decision making. Understanding germline gene expression is also critical as drug-induced augmentation of expression has also been postulated to be a potential enhancer of immunotherapeutics, the rationale being that further up-regulation of a tumour-specific antigen will result in enhanced immunological targeting of the tumour \[for example, see [@R17]\].
Taking all these factors together reveals the importance of understanding the regulatory mechanisms for somatic germline gene silencing and their aberrant activation in tumours. To date, the regulation of a number of CT genes has been studied and it has been demonstrated that DNA methylation of regulatory elements, such as promoter-associated CpG islands plays a fundamental role in the somatic silencing of these genes and the hypomethylation of these regulatory DNA regions in cancers is linked to gene activation \[for example, see [@R18]-[@R23]\], whereas gene body hypomethylation has been linked to gene down regulation in cancers \[[@R24]\]. Expression of these genes also becomes activated or further up-regulated upon enforced hypomethylation by the DNA methyltransferase inhibitor 5-aza-2â²-deoxycytidine (5-aza-CdR), and to date, all CT genes studied have up-regulated expression in response to this chemotherapeutic agent, indicating a commonality in the mechanistic pathway for somatic CT gene silencing \[for example, see [@R18]-[@R23]\].
To date, most of the CT genes whose expression has been studied are located on the X chromosome (X-CT genes) and belong to large paralogous gene families \[[@R2]-[@R4]\]. Recently, a computational pipeline combining expressed sequence tag and microarray meta-analyses of the human orthologues of mouse spermatocyte-specific genes revealed a large cohort of new CT genes that were expressed in a broad spectrum of cancer types \[[@R25]-[@R29]\]. Unlike the X-CT genes, the majority of these genes are autosomally encoded and are single copy. To date, the clinical potential of these genes remains largely unexplored. In this current study, analysis of the expression of a small sub-set of these genes reveals a novel feature of CT genes, which indicates that some have a unique mechanism for somatic transcriptional silencing. This is a significant finding as these genes and their associated gene products have an increased prominence in clinical applications and hence the sub-classification of CT genes will play an important role in diagnostics, stratification and therapeutics.
RESULTS {#s2}
=======
All CT genes studied to date (mostly X-CT genes) require hypermethylation of regulatory DNA sequences for somatic silencing and are activated by the hypomethylating agent 5-aza-CdR. Given the clinical potential of enhanced up-regulation of immunogenic CT antigens, we set out to explore whether a similar DNA hypermethylation silencing mechanism was operating in the recently identified autosomally encoded CT genes \[[@R25],[@R27]\]. To do this, we selected a small sub-group of these genes that remained transcriptionally silenced in the colorectal cancer cell lines HCT116 and SW480 (*ARRDC5, C4orf17, C20orf201, DDX4, NT5C1B, STRA8, TDRD12*). We also selected two previously characterized CT genes (both X-CT genes) that remained transcriptionally silenced in these two cell lines to serve as exemplar controls for hypermethylation regulated CT genes, *SSX2* and *GAGE1*. To determine whether the novel CT genes are silenced via hypermethylation mediated mechanisms, similar to the characterized X-CT genes, we treated the two cell lines with the DNA methyltransferase inhibitor 5-aza-CdR to determine whether inhibition of DNA methyltransferase activity can activate these genes. Following 5-aza-CdR treatment of HCT116 and SW480 we made cDNA and carried out RT-PCR and agarose gel electrophoresis analysis of the products. The two X-CT genes were activated from the silent state with relatively low levels of 5-aza-CdR (0.1 ÎŒM; Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}). Some of the novel, autosomally encoded CT genes were similarly activated (*C20orf201, DDX4, STRA8, TDRD12*), although *C20orf201* and *DDX4* required a slightly higher 5-aza-CdR concentration for activation (0.5 ÎŒM; Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}).
Additionally, activation of *STRA8* requires slightly higher concentrations of 5-aza-CdR in SW480 (Figure [2](#F2){ref-type="fig"}) than HCT116 (Figure [1](#F1){ref-type="fig"}), which indicates subtle regulatory differences between tumour cell types. However, surprisingly, three genes (*ARRDC5, C4orf17, NT5C1B*) remained tightly transcriptionally silenced, even at high concentrations of 5-aza-CdR in both cell lines (15.0 ÎŒM; Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}). This unexpected result reveals an important distinction in the way CT gene silencing is epigenetically regulated, revealing a hypermethylation-independent pathway. Interestingly, the X-CT genes (*GAGE1, SSX2*) remained activated for a prolonged period following removal of the hypomethylating agent, as did the autosomally encoded CT genes that were activated with the lowest concentration of 5-aza-CdR (*STRA8, TDRD12*) (Figure [3](#F3){ref-type="fig"}); however, the other two autosomally encoded CT genes, *C20orf201* and *DDX4*, which required slightly higher concentrations of 5-aza-CdR for activation, reverted to the silent state relatively soon after removal of the hypomethylating agent (Figure [2](#F2){ref-type="fig"}). This indicates a much greater transcriptional elasticity to the methylation-dependent silencing mechanisms for some CT genes.
![A sub-group of germline genes remain refractory to activation by epigenetic modulating agents\
RT-PCR was used to analyse activation of a group of germline genes that are normally silenced in the cancer cell line HCT116 (an additional colorectal cell line gives similar results \[see Supplementary Figure S1)\]. Whilst a cohort of known and newly identified germline genes become activated at low doses of the demethylating agent 5-aza-CdR (*GAGE1, SSX2, STRA8, TDRD12*) and others become activated with slightly higher levels of 5-aza-CdR (*C20orf201, DDX4*), some remain tightly silenced, even at high concentrations of 5-aza-CdR (*ARRDC5, C4orf17, NT5C1B*) (left column). The histone deacetylase inhibitor trichostatin A (TSA) has little activating potential (other than for *GAGE1* and *STRA8*, indicating the primary epigenetic regulation is mediated by DNA methylation (right column). Untreated and DMSO treated cells exhibit no activation of any of the genes analysed for expression activation. The chromosomal location of each gene is provided in parentheses to the right of the gene name. RT-PCR of β*ACT* shows uniform sample quality and loading.](oncoscience-01-0745-g001){#F1}
{#F2}
{#F3}
To determine whether the silencing of hypermethylation-independent genes (*ARRDC5, C4orf17, NT5C1B*) was mediated via histone deacetylation we also treated the HCT116 and SW480 cells with the histone deacetylase (HDAC) inhibitor trichostatin A (TSA) (Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}) or a combination of 5-aza-CdR and TSA (data not shown). Remarkably, all three genes (*ARRDC5, C4orf17, NT5C1B*) remained tightly silenced under these highly transcriptionally permissive conditions.
DISCUSSION {#s3}
==========
CT antigens are potentially powerful targets for therapeutics, including immunotherapeutics. However, intratumour CT antigen gene expression is often heterogeneous and so there will be a lack of uniformity for any targeting strategy. To overcome this, it has been demonstrated that treatment of tumours with agents that deregulate epigenetic silencing, such as agents that result in DNA hypomethylation can generate a uniform expression of CT antigen genes within a tumour \[for example, see [@R18]-[@R23]\]. However, to date, the epigenetic regulation mechanisms for CT gene silencing has been restricted to a limited number of X-CT genes, all of which are activated by hypomethylating agents. Here we extended the analysis of epigenetic regulation of clinically important biomarkers and reveal that there is a cohort of CT genes that is not activated in response to hypomethylating agents (or HDAC inhibitors). This regulation is not simply due to a lack of methylation target CpG islands within the promoter regions as at least two of the genes (*ARRDC5* and *NT5C1B*) have reported CpG islands in their transcriptional promoter regions \[[https://genome.ucsc.edu/](http://genome.ucsc.edu/)\]. These observations indicate that there is a very broad range of mechanisms controlling CT gene regulation. This has implications for CT gene selection for clinical targeting strategies. Moreover, the mechanistic regulatory pathways might indicate sub-groups of CT genes that are co-regulated, which has implications for the study of these genes both as biomarkers, potential oncogenes and/or encoders of drug targets. Additionally, it has been demonstrated that some CT genes are required for tumour cell proliferation. Turning off these genes could reduce the proliferation-mediated burden of tumours, restricting their disease effect and/or enhancing other therapeutic approaches.
MATERIALS AND METHODS {#s4}
=====================
Maintenance and culturing of human colorectal cell lines HCT116 and SW480 {#s4_1}
-------------------------------------------------------------------------
HCT116 and SW480 cell lines were obtained from the European Collection of Cell cultures. Both lines are tested for authenticity once per annum by LGC StandardsTM (authentication tracking number 710236782). HCT116 cells were grown in McCoy\'s 5A medium with GLUTAMAX^TM^ (Invitrogen, GIBCO 36600) and SW480 cells were grown in Dulbecco\'s modified Eagle\'s medium with GLUTAMAX^TM^ (Invitrogen, GIBCO 61965). Both media types were supplemented with 10% foetal bovine serum (Invitrogen; GIBCO 10270). Cells were incubated in humidified incubators at 37°C in a 5% CO~2~ atmosphere.
Cell cultures were tested for mycoplasma infection using the LookOut^TM^ Mycoplasma PCR Detection kit (Sigma Aldrich, MP0035). Epigenetic modulating agents were added at the concentrations indicated in the main text. Treatment with 5-aza-CdR and TSA was for 48 hours (72-hour treatment yielded identical results).
RNA extraction, cDNA synthesis and polymerase chain reaction {#s4_2}
------------------------------------------------------------
Total RNA was isolated using Trizol reagent (Invitrogen; 15596-026). Confluent cells were homogenised in Trizol (1 ml Trizol / 5×10^6^ cells) and held at room temperature (RT) for 5 minutes. Chloroform (200 ÎŒl per 1 ml of Trizol) was added to each sample and the homogenate was vigorously shaken for 15 seconds, followed by incubation for 5 minutes at RT. Samples were then centrifuged at 12,000 *g* for 15 minutes at 4°C. The aqueous layer was then removed to a new Eppendorf tube and 500 ÎŒl of isopropanol was added. After incubation at RT for 10 minutes, the samples were centrifuged again at 12,000 *g* for 20 minutes. The supernatant was removed and the pellet was washed with 70% ethanol and re-centrifuged at 7,500 *g* for 5 minutes at 4°C. The supernatant was discarded again and the cell pellet was left to dry at RT for 5-10 minutes, and then 100 ÎŒl RNase free water containing 2 ÎŒl DNase I (Sigma; D5319) was added to each RNA preparation sample. The samples were incubated at 37°C for 10 minutes and then at 75°C for 10 minutes. RNA quality and concentration was measured with a NanoDrop (ND 1000) spectrophotometer.
Total RNA from normal human testis tissues was supplied by Clontech (Catalogue number; 636643) {#s4_3}
----------------------------------------------------------------------------------------------
Total RNA was used to synthesise cDNA using a SuperScript III First Strand Synthesis Kit (Invitrogen; 18080-051). Samples (1-2 ÎŒg) of total RNA were used according to the manufacturer\'s protocol. PCR using β*ACT* primers was used to check the cDNA quality.
Gene sequences were obtained from the National Center for Biotechnology Information (<http://www.ncbi.nlm.nih.gov>). Primers were designed to span more than one intron where possible. Primers were designed using Primer 3 software (<http://primer3.ut.ee/>). Primer sequences are provided in the Supplementary Materials.
For PCR amplification, 2 ÎŒl of diluted cDNA was supplemented with 25 ÎŒl of BioMix^TM^ Red (Bioline; BIO-25006) and 1 ÎŒl each of the forward and reverse primer, and the final volume was adjusted with ddH~2~O to 50 ÎŒl. PCR was initiated with a pre-cycling melting step at 96°C for 5 minutes, followed by 40 cycles of denaturing at 96°C for 30 seconds, annealing at 58-62°C for 30 seconds (specific annealing temperatures are provided in the [table](#T1){ref-type="table"} below) and extension at 72°C for 30 seconds; the final extension was at 72°C for 5 minutes. All PCR products were evaluated on 1% agarose gels stained with ethidium bromide.
###### PCR Primer sequence
Gene Forward primer sequence (5′-3′) Reverse primer sequence (5′-3′) PCR annealing Temp. (°C)
------------- --------------------------------- --------------------------------- --------------------------
*βACT* TGCTATCCCTGTACGCCTCT CGTCATACTCCTGCTTGCTG 58.0
*GAGE1* TAGACCAAGGCGCTATGTAC CATCAGGACCATCTTCACAC 58.4
*SSX2* CAGAGATCCAAAAGGCC CTCGTGAATCTTCTCAGAGG 58.4
*ARRDC5* CAACAAGGCAGACTACGTGC GCGAGTGTGCATGATCTCAC 60.5
*C4orf17* CCTCATCCCAGAAGAGTCTG CTGCTGCTGGTTCCATTGAG 60.5
*C20orf201* ATCTGCTCTTCGGCGACCTG ACACTCTCAGTCGCCGTCAC 60.0
*DDX4* GTGCTACTCCTGGAAGACTG CCAACCATGCAGGAACATCC 60.5
*NT5C1B* CGGCAGGAAAATCTACGAGC CTGTAACCAGGTAGGTCCTG 60.5
*STRA8* TGGCAGGTTCTGAATAAGGC GAAGCTTGCCACATCAAAGG 58.4
*TDRD12* GAGCTAAAGTGCTGGTGCAG CTGAGGTCACCGACAATACC 60.5
AA was funded by the Government of the Kingdom of Saudi Arabia. JF was supported by the National Institute of Social Care and Health Research (grant HS/09/008). RJM, EGV and JAW were funded by Cancer Research Wales. RJM was funded by North West Cancer Research (project grants CR888 and CR950).
| {
"pile_set_name": "PubMed Central"
} |
Every 98 seconds someone is sexually assaulted in the United States.
Parenting Time Center
Umbrella Tree, a safe place for supervised visitation and exchanges.
Shelter
Temporary Emergency Shelter
Child Advocacy
Support for Children
Violence Intervention Project serves victims of domestic and sexual violence through crisis call response, emergency assistance, advocacy support, shelter, supervised visitation services and other housing support. A trained advocate is available 24 hours a day to listen, help assess needs and safety, and help locate needed resources or help in deciding if reporting a rape or an assault is the right option. We know how hard it is to take the first step or to be scared for someone you know who is in an abusive relationship. We're here to listen and provide options in moving forward.
All VIP services are free and confidential and open to people of all genders in Pennington, Kittson, Marshall, Red Lake and Roseau Counties.
Services Include:
A safe and confidential place for victims to share their story in a private and non-judgmental environment
Safety planning
Assistance with protection orders
Accompaniment to court proceedings
Support and accompaniment during a sexual assault exam at the emergency room
Support Groups
Supervised Visitation and Safe exchanges (located in Thief River Falls)
Technology Safety Alert
"pile_set_name": "Pile-CC"
} |
Wisdom of the Ages: 60 Days to Enlightenment, by Wayne W. Dyer. On Amazon.com: free shipping on qualifying offers; a national bestseller. In this ebook, bestselling author and personal development guru Wayne W. Dyer shows us how to apply the insight of 60 of the world's greatest
"pile_set_name": "Pile-CC"
} |
15th Annual Benz Beerfest
Tickets available in-store or online at benzbeerfest.com
$40 – VIP Ticket (Limited Quantities)
$30 – General Admission
$5 in-store discount (per ticket) from March 1st – April 30th
VIP tickets come with a special Benz Beerfest tasting glass and it allows you to come in to sample beer an hour before it is open to the public.
Live music
1:00pm-2:00pm (VIP hour)
2:00pm-5:00pm (General Admission) | {
"pile_set_name": "OpenWebText2"
} |
Surveyor Nuclease: a new strategy for a rapid identification of heteroplasmic mitochondrial DNA mutations in patients with respiratory chain defects.
Molecular analysis of mitochondrial DNA (mtDNA) is a critical step in diagnosis and genetic counseling of respiratory chain defects. No fast method is currently available for the identification of unknown mtDNA point mutations. We have developed a new strategy based on complete mtDNA PCR amplification followed by digestion with a mismatch-specific DNA endonuclease, Surveyor Nuclease. This enzyme, a member of the CEL nuclease family of plant DNA endonucleases, cleaves double-strand DNA at any mismatch site including base substitutions and small insertions/deletions. After digestion, cleavage products are separated and analyzed by agarose gel electrophoresis. The size of the digestion products indicates the location of the mutation, which is then confirmed and characterized by sequencing. Although this method allows the analysis of 2 kb mtDNA amplicons and the detection of multiple mutations within the same fragment, it does not lead to the identification of homoplasmic base substitutions. Homoplasmic pathogenic mutations have been described. Nevertheless, most homoplasmic base substitutions are neutral polymorphisms while deleterious mutations are typically heteroplasmic. Here, we report that this method can be used to detect mtDNA mutations such as m.3243A>G tRNA(Leu) and m.14709T>C tRNA(Glu) even when they are present at levels as low as 3% in DNA samples derived from patients with respiratory chain defects. Then, we tested five patients suffering from a mitochondrial respiratory chain defect and we identified a variant (m.16189T>C) in two of them, which was previously associated with susceptibility to diabetes and cardiomyopathy. In conclusion, this method can be effectively used to rapidly and completely screen the entire human mitochondrial genome for heteroplasmic mutations and in this context represents an important advance for the diagnosis of mitochondrial diseases. | {
"pile_set_name": "PubMed Abstracts"
} |
Self Storage Units & Facilities in Dawson Springs, KY
Find Movers and Helpers
in Your Area
If you are looking for Dawson Springs storage facilities, then you have found the right place. Finding outdoor and indoor storage units in Dawson Springs has never been so easy. Moverscorp.com allows you to compare different Dawson Springs, Kentucky self storage units in minutes.
Computers are valuable and also very delicate items. Whether you hire professionals or move yourself, it is important to be extremely careful when packing a computer. Here are a few tips on how to prepare a computer.
How to Pack a Computer
Self storage units are a cost-effective and safe way to store household and business items. Consider the following things before storing your items. Prepare Your Items For Storage
Moving artwork and antiques requires professionals who have the necessary knowledge and equipment to handle delicate pieces. We do not recommend cutting corners or hiring a company that is not specialized in delicate items. However, if the items are not of high price you can consider packing and moving them yourself. Moving Antiques and Artwork
When hiring a professional residential moving company or just helpers, it's recommended to create an inventory list. An inventory list represents all of the household items that you plan on taking with you; this is done for your own protection. Moving Inventory
The moving process is always hard on someone, but when you try to cut corners and do it cheaply it can cost you even more. Like everybody else, you don't want to spend a lot, but on the other hand, you don't want to hire someone who is not experienced. Can cheap movers actually be trustworthy? Cheap Movers
Moverscorp.com has provided these listings, which are public information drawn from the local internet directories and from our partner companies. To inquire about storage prices, availability, and unit sizes please call the phone numbers listed above. Remember, the cheapest storage unit is not always the best value. Secure and clean storage facility prices are usually higher than other places, but it's totally worth it.
"pile_set_name": "Pile-CC"
} |
Q:
Solving a system of linear equations (to determine a boundary)
I'm puzzled how to programmatically (in R) solve the following linear system:
Given $\mathbf{R} \in \mathbb{R}^{n \times n}$, $\mathbf{R}^{-1}$, and a constant $c$ what is the solution to $\mathbf{u} \in \mathbb{R}^n$ with $\mathbf{u}^T = (u_1, \ldots, u_n)$ for
$\mathbf{u}^T\mathbf{R}^{-1}\mathbf{u} = c$
Let's take the simple case for $n = 2$. Fixing a component, say $u_1$, the solution for $u_2$ can be found by explicitly writing down $u_1(u_1r_{11} + u_2r_{21}) + u_2(u_1r_{12}+u_2r_{22}) - c = 0$ and solving for $u_2$ using the quadratic formula. But for more dimensions there must be a better way.
I guess I need to bring it into the form $\mathbf{Ax = b}$ in order to use solve. But I haven't figured out exactly how.
Right now I'm stuck at the following: let
$\mathbf{U} = \left(\begin{matrix}
u_1 & \ldots & 0\\
\ldots & \ldots & \ldots\\
0 & \ldots & 1
\end{matrix}\right)$ and $\mathbf{v}^T = (1, 1, \ldots, u_n)$ with $\mathbf{Uv=u}$, then I would have the fixed terms separated from the variable one ($u_n$) for which I need to determine the value. I can put it into the equation above, but how do I proceed? Is this the right way?
The background is the answer I posted in How to draw confidence areas. I would like to explicitly compute the "exact" threshold boundary. I understand that I need to solve this linear system but I cannot get it quite right yet. I'm unsatisfied with the two possible solutions: 1. using the quadratic formula to hard-code the solution and 2. using an optimize routine. The first one would only work for 2 dimensions and the second one would be unreliable (because up to two different solutions are possible for every x).
Furthermore, I think there should be a concise solution.
edit (12.03) Thank you for the response. I played with the solution but still have some questions.
So as far as I understood, compute_scale would compute my decision boundary. Since I have two possibilities for $\gamma$, i.e. positive and negative, I can compute the critical values. However, if I plot them, I only get half the truth. I tinkered, but haven't figured out how to compute the complete boundary. Any advice?
compute_stat <- function(v, rmat) {
transv <- qnorm(v)
return(as.numeric(transv %*% rmat %*% transv))
}
compute_scale <- function(v, rmat) {
gammavar <- sqrt(threshold / (v %*% rmat %*% v))
return(c(pos = pnorm(v * gammavar), neg = pnorm(v * (-gammavar))))
}
Rg <- matrix(c(1, .1, .2, 1), ncol = 2)#matrix(c(1,.01,.99,1), ncol = 2)
Rginv <- MASS::ginv(Rg)
gridval <- seq(10^-2, 1 - 10^-2, length.out = 100)
thedata <- expand.grid(x = gridval,
y = gridval)
thestat <- apply(thedata, 1, compute_stat, rmat = Rginv)
threshold <- qchisq(1 - 0.8, df = 2)
colors <- ifelse(thestat < threshold, "#FF000077", "#00FF0013")
#png("boundry2.png", 640, 480)
plot(y ~ x, data = thedata, bg = colors, pch = 21, col = "#00000000")
theboundry <- t(apply(thedata, 1, compute_scale, rmat = Rginv))
points(pos1 ~ pos2, data = theboundry, col = "blue")
points(neg1 ~ neg2, data = theboundry, col = "purple")
#dev.off()
A:
I understand your problem to be given an $n$ by $n$ matrix $R$ and scalar $c$, find a vector $\mathbf{u}$ such that $\mathbf{u}'R^{-1}\mathbf{u}=c$.
First observe:
You have $n$ unknowns (since $\mathbf{u}$ is an $n$ by 1 vector)
$\mathbf{u}'R^{-1}\mathbf{u}=c$ is a single equation. (It isn't a system of equations.)
In general, there won't be a unique solution $\mathbf{u}$. Almost any vector will work if it is properly scaled.
Solution:
Pick some arbitrary vector $\mathbf{a}$. Let $\mathbf{u} = \lambda \mathbf{a}$. Then $\mathbf{u}'R^{-1}\mathbf{u}=c $ becomes $\lambda^2 \mathbf{a}'R^{-1}\mathbf{a} = c$. Solving for the scalar $\lambda$ we have $\lambda = \sqrt{\frac{c}{\mathbf{a}'R^{-1}\mathbf{a}}}$.
For any vector $\mathbf{a}$ such that $\mathbf{a}'R^{-1}\mathbf{a} \neq 0$, we'll have the solution:
$$\mathbf{u} = \lambda \mathbf{a}\quad \text{where} \quad \lambda = \sqrt{\frac{c}{\mathbf{a}'R^{-1}\mathbf{a}}}$$
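The closed-form answer above is easy to check numerically. The question is about R, but the algebra is language-agnostic; here is a minimal NumPy sketch of the same scaling trick (`solve_u` and the example values are illustrative, not from the original post):

```python
import numpy as np

def solve_u(a, R_inv, c):
    """Scale the direction vector a so that u' R^{-1} u = c."""
    q = a @ R_inv @ a              # the quadratic form a' R^{-1} a
    if q <= 0:
        raise ValueError("direction has non-positive quadratic form")
    lam = np.sqrt(c / q)           # lambda = sqrt(c / (a' R^{-1} a))
    return lam * a

# Example: the 2x2 matrix from the question's R code (matrix(c(1,.1,.2,1), ncol=2))
R = np.array([[1.0, 0.2],
              [0.1, 1.0]])
R_inv = np.linalg.inv(R)
c = 3.0

u = solve_u(np.array([1.0, 2.0]), R_inv, c)
print(u @ R_inv @ u)               # equals c up to floating point
```

Sweeping the direction `a` around the unit circle (or sphere, for larger $n$) and scaling each one this way traces out the full boundary $\{\mathbf{u} : \mathbf{u}^T R^{-1} \mathbf{u} = c\}$, which is what the confidence-region plot in the question needs.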
| {
"pile_set_name": "StackExchange"
} |
# Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved.
#
# !!! IMPORTANT !!!
#
# Before you edit this file, please keep in mind that contributing to the project
# translations is possible ONLY via the Transifex online service.
#
# To submit your translations, visit https://www.transifex.com/ckeditor/ckeditor5.
#
# To learn more, check out the official contributor's guide:
# https://ckeditor.com/docs/ckeditor5/latest/framework/guides/contributing/contributing.html
#
msgid ""
msgstr ""
"Language-Team: Lithuanian (https://www.transifex.com/ckeditor/teams/11143/lt/)\n"
"Language: lt\n"
"Plural-Forms: nplurals=4; plural=(n % 10 == 1 && (n % 100 > 19 || n % 100 < 11) ? 0 : (n % 10 >= 2 && n % 10 <=9) && (n % 100 > 19 || n % 100 < 11) ? 1 : n % 1 != 0 ? 2: 3);\n"
msgctxt "Toolbar button tooltip for inserting an image or file via a CKFinder file browser."
msgid "Insert image or file"
msgstr "Įterpti vaizdą ar failą"
msgctxt "Error message displayed when inserting a resized version of an image failed."
msgid "Could not obtain resized image URL."
msgstr "Nepavyko gauti pakeisto dydžio paveiksliuko URL."
msgctxt "Title of a notification displayed when inserting a resized version of an image failed."
msgid "Selecting resized image failed"
msgstr "Nepavyko pasirinkti pakeisto vaizdo"
msgctxt "Error message displayed when an image cannot be inserted at the current position."
msgid "Could not insert image at the current position."
msgstr "Nepavyko įterpti vaizdo į dabartinę vietą."
msgctxt "Title of a notification displayed when an image cannot be inserted at the current position."
msgid "Inserting image failed"
msgstr "Nepavyko įterpti vaizdo"
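As an aside, the `Plural-Forms` header above packs Lithuanian's four plural categories into one C-style ternary expression. A hand translation into Python (a sketch for illustration; real gettext consumers parse and evaluate the expression directly) makes the four buckets easier to read:

```python
def lt_plural_form(n):
    """Lithuanian plural rule from the Plural-Forms header (4 forms)."""
    if n % 10 == 1 and (n % 100 > 19 or n % 100 < 11):
        return 0   # 1, 21, 31, ... (but not 11, 111, ...)
    if 2 <= n % 10 <= 9 and (n % 100 > 19 or n % 100 < 11):
        return 1   # 2-9, 22-29, ... (but not 12-19)
    if n % 1 != 0:
        return 2   # fractional amounts (never reached for integer n)
    return 3       # 0, 10-20, 30, 40, ...
```

gettext evaluates the header expression with integer arithmetic, so for plain counts only forms 0, 1 and 3 ever occur.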
| {
"pile_set_name": "Github"
} |
Finnish Fanconi anemia mutations and hereditary predisposition to breast and prostate cancer.
Mutations in downstream Fanconi anemia (FA) pathway genes, BRCA2, PALB2, BRIP1 and RAD51C, explain part of the hereditary breast cancer susceptibility, but the contribution of other FA genes has remained questionable. Due to FA's rarity, finding recurrent deleterious FA mutations among breast cancer families is challenging. The use of founder populations, such as the Finns, could provide some advantage in this. Here, we have resolved the complementation groups and causative mutations of five FA patients, representing the first mutation-confirmed FA cases in Finland. These patients belonged to complementation groups FA-A (n = 3), FA-G (n = 1) and FA-I (n = 1). The prevalence of the six FA-causing mutations was then studied in breast (n = 1840) and prostate (n = 565) cancer cohorts, and in matched controls (n = 1176 females, n = 469 males). All mutations were recurrent, but no significant association with cancer susceptibility was observed for any: the prevalence of FANCI c.2957_2969del and c.3041G>A mutations was even highest in healthy males (1.7%). This strengthens the exclusive role of downstream genes in cancer predisposition. From a clinical point of view, current results provide fundamental information on the mutations to be tested first in all suspected FA cases in Finland.
"pile_set_name": "PubMed Abstracts"
} |
Dispatches from the 10th Crusade
Entries from What's Wrong with the World tagged with 'Statues'
Apparently, now that the left and the right have joined forces to start removing Confederate flags from public display, some commenters have started suggesting we need to go after street signs and statues next! This is basically insane. As bad... | {
"pile_set_name": "Pile-CC"
} |
Article content
The Conservative government is examining a whole new business model to effectively buy, sell and use the time and skills of federal employees: Meet the BURO-crat.
The government has been planning a pilot project at some federal agencies that would apply "market principles" to more efficiently use federal bureaucrats and help smooth out busy and slow work periods, according to government records.
The three-year pilot project is based on a model called "The Buro," which the government explains is "like the Euro for bureaucrats," according to a presentation deck prepared for the federal government's deputy ministers' committee on policy innovation. (The Euro is the currency used by many European Union countries.)
The Buro concept, which would face its first test at the government's regional economic agencies, would "establish an electronic market and currency (the Buro) to allow bureaucrats to 'sell' their time to each other in a pinch," say the documents, obtained by the Ottawa Citizen using the access to information law.
"pile_set_name": "OpenWebText2"
} |
Where's My Size?
Customer Reviews for Freya Core Underwire Sports Bra
Filter By:
Star Rating
Size
Height
Size: 34H
Excellent. It is so hard to find an actually secure bra of any kind, much less a sports bra, at this size. But this one works! I feel secure, and while it's a touch constricting, it's not nearly as bad as your average sports bra/torture device.
Annie from
Height: Petite (5'3" and under)
Age: 40s
Posted: June 26,2017
Size: 38E
Some reviews indicated that this bra produces a pointy result, but I took a chance and bought it. The cups and band are just right for me, with lots of uplift for my age, and comfortable. The fabric is less stretchy; if you prefer stretchier fabric, this bra might be too stiff for you. As to the shape of the cups, I guess I have much to fill them, so they aren't as pointy as some others said.
audrey from Manhattan/KS/USA
Height: Petite (5'3" and under)
Age: 60s
Posted: June 5,2017
Size: 36F
I discovered Freya several years ago. These "sport" bras are so comfortable that I wear them every day. They give awesome support and definition to your figure. I have had the best service ordering from HerRoom. They are always dependable.
CJ from Holden/Mo/USA
Height: Average Height (5'4"-5'8")
Age: 70s
Posted: June 2,2017
Size: 34K
Was really looking forward to a sport bra in my size. But, unfortunately, this bra didn't work for me. My normal size is a 34HH, but knowing sports bras run small I ordered up a size to 34J. Was too small so I returned & ordered 34K. That was too small too. Liked the 5 hooks, & was comfortable, wish the cups had fit.
HerRoom Response: We suggest trying a 36J in this bra because the bra is high impact the band might be running a little tighter than a usual bra.
Kim from California
Height: Average Height (5'4"-5'8")
Age: 50s
Posted: May 16,2017
Size: 34DD
This bra provides the absolute best support for my girls. Bounce is kept to a minimum as I dance in Zumba class. I love the fit and the durability. It is my favorite sports bra.
Michelle K. from North Jersey
Height: Petite (5'3" and under)
Age: 50s
Posted: May 10,2017
Size: 36DDD
Great support and comfort
TheUndies.com from
Height: Petite (5'3" and under)
Age: Teen
Posted: April 14,2017
Size: 36
Holds perfectly with good support. No excess under the arm.
TheUndies.com from
The underwire popped out in less than a month. I hand wash and line dry. No support when I exercise
Freya Response: We're sorry this bra didn't work for you. If you felt like it wasn't supportive enough we recommend going down a band and up in the cup. It is also very important to rotate your bras, even sports bras. Your bras need at least a day to rest to help them last longer.
Jacqueline from 11412
Freya
AA4002
Tomima's Tip
"This bra is designed for high-impact. However, some women find it too tight. You may want to consider going up a band size and down a cup size, but you will gain some breast bounce."
This amazing high impact sports bra shapes the bust while offering the ultimate support and comfort. Multi-part cups shape the breasts. Made with CoolMax fabric, which dries 5 times faster, wicks moisture away from the body and feels soft against the skin.
Welcome to HerRoom, the world's premier online lingerie authority. Founded by Tomima Edmark in 1998, HerRoom has grown from bras and panties to include swimwear, sleepwear, and beyond. We offer over 250 brands, from classic brands you love like Wacoal, Chantelle, and Vanity Fair, to some you may not have heard of (Fantasie, Prima Donna, Elomi) but are sure to love. We work hard to provide you with as much information as possible, including extensive fit information, measured drawings of each bra, and honest customer reviews. Whether you're a 28A or a 58J, we're confident you'll find something you love at HerRoom!
"pile_set_name": "Pile-CC"
} |
The physiological effects of vasopressin when used to control intra-abdominal bleeding.
Vasopressin was used in ten critically ill patients with massive intra-abdominal bleeding unresponsive to conventional therapy. Vasopressin controlled bleeding in four patients, three of whom had continued to bleed following laparotomy for haemostasis; in two other patients, bleeding was reduced. All the patients were intensively monitored throughout the period of the vasopressin treatment; this enabled other physiological effects of vasopressin to be documented and reported. Mean arterial pressure and central venous pressure increased following the administration of vasopressin and there was a decrease in heart rate. Core body temperature rose significantly. Although all the patients had impaired renal function before receiving vasopressin, five had a prompt diuresis following its administration. Eight patients died but only three of intra-abdominal bleeding; two patients survived to leave hospital. Four patients had post-mortem evidence of ischaemia in the heart, liver and gastrointestinal tract; vasopressin may have contributed to the development of this. Vasopressin may have a place in the management of patients with life-threatening intra-abdominal haemorrhage but its use should be confined to those patients in whom conventional therapy has failed. | {
"pile_set_name": "PubMed Abstracts"
} |
Jed Lowrie's highly anticipated debut in a Mets uniform was finally here.
For the Mets' 141st game of the season, Lowrie was activated Saturday afternoon and he struck out in his lone plate appearance in the Mets' 5-0 loss to the Phillies at Citi Field. The day felt like it would never come, as Lowrie battled a relentless string of injuries that led to setback after setback each time he felt close to healthy.
"Jed is available," Mets manager Mickey Callaway said of Lowrie, who struck out as a pinch hitter in the bottom of the fourth with two runners on. "We're going to see – he's going to probably need to continue to progress. So today, he's available off the bench to pinch hit and then we'll see where he is tomorrow. He's put together some really solid at-bats, it sounds like, on his rehab and we're confident he's going to come up there and give us a good at-bat."
Callaway said that "anything is possible" with how he plans to use Lowrie going forward, noting that they'll have to take things day by day and evaluate how everybody on the roster is playing before inserting him into the lineup.
"There might be an opportunity when we face a left-handed pitcher," Callaway added. "He's the only guy that plays second base that would be up there hitting right-handed. Maybe he can start in that scenario if everything is going good."
Since signing with the team run by his former agent Brodie Van Wagenen in January, Lowrie suffered several injuries starting as early as spring training, when he first felt a pain in his left knee. He opened the year on the injured list before enduring injuries to his left hamstring and then his right calf, which prolonged Lowrie's arrival.
Lowrie, a 35-year-old switch hitter coming off an All-Star season with the A's, played in 12 minor league rehab games before he was activated on Saturday. He batted a solid .275, going 11-for-40 with two home runs, as he worked his way toward his first appearance in a Mets uniform.
"I'm not going to waste my time thinking about the past," Lowrie said. "I'm going to focus on today and what I can do in the future to help."
Lowrie declined to say he felt 100 percent, but says he's progressing and hopes to gain enough confidence in his health to be able to help the Mets – who were still tied four games back of the second NL wild-card spot – in their pursuit of the postseason. It's been a frustrating journey for him, but going forward he plans to prioritize his health.
"I would say both [getting back to how I was playing before and preventing these injuries from happening again]," Lowrie said of his plan going forward. "I think preventing injury and getting to a point where you're able to play and take the pounding that you put on your body when you play every day at the major-league level."
It was the first time Lowrie had dealt with such a slew of injuries in his career. He added that he didn't expect his recovery to be as long as it has been, but with the continuous setbacks there was nothing he could do.
It's no secret to anyone, or even to Lowrie himself, that he isn't the young player he once was, but he still feels capable of making an impact with his new team.
"That's the whole point; that's why I'm here."
"pile_set_name": "OpenWebText2"
} |
Originally published at: https://www.datacamp.com/community/blog/web-development-data-science
The data science field is still hot, and so are the programming languages used for it: R's popularity has been increasing every year, and Python especially is gaining more and more importance in the data science industry. This is not only because this general-purpose language stands out for its readability and has a relatively low and very gradual learning curve, but mostly thanks to the tools and concepts originally built by scientists and sysadmins. Because even though there seems to be a culture gap between those who use Python for scientific purposes and those who use it for more conventional purposes such as system administration and web development, their strength is that they can work together. And this has been proven in recent years: the solid foundations of the Python language have been enriched with the creation and development of packages that help data scientists and data science teams tackle complex data problems.
As a result, there are users who look to move more towards using Python for data science. And those who are learning data science often wonder what else they can do with this popular programming language. | {
"pile_set_name": "OpenWebText2"
} |
KATY Skull Rose Print Dress 8-14
Skull rose prints are back with a boom. The Skull Rose Print Dress 8-14 is a bandeau dress that is a must-have for any girl. Combine it with flats for the day, or dress it up in the evening with high heels and a statement handbag to complete the look!
"pile_set_name": "Pile-CC"
} |
Q:
Is the complex form of the Fourier series of a real function supposed to be real?
The question said to plot the $2\pi$ periodic extension of $f(x)=e^{-x/3}$, and find the complex form of the Fourier series for $f$.
My work: $$a_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-x/3}e^{-inx}dx=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-x(1/3+in)}dx$$
$$=\frac{e^{\pi(\frac{1}{3}+in)} - e^{-\pi(\frac{1}{3}+in)}}{2\pi(\frac{1}{3}+in)}=\frac{1}{\pi(\frac{1}{3}+in)}\sinh(\pi(\frac{1}{3}+in))$$
$$\therefore F(x)=\frac{3\sinh(\pi/3)}{\pi}+\sum_{n=-\infty}^{\infty}\frac{3\sinh(\pi/3+in\pi)}{\pi+3in\pi}\cos(nx)$$
But, this is not always real-valued. Is it possible for the complex Fourier series of a real-valued function to have imaginary coefficients, or is my algebra just wrong?
A:
You are using the formula for the complex Fourier coefficients, which are usually denoted by $c_n$. These are usually complex, and they lead to the representation:
$f_f(x) = \sum_{n=-\infty}^\infty c_n e^{inx}$
This is still (more or less) the original function and is therefore real.
There is also a transformation into the sine-cosine representation:
$f_f(x) = a_0 + \sum_{n=1}^\infty a_n \cos(nx) + b_n \sin(nx)$
Where the $a_n$ and $b_n$ are real if the original function was real.
You can even go back and forth between the 'real' and the 'complex' coefficients. This comes from the fact that you can express the sine as well as the cosine as
$\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$ and
$\cos(x) = \frac{1}{2}(e^{ix}+e^{-ix})$.
Or the other way around which might be more familiar:
$e^{ix} = \cos(x)+i\sin(x)$
You can find all of this including the formulas for converting the real coefficients $a_n,b_n$ to the complex ones $c_n$ and vice versa here: http://mathworld.wolfram.com/FourierSeries.html
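For completeness, the conversion formulas referenced above are short enough to state here (standard results, for $n \geq 1$):
$$a_0 = c_0, \qquad a_n = c_n + c_{-n}, \qquad b_n = i\,(c_n - c_{-n})$$
and conversely
$$c_n = \tfrac{1}{2}(a_n - i b_n), \qquad c_{-n} = \tfrac{1}{2}(a_n + i b_n).$$
In particular, for a real-valued $f$ one has $c_{-n} = \overline{c_n}$, so $a_n = 2\operatorname{Re}(c_n)$ and $b_n = -2\operatorname{Im}(c_n)$ are automatically real, even though the individual $c_n$ are complex.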
| {
"pile_set_name": "StackExchange"
} |
ZTE ZMAX 2 Smartphone
ZTE USA announced the release of ZMAX 2, a second generation, budget-priced Android smartphone for $150 with AT&T. The new phone features an updated design with the same long-lasting battery and 5.5-inch HD screen at an affordable price. The phone's price-to-feature ratio is outstanding and the flexibility of a prepaid plan only sweetens the deal.
The ZTE ZMAX 2 delivers one of the best viewing experiences with a large 5.5-inch HD display, Dolby Digital Plus Audio, a long-lasting 3,000 mAh removable battery and 4G LTE speeds.
The 8-megapixel rear and 2-megapixel front cameras capture clear, crisp images and video. Thanks to 16 GB of internal memory you have plenty of space to store content and download the latest apps to edit or share your photos and videos.
The ZTE ZMAX 2 is a premium phablet that features the latest Android 5.1 Lollipop operating system and a 1.2 GHz quad-core Snapdragon processor.
Price and Availability
The ZTE ZMAX 2 can be purchased online at att.com and at select AT&T retail locations for $149.99 beginning September 25, 2015. | {
"pile_set_name": "Pile-CC"
} |
Molecular Engineering of Phenylbenzimidazole-Based Orange Ir(III) Phosphors toward High-Performance White OLEDs.
To develop B-O complementary-color white organic light-emitting diodes (WOLEDs) exhibiting high efficiency and low roll-off as well as color stability simultaneously, we have designed two orange iridium(III) complexes by simply controlling the position of the methoxyl group on the cyclometalated ligand. The obtained emitters mOMe-Ir-BQ and pOMe-Ir-BQ show good photophysical and electrochemical stabilities with a broadened full width at half-maximum close to 100 nm. The corresponding devices realize highly efficient electrophosphorescence with a maximum current efficiency (CE) and power efficiency (PE) of 24.4 cd A-1 and 15.3 lm W-1 at a high doping concentration of 15 wt %. Furthermore, the complementary-color all-phosphor WOLEDs based on these phosphors exhibit good performance with a maximum CE of 31.8 cd A-1, PE of 25.0 lm W-1, and external quantum efficiency of 15.5%. Particularly, the efficiency of this device is still as high as 29.3 cd A-1 and 14.2% at the practical brightness level of 1000 cd m-2, giving a small roll-off. Meanwhile, extremely high color stability is achieved by these devices with insignificant chromaticity variation. | {
"pile_set_name": "PubMed Abstracts"
} |
1
00:00:32,332 --> 00:00:37,337
âªâªïœ
2
00:00:37,337 --> 00:00:45,245
âªâªïœ
3
00:00:45,245 --> 00:00:49,349
ïŒã«ã¡ã©ã®ã·ã£ãã¿ãŒé³ïŒ
4
00:00:49,349 --> 00:00:53,336
ïŒå°å®®å±±å¿ä¿ïŒè²¡åžã¯çŸéã®ã¿ã
ã«ãŒãé¡ãå
蚱蚌ãäžåãªãã
5
00:00:53,336 --> 00:00:56,373
ïŒæç¬å¥åŸïŒèº«å
ã®ãããææåã¯
äžåãªããâŠã
6
00:00:56,373 --> 00:00:59,426
åç¡ãã®æ»äœã£ãŠãããã
7
00:00:59,426 --> 00:01:01,426
ïŒæç¬ïŒããã
8
00:01:03,363 --> 00:01:05,348
ïŒæµ
èŒªçŽæš¹ïŒ
ããïŒãã©ããããã§ããïŒ
9
00:01:05,348 --> 00:01:07,417
ïŒå çŽå«å€ªéïŒãã«ã¿ãéããŠãã®ã«
å
šç¶é£²ãã§ãªãã
10
00:01:07,417 --> 00:01:09,417
æ¬åœã§ããã
11
00:01:10,503 --> 00:01:12,503
ã©ãããŠé£²ãŸãªãã£ããã ããïŒ
12
00:01:13,306 --> 00:01:17,360
ïŒéæ³ãéïŒ
æšæ¥ãç·ã®å£°ãèãããã§ããïŒ
13
00:01:17,360 --> 00:01:20,380
ããããããªãã§ãã
倿¹ïŒæéãã«â
14
00:01:20,380 --> 00:01:24,351
ç¬ã®æ£æ©ã«åºããã§ããã©
ãã®äžã®éã«æ¥ãæã«âŠã
15
00:01:24,351 --> 00:01:26,353
âªè¿ããŠãããïŒ
16
00:01:26,353 --> 00:01:28,338
ïŒéæ³ïŒãè¿ããŠããããâŠã
17
00:01:28,338 --> 00:01:31,441
ïŒæ©ç¬å·çæŸïŒæ»å ã¯
é éšã匷æããäºã«ããè³æ«å·ã
18
00:01:31,441 --> 00:01:34,311
æ»äº¡æšå®æå»ã¯
æšæ¥ã®ååŸïŒæååŸã
19
00:01:34,311 --> 00:01:37,330
ã¡ããã©äž»å©Šã
ç·ã®å£°ãèããæå»ã«äžèŽãããªã
20
00:01:37,330 --> 00:01:39,366
ãç²ãããŸã§ãã
ãããç²ãããŸã§ãã
21
00:01:39,366 --> 00:01:43,353
çŸå Žã®çŒ¶ã³ãŒããŒã®
åæçµæãåºãŸããã
22
00:01:43,353 --> 00:01:46,356
ã³ãŒããŒã®äžãããèŽæ»éã®
ã¹ãªãŒããŒãæ€åºãããŸããã
23
00:01:46,356 --> 00:01:48,358
ãïŒãã¹ãªãŒããŒïŒ
24
00:01:48,358 --> 00:01:51,394
ã¹ãªãŒããŒã¯ç²æ«ç¶ã®åæéº»è¬ã
25
00:01:51,394 --> 00:01:54,347
å€éã«æåãããš
å¿æãè¡å§ãäœäžããŠâ
26
00:01:54,347 --> 00:01:56,299
ç ãããã«æ»ããã£ãŠèšãããŠãã
27
00:01:56,299 --> 00:01:59,336
æè¿ã忬§çµç±ã§å
¥ã£ãŠãã
æ°ãã麻è¬ã§â
28
00:01:59,336 --> 00:02:02,372
ãŸã ãããã»ã©
åºåã£ãŠãªãã¯ããªãã ãã©ã
29
00:02:02,372 --> 00:02:05,342
æµ
茪å
猶ã³ãŒããŒã«ä»çããŠãæçŽã¯ïŒ
30
00:02:05,342 --> 00:02:07,444
被害è
ã®æçŽãšäžèŽããŸããã
31
00:02:07,444 --> 00:02:10,444
ããšãæ°ã«ãªãäºããããã§ãã
ïŒå¿ä¿ïŒäœïŒ
32
00:02:12,365 --> 00:02:15,368
ãããšåããããª
ã¹ãªãŒããŒå
¥ãã®çŒ¶ã³ãŒããŒã飲ãã§â
33
00:02:15,368 --> 00:02:17,320
æ»äº¡ããéºäœã
ãã§ã«ïŒäœçºèŠãããŠãŸãã
34
00:02:17,320 --> 00:02:21,408
ãã£ïŒïŒ
ææåã¯çŸéå
¥ãã®è²¡åžã®ã¿ã
35
00:02:21,408 --> 00:02:23,408
ïŒäœãšãã身å
ãããããªãã£ãã
36
00:02:24,344 --> 00:02:27,314
ïŒéæ³ïŒãã©ããªãã§ãïŒäººãšã
身å
ããããããã®ããªïŒ
37
00:02:27,314 --> 00:02:32,352
ããããã身ãªããèŠãéã
æ®éã®ç掻è
ã£ãŠæããªã®ã«ãã
38
00:02:32,352 --> 00:02:34,337
ïŒç¢æ²¢è±æïŒ
ãŸãæåã«çºèŠãããã®ãâ
39
00:02:34,337 --> 00:02:38,341
ããã
çºèŠãããã®ã¯ïŒæïŒæ¥ãæã
40
00:02:38,341 --> 00:02:41,244
æ»äº¡æšå®æå»ã¯
忥ã®ååŸïŒæååŸã«ãªããŸãã
41
00:02:41,244 --> 00:02:44,364
å Žæã倿¥ã®äžç·å°ã
42
00:02:44,364 --> 00:02:46,366
èŠãç®
æ°æã¡ããå¯ãŠãããã«ããâ
43
00:02:46,366 --> 00:02:50,353
You wouldn't recognize him.
The next one found was this man.
44
00:02:50,353 --> 00:02:53,323
Found early the following morning.
45
00:02:53,323 --> 00:02:57,360
Estimated time of death:
late the previous night.
46
00:02:57,360 --> 00:03:00,363
Location: here, the plaza outside the subway station.
47
00:03:00,363 --> 00:03:03,450
Someone you know?
Do I look like I'd know him?
48
00:03:03,450 --> 00:03:06,450
Hm? Why ask me, Komiyama?
49
00:03:07,253 --> 00:03:09,272
(Shiho) My bad!
(Murase) Honestly...
50
00:03:09,272 --> 00:03:11,341
Yazawa, next.
51
00:03:11,341 --> 00:03:14,327
And the third one found
was this man.
52
00:03:14,327 --> 00:03:17,347
Found, again, early the next morning.
53
00:03:17,347 --> 00:03:21,368
Estimated time of death:
again, late the previous night.
54
00:03:21,368 --> 00:03:24,354
Location: a nature park.
55
00:03:24,354 --> 00:03:28,358
(Shiho) Huh... every one of them
died somewhere with a nice view.
56
00:03:28,358 --> 00:03:30,443
About this, though...
(Shiho) The hand?
57
00:03:30,443 --> 00:03:33,443
Look. Doesn't it seem
like he's gripping something?
58
00:03:34,364 --> 00:03:36,366
Ah... now that you mention it, it does.
59
00:03:36,366 --> 00:03:41,354
This hand, and this one, and this one too.
60
00:03:41,354 --> 00:03:44,357
(Yazawa) You're right — all three of them
look like they're holding something.
61
00:03:44,357 --> 00:03:47,360
(Aoyagi) Hmm. But then, that something
they were holding—
62
00:03:47,360 --> 00:03:49,396
where did it go?
Well, at the very least—
63
00:03:49,396 --> 00:03:51,348
when these three died—
64
00:03:51,348 --> 00:03:53,366
someone else was right there with them.
That much seems certain.
65
00:03:53,366 --> 00:03:55,352
Eh?
How can you say that?
66
00:03:55,352 --> 00:03:57,387
Because this Sleeper stuff—
67
00:03:57,387 --> 00:03:59,422
the doctor said it's a powdered
synthetic drug, didn't he?
68
00:03:59,422 --> 00:04:04,344
So if each of them put it in a can alone,
drank it, and died—
69
00:04:04,344 --> 00:04:06,363
then the wrapper —
the packaging it came in—
70
00:04:06,363 --> 00:04:08,348
should have been left behind.
But the reports—
71
00:04:08,348 --> 00:04:12,385
mention nothing of the sort.
Meaning someone else was at each scene—
72
00:04:12,385 --> 00:04:14,320
and walked off with
the Sleeper packaging—
73
00:04:14,320 --> 00:04:16,256
along with whatever was clutched
in their hands.
74
00:04:16,256 --> 00:04:20,360
Ah, I see.
Hold on. In this latest case, during the crime—
75
00:04:20,360 --> 00:04:22,362
a man's voice was heard
shouting, "Give it back."
76
00:04:22,362 --> 00:04:25,331
Could it be the three of them
were tricked by the culprit—
77
00:04:25,331 --> 00:04:27,350
made to drink canned coffee
laced with Sleeper—
78
00:04:27,350 --> 00:04:30,387
and robbed of whatever they held in their hands?
79
00:04:30,387 --> 00:04:33,440
Whereas this time, the victim realized
what was happening—
80
00:04:33,440 --> 00:04:37,343
refused the coffee, resisted,
and was killed.
81
00:04:37,343 --> 00:04:40,346
This whole string of Sleeper cases
may be serial robbery-homicide.
82
00:04:40,346 --> 00:04:43,333
(Murase) Asawa and the chief: run down the victims'
identities, these three included.
83
00:04:43,333 --> 00:04:45,402
Aoyagi's team, canvass
around yesterday's scene.
84
00:04:45,402 --> 00:04:47,387
We will trace
where the Sleeper is coming from.
85
00:04:47,387 --> 00:04:49,387
Let's go, Komiyama.
(Shiho) Yes, sir!
86
00:04:50,373 --> 00:04:52,342
What is it?
87
00:04:52,342 --> 00:04:54,344
Whatever you say, Detective —
how about some orders?
88
00:04:54,344 --> 00:04:56,429
Hm? You're not going?
89
00:04:56,429 --> 00:04:58,429
What's with you??
90
00:04:59,532 --> 00:05:01,532
I've got something to do first.
91
00:05:02,335 --> 00:05:04,354
(Aoyagi) Thanks.
(Yazawa) What's this? Hey, hey.
92
00:05:04,354 --> 00:05:07,357
(Aoyagi) Security-camera footage
from around yesterday's scene.
93
00:05:07,357 --> 00:05:10,410
You had this?
You were hiding it from everyone?
94
00:05:10,410 --> 00:05:12,295
Hiding it?
I just hadn't shown anyone yet.
95
00:05:12,295 --> 00:05:14,364
That makes even less sense.
96
00:05:14,364 --> 00:05:18,384
(Aoyagi) There — found our victim.
(Yazawa) Ah...
97
00:05:18,384 --> 00:05:20,353
(Aoyagi) Note when he arrived.
98
00:05:20,353 --> 00:05:22,338
(Yazawa)
Given the estimated time of death—
99
00:05:22,338 --> 00:05:24,457
the victim had been at the scene
a good while before he died.
100
00:05:24,457 --> 00:05:27,457
Huh.
What was he doing there all that time?
101
00:05:28,344 --> 00:05:31,414
(Aoyagi) Oh — a high-school girl
heading toward the scene.
102
00:05:31,414 --> 00:05:33,466
Come on... she can't be connected.
103
00:05:33,466 --> 00:05:36,466
Show me around the time of the murder.
Quickly.
104
00:05:37,454 --> 00:05:40,454
(Aoyagi) This lady here —
the one who gave a statement.
105
00:05:41,341 --> 00:05:43,409
Ah.
106
00:05:43,409 --> 00:05:45,409
The schoolgirl from before...
107
00:05:50,366 --> 00:05:53,303
(Kuramatsu) Unfortunately,
there are no surgical scars—
108
00:05:53,303 --> 00:05:56,389
and no physical features
that would point to an occupation.
109
00:05:56,389 --> 00:05:58,341
I see...
110
00:05:58,341 --> 00:06:01,361
Identifying him is going to be difficult.
111
00:06:01,361 --> 00:06:06,349
Um... the three written off as suicides —
let's take another look at them.
112
00:06:06,349 --> 00:06:09,252
The clothes they were found in
should still be in storage.
113
00:06:09,252 --> 00:06:13,339
Doctor, would there be photos
of the first three's right hands?
114
00:06:13,339 --> 00:06:15,325
Photos of their right hands?
115
00:06:15,325 --> 00:06:18,344
Yes. All three seemed to be
gripping something—
116
00:06:18,344 --> 00:06:21,431
so close-ups of the hands
might tell us something.
117
00:06:21,431 --> 00:06:23,431
Understood. I'll look for them.
118
00:06:24,467 --> 00:06:27,467
Oh, and one more thing. Sorry.
119
00:06:29,422 --> 00:06:35,422
So these are the clothes the first victim
was wearing when he was found.
120
00:06:36,296 --> 00:06:38,448
They look like ordinary clothes
from any discount store.
121
00:06:38,448 --> 00:06:41,448
Hmm. True enough.
122
00:06:42,335 --> 00:06:44,304
Hm? What's this?
123
00:06:44,304 --> 00:06:46,339
Isn't that
a dry cleaner's tag?
124
00:06:46,339 --> 00:06:49,325
Ah, you're right. Probably,
when he took it off at some point—
125
00:06:49,325 --> 00:06:51,377
the tag was still
attached here—
126
00:06:51,377 --> 00:06:53,329
and when he yanked it off in a hurry,
it slipped down inside his trousers—
127
00:06:53,329 --> 00:06:55,448
which must be why it was missed.
128
00:06:55,448 --> 00:06:57,448
Good. With this, we may get an ID.
129
00:06:59,302 --> 00:07:01,304
(Shiho) So this is him — Toju.
(Murase) Yeah.
130
00:07:01,304 --> 00:07:04,374
He's been picked up before
for drug possession.
131
00:07:04,374 --> 00:07:08,361
Bottom rung of the Shibukawa-gumi.
Rumor has it he deals Sleeper.
132
00:07:08,361 --> 00:07:10,513
Let's not give him a way to bolt.
133
00:07:10,513 --> 00:07:14,513
Or... maybe he's been pulling
a fast one with the stuff himself.
134
00:07:17,353 --> 00:07:20,356
(Shiho) Excuse me — mind showing us
what you're carrying?
135
00:07:20,356 --> 00:07:22,342
(Toju) Whoa! Let go!
136
00:07:22,342 --> 00:07:24,394
Oops! Where do you think you're going?
137
00:07:24,394 --> 00:07:27,347
No running. Whoa there...
You all right?
138
00:07:27,347 --> 00:07:30,350
(Shiho) So. This was inside
your clutch bag—
139
00:07:30,350 --> 00:07:32,335
Sleeper.
140
00:07:32,335 --> 00:07:35,405
Now you're going to name everyone
you've been selling it to.
141
00:07:35,405 --> 00:07:38,341
(Toju) I haven't sold it
to anyone.
142
00:07:38,341 --> 00:07:40,360
Don't lie to us.
It's the truth!
143
00:07:40,360 --> 00:07:43,313
I was carrying it
for my own use.
144
00:07:43,313 --> 00:07:46,399
Besides, where's your proof
that I sold to anybody?
145
00:07:46,399 --> 00:07:52,338
♪♪～
146
00:07:52,338 --> 00:07:54,440
(Shiho) Recognize any of these men?
147
00:07:54,440 --> 00:07:57,440
(Toju) Nope. Never seen 'em.
148
00:07:59,429 --> 00:08:01,429
Then what about him?
149
00:08:02,448 --> 00:08:04,448
Don't know him.
150
00:08:08,354 --> 00:08:10,323
(Tanabe Kotomi) The police?
151
00:08:10,323 --> 00:08:13,443
Is Tokiwagi Azusa... er, that is...
is the young lady at home, perchance?
152
00:08:13,443 --> 00:08:15,443
At home, perchance?
153
00:08:16,446 --> 00:08:18,446
(Aoyagi) "At home, perchance?"
Who says that nowadays?
154
00:08:21,367 --> 00:08:23,436
(Kotomi) Miss...
155
00:08:23,436 --> 00:08:26,436
(Aoyagi) We'd just like
a quick word with her.
156
00:08:30,259 --> 00:08:32,345
(Yazawa) And your family?
157
00:08:32,345 --> 00:08:34,347
(Tokiwagi Azusa)
My parents are away on a concert tour—
158
00:08:34,347 --> 00:08:36,316
traveling around Europe.
159
00:08:36,316 --> 00:08:39,402
(Yazawa) Oh? Both your parents
are musicians, then.
160
00:08:39,402 --> 00:08:41,337
(Aoyagi) Nice photo.
(Yazawa) Yes.
161
00:08:41,337 --> 00:08:44,290
(Aoyagi) And so many of them.
(Yazawa) When did you...?
162
00:08:44,290 --> 00:08:46,376
(Aoyagi) Ever met this man?
163
00:08:46,376 --> 00:08:48,411
Never.
(Aoyagi) Then — yesterday—
164
00:08:48,411 --> 00:08:51,447
from mid-afternoon into the evening,
where were you?
165
00:08:51,447 --> 00:08:54,447
There's no reason I'm obliged
to answer that, is there?
166
00:08:55,251 --> 00:08:57,437
A photo, then.
Hm?
167
00:08:57,437 --> 00:08:59,437
You won't mind if I take one.
168
00:09:00,406 --> 00:09:05,445
(Aoyagi) Yesterday, at the shrine up
these steps, a man was killed.
169
00:09:05,445 --> 00:09:07,445
It was on the news, right?
170
00:09:08,464 --> 00:09:11,464
You traced my school from the uniform.
171
00:09:15,338 --> 00:09:17,440
(Azusa) Once you know the school—
172
00:09:17,440 --> 00:09:20,440
a student's name and address
are easy enough to find.
173
00:09:23,312 --> 00:09:27,300
(Yazawa) Listen — if you saw or heard
anything at all—
174
00:09:27,300 --> 00:09:29,369
please tell...
(Aoyagi) From when you went toward the shrine—
175
00:09:29,369 --> 00:09:31,337
to when you came back was quite a while—
176
00:09:31,337 --> 00:09:33,356
What were you doing?
Taking a walk.
177
00:09:33,356 --> 00:09:36,359
Oh? You rode the train all the way out there
just to take a walk?
178
00:09:36,359 --> 00:09:38,327
Yes.
179
00:09:38,327 --> 00:09:40,430
All right, then.
Ah — one last thing.
180
00:09:40,430 --> 00:09:43,430
On your way home —
why were you running?
181
00:09:45,351 --> 00:09:48,304
I realized it had gotten later
than I thought—
182
00:09:48,304 --> 00:09:50,440
so I hurried. That's all.
183
00:09:50,440 --> 00:09:55,440
If that's all your questions,
would you please leave now?
184
00:09:57,430 --> 00:09:59,430
Fine.
185
00:10:06,439 --> 00:10:08,439
Don't go underestimating adults, kid.
186
00:10:14,280 --> 00:10:16,349
(Shop owner) This tag? Yes, it's ours.
Really?
187
00:10:16,349 --> 00:10:19,352
Then, from this tag, the customer
who brought the item in—
188
00:10:19,352 --> 00:10:21,387
can be identified, right?
Oh yes, we can look that up.
189
00:10:21,387 --> 00:10:23,356
Just a moment, please.
Yes, thank you.
190
00:10:23,356 --> 00:10:27,360
Let's see... tag number...
191
00:10:27,360 --> 00:10:30,329
Ah, here it is.
192
00:10:30,329 --> 00:10:32,381
Mitani Minoru.
193
00:10:32,381 --> 00:10:36,302
(Tadokoro) That's definitely the Mitani Minoru
who lived in this dormitory.
194
00:10:36,302 --> 00:10:39,322
He was a contract hand at the factory,
but one day he just disappeared.
195
00:10:39,322 --> 00:10:42,358
Disappeared?
When was that, roughly?
196
00:10:42,358 --> 00:10:45,344
A few months back, I'd say.
197
00:10:45,344 --> 00:10:47,346
Was there anyone
this Mitani was close to—
198
00:10:47,346 --> 00:10:50,299
anyone at all?
Well...
199
00:10:50,299 --> 00:10:53,269
He wasn't really the type
to mix with people.
200
00:10:53,269 --> 00:10:55,288
Could we see his room?
Er, well... excuse me?
201
00:10:55,288 --> 00:10:58,391
Hold on.
No — please, wait a moment.
202
00:10:58,391 --> 00:11:01,444
A new tenant has already
moved in, you see...
203
00:11:01,444 --> 00:11:05,348
I see. Then none of Mitani's belongings
are still here?
204
00:11:05,348 --> 00:11:09,302
Mitani had lost both his parents,
and he had no siblings...
205
00:11:09,302 --> 00:11:13,406
There was nothing for it, so I,
as manager, disposed of his things.
206
00:11:13,406 --> 00:11:15,341
So nothing of his is left, then?
207
00:11:15,341 --> 00:11:18,344
W-well...
I wouldn't say nothing, exactly...
208
00:11:18,344 --> 00:11:26,319
♪♪～
209
00:11:26,319 --> 00:11:28,321
What's this? Mitani's?
210
00:11:28,321 --> 00:11:30,456
That's no good — helping yourself
to another man's property.
211
00:11:30,456 --> 00:11:33,456
We could arrest you for that.
I'm sorry...
212
00:11:34,260 --> 00:11:37,280
So this is the laptop
the manager pocketed, eh.
213
00:11:37,280 --> 00:11:39,365
Yeah.
(Murase) So what on earth was this man—
214
00:11:39,365 --> 00:11:41,334
doing on someone else's computer?
215
00:11:41,334 --> 00:11:44,337
Enjoying free online games,
by the look of it.
216
00:11:44,337 --> 00:11:47,340
Almost sad, how cheap this guy is.
217
00:11:47,340 --> 00:11:49,358
Coffee?
Please.
218
00:11:49,358 --> 00:11:51,360
But first, could you
take a look at this?
219
00:11:51,360 --> 00:11:54,430
It's a site Mitani was visiting constantly
before he disappeared.
220
00:11:54,430 --> 00:11:56,349
(Shiho)
"Heart & Voice"?
221
00:11:56,349 --> 00:11:59,352
People who've been through things
and find living painful—
222
00:11:59,352 --> 00:12:01,337
come here
to write out their feelings.
223
00:12:01,337 --> 00:12:05,341
(Murase) Even so,
that's a staggering number of posts.
224
00:12:05,341 --> 00:12:07,260
This many people want to die?
225
00:12:07,260 --> 00:12:11,347
Well, Japan's suicides run to
over twenty thousand a year.
226
00:12:11,347 --> 00:12:13,349
Even among developed countries,
we're always near the worst.
227
00:12:13,349 --> 00:12:15,351
Your coffee —
make it yourself.
228
00:12:15,351 --> 00:12:17,370
Huh...
Anyway, this site—
229
00:12:17,370 --> 00:12:19,338
apparently has something called
a members-only room.
230
00:12:19,338 --> 00:12:22,341
"A place to talk through your situation
with one another—
231
00:12:22,341 --> 00:12:25,294
and find, once more,
the hope to live"... so to speak.
232
00:12:25,294 --> 00:12:27,346
Basically, it's like those
American-style—
233
00:12:27,346 --> 00:12:29,348
therapy sites.
That sort of thing.
234
00:12:29,348 --> 00:12:32,451
Hey — my coffee...
I see, I see.
235
00:12:32,451 --> 00:12:36,451
Still, this members-only room smells fishy.
236
00:12:38,257 --> 00:12:40,359
Right? I tried
getting in myself—
237
00:12:40,359 --> 00:12:42,345
but the membership process
is an unbelievable hassle.
238
00:12:42,345 --> 00:12:44,330
It's not just a matter of
typing in an email address—
239
00:12:44,330 --> 00:12:47,366
and you're done?
Exactly — far from it.
240
00:12:47,366 --> 00:12:49,252
First, the site's login ID—
241
00:12:49,252 --> 00:12:52,288
is sent out by addressee-only
registered mail.
242
00:12:52,288 --> 00:12:55,291
That's the kind the recipient can't accept—
243
00:12:55,291 --> 00:12:57,343
without showing ID in person. Right.
244
00:12:57,343 --> 00:12:59,328
Yeah — banks and such use it
for identity checks.
245
00:12:59,328 --> 00:13:01,330
Exactly, exactly.
246
00:13:01,330 --> 00:13:03,366
But why would they need
to screen people that strictly?
247
00:13:03,366 --> 00:13:06,369
Their reasoning: "By verifying
identities rigorously—
248
00:13:06,369 --> 00:13:09,322
we guarantee members are neither
impostors nor gawkers—
249
00:13:09,322 --> 00:13:11,324
deep trust...
"and within deep mutual trust—
250
00:13:11,324 --> 00:13:14,343
realize a place where the hope to live
can be restored."
251
00:13:14,343 --> 00:13:17,330
I read that too. Now, this
members-only room—
252
00:13:17,330 --> 00:13:19,398
everyone in there posts
under a handle, but—
253
00:13:19,398 --> 00:13:21,367
among them...
Yes?
254
00:13:21,367 --> 00:13:24,420
there's this one handle —
someone calling themselves "Gemini"...
255
00:13:24,420 --> 00:13:26,420
Gemini?
256
00:13:27,506 --> 00:13:31,506
They really do post constantly, don't they.
257
00:13:32,361 --> 00:13:34,447
They certainly do.
258
00:13:34,447 --> 00:13:38,447
Answering everyone's questions,
patiently and kindly.
259
00:13:40,252 --> 00:13:43,289
Right. In any case, let's have
the site's administrator—
260
00:13:43,289 --> 00:13:45,358
looked into by
the Cybercrime Division.
261
00:13:45,358 --> 00:13:47,343
Remind me why you're tagging
along with me, Detective?
262
00:13:47,343 --> 00:13:49,395
Why...?
Ah — right, there was one more thing—
263
00:13:49,395 --> 00:13:52,365
that's been bugging me.
(Aoyagi) Oh? Do tell.
264
00:13:52,365 --> 00:13:54,367
Um... you see...
(Aoyagi) Go on.
265
00:13:54,367 --> 00:13:57,353
When the dorm manager
cleared out Mitani's room—
266
00:13:57,353 --> 00:14:00,506
he noticed something that
should have been there was gone.
267
00:14:00,506 --> 00:14:03,506
Mitani was supposed to be keeping
tropical fish, you see.
268
00:14:04,343 --> 00:14:08,297
Hm...
Hm? What is it?
269
00:14:08,297 --> 00:14:10,333
Tropical fish... tropical fish!
Ah!
270
00:14:10,333 --> 00:14:12,401
What, did you think of something?
(Aoyagi) Ah — you lot—
271
00:14:12,401 --> 00:14:14,437
I've just remembered an errand.
I'm heading out.
272
00:14:14,437 --> 00:14:17,437
Huh? Wait a—
273
00:14:19,442 --> 00:14:21,442
(Yazawa) Oh...
(Aoyagi) There she is.
274
00:14:23,245 --> 00:14:26,449
Did I do something?
No, no... nothing, nothing.
275
00:14:26,449 --> 00:14:29,449
Don't mind him — it's just how he is.
Sorry about that.
276
00:14:31,370 --> 00:14:35,324
The right-hand photos you asked for.
Ah — thank you.
277
00:14:35,324 --> 00:14:38,444
You've seemed worn out lately, Doctor.
Hm? Hold on.
278
00:14:38,444 --> 00:14:41,444
Give me your right hand.
279
00:14:47,303 --> 00:14:49,372
I'll inject you
with a little energy.
280
00:14:49,372 --> 00:14:51,372
There.
Your "injection" rather hurts, you know.
281
00:14:55,444 --> 00:14:59,444
Oh my. Well then, do excuse me.
282
00:15:01,350 --> 00:15:04,303
Now, where...
where were we, where were we...
283
00:15:04,303 --> 00:15:07,390
What were we talking about?
Ah — that's right...
284
00:15:07,390 --> 00:15:09,341
Mitani.
Right, right.
285
00:15:09,341 --> 00:15:12,361
Yes, yes. Anyway — the fact that
this Mitani Minoru—
286
00:15:12,361 --> 00:15:15,331
was visiting that site
so frequently—
287
00:15:15,331 --> 00:15:18,417
doesn't that mean he really was
a suicide after all?
288
00:15:18,417 --> 00:15:22,304
And supposing it was robbery-homicide —
from a man like Mitani—
289
00:15:22,304 --> 00:15:24,373
what could there possibly be
worth stealing?
290
00:15:24,373 --> 00:15:26,442
(Murase) "A man like Mitani"?
That's a rude way to put it.
291
00:15:26,442 --> 00:15:28,442
Anyway — word is we should try the shop.
292
00:15:29,528 --> 00:15:31,528
(Kamisawa Tomoki)
Sorry to have kept you waiting.
293
00:15:36,435 --> 00:15:38,435
(Kamisawa) Thank you very much.
294
00:15:43,309 --> 00:15:46,345
Asawa, take a look at this.
Yes?
295
00:15:46,345 --> 00:15:48,347
Hm?
Say—
296
00:15:48,347 --> 00:15:52,451
don't these look like marks — as if something
was pressed hard into the palm?
297
00:15:52,451 --> 00:15:56,451
Yeah... yeah, they do.
298
00:15:57,406 --> 00:16:01,406
You're right. What could they be from?
299
00:18:25,254 --> 00:18:28,274
It would have arrived by courier
early that month, correct?
300
00:18:28,274 --> 00:18:31,327
(Aoyagi) Can you check the delivery?
Right away.
301
00:18:31,327 --> 00:18:35,281
(Yazawa) Early that month,
to the Azabu address—
302
00:18:35,281 --> 00:18:37,333
addressed to Tokiwagi Azusa—
303
00:18:37,333 --> 00:18:39,335
there should have been a delivery
of tropical fish.
304
00:18:39,335 --> 00:18:42,338
Ah, yes — the sender is listed as
one Mitani Minoru.
305
00:18:42,338 --> 00:18:45,424
Here's the delivery date.
So that's the day.
306
00:18:45,424 --> 00:18:47,424
Thank you very much.
Thanks.
307
00:18:48,310 --> 00:18:51,263
So, days before his death, Mitani Minoru—
308
00:18:51,263 --> 00:18:53,299
had sent his tropical fish
to Tokiwagi Azusa.
309
00:18:53,299 --> 00:18:56,352
That ties the two of them together.
310
00:18:56,352 --> 00:18:59,438
But why would it be
tropical fish, of all things?
311
00:18:59,438 --> 00:19:03,438
(Aoyagi) That... we'll just have to
shake out of Tokiwagi Azusa herself.
312
00:19:05,311 --> 00:19:08,314
Hey, listen to this.
Something strange is going on.
313
00:19:08,314 --> 00:19:10,299
Hm? What's wrong?
314
00:19:10,299 --> 00:19:13,485
When I went to file
Mitani Minoru's death paperwork—
315
00:19:13,485 --> 00:19:15,485
it turned out his residence registration
had been moved.
316
00:19:16,322 --> 00:19:18,290
His registration was moved?
317
00:19:18,290 --> 00:19:21,310
That's right. After his death, no less.
318
00:19:21,310 --> 00:19:24,380
On paper, Mitani is currently living
in Minami-Azabu, Minato Ward.
319
00:19:24,380 --> 00:19:27,333
(Murase) What's going on here?
Beats me...
320
00:19:27,333 --> 00:19:41,313
♪♪～
321
00:19:41,313 --> 00:19:44,283
Nobody around has seen anyone going
in or out of Mitani Minoru's room—
322
00:19:44,283 --> 00:19:46,418
not a soul.
Yeah...
323
00:19:46,418 --> 00:19:49,418
Odds are nobody
actually lives there.
324
00:19:50,322 --> 00:19:53,275
Still, with a registered address,
mail gets delivered.
325
00:19:53,275 --> 00:19:55,411
What's this?
Hm?
326
00:19:55,411 --> 00:19:57,411
(Murase) A demand notice from Nansei Bank.
327
00:19:58,230 --> 00:20:00,316
(Aoyagi) Tanabe-san, was it?
(Kotomi) Yes.
328
00:20:00,316 --> 00:20:02,301
(Aoyagi) How long have you been with the family?
329
00:20:02,301 --> 00:20:04,336
I first came—
330
00:20:04,336 --> 00:20:07,406
shortly after the two of them
were born—
331
00:20:07,406 --> 00:20:09,325
so it's been nearly their whole lives now.
332
00:20:09,325 --> 00:20:11,360
The two of them?
Yes.
333
00:20:11,360 --> 00:20:17,360
Miss Azusa, and Master Azuma —
her fraternal twin brother.
334
00:20:22,254 --> 00:20:24,273
(Yazawa) Twins...
335
00:20:24,273 --> 00:20:26,325
Hm?
What?
336
00:20:26,325 --> 00:20:29,294
It's Gemini — the one posting
all over that site.
337
00:20:29,294 --> 00:20:31,263
"Gemini" means the twins.
The zodiac sign.
338
00:20:31,263 --> 00:20:33,399
Oh, is that so?
Mm.
339
00:20:33,399 --> 00:20:36,268
Huh...
So? Where's the brother, Azuma, now?
340
00:20:36,268 --> 00:20:39,288
He passed away earlier this year.
341
00:20:39,288 --> 00:20:42,424
Passed away?
Was it an illness?
342
00:20:42,424 --> 00:20:45,424
No... It was suicide.
343
00:20:47,396 --> 00:20:49,396
Suicide?
344
00:20:50,482 --> 00:20:53,482
The one on the right is Master Azuma.
345
00:20:56,321 --> 00:20:59,258
Her twin brother killed himself?
(Aoyagi) Yeah.
346
00:20:59,258 --> 00:21:01,360
(Shiho)
Then the handle "Gemini"—
347
00:21:01,360 --> 00:21:03,395
wait — couldn't that be
this Azusa girl herself?
348
00:21:03,395 --> 00:21:05,314
She was acquainted
with Mitani, too.
349
00:21:05,314 --> 00:21:08,283
(Aoyagi) So why, exactly, did you keep
everything about—
350
00:21:08,283 --> 00:21:11,353
such a crucial figure from us
until this very moment?
351
00:21:11,353 --> 00:21:14,273
We weren't keeping it from you.
We were about to report it properly.
352
00:21:14,273 --> 00:21:17,292
And where have you two been, anyway?
Ah — glad you asked.
353
00:21:17,292 --> 00:21:20,345
We went to Nansei Bank.
(Aoyagi) Hmph.
354
00:21:20,345 --> 00:21:25,317
After his death, this Mitani Minoru received
a sizable housing loan from Nansei Bank.
355
00:21:25,317 --> 00:21:27,252
(Aoyagi) What's a dead man want
with building a house?
356
00:21:27,252 --> 00:21:29,321
A grave'd be more like it.
Hey — even a grave—
357
00:21:29,321 --> 00:21:31,340
can't be bought by a dead man.
Ah — fair point.
358
00:21:31,340 --> 00:21:34,326
(Murase) For those of us
who are a little slow—
359
00:21:34,326 --> 00:21:37,329
shall I lay it out step by step?
Hmph.
360
00:21:37,329 --> 00:21:39,314
(Both) Please. Asawa, pay attention.
Hey — why single me out?
361
00:21:39,314 --> 00:21:44,253
After Mitani Minoru's death, someone moved
his residence registration to Minato Ward.
362
00:21:44,253 --> 00:21:47,322
Then, on the basis of
Mitani's family register—
363
00:21:47,322 --> 00:21:50,342
a whole new Mitani Minoru —
company executive — was invented.
364
00:21:50,342 --> 00:21:52,327
As for the actual method—
365
00:21:52,327 --> 00:21:57,349
first, forge a certificate of employment
and withholding slips at some company.
366
00:21:57,349 --> 00:22:01,320
Submit those to the Minato ward office
and obtain tax certificates.
367
00:22:01,320 --> 00:22:04,323
And once the paperwork
is all in order—
368
00:22:04,323 --> 00:22:07,326
an impostor calling himself
Mitani Minoru goes to the bank—
369
00:22:07,326 --> 00:22:09,394
and draws out a massive loan.
370
00:22:09,394 --> 00:22:12,314
But when it comes time
to repay—
371
00:22:12,314 --> 00:22:15,317
the Mitani Minoru who owes the money
doesn't exist anywhere.
372
00:22:15,317 --> 00:22:18,320
In short: loan fraud,
run on Mitani Minoru's identity.
373
00:22:18,320 --> 00:22:21,406
(Shiho) Which is why Nansei Bank,
knowing none of this—
374
00:22:21,406 --> 00:22:25,310
kept sending demand notices
to a man long dead.
375
00:22:25,310 --> 00:22:28,280
They'll be red-faced — and soon enough
they'll file a complaint with the Fraud Division.
376
00:22:28,280 --> 00:22:30,349
(Murase sighs)
377
00:22:30,349 --> 00:22:32,384
Now, this next part
is only my conjecture, but—
378
00:22:32,384 --> 00:22:36,321
this series of Sleeper cases,
with its three victims—
379
00:22:36,321 --> 00:22:38,290
perhaps its real objective was...
(Aoyagi) Their family registers.
380
00:22:38,290 --> 00:22:40,309
Getting hold of those —
that was the point, no doubt.
381
00:22:40,309 --> 00:22:42,327
Yeah... that would explain it, wouldn't it.
382
00:22:42,327 --> 00:22:44,379
True — seen that way, the victims
turning up unidentifiable—
383
00:22:44,379 --> 00:22:46,315
adds...
adds up, doesn't it.
384
00:22:46,315 --> 00:22:48,267
Sorry — it doesn't add up for me.
It adds up, I said.
385
00:22:48,267 --> 00:22:51,353
If a body's identity came out,
the fraud would be exposed at once.
386
00:22:51,353 --> 00:22:54,406
Hence the need to leave them
as nameless corpses.
387
00:22:54,406 --> 00:22:57,326
But in that case — where does the girl,
Tokiwagi Azusa—
388
00:22:57,326 --> 00:23:00,279
fit into
this whole affair?
389
00:23:00,279 --> 00:23:03,365
She wasn't merely connected
to Mitani Minoru—
390
00:23:03,365 --> 00:23:06,268
when the victim in this latest case
was killed—
391
00:23:06,268 --> 00:23:08,287
she was right near the scene.
392
00:23:08,287 --> 00:23:10,422
Hard to write that off as coincidence.
393
00:23:10,422 --> 00:23:13,422
(Phone rings)
394
00:23:14,343 --> 00:23:16,328
Yes, squad room.
...It's the Cybercrime Division.
395
00:23:16,328 --> 00:23:19,331
They've identified the administrator
of the "Heart and Voice" site.
396
00:23:19,331 --> 00:23:21,316
"Name: Kamisawa Tomoki."
397
00:23:21,316 --> 00:23:24,386
"Residence: Nishi-Akasaka, Minato Ward."
398
00:23:24,386 --> 00:23:27,256
"A salesman at a jewelry store
called Chibagres."
399
00:23:27,256 --> 00:23:29,408
Understood.
Thank you very much.
400
00:23:29,408 --> 00:23:31,408
Kamisawa Tomoki...
401
00:23:32,494 --> 00:23:34,494
Chibagres...?
402
00:23:35,297 --> 00:23:37,299
(Shiho) Chibagres — wait, isn't that
the shop our man Toju hung around?
403
00:23:37,299 --> 00:23:39,318
What!
Then in Toju's—
404
00:23:39,318 --> 00:23:41,353
contact list?
Huh... hang on.
405
00:23:41,353 --> 00:23:44,323
Argh! Where did it get to??
406
00:23:44,323 --> 00:23:47,326
Here!
Eh? No — this one!
407
00:23:47,326 --> 00:23:50,329
Kamisawa, Kamisawa...
(Murase) Kamisawa...
408
00:23:50,329 --> 00:23:52,447
(Murase and Shiho) Ah!
(Murase) Kamisawa Tomoki...
409
00:23:52,447 --> 00:23:55,447
(Doorbell)
410
00:23:56,318 --> 00:23:58,387
Not home, maybe.
411
00:23:58,387 --> 00:24:02,387
Either way, the drug-possession charge
is enough to search the place.
412
00:24:05,394 --> 00:24:07,394
Right then — pardon the intrusion.
413
00:24:09,414 --> 00:24:11,414
Well, well.
414
00:24:12,301 --> 00:24:17,339
(Shiho) Heave-ho.
Let's see... I'll take this side.
415
00:24:17,339 --> 00:24:23,328
♪♪～ (music from the stereo)
416
00:24:23,328 --> 00:24:35,274
♪♪～
417
00:24:35,274 --> 00:24:37,309
Found something.
(Murase) What is it?
418
00:24:37,309 --> 00:24:39,278
Sleeper.
419
00:24:39,278 --> 00:24:47,269
♪♪～
420
00:24:47,269 --> 00:24:49,304
Aoyagi-san.
Hm?
421
00:24:49,304 --> 00:24:51,373
Aoyagi-san!
What??
422
00:24:51,373 --> 00:24:53,373
Look...
423
00:24:58,430 --> 00:25:01,430
"Furukawa Yasuo"...
424
00:25:02,451 --> 00:25:05,451
This photo — it's the victim
from our latest case.
425
00:25:08,323 --> 00:25:11,259
Kamisawa Tomoki. First things first...
426
00:25:11,259 --> 00:25:16,415
First: narcotics... you're under arrest
for violating the Narcotics Control Act!
427
00:25:16,415 --> 00:25:19,301
And I've been telling you —
turn that thing off!
428
00:25:19,301 --> 00:25:21,286
♪♪～ (music from the stereo)
429
00:25:21,286 --> 00:25:23,422
♪♪～ (music at full blast)
Aargh!
430
00:25:23,422 --> 00:25:25,422
Hey! The volume!
431
00:25:27,409 --> 00:25:29,409
"Death and the Maiden," my foot.
432
00:25:31,480 --> 00:25:35,480
All right, Kamisawa. Yesterday evening —
where were you?
433
00:25:36,284 --> 00:25:39,237
At that hour, I was at the shop.
434
00:25:39,237 --> 00:25:43,358
(Murase) The murdered Furukawa Yasuo's
insurance card and family-register transcript—
435
00:25:43,358 --> 00:25:47,429
suppose you explain
why you had them.
436
00:25:47,429 --> 00:25:52,429
He handed them over to me himself,
while he was still alive.
437
00:25:53,301 --> 00:25:57,339
When a man signs such things over
to someone else—
438
00:25:57,339 --> 00:26:00,375
living on afterward becomes
enormously difficult for him.
439
00:26:00,375 --> 00:26:02,327
Did that never occur to you?
440
00:26:02,327 --> 00:26:05,263
Who knows... In some cases, perhaps.
441
00:26:05,263 --> 00:26:08,300
"In some cases"... perhaps.
442
00:26:08,300 --> 00:26:12,337
The website you operate —
"Heart and Voice."
443
00:26:12,337 --> 00:26:16,324
Day after day it draws crowds of people
contemplating suicide.
444
00:26:16,324 --> 00:26:18,260
And on top of that, to join—
445
00:26:18,260 --> 00:26:21,329
strict identity verification
is required.
446
00:26:21,329 --> 00:26:24,282
Therapy requires
a relationship of trust.
447
00:26:24,282 --> 00:26:26,318
You were using that site—
448
00:26:26,318 --> 00:26:29,371
to hunt for people who met
certain conditions, weren't you?
449
00:26:29,371 --> 00:26:32,257
People no one would come looking for
if they vanished.
450
00:26:32,257 --> 00:26:35,293
Put another way —
"owners of family registers—
451
00:26:35,293 --> 00:26:37,329
ideally suited for use in crime."
452
00:26:37,329 --> 00:26:39,364
"Registers like that, with a little
doctoring—
453
00:26:39,364 --> 00:26:42,334
can have their histories
rewritten at will."
454
00:26:42,334 --> 00:26:45,320
"And if the owner has died
unidentified—
455
00:26:45,320 --> 00:26:48,306
there's no fear of the truth
ever leaving his lips."
456
00:26:48,306 --> 00:26:50,425
(Kamisawa) You do enjoy
your own theory.
457
00:26:50,425 --> 00:26:54,425
(Murase) Mitani Minoru —
a member of your site.
458
00:26:59,401 --> 00:27:03,401
His family register was used
in a housing-loan fraud.
459
00:27:04,339 --> 00:27:06,308
Is that so.
460
00:27:06,308 --> 00:27:08,326
And you yourself had no part in it?
461
00:27:08,326 --> 00:27:11,363
Ah — his register...
462
00:27:11,363 --> 00:27:15,383
Someone said they wanted it,
so I passed it along.
463
00:27:15,383 --> 00:27:18,383
So Kamisawa is
a broker in family registers.
464
00:27:20,338 --> 00:27:23,325
(Murase) From would-be suicides
who met your conditions—
465
00:27:23,325 --> 00:27:26,328
you obtained their registers,
and handed over Sleeper in return.
466
00:27:26,328 --> 00:27:29,381
Then you sold those registers
and pocketed the profit.
467
00:27:29,381 --> 00:27:32,334
Quite a script. You'd think
you'd watched it all happen.
468
00:27:32,334 --> 00:27:35,420
Except that this
smooth-running business of yours—
469
00:27:35,420 --> 00:27:37,420
hit a snag.
470
00:27:38,423 --> 00:27:41,326
(Murase) Furukawa Yasuo
did not kill himself.
471
00:27:41,326 --> 00:27:47,466
Well — not everyone gets to die
the death he pictured for himself.
472
00:27:47,466 --> 00:27:49,466
♪ (knock)
473
00:27:51,419 --> 00:27:53,419
Mm — nice shot.
474
00:27:56,341 --> 00:27:59,427
Kamisawa's alibi
holds up.
475
00:27:59,427 --> 00:28:02,427
(Yazawa) He's right there on the shop's
security camera.
476
00:28:04,399 --> 00:28:07,399
Naturally. I haven't
killed anyone.
477
00:28:11,373 --> 00:28:19,373
♪♪～
478
00:30:36,301 --> 00:30:44,409
♪♪～
479
00:30:44,409 --> 00:30:48,313
(Shiho) Tokiwagi Azusa —
you know the name, don't you?
480
00:30:48,313 --> 00:30:52,334
The member roster
on your computer—
481
00:30:52,334 --> 00:30:55,320
has her name in it.
482
00:30:55,320 --> 00:30:59,424
Tokiwagi Azusa.
Handle: "Gemini."
483
00:30:59,424 --> 00:31:02,424
Ah, yes. I know her.
484
00:31:03,294 --> 00:31:06,314
(Shiho)
She's a remarkably devoted member.
485
00:31:06,314 --> 00:31:11,319
(Kamisawa) She accepts
everything about other people.
486
00:31:11,319 --> 00:31:16,324
Never says "hang in there."
Never offers encouragement.
487
00:31:16,324 --> 00:31:21,379
She simply shows empathy
and goes on listening.
488
00:31:21,379 --> 00:31:25,379
For hours on end. Whoever is talking.
489
00:31:27,318 --> 00:31:30,321
Huh...
a devotion beyond anything normal...
490
00:31:30,321 --> 00:31:32,273
(Yazawa) Kamisawa's story checks out.
491
00:31:32,273 --> 00:31:36,344
She'd finish with one person and go
straight on to the next, listening—
492
00:31:36,344 --> 00:31:39,297
going days, it seems,
without a wink of sleep.
493
00:31:39,297 --> 00:31:43,334
With no regard for herself—
494
00:31:43,334 --> 00:31:47,222
she tries to take in the feelings
of those who long for death.
495
00:31:47,222 --> 00:31:52,327
And you exploited those feelings of hers —
didn't you?
496
00:31:52,327 --> 00:31:55,330
Meaning what?
497
00:31:55,330 --> 00:31:58,333
You put it to people
who longed for death:
498
00:31:58,333 --> 00:32:03,405
decide to sign over your register,
and Gemini will deliver the Sleeper.
499
00:32:03,405 --> 00:32:06,405
To you she must be priceless —
a perfect pawn.
500
00:32:08,293 --> 00:32:11,396
(Kamisawa) A beautiful arrangement,
don't you think?
501
00:32:11,396 --> 00:32:15,396
She is possessed, you see,
by other people's deaths.
502
00:32:16,317 --> 00:32:20,321
About Azusa, and her late twin brother,
Azuma—
503
00:32:20,321 --> 00:32:22,323
would you tell us about them?
504
00:32:22,323 --> 00:32:26,294
Will that
help Miss Azusa?
505
00:32:26,294 --> 00:32:29,264
Yes.
(Kotomi) Very well.
506
00:32:29,264 --> 00:32:32,350
(Kotomi) Miss Azusa and Master Azuma—
507
00:32:32,350 --> 00:32:35,420
with their parents so often away
on concert tours—
508
00:32:35,420 --> 00:32:38,323
spent almost all their time,
from when they were very small—
509
00:32:38,323 --> 00:32:42,293
with no one
but each other.
510
00:32:42,293 --> 00:32:45,380
As if, even without exchanging words—
511
00:32:45,380 --> 00:32:48,316
each could feel
what the other was feeling—
512
00:32:48,316 --> 00:32:51,319
they were truly devoted twins.
513
00:32:51,319 --> 00:32:59,327
♪♪～
514
00:32:59,327 --> 00:33:03,281
What sort of boy was Azuma?
515
00:33:03,281 --> 00:33:06,367
(Kotomi) Master Azuma had
a very gentle, quiet nature—
516
00:33:06,367 --> 00:33:10,321
he loved flying
paper gliders.
517
00:33:10,321 --> 00:33:13,324
To Master Azuma, his steady,
dependable sister was—
518
00:33:13,324 --> 00:33:15,393
an older sister — and at the same time—
519
00:33:15,393 --> 00:33:19,393
somehow, it seemed,
something like a mother.
520
00:33:21,332 --> 00:33:24,319
And the cause of Azuma's suicide?
521
00:33:24,319 --> 00:33:26,354
(Kotomi) I don't know.
522
00:33:26,354 --> 00:33:33,328
Only that he harbored doubts and fears
about going on living as he was—
523
00:33:33,328 --> 00:33:36,331
and was deeply troubled by it.
524
00:33:36,331 --> 00:33:39,317
And Miss Azusa kept telling
such an Azuma—
525
00:33:39,317 --> 00:33:42,220
that he mustn't give up.
She kept urging him on.
526
00:33:42,220 --> 00:33:46,291
But around midday that day—
527
00:33:46,291 --> 00:33:48,276
Master Azuma, on his way up
to a building's rooftop—
528
00:33:48,276 --> 00:33:51,329
was seen by a child
who happened to be nearby.
529
00:33:51,329 --> 00:34:01,306
♪♪～
530
00:34:01,306 --> 00:34:06,344
(Kotomi) Pure-hearted as Master Azuma was,
the more he agonized—
531
00:34:06,344 --> 00:34:10,331
the more his mind
lost its balance.
532
00:34:10,331 --> 00:34:13,334
And then...
533
00:34:13,334 --> 00:34:33,404
♪♪～
534
00:34:33,404 --> 00:34:35,404
Azuma...?
535
00:34:41,329 --> 00:34:45,300
Azuma...
536
00:34:45,300 --> 00:34:47,352
He flew...
537
00:34:47,352 --> 00:35:11,292
♪♪～
538
00:35:11,292 --> 00:35:14,362
(Kotomi) Having lost Master Azuma,
Miss Azusa—
539
00:35:14,362 --> 00:35:17,315
as though her heart
had simply shattered—
540
00:35:17,315 --> 00:35:20,318
could not shed so much as a tear.
541
00:35:20,318 --> 00:35:31,279
♪♪～
542
00:35:31,279 --> 00:35:35,333
(Kotomi) If she has been drawn
into some crime—
543
00:35:35,333 --> 00:35:39,454
then please, I beg you,
save Miss Azusa.
544
00:35:39,454 --> 00:35:43,454
Um... could we take a look
at her room?
545
00:35:44,392 --> 00:35:46,392
Ah — Aoyagi-san,
read this!
546
00:35:48,313 --> 00:35:50,315
(Aoyagi) "All right then, Harmony—"
547
00:35:50,315 --> 00:35:54,435
"...this afternoon I'll come
to the Aquarium Terrace."
548
00:35:54,435 --> 00:35:56,435
She's out making another delivery — today...!
549
00:35:58,323 --> 00:36:00,325
(Aoyagi) Kamisawa! Just how much more—
550
00:36:00,325 --> 00:36:02,277
do you intend to use
that girl?!
551
00:36:02,277 --> 00:36:04,345
(Yazawa) Aoyagi-san!
(Murase) What are you doing!? Stop it!
552
00:36:04,345 --> 00:36:07,215
Even now he's got her out there
delivering Sleeper!
553
00:36:07,215 --> 00:36:11,319
I don't recall forcing her
to do a single thing.
554
00:36:11,319 --> 00:36:14,405
Why, you—!?
(Yazawa) Aoyagi-san!
555
00:36:14,405 --> 00:36:17,405
Finding her
comes first right now.
556
00:39:04,292 --> 00:39:06,327
Hey — are you here
waiting for a girl?
557
00:39:06,327 --> 00:39:09,447
Wh...?
558
00:39:09,447 --> 00:39:11,447
Police.
559
00:39:12,400 --> 00:39:15,386
Did you meet Gemini?
560
00:39:15,386 --> 00:39:19,386
When I got here,
this was lying on the bench.
561
00:39:21,342 --> 00:39:24,345
And the Sleeper?
Eh...?
562
00:39:24,345 --> 00:39:28,316
The Sleeper she was supposed
to deliver to you?!
563
00:39:28,316 --> 00:39:32,303
When I arrived, there was only the note...
564
00:39:32,303 --> 00:39:35,323
She's taken the Sleeper
off somewhere...
565
00:39:35,323 --> 00:39:37,308
She means to die.
566
00:39:37,308 --> 00:39:48,419
♪♪～
567
00:39:48,419 --> 00:39:52,419
Ah! It's the chief...
568
00:39:58,346 --> 00:40:00,331
(Kano's voice) "To the police."
569
00:40:00,331 --> 00:40:03,434
"The man killed at the shrine
the day before yesterday — I killed him."
570
00:40:03,434 --> 00:40:07,434
"I will atone for my crime.
Tokiwagi Azusa."
571
00:40:09,440 --> 00:40:11,440
Asawa speaking.
572
00:40:14,328 --> 00:40:16,330
Understood.
573
00:40:16,330 --> 00:40:20,384
Like hell I'll let her die.
Don't tell me she's...
574
00:40:20,384 --> 00:40:28,326
♪♪～
575
00:40:28,326 --> 00:40:31,329
(Shiho) No idea at all
where she might have gone?
576
00:40:31,329 --> 00:40:34,332
She is carrying the Sleeper, you realize?
577
00:40:34,332 --> 00:40:40,304
For someone driven to the edge,
death is the greatest comfort there is.
578
00:40:40,304 --> 00:40:45,443
You gave her the Sleeper knowing
she meant to use it on herself—!?
579
00:40:45,443 --> 00:40:56,443
♪♪～
580
00:41:07,331 --> 00:41:30,354
♪♪～
581
00:41:30,354 --> 00:41:32,406
Hey, Yazawa... do something.
582
00:41:32,406 --> 00:41:34,406
How far have we run, anyway?
583
00:41:37,328 --> 00:41:40,431
(Aoyagi) If you're going to die anyway,
why not use the Sleeper?
584
00:41:40,431 --> 00:41:42,431
(Yazawa) Wha!?
585
00:41:43,467 --> 00:41:47,467
(Aoyagi) At least that way,
they say, it doesn't hurt.
586
00:41:51,359 --> 00:41:56,464
At first, that's what
I thought I'd do—
587
00:41:56,464 --> 00:41:58,464
but no. I can't.
588
00:42:00,434 --> 00:42:04,434
Slipping away in my sleep
would be no punishment.
589
00:42:06,290 --> 00:42:08,342
Because I'm a murderer.
590
00:42:08,342 --> 00:42:11,412
(Aoyagi) To begin with — at the place
you went to deliver the Sleeper—
591
00:42:11,412 --> 00:42:13,412
what happened?
592
00:42:16,334 --> 00:42:20,438
A murderer doesn't get to just
die without a word, you know.
593
00:42:20,438 --> 00:42:23,438
If you're going to die anyway,
talk first. Then die.
594
00:42:31,415 --> 00:42:39,415
I... handed over the Sleeper,
and I listened to him.
595
00:42:41,342 --> 00:42:45,396
(Azusa's voice)
About the things that hurt him. About everything.
596
00:42:45,396 --> 00:42:48,396
And then...
597
00:42:49,350 --> 00:42:53,337
(Furukawa Yasuo)
Maybe I'll call it off for today.
598
00:42:53,337 --> 00:42:56,440
Strange — after having you listen to me
go on and on like that—
599
00:42:56,440 --> 00:42:58,440
I feel bad saying it, but...
600
00:43:01,312 --> 00:43:07,385
No — no,
there's nothing to feel bad about.
601
00:43:07,385 --> 00:43:10,354
This life of mine—
602
00:43:10,354 --> 00:43:15,443
was a dreary one — cheated by people
at every turn. And yet—
603
00:43:15,443 --> 00:43:21,443
when it actually comes to dying,
you waver after all, don't you.
604
00:43:25,336 --> 00:43:31,342
So... the things I handed over —
I can have them back, can't I?
605
00:43:31,342 --> 00:43:34,245
Handed over?
What do you mean?
606
00:43:34,245 --> 00:43:37,264
Eh? What are you saying?
607
00:43:37,264 --> 00:43:41,352
My register.
Your... register?
608
00:43:41,352 --> 00:43:45,372
I sent it to the P.O. box the administrator
specified, just as I was told.
609
00:43:45,372 --> 00:43:47,341
It got there —
that's why you came, isn't it!
610
00:43:47,341 --> 00:43:49,343
Send in your register papers—
611
00:43:49,343 --> 00:43:52,346
and Gemini delivers the Sleeper —
that was the promise, wasn't it?!
612
00:43:52,346 --> 00:43:55,316
A promise? What promise?
613
00:43:55,316 --> 00:43:58,335
Don't play games with me...
614
00:43:58,335 --> 00:44:02,339
You carried the Sleeper around
without knowing a thing!?
615
00:44:02,339 --> 00:44:06,360
I truly don't know anything
about any register—!
616
00:44:06,360 --> 00:44:09,280
That's enough.
617
00:44:09,280 --> 00:44:12,333
So even you are going to cheat me!?
618
00:44:12,333 --> 00:44:15,436
Give it back! Give it back to me!
619
00:44:15,436 --> 00:44:17,436
Give it back...!
620
00:44:21,342 --> 00:44:24,395
Ah... no—!
621
00:44:24,395 --> 00:44:28,395
Aah! Oh no...
622
00:44:32,319 --> 00:44:35,289
And then he was dead...
623
00:44:35,289 --> 00:44:38,476
That wasn't your fault!
You were only protecting yourself.
624
00:44:38,476 --> 00:44:40,476
Why did you start doing this at all?
625
00:44:44,348 --> 00:44:47,351
Because dying alone is too lonely.
626
00:44:47,351 --> 00:44:50,321
Did Kamisawa put that in your head?
(Azusa) No.
627
00:44:50,321 --> 00:44:52,323
Because I truly believed it myself.
628
00:44:52,323 --> 00:44:55,392
So that's why you keep punishing
yourself like this?
629
00:44:55,392 --> 00:44:58,392
Ever since Azuma died?
630
00:45:01,432 --> 00:45:03,432
Is that it?
631
00:45:10,307 --> 00:45:18,399
I... never tried to understand
how Azuma really felt.
632
00:45:18,399 --> 00:45:23,437
I only kept pushing him
to be stronger...
633
00:45:23,437 --> 00:45:29,437
ãã®åã®èŠãã¿ã
çè§£ããŠãããããªãã£ãã
634
00:45:32,363 --> 00:45:37,434
æ¬åœã¯ãé 匵ããªããŠ
èšã£ã¡ããããªãã£ããã§ãã
635
00:45:37,434 --> 00:45:43,434
ã¢ãºãã¯ãã€ãããšãèŠãããšã
èšããªããªã£ãŠâŠã
636
00:45:55,452 --> 00:45:58,452
éæ³ããã§ããã£ãïŒ
637
00:46:00,391 --> 00:46:06,391
ãã®æ¥ãã¢ãºãã¯
ãæŒé ã«ãããã«æ¥ããã§ãã
638
00:46:08,315 --> 00:46:11,285
ãããããæ¥ãåŸããŠâ
639
00:46:11,285 --> 00:46:16,407
å€çŒãã«ãªã£ãŠ
ãã£ããæããªããŸã§â
640
00:46:16,407 --> 00:46:19,410
ãã®åã¯ãã²ãšããŒã£ã¡ã§â
641
00:46:19,410 --> 00:46:22,410
äœæéããããã§ç©ºãèŠãŠãã
642
00:46:25,416 --> 00:46:32,416
ç§ããã®åã
å¯ããæ°æã¡ã®ãŸãŸâŠã
643
00:46:36,443 --> 00:46:39,443
ã²ãšããŒã£ã¡ã§
éãããŠããŸã£ããã§ãã
644
00:46:41,432 --> 00:46:44,432
ç§ãããã®ã«âŠã
645
00:46:46,437 --> 00:46:49,437
ç§ãããã®ã«âŠïŒ
646
00:46:52,426 --> 00:46:55,426
ãã®åã
ïŒäººã§éãããŠããŸã£ãâŠã
647
00:46:56,497 --> 00:46:58,497
ãã¡ã ã
648
00:47:01,452 --> 00:47:04,452
ã¢ãºãâŠã
649
00:47:06,423 --> 00:47:11,423
ç§ãé£ãã§ããã°ã«è¡ããã
650
00:47:14,431 --> 00:47:18,431
é£ã¶åã«æããŠïŒ
ä¿é·âŠã
651
00:47:20,337 --> 00:47:23,290
ã¹ãªãŒããŒé£²ãã§
亡ããªã£ã人â
652
00:47:23,290 --> 00:47:27,328
æ»ã¬åã«ããªãã
æã£ãŠããããªãã ãã©ã
653
00:47:27,328 --> 00:47:29,346
ïŒç¢æ²¢ïŒä»ãããïŒ
654
00:47:29,346 --> 00:47:33,283
ïŒäººãšãã峿ã®ç²ã®ããã®èŸºã«â
655
00:47:33,283 --> 00:47:37,471
硬ããã®ãæŒãã€ãããããª
çããã£ãã
656
00:47:37,471 --> 00:47:40,471
åãæèšãå
åŽã«ããŠãããïŒ
657
00:47:45,295 --> 00:47:50,334
圌ããæ¡ã£ãŠãã®ã¯
åã®æãªããããªãã®ããªã
658
00:47:50,334 --> 00:48:07,351
âªâªïœ
659
00:48:07,351 --> 00:48:09,303
åã¯ãæ»ãã§ããäººã®æãâ
660
00:48:09,303 --> 00:48:12,339
æ¡ã£ãŠããŠãããããã
ãªãã®ããªïŒ
661
00:48:12,339 --> 00:48:15,409
ã²ãšããŒã£ã¡ã§
å¯ããæ°æã¡ã®ãŸãŸâ
662
00:48:15,409 --> 00:48:18,409
éãããããªãã
ããæã£ãŠã
663
00:48:19,329 --> 00:48:24,334
ãã°ã«åº§ã£ãŠ
ãã£ãšæãæ¡ã£ãŠãããŠãã
664
00:48:24,334 --> 00:48:28,305
èªæ»ãé¡ã人ã«ãåããã£ãš
äžçæžåœãå¯ãæ·»ãããšããã
665
00:48:28,305 --> 00:48:32,326
ç±åž¯éãããããã£ãŠã
äžè°·ããããã¯â
666
00:48:32,326 --> 00:48:34,344
ç±åž¯éãé ããâ
667
00:48:34,344 --> 00:48:37,331
ããããªå€æ¥ã®äžã§æ»ã«ãã
ã£ãŠãã人ããããâ
668
00:48:37,331 --> 00:48:40,434
ããããå Žæãæ¢ããŠãããŠâ
669
00:48:40,434 --> 00:48:45,434
æåŸã®æåŸãŸã§
æãæ¡ã£ãŠãããŠãã
670
00:48:49,326 --> 00:48:54,348
ä»ã«ãäœãããŠãããããªãããã
671
00:48:54,348 --> 00:48:59,436
éãããåãããããã®äºãããã
672
00:48:59,436 --> 00:49:03,436
ããã«ãåã
æ°ãã€ããŠãªãã ãã ã
673
00:49:04,341 --> 00:49:08,362
ãã€ã®éã«ãããã®ãµã€ããã
ããªããªã£ãŠã人ãã¡ãããããã
674
00:49:08,362 --> 00:49:10,397
èŠããŠãããªïŒ
675
00:49:10,397 --> 00:49:15,352
äžåžããå«ãããåããŠ
æ»ã«ããã£ãŠèšã£ãŠãç·ã®äººã
676
00:49:15,352 --> 00:49:18,338
ãã®äººããµãããšã«åž°ã£ãŠâ
677
00:49:18,338 --> 00:49:21,358
ä»ãç«æŽŸã«
ééäŒç€Ÿã§åããŠããã ãã
678
00:49:21,358 --> 00:49:23,410
åã«ããããšãã£ãŠèšã£ãŠãã
679
00:49:23,410 --> 00:49:27,331
åïŒïŒ
åãããªãããããèŠãŠã
680
00:49:27,331 --> 00:49:30,384
äžæ²¢ã®ããœã³ã³ã«ãã£ã
ãã®ãµã€ãã®äŒå¡åç°¿ã
681
00:49:30,384 --> 00:49:33,384
ããã§é£çµ¡ãšã£ãŠ
確èªãããã ãã
682
00:49:34,421 --> 00:49:36,340
é·å¹Žé£ãæ·»ã£ã奥ããã亡ãããŠâ
683
00:49:36,340 --> 00:49:40,310
ïŒäººã§çãããããªãã£ãŠ
èšã£ãŠã人ãããããã
684
00:49:40,310 --> 00:49:44,448
ãã®äººã¯
å°åã®ãã©ã³ãã£ã¢ã«åå ããŠâ
685
00:49:44,448 --> 00:49:48,448
å
童通ã§
åäŸãã¡ã«éã³ãæããŠãã
686
00:49:49,286 --> 00:49:53,340
åãããã£ãš
話ãèããŠããããããã§â
687
00:49:53,340 --> 00:49:57,494
çãçŽãããš
æ©ãå§ãã人ãã¡ãããã
688
00:49:57,494 --> 00:50:13,494
âªâªïœ
689
00:50:16,296 --> 00:50:18,432
ãã£âŠã
690
00:50:18,432 --> 00:50:22,432
ïŒæ³£ã声ïŒ
691
00:50:24,338 --> 00:50:27,341
倧人ãæãããããªãã
692
00:50:27,341 --> 00:50:31,345
ïŒæ³£ã声ïŒ
ïŒéæ³ïŒããããâŠã
693
00:50:31,345 --> 00:50:38,352
ïŒæ³£ã声ïŒ
694
00:50:38,352 --> 00:50:43,340
åžžç€æšã¢ãºãµã確ä¿ã
ç¢æ²¢ã確ä¿ã
695
00:50:43,340 --> 00:50:46,426
ä¿è·ãä¿è·ã§ãã
696
00:50:46,426 --> 00:50:55,426
âªâªïœ
697
00:50:58,438 --> 00:51:03,438
åžžç€æšã¢ãºãµãã
ç¡äºãä¿è·ãããããã
698
00:51:07,297 --> 00:51:10,417
äººãæ»ã«è¿œãããã®ãã人ãªãâ
699
00:51:10,417 --> 00:51:13,417
人ãæããã®ãã人ãªã®ãã
700
00:51:16,340 --> 00:51:20,310
圌女ãè²ããªäº
蚌èšããŠãããã ãããªã
701
00:51:20,310 --> 00:51:23,330
ããããããã«ããããããâ
702
00:51:23,330 --> 00:51:27,434
æŽãããã
å
šéšãåã£ãŠãããããã
703
00:51:27,434 --> 00:51:32,434
ãŸãæå§ãã«
æžç±ã誰ã«ããããã§å£²ã£ããšãã
704
00:51:38,295 --> 00:51:42,349
ïŒã¢ããŠã³ãµãŒïŒã倧æéè¡ããçžæ¬¡ãã§
å€é¡ã®èè³éãã ãŸãåãããâ
705
00:51:42,349 --> 00:51:46,403
äœå®
èè³è©æ¬ºäºä»¶ã§
èŠèŠåºææ»äºèª²ã¯âŠã
706
00:51:46,403 --> 00:51:49,339
äºèª²ã«
ç¹å€§ã®æ©ã売ã£ãŠãã£ããã
707
00:51:49,339 --> 00:51:53,327
ãããâŠããŸãã調ã¹ãŠããã®ã¯
ç§ãšå°å®®å±±åã§ãããã
708
00:51:53,327 --> 00:51:57,397
ã¡ã£ã¡ãããªïœïŒ
誰ã調ã¹ããšãã£ãŠâŠã
709
00:51:57,397 --> 00:51:59,233
ãªãããïŒãããã
èªåã®ææã£ãŠâŠã
710
00:51:59,233 --> 00:52:02,336
ãã€ããèªåãâŠïŒ
ãŸããŸãâŠãïŒäººãšãïŒ
711
00:52:02,336 --> 00:52:05,339
ããå°ããèŠããŸããã
ïŒæç¬ïŒããå°ãâŠïŒïŒ
712
00:52:05,339 --> 00:52:08,342
å°å®®å±±åãããããé£ã£ãŠãªãã§
èšã£ãŠããããåãã
713
00:52:08,342 --> 00:52:13,347
ããïœãå€ã®ããããã¯æé«ã ãã
ããããã®ãã®ã©è¶ããæé«ïŒ
714
00:52:13,347 --> 00:52:17,284
ããããã£ãŠã
ã³ã£ããæ°Žããã€ã³ããªãã ããã
715
00:52:17,284 --> 00:52:19,353
ãªãã§ããïŒãã³ã£ããæ°Žã£ãŠã
716
00:52:19,353 --> 00:52:22,422
ããïŒãå·®ãæ°Žã®äº
ã³ã£ããæ°Žã£ãŠèšããªãïŒ
717
00:52:22,422 --> 00:52:25,422
ïŒæµ
茪ã»å¿ä¿ïŒèšããªãããïœã
| {
"pile_set_name": "Github"
} |
Amino acid substitutions in mitochondrial ATPase subunit 9 of Saccharomyces cerevisiae leading to oligomycin or venturicidin resistance.
A series of isonuclear oligomycin-resistant mutants of Saccharomyces cerevisiae carrying mutations in the mitochondrial oli1 gene has been studied. DNA sequence analysis of this gene has been used to define the amino acid substitutions in subunit 9 of the mitochondrial ATPase complex. A domain of amino acids involved in oligomycin resistance can be recognized, encompassing residues in each of the two hydrophobic portions of the subunit 9 polypeptide that are thought to span the inner mitochondrial membrane. Certain amino acid substitutions also confer cross-resistance to venturicidin; these residues define an inner domain for venturicidin resistance. The expression of venturicidin resistance resulting from one particular substitution is modulated by nuclear genetic factors. | {
"pile_set_name": "PubMed Abstracts"
} |