Legal and ethical obligations in conducting a clinical drug trial in Australia as an investigator-initiated and sponsored study for an overseas pharmaceutical company.
Most multi-centre trials are both financed and sponsored by the pharmaceutical company involved. What follows maps the path adopted for an investigator-initiated and sponsored study of a new indication for an established medication. The chief investigators of a company-sponsored, investigator-initiated, multi-centre, placebo-controlled study of an established medication, Pharmaceutical Benefits Scheme (PBS) listed for treatment of one condition but trialled in the management of another (a trial of off-label use), were approached to submit a protocol to repeat that type of study with a different compound. The new study would test a different agent, also PBS listed, for the same condition as in the initial study and with the same off-licence application. The company would finance the study and provide the medication and matched placebo, but would only review the investigator-initiated protocol, which would be sponsored by the principal investigator. This required the investigator to implement the trial, as would normally be done by the pharmaceutical company, yet also act as its principal investigator. The principal investigator, with colleagues and a Clinical Research Organisation (CRO), developed a protocol adapted for the new agent and submitted it for approval. Upon acceptance, a contract was negotiated with the pharmaceutical company which had to overcome jurisdictional conflicts between common law and civil law legal systems. A CRO was contracted to undertake administrative functions, which dictated special contractual agreements to overcome possible conflicts of interest for a sponsor/investigator and to protect patient interests. Indemnification insurance with its own jurisdictional problems, co-investigators, ethics committee approvals and finance management were just some of the difficulties encountered. The paper outlines how these obstacles were overcome and how ethical and legal issues were respected through compromise. The ethical and legal obligations were addressed in a fashion which allowed the conduct of a trial adopting a proven methodology but novel infrastructure, such that it was a totally independent study with regard to the conduct and reporting of final data, irrespective of whether the results were positive or negative. This may represent a more acceptable way to ensure that future clinical trials are devoid of undue influence from the pharmaceutical industry which may still fund the study.
The β-lactam family of antibiotics is the most important class of antibacterial compounds in clinical application. The narrow bactericidal spectrum of naturally occurring β-lactam antibiotics, their low acid stability and increasing resistance problems have triggered the development of semi-synthetic antibiotics (SSA's) such as the semi-synthetic penicillins (SSP's) and semi-synthetic cephalosporins (SSC's). In general, chemical synthesis of semi-synthetic β-lactam antibiotics is performed under harsh conditions using reactive intermediates and organic solvents at low temperatures, causing high downstream processing costs and processes that are environmentally unfriendly. Therefore, there is an ongoing effort to replace the traditional chemical processes by enzymatic conversion, in order to obtain a more sustainable production of semi-synthetic β-lactam antibiotics.
Natural β-lactams typically consist of the β-lactam nucleus (e.g. 6-amino-penicillanic acid (6-APA), 7-amino-desacetoxy-cephalosporanic acid (7-ADCA) and others) and a so-called side chain, which is connected to the nucleus via an amide bond. Penicillin G acylase (EC 3.5.1.11) is a hydrolytic enzyme which is broadly used to remove the side chain of penicillin G (PenG), cephalosporin G (CefG) and related antibiotics to produce the corresponding deacylated nuclei 6-APA and 7-ADCA, respectively, together with the liberated side chain (phenylacetic acid (PAA) in the case of PenG and CefG). These deacylated β-lactam intermediates are the building blocks of SSA's such as ampicillin, amoxicillin, oxacillin, cloxacillin, dicloxacillin, flucloxacillin, cefalexin, cefadroxil, cefradine, cefaclor, cefprozil, cefotaxime and others. For recent reviews on PenG acylases see Rajendhran, J. and Gunasekaran, P., J. Biosci. Bioeng. (2004), 97, 1-13; Arroyo, M. et al., Appl. Microbiol. Biotechnol. (2003), 60, 507-14; Sio, C. F. and Quax, W. J., Curr. Opin. Biotechnol. (2004), 15, 349-55; Chandel, A. K. et al., Enzyme and Microbial Technology (2008), 42, 199-207.
Apart from deacylating β-lactam compounds, it has been found that PenG acylase and amino ester hydrolases can also be used to synthesize β-lactam antibiotics. In this process, the PenG acylase catalyses the condensation of an activated side chain with a deacylated β-lactam intermediate (such as 6-APA, 7-ADCA, 7-ACA and others). The enzyme-catalyzed synthesis of β-lactam antibiotics can be carried out in either an equilibrium-controlled or a kinetically controlled conversion process. Under conditions of an equilibrium-controlled conversion, the level of product accumulation that can be reached is governed by the thermodynamic equilibrium of the reaction, which is unfavourable in the case of the synthesis of semi-synthetic antibiotics, in particular when the reaction is carried out in aqueous media. In a kinetically controlled conversion, the enzyme catalyses the transfer of the acyl group from the activated side chain, i.e. the acyl donor, to the β-lactam nucleus, i.e. the nucleophilic acceptor. For the preparation of semi-synthetic penicillins, the activated side chain may be the amide derivative or the methyl ester of an aromatic carboxylic acid. In this case, the level of product accumulation is governed by the catalytic properties of the enzyme, and high non-equilibrium concentrations of the acyl-transfer product can transiently be obtained. Examples of side chains used in the synthesis of SSA's are activated phenylglycine, activated hydroxyphenylglycine, activated dihydro-phenylglycine and others.
PenG acylase catalyzes the hydrolysis of amides and esters via an acyl-enzyme intermediate in which the N-terminal serine of the β-subunit is esterified to the acyl group. In the case of hydrolysis, water attacks the acyl-enzyme and drives the hydrolysis to completion. When an amino group of an added external nucleophile (e.g. 6-APA, 7-ADCA) is present, both the nucleophile and the water may attack the acyl enzyme, yielding the desired acyl-transfer product (antibiotic) and the undesired hydrolyzed acyl donor, respectively.
The ability of PenG acylase to act as an acyl transferase, i.e. to synthesize SSA's, is already exploited on an industrial scale in the enzymatic production of various semi-synthetic β-lactam antibiotics. However, in the production of SSA's, the hydrolysis reaction by water reduces the efficiency of the transfer reaction, due to the loss of activated precursor side chains. The ratio between the rate of synthesis (S) and the rate of hydrolysis (H) is an important parameter for evaluating the synthetic performance of a PenG acylase. The S/H ratio equals the molar ratio of synthesized product (SSA) to hydrolysis product at defined conditions during the enzymatic acylation reaction. The synthesized product is defined herein as the β-lactam antibiotic formed from the activated side chain and the β-lactam nucleus. The hydrolysis product is defined herein as the corresponding acid of the activated side chain. For an economically attractive process, it is desirable that the S/H ratio is high, while at the same time the enzymatic activity preferably is also sufficiently high.
The S/H ratio that is observed in a conversion is dependent on the reactants, the reaction conditions and the progress of the conversion. Youshko et al. showed that the initial value of the S/H ratio is dependent both on the kinetic properties of the enzyme and the concentration of the nucleophilic acceptor (e.g. 6-APA)—see Youshko, M. I. and Svedas, V. K., Biochemistry (Moscow) (2000), 65, 1367-1375 and Youshko, M. I. et al., Biochimica et Biophysica Acta—Proteins and Proteomics (2002), 1599, 134-140. At fixed conditions and nucleophile concentration, the initial S/H ratio can be used to compare the performance of different PenG acylases and/or different PenG acylase mutants. In addition, the performance of different PenG acylases can be compared by measuring the synthesis and the hydrolysis during the conversion as a function of time, which allows for calculation of the S/H ratio at different stages of the conversion. The synthetic activity (= the rate at which the product of synthesis is formed = rate of synthesis = production rate) of a PenG acylase in an acylation reaction refers to the amount of β-lactam antibiotic formed in the acylation reaction per unit time at defined conditions. Preferably, the initial activity is determined. The initial enzymatic activity can be determined by carrying out the acylation reaction and then constructing a graph of the amount of product synthesized versus the reaction time, a so-called progress curve. In general, at the start of the conversion, the rate of product formation is relatively constant and the activity can be derived directly from the slope of the progress curve. In case the synthetic activity already starts to decline at the beginning of the conversion, the initial rate should be obtained by extrapolation of the progress curve and calculation of the slope at t=0. In order to compare the activity of different PenG acylases, the synthetic activity should be normalised to the same amount of protein. In the same way as for the initial rate of synthesis, the initial rate of hydrolysis can be determined from a graph of the amount of the activated side chain hydrolyzed versus the reaction time.
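By way of illustration only, the following Python sketch (hypothetical numbers; not data from any of the cited studies) estimates the initial rates of synthesis and hydrolysis from the early, linear part of two such progress curves and reports the initial S/H ratio:

import numpy as np

def initial_rate(t, conc, n_points=4):
    # Slope of a progress curve near t = 0, from a linear fit
    # to the first few (time, concentration) samples.
    slope, _intercept = np.polyfit(t[:n_points], conc[:n_points], 1)
    return slope

# Hypothetical progress-curve data, sampled every minute (mM).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])             # minutes
synthesized = np.array([0.0, 1.9, 3.8, 5.6, 7.3, 8.9])   # antibiotic formed
hydrolyzed = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.4])    # side chain acid formed

rate_s = initial_rate(t, synthesized)  # initial rate of synthesis, mM/min
rate_h = initial_rate(t, hydrolyzed)   # initial rate of hydrolysis, mM/min

mg_protein = 2.0  # normalise both rates to the same amount of protein
print(f"synthetic activity: {rate_s / mg_protein:.2f} mM/min per mg protein")
print(f"initial S/H ratio: {rate_s / rate_h:.2f}")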
PenG acylases have been the subject of several studies involving PenG acylase mutants. An extensive list of published mutations is given in Rajendhran and Gunasekaran (2004)—vide supra. More recently, further studies were published by Gabor, E. M. and Janssen, D. B., Protein Engineering, Design and Selection (2004), 17, 571-579; Jager, S. A. W. et al., Journal of Biotechnology (2008), 133, 18-26; Wang, J. et al., Applied Microbiology and Biotechnology (2007), 74, 1023-1030.
International Patent Application WO96/05318 to Gist-brocades teaches how the specificity of PenG acylases can be modified by mutating the substrate binding site at one or more amino acid positions. It was shown that the S/H ratio of PenG acylases can also be tuned in this way.
In addition, International Patent Applications WO98/20120 (to Bristol-Myers Squibb), WO03/055998 (to Gist-brocades) and Chinese Patent Application CN101177688 (to Shanghai Institute for Biological Sciences) describe a process for the enzymatic preparation of a β-lactam antibiotic from a β-lactam nucleus and an activated side chain with the aid of a PenG acylase mutant. WO98/20120 discloses mutations at amino acid positions 142 and 146 in the α-subunit and at amino acid positions 24, 56 or 177 in the β-subunit of Escherichia coli PenG acylase. Particularly, the PenG acylase variant with a mutation at position β24 (Fβ24A), whereby phenylalanine is replaced by alanine, appears to produce a significantly higher yield in the synthesis of penicillins and cephalosporins. However, in WO03/055998 it was shown that in processes where, instead of an ester precursor, an amide precursor is used in combination with said mutant Fβ24A, the S/H ratio is still high, but the enzymatic activity is so low that the use of this mutant is economically much less attractive. Instead, it was shown in WO03/055998 that a PenG acylase mutant wherein arginine at position 145 in the α-subunit was replaced by leucine (Rα145L), cysteine (Rα145C) or lysine (Rα145K) also showed an improved S/H ratio but, in addition, had retained a higher level of synthetic activity, especially with amide precursors. Nevertheless, the synthetic activity of all these mutants was less than the synthetic activity of the wild-type PenG acylase.
CN101177688 disclosed that, for mutants of the Bacillus megaterium PenG acylase as well, an improvement of the S/H ratio was accompanied by a decrease of the synthetic activity.
EP-1418230 to TUHH-Technologie GmbH discloses Alcaligenes faecalis PenG acylases for which the post-translational maturation of the α-subunit is incomplete, resulting in a higher hydrolytic activity for penicillin G and 6-nitro-3-phenylacetamido-benzoic acid (further referred to as NIPAB). Incomplete processing of said α-subunit was invoked by amino-acid substitutions in the so-called linker region between the α- and β-subunits. It was not described whether or not such mutations could also increase the synthetic activity.
The prior art discussed above shows that, although it is possible to increase the S/H ratio of various mutants of PenG acylase, such improvements in the S/H ratio are accompanied by a decrease of the synthetic activity compared to the wild-type PenG acylase. Therefore, the disadvantage of these mutants is that long conversion times or very high concentrations of mutant PenG acylase are needed in such conversions, which makes industrial application of such mutants economically unattractive if not impossible.
It is the purpose of the present invention to provide mutant PenG acylases which have an increased S/H ratio while maintaining or, more preferably, increasing the synthetic activity compared to the wild-type enzyme, in order to be suitable for industrial processes.
108 F.3d 1370
NOTICE: THIS SUMMARY ORDER MAY NOT BE CITED AS PRECEDENTIAL AUTHORITY, BUT MAY BE CALLED TO THE ATTENTION OF THE COURT IN A SUBSEQUENT STAGE OF THIS CASE, IN A RELATED CASE, OR IN ANY CASE FOR PURPOSES OF COLLATERAL ESTOPPEL OR RES JUDICATA. SEE SECOND CIRCUIT RULE 0.23.
UNITED STATES of America, Appellee, v. Gregory V. BROWN, Defendant-Appellant.
No. 96-1590.
United States Court of Appeals, Second Circuit.
March 18, 1997.
APPEARING FOR APPELLANT: Gregory V. Brown, pro se, Ray Brook, New York.
APPEARING FOR APPELLEE: Joshua W. Nesbitt, Assistant United States Attorney, Northern District of New York, Albany, New York.
PRESENT: Honorable JOHN M. WALKER, Jr., Honorable JOSEPH M. McLAUGHLIN, Honorable HARLINGTON WOOD, Jr.,* Circuit Judges.
SUMMARY ORDER
This cause came on to be heard on the transcript of record from the United States District Court for the Northern District of New York (Thomas J. McAvoy, Chief Judge) and was submitted by both parties.
ON CONSIDERATION WHEREOF, IT IS HEREBY ORDERED, ADJUDGED AND DECREED that the appeal from the order of the United States District Court for the Northern District of New York is DISMISSED.
Defendant, Gregory Vincent Brown, pro se, appeals from the August 20, 1996, order of the United States District Court for the Northern District of New York (Thomas J. McAvoy, Chief Judge) revoking defendant's term of supervised release. Prior to revocation, on November 14, 1990, Brown had been sentenced to a term of incarceration of 36 months, followed by a term of supervised release of 24 months, for convictions on charges of false representation, 18 U.S.C. §§ 499, 912, 1001, embezzlement of United States' funds, 18 U.S.C. § 641, possession of false identification, 18 U.S.C. § 1028, unlawful importation of goods, 18 U.S.C. § 545, and threatening harm to a witness, 18 U.S.C. § 1513.
As a basis for revoking probation, the district court found that Brown had committed new criminal conduct (aggravated harassment and contempt of court under New York penal law) and failed to follow instructions of his probation officer to refrain from violations of the law and abide by a protection order entered against him. Further, Brown admitted to failing to advise his probation officer of a change in his employment status as required by the terms of his supervised release. After a revocation proceeding, the court, by order entered August 20, 1996, committed defendant to custody for a term of 18 months. Defendant filed a notice of appeal on September 3, 1996.
Under Fed.R.App.P. 4(b): "In a criminal case, a defendant shall file the notice of appeal in the district court within 10 days after the entry of either the judgment or the order appealed from...." Defendant in this case failed to file a notice of appeal within the period provided. As noted, the district court entered its judgment on Tuesday, August 20, 1996. Accordingly, pursuant to Fed.R.App.P. 4(b), defendant was required to file his notice of appeal, at the latest, on Friday, August 30, 1996, ten days after entry of judgment. Defendant's counsel, however, filed his notice of appeal on September 3, 1996, one business day later. (The intervening weekend included the Labor Day holiday.) Defendant, thus, filed out of time. See United States v. Clark, 51 F.3d 42, 43 (5th Cir.1995) (finding Fed.R.App.P. 4(b) applicable to judgment revoking term of supervised release); United States v. Patterson, 982 F.2d 319, 320 (9th Cir.1992) (per curiam) (same); United States v. Johnson, 980 F.2d 1212, 1212 (8th Cir.1992) (per curiam) (same).
Although the government fails to raise this matter, we must do so sua sponte, as timeliness of a defendant's notice of appeal implicates our jurisdictional authority. United States v. Ferraro, 992 F.2d 10, 11 (2d Cir.1993) (per curiam) ("the requirement of a timely notice of appeal in rule 4(b) is jurisdictional"). The court, thus, is precluded from hearing Brown's appeal. Id.
Accordingly, the instant appeal is dismissed. Because Brown's notice of appeal was filed within 30 days after the expiration of the appeal period, we remand the action to the district court to determine whether the untimeliness of the filing was attributable to excusable neglect as understood by Fed.R.App.P. 4(b). See United States v. Clark, 51 F.3d at 42.
*
The Honorable Harlington Wood, Jr., of the United States Court of Appeals for the Seventh Circuit, sitting by designation.
Monday, December 1, 2008
Space Savvy
The Gemini Program Begins with a Bang
Soon after the assassination of President John F. Kennedy in Dallas on Nov. 22, 1963, Eric Sevareid, a CBS news reporter, said that Kennedy's legacy was his attitude and contagious spirit that all things are possible for Americans if only we have the vision and will. In a speech at Rice University on September 12, 1962, President Kennedy had set a goal to put a man on the Moon and return him safely to Earth before the end of the decade. At that point in time, the USA had a total of 20 minutes of spaceflight experience. This nearly inconceivable challenge was considered courageous and historic by some, arrogant and foolhardy by others.
But NASA and the nation takes JFK's words to heart. The next step after the Mercury program is to graduate to the Gemini program and two-man capsules. The Mercury missions proved spaceflight was possible for human beings. Gemini will teach man how to fly to the Moon. On September 17, 1962, a second group of astronauts arrives, four from the Air Force, two from the Navy, and two civilians. They are called The New Nine and several will become famous: Jim Lovell, Neil Armstrong and Buzz Aldrin among them. Rivalry between the astronauts is intense. Each wants to be first to step on the surface of the Moon.
John Young, one of the New Nine, and Gus Grissom of the original Mercury Seven are the first two astronauts paired for a Gemini mission. Their camaraderie and enthusiasm give them a reputation among their peers as a sort of 'dynamic duo.'
But there's a problem...
A more powerful rocket is needed to launch a two-man capsule into space. The Air Force is developing the new Titan missiles but having difficulties adapting missile rockets to a manned Gemini vehicle. The Titans are initially a disaster. One out of every five fails catastrophically. Astronauts watch as the rockets explode on the launchpad again and again. The odds aren't good enough to risk propelling a manned mission into space.
Engineers attack the problems and create safeguards and backup systems to make the rockets safer. Finally, NASA launches two rockets in a row that don't explode, carrying the unmanned Gemini 1 and Gemini 2 capsules. John Young and Gus Grissom will ride the next one into space aboard Gemini 3. Their primary goal? Test the brand new rocket and capsule and return...alive. If anything goes wrong with the launch, Young and Grissom will be killed on live television with millions watching.
In a moment of optimism, Grissom names the Gemini 3 capsule The Molly Brown after the Broadway hit "The Unsinkable Molly Brown." He hopes the name will bring good luck and that, if the voyage is successful, it won't meet the same fate as his Liberty Bell 7 Mercury capsule, which sank before recovery. The Molly Brown would be the last NASA vehicle to be named by an astronaut.
The launch aboard the converted ICBM (intercontinental ballistic missile) goes flawlessly. The rocket stages fall away and the Gemini 3 capsule reaches orbit. Grissom and Young become the first Americans to fly in space together. They make three successful orbits of the Earth, testing important maneuvers and altitude changes that are essential first steps in reaching the Moon.
With the exception of a contraband corned beef sandwich smuggled aboard by John Young (for which the crew was later reprimanded, because the crumbs could have played major havoc with the instrumentation onboard), a couple of minor failed experiments, and a glitch with the orbital maneuvering system thrusters (which would manifest itself again in Gemini 8 as a much larger issue), the flight was without significant problems.
But re-entry is not so perfect. Back on Earth, the recovery task force of 27 ships and 126 aircraft waits while things go amiss. In an interview for the Discovery Channel documentary When We Left Earth, John Young states, "We screwed up on re-entry. When we fired the retro-rockets, we forgot the Earth rotated under us. We forgot to put the rotation of the Earth into the equation."
When the parachutes engage, the sudden change in the capsule's orientation causes Gus Grissom to crack the plexiglas faceplate of his helmet on a control panel. The Gemini capsule is coming down about 190 miles short of the targeted recovery area. Grissom is able to make up much of this distance during descent, but Gemini 3 still lands about sixty miles off target. The men decide to deviate from standard landing procedures by not opening their capsule's hatch, and by keeping their helmets on for some time after splashdown due to smoke from the thrusters. As the astronauts drift in the Atlantic waiting for rescue, Grissom gets seasick, but both men are recovered safely after an uncomfortable thirty minutes or so.
A large crowd turns out for a ticker tape parade in the cold rain of lower Manhattan to welcome the returning heroes home. The first Gemini flight has been a success and has laid the groundwork for more ambitious missions to come.
Each Gemini mission going forward will involve huge risks and giant leaps in achieving the goal set by President Kennedy. The next Gemini mission will involve another historic first for spaceflight. One of the astronauts will conduct an EVA or Extra Vehicular Activity. For the first time, man will walk in space.
2 comments:
Wow. What a concise, fascinating flow of information, Laurie. Imagine where I may have soared to, if only one of the classes I snored through in my formative years could have been so well written and fun to read. Thanks for sharing this historic documentary on the accomplishments of the USA space program.
Oh, so glad you enjoyed it, Arlene. Rediscovering our space program has been a blast for me and an information bonanza. I'm learning so much I never knew and I'm awestricken at the amount of risk these brave men and women took on to advance our knowledge of space and spaceflight.
I never realized what a monstrous task it was to reach the Moon. Most people know about the near disaster with Apollo 13, but I think few realize that almost every excursion into space has its dramas and moments of jeopardy where the crew and/or the craft could have been lost.
"To boldly go where no one has gone before" should have been NASAs motto.
(A) Laser light at 578 nm is pre-stabilized to an isolated, high-finesse optical cavity using Pound-Drever-Hall detection and employing electronic feedback to an acousto-optic modulator (AOM) and laser piezoelectric transducer. This stable laser light is then delivered to the Yb-1 and Yb-2 systems, where it is aligned along the optical lattice axis to probe the atomic clock transition. Resonance with the atomic transition is detected by observing atomic fluorescence collected onto a photomultiplier tube (PMT). The fluorescence signal is digitized and processed by a microcontroller unit (MCU), which computes a correction frequency, f1,2(t). This correction frequency is applied to the relevant AOM by way of a direct digital synthesizer (DDS), and locks the laser frequency onto resonance with the clock transition. (B) Relevant Yb atomic energy levels and transitions, including laser cooling transitions (399 and 556 nm), the clock transition (578 nm), and the optical pumping transition used for excited state detection (1388 nm). (C) A single-scan, normalized excitation spectrum of the 1S0–3P0 clock transition in 171Yb with 140 ms Rabi spectroscopy time; the red line is a free-parameter sinc² function fit. Credit: arXiv:1305.5869 [physics.atom-ph]
(Phys.org) —Researchers at the National Institute of Standards and Technology in Boulder, Colorado have succeeded in building a record-breaking clock—one that has an instability of just one part in 10^18. They describe their new clock in a paper they've uploaded to the preprint server arXiv. In it they suggest that if their clock could somehow be used to gauge the age of the universe, it would be able to do so to within just a single second.
As time has passed, clock-making has become more important—besides helping people get together at prearranged times, clocks now help run the GPS system, keep networks on track and are key to unlocking the fundamental laws of the universe. As technology has grown in sophistication, so too has the need for ever more accurate clocks. This has led to atomic clocks which use the electronic transition frequency in the ultraviolet, optical or microwave region of the electromagnetic spectrum to keep very accurate time. In this new effort, the researchers built a new type of atomic clock that is more accurate than any that has come before.
To build their clock the researchers employed a laser and mirrors to build a lattice trap capable of capturing atoms—its purpose was to hold atoms steady so that there wouldn't be any frequency interference, a problem with other atomic clocks. The trap was then filled with ytterbium atoms which were then shot with a second laser to measure their electronic frequencies. The result was a clock that, if run for 31 billion years, would be off by less than a second.
Building a clock that is believed to be the most accurate in the world creates a problem, though: how do you measure its accuracy? The answer is by building another clock exactly like the first, of course, and then comparing the two against one another. That's what the researchers did, running both clocks for a short period to see if they came up with exactly the same time duration, which, the researchers report, they did.
One of the first uses for the new clock will be in measuring gravitational redshift, which is a means of measuring, very precisely, the height of geographic areas. This can be done because time moves slower in areas of higher gravity. The researchers say their new clock is capable of measuring redshift to within 1 centimeter.
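As a sanity check on that last claim (standard gravitational time dilation, not a calculation from the paper), the fractional frequency shift between two clocks separated by a height difference dh near the Earth's surface is g*dh/c^2, so a 1-centimeter difference corresponds to roughly one part in 10^18, right at the level of the reported instability:

g = 9.81       # m/s^2, surface gravity
c = 2.998e8    # m/s, speed of light
dh = 0.01      # m, a 1 cm height difference

shift = g * dh / c**2  # fractional gravitational redshift
print(f"{shift:.2e}")  # ~1.1e-18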
More information: An atomic clock with $10^{-18}$ instability, arXiv:1305.5869 [physics.atom-ph], arxiv.org/abs/1305.5869
Abstract
Atomic clocks have been transformational in science and technology, leading to innovations such as global positioning, advanced communications, and tests of fundamental constant variation. Next-generation optical atomic clocks can extend the capability of these timekeepers, where researchers have long aspired toward measurement precision at 1 part in $10^{18}$. This milestone will enable a second revolution of new timing applications such as relativistic geodesy, enhanced Earth- and space-based navigation and telescopy, and new tests on physics beyond the Standard Model. Here, we describe the development and operation of two optical lattice clocks, both utilizing spin-polarized, ultracold atomic ytterbium. A measurement comparing these systems demonstrates an unprecedented atomic clock instability of $1.6 \times 10^{-18}$ after only 7 hours of averaging.
[Screening program with urinary porphyrins—application to clinical findings in latent porphyrias].
Judging from the incidence of porphyria in Japan, most cases can be diagnosed by measurement of the amount of porphyrin in the urine. Normally, the analysis of porphyrin in urine is performed by high-performance liquid chromatography, but this requires about 40 minutes per specimen. However, if one simply measures coproporphyrins I and III only, then one specimen can be measured in about 10 minutes. If screening is performed using this method, those subjects in whom high levels of coproporphyrins I and III are detected can undergo further tests of urine, blood and feces to detect porphyrin and related materials. Using this screening method in high school students, 2 cases of hereditary porphyria were detected. One case was hereditary coproporphyria and the other was acute intermittent porphyria. If this method is added to the screening methods normally used for health checkups, cases of porphyria should be detected with ease.
Q:
Profile a WPF + WCF + EF app
I've recently read about the MvcMiniProfiler, which I found really useful. However, we are developing a WPF app, so we cannot use it (we are using WPF, WCF and Entity Framework with an Oracle DB, with the Devart EF provider).
What would be the easiest (most lightweight, smallest-footprint) solution to profile our app constantly while developing? I would specifically be interested in how many and which SQL queries are sent to the DB during a WCF call, and how long they take. Maybe this is completely unrelated to WPF and WCF, and what I need is just an EF profiler. I am looking for a simple solution; even one that just wrote the profiling data out to the Debug window would be acceptable.
A:
I recommend reading Julie Lerman's Profiling Database Activity in the Entity Framework. It walks through how to set up tracing, as well as some commercial profiling options, such as the Entity Framework Profiler.
This, combined with the standard Visual Studio profilers, will cover all three of your cases. That being said, a good memory profiler (such as SciTech's) can also be useful when working with the WPF application, in particular, as it's possible (easy?) to create memory leaks in WPF applications.
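If moving to Entity Framework 6 or later is an option (it may postdate the original question), the built-in Database.Log hook already covers the simplest case you describe, writing every SQL command EF sends, along with execution times, to the Debug window. A minimal sketch; MyContext stands in for your own context class:

using System.Data.Entity;
using System.Diagnostics;

public class MyContext : DbContext
{
    public MyContext()
    {
        // Logs each SQL command and its execution time
        // to the Debug window while you develop.
        Database.Log = sql => Debug.WriteLine(sql);
    }
}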
Why we should be adding tongue scraping to our oral routine (Getty)
When it comes to oral hygiene, we like to think we know the drill. We brush, we floss, and swirl the mouthwash round for added zing. But should we also be adding a good tongue scraping to our daily routine?
Some experts think so.
Tongue scraping, or Jihwa Prakshalana, is an Ayurvedic self-care ritual and oral hygiene practice that removes bacteria, food debris, toxins, and dead cells from the surface of the tongue.
Brushing your tongue daily can not only improve your oral health, but also your overall health in some pretty surprising ways.
“The Ayurvedic ritual of tongue scraping is one of the most powerful tools you can add to your daily wellness routine,” explains Dr Reena Wadia, founder of gum disease practice RW Perio.
“There is also now recent scientific evidence backing up the importance of tongue cleaning.
“One of the key benefits is that it removes the tongue coating that builds up over time. This is super important as a tongue coating is one of the most common reasons for bad breath."
Halitosis aside, here are some other pretty convincing reasons why brushing that tongue of yours could be worth it.
Benefits of brushing or scraping your tongue
It gets rid of bad breath
According to Dr Wadia, one of the main reasons to add tongue brushing to your daily routine is to rid your mouth of the nasty bacteria that cause halitosis, more commonly known as bad breath.
“The two biggest causes of oral malodour include tongue coating and gum disease,” explains Dr Wadia.
“Tongue cleaning has an effect in reducing oral malodour caused by tongue coating.”
Read more: Have we been brushing our teeth all wrong?
If left alone, the bacteria on your tongue can transfer to your teeth, which pretty much renders the brushing you did completely pointless.
So do yourself, and those close to you, a favour by scraping all that grimness off your tongue. Your breath will thank you for it.
It boosts your immunity
Believe it or not our tongues form part of the first line of defence in our immune systems. Scraping your tongue not only helps prevent toxins from being reabsorbed into your body, it also boosts overall immune function.
“The tongue is made up of lots of crypts, cracks and irregular surfaces so is an ideal site for the growth of bugs/bacteria,” explains Dr Wadia.
“These bacteria can produce things which taste and smell foul. The tongue is like a carpet, it needs to be cleaned regularly!”
It improves your sense of taste
Research suggests that using a tongue scraper twice daily can improve your sense of taste. That's because without removing the mucus on your tongue, your taste buds can become blocked, making it difficult to recognise the taste of food.
Removing build-up from the surface of your tongue can open up the tongue’s pores, helping to expose your taste buds.
“Your tongue may be able to better distinguish between bitter, sweet, salty, and sour sensations,” Dr Wadia adds.
Read more: Brushing your teeth three times a day could keep your heart healthy
It makes your tongue look better
According to Dr Wadia buildup of excess debris can cause your tongue to take on a white, coated appearance.
“Daily scraping can help remove this coating and prevent it from returning,” she says.
There are a whole host of benefits of scraping your tongue (Getty)
It helps with overall oral health
Not only does giving your tongue a good scraping help with your general tooth and gum health, it also helps to remove bacteria and toxins from the mouth which could help prevent oral health problems such as plaque build-up, tooth decay, loss of teeth, gum infections, and gum recession.
It can help with digestive health
Without scraping your tongue to remove bacteria it can linger in the mouth and travel down the throat to your gut. And no one needs those kind of nasties in their gut. By scraping your tongue you’ll not only help remove that bacteria, you’ll also kickstart saliva production and promote agni (the body’s digestive fire) which can help improve digestion.
The 5 Commandments of Contracting And How Learn More
This is What You Need to Know before You Can Hire That Custom Home Builder or Hire for Remodeling.
A building is something that should last for generations, which means its construction must be taken seriously. Where this much money is involved, great care must be taken, because mistakes that might even lead to reconstruction cannot be afforded on an investment of this kind. You should therefore make sure that an expert does the job; a choice like a general contractor is ideal for custom home building because you can leave everything to them. When you go out looking for a company to hire, you will realize that there are many offering the service, which means that finding the best will not be a walk in the park. However, it will not be too hard if you know what to look for, or where to look.
Remember that for someone to deliver exactly what you are looking for, they have to be good at what they do, which is why the qualifications and the kind of experience that the company has are a good place to start. You need a company or contractor that has been in the field for some time, because that way you can be sure they have seen most if not all situations and can handle pretty much anything. The only way to be sure of the quality is to test it yourself, but in this case you will not have that luxury, so you need other ways. The experience will not be of any use to you if they have never done something like the project you are looking for, which is why their portfolio is important. References from the company and online reviews are other ways to tell how the people who have been there feel about the company.
The other thing that will in most cases determine the quality you get is the amount you pay. This is not to say that you need to break your bank account in the name of getting good quality, which is why you should look for a company that offers the best work at the most reasonable prices. If you want to save some money without compromising on things like quality, you should look for contractors near you. This is because less fuel will be used to get to you, and it will also be convenient for all parties. The residential contractors, remodeling services and general contractors of Potsdam are the best options for construction projects in Potsdam.
To maintain the ongoing overall alcoholism program on the Swinomish Reservation, and in doing so the objectives would continue to be: (a) to help the Swinomish Indians to focus on solutions for the alcoholic and the alcoholism problems, and (b) the entire community would be exposed in one way or another to a preventive alcoholism program. The program director would serve as a counselor as well as an information and referral source. The youth counselor/supervisor will counsel and supervise the youth, as well as initiating activities in a strong attempt at alcoholism prevention.
Metabolic and hormonal factors influencing extrarenal buffering of an acute acid load.
This study evaluates metabolic and hormonal factors influencing extrarenal buffering of an acute acid load. Phosphate deprivation of 2 weeks' duration was associated with enhanced extrarenal acid buffering. The enhanced extrarenal buffering capacity of phosphate deprivation was not dependent on the presence of parathyroid glands. Parathyroid hormone administration to phosphate-deprived rats promoted a further enhancement of the buffering capacity of an acid load. Blood pH and HCO3 during acid loading were not significantly different between control and diphosphonate-treated rats and between phosphate-deprived rats and phosphate-deprived rats treated with diphosphonate. The mortality rate, however, was significantly higher in diphosphonate-treated rats than in rats not receiving the drug, suggesting that diphosphonate blunts the buffering of an acid load in both control and phosphate-deprived rats. Chronic vitamin D administration and acute administration of arginine vasopressin in pharmacologic doses were associated with significant enhancement of buffering capacity as compared to control rats. Thyrocalcitonin administration to intact but not thyroparathyroidectomized rats was associated with diminished capacity to buffer an acid load. These data demonstrate that the buffering of an acute acid load is influenced by a number of dietary and hormonal factors probably acting at the level of the bone.
Congressman Young Comment on FDA Environmental Assessment of Frankenfish
Alaskan Congressman Don Young today released the following statement in response to the Food and Drug Administration's (FDA) environmental assessment finding that genetically engineered salmon would have "no significant impact" on the environment or the public's health:
"I've said from the beginning that frankenfish pose a grave threat to Alaska's wild salmon stocks, and today's decision by the FDA is foolish and disturbing," Rep. Young said. "As the final process moves forward, I will continue the fight with the Alaska Congressional Delegation to ensure that this product never hits the market."
"In the 113th Congress, I plan to reintroduce legislation that will at a bare minimum require genetically engineered salmon to be labeled to ensure that the public knows what they are purchasing at the grocery store and feeding to their families." |
KBeau Jewelry is officially on Instagram!
I finally switched over from my personal Instagram account to a kbeaujewelry account.
Be sure to follow so you can catch the latest creations, sales and behind the scenes shenanigans!
In case you missed it: I'm on Tumblr, Facebook, Twitter and Pinterest. Whew! Follow me on all these platforms because I will be posting different tidbits to different accounts. Tidbits include, but are not limited to, home show and trunk show announcements, sales and new items.
Vitamin D supplementation and lipoprotein metabolism: A randomized controlled trial.
Vitamin D deficiency is associated with an unfavorable lipid profile, but whether and how vitamin D supplementation affects lipid metabolism is unclear. To examine the effects of vitamin D supplementation on lipid and lipoprotein parameters. This is a post hoc analysis of the single-center, double-blind, randomized, placebo-controlled Styrian Vitamin D Hypertension Trial (2011-2014). Two hundred individuals with arterial hypertension and 25-hydroxyvitamin D concentrations of <75 nmol/L were randomized to 2800 IU of vitamin D daily or placebo for 8 weeks. One hundred sixty-three participants (62.2 [53.1-68.4] years of age; 46% women) had available lipid data and were included in this analysis. Vitamin D supplementation significantly increased total cholesterol, triglycerides, very-low-density lipoprotein (VLDL) triglycerides, low-density lipoprotein (LDL) triglycerides, high-density lipoprotein (HDL) triglycerides, apolipoprotein B (ApoB), LDL-ApoB, ApoCII, ApoCIII, phospholipids, and ApoE (P < .05 for all). Except for ApoCII and ApoCIII and HDL-triglycerides, all other treatment effects remained statistically significant after adjustment for multiple testing with the Benjamini and Hochberg false discovery rate method. There was a nonsignificant increase in LDL cholesterol. Furthermore, no significant effects were seen on free fatty acids, lipoprotein (a), ApoAI, ApoAII, VLDL cholesterol, VLDL-ApoB, HDL cholesterol, LDL diameter, and VLDL diameter. The effects of vitamin D on lipid metabolism are potentially unfavorable. They require further investigation in view of the wide use of vitamin D testing and treatment.
Q:
Kotlin: Reified type parameter makes Gson fail
I've encountered weird behavior when deserializing with Gson inside a function with a reified type parameter. It only happens when interfaces are involved in the type argument.
Take the following code:
val toBeSerialized = listOf("1337")
with(Gson()) {
val ser = toJson(toBeSerialized)
val deser = fromJson<List<Serializable>>(ser)
}
Line number 4 makes use of a custom extension function Gson.fromJson(json: String): T.
It fails if T is defined as reified:
inline fun <reified T> Gson.fromJson(json: String): T = fromJson<T>(json, object : TypeToken<T>() {}.type)
And it works if it is defined as a normal type parameter:
fun <T> Gson.fromJson(json: String): T = fromJson<T>(json, object : TypeToken<T>() {}.type)
(Note that making T reified makes no sense here, just want to understand its impact in the special use case)
The exception when using reified looks as follows:
Exception in thread "main" java.lang.RuntimeException: Unable to invoke no-args constructor for ? extends java.io.Serializable. Registering an InstanceCreator with Gson for this type may fix this problem.
at com.google.gson.internal.ConstructorConstructor$14.construct(ConstructorConstructor.java:226)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:210)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:41)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:82)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:61)
at com.google.gson.Gson.fromJson(Gson.java:888)
at com.google.gson.Gson.fromJson(Gson.java:853)
at com.google.gson.Gson.fromJson(Gson.java:802)
at SoTestsKt.main(SoTests.kt:25)
Caused by: java.lang.UnsupportedOperationException: Interface can't be instantiated! Interface name: java.io.Serializable
at com.google.gson.internal.UnsafeAllocator.assertInstantiable(UnsafeAllocator.java:117)
at com.google.gson.internal.UnsafeAllocator$1.newInstance(UnsafeAllocator.java:49)
at com.google.gson.internal.ConstructorConstructor$14.construct(ConstructorConstructor.java:223)
... 8 more
A:
The version that uses reified T fails because it's trying to de-serialize "1337" as a Serializable, which is an interface, so it cannot be instantiated, and by default there is no type adapter which can de-serialize into a Serializable (like there is for List<...>). The easiest way to fix this is to pass List<String> as the type argument.
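A minimal sketch of that fix, reusing the reified extension from the question but with a concrete type argument:

import com.google.gson.Gson
import com.google.gson.reflect.TypeToken

inline fun <reified T> Gson.fromJson(json: String): T =
    fromJson(json, object : TypeToken<T>() {}.type)

fun main() {
    val gson = Gson()
    val ser = gson.toJson(listOf("1337"))
    // String is a concrete class, so Gson can instantiate it.
    val deser = gson.fromJson<List<String>>(ser)
    println(deser) // [1337]
}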
In the non-reified version there is no actual type information being passed to your extension function. You can verify this by printing the type you get from the token:
fun <T> Gson.fromJson(json: String): T {
val tt = object : TypeToken<T>() {}.type;
println(tt);
return fromJson(json, tt);
}
In the non-reified version that will just print T (i.e. no actual type information is available), but in the reified version it will print the actual type (+ any Kotlin declaration-site variance modifiers, so List<String> becomes List<? extends String>). (I don't know why Gson silently ignores this error of missing type information.)
The reason the non-reified version works is a coincidence. Since Gson's default type for de-serializing ["1337"] is ArrayList<String>, that is also what you get. It just happens to be assignable to a List, and since generics are erased there is no class cast exception for the mismatch between String and Serializable as the type argument. It works out in the end anyway, since String implements Serializable.
If you slightly modify the example, where a different cast happens, for instance by specifying a different kind of List, you run into trouble:
val deser = fromJson<LinkedList<Serializable>>(ser)
Throws a java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.util.LinkedList
You need reified T to be able to pass on the type information, but it also means that a failure will happen earlier, and does not go unnoticed due to type erasure, like in the non-reified version.
Q:
Why can't the margin be added to the height offset as a number?
I used the code below to get marginTop and found it is "8px". I then replaced "px" with "", which gives "8", but when I add it to the height offset, it never works!
Can anyone tell me why?
var computedStyle = window.getComputedStyle ? getComputedStyle(document.body, null) : document.body.currentStyle;
var marginTop = computedStyle['marginTop'].replace('px', '');
alert(marginTop);
window.scroll(0, elem.offsetTop - headerHeight + marginTop + marginTop);
A:
var marginTop = computedStyle['marginTop'].replace('px', '');
marginTop contains a string value, so the + operations below perform string concatenation rather than numeric addition:
elem.offsetTop - headerHeight + marginTop + marginTop
You need to convert the string to an integer using parseInt().
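Applied to the snippet from the question (elem and headerHeight are assumed to be defined as in your original code):

var computedStyle = window.getComputedStyle
    ? getComputedStyle(document.body, null)
    : document.body.currentStyle;

// parseInt() reads the leading digits and ignores the "px" suffix.
var marginTop = parseInt(computedStyle['marginTop'], 10);

// + now performs numeric addition instead of string concatenation.
window.scroll(0, elem.offsetTop - headerHeight + marginTop + marginTop);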
How To Make Risotto at Home
Cooking Lessons from The Kitchn
Risotto also has a reputation for being fussy and time-consuming, but once you start cooking, it doesn't take more than 30 minutes to make.
Risotto is a dish that's become associated with fancy high-end restaurants, but really, it's the epitome of Italian home cooking and comfort food. Knowing how to make a good risotto is something we think every cook should have in their back pocket, if only because it's one of those dishes that's so satisfying and easy to prepare, and it never fails to impress.
Risotto also has a reputation for being fussy and time-consuming. It's true that once you start cooking, it does require a fair amount of attention, but it doesn't take more than 30 minutes to make. In fact, true Italian cooks will tell you that risotto should take no more than 18 to 19 minutes from start to finish. One of our chefs in culinary school made us time him, and sure enough, his risotto was done in exactly 18 1/2 minutes every single time!
Risotto is more of a technique than a dish. Once you get a feel for the basic steps of making the soffrito, toasting the rice, and adding in the broth a scoop at a time, a whole world of dishes opens up. You can add caramelized onions, ribbons of Swiss chard, bits of sausage, wild mushrooms from the farmers market, or any other combination of flavors and textures that suits your fancy. You can even play around with using grains other than rice for making the risotto itself.
One thing is crucial for a good risotto: have everything ready before you step up to the stove. That includes the rice, the wine, your add-ins and the bowls to serve it in. Risotto waits for no one and is perfect the second it's done.
Instructions
1. Risotto Prep - Measure, chop, and gather all of the ingredients going into your risotto. Warm the broth in a saucepan over low heat. It should be just barely steaming by the time you start the risotto.
2. Soffrito - This is the flavor base of your risotto. It almost always includes onions, but you can add any other aromatics, spices, or ingredients you would like. Sauté these ingredients in a healthy amount of butter (which is traditional) or olive oil over medium-high heat until the onion is translucent and beginning to break down. Add the garlic and other spices, and cook until fragrant.
3. Tostatura - Pour the rice into the soffrito and stir until every grain is coated with fat. (Add more fat if needed - this is not the time to skimp!) Continue stirring the rice until the edges have turned translucent but the center is still opaque. You should also be able to smell the aroma of toasted rice.
4. Deglaze - Deglazing the pan at this point isn't strictly necessary, but a splash of white wine will add another layer of flavor and help lift up any bits that have caramelized to the pan. Use a 1/2 cup or so of wine, and simmer until the wine has completely reduced and the pan is nearly dry.
5. Cottura - Begin incrementally adding the warm broth one ladle at a time. Wait to add another ladle until the liquid has been almost completely absorbed by the rice. This gradual addition of liquid is key to getting the rice to release its starch and create its own delicious sauce, so don't rush this step. Ideally, you want to use just enough broth to cook the rice and no more.
Begin tasting the rice after about 12 minutes to gauge how far it has cooked. Add salt and other seasonings as needed. The risotto is ready when the rice is al dente (when it still has a bit of chew) and the dish has the consistency of thick porridge. If you run your spatula through the risotto, the risotto should flow slowly back to fill in the space. As the Italians say, risotto should be like "la onda," a wave that slowly rolls to shore.
6. Mantecatura - As a final step, add one more ladle of broth along with one or two tablespoons of butter and a cup of cheese to enrich the risotto and make it extra-creamy.
Serve the risotto immediately. The longer it stands, the more the starches will set and you'll lose the creamy silkiness.
Emma is the recipe editor for The Kitchn and a graduate of the Cambridge School for Culinary Arts. She is the author of True Brews and Brew Better Beer (Spring 2015). Check out her personal blog for more cooking stories.
If you’re like me, you want to be wise and avoid making foolish mistakes as much as possible. To do this, we need to remember that the fear of the Lord is the beginning of wisdom. It is literally the starting point to living well.
The fear of the Lord is a beautiful thing. It causes us to run to God, not away from Him.
If you want to live in God’s favor:
1. Live in radical obedience to Him. Whatever He asks, decide to obey Him.
These days there is such an emphasis on doing whatever makes you “happy.” This is worldly wisdom and is complete foolishness. Living for momentary pleasure outside of God’s ways is a recipe for disaster, not peace and hope.
If Jesus is your Lord, you must say “yes” to whatever He asks you to do. Saying “No, Lord” is an oxymoron.
2. Put your hope in God’s love for you.
Ask Him to tell you how much He loves you, then live in that reality. If you feel like you don’t hear an answer, remember that He showed His love for you by sending Jesus to die for you. There truly is no greater love.
Pursue God’s favor over your life, just like Jesus grew in favor with God. When God is for you, no man can be against you.
The Lord delights in those who fear him, who put their hope in his unfailing love.
How are you doing today? Have you stopped to take your internal peace temperature? I really want you to know that true, lasting peace is available to you, no matter who you are or what your circumstances may be.
John 14:27 says, "Peace I leave with you; my peace I give you. I do not give to you as the world gives. Do not let your hearts be troubled and do not be afraid."
Peace is a gift, and it is a responsibility. Jesus gives the peace to us, then He tells us it is our job to make sure we stay in it.
There is only one pathway to true peace: absolute obedience to Jesus Christ. This is the starting point to true freedom.
Once you start to live in peace, abundant joy is right around the corner!
JOY HAPPENS WHEN YOU HAVE SO MUCH PEACE
THAT YOU CAN'T HOLD IT INSIDE ANYMORE!
John 15:10-11 goes on to say, "If you keep My commands, you will remain in my love, just as I have kept my Father’s commands and remain in His love. I have told you this so that My joy may be in you and that your joy may be complete."
Sometimes it can be really hard to make a choice. It's especially hard if you're the type of person who really wants to get it right. The higher the stakes, the harder the choice.
I recently had to make a hard choice in parenting. It was so difficult because parenting my kids is one of the most important things to me. I went back and forth between a couple of options, back and forth, back and forth.
Here's how I landed on my decision:
I prayed about it
I got into unity with my husband about it
I bounced it off the people in my inner circle
I fought off fear
I finally picked the "scary but exciting option"
You see, I felt the Lord speak to me that the "scary but exciting option" was actually His provision for our family. I chose to let go of all of the "what if's" and move forward. As soon as I made the choice, I felt peace and relief. I know that if at some point the peace leaves, I can choose to make another change.
Sometimes it's hard to make a choice when we really value doing the right thing. As long as we aren't choosing to do something in violation of God's ways or our conscience, we can move forward with courage and confidence. Even if we choose "wrong", God is able to make everything work out for our best.
It seems like almost every day we are bombarded with images and stories of brutal attacks by terrorists. I think that we can tend to either get really scared or become numb. What if we were to avoid both of those options and pray powerful prayers instead?
Let's start by reading what Jesus had to say about it:
In fact, the time is coming when anyone who kills you will think they are offering a service to God. They will do such things because they have not known the Father or me. John 16:2b-3
It really all boils down to a lack of love. People who have hateful, murder-filled hearts have not yet received the love of God. They go through life thinking that they have to prove something to God, not knowing that they can turn to Jesus and have a good Father who loves them so deeply. They live like orphans, fighting for every scrap, when they actually could have a relationship with a Father who wants to bless them with every good thing... They just don't know it yet.
Let's start to pray faith-filled prayers that God would visit terror-minded people with His great love. Let's pray that He would visit them in their dreams or in any other way He sees fit, not just to protect ourselves from attacks, but so that these orphan ones can experience true love and freedom.
Jesus died because they too were valuable to the Father. Let's value what He values, and pray that the lost ones will come home!
I've been really wanting to replace the carpet in my bedroom, hallway and stairs for a few years now. Every time I got close to saving up the money, something else came up to spend the money on instead. It's not like I spent it on emergencies, but on fun stuff like going on trips to Cuba and Washington DC.
Anyway, I finally got the money together and decided that now is the time to get this project done. We're inheriting a complete bedroom set, so it seemed like the right time to change the flooring since we have to move everything anyway.
I did my research to make sure I had enough money saved, picked out the new flooring, and had the company do the measurements. It seemed like everything was coming together until the sales person from the store sent me the quote. It was almost double what I had figured!
I was busy, so I just sat on it a couple days. Before many days had passed, I found myself on a longish drive to an appointment. This meant I had plenty of time to think. I was starting to get stressed about the whole situation (first world problem). Should I forget about replacing the flooring, only do part of the project, or what? I was committed to paying cash upfront, so financing wasn't an option.
Thankfully it wasn't too long before I caught myself losing my peace and decided to ask God about it. I told the Lord that I needed Him. I told Him that I needed His wisdom and help, and that I know He really loves me and wants to give me my heart's desires. Then I decided to stop thinking about the situation and move on with my day. My peace was back!
God is so good. Within a couple hours I received an email from the sales person apologizing that they had done their figures incorrectly! Wow!
I had been hoping to pick up some extra work to make up for the discrepancy, but when it came down to it, I actually already had all that I needed.
I wonder how many times in our lives we stress about things that we think we need, when we actually already have them.
I bet that if you had a stomach virus, you would never intentionally walk up to your loved ones and puke in their faces. Gross! Who would ever do that to anyone, much less to the people who really matter to them?
Reasonable people try to stay away from others when they're sick. They do it out of mercy and kindness. They stay away until they are well, and no longer contagious. If the sickness doesn't seem to be passing, they get to a doctor to do their best to resolve the situation. Being sick is no fun for the one who is sick or for the rest of the family.
Why is it that we let emotional issues go unsorted for years and years? When the emotional pain buttons in our lives get pushed, the fallout hurts the people around us at least as much as it hurts us.
You know the secret areas of pain that you keep hidden and hope that no one ever finds out about? I have news for you, people may not know the specifics, but they certainly feel the painful symptoms.
If you aren't willing to get healthy for your sake, at least do the hard work of getting yourself sorted out for the sake of the people you love. You really don't want people around you to feel like they have to spend their whole lives walking on eggshells.
Don't delay. Get the help you need. Make an appointment with a counselor or reach out to get with a mature, trusted mentor.
Whatever God is calling you to do, He will always provide everything you need in order to follow through on it. He is a good Father, and He will not ask you to do something and then leave you to do it on your own. Here's where to start...
Get clear on your mission. What problem in the world makes your blood boil? What is your dream testimonial?
Make a short term plan with a few things you can start on right away to get you headed in the right direction.
Imagine what you want your life to look like in five or ten years. Are you putting the right things in place to get there?
It's not enough to get a vision; we need to ask Holy Spirit to help us walk it out. Feeling overwhelmed? You can do this! It's time to cooperate with God.
Do not merely listen to the word, and so deceive yourselves. Do what it says. James 1:22
Would you do me a huge favor and take this one-minute survey about where you are on the journey to your destiny? This will help me know how to help you.
A few years ago our oldest son started experiencing bouts of extreme dizziness and ringing in his ears along with some hearing loss. After consulting with doctors it was determined that he needed to have an MRI. Basically they needed to check to see if he had a brain tumor.
Wow, that was unsettling. It is so hard to go through these kinds of tests when you have to do so much waiting. It's like they say, "This is really serious; go home and wait and wait and wait until we find time to help you out."
I remember ironing clothes and fighting this mental battle at the same time. I had to choose to put the thought of cancer aside and think about other things. It's like I actually imagined in my mind pushing the worrying thoughts over to the side and bringing thoughts of things I could control into the center.
The MRI did come back clear, and he was diagnosed with Meniere's disease. Miraculously his hearing loss has been restored!
Jesus gives us peace as a gift. He also commands us to not let our hearts be troubled. Sometimes not letting your heart be troubled requires you to fight.
Peace is not the absence of war.
If you fear, you can't hear. If you allow your heart to be troubled you won't be able to hear the whisper of Holy Spirit as He guides you in your life choices.
John 14:27 Peace I leave with you; my peace I give to you. Not as the world gives do I give to you. Let not your hearts be troubled, neither let them be afraid.
It's impossible to grow up without picking up any unhealthy patterns, regardless of how great your parents were. We all have some bags that need to be unpacked and put away. Here are 7 great reasons to stop stuffing your issues and get healthy!
If you don't take care of your issues, you will pass them on to your kids and grandkids.
Your marriage relationship will go to a level you've only dreamed of when you see things with a clear perspective.
Living with stress and turmoil on the inside will make you physically ill.
Being spiritually, physically, and emotionally healthy makes it possible for you to do your part in bringing heaven to earth.
Jesus paid too high a price for you to keep living below your potential.
As you humble yourself (embracing vulnerability) God will release grace to you.
I just had the craziest experience with the Lord. We were standing in this train station, and I could literally feel the air from the train pushing against my face and back.
I was like,"Hey, I hate trains."
Jesus was like, "Let yourself go." The wind totally held me up.
He was like, "That's abiding in peace."
Then we walked onto the train, but it was still moving so fast. He was telling me the importance of abiding in peace and asked if I wanted to see something cool.
I said, "Yes."
We got off the train, and the station was crowded. Everyone was frozen and Jesus said, "Watch this."
He went and began to change people's faces, their expressions. He was like, "You try it."
We literally ran through the crowd, and we were taking casts off people and taking their crutches. We would just stretch out the person's legs or arms or whatever, and they were gonna be better. Then we went back on the train.
I'm planning to write three 30-day books this year, one of which will be about living in peace and hope! For now, I would love for you to check out this book by Steve Backlund. It is an awesome tool to help you recognize and beat the lies the enemy tries to tell us.
You might be able to identify the things you are best at by noticing what things you tend to judge the most in other people.
For example, if you always find yourself looking down on people who dress sloppily, you may have a gift for beauty. Maybe you could develop a practical solution to inspire others to dress for the life they want to have.
Generally it's a bad idea to compare yourself with other people. If you compare other people's strengths to your own weaknesses, you could put yourself into a clinical depression. We all have different strong points. No one has a monopoly on perfection.
As followers of Jesus, supernatural beings, we get to come up higher and begin to value each other for the wide array of strengths we each bring to the table. It takes all kinds to get the work of the Kingdom accomplished. After all, Jesus is the Head, and we are His body. I'm pretty sure He doesn't want to be all elbows.

Another way to find out where you should focus your life's efforts is to think about what things break your heart the most or make you feel intense anger.
In my case, thinking about orphans breaks my heart. There is something about the kids living in the slums in India that tears me up inside. I know that part of my destiny is to rescue as many of them as I can. I want to see them not only provided for physically, but also nurtured emotionally and spiritually.
I cannot afford to get overwhelmed by the enormity of the problem. I have to take it one child at a time. For the past few years we have been sponsoring one little boy from India. My dream is to one day be able to provide homes to care for multiple children there.
Another thing that makes me feel intense frustration is when I see angry parents being harsh with young children. Of course all parents experience moments of frustration, but spewing anger on kids is so damaging to their hearts. When I see a child wilt right before me, it breaks my heart.
My heart also breaks for the parents. Most of the time they never had nurturing parents themselves and are really just emotional orphans.
It makes me just want to swoop in and pour love and nurture on the parent and child alike. The parents really need someone to come alongside them and parent them. Only then will they have the life tools needed to stop the dysfunctional cycle they are living in.
Do you have a certain tune that pops into your head from time to time?
When I was a kid we used to sing this verse to a fun, little tune. From time to time it pops into my head and sticks around for a while.
He that dwells in the secret place of the most High
shall abide under the shadow of the Almighty.
I will say of the LORD, He is my refuge
and my fortress: my God; in Him will I trust.
Psalm 91:1-2 KJV 2000
Singing the scripture is such a great way to get God’s Word in your heart. Once you’ve learned a verse set to song, it stays in a place in your brain for years. At any moment the Word can spring forth into your memory.
This basic memory function is what advertisers who write jingles depend on. If they can plant messages in our minds, we can definitely do it to ourselves too.
It’s so great to have scripture playing in the soundtrack of your day instead of negativity. As you meditate on God’s Word, the enemy’s lies and accusations get blocked out.
Putting scripture to song is a fun way to fight back. It really is a powerful weapon for you. Think of it as part of putting on the full armor of God.
Put on the full armor of God, so that you will be able to stand firm against the schemes of the devil.
Ephesians 6:11
Intentionally get God’s Word into your heart and mind, then meditate on it as you go about your daily routine!
Which piece of armor do you think singing the Bible best represents? Helmet of Salvation, Breastplate of Righteousness, Sword of the Spirit, Belt of Truth, or Shoes of Peace?
Do you ever feel tired or unmotivated? It turns out that Jesus offers a very simple solution to this common problem. He instructs us to simply come to Him and drink.
Now on the last day, the great day of the feast,
Jesus stood and cried out, saying,
“If anyone is thirsty, let him come to Me and drink.”
John 7:37
In case this sounds too mystical, let’s look at some practical ways to “come and drink.”
It’s really nice when we feel a sweet longing to spend time with Jesus, but even when the feeling is absent, the coming to Him is still powerful.
When you do feel a stirring to “come” to Him, drop what you are doing and go with it. As you respond to the smallest nudge of His presence, you will be filled with His love and peace.
Discipline yourself to come to Him daily whether you feel anything or not. You might want to set a daily reminder on your phone to pause to talk to Jesus and meditate on His Word.
As you read the Bible, don’t rush. When a verse stands out to you, stop and chew on it. Don’t worry about getting through a certain amount of reading. Focus on meditating on the Lord and the verses He is highlighting to you.
Close your eyes and take a deep breath as you let His Word go deep inside. This is a great way to “drink.”
He who believes in Me, as the Scripture said,
“From his innermost being will flow rivers of living water.”
John 7:38
The more you spend time letting the truth of Jesus go deep into your heart and mind, the more powerful you will become. Let God take you through a process of mind renewal until you truly believe that you are one with Him.
This is the place that powerful ministry stems from. Ask Holy Spirit to come more and more until your entire being is saturated with His presence. Be aware of the fact that you are a spiritual being at your core.
Release the rivers of living water within you by opening your mouth and letting the naturally resulting praise and prayer flow.
When you take time to cultivate God’s presence in your own life, you can begin to release His Kingdom everywhere you go.
As you go to work, school, or shopping be aware of what you are carrying. You truly are a supernatural being who has the power to shift the atmosphere simply by showing up and releasing love.
A very practical step is to practice looking people in the eyes, smiling, and making small talk. This does not come easily to all of us, but it is the pathway to deeper conversation and connection.
You can do it! You are not alone; you are powerfully connected to Love!
Have you ever felt love for people you've never met? Christopher has had a burden for Syrian refugees for a few years now. He had the great idea to bless a family with Christmas gifts. We found a family to reach out to through our friends David and Ashley. My heart joined with Christopher's as I spent time and money shopping for the family.
Where your treasure is, there your heart will be.
Matthew 6:21
I sincerely enjoyed shopping for and spending money on a family I've never met. I didn't even know their names. As I shopped for and carefully wrapped each gift, I felt a love grow in my heart for boy age 17, boy age 15, boy age 13, girl age 11, girl age 8, twin boy and girl age 4, mom and dad.
Maybe it's a tiny taste of the way God says, "Before you were born I knew you, and I loved you."
Our whole family took the gifts to the refugee family on Christmas night. It was especially fun to watch the 4-year-olds with their brand new bikes. They were so excited. The little boy fell off his bike three times while we were there, but that didn't diminish his ear-to-ear smile one bit.
As we sipped tea and ate unleavened bread, we spoke to each other in broken English. They told us a little about their journey including how their house in Syria had been bombed to the ground one month after they had fled to Jordan.
Before we said goodbye, Christopher prayed a blessing over the family. We released peace and love, and we left with full hearts.
We didn't even make it home before we found out that the father had posted this on his Facebook.
Do you ever hear a voice in your head accusing you of being a fake, going overboard, or being annoying?
One of the best things you can do is recognize that 'other' voice and decide to ignore it. That is warfare. Put on your helmet of salvation to protect your mind from spiritual mental attacks.
My sheep hear My voice,
and I know them,
and they follow Me.
John 10:27
It's time to get our heads sorted out...
Let's practice listening to the voice of Holy Spirit and shutting out the voice of that stranger, the accuser.
The closer you get to your destiny, the more mind games you may have; but even if you were to give up on your destiny and decide to do nothing with your life, it would not solve the problem of the accuser constantly accusing you. That's his main job.
It is better to chase your destiny down, while fighting some mind games, than to end up living in boredom or depression while forfeiting your purpose.
I have been gripped this Christmas season by the amazing fact that God turned Himself into a human. Wow! That's earth-shatteringly vulnerable.
I'm sharing this worship video today because this song has been touching me so deeply the past couple weeks. I don't believe this song was meant to be a Christmas song, but I do believe it sums up what Christmas is truly all about. Come, let's adore Him together!
Onething Live / Justin Rizzo - There is one found worthy, from the album Onething Live Magnificent Obsession by IHOP in 2012
Do you ever feel like your head is fractured? Like the plates of your skull are literally pulling apart. That's the picture that comes into my head when I think back to the most painful holiday season I ever went through.
A few years ago our family went through a very difficult season when we experienced a failed adoption. We had a toddler foster son living with us for over six months, and just before the adoption was due to go through, everything fell apart.
That season was one of the hardest times of my life. Parenting that precious boy felt to me like it must feel to parent a terminally ill child.
To some that may sound too dramatic or even insensitive to those who have lost biological children. All I can tell you is that is how I felt. There was so much uncertainty.
Uncertainty is very hard for me. I like to "make a plan and work the plan!" Hourly we were waiting for word to confirm or deny that this baby would be ours forever. I guess I always had a sinking feeling that things weren't going to go the way we hoped.
I remember having days where I would literally daydream about escaping. I would imagine how wonderful it would be to just sit all alone in a dark, silent room for hours. It was like there was just too much noise and uncertainty spinning all around me, and I needed peace.
After we got the news that the little guy would be moving on, I spent the next few months just recovering and resting. The grief process just takes some time.
It was extremely painful because the authorities involved had told us to bond with him and get him to bond with us because he was going to be ours. I have a clear memory of him laying his head on my lap while I patted his back and called him my baby. He would repeat it back to me in such a sweet, secure little voice: "my baby."
Having to send him off from our home when there was no way on earth we could explain to him what was happening felt like such a betrayal. It broke my heart. Our final days with him were right between Christmas and New Year's.
Looking back I can see how much the Lord held us during those devastating moments. He sent some wonderful people into our lives who loved us well when we really needed it. Though I felt like everything was falling apart, God was actually building strength deep inside of me. That incredibly hard time built perseverance. I cannot say that I lived in perfect peace, but I can say that I fought hard to stay in perfect peace, and it was worth the fight!
After a couple years of grieving, I attended a church service with a guest minister. He shared about peace and took us through an exercise to let go of pain. That night was a turning point for me. In a matter of minutes I was able to let go of the deep pain that had been trying to swallow me alive.
A friend of mine wrote a book with the basics of the model that I learned that night. The book is written to kids and teens, but it definitely is usable for adults as well. This book can help you move past any kind of pain and grief.
This book would make a great gift for the book lover on your Christmas list. I especially recommend it for people who are trying to work out their life path.
Here's a review from Amazon:
Being a missionary's kid myself, this book resonated with me on a number of levels. I enjoy Beth's storytelling, and this book is full of some great stories. I also appreciate how she moves beyond autobiography, making truths she's gleaned from her experiences accessible to the reader. Her questions made me think, and gave me healthy perspective on decisions I've made in life (both good ones and failures). The perspective didn't take me into introspection or regret, but helped me think about how I can talk about these keys with my girls, as they mature.
|
Prophylactic effects of intravenous magnesium on hypertensive emergencies after cataract surgery. A new contribution to the pharmacological use of magnesium in anaesthesiology.
The pharmacological effects of magnesium sulphate heptahydrate (MgSO4.7H2O) were used to control critical rises in blood pressure in hypertensive patients during the perioperative period. This double-blind study included 40 hypertensive elderly patients who underwent eye surgery under local anaesthesia; they were divided into two groups (A and B) of 20 patients each. An intravenous dose of 4 g MgSO4.7H2O was given to group A, while group B, which served as the control group, was given a placebo. All patients were premedicated with 10 mg oral diazepam 1.5 h before the operation. Systolic and diastolic blood pressure, heart rate and ECG were monitored for 1 h. None of the patients who received MgSO4.7H2O showed any ECG disturbances. Systolic and diastolic blood pressure, as well as heart rate, fluctuated outside the critical range, whereas in the control group an increase in blood pressure was noted which had to be treated with other antihypertensive drugs. The results indicated that parenteral administration of MgSO4.7H2O in hypertensive patients before surgery stabilized blood pressure fluctuations outside the critical range, without causing the pressure to fall to a level that might risk undesirable side effects during eye surgery under local anaesthesia. |
1. Field of the Invention
This invention relates to the use of 1,5-pentanedial as the active antimicrobial agent in disinfecting and/or preserving solutions for contact lenses.
2. Description of the Prior Art
This invention relates to disinfecting contact lenses, particularly soft contact lenses. When the term "soft contact lenses" is used herein, it generally refers to those contact lenses which readily flex under small amounts of force and return to their original shape when released from that force. Typically, soft contact lenses are formulated from poly(hydroxyethyl methacrylate) which has been, in the preferred formulation, crosslinked with ethylene glycol dimethacrylate. For convenience, this polymer is generally known as PHEMA. Soft contact lenses are also made from silicone polymers typically crosslinked with dimethyl polysiloxane. As is known in the art, conventional hard lenses usually consist of poly(methyl methacrylate) crosslinked with ethylene glycol dimethacrylate.
Hard contact lenses do not absorb appreciable amounts of water as do some soft contact lenses, and thus the use of harsher disinfecting and cleaning agents does not create a problem in hard contact lens cleaning. However, many hard lens disinfecting and preserving solutions contain benzalkonium chloride or chlorobutanol, which may render the treated lenses hydrophobic, may not be stable in solution or may lack compatibility with certain types of hard lenses, e.g., those with high silicone content. As is generally known, users of soft contact lenses are warned against using solutions made for hard contact lenses since the materials in the solutions, as mentioned, may be absorbed or even concentrated by the soft contact lenses and may seriously damage the soft contact lenses or the eye of the user.
U.S. Pat. No. 3,016,328, R. E. Pepper et al, discloses dialdehyde alcoholic sporicidal compositions containing a saturated dialdehyde, e.g., glutaraldehyde, an alkanol and an alkalinating agent. Also disclosed are aqueous sporicidal compositions containing a dialdehyde (0.25 to 4%) and an alkalinating agent, the solution having a pH of 7.4 or more. Medical, surgical and optical applications are suggested.
U.S. Pat. No. 3,697,222, G. Sierra, discloses the use of an aqueous acid glutaraldehyde solution at temperatures above 45° C. to sterilize an object. The sterilizing action is enhanced by the use of ultrasonic energy. Sterilization also may be achieved by using ultrasonic energy and aqueous alkaline glutaraldehyde solutions, the preferred temperature being 55° to 65° C. Sierra teaches the aqueous glutaraldehyde concentration can be up to 7.5% and preferably 1 to 2%.
U.S. Pat. No. 3,912,450 and U.S. Pat. No. 3,968,248, R. M. G. Boucher, disclose disinfecting or sterilizing medical items by contacting the item with a sporicidal composition containing 0.1 to 5 weight percent of glutaraldehyde and 0.01 to 1 weight percent of an ethoxylate type non-ionic surface active agent and at a temperature of at least 15° C. Boucher discusses this development in some detail in an article (Amer. J. Hosp. Pharm. 31:546-547) published June 1974.
U.S. Pat. No. 3,968,250, R. M. G. Boucher, discloses disinfecting and sanitizing fowl eggs with an aqueous solution containing 0.1 to 5% of glutaraldehyde and 0.01 to 1 percent of an ethoxylate type non-ionic surface active agent.
U.S. Pat. No. 4,093,744, M. W. Winicov et al, discloses an aqueous composition containing 2 to 4 weight percent of glutaraldehyde and 0.1 to 10 weight percent of a surfactant with a pH of 6.7 to 7.3 to kill bacterial spores. This patent further discloses "Independent analyses of the sporicidal compositions disclosed in U.S. Pat. No. 3,016,328 to Pepper et al revealed that the 10 hour contact kill time was readily obtainable when using a fresh solution, but that the efficacy of the compositions markedly decreased upon standing for prolonged periods of up to about two weeks. Further, this reduction in effectiveness was found to be attributable to the diminution of glutaraldehyde, which lost a total of about 25% of its value by the end of a two week period."
Contact Lenses by Robert H. Hales, Williams & Wilkins Co., Baltimore, MD (1978) at page 33 records the use of glutaraldehyde as a chemical disinfectant for contact lens solutions. While stating glutaraldehyde is a highly active bactericidal and sporicidal agent, he notes it is toxic and irritating, unstable and requires an alkaline condition. No other mention is made of this antimicrobial agent. |
Launch HN: VergeSense (YC S17) – AI-Powered Sensors for Building Management - dpryan
Hello HN! This is Dan and Kelby (tripleplay369), the founders of
VergeSense (http://www.vergesense.com). We're building an AI-powered
facility management platform that helps companies use their buildings
more efficiently. The cost of real estate is typically the #2 cost
center for any company (after people), but most companies don't have a
good way of measuring how their building is being used. Our product solves
this by identifying wasted areas and recommending more productive
uses for that space (e.g. turning unused offices into conference rooms
or employee lounge areas).

The core of our offering is a discrete sensor that leverages multiple
inputs (primarily an imaging sensor + PIR-based motion sensing), which
feed into a neural network model that executes inference directly on
the device. This allows us to do powerful processing on inexpensive hardware.

Our machine-learning stack is built around Tensorflow, which we use in two ways:
1) for inference (we embed Tensorflow directly on a Raspberry Pi),
and 2) training new models in the cloud. New models can be pushed remotely to the devices over-the-air to make the sensors “smarter”.

While our sensors are currently trained to count people, our vision is to evolve
into a 100% passive "super-sensor" that can be configured to detect
thousands of different types of events. Examples that we've explored
include things like detecting falls (e.g. during an emergency),
counting assets (equipment, furniture, cars), and monitoring
equipment usage (for preventative maintenance).

We're happy to chat and would love to hear your thoughts. Some things
we've worked on that might be interesting to discuss:
rapid-prototyping for hardware (Raspberry Pis + ESP8266),
machine-learning, computer-vision,
building automation, BLE, B2B sales, keeping sane while
drawing bounding boxes, or anything else that comes to mind!

We look forward to your feedback!

Dan + Kelby
======
wbrocklebank
It’s a super interesting space: we’ve been working on a similar concept here
at Shepherd (Shprd.com) for a couple of years. We use existing SCADA & BMS
embedded sensors as well as industrial standard retrofit sensors to send data
to our cloud analytics platform.
Uptake is strong, as you say, because facilities management can benefit a lot
from condition-based monitoring enhanced with ML.
Good luck - reach out if you want to chat,
Will
~~~
dpryan
Sent you a note!
------
haaen
TechCrunch previously wrote about VergeSense. See my post:
https://news.ycombinator.com/item?id=14947275
Tried to change the title of that post to:
VergeSense’s (YC S17) AI sensing hardware wants to reduce the usage of office
space
but HN didn't let me add (YC S17)
------
ju-st
Hi!
- What about privacy: is filming workplaces in high resolution ok with
customers, their employees, the law and unions?
- Your FAQ states that you are selling the whole package for a yearly fee.
Isn't that quite a risk when the customer is using mobile data as connectivity
and having devices in the field that can and will fail and have to be
replaced? Do you then pay a contractor to replace a single hardware node at
your customer's location?
- Have you looked at warehouses as customers? I suppose real estate is their
#1 cost center :)
~~~
tripleplay369
- One of the benefits of our system - as opposed to other more traditional
video-based approaches - is that we never send any raw data off our devices.
We have a lightweight neural-net model (~10 MB) that runs directly on
the devices, and only reports back on detected events (so things like “person
detected”, “door-entry passed”), etc. This also has a side-benefit in that our
devices can operate on low-bandwidth networks (and makes it economical to
backhaul detected event-data over a cellular network).
- We include a gateway device with our product, and if anything goes wrong
(sensor or gateway goes offline), we cover this as part of our service
contract.
- Warehouses are another potential vertical, provided we have access to
training data to train up our models. For example, if someone wanted to
“count” things like boxes / forklifts / etc, our sensors can be configured to
detect them.
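
A minimal sketch of that pattern (on-device inference that reports only event
counts, never raw frames) could look like the following; the model file, score
threshold, and output layout are illustrative assumptions, not VergeSense's
actual stack:

    # Hypothetical person-counting loop on a small device.
    # "person_detector.tflite" and the 0.5 score threshold are assumptions.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(model_path="person_detector.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    def count_people(frame: np.ndarray) -> int:
        # frame: camera image already resized to the model's input shape
        interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]  # one score per candidate box
        return int((scores > 0.5).sum())  # only this count ever leaves the device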
------
scrappyjoe
Have you looked into IoT systems control? Things like temp monitoring feeding
into A/C draw, electricity off when people leave etc? Utilization of space
is one consideration, but another major factor is maintenance, which comes
down to optimizing running costs and minimizing wear through preventative
maintenance and proactive design - IoT has a lot of potential in that space.
~~~
dpryan
Great question - we believe there are a lot of potential add-on modules,
especially around building control. One that is gaining a lot of interest
recently is using people-counting data to more precisely control HVAC systems
(most systems today rely on simple motion sensing for control).
Modulating heating / cooling based on the exact count can help cut energy
consumption, sometimes by as much as 30% for commercial buildings.
ARPA-E (the Advanced Research Projects Agency-Energy) recently put out a proposal for
such a system - you can read more here if you're interested:
https://arpa-e-foa.energy.gov/
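
As a back-of-the-envelope illustration of that idea, the ASHRAE 62.1
breathing-zone formula sizes outdoor air from the occupant count; the office
rates below are commonly cited defaults, used here as assumptions:

    # Required outdoor air for a zone, per ASHRAE 62.1: V_bz = Rp*Pz + Ra*Az
    def outdoor_air_cfm(people: int, floor_area_sqft: float,
                        rp: float = 5.0,   # cfm per person (office default)
                        ra: float = 0.06   # cfm per square foot (office default)
                        ) -> float:
        return rp * people + ra * floor_area_sqft

    # Ventilating for a measured 12 occupants instead of a worst-case
    # design count of 40 cuts the requirement from 320 to 180 cfm:
    print(outdoor_air_cfm(12, 2000), outdoor_air_cfm(40, 2000))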
~~~
jcims
I was talking with a maintenance supervisor for a large facility and he was
saying that they were able to modify the amount of make-up outside air cycled
into a facility based on occupation (think O2 depletion, lol). Is that
something you folks have run into? Seems like if you could avoid exhausting a
couple (hundred?) thousand cubic feet of cooled air per hour, you could save a
good bit of money.
------
ruler88
How do you defend the statement "Each sensor creates a sphere of intelligence
and the more data they collect, the smarter they get."
Do you mean each device gets smarter individually because the specific device
learned more about the specific space? Or that there is some kind of
supervised learning component where you would adjust the algorithm/model over
time for every device.
~~~
dpryan
At a local-level, each sensor builds a background model, which we diff against
& combine w/ inference outputs for detections (background modeling helps
reduce our false-positive rate). At a global level, we continuously push new
pre-trained models over-the-air. These are built using 3rd party data sources
(so not sourced from the sensors themselves).
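
For flavor, a background model of the kind described can be as simple as
OpenCV's MOG2 subtractor; this is an illustrative stand-in, not the method
actually used in the product:

    # Suppress detections when almost nothing in the scene has changed,
    # which cuts false positives from static scenes.
    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

    def foreground_fraction(frame) -> float:
        mask = subtractor.apply(frame)  # nonzero where pixels differ from background
        return float((mask > 0).mean())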
------
gt5050
Could this be used in retail store spaces to get footfall analytics?
Also, what is the average area the sensors cover?
~~~
dpryan
Retail analytics are a potential use-case, but we've decided to focus on
space-optimization for buildings because we believe it's an underserved market
relative to the opportunity. There are a lot of established companies doing
footfall analytics for retail (using things like WiFi, BLE, door-counters,
thermal images, analytics on video surveillance data, etc.).
As for sensor coverage, we cover about 1k sqft per sensor (it'll vary a bit
depending on mounting height - higher mounting equates to a wider area of
coverage)
------
eoinmurray92
How do you train the models in the cloud, run servers yourself, or use some
service?
~~~
tripleplay369
Currently we train on AWS EC2 instances. In the very beginning I was training
on the GTX 980Ti in my desktop, which actually performed way better than I
expected. Models trained faster on that machine than on p2.xlarge instances on
EC2. But the advantage of training multiple models simultaneously, and using
multi-gpu machines made the switch to EC2 worth it.
|
American Voters Don't Get Foreign Policy
posted by Stuart Stevens
-
2 years ago
The American public is usually wrong on foreign policy. It takes leadership to understand that, deal with it and still do the right thing.
In the 2012 presidential race, the Romney campaign, for which I worked as a senior strategist, regularly asked a series of routine questions about issues that mattered most to voters. No doubt the Obama campaign asked similar questions, and I’m sure their findings mirrored ours.
As an example, here are numbers from a Romney poll taken in mid-October, before the “Foreign Policy” debate, the third debate of the general election. It showed results that were fairly constant throughout the election.
When voters were asked:
“And, which ONE of the following issues do you believe should be the top priority for the President and Congress?
Economic issues like jobs
Fiscal issues like the deficit, spending and cutting taxes
Foreign policy issues like national security and the war in Afghanistan
Pocketbook issues like rising prices, the cost of gasoline and housing
Social issues like abortion and gay marriage”
The results broke down as follows:
56% Economic Issues
21% Fiscal issues
6% Foreign policy issues
6% Pocketbook issues
4% Social issues
Not that it wasn’t obvious, but this does help explain why in a “Foreign Policy” debate there was a lot of discussion of issues that touched on domestic policy, issues that voters felt had more impact on their lives. “Trade” was mentioned 14 times; “terrorism” only four.
Like it or not, Americans today are just not that interested in foreign affairs. In 1964, Pew research showed only 20 percent of the public agreed and 69 percent disagreed with the statement, “The US should mind its own business and let other countries get along the best they can on their own.” By December 2013, those who agreed had risen to 52 percent and those who disagreed had fallen to 38 percent (a 32-point rise and a 31-point fall: a combined 63-point shift).
If you’re looking for an issue that unites Republicans and Democrats, this is it. When asked, “Should the US concentrate more on our national problems rather than international,” the results barely vary by party: 82 percent of Republicans, 76 percent of Democrats and 79 percent of independents agree. |
Bright and beautiful
Large or small, plain or patterned, the one thing your purse should be this summer season is bright. Our collection has bold colouring, stripes and shiny finishes that are perfect for all occasions. Choose from famous brands and our very own Collection by John Lewis range of fun slogan purses and shop with style. |
define(function() {
var theme = {
// Default color palette
color: [
'#1790cf','#1bb2d8','#99d2dd','#88b0bb',
'#1c7099','#038cc4','#75abd0','#afd6dd'
],
// Chart title
title: {
textStyle: {
fontWeight: 'normal',
color: '#1790cf'
}
},
// Value range
dataRange: {
color:['#1178ad','#72bbd0']
},
// Toolbox
toolbox: {
color : ['#1790cf','#1790cf','#1790cf','#1790cf']
},
// Tooltip
tooltip: {
backgroundColor: 'rgba(0,0,0,0.5)',
axisPointer : { // axis pointer; takes effect when triggered by the axis
type : 'line', // defaults to a straight line; options: 'line' | 'shadow'
lineStyle : { // line pointer style settings
color: '#1790cf',
type: 'dashed'
},
crossStyle: {
color: '#1790cf'
},
shadowStyle : { // shadow pointer style settings
color: 'rgba(200,200,200,0.3)'
}
}
},
// Data zoom controller
dataZoom: {
dataBackgroundColor: '#eee', // data background color
fillerColor: 'rgba(144,197,237,0.2)', // fill color
handleColor: '#1790cf' // handle color
},
// Grid
grid: {
borderWidth: 0
},
// Category axis
categoryAxis: {
axisLine: { // axis line
lineStyle: { // the lineStyle property controls the line style
color: '#1790cf'
}
},
splitLine: { // split line
lineStyle: { // the lineStyle property (see lineStyle) controls the line style
color: ['#eee']
}
}
},
// Default parameters for value axes
valueAxis: {
axisLine: { // axis line
lineStyle: { // the lineStyle property controls the line style
color: '#1790cf'
}
},
splitArea : {
show : true,
areaStyle : {
color: ['rgba(250,250,250,0.1)','rgba(200,200,200,0.1)']
}
},
splitLine: { // split line
lineStyle: { // the lineStyle property (see lineStyle) controls the line style
color: ['#eee']
}
}
},
timeline : {
lineStyle : {
color : '#1790cf'
},
controlStyle : {
normal : { color : '#1790cf'},
emphasis : { color : '#1790cf'}
}
},
// Default parameters for candlestick (K) charts
k: {
itemStyle: {
normal: {
color: '#1bb2d8', // fill color for bullish (rising) candles
color0: '#99d2dd', // fill color for bearish (falling) candles
lineStyle: {
width: 1,
color: '#1c7099', // border color for bullish candles
color0: '#88b0bb' // border color for bearish candles
}
}
}
},
map: {
itemStyle: {
normal: {
areaStyle: {
color: '#ddd'
},
label: {
textStyle: {
color: '#c12e34'
}
}
},
emphasis: { // also the selected style
areaStyle: {
color: '#99d2dd'
},
label: {
textStyle: {
color: '#c12e34'
}
}
}
}
},
force : {
itemStyle: {
normal: {
linkStyle : {
color : '#1790cf'
}
}
}
},
chord : {
padding : 4,
itemStyle : {
normal : {
borderWidth: 1,
borderColor: 'rgba(128, 128, 128, 0.5)',
chordStyle : {
lineStyle : {
color : 'rgba(128, 128, 128, 0.5)'
}
}
},
emphasis : {
borderWidth: 1,
borderColor: 'rgba(128, 128, 128, 0.5)',
chordStyle : {
lineStyle : {
color : 'rgba(128, 128, 128, 0.5)'
}
}
}
}
},
gauge : {
axisLine: { // axis line
show: true, // shown by default; the show property toggles visibility
lineStyle: { // the lineStyle property controls the line style
color: [[0.2, '#1bb2d8'],[0.8, '#1790cf'],[1, '#1c7099']],
width: 8
}
},
axisTick: { // axis tick marks
splitNumber: 10, // number of subdivisions per split
length: 12, // the length property controls the tick length
lineStyle: { // the lineStyle property controls the line style
color: 'auto'
}
},
axisLabel: { // axis text labels; see axis.axisLabel
textStyle: { // other properties default to the global text style; see TEXTSTYLE
color: 'auto'
}
},
splitLine: { // split line
length: 18, // the length property controls the line length
lineStyle: { // the lineStyle property (see lineStyle) controls the line style
color: 'auto'
}
},
pointer : {
length : '90%',
color : 'auto'
},
title : {
textStyle: { // other properties default to the global text style; see TEXTSTYLE
color: '#333'
}
},
detail : {
textStyle: { // other properties default to the global text style; see TEXTSTYLE
color: 'auto'
}
}
},
textStyle: {
fontFamily: '微软雅黑, Arial, Verdana, sans-serif'
}
};
return theme;
}); |
Computed tomography changing over time in type 1 pulmonary laceration.
Pulmonary laceration has been accepted as a rare event of primary lung injury in blunt chest trauma. Four types of pulmonary laceration have been classified according to computed tomographic (CT) pattern, lung location, and injury mechanism. Type 1 pulmonary laceration represents the most common injury as a result of blunt chest trauma in young patients. I report the role of chest CT scan and conservative management for a young man diagnosed with type 1 pulmonary laceration after a fall from scaffolding. |
Microvascular lesions of diabetic retinopathy: clues towards understanding pathogenesis?
Retinopathy is a major complication of diabetes mellitus and this condition remains a leading cause of blindness in the working population of developed countries. As diabetic retinopathy progresses a range of neuroglial and microvascular abnormalities develop although it remains unclear how these pathologies relate to each other and their net contribution to retinal damage. From a haemodynamic perspective, evidence suggests that there is an early reduction in retinal perfusion before the onset of diabetic retinopathy followed by a gradual increase in blood flow as the complication progresses. The functional reduction in retinal blood flow observed during early diabetic retinopathy may be additive or synergistic to pro-inflammatory changes, leucostasis and vaso-occlusion and thus be intimately linked to the progressive ischaemic hypoxia and increased blood flow associated with later stages of the disease. In the current review a unifying framework is presented that explains how arteriolar dysfunction and haemodynamic changes may contribute to late stage microvascular pathology and vision loss in human diabetic retinopathy. |
Plastic Injection Mould of Earphone Cover with PC Multi Cavities xing mould is able to offer the full range of service from mold designing, making, plastic part molding to printing, assembly, package, and shipping arrangement. TOOLING Mold and die are used interchangeably to describe the tooling applied to produce plastic parts. They are typically constructed from pre-hardened steel, hardened steel, aluminum, and/or beryllium-copper alloy. Of these materials, hardened steel molds are the most ..
...Plastic Injection Mould of Earphone Cover with PC Multi Cavities xing mould is able to offer the full range of service from mold designing, making, plastic part molding to printing, assembly, package, and...
...shot injection mold Product Description: 5 years warranty and over 17 years experience. Company Information: Established in 1996, 17 years experience in plastic 2-shot injection molds. Advanced equipment brought from Europe guarantees high quality products. We ...
... Two Shot Injection Molding With Texture Surface. Brief Description: a double-color mould; two kinds of plastic material are molded in the same injection molding machine, formed in two shots, but the product leaves the mold only once. This...
|
ICC profile
In color management, an ICC profile is a set of data that characterizes a color input or output device, or a color space, according to standards promulgated by the International Color Consortium (ICC). Profiles describe the color attributes of a particular device or viewing requirement by defining a mapping between the device source or target color space and a profile connection space (PCS). This PCS is either CIELAB (L*a*b*) or CIEXYZ. Mappings may be specified using tables, to which interpolation is applied, or through a series of parameters for transformations.
Every device that captures or displays color can be profiled. Some manufacturers provide profiles for their products, and there are several products that allow an end-user to generate his or her own color profiles, typically through the use of a tristimulus colorimeter or a spectrophotometer (sometimes called a spectrocolorimeter).
The ICC defines the format precisely but does not define algorithms or processing details. This means there is room for variation between different applications and systems that work with ICC profiles. Two main generations are used: the legacy ICCv2 and the December 2001 ICCv4. Since late 2010, the current version of the format specification (ICC.1) is 4.3.
ICC has also published a preliminary specification for iccMAX (ICC.2) or ICCv5, a next-generation color management architecture with significantly expanded functionality and a choice of colorimetric, spectral or material connection space.
Details
To see how this works in practice, suppose we have a particular RGB and CMYK color space, and want to convert from this RGB to that CMYK. The first step is to obtain the two ICC profiles concerned. To perform the conversion, each RGB triplet is first converted to the Profile connection space (PCS) using the RGB profile. If necessary the PCS is converted between CIELAB and CIEXYZ, a well defined transformation. Then the PCS is converted to the four values of C,M,Y,K required using the second profile.
So a profile is essentially a mapping from a color space to the PCS, and from the PCS to the color space. The profile might do this using tables of color values to be interpolated (separate tables will be needed for the conversion in each direction), or using a series of mathematical formulae.
A profile might define several mappings, according to rendering intent. These mappings allow a choice between closest possible color matching, and remapping the entire color range to allow for different gamuts.
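
To make the two-profile workflow concrete, the following minimal Python sketch uses Pillow's ImageCms module (a littleCMS wrapper); the profile filenames are placeholders that must point to real ICC files on disk:

# Convert an RGB image to CMYK through two ICC profiles via the PCS.
from PIL import Image, ImageCms

rgb = Image.open("photo.jpg").convert("RGB")
transform = ImageCms.buildTransform(
    "sRGB.icc",           # source profile: RGB -> PCS (placeholder filename)
    "CoatedFOGRA39.icc",  # destination profile: PCS -> CMYK (placeholder)
    inMode="RGB",
    outMode="CMYK",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,  # one of the rendering intents above
)
cmyk = ImageCms.applyTransform(rgb, transform)
cmyk.save("photo_cmyk.tif")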
The reference illuminant of the Profile connection space (PCS) is a 16-bit fractional approximation of D50; its white point is XYZ=(0.9642, 1.000, 0.8249). Different source/destination white points are adapted using the Bradford transformation.
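
A minimal numpy sketch of that Bradford adaptation follows; the 3×3 matrix is the standard Bradford cone-response matrix, and the D65 source white is an assumption for the example:

# Adapt an XYZ color from a source white point to the PCS D50 white point.
import numpy as np

M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])  # Bradford cone-response matrix

def bradford_adapt(xyz, src_white, dst_white):
    src = M @ np.asarray(src_white)  # cone-like response of the source white
    dst = M @ np.asarray(dst_white)  # cone-like response of the destination white
    A = np.linalg.inv(M) @ np.diag(dst / src) @ M
    return A @ np.asarray(xyz)

D65 = [0.9504, 1.0000, 1.0888]  # assumed source white point
D50 = [0.9642, 1.0000, 0.8249]  # PCS white point quoted above
print(bradford_adapt([0.2, 0.3, 0.4], D65, D50))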
Another kind of profile is the device link profile. Instead of mapping between a device color space and a PCS, it maps between two specific device spaces. While this is less flexible, it allows for a more accurate or purposeful conversion of color between devices. For example, a conversion between two CMYK devices could ensure that colors using only black ink convert to target colors using only black ink.
References in standards
The ICC profile specification, currently being progressed as International Standard ISO 15076-1:2005, is widely referred to in other standards. The following International and de facto standards are known to make reference to ICC profiles.
International Standards
ISO/IEC 10918-1: Coding of still pictures – JPEG
ISO 12234-4: Photography – Electronic still-picture imaging – Part 4: Exchangeable image file format (Exif 2.2) (ISO TC42)
ISO 12639:2004 Graphic technology – Prepress digital data exchange – Tagged Image File Format for Image Technology (TIFF/IT) (ISO TC130)
ISO/DIS 12647-1: Graphic Technology – Process control for the production of halftone color separations, proof and production prints – part 1: Parameters and measurement methods (Revision under way in ISO TC130)
ISO/DIS 12647-2: Graphic Technology – Process control for the production of halftone color separations, proof and production prints – part 2: Offset processes (Revision under way in ISO TC130)
ISO/CD 12647-3: Graphic technology – Process control for the production of half-tone color separations, proofs and production prints – Part 3: Coldset offset lithography on newsprint
ISO/CD 12647-4: Graphic technology – Process control for the production of half-tone color separations, proof and production prints – Part 4: Publication gravure printing
ISO/CD 12647-6: Graphic technology – Process control for the production of half-tone color separations, proof and production prints – Part 6: Flexographic printing
ISO/IEC 15948: Portable Network Graphics file format (jointly defined with W3C – see www.libpng.org/pub/png/spec/iso)
ISO/IEC 15444: Coding of still pictures – JPEG2000 (ISO JTC 1/SC 29)
ISO 15930-1:2001 Graphic technology – Prepress digital data exchange – Use of PDF. Part 1: Complete exchange using CMYK data (PDF/X-1 and PDF/X-1a) (ISO TC130)
ISO 15930-3:2002 Graphic technology – Prepress digital data exchange – Use of PDF. Part 3: Complete exchange suitable for color managed workflows (PDF/X-3) (ISO TC130)
ISO 15930-4:2003 Graphic technology – Prepress digital data exchange using PDF – Part 4: Complete exchange of CMYK and spot color printing data using PDF 1.4 (PDF/X-1a)
ISO 15930-5:2003 Graphic technology – Prepress digital data exchange using PDF – Part 5: Partial exchange of printing data using PDF 1.4 (PDF/X-2)
ISO 15930-6:2003 Graphic technology – Prepress digital data exchange using PDF – Part 6: Complete exchange of printing data suitable for color-managed workflows using PDF 1.4 (PDF/X-3)
ISO 22028-1:2004 Photography and Graphic Technology – Extended color encodings for digital image storage, manipulation and interchange – Part 1: Architecture and requirements (ISO TC42)
ISO 12052 / NEMA PS3 Digital Imaging and Communications in Medicine (DICOM)
De facto standards
PICT standard specifications (file format published by Apple Computer Inc.)
PostScript Language (EPS file format published by Adobe Systems Inc.)
PDF Portable Document Format (file format published by Adobe Systems Inc.)
JDF v1.1 Revision A (Job Definition format published by the CIP4 consortium available)
SVG (Scalable Vector Graphics) version 1.1 (file format defined by W3C available from https://www.w3.org/TR/SVG/)
SWOP (Specifications for Web Offset Publications), used for CMYK print jobs, primarily in the United States
See also
Color management
Digital printing
International Color Consortium
External links
ICC Frequently Asked Questions
ICC profile specification
ICC profiles for CMYK systems
Is your system ICC Version 4 ready? A test page for browsers
ICC profiles in Adobe Photoshop
CoCa - Open source ICC profile creator by Andrew Stawowczyk Long
ICC profiles in MATLAB
[Factors related to the psychological stress response of nurses working in emergency and critical care centers].
This questionnaire survey was performed in order to reveal the characteristics of work-related stressors on nurses working in emergency and critical care centers (emergency nurses) and factors related to their stress responses. There were 347 subjects who replied to the survey: 199 emergency nurses and 148 nurses working in internal medicine departments (control group) in 11 hospitals in the Kinki and Tokai areas of Japan. The work-related stressor scores among the emergency nurses were significantly higher than those in the control group for 6 out of 8 factors: work difficulties, patient life-support duties, relationships with patients and their families, dealing with patient death, relationships with doctors and technical innovation. The work-related stressor score was significantly lower among the emergency nurses for one factor: lack of communication. Multiple logistic regression analysis was used to evaluate the relationship between the stress response and the other factors such as work-related stressors, individual and situational factors, non-work factors and social support. Risk factors related to the stress response of the emergency nurses were: perceived stress due to work difficulties, negative lifestyles and desiring a career change. Important aspects of mental health support for emergency nurses are: strengthening technical support, such as holding study sessions to reduce work difficulties, as well as adjusting the working environment to improve individual lifestyles. |
As a molecular switch, the ras protein undergoes structural changes that couple recognition sites on the surface of the protein to the guanine nucleotide-divalent metal ion binding site. X-ray crystallographic studies of p21 suggest that coordination between threonine-35 and the divalent metal ion plays an important role in these conformational changes. Recent ESEEM studies of p21 in solution, however, place threonine-35 further away from the metal and were interpreted as weak or indirect coordination of this residue. We have reported the high-frequency (139.5 GHz) EPR spectra of p21·Mn(II) complexes of two alternate guanine nucleotides that probe the link between threonine-35 and the divalent metal ion. In particular, the number of water molecules in the first coordination sphere of the manganous ion was determined to be four in p21·Mn(II)·GDP and two in p21·Mn(II)·GMPPNP. The results for GMPPNP (a GTP analog) are consistent with the number of water molecules predicted by the X-ray structure. These results rule out indirect coordination of threonine-35 and are consistent with direct, weak coordination of this residue as suggested by Halkides. The 17O hyperfine coupling constant of H217O is determined as 0.25 mT in the GDP form and 0.28 mT in the GTP form. These values are similar to reported values for 17O-enriched aquo- and phosphato-ligands in other complexes of Mn(II). For all of these measurements, the high magnetic field strength (4.9 T), corresponding to 139.5 GHz EPR excitation, yields narrow Mn(II) linewidths and thus enhances sensitivity to 17O hyperfine broadening. |
Symptom burden and performance status in a population-based cohort of ambulatory cancer patients.
For ambulatory cancer patients, Ontario has standardized symptom and performance status assessment population-wide, using the Edmonton Symptom Assessment System (ESAS) and Palliative Performance Scale (PPS). In a broad cross-section of cancer outpatients, the authors describe the ESAS and PPS scores and their relation to patient characteristics. This is a descriptive study using administrative healthcare data. The cohort included 45,118 and 23,802 patients' first ESAS and PPS, respectively. Fatigue was most prevalent (75%), and nausea least prevalent (25%) in the cohort. More than half of patients reported pain or shortness of breath; about half of those reported moderate to severe scores. Seventy-eight percent had stable performance status scores. On multivariate analysis, worse ESAS outcomes were consistently seen for women, those with comorbidity, and those with shorter survivals from assessment. Lung cancer patients had the worst burden of symptoms. This is the first study to report ESAS and PPS scores in a large, geographically based cohort with a full scope of cancer diagnoses, including patients seen earlier in the cancer trajectory (ie, treated for cure). In this ambulatory cancer population, the high prevalence of numerous symptoms parallels those reported in palliative populations and represents a target for improved clinical care. Differences in outcomes for subgroups require further investigation. This research sets the groundwork for future research on patient and provider outcomes using linked administrative healthcare data. |
GOP Shaking Off Defeat as It Looks to the Future With New Ideas
The move comes as influential Republicans are urging party members to shake off the disappointment of November’s defeat and look to a brighter future.
Leading the way is Louisiana Gov. Bobby Jindal, seen as a top candidate for the 2016 White House nomination. In a stirring speech to the RNC Winter Meeting in Charlotte, N.C., Jindal said the way forward should be based on the belief uniting Republicans: that the federal government should perform only necessary functions.
“(The Democrats) want to be in charge of the federal government so they can expand it while we say we want to be in charge so we can shrink it,” Jindal said.
“The problem with that debate is it’s focused entirely on our opponent’s terms and is a small, short-sighted debate,” he added.
Instead, Jindal proposes focusing on ways that the American economy — rather than the federal government — can grow.
“We must be the party of growth. We know government is out of control and America knows that too, but we just lost the election,” Jindal said.
“We must focus on economic growth in every community in the country — not just Washington, D.C.
“We’ve fallen into that trap that says government is based in Washington, D.C.”
Newly re-elected RNC chairman Reince Priebus also said it is time to stop looking back to what went wrong on Election Day. He said he is confident that the GOP will bounce back. |
Molded bodies made from foam plastics have an advantage over molded bodies made from dense plastic: they can be fabricated independently of wall thickness and they have considerably lower weight. This results in a considerable saving of material without a sacrifice in the strength of the molded body. The disadvantage of molded bodies made from plastic containing a blowing agent is that the foaming produces a rough surface. If this rough surface is not desirable, only an additional working step can give the molded body the required smoothness. A number of procedures are known for subsequently improving the surface of a molded body made from foamed plastic, all of which require a high expenditure which usually cannot be justified. German Pat. No. 1,778,457 describes a procedure for the fabrication of plural-layer molded bodies with a foamed core and a non-foaming thermoplastic outer skin: first, a plug of non-foaming thermoplastic material is injected into the mold as an incomplete charge; thereafter, before the inside of this plug solidifies, a second charge containing a blowing agent is injected, and the material of the second charge presses the material of the first charge outward to all sides of the mold so that it is completely filled. Devices for the execution of this procedure are described in German Pat. No. 1,779,280, German Pat. No. 1,814,343, and German Pat. No. 2,007,238; in each, the second charge is introduced only when the introduction of the first charge has been interrupted or completely ended. However, this procedure often leads to undesirable markings on the surfaces of the finished molded bodies. Furthermore, it is possible, especially with molded bodies of complicated shape, that the first injected charge is not pressed uniformly against the walls of the mold by the following foaming charge, but is instead driven so far apart that it tears. In this case, the smooth surface of the outer skin is interrupted and the molded body is not usable. To prevent this from happening, the amount of non-foaming thermoplastic material is often increased beyond what is actually necessary for the fabrication of a molded part with a non-foaming thermoplastic outer skin. This naturally leads again to an additional weight increase.
Another procedure, which does not belong to this art, has been suggested, especially for injection molding of plastic parts with thick walls, a smooth surface and a porous core; in it, the material forming the smooth surface is injected into the mold first, followed by the inner plastic containing the blowing agent. More precisely, a part of the material forming the smooth surface is injected first; thereafter, the plastic containing the blowing agent is injected together with more of the material forming the smooth surface. The device for executing this procedure contains an injection head which is connected with two injection cylinders. Within the injection cylinder which takes in the material forming the smooth surface there is arranged a piston-cylinder, operating as an injection piston, which takes in the material containing the blowing agent. This piston-cylinder carries a displacement jet on its front end, is equipped with channels at its rear end, and is capable of being pushed into the inside of the cylinder carrying the material forming the smooth surface. Furthermore, the injection jet is equipped with a gate valve movable transversely of the flow channel. This design has the consequence that the outlet nozzle formed on the molded body extends relatively far into the closed injection jet, so that the ejection of the molding is greatly handicapped. In addition, this device is complicated and expensive because of the shape of the injection jet. After the hardening of the molded body, the outlet nozzle has to be removed by cutting or breaking off, an additional working step which should not be overlooked.
In the German Pat. No. 1,154,264 a device is described for the continuous extrusion of endless molded bodies or plate-shaped elements containing a foamed material core and an outer skin made from thermoplastic material, in which a jet for the exit of the foam plastic is arranged centrally inside the injection head of the extrusion press and is surrounded by a jet for the exit of the thermoplastic material, from which the thermoplastic material exits along with the foamed plastic. Owing to the working procedure of this device, these molded bodies or plate-shaped elements have front faces which are not provided with a layer of thermoplastic material; this means that molded bodies or plate-shaped elements cannot be fabricated whose foam material core is completely surrounded by a cover of dense thermoplastic material.
The present invention provides an apparatus for the discontinuous fabrication of molded bodies made with several layers of thermoplastic material, which has a simple design by which the different materials can be injected separately or in common. Furthermore, by means of this device, the thickness of the individual layers of the finished molded bodies can be varied. To solve this task, according to this invention, it is suggested that an axially displaceable ring jet, guided within the injection head and limited in its movement by the injection head, be provided in the form of a closing sleeve whose bore forms the central jet, which is equipped with a displaceable closure needle.
According to a further characteristic of this invention, the front faces of the sleeve and the closure needle lie, in their closed injection position, in the plane of the front face of the injection head. In this way it is assured that no outlet nozzle is created in the area of the injection head, so that the molded bodies may be removed without any difficulty by opening the mold.
The front face of the sleeve, closure needle and injection head can form a part of the inside wall of the mold. This design would be used where no outlet nozzle should be created on the molded body; the expenditure of work to remove the outlet nozzle is thereby eliminated. In addition, a molded body made by this device has a covering of dense thermoplastic material which completely surrounds the foam material core. The sleeve and the closure needle are connected to separate or common displacement drives; when a common displacement drive is used, the closure needle can bring the sleeve into its arrested position against the force of a spring. According to a further characteristic of this invention, the displacement drives for the sleeve and the closure needle are controllable in accordance with the position of the piston within the injection cylinders, so that the opening and closing of the central jet and the ring jet may be selectively chosen. |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# || ____ _ __
# +------+ / __ )(_) /_______________ _____ ___
# | 0xBC | / __ / / __/ ___/ ___/ __ `/_ / / _ \
# +------+ / /_/ / / /_/ /__/ / / /_/ / / /_/ __/
# || || /_____/_/\__/\___/_/ \__,_/ /___/\___/
#
# Copyright (C) 2011-2020 Bitcraze AB
#
# Crazyflie Nano Quadcopter Client
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
"""
Used for sending external position to the Crazyflie
"""
__author__ = 'Bitcraze AB'
__all__ = ['Extpos']
class Extpos:
    """
    Used for sending external position estimates to the Crazyflie.
    """
def __init__(self, crazyflie=None):
"""
Initialize the Extpos object.
"""
self._cf = crazyflie
def send_extpos(self, x, y, z):
"""
Send the current Crazyflie X, Y, Z position. This is going to be
forwarded to the Crazyflie's position estimator.
"""
self._cf.loc.send_extpos([x, y, z])
def send_extpose(self, x, y, z, qx, qy, qz, qw):
"""
Send the current Crazyflie X, Y, Z position and attitude as a
normalized quaternion. This is going to be forwarded to the
Crazyflie's position estimator.
"""
self._cf.loc.send_extpose([x, y, z], [qx, qy, qz, qw])
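

# A minimal usage sketch (illustrative, not part of this module): stream an
# external position, e.g. from a motion-capture system, to the estimator.
# The radio URI and the coordinates below are assumptions for the example.
if __name__ == '__main__':
    import time

    import cflib.crtp
    from cflib.crazyflie import Crazyflie
    from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

    cflib.crtp.init_drivers()
    with SyncCrazyflie('radio://0/80/2M', cf=Crazyflie()) as scf:
        extpos = Extpos(scf.cf)
        for _ in range(50):
            # Send x, y, z in meters at ~20 Hz; the onboard estimator
            # fuses these measurements with its other sensor data.
            extpos.send_extpos(0.0, 0.0, 1.0)
            time.sleep(0.05)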
|
Neetzan Zimmerman is a genius at creating viral content. For the past two years he’s worked at Gawker, one of the world’s biggest blogs. Month in and month out, he generated more traffic than all of Gawker’s other writers combined. He just got poached away from Gawker by Whisper, a hot social networking startup in Los Angeles, where he will be editor-in-chief.
Not bad for a guy with no background or training in journalism, who started out by simply creating his own blog where he could post funny videos about cute cats and other crazy stuff.
“It’s the best job in the world,” he says. “I get paid to have fun.”
I asked Zimmerman to share some of his secrets. Some things he does are downright surprising, and go against the conventional wisdom in the world of marketing. For example, Zimmerman has a Twitter account (@neetzan) but he rarely posts anything there. For another, while he’s a master at creating content that spreads like wildfire on social networks, he never goes on Facebook or Twitter to promote his own stuff directly.
“I never self-promote,” he says. “I find it tacky. That’s just never been my style, and I’ve done well enough without it that it never occurred to me to change.”
Getting Started
At Gawker, Zimmerman’s job was to create huge amounts of traffic so that other writers could focus on longer stories that required more reporting. Zimmerman turned out to be awfully good at generating page views. Of the 10 biggest stories on Gawker in 2013, nine were his.
Zimmerman has a list of about 1,000 websites that he scans for ideas. He starts a typical day by rolling out of bed and skimming whatever items have popped up overnight; usually there are about 500. From 7 a.m. to 7 p.m. he stays glued to an iPad (for reading) and a laptop (for writing). He works from home because commuting would force him to be disconnected from the internet. "You need to be in one place, without moving, for an extended period of time," he says.
He finds material on sites that most people have never heard of, like 22 Words, Tastefully Offensive and Daily Picks and Flicks. But finding stories is only the first part of creating a viral hit. Part Two involves knowing how to find the right angle and write a great headline. “I made a game out of this – finding stories that at first blush didn’t seem that interesting, and turning them into viral powerhouses,” he says.
Zimmerman posts 10 to 15 items a day. His biggest post, about reality TV star Farrah Abraham (“Teen Mom”) making a sex tape, drew nearly 11 million views -- but did so over the course of seven months.
“At any given moment there were 200 people looking at that post,” Zimmerman says. “This went on for months." He says he tries to create posts that can endure for months, rather than ones that generate a lot of traffic for one day and then fade away. "My focus is on playing the long game," he says.
Most of us don't want to spend 12 hours a day sitting in a chair looking for cat videos. But if you do want to learn more about virality, check out this SlideShare: |
Arcade classic Galaga is about to become an animated series
If you walked into an arcade in the '80s...or the '90s...or perhaps even now, there's a good chance you stared down Galaga. Even as games like Space Invaders preceded it, the game has endured to the point that it even got a shout-out in The Avengers, establishing it as the ultimate arcade blast. Now it's hitting another level.
Variety reports that relatively new company The Nuttery Entertainment is working with Bandai Namco to develop extensions of the Galaga brand. Those extensions reportedly include multiple platforms and "new characters and stories," but for now the biggest development is an animated series titled Galaga Chronicles.
“We are incredibly honored to be able to work on such an amazing legacy property and help launch it into the animated space,” Nuttery co-founder Magnus Jansson said. “There is such a deep love for this game from fans around the world, and our team is excited to make sure the next chapter in the Galaga saga is equally impressive and inspiring as its humble 8-bit beginnings.”
At first glance this feels like nothing more than an attempt to cash in on video game nostalgia. Galaga's story is basically "you fight aliens and then you fight more aliens," but that's not exactly a story that's never worked onscreen before. The name recognition is solid, and if you hang a good story on this skeleton something really fun could come out of it. Plus, we have recent offerings like Netflix's Castlevania to prove that video game adaptations really can be good.
What do you think? Should Galaga really get its own series, or should it be left to the arcade? Let us know in the comments! |
// Copyright (c) 2015-2016 Yuya Ochiai
// Copyright (c) 2016-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.
const webpack = require('webpack');
const electron = require('electron-connect').server.create({path: 'src'});
const mainConfig = require('../webpack.config.main.js');
const rendererConfig = require('../webpack.config.renderer.js');
let started = false;
const mainCompiler = webpack(mainConfig);
mainCompiler.watch({}, (err, stats) => {
  if (err) {
    console.error(err);
    return;
  }
  process.stdout.write(stats.toString({colors: true}));
  process.stdout.write('\n');
  if (!stats.hasErrors()) {
if (started) {
electron.restart();
} else {
electron.start();
started = true;
}
}
});
// Keep only the 'webview/' preload entries; the other renderer entries
// are not rebuilt by this watch script.
for (const key of Object.keys(rendererConfig.entry)) {
  if (!key.startsWith('webview/')) {
    delete rendererConfig.entry[key];
  }
}
const preloadCompiler = webpack(rendererConfig);
preloadCompiler.watch({}, (err) => {
if (err) {
console.log(err);
}
});
|
THAT “tsunami” is one of the few Japanese words in global use points to the country's familiarity with natural disaster. But even measured against Japan's painful history, its plight today is miserable. The magnitude-9 earthquake—the largest in the country's history, equivalent in power to 30,000 Hiroshimas—was followed by a wave which wiped out whole towns. With news dribbling out from stricken coastal communities, the scale of the horror is still sinking in. The surge of icy water shoved the debris of destroyed towns miles inland, killing most of those too old or too slow to scramble to higher ground. The official death toll of 5,429 will certainly rise. In several towns over half the population has drowned or is missing.
In the face of calamity, a decent people has proved extremely resilient: no looting; very little complaining among the tsunami survivors. In Tokyo people queued patiently to meet their tax deadlines. Everywhere there was a calm determination to conjure a little order out of chaos. Volunteers have rushed to help. The country's Self-Defence Forces, which dithered in response to the Kobe earthquake in 1995, have poured into the stricken area. Naoto Kan, the prime minister, who started the crisis with very low public support, has so far managed to keep a semblance of order in the country, despite a series of calamities that would challenge even the strongest of leaders. The government's inept handling of the Kobe disaster did much to undermine Japan's confidence in itself.
The wider concern
The immediate tragedy may be Japan's; but it also throws up longer-term questions that will eventually affect people all the way round the globe. Stockmarkets stumbled on fears about the impact on the world's third-biggest economy. Japan's central bank seems to have stilled talk of financial panic with huge injections of liquidity. Early estimates of the total damage are somewhat higher than the $100 billion that Kobe cost, but not enough to wreck a rich country. Disruption to electricity supplies will damage growth, and some Asian supply chains are already facing problems; but new infrastructure spending will offset some of the earthquake's drag on growth.
Those calculations could change dramatically if the nuclear crisis worsens. As The Economist went to press, helicopters were dropping water to douse overheating nuclear fuel stored at the Fukushima Dai-ichi plant, where there have been explosions, fires and releases of radiation greater, it seems, than the Japanese authorities had admitted. The country's nuclear industry has a long history of cover-ups and incompetence, and—notwithstanding the heroism of individual workers—the handling of the crisis by TEPCO, the nuclear plant's operator, is sadly in line with its past performance.
Even if the nuclear accident is brought under control swiftly, and the release of radiation turns out not to be large enough to damage public health, this accident will have a huge impact on the nuclear industry, both inside and outside Japan. Germany has already put on hold its politically tricky decision to extend the life of its nuclear plants. America's faltering steps towards new reactors look sure to be set back, not least because new concerns will mean greater costs.
China has announced a pause in its ambitious plans for nuclear growth. With 27 reactors under construction, more than twice as many as any other country, China accounts for almost half the world's current nuclear build-out—and it has plans for 50 more reactors. And in the long term the regime looks unlikely to be much deterred from these plans—and certainly not by its public's opinion, whatever that might be. China has a huge thirst for energy that it will slake from as many wells as it can, with planned big increases in wind power and in gas as well as the nuclear build-out and ever more coal-fired plants.
Thus the great nuclear dilemma. For the best nuclear safety you need not just good planning and good engineering. You need the sort of society that can produce accountability and transparency, one that can build institutions that receive and deserve trust. No nuclear nation has done this as well as one might wish, and Japan's failings may well become more evident. But democracies are better at building such institutions. At the same time, however, democracy makes it much easier for a substantial and implacable minority to make sure things don't happen, and that seems likely to be the case with plans for more nuclear power. Thus nuclear power looks much more likely to spread in societies that are unlikely to ground it in the enduring culture of safety that it needs. China's nearest competitor in the new-build stakes is Russia.
Yet democracies would be wrong to turn their back on nuclear power. It still has the advantages of offering reliable power, a degree of energy security, and no carbon dioxide emissions beyond those incurred in building and supplying the plants. In terms of lives lost it has also boasted, to date, a reasonably good record. Chernobyl's death toll is highly uncertain, but may have reached a few thousand people. China's coal mines certainly kill 2,000-3,000 workers a year, and coal-smogged air there and elsewhere kills many more. It remains a reasonable idea for most rich countries to keep some nuclear power in their portfolio, not least because by maintaining economic and technological stakes in nuclear they will have more standing to insist on high standards for safety and non-proliferation being applied throughout the world. But in the face of panic, of sinister towers of smoke, of invisible and implacable threats, the reasonable course is not an easy one.
Back to Tokyo
No country faces that choice more painfully than Japan, scarred by nuclear energy but also deprived of native alternatives. To abandon nuclear power is to commit the country to massive imports of gas and perhaps coal. To keep it is to face and overcome a national trauma and to accept a small but real risk of another disaster.
Japan's all too frequent experience of calamity suggests that such events are often followed by great change. After the earthquake of 1923, it turned to militarism. After its defeat in the second world war, and the dropping of the atom bombs, it espoused peaceful growth. The Kobe earthquake reinforced Japan's recent turning in on itself.
This new catastrophe seems likely to have a similarly huge impact on the nation's psyche. It may be that the Japanese people's impressive response to disaster, and the rest of the world's awe in the face of their stoicism, restores the self-confidence the country so badly needs. It may be that the failings of its secretive system of governance, exemplified by the shoddy management of its nuclear plants, lead to more demands for political reform. As long as Mr Kan can convince the public that the government's information on radiation is trustworthy, and that it can ease the cold and hunger of tsunami survivors, his hand may be strengthened to further liberalise Japan. Or it may be that things take a darker turn.
The stakes are high. Japan—a despondent country with a dysfunctional political system—badly needs change. It seems just possible that, looking back from a safe distance, Japan's people will regard this dreadful moment not just as a time of death, grief and mourning, but also as a time of rebirth. |
NEW YORK (Reuters) - Tesla Inc’s more than 66 percent rally for the year is prompting some funds to make an outsized bet on the electric car maker.
The Tesla corporate logo is pictured at a Tesla electric car dealership in Sydney, Australia, May 31, 2017. REUTERS/Jason Reed
A total of 22 actively managed mutual funds and exchange-traded funds have more than 5 percent of their portfolios in the company, according to Morningstar data.
Few of these funds are actively buying shares, fund filings show, but instead are letting their stakes balloon as the stock continues to rally. Typically, fund managers prevent any one position from growing beyond 5 percent of assets in order to manage their risk.
“It’s concerning because there’s a significant risk in holding that much of any individual stock because you’re not getting the benefits of diversification, particularly with a company that is as volatile as Tesla is,” said Todd Rosenbluth, director of ETF and mutual fund research at CFRA in New York.
The $2.1-billion Baron Partners fund has the largest individual stake in Tesla, at 19.4 percent of assets, while another Baron fund, the $185-million Baron Focused Growth fund, has the second-largest position, with 17.3 percent of assets in the company.
Both funds began buying shares of Tesla in 2014 and are up more than 18 percent for the year, nearly double the 10 percent gain for the broad S&P 500.
Baron declined to comment. Ron Baron, the fund’s manager, said in June that he thinks that Tesla could hit $1,000 per share by 2020, a 181-percent gain from its current price of approximately $356 per share.
At 10 percent of assets, the $66-million ARK Industrial Innovation ETF has the largest position in Tesla among all exchange-traded funds, according to Morningstar. The actively-managed fund, which aims to buy companies following a theme of disruptive innovation, is up 33.7 percent for the year.
Sector ETFs are more likely than actively-managed funds to have outsized positions in individual companies, largely because they track market-weighted indexes that themselves are often top-heavy, Rosenbluth said.
The $12.9-billion Consumer Discretionary Select SPDR ETF, for instance, has 15.1 percent of its assets in shares of Amazon.com Inc, while the $3.8 billion iShares MSCI South Korea Capped ETF holds 22.9 percent of its assets in shares of Samsung Electronics Co Ltd.
Investors in ETFs are more likely to accept greater individual company risk as long as the portfolio is representative of a sector, Rosenbluth said. |
Q:
quickbooks magento 2.3 QBWC1012
I'm migrating a store from magento 1 to magento 2 and they have it integrated with Quickbooks Desktop on a MQC remote desktop.
I'm using consolibyte on both m1 and m2. And it was working on m2 as well. Then I had to put aside this task and updated magento to 2.3.0.
Now when I try to update my magento 2 application in the web connector on the remote desktop it gives me the following error:
"Version:
Not provided by service
Message:
Authentication failed
Description:
QBWC1012: Authentication failed due to following error message.
The request failed with an empty response. See QWCLog for more details. Remember to turn logging on."
The log looks like this:
"20190213.06:57:36 UTC : QBWebConnector.WebServiceManager.DoUpdateSelected() : updateWS() for application = 'Sansha 2 France 1.0' has STARTED
20190213.06:57:36 UTC : QBWebConnector.RegistryManager.getUpdateLock() : HKEY_CURRENT_USER\Software\Intuit\QBWebConnector\UpdateLock = FALSE
20190213.06:57:36 UTC : QBWebConnector.RegistryManager.setUpdateLock() : HKEY_CURRENT_USER\Software\Intuit\QBWebConnector\UpdateLock has been set to True
20190213.06:57:36 UTC : QBWebConnector.RegistryManager.setUpdateLock() : ********************* Update session locked *********************
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.instantiateWebService() : Initiated connection to the following application.
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.instantiateWebService() : AppName: Sansha 2 France 1.0
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.instantiateWebService() : AppUniqueName (if available): Sansha 2 France 1.0
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.instantiateWebService() : AppURL: https://eurostore.magento2.sansha.com/quickbooks/api
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.do_serverVersion() : *** Calling serverVersion().
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.do_serverVersion() : Actual error received from web service for serverVersion call: <The request failed with an empty response.>. For backward compatibility of all webservers, QBWC will catch all errors under app-not-supporting-serverVersion.
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.do_serverVersion() : This application does not contain support for serverVersion. Allowing update operation for backward compatibility.
20190213.06:57:36 UTC : QBWebConnector.SOAPWebService.do_clientVersion() : *** Calling clientVersion() with following parameter:<productVersion="2.2.0.71">
20190213.06:57:37 UTC : QBWebConnector.SOAPWebService.updateWS() : Actual error received from web service for clientVersion call: <The request failed with an empty response.>. For backward compatibility of all webservers, QBWC will catch all errors under app-not-supporting-clientVersion.
20190213.06:57:37 UTC : QBWebConnector.SOAPWebService.do_clientVersion() : This application does not contain support for clientVersion. Allowing update operation for backward compatibility.
20190213.06:57:37 UTC : QBWebConnector.SOAPWebService.do_authenticate() : Authenticating to application 'Sansha 2 France 1.0', username = 'sansha'
20190213.06:57:37 UTC : QBWebConnector.SOAPWebService.do_authenticate() : *** Calling authenticate() with following parameters:<userName="sansha"><password=<MaskedForSecurity>
20190213.06:57:37 UTC : QBWebConnector.SOAPWebService.do_authenticate() : QBWC1012: Authentication failed due to following error message.
The request failed with an empty response.
More info:
StackTrace = at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at QBWebConnector.localhost.WCWebServiceDoc.authenticate(String strUserName, String strPassword)
at QBWebConnector.localhost.WCWebService.authenticate(String strUserName, String strPassword)
at QBWebConnector.SOAPWebService.authenticate(String UserName, String Password)
at QBWebConnector.WebService.do_authenticate(String& ticket, String& companyFileName)
Source = System.Web.Services
20190213.06:57:37 UTC : QBWebConnector.RegistryManager.setUpdateLock() : HKEY_CURRENT_USER\Software\Intuit\QBWebConnector\UpdateLock has been set to False
20190213.06:57:37 UTC : QBWebConnector.RegistryManager.setUpdateLock() : ********************* Update session unlocked *********************
20190213.06:57:37 UTC : QBWebConnector.WebServiceManager.DoUpdateSelected() : Update completed with errors. See log (QWClog.txt) for details.
"
I tried to find a solution but still no luck.
The SOAP server seems to be working:
"QuickBooks PHP DevKit Server v3.0 at /quickbooks/api/
(c) "Keith Palmer" <keith@consolibyte.com>
Visit us at: http://www.ConsoliBYTE.com/
Use the QuickBooks Web Connector to access this SOAP server.
QuickBooks_WebConnector_Server::handle() parameters:
- $return = 1
- $debug = 1
Misc. information:
- Logging: 1
- Timezone: Europe/Paris (Auto-set: )
- Current Date/Time: 2019-02-14 08:09:29
- Error Reporting: 32767
SOAP adapter:
- QuickBooks_Adapter_Server_Builtin
Registered handler functions:
Array
(
[0] => __construct
[1] => authenticate
[2] => sendRequestXML
[3] => receiveResponseXML
[4] => connectionError
[5] => getLastError
[6] => closeConnection
[7] => serverVersion
[8] => clientVersion
)
Detected input:
Timestamp:
- 2019-02-14 08:09:29 -- process 0.27981"
And this is how it is initiated:
if(!\QuickBooks_Utilities::initialized($dsn)) {
\QuickBooks_Utilities::initialize($dsn);
\QuickBooks_Utilities::createUser($dsn, $qb_username, $qb_password);
}
$server = new \QuickBooks_WebConnector_Server($dsn, $this->_map, $this->_errmap, $this->_hooks);
$response = $server->handle(true, true);
Has anyone faced this problem? Does someone know how it can be solved?
Regards
A:
Found the issue finally.
The point is that updating to Magento 2.3 breaks HTTP POST requests, or at least the way they were working before the update; see https://magento.stackexchange.com/questions/253414/magento-2-3-upgrade-breaks-http-post-requests-to-custom-module-endpoint.
I used https://github.com/consolibyte/quickbooks-php/blob/master/dev/dev_qbwc_tester.php to test the request and the form key was not valid. To fix this your controller has to implement CsrfAwareActionInterface.
And to prevent Magento from trying to set cookies and rewrite headers after your QBWC server is initialized, you have to stop execution just after the SOAP server has handled the request. I put a die() there; I'm not sure it's the best solution, but authentication works (this was my primary issue).
The final code in the controller looks like this:
<?php
namespace Sansha\Quickbooks\Controller\Api;
use Magento\Framework\App\CsrfAwareActionInterface;
use Magento\Framework\App\RequestInterface;
use Magento\Framework\App\Request\InvalidRequestException;
class Index extends \Magento\Framework\App\Action\Action implements CsrfAwareActionInterface
{
protected $resultPageFactory;
protected $_map = array();
protected $_hooks = array();
protected $_errmap = array();
protected $_helper_config;
protected $_helper_data;
protected $_helper_api;
protected $_logger;
/**
* Constructor
*
* @param \Magento\Framework\App\Action\Context $context
* @param \Magento\Framework\View\Result\PageFactory $resultPageFactory
*/
public function __construct(
\Magento\Framework\App\Action\Context $context,
\Magento\Framework\View\Result\PageFactory $resultPageFactory,
\Sansha\Quickbooks\Helper\QbConfig $helperConfig,
\Sansha\Quickbooks\Helper\QbData $helperData,
\Sansha\Quickbooks\Helper\QbApi $helperApi,
\Sansha\Quickbooks\Logger\Logger $logger
) {
$this->resultPageFactory = $resultPageFactory;
$this->_helper_config = $helperConfig;
$this->_helper_data = $helperData;
$this->_helper_api = $helperApi;
$this->_logger = $logger;
parent::__construct($context);
}
/**
* Execute view action
*
* @return \Magento\Framework\Controller\ResultInterface
*/
public function execute()
{
if (function_exists('date_default_timezone_set'))
{
date_default_timezone_set('Europe/Paris');
}
$store_id = $this->_helper_data->getStoreId();
$dsn = $this->_helper_config->getDsn($store_id);
$qb_username = $this->_helper_config->getQbLogin($store_id)['username'];
$qb_password = $this->_helper_config->getQbLogin($store_id)['password'];
if(!\QuickBooks_Utilities::initialized($dsn)) {
\QuickBooks_Utilities::initialize($dsn);
\QuickBooks_Utilities::createUser($dsn, $qb_username, $qb_password);
}
$server = new \QuickBooks_WebConnector_Server($dsn, $this->_map, $this->_errmap, $this->_hooks);
$response = $server->handle(true, true);
die(); // stop code execution after the server has handled the request
}
// The two functions below prevent Magento's CSRF validation from breaking the HTTP POST request
public function createCsrfValidationException(RequestInterface $request): ?InvalidRequestException
{
return null;
}
public function validateForCsrf(RequestInterface $request): ?bool
{
return true;
    }
}
|
Q:
SQL NOT IN not working
I have two databases, one which holds the inventory, and another which contains a subset of the records of the primary database.
The following SQL statement is not working:
SELECT stock.IdStock
,stock.Descr
FROM [Inventory].[dbo].[Stock] stock
WHERE stock.IdStock NOT IN
(SELECT foreignStockId FROM
[Subset].[dbo].[Products])
The NOT IN does not work. Removing the NOT gives the correct results, i.e. products that are in both databases. However, using NOT IN is not returning ANY results at all.
What am I doing wrong, any ideas?
A:
SELECT foreignStockId
FROM [Subset].[dbo].[Products]
Probably returns a NULL.
A NOT IN query will not return any rows if any NULLs exists in the list of NOT IN values. You can explicitly exclude them using IS NOT NULL as below.
SELECT stock.IdStock,
stock.Descr
FROM [Inventory].[dbo].[Stock] stock
WHERE stock.IdStock NOT IN (SELECT foreignStockId
FROM [Subset].[dbo].[Products]
WHERE foreignStockId IS NOT NULL)
Or rewrite using NOT EXISTS instead.
SELECT stock.idstock,
stock.descr
FROM [Inventory].[dbo].[Stock] stock
WHERE NOT EXISTS (SELECT *
FROM [Subset].[dbo].[Products] p
WHERE p.foreignstockid = stock.idstock)
As well as having the semantics that you want, the execution plan for NOT EXISTS is often simpler, as looked at here.
The reason for the difference in behaviour is down to the three valued logic used in SQL. Predicates can evaluate to True, False, or Unknown.
A WHERE clause must evaluate to True in order for the row to be returned but this is not possible with NOT IN when NULL is present as explained below.
'A' NOT IN ('X','Y',NULL) is equivalent to ('A' <> 'X' AND 'A' <> 'Y' AND 'A' <> NULL)
'A' <> 'X' = True
'A' <> 'Y' = True
'A' <> NULL = Unknown
True AND True AND Unknown evaluates to Unknown per the truth tables for three valued logic.
The following links have some additional discussion about performance of the various options.
Should I use NOT IN, OUTER APPLY, LEFT OUTER JOIN, EXCEPT, or NOT EXISTS?
NOT IN vs. NOT EXISTS vs. LEFT JOIN / IS NULL: SQL Server
Left outer join vs NOT EXISTS
NOT EXISTS vs NOT IN
A:
If NOT IN does not work, you can always try a LEFT JOIN, then filter in the WHERE clause on a value from the joined table being NULL. This works provided the column you are joining on does not itself contain NULL values.
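Using the tables from the question, that anti-join would look like this (standard SQL, same result as the NOT EXISTS version above):
SELECT stock.IdStock,
       stock.Descr
FROM [Inventory].[dbo].[Stock] stock
LEFT JOIN [Subset].[dbo].[Products] p
    ON p.foreignStockId = stock.IdStock
WHERE p.foreignStockId IS NULL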
|
Field
The present subject matter relates to a facsimile apparatus, a control method thereof, and a storage medium.
Description of Related Art
In recent years, a method for performing facsimile (fax) communication using the Internet Protocol (IP) network has become established. Fax communication using the IP network employs the Session Initiation Protocol (SIP) as a call connection protocol and the T.38 protocol for performing data communication. Since such fax communication is performed via the IP network, communication is performed at a higher speed as compared to a conventional G3 fax. Further, SIP- and T.38-enabled Internet fax apparatuses (hereinafter referred to as IP faxes) are currently on the market.
Furthermore, a T.38 gateway (T.38 GW) which converts in real time an analog fax signal of the G3 fax into the T.38 protocol is also available in the market. The T.38 GW thus allows the SIP and T.38-enabled IP fax and the conventional G3 fax to communicate with each other.
Communication using the T.38 protocol utilizes, for the transport layer, either Transmission Control Protocol (TCP)/Transport Protocol Data Unit Packet (TPKT), or User Datagram Protocol (UDP)/UDP Transport Layer Protocol (UDPTL).
The protocols available for communication using the T.38 protocol are determined as below according to environmental specifications in which IP fax communication is to be performed.
Public IP network: TCP/TPKT, UDP/UDPTL
Local IP network A (using a predetermined exchanger): UDP/UDPTL
Local IP network B (using peer-to-peer (P2P) method): TCP/TPKT, UDP/UDPTL
T.38 GW: UDP/UDPTL
The predetermined exchanger is a local IP exchanger which includes an SIP server and resolves on the local IP network a connecting destination address based on a destination telephone number. Further, P2P is a communication method which directly connects terminals to each other on the IP network to allow the terminals to transmit and receive data using IP addresses thereof.
If the size of an internet facsimile protocol (IFP) packet is increased and the number of packets is decreased in both the TCP/TPKT and UDP/UDPTL protocols, extra data of a header portion to be transmitted to the network can be decreased. Throughput is thus improved. Further, in the case of using the UDP/UDPTL protocol, if the number of redundant packets for performing error recovery is increased, tolerance to packet loss is improved. However, if the number of redundant packets is excessively increased, the throughput is lowered, so that real-time property becomes degraded.
Thus, it is necessary to appropriately determine the packet size according to the environment in which communication using the T.38 protocol is to be performed.
Japanese Patent Application Laid-Open No. 2002-158702 discusses a method for determining the packet size for a GW apparatus and a router apparatus. More specifically, Japanese Patent Application Laid-Open No. 2002-158702 is directed to a technique for reducing, when performing real-time communication such as voice communication, a packet delay time, and at the same time improving the throughput when performing non-real-time communication such as data communication including a file transfer. Japanese Patent Application Laid-Open No. 2002-158702 thus discusses a technique for dividing and transmitting packets when performing real-time communication.
Among T.38 GWs used for performing communication between the T.38-enabled IP fax and the G3 fax, there is a GW in which restrictions are placed on a receivable packet size. More specifically, the total size of the packet is restricted to less than 320 bytes, and the size of the IFP packet is restricted to less than 128 bytes.
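For illustration only (this sketch is not part of the referenced patents, and the header overhead and packet layout are simplifying assumptions), the effect of these limits can be expressed as a simple size check on a UDPTL packet carrying one primary IFP payload plus redundant copies:
# Toy illustration of the T.38 GW size limits described above.
MAX_TOTAL_BYTES = 320  # total packet size must stay below this
MAX_IFP_BYTES = 128    # primary IFP packet must stay below this

def fits_gateway_limits(ifp_len, redundancy_lens, header_overhead=12):
    """True if one primary IFP payload plus its redundant payloads
    satisfy both restrictions of such a T.38 gateway."""
    total = header_overhead + ifp_len + sum(redundancy_lens)
    return ifp_len < MAX_IFP_BYTES and total < MAX_TOTAL_BYTES

print(fits_gateway_limits(120, [120, 120]))  # False: 372 bytes in total
print(fits_gateway_limits(120, [120]))       # True: 252 bytes in total
Reducing either the IFP payload size or the number of redundant copies brings the packet back within the limits.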
For this reason, there is a problem in that the packet not satisfying the above-described restrictions on the packet size becomes discarded on the T.38 GW side, so that IP fax communication cannot be normally performed. Japanese Patent Application Laid-Open No. 2002-158702 does not discuss adjusting the packet size with respect to the size restrictions on the above-described data portion.
In general, when communicating with a T.38 GW, the UDP/UDPTL protocol is available for the transport layer in the T.38 protocol. In the case of a Voice over IP gateway (VoIP GW) used for voice communication, a small amount of packet loss which occurs in VoIP is negligible, so that the UDP protocol is generally used. Since the T.38 GW is a variation of the VoIP GW, the T.38 GW also generally uses the UDP protocol.
Thus, the above-described size restrictions can be overcome by performing packetization, specifically, by reducing the size of the IFP packet of a UDPTL packet and decreasing the number of redundant packets used for error recovery. However, if the packet size is uniformly reduced regardless of the connection destination, the data amount of the header portion to be transmitted to the network relatively increases, so that the throughput is lowered in the environment other than the T.38 GW. Further, if the number of redundant packets used for error recovery becomes small, the tolerance to packet loss may also be lowered. |
On the union-led 'March For The Alternative' in London last March, 145 protestors were arrested in relation to peacefully occupying a luxury store. Here, one of the ten found guilty speaks out on his trial.
“Which judge have you got?” That this is the first question asked by anyone familiar with the justice system perhaps says all you need to know about its operation. Throughout our case, I had ringing in my ears the wisdom of a QC I met in 2009 when we took the Treasury to court over the RBS bail-out: “people have this bizarre notion that the British justice system is based on some kind of mixture of legislation and logic,” he would tell us. “Get that idea out of your head immediately. What matters is which judge you get – and it will be a man – what mood he is in, how much sleep he had, his personal prejudices, and perhaps, maybe, some of the internal politics of the court.”
After that case, another lawyer – a partner at a well known firm – opined: “if that’s the justice system in this country, it’s time for a fucking revolution”. If arbitrary decisions by judges sitting with no jury are one element of England’s legal system, then another ingredient I was surprised to find is farce.
Most of those arrested in Fortnum and Mason on 26 March had charges dropped. The 30 who didn’t were chosen because of specific aggravating factors which demonstrated the ‘intent to intimidate’ – including ‘carrying an umbrella’, ‘holding a leaflet’ and, in my case, ‘facilitating a meeting’ (among them was Asuka Jones, who sets out his account of arrest and being found guilty here).
Roughly half of the defendants had our names spelt wrong on the court record. Before our trial even began, one of us had their charges dropped because they had accidentally been left off a list. At one point, the prosecution barrister triumphantly asked one of his key witnesses if they had seen any behaviour inside the store which they believed to be criminal. The man replied with one word: “no”. The police officers held their heads in their hands: this is not the carefully prepared theatre of Hollywood courtroom dramas.
Another witness – who they had brought in to prove that we were intimidating – said nothing of the sort. Instead, he declared, with an amused glint in his eye, that the situation was chaos: “in the middle of all of these protesters, a man kept approaching me and insisting he wanted to buy a kilo of pink marshmallows”. Later, the woman who ran the confectionery stand on the day returned to this theme. She went to great lengths to explain the confusion caused by this man who couldn’t understand that marshmallows aren’t sold by the kilogram. In his written witness statement, the shop manager claimed that he had seen someone steal, open and swig from a bottle of champagne. “I was horrified,” he wrote, “that anyone would drink champagne before it was chilled”. During his appearance, the lawyers sought to establish where various events in the shop had taken place – ‘where did this happen?’: ‘by the Old Door’; ‘and that?’ ‘by the confectionery stand’; ‘and where was the stolen champagne from?’ “Highgrove, of course.” Even the judge couldn’t stifle his laugh.
In the end, the most aggressive behaviour which the prosecution could level at any of the individuals on trial was footage of a game of beach volleyball inside the shop. Their assertion that, in context, this terrified customers was always absurd. It became comical when, later in the case, our barrister showed CCTV footage of first a police officer, and then a customer, joining in.
At the first climax of the case, right before he pronounced his decision, Judge Snow looked at our representative and said “you aren’t expecting me to rule on article ten and eleven rights, are you?”. Richard – our barrister, whose boiling intellect had already melted Judge Snow, if only he had noticed – looked a little exasperated. He had mentioned the European Convention numerous times throughout the trial. But he repeated, in the way that a patient parent might politely re-iterate an instruction to a toddler, that yes, the judge did need to consider our right to protest in a case all about, err, the right to protest. The judge declared that he would therefore need yet another full day to come to a verdict. The following day, he turned up twenty minutes late to announce his decision.
But in a bizarre narrative structure, before the farce comes the boredom. Because the truth is that the main experience of this trial was not the week in court, but the months of preparation. We are on email number 88 from our solicitors. Most run to hundreds of words, and each demonstrates the careful thought and passionate work with which they have launched themselves at this case. Each is important and relevant, and each represents more work when I return late at night from the office. In fact, most have themselves been sent long out of office hours by a legal team working for us largely in their spare time for little or no pay.
Of that team, it is two paralegals at Bindmans whose work delivered the backbone of these communications. And it is they who have borne the brunt of this monotony. When the prosecution accidentally copied one of our lawyers into an email discussing our request for all three hundred hours of CCTV, it showed them saying they would create ‘the most boring and uncomfortable cinema on earth’.
I am told that, much to the mirth of their housemates, the paralegals stuck the photos of each of the defendants around their bedroom walls. They then settled down for weeks to watch hundreds of hours of footage. These endless dry eyed weeks of pausing and replaying of silent grainy footage of our protest – in many cases, including mine, working essentially for free – demonstrate a deep passion and solidarity. The willingness to fight for justice against the adrenalin filled rush of a police horse is one thing. The capacity to keep a flame alive through the crushing, airless monotony played out in months of DVD after DVD showing the same essentially monotonous events from one angle after another is a prospect I couldn’t face.
But it was this work which delivered the great reveal of our case – debunking the gross exaggeration of the key prosecution witness. She had closed the first day with a story of how a gang of protesters had surrounded her, pointed at her, and shouted in her face for around half an hour. With a magician's flourish, our lawyers produced in just 5 minutes, from a vast pile of disks, the relevant CCTV DVD. This showed that, whilst her stall was in the midst of a large crowd, the events as she described them simply didn’t happen. Clearly the telling and retelling of the story had distorted it entirely in her head. Clearly the prosecution lawyers relying on her testimony hadn’t done their research. Thank God, or, rather, Bindmans, that ours had.
Throughout the trial, in the breaks between witnesses, the ten defendants and three or four lawyers would congregate in a small stuffy meeting room more fitting for two. In these discussions, our lawyers would map out for us every potential legal alley down which the prosecution might run, or which the judge might use as an excuse to convict us. ‘This law’, we would be told, ‘was contingent on that factor, and so we will just need to say this’. Or, ‘if they go down that route, then this will be our response, but they probably haven’t thought of that’. When it came to it though, the prosecution didn’t bother with any of the subtlety of actually making a legal case. They just read out the witness statement from the woman earlier in the case whose testimony had essentially been disproved by the CCTV. And when the judge came to his ruling, he too ignored the subtleties of actually justifying his case in law. Essentially, despite two and a half days of thought, he went for the random opinion generator option: “oh, errr, guilty”.
For me, the verdict isn’t a huge problem: in the world I inhabit it is not abnormal (and perhaps that tinges the tone of this piece). For some of my co-defendants, it is: at least one of us could well lose his job over it. For others, being declared a criminal hurts.
The support we had from our legal team was matched only by the mutual solidarity I felt from my co-defendants. We were essentially ten random people who had been at that protest that day. And yet I was delighted to find each of them to be supportive, helpful, intelligent and interesting. The only moment during the trial where I was moved by anything more than boredom or farce was when the character witness statements were read out for each of us, outlining the outstanding place my co-defendants held in the eyes of their respective communities.
In both our lawyers and my co-defendants, I was lucky. Unlike many others, I wasn’t isolated by the legal system, wasn’t sent to face the cold justice system alone. Likewise I am a privileged white man, and was able to feel at home surrounded by lawyers and judges – people much like those with whom I studied at university or who my parents knew when I was growing up. For most who sit in the dock, this is not the case. For most, the combination of monotony and farce is surely overwhelmed by a third feeling – terror.
And so whilst my experience of our trial was a bizarre mixture of boredom and comedy, with a swirl of pink marshmallow through the middle and a (tepid) champagne sauce, the truth about the legal system is surely this: most involved have little clue what they are doing: as the blundering prosecution showed, if you are on the side of the powerful, you don’t need to. Judges wobble back and forth within the blurry parameters of their role as defenders of a system in crisis. If, like us, you have excellent lawyers battling your corner, you have a hope. If not, you don’t.
We have been ordered to pay £1000 each in prosecution costs. Whilst we can’t fundraise for that, the trial placed a significant financial burden on many of my co-defendants, with long journeys to travel from across the country, accommodation to find in London, etc. You can help by donating here. |
Texas lands shot-blocking big man
By the time Prince Ibeh got around to making his college decision Thursday night, he likely could have picked just about any program in the country.
However, the 6-foot-10 center from Garland (Texas) Naaman Forest decided that he would stay local and picked the Texas Longhorns.
"Texas first of all they have a pedigree of players like Prince going to the next level and that was intriguing to him," said Prince's summer coach with the Top Achiever Pistons, Lawrence Mann. "Recruiting him it was Rick Barnes himself, and that meant a lot to us.
"Also, he wants to win a national championship and he feels like he can do that with the players they have there."
Entering the summer virtually unknown outside of the state of Texas, Ibeh proved to have a rare blend of size and athleticism. During stints with the Top Achievers and a few runs in Nike's EYBL with the Texas Titans he proved to be a very high level shot blocker who could really run the floor.
According to Mann, it was seeing Ibeh turn up his level of play and want to compete against the best that has helped to transform him.
That willingness to compete also meant that Texas already having a pair of quality bigs on board in five-star Cameron Ridley and three-star Connor Lammert only enhanced the Longhorns standing with Ibeh.
"He told me "I'm not afraid of competition, I want it," Mann told Rivals.com. "When he told me that I was like you are right, done deal."
A rapidly rising four-star prospect who currently ranks #54 nationally, Ibeh is the fourth member of what is turning out to be a high quality class of big guys who will be put on the floor with a steady point guard in Javan Felix.
"They feel that they have three big guys in that class that are really good and they've all got to compete," said Mann of Texas' plans for Ibeh. "They feel Prince has a bounce and a upside where they can play guys at different positions and he can get his minutes by playing his game and playing hard.
"You have guys who worry about being one and done and Prince isn't worried about all of that. Prince just wants to get better." |
Yesterday, as you probably heard, Amazon announced that it was raising its minimum hourly pay to $15. About 350,000 workers will receive an immediate raise as a result. Amazon also called on other companies to do the same and said it would lobby Washington to increase the federal minimum wage. A tightening labor market no doubt contributed to Amazon’s decision, but politics — avoiding “the chance of regulations that pose a bigger cost down the road,” as The Wall Street Journal’s Dan Gallagher wrote — was the main factor.
This is how democracy and capitalism are supposed to work.
“Jeff Bezos admitted a real degree of failure here and openly stated that the critics were right and he was wrong,” wrote Shaun King, the writer and Black Lives Matter activist. “Thank you @SenSanders,” tweeted John Podesta, Hillary Clinton’s former campaign chairman. Bezos thanked Sanders yesterday as well, in a Twitter exchange.
For more on the importance of changing corporate behavior, I recommend a recent book by Peter Georgescu, the former C.E.O. of a major advertising agency, as well as coverage and analysis of Senator Elizabeth Warren’s new legislation on this topic.
Trump’s tax fraud. If you’re a subscriber to The Times and you’re feeling angry this morning about President Trump’s brazen tax cheating — as uncovered by a long Times investigation — I know how you feel. But I would also encourage you to make room to feel a small bit of pride, as well.
A long journalistic investigation, involving multiple reporters and editors, is expensive. The reason my colleagues in the newsroom are able to pursue such projects is because of the financial support of subscribers. All of us who work here are grateful for that support.
Among the reactions to the investigation:
“If you’re wondering how Trump managed to evade the tax authorities for so long, given the brazen acts reported in that NYT piece, note that we’ve basically stopped prosecuting white-collar crime and tax evasion,” tweeted The Washington Post’s Catherine Rampell.
“Most if not all of the transactions detailed in The Times can be pursued as civil tax fraud by both the federal and New York state governments,” argues David Cay Johnston, a reporter who has covered Trump’s finances for years. “What we need now are serious investigations by Congress, by the IRS and by New York state and city tax authorities.” |
BUENOS AIRES (Reuters) - Argentina’s former president Cristina Fernandez accused the government of political persecution on Wednesday after testifying in court about alleged irregularities at the central bank that occurred under her watch.
In a defiant speech outside the federal court in Buenos Aires, Fernandez addressed tens of thousands of supporters who showed up in the rain to cheer her on, beating drums and singing: “If they touch Cristina, we’re gonna create chaos”.
“They can call me to testify 20 more times. They can imprison me. But they will not be able to silence me,” she said.
Fernandez, who was constitutionally barred last year from running for a third consecutive presidential term, is a divisive figure, revered by many for generous welfare programs and reviled by others for economic policies such as nationalizing businesses and currency controls.
She was called to testify on Wednesday about charges against the central bank for selling U.S. dollar futures at below-market rates during her presidency, costing the government billions of dollars.
The 63-year-old former president said the bank’s actions were legitimate and the case against her was an “abuse of judicial power.”
Fernandez attacked President Mauricio Macri, who has implemented a string of unpopular austerity policies since taking power last December, such as a steep currency devaluation and cuts in gas, power and transportation subsidies.
“I never saw so many calamities in 120 days,” she said in an impassioned speech broadcast live on Argentine television.
Macri says these measures are necessary to reduce the ballooning fiscal deficit and attract the investment needed to reboot Latin America’s third-largest economy.
One Fernandez supporter at the rally, Gustavo Sanchez, a teacher from the northern Buenos Aires suburb of Tigre, said her presidency marked some of Argentina’s best years and he lamented the change in government.
“Students come to school poorly fed because their parents can’t afford good food with the cost of water and gas rising,” he said. Some Fernandez backers traveled hours by bus to reach the rally, tangling downtown Buenos Aires traffic.
In a separate case, last weekend a prosecutor accused Fernandez of money laundering. Under Argentine law, a judge still needs to decide whether to accept the charge and open an investigation.
Opposition politicians including Fernandez accuse the government of pursuing charges against her to distract Argentines from the difficult economic situation and from Macri’s links with offshore companies revealed by the “Panama Papers” leak.
Macri campaigned partly on a promise to root out endemic corruption in Argentina and has vowed to provide investigators looking into his finances with whatever information is necessary.
<!--
-
- $Id$
-
- This file is part of the OpenLink Software Virtuoso Open-Source (VOS)
- project.
-
- Copyright (C) 1998-2020 OpenLink Software
-
- This project is free software; you can redistribute it and/or modify it
- under the terms of the GNU General Public License as published by the
- Free Software Foundation; only version 2 of the License, dated June 1991.
-
- This program is distributed in the hope that it will be useful, but
- WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- General Public License for more details.
-
- You should have received a copy of the GNU General Public License along
- with this program; if not, write to the Free Software Foundation, Inc.,
- 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
-
-->
<v:page name="blog-home-page"
xmlns:vm="http://www.openlinksw.com/vspx/ods/"
xmlns:v="http://www.openlinksw.com/vspx/"
style="index.xsl"
doctype="-//W3C//DTD XHTML 1.0 Transitional//EN"
doctype-system="http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"
fast-render="1">
<vm:page>
<vm:header>
<vm:title>Application Settings</vm:title>
</vm:header>
<vm:pagewrapper>
<vm:rawheader caption="Application Settings"/>
<vm:navigation-new on="settings"/>
<vm:subnavigation-new on="site"/>
<vm:body>
<vm:login redirect="index.vspx"/>
<table border="0" width="100%" height="100%" class="settings">
<tr>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl1" value="Applications Management" url="--sprintf ('services.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Change your applications.
</div>
</td>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl2" value="Application Access Points" url="--sprintf ('vhost.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Add or remove URLs for accessing your applications.
</div>
</td>
</tr>
<tr>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl5" value="Log and Statistics" url="--sprintf('stat.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
View application activity and statistics.
</div>
</td>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl6" value="Content Tagging Settings" url="--sprintf('tags.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Define rules for tagging the application contents.
</div>
</td>
</tr>
<tr>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl4" value="Application Administration" url="--sprintf ('admin.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Create a new application instance, join an existing one, or invite someone to join.
</div>
</td>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl7" value="Home Page Template Selection" url="--sprintf ('user_template.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Edit the current template or pick a new one.
</div>
</td>
</tr>
<tr>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl10" value="Application Notifications" url="--sprintf ('inst_ping.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Application Notifications
</div>
</td>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl8" value="Edit Profile" url="--sprintf ('uiedit.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Edit your personal data.
</div>
</td>
</tr>
<tr>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl11" value="Application Notification Log" url="--sprintf ('ping_log.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Application Notification Log
</div>
</td>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl14" value="OAuth keys" url="--sprintf ('oauth_apps.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Set up application OAuth keys
</div>
</td>
</tr>
<tr>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl6" value="Content Hyperlinking" url="--sprintf('url_rule.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Define rules for automatic hyperlinking of the application content.
</div>
</td>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl14" value="Semantic Pingback" url="--sprintf ('semping_app.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Semantic Pingback Setup
</div>
</td>
</tr>
<tr>
<td>
<?vsp
if (wa_user_is_dba (self.u_name, self.u_group))
{
?>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl3" value="Site Security" url="--sprintf ('security.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Freeze or unfreeze an application, or act on it as a user.
</div>
<?vsp
}
?>
</td>
<td>
<img src="images/icons/go_16.png" border="0" alt="" />
<v:url name="appl7" value="Validation fields" url="--sprintf('uiedit_validation.vspx?l=%s', self.topmenu_level)" render-only="1" />
<div>
Define validation fields by imports.
</div>
</td>
</tr>
</table>
</vm:body>
</vm:pagewrapper>
</vm:page>
</v:page>
|
One of the mixed blessings of our modern society is an ever-increasing life expectancy.
For those of us with the advantages of good health and adequate income, those extra years are a gift. But for those who face a long period of disability and poverty, old age can be more daunting than rewarding, not only for the elderly but for their families as well.
For the growing population of elderly Latinos, of whom Mexican Americans are the largest subgroup, the future is particularly uncertain. That population is projected to increase dramatically in the years to come, rising from the current 1.3 million to 14.7 million by 2050. The number of these individuals who are 85 and older — the so-called "frail elderly" or "oldest-old" — will also continue to grow. Interestingly, despite its socioeconomic disadvantage, the Latino population as a whole actually has a higher life expectancy than its non-Latino counterparts, a phenomenon known as the "Latino paradox." But this longevity has trade-offs, as Latinos — Mexican Americans in particular — tend to spend a larger number of years with chronic health problems than their non-Latino counterparts. Experts are still trying to solve the puzzle, but the evidence suggests a strong link between poverty, lack of education, and the loss of the immigrant advantage through selective health risk behaviors, such as smoking and fast-food diets.
A longer period of incapacitation means a greater need for assistance, and therein lies a problem that has both economic and cultural aspects.
On the economic side, many Latinos spend their working lives in low-paying jobs that preclude saving for retirement. Even those who are eligible for Social Security often receive low benefits and rely heavily on publicly funded programs such as Medicaid that are at risk of continuing cutbacks and restructuring. The tight economy also affects younger Latinos, with the escalating cost of living often making it difficult to support themselves and their children, much less their aging parents. The high unemployment rate for Latinos — 11.3 percent in March 2011 — further exacerbates the problem.
On the cultural side of the equation, although Latinos as a group are strongly tied to their families and place high value on intergenerational bonds, both economic realities and societal pressures are taking their toll.
The time-honored traditions and expectations of caring for older parents at home as opposed to placing them in a nursing home are at a crossroads.
Who, then, will care for the burgeoning population of elderly Latinos, particularly that large number with serious disabilities? As a sociologist who has spent many years studying the health and long-term care needs of this vulnerable population, I believe that the answer to that question depends on how a number of other important questions are addressed.
• Over the 15 years since our first study of the living arrangements and long-term care expectations of older Mexican Americans, the evidence that cultural tradition dictates the reliance on family for long-term care has not significantly altered. However, a larger proportion of the same group of Mexican-born and U.S.-born older adults now also cite economic and health constraints as reasons for living with family.
• Research has also shown that among those of Mexican origin, individuals who migrated to the United States in mature adulthood have a higher life expectancy than individuals who migrated in childhood or midlife.
• As noted earlier, however, living longer does not necessarily mean living well. Balancing quality-of-life issues — including cultural preferences — with harsh economic realities will become increasingly difficult, both for families and for those who fund and carry out public assistance programs.
Returning then, to the question of who will take care of our Latino elders when they are no longer able to take care of themselves, I suggest that each of us can assume some share of the responsibility.
One key to effective planning for the future is obtaining sound information, whether that be family members gathering facts about available assistance or policy researchers gathering data about demographic trends or public officials gathering forecasts on the economy. Although it is difficult to predict how the current reforms in the health care and public assistance systems will affect various subgroups of the U.S. population, including the growing Mexican-American contingent, those reforms are not static. Changes will continue to be made in response to new circumstances and new information.
With 30 years of industry experience under his belt, one Texas-based insurance executive decided it was time to open his own operation to serve the insurance needs of the growing Hispanic population. While Dallas-based Hispanic Affiliates General Agency (HAGA) recently opened its doors, the project has been years in the making, explained Tony Gonzalez.
Having served in executive roles with insurers that utilize the independent agency distribution channel of the property/casualty insurance business, such as Safeco and Republic Group, Gonzalez’s expertise extends from marketing to sales to commercial underwriting and more. Now he’s adding company owner to his list of accomplishments.
The 2010 census shows that the Hispanic population in the United States grew by 43 percent between 2000 and 2010. Gonzalez believes this growing market is ripe for independent agents to explore as currently most Hispanics fill their insurance needs via the captive side of the insurance industry. To serve that need, Gonzalez is actively recruiting Hispanic retail agents and aims to provide them with products and services, and access to admitted insurance markets, with which they in turn can serve their Hispanic customers.
While the 2010 census had some bearing on Gonzalez’s decision to launch HAGA this year, he said the idea for the business and plans for its implementation have been on the drawing board for some time.
Insurance companies that serve the independent agency system don’t have a developed channel through which to reach the Hispanic community, Gonzalez said. And while many companies would like to grow their market share with Hispanic consumers, in many cases they are constrained by the lack of an agency base through which to reach those customers.
During his tenure at Safeco and Republic Group both companies had charged Gonzalez with developing initiatives to reach out to the Hispanic community. As a result of his research, Gonzalez realized the time was right to create a vehicle through which independent agency companies could access a new and growing market.
“The numbers were dramatic,” Gonzalez said. “What I found out was that most of the standard or preferred type business, middle and upper-middle class Hispanics, were with the captive channel.”
In his recruiting efforts, Gonzalez is focused on Hispanic independent insurance agents. There’s a good reason for that, he explained.
“They are especially qualified to deal with that [the Hispanic] market,” he said. “They know … to a great degree I would say, what that market looks like. What they need to know about specifically, the target market that we’re going after – middle and upper-middle class – is who has it, why they have that class of business, and what I’m providing them in order to go head-to-head against the captive agency to pull those customers into the independent agency system.”
Gonzalez said the 2010 census has caught the attention of insurers, in terms of the growth of the Hispanic community. “That bodes well for me,” he said. “But companies need to believe that this is a good model for distributing their products. If I were still on that side, that’s what I would have to answer. They need to believe that it is a good model for distributing their products.”
While some companies bought into the idea quickly, others are taking a wait-and-see approach. “The big question for them is, and I’ve heard them say this, where is this business? What does it look like? And, can an MGA get access to it? So, there’s still work to be done on that end with regards to the insurance companies themselves. Most have bought it. Most get it. Some are still waiting,” he said.
Gonzalez said he is passionate about what he’s doing and is concentrating now on getting his agency force in place. To that end, he’s logged many hundreds of miles of “windshield” time in order to meet with agents face to face. The next step, he said, is to create awareness among Hispanic consumers of the choice, and quality products and services, the independent insurance agency system has to offer.
It used to be that advertising to the Hispanic segment of the population was simple.
When a client wanted to advertise to Hispanics, it hired (for the most part) a Hispanic advertising agency. The agency produced spots that were done in Spanish and broadcast in one of the few Spanish media outlets.
Hispanics are becoming the largest minority group in 191 metropolitan areas, a fact that has the potential to shift the balance of power in the House of Representatives through ethnic voting in states going through the redistricting process.
No wonder we are becoming so sought-after. But are we really being courted the right way?
U.S. Latinos (and any immigrant for that matter) have adapted to the new habits of language and ways of doing things very rapidly in order to be competitive. However, we hang onto and cherish our traditions and value systems. We take pride in them and instill them in our progeny.
We are also not a race, but rather an ethnic group made up of many countries of origin, each with its own regional segmentation shaped by education, economic status, political affiliation, and religion, among other factors.
What is one of the common denominators among this vast group? Language! Spanish is not only related to our culture but it is also a way to enrich it. It just so happens that a very strong part of how Hispanics feel about our culture is attached to language.
Whenever Latinos get together, we speak in Spanish because it makes us feel comfortable and that we have a sense of connection; it happens naturally. In a mixed group of Latinos from different countries, even idiosyncrasies that may be regionally specific are easily understood.
There is much debate going on recently about whether the Spanish language is still as relevant as it was when most Latinos didn't speak English. And by the same token, whether the traditional Hispanic advertising agencies are outdated.
An Associated Press-Univision poll relating to Hispanics and media consumption shows that U.S. Hispanics, including English-dominant speakers, turn to Spanish-language media on a daily basis.
90% of Spanish-dominant Hispanics watch some Spanish-language TV.
75% listen to Spanish-language radio each day.
Among English-dominant Hispanics, nearly 4 in 10 said they consume either Spanish-language television or radio.
Why do Latinos, even English-dominant, continue to consume Spanish media? Aren't many general market agencies now "speaking" Spanish? Why is it so complicated now to advertise to Hispanics?
It is crucial to understand that, in order to tap into the tremendous consumer and political power Hispanics represent (just like with mainstream Americans), we cannot be grouped together into a homogenous group.
The richness of knowledge that results from the merging of more than one culture, and the resourcefulness required to adapt to newly acquired customs, position multicultural individuals and organizations at an advantage over monocultural ones.
If anything, Hispanic agencies -- by their very nature of having to listen to and understand the needs of a brand in English, find the commonalities with their own personal experiences in Spanish, develop strategies that are relevant to a diverse target and create a message that conveys the original concept without losing all its complexity -- should be seen as more able to "speak" to a diverse audience in any language. This is something that a mere translation cannot achieve.
So, are the Hispanic advertising agencies doomed? Only if they don't speak English at work and Spanish when they go home.
Dell laptops designed for Spanish-language speakers are selling briskly for BrandsMart USA at the company’s South Florida stores, which makes plenty of sense given the area’s high numbers of Hispanic residents.
The Dell Inspiron M5030 sells for $498.88 at BrandsMart USA stores in West Palm Beach, Deerfield Beach, Sunrise and Miami and features a keyboard with keys labeled in Spanish and special characters like the ñ. In addition, it operates on Windows 7 – the Spanish version used by manufacturers in Latin American countries. And of course, the manual is in Spanish.
“This is perfect for people who speak Spanish only or whose first language is Spanish,” said Angus Bryan, vice president of merchandising for BrandsMart USA. “Until now, if they wanted to use a computer, they had to get used to working in English. This gives them something that they are really familiar and comfortable with out of the box.”
And apparently the consumers are happy about it. Bryan could not share specific sales figures, but said the Dell laptop “is the second-highest selling Dell laptop in our stores and is selling at a one-to-one [ratio] with the English version.” He adds: “That is pretty strong.”
Dell is the first U.S.-based computer manufacturer to offer a Spanish-language laptop, and the company chose BrandsMart USA to launch the device. Call that a win-win for South Florida.
Phoenix – A federal judge ruled that the office of Maricopa County sheriff Joe Arpaio violated the constitutional rights of an Hispanic father and son who were arrested during an immigration raid in this Arizona metropolis.
"This is a very important decision. The judge said that this detention and the arrest violated the 4th Amendment of the Constitution of the United States," Alessandra Soler Meetze, executive director of the American Civil Liberties Union of Arizona, told Efe on Tuesday.
In February 2009, Mexican immigrant Julian Mora, a legal resident of the United States for more than 30 years, was detained along with his U.S.-born son, Julio Mora, by MCSO deputies.
The Moras were driving in their automobile near Handyman Maintenance Inc., a landscaping firm, where an immigration raid was being conducted at that time.
Soler Meetze said that U.S. District Judge David Campbell ruled on Monday that the Moras were detained for no reason and without probable cause, which is prohibited under the 4th Amendment.
Father and son were taken to the site of the raid and detained for more than three hours.
"This decision also shows that Maricopa County has to take responsibility for the actions of Sheriff Arpaio," said Soler Meetze.
In August 2009, the ACLU filed a lawsuit against the MCSO for violating the civil rights of the Moras using the argument that the two men were detained solely based on the color of their skin.
Soler Meetze said the federal judge's decision is just the first part of the lawsuit: Arpaio is expected to go on trial at the end of this year, with prosecutors trying to prove to a jury that the sheriff violated the civil and constitutional rights of the Moras through racial profiling practices.
The trial will also determine the level of responsibility of Arpaio, his deputies and Maricopa County in this case, as well as the economic compensation to which the Moras may be entitled.
The head of the ACLU of Arizona said she feels that the judge's decision is a clear message to Arpaio that he cannot keep ignoring the law or people's constitutional rights.
"I think that the sheriff will have to change the tactics he uses to carry out his raids," said Soler Meetze.
Arpaio is the only official in Arizona who continuously conducts raids because of alleged violations of the state law sanctioning employers who knowingly hire undocumented immigrants.
During the raids, dozens of workers have been arrested on charges of identity theft for working using Social Security Numbers that were never issued to them.
Currently, Arpaio is also under investigation by the Justice Department on accusations of making racial profiling arrests during his operations.
In response to rapid growth in Latin markets, HootSuite announces a full localization for Spanish-speaking users. Now, businesses, organizations and marketing professionals from 20+ countries can use the popular social media dashboard in their first language, too.
HootSuite, the popular social media dashboard for professional social network management, is now entirely localized in Spanish. The company, with over 1.5 million users worldwide, is pleased to offer this option as one of the many resources available to its growing Spanish-speaking community.
Spanish – Essential to the HootSuite Road-Map
HootSuite is now fully translated into Spanish, but the outreach began over six months ago, starting with a Spanish support profile on Twitter, @HootSuite_ES, then a dedicated section in the Help Desk, followed by Spanish-translated Info Sheets and Case Studies to provide industry knowledge and information. Additionally, Spanish versions of the mobile applications were released in Autumn 2010 for iPhone/iPad, BlackBerry, and Android to positive reaction.
"After noticing the fast growth and keen interest coming from dozens of Spanish-speaking countries, we're very pleased to announce a full Spanish localization for the HootSuite social media dashboard. This translation is a result of significant work by passionate HootSuite volunteers who can take pride in knowing their contributions are helping social media fans around the world. We are also grateful to see such positive reaction to the Spanish-language case studies and customer support channels over the last six months and will continue to share user success stories from Latin countries."
-- Ryan Holmes, HootSuite CEO
Fast Growth in Latin Markets
Spanish-speaking HootSuite users are the fastest growing international HootSuite community -- increasing by over 100% in recent months. These statistics are illustrated in an infographic which shows the rapid growth in Spanish-speaking countries as well as social media usage patterns.
Mexico leads as the Spanish-speaking country which sends the most Tweets, followed by Spain.
The number of HootSuite users in Spanish-speaking countries grew by 93% during the last six months, compared with 68% overall.
The fastest-growing Spanish-speaking countries during the last six months are El Salvador, Venezuela, Honduras, Guatemala and Puerto Rico – all with growth of over 100%.
The most popular social network in the Spanish-speaking markets is Twitter with 54% of total added accounts, followed by Facebook (24% profiles and 10% pages) in second place.
HootSuite Translation Project
The Spanish translation of the HootSuite Web application is possible thanks to the crowd-sourcing and collaborative work of multiple translators and coordinators who volunteered in the Translation Project, a HootSuite initiative to provide the dashboard in multiple languages through community participation at: http://translate.hootsuite.com
Florida’s Hispanic students are earning more college degrees than their counterparts in other states, but President Obama and others note there’s room for improvement.
April 26, 2011
By Michael Vasquez
President Obama’s goal of the United States achieving the world’s highest proportion of college graduates will be significantly boosted — or dragged down — by the fate of Hispanic students, according to a pair of education reports released this week.
One of those reports comes from the White House itself. The Obama administration on Wednesday will release “Winning the future: Improving education for the Latino community.” Preliminary excerpts from the report emphasize that Hispanics are by far the largest minority in U.S. public schools — comprising more than 1 in 5 students in pre-kindergarten through 12th grade. Hispanics are also projected to account for the majority of the nation’s population growth between 2005 and 2050.
But Hispanic students for years have graduated college at lower rates than the population as a whole, making America’s progress in education impossible if Hispanics continue to lag behind, the White House argues.
“It’s good news for Florida compared to the rest of the country,” said Sarita Brown, president of Excelencia in Education. “But it’s also a story that there’s lots more work to be done.”
Florida’s sizable Hispanic middle class is a key factor in the state’s relatively high rate of Hispanic college graduates. Research has shown higher income levels, as well as college-educated parents, significantly boost a child’s chances of completing his or her degree.
Excelencia’s report, which is the first of a planned series of state-by-state fact sheets, did not break down Hispanics by country of origin.
In the White House’s report, Hispanic college achievement is called “integral” to the administration’s overriding educational goals.
“This is an American issue, not just a Latino issue,” said Juan Sepúlveda, executive director of the White House Initiative on Educational Excellence for Hispanics. Sepúlveda singled out pre-kindergarten education as one area with much room for improvement — Hispanics, he said, are the country’s only ethnic group with less than half their children enrolled in pre-K classes.
Not so long ago, recommending digital marketing to target Latinos was a very uncomfortable conversation to have with a client. Fortunately, those days seem to be over. Supported by strong research, clients are more familiar with the growing importance of digital among Hispanics (social media, mobile, etc.). As part of that (now easier) conversation, Hispanics and social media is becoming one of the hottest topics.
And clients are right. In just one year, as the total Latino online audience grew 16 percent, the number of Latinos on Facebook grew 2.8 times. In March 2011, the number of Hispanic Facebook users reached almost 22 million. That is, 70.2 percent of all Latinos online are active Facebook users, versus 29.1 percent one year before.
We are seeing not only reach but true engagement. Latinos spend more time on Facebook: 52 percent of Hispanics use Facebook at least weekly, spending an average of 29 minutes on social networking versus White Americans who spend 19 minutes.
There's no doubt that Latinos are into Facebook. My question is, what's the real opportunity for marketers? Is this growing penetration a powerful marketing tool? Do Latinos engage with brands as actively as they do with other people?
Gap or Opportunity?
According to a recent study published by eMarketer, PR professionals believe that social media is a key tool for reaching Latinos. However, only 45 percent of respondents are actually using social media to reach Hispanics, compared to 92 percent who use it to reach the general population. This discrepancy means there is a huge opportunity for marketers to reach Hispanics via social media.
Let's take a look at the top 10 most watched TV shows both in English and Spanish. Spanish programs are getting stronger in terms of ratings. Take Reina del Sur, which topped CBS, ABC, and NBC in its time slot among adults 18-49. But when it comes to creating social relationships, the story seems to be very different.
The Latino Benchmark
One of the biggest challenges that we face when starting social media strategies for our clients is setting the right goals. In other words, what does success look like, especially considering that most of the brands are just getting started?
So I tried to build a benchmark by analyzing what the top 100 brands are doing. Let's see the key findings.
Finding 100 brands with both a general market and a Hispanic Facebook page was pretty hard.
Almost all of the brands' Hispanic pages are in Spanish, so most of the analysis had to be built comparing English versus Spanish pages.
The ratio (Spanish fans over total English fans) was disappointingly low, confirming in a way that there's a huge gap, or opportunity.
Categorizing the brands in terms of ratio performance, I could establish three segments:
Successful brands: 8 percent-plus
Good performers: 4 to 8 percent
Underperformers: less than 4 percent
Almost 80 percent of the brands belonged to the "underperformers" segment (e.g., Pepsiyosumo has only 6,901 fans).
Examples of "successful" brands are: Toyota Latino and American Airlines.
Brands' Spanish Facebook pages tend to have lower engagement than specific Hispanic pages (e.g., those of TV shows), though the latter are also in Spanish.
From a community management perspective, there's a lot of room to grow. Delays in answering questions, and sometimes no answers at all, were seen across key brands. In many cases, the dialogue with the consumer was pretty basic and anything but inspirational.
Moving the Conversation Forward
My first conclusion is that there's a huge gap between Latinos' usage of Facebook and real engagement with brands through their Spanish pages.
Second, considering that most of the Spanish initiatives are new and that many brands are just testing the waters, there's light at the other end of the tunnel. In order to move the conversation forward, marketers need to change their approach beyond language.
A Hispanic approach to social media should be based on content and engagement, rather than simply language. Once the role of the Latino Facebook page is identified, a bilingual approach has bigger potential: consumers are used to interacting in both languages, or in the language of their choice, and bilingual conversations are richer. |
Selective conversion of cellulose in corncob residue to levulinic acid in an aluminum trichloride-sodium chloride system.
Increased energy consumption and environmental concerns have driven efforts to produce chemicals from renewable biomass with high selectivity. Here, the selective conversion of cellulose in corncob residue, a process waste from the production of xylose, to levulinic acid was carried out using AlCl3 as catalyst and NaCl as promoter by a hydrothermal method at relatively low temperature. A levulinic acid yield of 46.8 mol% was obtained, and the total selectivity to levulinic acid with formic acid was beyond 90%. NaCl selectively promoted the dissolution of cellulose from corncob residue, and significantly improved the yield and selectivity to levulinic acid by inhibiting lactic acid formation in the subsequent dehydration process. Owing to the salt effect of NaCl, the obtained levulinic acid could be efficiently extracted to tetrahydrofuran from aqueous solution. The aqueous solution with AlCl3 and NaCl could be recycled 4 times. Because of the limited conversion of lignin, this process allows for the production of levulinic acid with high selectivity directly from corncob residue in a simple separation process. |
Q:
ASP.NET MVC 2 VirtualPathProvider GetFile every time for every request
I have implemented a VirtualPathProvider that reads views from the file system.
However, my problem is that the method GetFile(string virtualPath) is not executed for every request. I think this is related to caching. What I want is for the file to be read fresh on every request, because in some cases a page on the file system will be modified and users want the system to show the changes immediately.
Thanks.
A:
I found the solution myself on the internet.
Many thanks to jbeall, who replied on 07-15-2008, 11:05 AM.
http://forums.asp.net/t/1289756.aspx
In short, override the following methods:
GetCacheDependency - always return null
GetFileHash - always return a different value
After these modifications, MVC reads the file directly from the source for every request.
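For reference, here is a minimal sketch of such a provider in C#. The class name NonCachingPathProvider is hypothetical, and the poster's FileExists/GetFile logic is omitted; the two overrides below are the point:

using System;
using System.Collections;
using System.Web.Caching;
using System.Web.Hosting;

public class NonCachingPathProvider : VirtualPathProvider
{
    // Returning null tells ASP.NET there is no cache dependency,
    // so it never considers a cached copy of the view valid.
    public override CacheDependency GetCacheDependency(
        string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
    {
        return null;
    }

    // Returning a fresh value on every call makes ASP.NET treat the file
    // as always changed, forcing GetFile to run again on the next request.
    public override string GetFileHash(
        string virtualPath, IEnumerable virtualPathDependencies)
    {
        return Guid.NewGuid().ToString();
    }
}

The provider would then be registered as usual, e.g. HostingEnvironment.RegisterVirtualPathProvider(new NonCachingPathProvider()) in Application_Start. Note the trade-off: with caching disabled this way, every request hits the file system.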
|
Blue African Bowl - Medium
This beautiful medium blue woven bowl is an exquisite blend of function and style. It works perfectly as a bread basket, or a catch-all for trinkets in your home. Each bowl is hand crafted using cattail stalks and bound with recycled plastic strips.
Dimensions:
2.5 inch height x 10.5 inch diameter
Details:
Clean with a dry or slightly damp cloth
Handcrafted in Africa
*Because each basket is handwoven, there may be a slight deviation in color and/or size.
About Us
Maison Midi travels the Mediterranean in search of rare and exclusive items that will bring the Mediterranean dream to your table and home. From the Atlantic Coast of Portugal to 'Le Midi' in Southern France, the 'Maghreb' of Northwest Africa, and the coastal resorts of Italy, Greece and Turkey, Maison Midi brings this dream to the comfort of your home. |
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>tclap: Member List</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<link href="doxygen.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<!-- Generated by Doxygen 1.6.0 -->
<div class="navigation" id="top">
<div class="tabs">
<ul>
<li><a href="index.html"><span>Main Page</span></a></li>
<li><a href="namespaces.html"><span>Namespaces</span></a></li>
<li class="current"><a href="annotated.html"><span>Classes</span></a></li>
<li><a href="files.html"><span>Files</span></a></li>
</ul>
</div>
<div class="tabs">
<ul>
<li><a href="annotated.html"><span>Class List</span></a></li>
<li><a href="hierarchy.html"><span>Class Hierarchy</span></a></li>
<li><a href="functions.html"><span>Class Members</span></a></li>
</ul>
</div>
</div>
<div class="contents">
<h1>TCLAP::StdOutput Member List</h1>This is the complete list of members for <a class="el" href="classTCLAP_1_1StdOutput.html">TCLAP::StdOutput</a>, including all inherited members.<table>
<tr class="memlist"><td><a class="el" href="classTCLAP_1_1StdOutput.html#ace725aebd685c16f464d697e85e0327d">_longUsage</a>(CmdLineInterface &c, std::ostream &os) const </td><td><a class="el" href="classTCLAP_1_1StdOutput.html">TCLAP::StdOutput</a></td><td><code> [inline, protected]</code></td></tr>
<tr class="memlist"><td><a class="el" href="classTCLAP_1_1StdOutput.html#a60fa57587838d506d907f08800f2631c">_shortUsage</a>(CmdLineInterface &c, std::ostream &os) const </td><td><a class="el" href="classTCLAP_1_1StdOutput.html">TCLAP::StdOutput</a></td><td><code> [inline, protected]</code></td></tr>
<tr class="memlist"><td><a class="el" href="classTCLAP_1_1StdOutput.html#a9afc267e012c3ac42c8b1afe01f98556">failure</a>(CmdLineInterface &c, ArgException &e)</td><td><a class="el" href="classTCLAP_1_1StdOutput.html">TCLAP::StdOutput</a></td><td><code> [inline, virtual]</code></td></tr>
<tr class="memlist"><td><a class="el" href="classTCLAP_1_1StdOutput.html#a38661be8895e07c442c2c3138b7444a2">spacePrint</a>(std::ostream &os, const std::string &s, int maxWidth, int indentSpaces, int secondLineOffset) const </td><td><a class="el" href="classTCLAP_1_1StdOutput.html">TCLAP::StdOutput</a></td><td><code> [inline, protected]</code></td></tr>
<tr class="memlist"><td><a class="el" href="classTCLAP_1_1StdOutput.html#aeb10eb400e0ee45f2cde689bef606b49">usage</a>(CmdLineInterface &c)</td><td><a class="el" href="classTCLAP_1_1StdOutput.html">TCLAP::StdOutput</a></td><td><code> [inline, virtual]</code></td></tr>
<tr class="memlist"><td><a class="el" href="classTCLAP_1_1StdOutput.html#a768111a59af4753ac6e5ace3ec99482e">version</a>(CmdLineInterface &c)</td><td><a class="el" href="classTCLAP_1_1StdOutput.html">TCLAP::StdOutput</a></td><td><code> [inline, virtual]</code></td></tr>
<tr class="memlist"><td><a class="el" href="classTCLAP_1_1CmdLineOutput.html#afdf4435a2619076d9798a0a950ed405b">~CmdLineOutput</a>()</td><td><a class="el" href="classTCLAP_1_1CmdLineOutput.html">TCLAP::CmdLineOutput</a></td><td><code> [inline, virtual]</code></td></tr>
</table></div>
<hr size="1"/><address style="text-align: right;"><small>Generated on Sat Apr 16 15:34:25 2011 for tclap by
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.6.0 </small></address>
</body>
</html>
|
Canadian High Arctic Research Station
On June 1, 2015, the Government of Canada established Polar Knowledge Canada, a federal research organization that focuses on advancing Canada’s knowledge of the Arctic and strengthening Canadian leadership in polar science and technology. The new organization comprises a pan-northern science and technology research program and a polar knowledge management and mobilization function. The Canadian High Arctic Research Station (CHARS), a world-class research facility in Cambridge Bay, Nunavut, will be operational in 2017. The Station will provide a suite of services for northern S&T, including technology development and knowledge-sharing centres and laboratories that will complement smaller facilities.
Research History
In May 2014, the phase-in of the CHARS Science and Technology (S&T) Program began with the first field season near Cambridge Bay, NU. The CHARS pan-northern S&T program launched a Call for Proposals (CfP) on December 5, 2015. This CfP initiated elements of the CHARS pan-northern S&T Program by requesting proposals that will strengthen monitoring in northern Canada and fill research gaps in northern regions of significant resource development. Funding will be allocated starting in June 2015. |
require 'xiki/core/hide'
require 'xiki/core/control_lock'
require 'xiki/core/line'
require 'xiki/core/text_util'
module Xiki
class Search
# Used by search+just+Plus and search+just+Minus
SPECIAL_ORDER = "X!='\"[]{}<>?-+ +*@#\\/!$X:|"
# These override it
SPECIAL_ORDER_MINUS = {
" "=>"-",
"+"=>"-",
"-"=>"?",
"?"=>"-",
}
SPECIAL_ORDER_PLUS = {
" "=>"+",
"-"=>"+",
"+"=>" ",
"?"=>"+",
"@"=>"=",
}
@@case_options = nil
# Deprecated? Has to do with moving cursor to line. But why can't we just pass it in? Try doing that.
@@outline_goto_once = nil
def self.outline_goto_once; @@outline_goto_once; end
def self.outline_goto_once= txt; @@outline_goto_once = txt; end
@@log = File.expand_path("~/.xiki/misc/logs/search_log.xiki")
MENU = '
- .history/
- .log/
- .launched/
- docs/
| > Summary
| You start a search by typing Control-s. Then type the search string.
| Here are some interesting keys to type while searching, to do things
| with the match.
|
| > What do we mean by "search_bookmark" etc.?
| With all xiki keyboard shortcuts, you "type the acronym".
| By the key shortcut "search_bookmark" we mean typing Control-s and then
| Control-b.
|
| Of course, typing Control-s to search lets you type characters
| to search for, so in some cases typing a search string in between makes
| sense. So, search_clipboard actually means you would type Control-s then
| some characters to search for then Control-c to copy to the clipboard.
|
| > Examples
- examples/
| search_copy: Copy found to clipboard
| search_bookmark: Search text of files in a dir
| search_all: Show all previous searches
| search_value: Insert found where search began
| search_delete: Delete found
| search_diffs (without searching): Search in diffs
| search_todo: Search in :t bookmark
| search_files: Search in :n bookmark
| search_paths: Search history of menus
- miscellaneous/
| search_search: Re-do the last search
| search_word: Suck the next word in
| search_yank: Suck the rest of the line in
| search_usurp: Suck the next expression in
|
| For more details about Xiki keyboard shortcuts, see:
+ =keys/docs/
|
- see/
<= next/
'
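# Lazily builds the list of case-conversion options (upper, lower, camel, snake, whitespace) offered via View.cases.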
def self.case_options
return @@case_options if @@case_options
@@case_options = [] # Set to empty if not set yet
self.add_case_option 'upper', proc {|txt| txt.upcase}
self.add_case_option 'lower', proc {|txt| txt.downcase}
self.add_case_option 'camel', proc {|txt| TextUtil.camel_case(txt)}
self.add_case_option 'snake', proc {|txt| TextUtil.snake_case(txt)}
self.add_case_option 'whitespace', proc {|txt| TextUtil.whitespace_case(txt)}
@@case_options
end
# Make another option show up for View.cases
def self.add_case_option name, the_proc
# Delete if there already
@@case_options.delete_if{|i| i.first == name}
@@case_options << [name, the_proc]
end
def self.insert_at_spot
match = self.stop
Hide.show
Location.to_spot
$el.insert match
end
def self.insert_tree_at_spot
self.stop
txt = FileTree.snippet # Grab symbol
Hide.show
Location.go :_0
$el.insert txt
end
def self.insert_at_search_start options={}
was_reverse = self.was_reverse
match = self.stop
# Nothing searched for yet, so do search+value (search for what's in the clipboard)...
if match.nil?
return self.isearch Clipboard[0]
end
# Pull match back to search start...
self.to_start # Go back to start
match = "#{options[:prepend]}#{match}"
View.insert match #, :dont_move=>1
View.message ""
end
def self.isearch_have_wikipedia
term = self.stop
Wikipedia.wp term
end
def self.isearch_have_within
match = self.stop
self.to_start # Go back to start
$el.insert match[/^.(.*).$/, 1]
end
def self.move_to_search_start match
was_reverse = self.was_reverse
View.delete(Search.left, Search.right)
deleted = View.cursor
self.to_start # Go back to start
Move.backward match.length if was_reverse # If reverse, move back width of thing deleted
View.insert match
# Save spot where it was deleted (must do after modification, for bookmark to work)
View.cursor = deleted
Move.forward(match.length) unless was_reverse
Location.as_spot('killed')
self.to_start # Go back to start
Move.forward(match.length) unless was_reverse
end
def self.insert_var_at_search_start
match = self.stop
self.to_start # Go back to start
match.strip!
$el.insert "\#{#{match}}"
end
def self.insert_quote_at_search_start
match = self.stop
self.to_start
$el.insert "'#{match}'"
end
def self.isearch_select_inner
self.stop
$el.set_mark self.left + 1
$el.goto_char self.right - 1
Effects.blink :what=>:region
end
def self.isearch_diffs
was_reverse = self.was_reverse
match = self.stop
# Nothing searched for yet, so search difflog...
if match.nil?
Location.as_spot
DiffLog.open
View.to_bottom
Search.isearch nil, :reverse=>true
return
end
# Match found, so behave like search+dir...
# Search for filenames in dir
dir = Keys.bookmark_as_path :prompt=>"Enter bookmark to look in (or space for recently edited): "
# "dir/file.txt", so just open path
if match =~ /\//
View.open "#{dir}#{match}"
return
end
return View.message("Use space!", :beep=>1) if dir == :comma
return self.open_file_and_method(match) if dir == :space # If key is comma, treat as last edited
TextUtil.snake_case! match if match =~ /[a-z][A-Z]/ # If camel case, file is probably snake
FileTree.grep_with_hashes dir, match, '**' # Open buffer and search
end
# Used by search+add and search+just+2 etc.
def self.isearch_clear
match = self.stop
if match.nil? # Nothing searched for yet, so go to spot of last delete
Location.to_spot('killed')
else
View.delete Search.left, Search.right
Location.as_spot('killed')
end
end
def self.isearch_go
match = self.stop
Grab.go_key
end
def self.enter txt=nil
match = self.stop
txt ||= Clipboard[0]
if match.nil? # Nothing searched for yet, so show search history
return Launcher.open "searching/history/", :bar_is_fine=>1
end
View.delete(Search.left, Search.right)
View.insert txt, :dont_move=>1
end
def self.copy_and_comment
self.stop
line = Line.value(1, :include_linebreak=>true).sub("\n", "")
Code.comment Line.left, Line.right
self.to_start # Go back to start
$el.insert "#{line}"
Line.to_beginning
end
def self.isearch_just_comment
self.stop
Code.comment Line.left, Line.right
Line.to_beginning
end
def self.just_increment options={}
match = self.stop
View.delete(Search.left, Search.right)
orig = View.cursor
position = SPECIAL_ORDER.index match
decrement = options[:decrement]
# If one of certain chars, use custom order
result =
if ! decrement && found = SPECIAL_ORDER_PLUS[match]
found
elsif decrement && found = SPECIAL_ORDER_MINUS[match]
found
elsif position # Change '[' to ']', etc
increment_or_decrement = decrement ? -1 : 1
SPECIAL_ORDER[position+increment_or_decrement].chr
else
if decrement
match =~ /[a-z]/i ?
(match[0] - 1) :
(match.to_i - 1).to_s
else
match.next
end
end
View.insert(result)
View.cursor = orig
end
def self.jump_to_difflog
match = self.stop
self.to_start
Location.as_spot
DiffLog.open
View.to_bottom
Search.isearch match, :reverse=>true
end
def self.just_edits
match = self.stop
DiffLog.open View.file
View.to_bottom
Search.isearch match, :reverse=>true
end
def self.copy match
Clipboard[0] = match
$el.x_select_text(match) if Environment.gui_emacs
end
def self.go_to_end
match = self.stop
if match.nil? # If nothing searched for yet
Location.as_spot
Search.isearch_restart "%links", :restart=>true
return
end
$el.goto_char self.right
end
# Clears the isearch, allowing for inserting, or whatever else
def self.stop options={}
txt = self.match
# TODO > restore or fix!
# Maybe use (isearch-abort) instead?, and then move cursor to line?
# might result in weird scrolling
# Or, try one of these functions?
# isearch-exit
# isearch-done
# isearch-complete1
# isearch-complete
# isearch-cancel
# Make it do special clear if nothing found (to avoid weird isearch error)
if txt.nil?
if $el.elvar.isearch_success # || Search.left == View.bottom
$el.isearch_resume "[^`]", true, nil, true, "", true
View.message ""
else
txt = :not_found
end
end
$el.elvar.isearch_mode = nil
$el.isearch_clean_overlays
$el.isearch_done
# Why doing this? > self.last_search
txt == :not_found ? self.last_search : txt
end
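# Returns the text matched by the current isearch, or nil when the match bounds are unset or the match is empty.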
def self.match left=nil, right=nil
left ||= self.left
right ||= self.right
return nil if left == 0 # || self.nil?
result = $el.buffer_substring(left, right)
return nil if result == ""
result
end
def self.to_start
View.to($el.elvar.isearch_opoint)
end
# Do query replace depending on what they type
# def self.query_replace s1=nil, s2=nil
def self.query_replace s1=nil, s2=nil
# If no params, prompt for them...
if ! s1
s1 = Keys.input :prompt=>"Replace from: "
s2 = Keys.input :prompt=>"Replace to: "
else
s1 = self.quote_elisp_regex s1
end
$el.query_replace_regexp s1, s2
nil
end
def self.isearch_query_replace # after=nil
match = self.stop
# No match, so prompt for before...
if ! match
no_match_existed = true
match = Keys.input(:prompt=>"Exchange occurrences of what?: ")
end
was_upper = match =~ /[A-Z]/
match.downcase!
left, right = Search.left, Search.right
before = $el.regexp_quote(match) # Always start with isearch match
# Prompt for after...
initial_input = ''
after = Keys.input(:prompt=>"Exchange occurrences of '#{before}' with: ", :initial_input=>initial_input)
@@query_from, @@query_to = before, after
# We're replacing a match right here, so do it before doing query replace...
if ! no_match_existed
View.delete left, right
View.insert was_upper ?
TextUtil.title_case(after) :
after
end
$el.query_replace_regexp before, after
nil
end
def self.grep
cm_grep("./", read_from_minibuffer("Grep for pattern: "))
end
def self.tree_grep # prefix=nil
prefix = Keys.isearch_prefix
path = Keys.bookmark_as_path :include_file=>1 # Get path (from bookmark)
return if ! path
# If C-u, just jump to bookmark and search from the top
if prefix == :u
View.open path
View.to_highest
Search.isearch
return
end
if path == :space # If space, search in buffers
self.find_in_buffers Keys.input(:prompt=>"Search all open files for: ")
return
end
input = Keys.input(:prompt=>"Text to search for: ")
input.gsub! "#", "\\#"
FileTree.grep_with_hashes path, input
end
# Incremental search between cursor and end of paragraph (kills unmatching lines)
def self.kill_search(left, right)
pattern = ""
lines = $el.buffer_substring(left, right).split "\n"
ch = $el.char_to_string $el.read_char
while ch =~ /[~#-'>a-zA-Z0-9!*\_'.~#-\/]/
if ch == "\t"
pattern = ""
else
pattern = pattern + $el.regexp_quote(ch)
# Filter text and put back
View.delete left, right
# Replace out lines that don't match
lines = lines.grep(/#{pattern}/i)
# Put back into buffer
$el.insert lines.join("\n") + "\n"
right = $el.point
# Go to first file
$el.goto_char left
end
$el.message "Delete lines not matching: %s", pattern
ch = $el.char_to_string $el.read_char
end
# Run whatever they typed last as a command (it was probably C-m or C-a, etc.)
case ch
when "\C-m" # If it was C-m
# Do nothing
else
$el.command_execute ch
end
end
def self.search_in_bookmark match
path = Keys.bookmark_as_path :include_file=>1
if path == :slash # If space, go back to root and search
# Make match be orange
Overlay.face(:ls_search, :left=>self.left, :right=>self.right)
self.search_at_root match
return
end
View.to_after_bar if View.in_bar?
if path == :space # If space, search in buffers
self.find_in_buffers match
return
end
match.gsub!(/([#()*+?^$\[\]\/|.])/, "\\\\\\1")
# Search in bookmark
FileTree.grep_with_hashes path, match
end
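# During an isearch, deletes the last char typed into the search string; when no search is active, beeps and falls back to the last launched menu.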
def self.subtract
match = self.match
if match # If currently searching
return $el.isearch_del_char
end
View.message "Unused!", :beep=>1
self.stop
self.search_last_launched
end
def self.search_last_launched
match = self.stop
Launcher.open Launcher.last_launched_menu
end
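# Expands the log of launched menus, most recent first; an optional arg narrows it to hash searches ("#"), colon searches (":"), or paths under a bookmark.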
def self.launched *arg
arg = arg.any? ? arg.join("/") : nil
txt = File.read @@log
txt = txt.sub(/\A- /, '').split(/^- /).reverse.uniq
if arg && arg == "#"
txt = txt.select{|o| o =~ /^ - ##/}
elsif arg && arg == ":"
txt = txt.select{|o| o =~ /^ - [^#].*: /}
elsif arg
path = Bookmarks[arg]
if File.file? path # File
regex = /^#{Regexp.escape File.dirname path}\/\n - #{Regexp.escape File.basename path}/
else # Dir
regex = /^#{Regexp.escape path}/
path = "#{path}/" if path !~ /\/$/
end
txt = txt.select{|o| o =~ regex}
end
result = "=#{txt.join("=")}"
result
end
def self.left
$el.match_beginning 0
end
def self.right
$el.match_end 0
end
def self.isearch_find_in_buffers options={}
match = self.stop
self.find_in_buffers match, options
end
def self.find_in_buffers string, options={}
string.gsub!('"', '\\"')
new_args = "\"#{string}\""
new_options = {}
new_options[:buffer] = View.name if options[:current_only]
new_args << ", #{new_options.inspect[1..-2]}" unless new_options.empty?
View.bar if options[:in_bar]
$el.switch_to_buffer "find in buffers/"
Notes.mode
$el.erase_buffer
View.insert "+ Buffers.search #{new_args}/"
$el.open_line 1
CodeTree.launch :no_search=>true
if new_options[:buffer] # Goto first match
$el.goto_line 4
Line.to_words
Tree.filter(:recursive=>false, :left=>Line.left, :right=>View.bottom)
else # Goto first match in 2nd file
$el.goto_line 2
$el.re_search_forward "^ -", nil, true
Line.next 2
Line.to_words
end
end
def self.just_marker
match = self.stop
$el.highlight_regexp(Regexp.quote(match), :notes_label)
end
def self.isearch_highlight_match
match = self.stop
View.selection = [self.right, self.left]
end
def self.highlight_all_found
match = self.stop
$el.highlight_regexp(Regexp.quote(match), :hi_yellow)
end
def self.hide
match = self.stop
Hide.hide_unless /#{Regexp.quote(match)}/i
$el.recenter -3
Hide.search
end
# Insert line at beginning of search
def self.have_line
self.stop
column = View.column
line = Line.value(1, :include_linebreak=>true).sub("\n", "")
self.to_start # Go back to start
$el.insert line
View.column = column
end
# Insert line at beginning of search
def self.have_label
self.stop
label = Line.label
self.to_start # Go back to start
$el.insert "- #{label}: "
end
# Insert line at beginning of search
def self.have_paragraph
self.stop
paragraph = View.paragraph
offset = View.cursor - View.paragraph(:bounds=>true)[0]
self.to_start # Go back to start
orig = Location.new
$el.insert paragraph
orig.go
Move.forward offset
end
# During isearch, pull next n words
def self.isearch_pull_in_words n
# If on the beginning of a grouping char, move back to catch the sexp
$el.el4r_lisp_eval "
(isearch-yank-internal
(lambda ()
(forward-word #{n}) (point)))"
end
# During isearch, pull next sexp into the search string
def self.isearch_pull_in_sexp
# If on the beginning of a grouping char, move back to catch the sexp
$el.el4r_lisp_eval %q=
(isearch-yank-internal
(lambda ()
(if (and (> (point) 1)
(string-match "[{<\\\\\"'(\\\\[]" (char-to-string (char-before (point))))
)
(backward-char))
(forward-sexp) (point)))=
end
# During isearch, open most recently edited file with the search string in its name
def self.isearch_options
match = self.stop
if match.nil? # If nothing searched for yet
# Do something if nothing searched yet?
Location.as_spot
Search.isearch_restart "%o", :restart=>true
else
# Match found, so do task menu here...
Launcher.options_key
end
end
#
# Jumps to the file corresponding to a string.
# The string has no path - the edited history (and visited history?)
# are used to find the file.
#
# If the string is like "Foo.bar" it jumps to the method as well ("foo").
#
# > Examples
# Search.open_file_and_method "View"
# Search.open_file_and_method "View.path"
#
def self.open_file_and_method match
match.sub!(/^[+-] /, '') # Foo.hi("you")
match.sub!(/[( ].+/, '') # Foo.hi
if match =~ /(.+)[.#](.+)/
match, method = $1, $2 # split off, and open
end
# Convert to snake case, or nil if already in snake case
snake = TextUtil.snake_case(match)
snake = nil if snake == match
match = "#{match}."
snake = "#{snake}."
# For each file edited
found = DiffLog.file_list.find do |o|
next if ! o.is_a? String
next if o =~ /.xiki$/ # Ignore notes files
next if o =~ /:/ # Ignore files with colons (tramp)
name = o[/.*\/(.*)/, 1] # Strip off path
# Check for match
if name =~ /^#{Regexp.quote(match)}/i || (snake && name =~ /^#{Regexp.quote(snake)}/i)
o
else
false
end
end
if found # Open it if it matches
View.open found
if method # If method, go to it
View.to_highest
result = Search.forward "^ +def \\(self\\.\\)?#{method}[^_a-zA-Z0-9]", :beginning=>true
return Code.suggest_creating_method View.file, method if !result # If not found, suggest creating
Move.to_axis
$el.recenter 0
dir, name = found.match(/(.+\/)(.+)/)[1..2]
Search.append_log dir, "- #{name}\n : #{Line.value}"
end
else
View.message "'#{match}' not found (no recently edited file with that substring found)."
end
return
end
def self.isearch_or_copy name
was_reverse = self.was_reverse
match = self.stop
# Isearch for it, or remember what's already searched...
if match.nil? # If nothing searched for yet
target = Clipboard.register(name).downcase
self.isearch target, :reverse=>was_reverse
# Make next "C-," repeat just jump to this search string...
was_reverse ?
Keys.remember_key_for_repeat(proc {Search.backward target, :quote=>1}, :movement=>1) :
Keys.remember_key_for_repeat(proc {Search.forward target, :quote=>1, :beginning=>1}, :movement=>1)
else # If found something, just remember it
self.stop
target = match
Clipboard.register(name, target)
end
end
def self.isearch_copy_as name
self.stop
Clipboard.set(name, self.match)
end
def self.kill_matching options={}
# TODO: Get options[:kill_matching]=>true to delete matching
# - and map to Keys.just_kill_matching
# Prompt for string
filter = Keys.input :prompt=>"Remove lines containing what? "
Line.start
left = $el.point
$el.re_search_forward "^$", nil, 1
right = $el.point
$el.goto_char left
txt = View.delete left, right
txt = txt.split("\n").select{|o| o !~ /#{filter}/}.join("\n") # Keep only non-matching lines
View.<< "#{txt}\n", :dont_move=>1
end
def self.kill_filter options={}
# TODO: Get options[:kill_matching]=>true to delete matching
# - and map to Keys.just_kill_matching
Line.start
left = $el.point
$el.re_search_forward "^$", nil, 1
right = $el.point
$el.goto_char left
Tree.filter(:left=>left, :right=>right, :recursive=>true)
end
def self.cancel
# Clear any subsequent keys, since it could be a mouse click
Keys.read_subsequent_chars
self.stop
self.to_start # Go back to start
View.message ""
end
# def self.not_found?
# # Note this returns false when nothing searched for
# ! elvar.isearch_success
# end
# Search.forward "from"
# Search.forward "from", :dont_move=>1 # Just return location
# Search.forward "from", :beginning=>1 # Beginning of match
def self.forward target, options={}
View.to_highest if options[:from_top]
orig = View.cursor
# :line, so make regex to find just the line...
if options[:line]
target = "^#{Search.quote_elisp_regex target}$"
end
target = self.quote_elisp_regex target if options[:quote]
# Do actual search
found = $el.re_search_forward(target, nil, (options[:go_anyway] ? 1 : true))
View.cursor = orig if options[:dont_move]
if options[:beginning] && found && View.cursor != View.bottom
left = self.left
# Before moving back to left side, if left of match is where we were already, search again
if orig == left && left != 1 # Unless we're at the top
self.forward target, options
else
View.cursor = left
end
end
found
end
def self.backward target, options={}
orig = View.cursor
found = $el.re_search_backward target, nil, (options[:go_anyway] ? 1 : true)
View.cursor = orig if options[:dont_move]
found
end
def self.to find
Move.forward
if Search.forward(find)
match = $el.buffer_substring self.left, self.right
Move.backward(match.size)
else
$el.beep
$el.message "not found"
Move.backward
end
end
def self.isearch_open
self.stop
View.open(self.match)
end
def self.isearch_google options={}
term = self.stop
if term
term.gsub!(' ', '+')
term = "\"#{term}\"" if options[:quote]
term =~ /^https?:\/\// ? # If url, just browse
$el.browse_url(term) :
$el.browse_url("http://google.com/search?q=#{term}")
return
end
Search.isearch nil, :from_bottom=>true
end
def self.search_thesaurus
term = self.stop
url = term.sub(/^\s+/, '').gsub('"', '%22').gsub(':', '%3A').gsub(' ', '%20')
$el.browse_url "http://thesaurus.reference.com/browse/#{url}"
end
def self.isearch_move_line
$el.isearch_done
$el.isearch_clean_overlays
line = $el.buffer_substring $el.point_at_bol, $el.point_at_eol + 1
View.delete $el.point_at_bol, $el.point_at_eol + 1
#self.to_start # Go back to start
$el.exchange_point_and_mark
View.insert line, :dont_move=>1
end
def self.outline
if Keys.prefix_u?
History.open_current :outline=>true, :prompt_for_bookmark=>true
else
History.open_current :outline=>true
end
end
def self.outline_search
if Keys.prefix_u?
History.open_current :bar=>true, :all=>true
else
History.open_current :all=>true
end
end
def self.upcase
self.stop
$el.upcase_region(self.left, self.right)
end
def self.downcase
self.stop
$el.downcase_region(self.left, self.right)
end
def self.enter_like_edits
Notes.enter_junior
View << "@edits/"
Launcher.launch
end
# Inserts a "- ##foo/" string into the current view.
def self.enter_search bm=nil, input=nil
# If line already has something, assume we'll add - ##foo/ to it
if ! Line[/^ *$/]
indent = Line.indent
Line.to_right
# up+, so just grab from clipboard...
if Keys.prefix_u
input = Clipboard.get
View.insert "\n#{indent} - ###{input}/"
Launcher.launch
return
end
# Insert ##/ under...
View << "\n#{indent} - ##/"
View.to(Line.right - 1)
return
end
bm ||= Keys.input(:timed=>true, :prompt=>"Enter bookmark in which to search: ")
return unless bm
input ||= Keys.prefix_u? ? # Do search
Clipboard.get :
Keys.input(:prompt=>"Text to search for: ")
if bm == "." # Do tree in dir from bookmark
if Line.blank?
dir = $el.elvar.default_directory
else
dir = nil
end
else
dir = Bookmarks.expand("%#{bm}")
end
View.insert dir.to_s
indent = Line.indent
Line.to_right
View.insert("\n#{indent} - ###{input}/")
Launcher.launch
end
def self.isearch_have_outlog options={}
return self.isearch_have_outlog_javascript if View.extension == "js"
match = self.stop
match.strip! if match # match is nil when nothing was searched for yet
self.to_start
return View.insert("Ol.line") if match.nil?
method = options[:method]
txt = options[:no_label] ?
"Ol#{method} #{match}" :
"Ol#{method} #{match.inspect}, #{match}"
View.insert txt
end
def self.isearch_have_outlog_javascript
match = self.stop
self.to_start
View.insert "p(\"#{match}: \" + #{match});"
end
def self.isearch_as_camel
self.stop
term = self.match
self.to_start
View.insert TextUtil.camel_case(term)
end
def self.isearch_as_snake
self.stop
term = self.match
self.to_start
View.insert TextUtil.snake_case(term)
end
def self.isearch_just_adjust
self.stop
Move.forward
$el.transpose_chars 1
self.to_start
end
# Go to root of tree and do search
def self.search_at_root txt
Search.backward("^ *[+-] /")
# If next line isn't ##... line (will be visually distinct) add linebreak
unless Line.value(2) =~ /^ *[+-] ##/
Line.next
View.insert "\n"
Line.previous 2
end
self.enter_search '.', txt
end
def self.just_select
self.stop
View.set_mark(Search.right)
View.to(Search.left)
Effects.blink :what=>:region
end
def self.isearch_just_tag
self.stop
left, right = Search.left, Search.right
tag = Keys.input :timed=>true, :prompt=>"Enter tag name: "
left_tag = "<#{tag}>"
right_tag = "</#{tag}>"
if tag == 'di'
left_tag = "<div id='#{Keys.input :prompt=>"Enter id: "}'>"
right_tag = "</div>"
elsif tag == 'dc'
left_tag = "<div class='#{Keys.input :prompt=>"Enter class: "}'>"
right_tag = "</div>"
end
View.to(right)
View.insert right_tag
View.to(left)
View.insert left_tag
View.to right + left_tag.length
end
def self.isearch_just_wrap
self.stop
left, right = Search.left, Search.right
wrap_with = Keys.input :timed=>true, :prompt=>"Enter string to wrap match with: "
View.to(right)
View.insert wrap_with
View.to(left)
View.insert wrap_with
View.to right + wrap_with.length
end
def self.just_orange
self.stop
Overlay.face(:notes_label, :left=>Search.left, :right=>Search.right)
end
def self.just_edges
self.stop
left, right = Search.left+1, Search.right-1
Effects.blink :left=>left, :right=>right
View.delete(left, right)
View.to(Search.left+1)
end
def self.isearch_just_surround_with_char left=nil, right=nil
term = self.stop
right ||= left
# nothing passed, so request it
if ! left
left = Keys.input :chars=>1
right = left.tr "([{<", ")]}>"
end
if term == ""
View.insert "()"
Move.backward
return
end
View.to(Search.left + term.length)
View.insert right
View.to(Search.left)
View.insert left
View.to Search.left
end
def self.isearch_surround_with_tag
term = self.stop
left = Search.left
tag = Keys.input :timed=>1, :prompt=>"Enter tag to surround: "
View.to(left + term.length)
View.insert "</#{tag}>"
View.to left
View.insert "<#{tag}>"
View.to left
end
# Copy match as name (like Keys.as_name)
def self.just_name
term = self.stop
loc ||= Keys.input(:chars=>1, :prompt=>"Enter one char (variable name to store this as): ") || "0"
Clipboard.copy loc, term
Effects.blink :left=>Search.left, :right=>Search.right
end
def self.just_macro
self.stop
Macros.run
end
def self.to_left
match = self.stop
if match.nil? # C-b and nothing searched for yet
return Launcher.open "search/history/", :bar_is_fine=>1
end
Line.to_left
end
def self.just_menu
match = self.stop
View.open "%ml"
View.to_bottom
Search.isearch match, :reverse=>true
end
def self.isearch_just_case
was_reverse = self.was_reverse
txt = self.stop
return Search.isearch(Clipboard[0], :reverse=>was_reverse) if txt.nil?
choice = Keys.input(:prompt=>'convert to which case?: ', :choices=>TextUtil.case_choices)
View.delete(Search.left, Search.right)
View.insert choice[txt], :dont_move=>1
end
def self.isearch_have_case
self.stop
txt = self.match
lam = Keys.input(:prompt=>'convert to which case?: ', :choices=>TextUtil.case_choices)
self.to_start # Go back to start
$el.insert lam[txt]
end
def self.isearch_just_underscores
self.stop
term = self.match
View.delete(Search.left, Search.right)
View.insert TextUtil.snake_case(term)
end
def self.zap
was_reverse = self.was_reverse
match = self.stop
if match.nil? # If nothing searched for yet
Launcher.open "ze/", :hotkey=>1
return
end
right = $el.point
self.to_start # Go back to search start
View.delete($el.point, right)
end
def self.xiki options={}
match = self.stop
# No string searched for, so pretend key was: search+xiki (show xiki search options)...
if ! match
return Launcher.open "search xiki", :hotkey=>1
end
# Match found, so treat like search+expand
self.stop
Launcher.launch
end
def self.isearch_restart path, options={}
term = self.stop
Location.as_spot if options[:as_here]
if path == "%n" # If %n, open that view
View.open "%n"
elsif path == "%links"
View.open "%links"
elsif path == "%o"
Code.open_log_view
options[:reverse] = true
elsif path == "%d"
View.open "%d"
options[:reverse] = true
elsif path == :top
# Will go to highest below
elsif path == :edge
options[:to_edge] = 1
elsif path == :right
Location.as_spot
View.layout_right 1
options[:to_edge] = 1
elsif path == :next
Location.as_spot
View.next
elsif path == :previous
Location.as_spot
View.previous
else
View.open Bookmarks[path]
end
View.wrap unless options[:restart] # Don't change wrapping if starting search
options[:to_edge] ? View.to_relative(:line=>1) : View.to_highest
if options[:reverse]
View.to_bottom
options[:restart] ? $el.isearch_backward : self.isearch(term, :reverse=>true)
return
end
options[:restart] ? $el.isearch_forward : self.isearch(term)
end
def self.isearch txt=nil, options={}
if options[:from_bottom]
View.to_bottom
options[:reverse] = true
end
txt ||= "" # Searching for nil causes error
$el.isearch_resume txt, (options[:regex] ? true : nil), nil, (! options[:reverse]), txt, true
$el.isearch_update
end
def self.isearch_forward
prefix = Keys.prefix :clear=>1
if prefix == :u
return $el.isearch_backward
elsif prefix == :-
return $el.isearch_forward_regexp
elsif prefix == :uu
return $el.isearch_backward_regexp
end
$el.isearch_forward
end
def self.isearch_stop_at_end
# Kind of a hack - search for anything, so it won't error when we stop
$el.isearch_resume "[^`]", true, nil, true, "", true
self.stop
self.to_start # Go back to start
View.message ""
nil
end
def self.isearch_tasks
match = self.stop
if match.nil?
# Nothing searched for yet, so search tasks
Location.as_spot
Search.isearch_restart "%n", :restart=>true
else
# Something searched for, so move to the right of it...
$el.goto_char self.right
end
end
def self.isearch_pull
was_reverse = self.was_reverse
match = self.stop
return Line.previous if match.nil? && was_reverse # Odd case, user might do this if at end of file
# Meant to search git diffs, when nothing matched yet
return Git.search_repository if match.nil?
self.move_to_search_start match
end
def self.isearch_links
match = self.stop
# Nothing searched for...
if ! match
Location.as_spot
Search.isearch_restart "%links", :restart=>true
return
end
# $el.goto_char self.right
if ! View.file # If buffer, not file
buffer_name = $el.buffer_name
txt = View.txt
View.to_buffer "* outline of matches in #{buffer_name}"
Notes.mode
View.kill_all
View.insert txt.grep(Regexp.new(match)).join
return
end
# Match found, so show only the matching lines ("jump+like")
current_line = Line.number
dir = View.dir
file_name = View.file_name
View.to_buffer "tree filter"; View.dir = dir
View.clear; Notes.mode
regex = Regexp.quote(match).gsub("/", '\/')
View.insert "
- #{dir}/
- #{file_name}
- ###{regex}/
".unindent
View.to_line 3
options = {:current_line=>current_line}
txt = FileTree.filter_one_file "#{dir}/#{file_name}", /#{Regexp.quote(match)}/i, options
txt = txt.map{|o| "#{o}\n"}.join("")
Line.next
left = View.cursor
View << "#{txt.gsub(/^/, ' ')}\n"
right = View.cursor
View.cursor = left
Line.next options[:line_found]
Line.to_beginning
Tree.filter(:left=>left, :right=>right, :recursive=>true)
end
def self.was_reverse
! $el.elvar.isearch_forward
end
# During search, copy to the clipboard.
# If no search, does "search+commands" - shows shell commands
# recently run in a directory.
def self.isearch_copy
match = Search.stop
return Launcher.open("log/") if match.nil?
self.copy match
Location.as_spot('clipboard')
end
# Search for what's in the clipboard
def self.isearch_like_clipboard
end
def self.isearch_pause_or_resume
match = self.stop
if match.nil? # If nothing searched for yet, resume search
Location.to_spot('paused')
Search.isearch $xiki_paused_isearch_string
else
# If search in progress, stop it, remembering spot
$xiki_paused_isearch_string = self.match.downcase
Location.as_spot('paused')
end
end
def self.isearch_just_search
match = self.stop
Search.backward "^ *[+-] ##"
Move.to_axis
indent = Line[/^ */]
match = Regexp.escape match
View.insert "#{indent}- \#\##{match}/\n", :dont_move=>1
Launcher.launch
end
def self.isearch_enter_and_next
if self.match == "" # If nothing searched for yet, go to where last copy happened
self.stop
Location.to_spot('clipboard')
Search.isearch Clipboard[0]
return
end
match = self.stop
View.delete(Search.left, Search.right)
View.insert Clipboard[0]
Search.isearch match
end
def self.isearch_move_to path, options={}
match = self.stop
match = Line.value if match.nil? # Use line if nothing searched for
match = TextUtil.regexp_escape(match).gsub("/", "\\/") if options[:as_regex]
match.gsub!(/^/, options[:prepend]) if options[:prepend]
match = "#{match}#{options[:append]}" if options[:append]
match = ":+#{match}\n:-#{match}\n" if options[:diffs]
self.move_to path, match, options
end
def self.move_to_files match, options={}
match_with_path = FileTree.snippet :txt=>match
match_with_path = ">\n#{match_with_path}"
result = self.try_merging_link match, options
result ? nil : match_with_path
end
def self.try_merging_link match, options={}
target_path = options[:path] || View.file
View.open("%links")
View.to_highest
# Do nothing if first line is blank
return false if Line.blank?
Move.to_junior # Go to first file
return false if Line.blank?
path = Tree.path.last # Grab from wiki tree
# If doesn't fit in the tree, just return to delegate back
return false if ! target_path
target_path = Files.tilde_for_home target_path
return false if ! target_path.start_with?(path)
# Check if saved, and save it afterwards if so?
modified = View.modified?
cursor = Line.left 2
FileTree.enter_quote match, :leave_indent=>1, :char=>":", :leave_quote_chars=>1
View.cursor = cursor
View << " - #{options[:label]}\n" if options[:label]
DiffLog.save :no_diffs=>1 if ! modified
return true # If handled
end
def self.have_go
txt = self.stop
# Jump to ^n!
View.open("%n")
# Quote!
txt = Tree.quote txt, :char=>" |"
# Put blank space ahead!
Move.top
View.<< "\n#{txt}\n", :dont_move=>1
end
def self.move_to path, match, options={}
orig = Location.new
was_in_bar = View.in_bar?
# :prompt_label, so get label here (it gets applied in 2 places)
if options[:prompt_label]
label = Keys.input :prompt=>"label: "
label = "do" if label.blank?
label = Notes.expand_if_action_abbrev(label) || label
label = "#{label}:" if label !~ /[:!)]$/
options[:label] = label
end
if options[:include_file_context]
match = FileTree.snippet :txt=>match
end
if path == "%n" # If %n, remember whether %links was visible, then open it
was_visible = View.file_visible? Bookmarks['%links']
View.open("%n")
elsif path == "%links" # If %links, try merging into the links file
match = self.move_to_files match, options
return orig.go if ! match # It handled it if it didn't return the match
else
View.open path
end
# Maybe extract to .insert_in_section ? ...
View.to_highest
# If in :n, resave if was saved
if path == "%links" # If %links, re-save afterwards if the file had no unsaved changes
should_save = ! View.modified?
end
line_occupied = ! Line.blank?
# if options[:append]
if options[:insert_after]
match = "\n#{match}"
Notes.to_block
Line.previous
end
if options[:label]
match.sub!(/^ : /, " - #{options[:label]}\n\\0")
end
View.insert match
View.insert "\n" if line_occupied # Make room if line not blank
# Add line after if before heading, unless match already had one
View.insert "\n" if Line[/^>/] && match !~ /\n$/ # If there wasn't a linebreak at the end of the match
line = View.line
View.to_highest
DiffLog.save :no_diffs=>1 if should_save
# Which case was this handling? Being in :n? Why leave cursor in :t when in :n?
# Go to original location, unless it was as+task, and todo.notes was visible to begin with (because it makes sense to leave the cursor in todo.notes)
orig.go if path != "%n" || was_visible
if path == "%n" && orig.buffer == "notes.xiki"
Line.next line-1
View.column = orig.column
end
end
def self.log
View.open @@log
end
def self.append_log dir, txt#, prefix=''
txt = "- #{dir}\n #{txt}\n"
File.open(@@log, "a") { |f| f << txt } rescue nil
end
def self.bookmark
match = self.stop
if match.nil?
# Prompt for bookmark to search in
self.tree_grep
else
self.search_in_bookmark match
end
end
def self.searches
begin
$el.elvar.search_ring.to_a
rescue Exception=>e
["error getting searches, probably because of special char :("]
end
end
def self.last_search
begin
$el.nth 0, $el.elvar.search_ring.to_a
rescue Exception=>e
View.beep
return "- exception happened while looking for last search string!"
end
end
# Backs the search/history menu.
def self.history txt=nil
# If nothing selected yet, show history
if ! txt
searches = self.searches.uniq
searches = searches.map do |o|
o =~ /\n/ ?
": #{o.inspect}\n" :
": #{o}\n"
end
return searches.join("")
end
# Option selected, so search for it
txt.sub! /^\: /, ''
# Go back to where we came from, if we're in special search buffer
View.kill if View.name == "searching/history/"
Search.isearch txt
nil
end
# Mapped to up+to+outline
#
# Outline that includes path to root. Like...
#
# - /tmp/
# - foo.rb
# | def a
# | def b
# | while true
# | c;
# | d;
def self.deep_outline txt, line
txt = txt.split "\n"
target_i = line
# Start with current line
i, children, matched_above = target_i-1, false, 1
result = [txt[i]]
target_indent = txt[i][/^ */].length / 2
# Go through each line above
while (i -= 1) >= 0
# Grab lines with incrementally lower indent, but only if they have lines under!
line = txt[i]
next if line.empty?
indent = line[/^ */].length / 2
if indent > target_indent # If lower, skip, remembering children
children = true
elsif indent == target_indent # If same, only grab if there were children
if children
matched_above += 1
result.unshift txt[i]
end
children = false
else # Indented less
next if indent == 0 && line =~ /^#? ?Ol\b/ # Skip if ^Ol... line
matched_above += 1
result.unshift txt[i]
children = false
target_indent = indent
end
end
i, candidate = target_i-1, nil
target_indent = txt[i][/^ */].length / 2
# Go through each line below
while (i += 1) < txt.length
# Grab lines with incrementally lower indent, but only if they have lines under!
line = txt[i]
next if line.empty?
indent = line[/^ */].length / 2
if indent > target_indent # If lower, add candidate if any
if candidate
result << candidate
candidate = nil
end
elsif indent == target_indent # If same, only grab if there were children
candidate = txt[i]
else # Indented less
next if indent == 0 && line =~ /^#? ?Ol\b/ # Skip if ^Ol... line
target_indent = indent
candidate = txt[i]
end
end
[result.join("\n")+"\n", matched_above]
end
def self.isearch_m
was_reverse = self.was_reverse
match = self.stop
if match # If there was a match, just stop
was_reverse ?
Keys.remember_key_for_repeat(proc {Search.backward match, :quote=>1}, :movement=>1) :
Keys.remember_key_for_repeat(proc {Search.forward match, :quote=>1, :beginning=>1}, :movement=>1)
# Remember for C-, first
return
end
Launcher.open(": search+m is available? was > log/", :no_launch=>1) if match.nil?
# Launcher.open("log/") if match.nil?
end
def self.just_bookmark
match = self.stop
path = Keys.bookmark_as_path :include_file=>1 # Get path (from bookmark)
View.open path
View.to_highest
Search.isearch match
end
def self.enter_insert_search
# If in file tree, do ##
if FileTree.handles?
raise "enter+web no longer does a search when under a file tree"
else
self.insert_google
end
end
def self.insert_google
line = Line.value
# If not harmless chars, move to next line
harmless_chars = line[/^[ =+-]*$/]
if ! harmless_chars && Line.at_right?
Line << "\n"
line = Line.value
end
return View << "= google/" if harmless_chars && harmless_chars !~ /=$/
View << "google\n "
end
def self.like_delete
match = self.stop
line = View.line
# Delete matches between here and end of the paragraph...
left = Line.left
Search.forward "^$"
right = $el.point
txt = View.delete left, right
txt = txt.split("\n", -1)
txt = txt.select{|o| o !~ /#{Regexp.quote match}/i}
txt = txt.join("\n")
# Continue here...
View.<< txt, :dont_move=>1
View.remove_last_undo_boundary
Line.to_words
end
def self.have_right
match = self.stop
View.to_upper
View.to_highest
View << "#{match}\n\n"
end
# Query replaces from "1" clipboard to "2" clipboard, etc.
# Search.query_replace_nth "1", "2"
def self.isearch_just_after
match = self.stop
View.delete self.left, self.right
View << Clipboard.register("2")
end
def self.query_replace_with_2
match = self.stop
# C-s just typed, so prompt for search strings...
if ! match
$el.query_replace_regexp(
Keys.input(:prompt=>"Replace: "),
Keys.input(:prompt=>"With: "))
return
end
match.downcase!
two = Clipboard.register("2")
left, right = Search.left, Search.right # Replace this one to get started
View.delete left, right
View << two
self.query_replace match, two
end
def self.query_replace_nth n1, n2
if Keys.up? # If up+, grab line from last diff
a, b = DiffLog.last_intraline_diff
return self.query_replace a, b
end
self.query_replace Clipboard.get(n1), Clipboard.get(n2)
end
def self.quote_elisp_regex txt
$el.regexp_quote txt
end
def self.isearch_just_special
match = self.stop
found = Search.forward "[^\t-~]" # Find the next char outside printable ASCII
View.flash("- no special char found", :times=>3) if ! found
nil
end
def self.search_just_swap
# Grab and delete what's selected
txt_a = self.stop
View.delete(Search.left, Search.right)
# Insert from clipboard
View << Clipboard["0"]
# Go back to start and insert text_a
self.to_start # Go back to start
View << txt_a
end
def self.just_kill
was_reverse = self.was_reverse
self.stop
txt = Line.delete
# Get where we would have gone
before = $el.elvar.isearch_opoint
# Move back to account for deleting if reverse
before -= txt.length+1 if was_reverse
View.cursor = before
end
def self.like_expanded
match = Search.stop
Tree.to_parent
Tree.collapse
Launcher.launch :no_search=>1
Search.isearch match
end
def self.insert_before_and_after
txt = Clipboard.register("1").gsub(/^/, ":-").strip+"\n"
txt << Clipboard.register("2").gsub(/^/, ":+").strip+"\n"
cursor = View.cursor
View.<< txt #, :dont_move=>1
View.set_mark
View.cursor = cursor
end
def self.recenter_to_top
$el.recenter(1)
end
def self.hop_to
line = Line.rest
regex = /(\bself|[A-Z][a-zA-Z]+)\.[a-z][a-z_]*\??/i
function = line[regex]
function ||= Line[regex]
function.sub!(/^self/){
file = View.file_name.sub(/\..*/, '')
TextUtil.camel_case file
}
self.open_file_and_method function
end
end
end
|
Extensive efforts have been directed towards determining approaches by which brain health can be enhanced in older adults given the rapid increase in the U.S. aging population, including Veterans, and rising incidence of Alzheimer's disease and other forms of cognitive impairment. Additional attention is now also focusing on concerns of accelerated aging processes in Veterans due to common morbidities such as history of traumatic brain injury and posttraumatic stress disorder (PTSD). Thus, finding interventions that can improve brain health in older adult Veterans is critical. Mindfulness-based training has proven effective in improving brain health in younger and middle-aged adults, though there is a paucity of studies that have addressed the potential efficacy of mindfulness training in older adults. Interestingly, areas of cognition that experience the greatest extent of enhancement from mindfulness training are also some of the same primary areas most susceptible to the effects of aging, such as attention, processing speed and cognitive control. In addition, mindfulness training holds the potential to simultaneously address other dimensions of brain health such as stress and anxiety reduction, and associated beneficial decreases in blood pressure and cortisol (a primary effector arm of the stress response). The proposed mixed methods pilot study builds on current on-going work by the Principal Investigator and Co-Investigator in which Mindfulness-Based Stress Reduction (MBSR) is being used to improve cognition and reduce stress in Veterans with stroke and with chronic mild traumatic brain injury/PTSD. We propose a randomized, controlled trial of MBSR versus a Psychological Education (PEd) group in order to determine the acceptability, feasibility, and potential efficacy of MBSR in improving brain health in older Veterans. Fifty-eight subjects will be randomly assigned to MBSR or PEd groups. Specific Aim 1 evaluates the acceptability and feasibility of older adults participating in Mindfulness-Based Stress Reduction (MBSR). Specific Aim 2 evaluates the potential efficacy of MBSR in older Veterans to enhance attention abilities. Specific Aim 3 evaluates the potential efficacy of MBSR in older Veterans to reduce anxiety and biomarkers of stress (i.e., blood pressure and cortisol). Assessments will be conducted immediately prior to and after intervention, as well as three months following intervention. Based on pilot work by our group with Veterans, and outcome studies of mindfulness-based interventions in other populations, it is predicted that MBSR will be both acceptable and feasible in healthy older Veterans (Specific Aim 1), will enhance attentional abilities (Specific Aim 2), and will decrease both subjectively reported anxiety and physiological measures of stress (Specific Aim 3). Mindfulness-based intervention offers a compelling and novel, side-effect free intervention to improve brain health in our aging Veterans, ultimately presenting the possibility of significantly increased quality of life, delayed age-related declines, and decreased health-care utilization, with associated medical care cost savings. MBSR lends itself well to VA rollout given its straightforward structure and application, allowing for ease of dissemination should it prove acceptable, feasible and efficacious for older Veterans. |
Make a statement like no other in the adidas Older boys BTS Jacket. The standout dual-tone colours grab attention, while subtle adidas branding on the arm lends an athletic appeal. Crafted in woven polyester with a synthetic down filling, this jacket keeps you warm as temperatures plummet, while the large hood and full-zip design lend themselves to the layered look. |
[Detection of Echinococcus granulosus and Echinococcus multilocularis in cyst samples using a novel single tube multiplex real-time polymerase chain reaction].
Cystic echinococcosis (CE) and alveolar echinococcosis (AE), caused by Echinococcus granulosus and Echinococcus multilocularis, respectively, are important helminthic diseases worldwide as well as in our country. Epidemiological studies conducted in Turkey showed that the prevalence of CE is 291-585/100.000. It has also been shown that the seroprevalence of AE is 3.5%. For the diagnosis of CE and AE, radiological (ultrasonography, computed tomography, magnetic resonance) and serological methods, in addition to clinical findings, are being used. The definitive diagnosis relies on pathological examination. When the hydatid cysts are sterile or do not contain protoscoleces, problems may occur during pathological discrimination of E.granulosus and E.multilocularis species. In this study, we aimed to develop a novel multiplex real-time polymerase chain reaction (M-RT-PCR) targeting the mitochondrial 12S rRNA gene of E.granulosus and E.multilocularis using Echi S (5'-TTTATGAATATTGTGACCCTGAGAT-3') and Echi A (5'-GGTCTTAACTCAACTCATGGAG-3') primers and three different probes, Anchor Ech (5'-GTTTGCCACCTCGATGTTGACTTAG-fluorescein-3'), Granulosus (5'-LC640-CTAAGGTTTTGGTGTAGTAATTGATATTTT-phosphate-3') and Multilocularis (5'-LC705-CTGTGATCTTGGTGTAGTAGTTGAGATT-phosphate-3'), that enable the diagnosis of CE and AE in the same assay. During M-RT-PCR, plasmids containing E.granulosus (GenBank: AF297617.1) and E.multilocularis (GenBank: NC_000928.2) mitochondrial 12S rRNA regions were used as positive controls. Cyst samples from patients pathologically confirmed to have CE (n: 10) and AE (n: 15), healthy human DNA samples (n: 25) as negative controls, and DNA samples of 12 different parasites (Taenia saginata, Hymenolepis nana, Trichuris trichiura, Fasciola hepatica, Enterobius vermicularis, Toxoplasma gondii, Pneumocystis jirovecii, Trichomonas vaginalis, Cryptosporidium hominis, Strongyloides stercoralis, Plasmodium falciparum, Plasmodium vivax) were used to develop the M-RT-PCR. E.granulosus and E.multilocularis control plasmids were constructed by TOPO cloning to determine the analytical sensitivity of the test. Positive control plasmids were serially diluted in distilled water to 10^6, 10^5, 10^4, 10^3, 10^2, 10^1 and 1 plasmid copies per reaction to determine analytical sensitivity and specificity. According to the results, the analytical sensitivity of the assay for E.granulosus and E.multilocularis was 1 plasmid copy/µl reaction. The absence of cross-reactivity with the DNA samples of the 12 different parasites demonstrated the analytical specificity of the assay. Detection of Echinococcus DNA and correct species discrimination in the cyst samples of the 25 patients, together with the absence of cross-reactivity with human DNA samples, showed that the clinical sensitivity and specificity of the assay were 100%. As a result, the M-RT-PCR developed in the present study provides a sensitive, specific, rapid, and reliable method for the diagnosis of echinococcosis and the discrimination of E.granulosus and E.multilocularis in cyst samples. |
Q:
Console outputting a certain value - if statement straight after it not working with same value
cout << levelData.interactMap[tileHitX][tileHitY] << endl;
if(levelData.interactMap[tileHitX][tileHitY] == 1.8)
cout << "pls werk" << endl;
So the cout outputs 1.8...yet the if statement does not work.
It's inside a function to which I'm passing a struct member by reference (&).
It's inside this if statement.
if(levelData.interactMap[tileHitX][tileHitY] >= 1 & levelData.interactMap[tileHitX][tileHitY] <= 1.8)
{
levelData.interactMap[tileHitX][tileHitY] = levelData.interactMap[tileHitX][tileHitY] + 0.1;
chop.play();
cout << levelData.interactMap[tileHitX][tileHitY] << endl;
if(levelData.interactMap[tileHitX][tileHitY] == 1.8)
cout << "pls werk" << endl;
}
Calling the function
int action(int facing, sf::Sprite& player, sf::View& view, sf::Clock& actionTimer, levelData& levelData, sf::Sound& chop)
and the define thing for the function
int action(int, sf::Sprite&, sf::View&, sf::Clock&, levelData&, sf::Sound&);
Thanks
A:
As mentioned in several comments, floating point numbers are a little weird when it comes to equality. This is because, to use a reasonable amount of memory, floating point numbers are not stored exactly, but approximated using increasingly small powers of 2 (e.g. 2^-1 + 2^-2 + 2^-4 + ...). As you might expect, this means that there is a margin of error involved, depending on several factors.
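You can see the mismatch directly. interactMap presumably holds float values, while 1.8 is a double literal, and the two types round 1.8 differently. A minimal standalone demo (the variable names are illustrative, not from the question):
#include <cstdio>
int main() {
    float f = 1.8f;  // nearest float to 1.8
    double d = 1.8;  // nearest double to 1.8
    // cout's default 6-digit precision displays both as "1.8", but at
    // full precision neither is exactly 1.8 (printed values approximate):
    std::printf("%.20g\n", (double)f);                   // 1.7999999523162841797
    std::printf("%.20g\n", d);                           // 1.8000000000000000444
    std::printf("%s\n", f == d ? "equal" : "not equal"); // not equal
}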
As far as your problem is concerned, it comes down to this: don't use "==" on floating point numbers. Instead, check to see if it is within a margin of error with something like:
float acceptableThresholdOfError = .0001;
if(fabs(levelData.interactMap[tileHitX][tileHitY] - 1.8) <= acceptableThresholdOfError)
{ //code }
Obviously, the needed precision determines exactly how small the error threshold should be/can be. This way, even if your variable is not a perfect approximation, it can be thought of as such as long as it's "close enough."
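Putting that together, here is a minimal compilable sketch of the threshold comparison (nearlyEqual and value are illustrative names under the same float-data assumption, not part of the original code):
#include <cmath>
#include <cstdio>

// True when a and b differ by no more than the given threshold.
bool nearlyEqual(float a, float b, float threshold = 0.0001f) {
    return std::fabs(a - b) <= threshold;
}

int main() {
    float value = 1.7f;
    value += 0.1f;                                 // accumulating increments introduces rounding error
    std::printf("%d\n", value == 1.8f);            // may print 0
    std::printf("%d\n", nearlyEqual(value, 1.8f)); // prints 1
}
A fixed threshold like this only works when you know the rough magnitude of the values involved; for values far from 1, scale the threshold relative to the operands instead.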
|
Fourth Estate: The Clintonites Should Stop Freaking Out About WikiLeaks
Jack Shafer is Politico’s senior media writer.
The Clinton campaign has a three-fold plan to interrupt press coverage of the gusher of emails sent to and from campaign chair John Podesta’s account and released by Julian Assange’s WikiLeaks organization. The first is a Fight Club-style vow of silence about the emails, which appear to have been hacked, as Podesta and the campaign have refused to confirm or deny their authenticity to reporters. “Don’t have time to figure out which docs are real and which are faked,” Podesta tweeted.
The second has been to call attention to what appears to be the emails’ tainted provenance.
“Media needs to stop treating Wikileaks like it is same as FOIA,” tweeted Clinton press secretary Brian Fallon on Monday in one of five tweets on that theme. “Assange is colluding with Russian government to help Trump.” That WikiLeaks hasn’t released material on Trump, Fallon continued, “tells you something.”
The third has been to assign Clinton surrogates to the talk shows to dismiss the significance of the very documents that have so upset Fallon.
Of the three strategies, the second seems the most imaginative. Going beyond the traditional dialectic of confirm-or-deny, Fallon hopes to totally delegitimize the emails by branding them as the dark fruits of a scurrilous foreign power. The subtext of Fallon’s protest is that no matter what reporters dig out in the voluminous Podesta emails, their stories will be polluted by the motives and methods behind their acquisition.
Fallon is floating a very large crock here. Real reporters don’t treat Freedom of Information Act requests the way he implies. That is, reporters don’t FOIA the government for a stack of documents and then, upon receiving them, blindly publish the stack or their gleanings and call it a work of journalism. No document obtained via the FOIA process is automatically a reliable source upon which a sound story can be built. Its contents must be tested, verified, cross-examined and blended with other information before it has any business being placed in a news story. The same goes for court proceedings, corporate documents, scientific papers, and audio and video recordings, only double.
Clinton’s allies warn us that because the WikiLeaks dump may contain forgeries—not a fully imaginary admonition, mind you—reporters should keep their distance for their own good. Forged documents are as old as journalism itself, and as technology grows more sophisticated, more forgeries will appear, and reporters will have to be even more vigilant lest they be hoodwinked. But if the Clinton forces had journalists’ best interests at heart, they’d agree to help confirm or deny the contents of the more salacious emails. But they don’t, so they won’t. Until they do, we should consider their warnings about WikiLeaks forgeries to be mostly about throwing reporters off the trail, not preventing the spread of disinformation.
In some corners, the WikiLeaks documents are considered radioactive because they were, purportedly at least, hacked and not merely leaked. This is a distinction without a difference. In both cases, information has been improperly obtained and improperly shared, with laws broken in the process. In most cases, both are criminal acts. Should it matter to journalists that the Podesta emails might have been liberated by, to put it in Fallon’s words, an “illegal hack by a foreign govt.”? Absolutely! That’s a great story! I’d run that story! But angels almost never leak. If reporters limited their appetites to only heavenly leaks, they’d starve. I say that if the material is strong enough, you hold your nose and publish the best and most accurate stories you can from them.
One indicator that the hacked Podesta emails are legit is that they are so boring. One of my Politico colleagues who has plowed through hundreds of them looking for news calls them “The Big Yawn.” The characters in the Podesta emails come off less like Machiavellian schemers than harried politicians responding on the fly with a mixture of bravado, strategy and improvisation to unfolding events. They’re not in control. Like most everybody else in Washington, they’re reacting.
This is not to say that the emails contain no news value. From them we gain a sense of how the Clinton team works together, what Clinton said in her Wall Street speeches and more on the political sabotaging of Bernie Sanders. We learn that Clinton aide Doug Band was feuding with Chelsea Clinton at the Clinton Foundation. That Hillary Clinton has made herself expert in taking both sides of an issue. That Donna Brazile leaked CNN Town Hall questions to the Clinton campaign. That Podesta was courting Martin O’Malley in February, hoping to win his endorsement for Clinton. We learn of petty squabbles involving Lanny Davis and Robby Mook, and that Podesta was phone buddies with Justice Ruth Bader Ginsburg.
Future emails leaks may contain stronger meat. For my sake, and for the journalists assigned to wallow in them, I hope so. But so far, there is less journalistic significance in the confidential emails Team Clinton is zipping around than in the wild, prolific and quite public tweets that Donald Trump issues nearly every hour.
******
If you don’t want to see sausage made, don’t eat sausage. Or something like that. Send mangled clichés via email to [email protected]. My email alerts are forgeries, my Twitter feed is controlled by a foreign power, and my RSS feed has a head of prematurely silver hair. |
To do this, just cut the bottle and, as in the photos, fill the "funnel" formed by the neck with plant compost. Put water in the other part of the bottle. To make it more efficient, you can make a small braid from strips of cotton fabric that comes out of the neck and stays submerged in the water, thus watering by capillarity. |
Q:
Use perl to do multi-line replacement
I have to do some replacements in many *.c files. I want to do the replacement like this:
original: printf("This is a string! %d %d\n", 1, 2);
result: print_record("This is a string! %d %d", 1, 2);
That is, replace the "printf" with "print_record", and remove the trailing "\n".
At first, I use sed to do this task. However, maybe there are some cases like this:
printf("This is a multiple string, that is very long"
" and be separated into multiple lines. %d %d\n", 1, 2);
In this case, I can't use sed to remove the "\n" easily. I heard that Perl can do this job well. But I am new to Perl. So can anyone help me? How can I accomplish this with Perl?
Thanks very much!
A:
What you want to do is not trivial. It requires some parsing to take care of balanced delimiters, quoting, and the C rule that adjacent string literals be joined into a single one. Fortunately, the Perl module Text::Balanced handles a lot of this (Text::Balanced is available in the Perl 'standard' library). The following script should do more or less what you want. It takes one command-line argument and outputs on standard output. You'll have to wrap it inside a shell script. I used the following wrapper to test it:
#!/bin/bash
find in/ -name '*.c' -exec sh -c 'in="$1"; out="out/${1#in/}"; perl script.pl "$in" > "$out"' _ {} \;
colordiff -ru expected/ out/
And here's the Perl script. I wrote some comments, but feel free to ask if you need more explanation.
use strict;
use warnings;
use File::Slurp 'read_file';
use Text::Balanced 'extract_bracketed', 'extract_delimited';
my $text = read_file(shift);
my $last = 0;
while ($text =~ /( # store all matched text in $1
\bprintf # start of literal word 'printf'
(\s*) # optional whitespace, stored in $2
(?=\() # lookahead for literal opening parenthesis
)/gx) {
# after a successful match,
# 1. pos($text) is on the character right behind the match (opening parenthesis)
# 2. $1 contains the matched text (whole word 'printf' followed by optional
# whitespace, but not the opening parenthesis)
# 3. $2 contains the (optional) whitespace
# output up to, but not including, 'printf'
print substr($text, $last, pos($text) - $last - length($1));
print "print_record$2(";
# extract and process argument
my ($argument) = extract_bracketed($text, '()');
process_argument($argument);
# save current position
$last = pos($text);
}
# output remainder of text
print substr($text, $last);
# process_argument() properly handles the situation of a format string
# consisting of adjacent string literals
sub process_argument {
my $argument = shift;
# skip opening parenthesis retained by extract_bracketed()
$argument =~ /^\(/g;
# scan for quoted strings
my $saved;
my $last = 0;
while (1) {
# extract quoted string
my ($string, undef, $whitespace) = extract_delimited($argument, '"');
last if !$string; # quit if not found
# as we still have strings remaining, the saved one wasn't the last and should
# be output verbatim
print $saved if $saved;
$saved = $whitespace . $string;
$last = pos($argument);
}
if ($saved) {
$saved =~ s/\\n"$/"/; # chop newline character sequence off last string
print $saved;
}
# output remainder of argument
print substr($argument, $last);
}
|
Marina
Marina is a small town on the bay of the same name; it is the centre of a municipality that comprises 15 more small settlements.
Local residents have been engaged in fishing and agriculture for a long time. Centuries-old olive trees grow on the surrounding hills; it is therefore not surprising that this region is widely known for its production of olive oil. Marina is dominated by a picturesque fortification, the former mansion of the bishops of Trogir, dating from the 15th century. Above Marina rises Drid hill, which dominates the entire area. It used to be the seat of Drid County, whose ruins can still be seen on the hill.
The Marina Riviera consists of three small, picturesque Dalmatian places, Vinišće, Poljica and Sevid, that will win you over at a glance. The crystal-clear waters, sandy beaches and undiscovered, preserved nature will help you relax from the city noise and get rid of stress. Find your little oasis of peace on one of the many beaches. Numerous archaeological monuments are registered in the area of the municipality, waiting for you to discover them.
Address
Street: Trg Stjepana Radića 1
Postcode: 21222
City: Marina
Country: Croatia
Contact
Telephone: +385 21 889 015
Fax: +385 21 889 015
E-Mail: |
package org.knowm.sundial.jobs;
import org.knowm.sundial.Job;
import org.knowm.sundial.annotations.CronTrigger;
import org.knowm.sundial.exceptions.JobInterruptException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@CronTrigger(cron = "0/20 * * * * ?")
public class SampleJob5 extends Job {
private final Logger logger = LoggerFactory.getLogger(SampleJob5.class);
@Override
public void doRun() throws JobInterruptException {
logger.info("Running SampleJob5.");
// Do something interesting...
}
}
|
package xtof;
import java.util.HashMap;
import java.util.Random;
import edu.stanford.nlp.classify.ColumnDataClassifier;
import edu.stanford.nlp.classify.GeneralDataset;
import edu.stanford.nlp.classify.LinearClassifier;
import edu.stanford.nlp.util.Pair;
public class LinearModel {
LinearClassifier model;
public double[][] getWeights() {
return model.weights();
}
/** used to do 10-fold cross-validation
*
*/
public static GeneralDataset[] getTrainDev(int part, GeneralDataset data) {
GeneralDataset[] res = {null,null};
if (part>9) throw new Error("getTrainDev only supports 10-fold cross-validation for now");
int n = data.size();
int nPerPart = n/10;
int testdeb=nPerPart*part;
int testend=nPerPart*(part+1);
if (part==9) testend=n;
Pair<GeneralDataset, GeneralDataset> splitData = data.split(testdeb,testend);
GeneralDataset trainData=splitData.first;
GeneralDataset devData=splitData.second;
res[0]=trainData;
res[1]=devData;
return res;
}
public static LinearModel train(ColumnDataClassifier cdc, GeneralDataset data) {
LinearModel m = new LinearModel();
m.model = (LinearClassifier) cdc.makeClassifier(data);
return m;
}
public float getSCore(int[] feats) {
double[][] w = model.weights();
double sc=0;
for (int j=0;j<feats.length;j++) {
sc+=w[feats[j]][0];
}
return (float)sc;
}
public void randomizeWeights(float weightPreviousValues) {
double[][] w = model.weights();
Random r = new Random();
for (int i=0;i<w.length;i++)
for (int j=0;j<w[i].length;j++)
w[i][j]=weightPreviousValues*w[i][j]+(1-weightPreviousValues)*(r.nextDouble()-0.5);
}
public TestResult test(GeneralDataset data) {
final int[][] feats = data.getDataArray();
int[] refs = data.getLabelsArray();
double[][] w = model.weights();
TestResult res = new TestResult();
for (int i=0;i<feats.length;i++) {
double sc=0;
for (int j=0;j<feats[i].length;j++) {
sc+=w[feats[i][j]][0];
}
int rec=0;
if (sc<0) rec=1;
res.addRec(refs[i], rec);
}
return res;
}
public static class TestResult {
int ref0rec0=0, ref0rec1=0, ref1rec0=0, ref1rec1=0;
float tp,fp,fn,tn;
public void addRec(int ref, int rec) {
if (ref==0&&rec==0) ref0rec0++;
else if (ref==0&&rec==1) ref0rec1++;
else if (ref==1&&rec==0) ref1rec0++;
else if (ref==1&&rec==1) ref1rec1++;
}
public float getAcc() {
calcStats();
float acc = (tp+tn)/(tp+fp+tn+fn);
return acc;
}
private void calcStats() {
// which class is "negative" ?
int nref0=ref0rec0+ref0rec1;
int nref1=ref1rec0+ref1rec1;
if (nref0>nref1) {
// neg is 0
tp=ref1rec1; fp=ref0rec1; fn=ref1rec0; tn=ref0rec0;
} else {
// neg is 1
tp=ref0rec0; fp=ref1rec0; fn=ref0rec1; tn=ref1rec1;
}
}
public float getF1() {
calcStats();
float prec=tp/(tp+fp);
float reca=tp/(tp+fn);
float f1=(tp+tp)/(tp+tp+fp+fn);
return f1;
}
public String toString() {return "F1= "+getF1()+" n= "+(tp+fp+tn+fn);}
public boolean isSimilar(TestResult r) {
float diff = getF1()-r.getF1();
if (diff<-0.02 || diff>0.02) return false;
return true;
}
}
/**
*
* @param data
* @return the gain obtained in the risk
*/
public float optimizeRisk(GeneralDataset data) {
Random r = new Random();
final int[][] feats = data.getDataArray();
double[] priors = {0.2,0.8};
float[] sc = new float[feats.length];
double[][] w = model.weights();
float prevRisk;
{
for (int i=0;i<sc.length;i++) sc[i]=getSCore(feats[i]);
RiskMachine rr = new RiskMachine(priors);
prevRisk=rr.computeRisk(sc,new RiskMachine.GMMDiag());
}
float riskdeb=prevRisk, riskfin=prevRisk;
TestResult acc = test(data);
System.out.println("SCD iter -1 "+prevRisk+" acc "+acc);
for (int i=0;i<Parms.nitersRiskOptim;i++) {
int wi=r.nextInt(w.length);
w[wi][0]+=Parms.finiteDiffDelta;
w[wi][1]-=Parms.finiteDiffDelta;
float newRisk;
{
for (int ii=0;ii<sc.length;ii++) sc[ii]=getSCore(feats[ii]);
RiskMachine rr = new RiskMachine(priors);
newRisk=rr.computeRisk(sc,new RiskMachine.GMMDiag());
}
float grad=(newRisk-prevRisk)/Parms.finiteDiffDelta;
if (grad!=0) {
w[wi][0]-=grad*Parms.gradientStep;
w[wi][1]+=grad*Parms.gradientStep;
{
for (int ii=0;ii<sc.length;ii++) sc[ii]=getSCore(feats[ii]);
RiskMachine rr = new RiskMachine(priors);
prevRisk=rr.computeRisk(sc,new RiskMachine.GMMDiag());
riskfin=prevRisk;
}
acc = test(data);
System.out.println("SCD iter "+i+" "+prevRisk+" acc "+acc);
}
}
return riskfin-riskdeb;
}
}
|
Humans are actively destroying the earth, every day. Single-use plastics get tossed into landfills where they sit for hundreds of years or worse – in natural habitats where it negatively impacts wildlife. I can’t idly contribute to this destruction, which is why I’m pledging to cut single-use plastics from my life! Unless, of course, I really need to use them.
There are scientific and ethical reasons to stop using disposable plastics that have convinced me to radically restructure my life. For example, my water bottle usage decreased tremendously when I purchased a HydroFlask. But sometimes my HydroFlask gets too heavy in my bag so I’ll stop by CVS for a Dasani. But I try to recycle it, sort of!
Cutting out single-use plastics brought so many positive changes to my life. It’s why I threw out all my old tupperware, shopping bags, and clothes from synthetic fibers in exchange for cute glassware, canvas grocery totes, and brand new (sustainable) clothes!
I read you can reuse pickle jars for storage, which I think is kind of ugly and not my style, but you’re totally free to do that! For the planet!
I mean, sometimes if I order takeout and it arrives in a plastic container, I can’t help that. And it’d be wasteful not to eat it. Sometimes I’m limited in my activism, but oh well. I love pad thai!
And yes: I’ve seen those pictures of the plastic littering the oceans, killing wildlife and polluting our food chain. Can you imagine a little baby turtle choking on a straw? It breaks my beautiful, extraordinarily large heart. I think about those images every time I bravely use a reusable ziploc bag. They require cleaning after each use, which I don’t really have time to do during the week, so sometimes I’m forced to use disposable bags.
We all need to feel the urgency when transitioning from throwaway culture to sustainability.
The most unwilling person I’ve encountered about this is my landlord, who only accepts paper checks, and it’s so sad. How am I supposed to write a check without a plastic pen? I haven’t paid rent in three months because someone needs to hold his stubbornness accountable.
Plastic takes a long time to break down, anywhere from 20 to 1,000 years. Whenever I have parties, I ask people to bring cans because 70% of aluminum is made from recyclable material. My friends and I hate beer, though, so I pour some mixed drinks into solo cups, but only for myself! Minimize plastic waste!
Laws against use of plastic bags help, but it’s not enough. In order to create change, we need to be passionate leaders committed to a better tomorrow, unless you need to use a plastic thing once, which sometimes I really need to, but I do it as an informed consumer. Join me in saving our little Earth home – and the turtles! |
Cheap Jordan Shoes Free Shipping
UPDATE: New "Red" Nike Air Jordan 11 Rumored to be a Carmelo Anthony PE. Anytime a fresh Nike Air Jordan 11 surfaces, it causes a frenzy, and the first reaction to this red-based colorway has been no different. Rumored to be a special makeup for Carmelo Anthony, the shoe picks up on the recent trend of red-dominated sneakers. Jumpman branding on the tongue and heel is done in white, while black finds a home on the inside. Finishing off the look below is a white midsole and an icy translucent outsole with white herringbone. What is fascinating is that clear box photos for this particular shoe are labeled with a size 14 tag, when Melo has pretty much always worn a 15. Maybe there’s a bit more to this story than we’re aware of at the minute. That aside, how do you feel about the red Nike Air Jordan 11 Retro? Would you get a pair if Jordan Brand ever decided to release this colorway? We’ll keep you updated with any new information here at Sole Collector. |
Attainment, Completion, and the Trouble in Measuring Them Both
Here’s a seemingly simple question: How have the educational-attainment rates of various groups of Americans changed over the years?
It’s a question with considerable impact. For example, the answer could help determine how well the country’s colleges and universities are meeting its labor needs, and how equitable education is across various demographic groups.
Integrated Postsecondary Education Data System (Ipeds), for a wide variety of college data, including graduation rates
Digest of Education Statistics, which is largely drawn from Ipeds
National Postsecondary Student Aid Study (NPSAS), for student aid
National Student Clearinghouse (NSC), which tracks students as they transfer between colleges
Survey of Income and Program Participation (SIPP), which is a household survey that includes educational-attainment data
From differing methodologies to varying accuracy of the underlying data, it’s not hard to imagine the difficulty in reconciling those disparate databases. Or, as the report puts it, “some of the differences in these data sets lead to systematic differences in the results they generate.”
Determining Definitions
Sometimes the challenges in unifying data start with developing a standard set of definitions. While some people might use “attainment” and “completion” interchangeably, they are not synonymous. Attainment measures “the highest level of education that individuals have completed,” while “completion” describes “how many people finish the programs they begin.”
One could hold the completion rate steady, but expand enrollment in order to increase attainment. Likewise, one could hold enrollment steady, but expand completions to increase attainment. For example, if 1,000 students enroll and 600 finish, the completion rate is 60 percent; enrolling 2,000 students at that same 60-percent rate yields 1,200 graduates and, over time, higher attainment in the adult population, even though completion never improved. The two are not interchangeable.
Even terms like “college graduate” and “undergraduate degree” can be open to interpretation. In 2011-12, 3.8 million students completed their undergraduate studies. Yet fewer than half — 43 percent — earned bachelor’s degrees. The other credentials were short-term certificates and associate degrees. That matters because enrollment rates include students at two-year institutions, many of which don’t offer bachelor’s degrees. Their students, therefore, could never “attain” a bachelor’s degree, even though they could certainly complete their degrees.
Demographic Shifts
For those seeking a look at demographic trends, beware. How people identify has changed over the years, and different sources use different methodologies. For example, before 2008, Ipeds used a different methodology than did the Census Bureau, though now the two are mostly aligned. Furthermore, some reporting combines ethnic groups, which can mask variances within one of them. For example, the categories of whites or blacks sometimes include people of Hispanic origin; other times they do not. Likewise, sometimes Asians and Pacific Islanders are grouped together, obscuring the differences between the groups.
Survey Says
Some of the data are culled from institutions; other statistics come from individuals. Some data are complete censuses of everyone involved; other data come from selected and varying samples. The Current Population Survey is based on a sample that excludes people residing in military barracks, prisons, and old-age homes. The American Community Survey is based on samples that include those populations.
The decennial census attempts to gather information about every individual, just as Ipeds attempts to gather data from every provider of postsecondary education.
Additionally, different populations have characteristics the data might not reflect. For example, many older age groups have higher attainment levels than do younger groups. That pattern might suggest a drop in education levels among younger populations. In reality, as the report explains, it shows that many people earn their degrees well after the traditional college age.
Keeping Track
Tracking the data over space and time raises all sorts of problems. For example, how many degrees a state awards doesn’t necessarily correspond to the distribution of degrees within that state, because graduates can and do move after college. As the report points out, “California ranks 21st in the percentage of adults between 25 and 44 years old with at least a bachelor’s degree (32 percent), but 46th in the number of bachelor’s degrees awarded in 2009-13 relative to the number of 18- to 24-year-olds (4 percent).”
Furthermore, tracking students over time can be tricky if they transfer. For privacy reasons, Ipeds data don’t track students who transfer away from the institutions where they started. By contrast, the National Student Clearinghouse does track most of the students who transfer, but their data cannot be broken out by institution.
There have been calls to resolve some of those problems with better data collection, such as rescinding the ban on a federal unit-record system, but until that happens scholars will just have to remember the data researcher’s motto: caveat emptor! |
Association of angiotensin-converting enzyme intron 16 insertion/deletion polymorphism with history of foetal loss.
The angiotensin-converting enzyme (ACE) intron 16 insertion/deletion (I/D) polymorphism is associated with ACE activity and has been discussed as a risk factor for pre-eclampsia. Disturbances of uteroplacental circulation are involved in the pathogenesis of pre-eclampsia. In this study, we tested whether the ACE I/D genotype is associated with history of foetal loss (FL) or uteroplacental dysfunction (UPD). ACE I/D genotype was determined in 312 women presenting with a history of FL and 112 women admitted because of UPD. The association of the ACE I/D genotype with FL or UPD was assessed in a case-control study using 527 patients with diagnoses other than FL or UPD. To exclude potential biases due to associations of this genotype with other diagnoses, we additionally performed a case-control study using 553 healthy controls. ACE I/D genotype was significantly associated with history of FL in both case-control studies (patient controls: odds ratio 1.52, p<0.02; healthy controls: odds ratio 1.48, p=0.02). There was no evidence for allele-dose dependency. No association of the ACE I/D genotype with UPD could be detected. The ACE I/D genotype exhibits a statistically significant association with a history of FL. These results corroborate an involvement of the renin-angiotensin system in pregnancy complications. |
FOXBOROUGH, Mass. – New England Revolution II will host Orlando City B in its inaugural USL League One match on March 28 at Gillette Stadium. Led by Head Coach Clint Peay, Revolution II will compete alongside 14 clubs – including six other teams owned and operated by MLS clubs – in the 28-game regular season from March through October, with playoffs slated for the fall, culminating in the League One Cup.
The full list of USL League One openers is available online at USLsoccer.com.
2020 USL League One Regular Season: New England Revolution II Season Opener
Date: Saturday, March 28
Opponent: Orlando City B
Venue: Gillette Stadium (Foxborough, Mass.)
Revolution II is a critical piece of the club's continued effort to elevate player development, as it bridges the gap between the Revolution Academy and the first team. The team will train at the new Revolution Training Center and play regular-season home games at Gillette Stadium, with the potential for additional matches to be played throughout the region. Operating under the leadership of Sporting Director Bruce Arena, in conjunction with Technical Director Curt Onalfo and Head Coach Clint Peay, the Revolution will unveil additional details, such as the team's full coaching staff, schedule, and roster, at a later date prior to the 2020 season. |
Q:
Java inheritance: which type should I use for the object passed to a method?
I am trying to structure a set of classes using inheritance.
The classes I have created:
main
library
item
person
books
MusicCD
Movies
Magazine ....etc
Here is the Library class:
import java.util.ArrayList;
public class Library {
/**
* itemList contains the List of all items in the library.
*/
private ArrayList<Item> itemList;
/**
* count of all the items in the library.
*/
private int count;
public Library(){
    itemList = new ArrayList<Item>(); // must be initialized, or addItem() throws NullPointerException
}
/**
* Add a new item to the list of items.
* @param newItem The new item to be added to the list.
* @throws unisa.library.DuplicateItemException
*/
public void addItem(Item newItem) throws DuplicateItemException {
    if (itemList.contains(newItem)) // reject duplicates; assumes the exception has a no-arg constructor
        throw new DuplicateItemException();
    itemList.add(newItem);
}
}
The Item class:
public class Item extends Person{
private String id;
private String name;
private boolean loaned;
//borrower?
private double cost;
public Item(String id, String name, boolean loaned, String borrower, double cost) {
super(borrower);
this.id = id;
this.name=name;
this.loaned = loaned;
this.cost = cost;
}
public String getID(){
    return id;
}
public String getName(){
    return name;
}
public boolean getLoaned(){
    return loaned;
}
public double getCost(){
    return cost;
}
}
The Person class:
public class Person {
private String name;
private String address;
public Person(String name, String address){
this.name = name;
this.address = address;
}
public Person(String name){
this.name = name;
}
}
Book, Movie, and MusicCD are all essentially identical:
public class Book extends Item{
private String author;
public Book(String author, String id, String name, boolean loaned, String borrower, double cost){
super(id, name, loaned, borrower, cost);
this.author = author;
}
}
I must use these classes, but I am not sure whether I have applied inheritance correctly.
Now the problem: from the main class ("Do the Test") they instantiate a Library object
Library l1 = new Library();
and call the method
l1.addItem(new Magazine(Magazine.frequency.day, "ID001","Vanity Not So Faire", false,"New York", null, 5.95));
Here they are passing an object of the Magazine class (identical to the Book class), while in the method declaration I have used Item as the parameter type. addItem is required to accept any item (book, magazine, DVD, etc.). What type should I use in the method declaration (addItem(?)), or is there something wrong with how I have structured the classes?
A:
The first thing that looks wrong: why is an Item extending a Person?
The next one: if Book, Magazine, and all the other classes have identical code, that defeats the point of re-usability.
So, to summarize:
The Person class need not be extended by any other class, but since a person may borrow items, it can hold a list of Item references:
public class Person {
private ArrayList<Item> itemList;
//more person specific code
}
The Item class can have a Person reference, like:
public class Item {
Person borrower;
//More Item specific code
}
Item should be extended by a class, say Media, and then you can construct specialized classes that inherit from Media:
public class Media extends Item {
//media specific code
}
public class Movie extends Media {
public enum Quality {FULL_HD, HD};
//more Movie specific code
}
This might give you a little idea. As for your concrete question: declaring the parameter as Item (addItem(Item newItem)) is already correct; any subclass, such as Magazine or Book, can be passed to it, thanks to polymorphism.
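A minimal sketch of that last point (the Main class is hypothetical, and the Magazine constructor is simplified for illustration; your real one takes an extra argument):
public class Magazine extends Item {
    public enum Frequency { DAY, WEEK, MONTH }
    private Frequency frequency;

    public Magazine(Frequency frequency, String id, String name,
                    boolean loaned, String borrower, double cost) {
        super(id, name, loaned, borrower, cost);
        this.frequency = frequency;
    }
}

public class Main {
    public static void main(String[] args) throws DuplicateItemException {
        Library l1 = new Library();
        // Both calls compile against addItem(Item): a Magazine is an Item,
        // and so is a Book, so the method signature needs no change.
        l1.addItem(new Magazine(Magazine.Frequency.DAY, "ID001",
                "Vanity Not So Faire", false, "New York", 5.95));
        l1.addItem(new Book("Some Author", "ID002", "A Title",
                false, null, 12.50));
    }
}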
|
Medical imaging is the technique, process and art of creating visual representations of the interior of a body for clinical analysis and medical intervention. Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging may also be used to establish a database of normal anatomy and physiology to make it possible to identify abnormalities.
One example of medical imaging is ultrasonography, a technique based on ultrasound waves that helps physicians visualize the structures of the internal organs of the human body.
In some cases it is difficult to identify the boundaries of abnormal regions in the image, hence segmentation may be desired. Manual segmentation methods may require close attention from the sonographer, may suffer from poor accuracy, are prone to human error, and may be time-consuming. Automatic segmentation of medical images can help physicians by locating abnormal regions in the image. |
Q:
Programmatically open fullscreen webView on click
I want to create something like a plugin for applications. When a specific button is clicked, a WebView should be opened on top of the app (activity). The whole logic for this WebView should live in a .jar library. It should not open a new activity, because then I would need to copy a new layout (.xml file) into the project, and I do not want to do that; adding this new library (plugin) should be as simple as possible. I also should not change the existing layout.
Is there any way to open this WebView by just adding a few lines of code to the program and then controlling everything from the library, without making changes to the layouts (.xml)?
Updated solution:
I solved this by launching a new activity from the library, but that activity does not load an .xml layout; it builds a WebView in code instead.
Project activity:
public void onClick(View v) {
startActivity(new Intent(MainActivity.this, LibraryActivity.class));
}
Library activity:
public class LibraryActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Build the WebView entirely in code, so no .xml layout is needed.
        WebView web = new WebView(this);
        web.setWebViewClient(new WebViewClient()); // keep navigation inside the WebView
        web.loadUrl("http://www.google.com");
        setContentView(web);
    }
}
A:
Change your Activity's layout with setContentView(webView), and after the registration is done, change the layout back.
Edit:
After setting your layout via setContentView(R.layout.activity_main), invalidate your View:
ViewGroup vg =(ViewGroup) findViewById(R.id.Yourcontainer);
vg.invalidate();
|
Digimon Adventure and Digimon Adventure 02 are the first two seasons of the Digimon anime series. Digimon Adventure originally aired from March 7, 1999 – March 26, 2000 whilst Digimon Adventure 02 ori... |
Shop
Stemple Creek Ranch Ground Beef (1 lb.)
Shipped To Your Door
Our 100% grass-fed and grass-finished ground beef is our biggest seller and adds an amazing flavor profile to spaghetti sauce, lasagna, tacos, meatballs, meatloaf, and of course hamburgers! For the perfect amount of juiciness and flavor, we use an 80/20 lean-to-fat ratio on all of our ground beef.
Size: 1 lb.
Shipping rates are based on the shipping address for all individual cuts. |
Will tuberculosis of the brain infect my child?
Q:My wife has had severe headaches, vomiting, vision problems and dizziness for the last 6-7 months. She has also suffered from high blood pressure in the range of 160/110 and used to take Betaloc 50 mg. Her reports are as follows:
Impression of biopsy from right cerebellum - Necrotising granulomatous inflammation consistent with tuberculosis, right cerebellum.
Present state: She has been discharged from hospital and advised bed rest. She no longer complains of the headache, but has some pain on movement below the chest area. Was the treatment right? If the brain was affected by tuberculosis, was the operation on the right cerebellum necessary? What may be the cause of the infection in the brain? Will it affect our two-and-a-half-year-old daughter? Will it affect our marriage? Does tension play any role in such a situation? The following medicines have been advised:
a) R-Cinex (600 mg)-daily once
b) P-Zide (750 mg)- daily twice
c) Tab Zofer(4 mg)- daily thrice
d) Injection Streptomycin (.75 gm)- daily once
e) Espra (40 mg)- daily once.
f) Benadon (40 mg)- daily once.
No BP medicines advised.
The doctor has advised an LFT within two months. Should we follow his advice?
A:The best course in her case was the operation together with full medical treatment, as given. Surgery was necessary. TB is very common in our country; about 85% of people carry the infection and are usually able to control it. Sometimes it escapes the body's defences and flares up. TB in the brain is not infectious to the people around her. I am sure a chest X-ray of your wife has been done; only TB of the lungs is infectious to others. There will be no effect on your marital relations. Tension plays no significant part in its aetiology. |
Epithelioid cell histiocytoma
Epithelioid cell histiocytoma is a rare skin condition that is considered to be a variant of a dermatofibroma.
See also
Pleomorphic lipoma
List of cutaneous conditions |
Holy Crap
$34.95
Holy Crap is the perfect breakfast food. This slow-burning protein-rich rocket fuel leaves you satisfied until lunch. Mix with yogurt or your choice of milks.
Our three main seed ingredients are some of the oldest perfect foods known to humans.
Our key ingredient, chia (Salvia hispanica L.), is a recently revived oil-seed crop from the Americas that was once more valuable than gold to the Aztecs. The Tarahumara, the greatest long distance runners on the planet, have had a long history of using this slow-burning rocket fuel for athletes and warriors alike. |
Q:
How do I detect that all frames have returned responses from a contentscript in a Chrome extension?
I want my Chrome extension to rerun a content script for all open tabs prior to viewing the popup window, to gather the most recent data about each webpage (it sends a message to the background page). My problem is that in order to get full access to each iframe, I need to set all_frames to true. This means that prior to showing the popup window, I need to wait until I have received a message from each frame.
My problem is that I am not sure how to determine how many frames there are. One approach I looked at would be to detect the frame count from within the content script, but there is an open bug which indicates that you can't request that right now. Another is to just wait for a number of responses equivalent to what I got when I first navigated to the page, but it is possible that due to AJAX calls the number of frames has increased since then. Finally, I was hoping that the response to chrome.extension.sendRequest would include such information, but it does not.
Your help is appreciated.
A:
I suggest that you always run the content script from the manifest, so that the content script will run every time a new frame loads. Then, have the content script open a port to the extension, so that every open frame has a port to the background. Then it's up to the background page to keep track of all the open ports for each tab, and send messages to the desired ports.
|
The saddest part of a divorce is when you have children, isn’t it? I mean, who’s going to get who for Christmas, Thanksgiving, and summer holidays? Then when one partner gets remarried, things become even more difficult. With Legendary Entertainment splitting from Warner Brothers and shacking up with Universal, dividing the shared assets of the two studios has become quite a custody battle. Seventh Son, a fantasy epic starring Jeff Bridges, is among the first films that Universal and Legendary get in the battle.
Seventh Son was co-produced by Warner Brothers and Legendary, but now goes with Legendary to meet its new mom, Universal Pictures. This is just the latest film to be divided in the Warner Brothers/Legendary split. So far Pacific Rim and Godzilla remain with Warner Brothers, while Legendary gave up their share of Batman vs. Superman in exchange for a piece of Interstellar.
Seventh Son has not generated the most positive buzz as yet, calling into question who's getting the better end of the deal. Titles and release dates for Seventh Son have changed several times so far, and the release date will probably change again now that Universal has ahold of it. The trailer, which I saw attached to Pacific Rim, does not inspire confidence. It looks like a mix of Eragon, Season of the Witch, and Snow White and the Huntsman.
According to the official plot synopsis of Seventh Son, Jeff Bridges plays Master Gregory, a knight who once imprisoned the witch Mother Malkin (Julianne Moore, wearing one of Charlize Theron’s old costumes). Mother Malkin has escaped, plotting vengeance, and Master Gregory must find and train a new apprentice ‘before the next blood moon’ (I’m not kidding you) to fight her dark magic. The apprentice is played by Ben Barnes (Prince Caspian in The Chronicles of Narnia films), the seventh son of a seventh son. The only bright light that I can see in all of this is that Seventh Son was directed by Sergei Bodrov, who earned an Oscar nomination for his film Mongol.
So Universal and Legendary walk away with Seventh Son, while Warner Brothers still has ahold of Batman vs. Superman? I don’t know; I think someone’s getting a raw deal. |
Early neurological deterioration in acute ischaemic stroke: predictors, mechanisms and management.
Early neurological deterioration (END) in acute ischaemic stroke is a common event. The underlying mechanisms are heterogeneous. The clinical predictors of END include severity of the initial stroke, large vessel occlusion, diabetes mellitus, hypotension, and atrial fibrillation. Serial observations and detailed assessment by trained staff in specialised stroke units are key to the successful management of these patients. Advances in brain and vascular imaging have provided insight into the underlying mechanisms, enabling clinicians to use preventative and therapeutic interventions specifically targeted at them, though several questions still remain unanswered. END has potentially serious consequences for the patient's short-term (morbidity and death) and long-term (recovery from stroke) outcomes. Therefore, attempts to prevent and treat END should be made promptly and aggressively. |
Noncytotoxic IgE-mediated release of histamine and serotonin from murine mastocytoma cells.
Cultured murine mastocytoma (AB-CBF1-MCT-1) cells were stimulated to release endogenous or incorporated histamine or serotonin by an IgE-mediated mechanism without loss of viability. Stimulation was achieved by incubation of the cells with rat IgE-anti-IgE, rat IgE-anti-light chain, fluoresceinated rat IgE-anti-fluorescein, IgE-enriched mouse anti-ovalbumin-ovalbumin, or covalently linked dimers of rat IgE, at doses similar to those optimal for normal peritoneal mast cells. Active cell metabolism and Ca++ were required to obtain release. Despite the latter, no dose of the calcium ionophore, A23187, could be found which caused release without concomitant cytotoxicity. Phosphatidylserine did not enhance release. |
With no Major League season in sight anytime soon, I have been thinking about various Royals roster questions, especially if no season occurs this year. While that would be massively disappointing to Royals fans as well as baseball fans everywhere, a missed season would also affect many Royals players throughout the organization (i.e. both Major and Minor League players). While there are some obvious candidates when it comes to "most affected," such as Ian Kennedy and Alex Gordon, who will be free agents after this year (I think Kennedy is more affected, for I believe the Royals would bring back Gordo for another year if they don't play in 2020), there are some who are flying under the radar.
Salvador Perez is the prime example of that latter case.
Now, Salvy is an interesting player because he will not be a free agent until after the 2021 season. So in reality, the Royals have two more seasons with the 29-year-old catcher before Dayton Moore and the organization have to make a decision on Salvy, who is currently the second-highest paid player on the team in 2020 (behind only Kennedy). But, with this season in question, and Salvy potentially missing two full years of baseball, Moore and the Royals may have to start planning what to do when it comes to the backstop position after 2021.
Which makes Royals fans wonder: do the Royals view Salvy as their Yadier Molina, who is considered the heart and soul of the St. Louis Cardinals and has been with the Cardinals for nearly 16 seasons? Or will the Royals move on from Salvy in 2021, and perhaps look to a younger option and try to build the roster in his absence, since a lot of money will be available to the Royals after Salvy’s contract comes off the books?
Let’s take a look at what Salvy offers the Royals and whether or not Moore will include the Royals star in the club’s plans long term.
The Royals weren’t expected to do much in 2019. After all, they went 59-103, only a one-game improvement from their 58-104 mark the year before in 2018. However, the loss of Salvy late in 2019 Spring Training seemed to have a strong impact on the club from the start. The Royals, though talented, didn’t seem to have the strong clubhouse leadership in 2019 that they had in the past, and undoubtedly, that was probably due to Salvy’s extended absence. While Gordo is a great leader by example, and some other players, like Hunter Dozier and Whit Merrifield stepped up, they couldn’t replicate the clubhouse presence that Salvy brought in seasons prior.
However, in addition to leadership, Royals fans have to wonder if Salvy’s absence had an effect on a starting pitching staff that struggled immensely in 2019. Last year, the Royals pitching staff ranked 27th in WAR and FIP, according to Fangraphs. While the Royals staff talent-wise wasn’t much to shout about it, the rotating carousel behind the plate probably didn’t help things, as the Royals had Martin Maldonado, Cam Gallagher, Meibrys Viloria, and Nick Dini all put time in as Royals catcher. Thus, it’s unlikely that the Royals staff gained much momentum or a rhythm over the course of the year, especially with so many catchers suiting up in Kansas City a season ago.
Case in point: the Royals actually ranked 20th in starting pitching WAR and 22nd in FIP in 2018, according to Fangraphs. Those are much better metrics than last year's totals, even though the 2018 team was worse overall. And the common denominator? Consistency behind the plate, as Salvy caught 831 of the 1,431 total innings caught by Royals catchers in 2018, a 58 percent mark. Last year? Those innings were far more spread out, as Maldonado caught 604 innings (42 percent), with Gallagher catching 325 innings (22 percent), and Viloria catching 345 innings (24 percent). Without a "set" catcher, let alone an All-Star one like Salvy, it is not surprising that the Royals starting pitching didn't produce consistently on the mound a season ago.
And while Salvy’s pitch-calling and chemistry with starting pitchers was widely missed last year, the lack of Salvy’s bat in the lineup didn’t help things either. In 2018, the Royals ranked 22nd in baseball in terms of wRC+ with a mark of 76, and they also ranked 2nd in catcher home runs with 30, which was mostly helped by Salvy’s 27 that season. In 2019, the Royals regressed significantly, ranking 25th in wRC+ with a mark of 65, nearly 11 points worse. Furthermore, the Royals lacked any kind of power from their catcher position, as they ranked 27th in homers (12 total) and 26th in ISO.
Safe to say, the Royals missed Salvy in more ways than one in 2019. And thus, there was reason to hope that the Royals could overachieve in 2020 with Salvy’s return behind the plate and in the lineup. After all, the Royals experienced breakout seasons from hitters like Jorge Soler and Dozier. The return of Salvy to go along with Soler and Dozier and a consistent Whit Merrifield and Gordo? Well, it was possible to think that the Royals could have had the most underrated lineup in the AL Central in 2020.
Of course, they still can…the season isn’t officially cancelled just yet, even if it doesn’t look good.
With this information being known, it seems obvious that Moore should sign Salvy to an extension after 2021. He'll be 31, still relatively young, and after recovering all of 2019, he may have an extra year or two in the tank thanks to that year off. After all, catcher is one of the most physically punishing positions in baseball, so in actuality, a season off may have benefited Salvy in terms of extending his career, especially behind the plate, which is where he seems to want to be long term (though there has been some talk about him moving to first or DH). So a fresher Salvy in 2020 and 2021 could only mean good things: more innings behind the plate, more production at the dish, and better leadership in the clubhouse and with the starting rotation.
Unfortunately it’s just not that easy.
First off, the biggest reservation about Salvy may be his lackluster plate discipline, as Salvy is known for swinging at anything and everything around the strike zone. While Salvy did hit 27 home runs in 2018, he also had a swinging strike percentage of 12.8 percent, the highest of his career, according to Fangraphs. In fact, Salvy has become less and less disciplined as a hitter over his career, which is not a good sign: typically, hitters trend in the opposite direction. In 2013, Salvy had a swinging strike percentage of 6.2 percent, and nearly six years later, that percentage had doubled. That is further compounded by his getting worse at chasing pitches outside the strike zone, as he went from swinging at 36.4 percent of pitches outside the zone in 2013 to 48.4 percent in 2018.
Considering Salvy hasn’t face any pitching in a year, it seems likely that Salvy’s whiff percentage will go up in 2020 (or 2021 if we miss the year). The question of course is how much will it go up, and how will it affect his other metrics (specifically his power ones)?
The next big issue with Salvy is his defense, which is a bit of a mixed bag. Salvy has a gun, as he gunned down 25 runners in 2018, which was the fourth-highest mark in the league. However, when it comes to advanced metrics, Salvy’s resume is a lot more questionable, as he ranked 3rd worst in frame runs out of MLB catchers with 750 or more innings in 2018. His framing isn’t a one-year thing either, as even before 2018, Salvy has often rated as one of the worst framers in the league.
Yes, Salvy brings a lot of power to the plate as a Royals catcher. Furthermore, he may have an arm and a good relationship with his pitchers. That being said, it will be hard for him to continue behind the plate long term in Kansas City if his framing continues to be so bad in the future. The Royals can’t give away strikes and put base runners on because of poor framing, especially with a starting staff that isn’t expected to be much better in 2020 than they were last season.
Catcher is one of the more interesting positions in baseball when it comes to how teams invest in their starting catcher. If a team has a really good one, like a Buster Posey, Joe Mauer, or Yadier Molina, they tend to invest large and long-term into that particular catcher. However, if they feel like that catcher is “on the fence” or is not quite a star player, they’re willing to part ways fast, and go with more defensive-oriented catchers who can save runs with their framing and defense rather than their bat. Hell, just look at how long a career Jeff Mathis has had in baseball even though he’s a mediocre hitter at best.
The big question Moore and the Royals will have to ask themselves is this: does Salvy mean as much to this Royals team as Yadier Molina does to the Cardinals? While that is a specific reference, the correlations are there: Yadier is the face of the franchise in St. Louis, and Salvy is the same in Kansas City. Much like Yadier was the last remnant of the Cardinals World Series title in 2011, Salvy most likely will be in the same boat by 2021. And while Yadier leads the clubhouse and is the heart and soul of the Cardinals, one can say the same about Salvy with the Royals.
Of course, Salvy isn’t at that Molina point just yet. Molina is currently 37 years-old, so he has some age and seasons on the younger Royals catcher, who isn’t even 30 just yet. But, the big question is this: will Moore view Salvy like the Cardinals view Molina come contract time in the Winter of 2021? It is likely that Salvy will look for a 4-5 year deal after his contract expires, which makes sense because at 31-years-old, Salvy will have some leverage when it comes to his age. But considering his recent injury history, size, and flaws in his game (especially when it comes to plate discipline and framing), it’s still unclear if Moore will be eager to commit such a long-term deal to Salvy, especially if the Royals aren’t necessarily showing signs of contention by that time.
It’s still a while, but the time will come sooner than Royals fans will think, especially if this season is cancelled. If the Royals don’t play in 2020, that only gives Salvy one year to prove he is worth a long-term extension, and Moore one year to figure out what that contract number is and if Salvy is worth keeping in Kansas City long term. There is no question the kind of fan support and enthusiasm Salvy brings to the ballpark, and how much he is beloved in KC, especially after helping bring the Royals a World Series title in 2015.
That being said, baseball is a business, and in business, hard, unpopular decisions have to be made. It will be interesting to see if Moore will make an unpopular decision with Salvy, especially if he regresses significantly this year (if they play) or next.
I guess it all depends if Moore views Salvy as the Royals’ Molina. If he doesn’t, it’s likely that Salvy will be playing for another club in 2022.
If Moore does believe in that comparison though, it is likely that not only will Salvy get an extension, but also spend his entire career as a Royal. And as a result, Salvy will bring the same infectious energy and leadership to the Royals clubhouse and Kauffman Stadium for years to come, much to the delight of Royals nation.
Furthermore, it also will be likely that Royals fans will see Salvy’s jersey retired along with George Brett and Frank White in the Royals Hall of Fame if Salvy spends his entire career in Kansas City.
I guess Royals fans will know for sure by 2022. |
This ebook is available for the following devices:
iPhone
iPad
Android
Kindle Fire
Windows
Mac
Sony Reader
Cool-er Reader
Nook
Kobo Reader
iRiver Story
Our lives are composed of millions of choices, ranging from trivial to life-changing and momentous. Luckily, our brains have evolved a number of mental shortcuts, biases, and tricks that allow us to quickly negotiate this endless array of decisions. We don’t want to rationally deliberate every choice we make, and thanks to these cognitive rules of thumb, we don’t need to.
Yet these hard-wired shortcuts, mental wonders though they may be, can also be perilous. They can distort our thinking in ways that are often invisible to us, leading us to make poor decisions, to be easy targets for manipulators…and they can even cost us our lives.
The truth is, despite all the buzz about the power of gut-instinct decision-making in recent years, sometimes it’s better to stop and say, “On second thought . . .”
The trick, of course, lies in knowing when to trust that instant response, and when to question it. In On Second Thought, acclaimed science writer Wray Herbert provides the first guide to achieving that balance. Drawing on real-world examples and cutting-edge research, he takes us on a fascinating, wide-ranging journey through our innate cognitive traps and tools, exposing the hidden dangers lurking in familiarity and consistency; the obstacles that keep us from accurately evaluating risk and value; the delusions that make it hard for us to accurately predict the future; the perils of the human yearning for order and simplicity; the ways our fears can color our very perceptions . . . and much more.
Along the way, Herbert reveals the often-bizarre cross-connections these shortcuts have secretly ingrained in our brains, answering such questions as why jury decisions may be shaped by our ancient need for cleanliness; what the state of your desk has to do with your political preferences; why loneliness can literally make us shiver; how drawing two dots on a piece of paper can desensitize us to violence… and how the very typeface on this page is affecting your decision about whether or not to buy this book.
Ultimately, On Second Thought is both a captivating exploration of the workings of the mind and an invaluable resource for anyone who wants to learn how to make smarter, better judgments every day. |
/* Office 2007 cracker patch for JtR. Hacked together during March of 2012 by
* Dhiru Kholia <dhiru.kholia at gmail.com> */
#if FMT_EXTERNS_H
extern struct fmt_main fmt_office;
#elif FMT_REGISTERS_H
john_register_one(&fmt_office);
#else
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <errno.h>
#include <openssl/aes.h>
#ifdef _OPENMP
#include <omp.h>
#ifndef OMP_SCALE
#define OMP_SCALE 4
#endif
#endif
#include "arch.h"
#include "misc.h"
#include "common.h"
#include "formats.h"
#include "params.h"
#include "options.h"
#include "unicode.h"
#include "sha.h"
#include "sha2.h"
#include "johnswap.h"
#include "office_common.h"
#include "sse-intrinsics.h"
#include "memdbg.h"
//#undef SIMD_COEF_32
//#undef SIMD_COEF_64
#define FORMAT_LABEL "Office"
#define FORMAT_NAME "2007/2010/2013"
#define ALGORITHM_NAME "SHA1 " SHA1_ALGORITHM_NAME " / SHA512 " SHA512_ALGORITHM_NAME " AES"
#define BENCHMARK_COMMENT ""
#define BENCHMARK_LENGTH -1
#define PLAINTEXT_LENGTH 125
#define BINARY_SIZE 16
#define SALT_SIZE sizeof(*cur_salt)
#define BINARY_ALIGN 4
#define SALT_ALIGN sizeof(int)
#ifdef SIMD_COEF_32
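/*
 * Note on the GETPOS macros below: they map a (byte offset i, candidate
 * index) pair to the interleaved SIMD input layout expected by
 * SSESHA1body()/SSESHA512body(). Candidates are interleaved word by word
 * across SIMD_COEF_32 (or SIMD_COEF_64) lanes, and bytes are stored
 * big-endian within each word, hence the (3-((i)&3)) / (7-((i)&7)) terms.
 */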
#define GETPOS_1(i, index) ( (index&(SIMD_COEF_32-1))*4 + ((i)&(0xffffffff-3))*SIMD_COEF_32 + (3-((i)&3)) + (unsigned int)index/SIMD_COEF_32*SHA_BUF_SIZ*SIMD_COEF_32*4 )
#define GETPOS_512(i, index) ( (index&(SIMD_COEF_64-1))*8 + ((i)&(0xffffffff-7))*SIMD_COEF_64 + (7-((i)&7)) + (unsigned int)index/SIMD_COEF_64*SHA_BUF_SIZ*SIMD_COEF_64*8 )
#define SHA1_LOOP_CNT (SIMD_COEF_32*SIMD_PARA_SHA1)
#define SHA512_LOOP_CNT (SIMD_COEF_64 * SIMD_PARA_SHA512)
#define MIN_KEYS_PER_CRYPT (SIMD_COEF_32 * SIMD_PARA_SHA1 * SIMD_PARA_SHA512)
#define MAX_KEYS_PER_CRYPT (SIMD_COEF_32 * SIMD_PARA_SHA1 * SIMD_PARA_SHA512)
#else
#define SHA1_LOOP_CNT 1
#define SHA512_LOOP_CNT 1
#define MIN_KEYS_PER_CRYPT 1
#define MAX_KEYS_PER_CRYPT 1
#endif
static struct fmt_tests office_tests[] = {
{"$office$*2007*20*128*16*8b2c9e8c878844fc842012273be4bea8*aa862168b80d8c45c852696a8bb499eb*a413507fabe2d87606595f987f679ff4b5b4c2cd", "Password"},
/* 2007-Default_myhovercraftisfullofeels_.docx */
{"$office$*2007*20*128*16*91f095a1fd02595359fe3938fa9236fd*e22668eb1347957987175079e980990f*659f50b9062d36999bf3d0911068c93268ae1d86", "myhovercraftisfullofeels"},
/* 2007-Default_myhovercraftisfullofeels_.dotx */
{"$office$*2007*20*128*16*56ea65016fbb4eac14a6770b2dbe7e99*8cf82ce1b62f01fd3b2c7666a2313302*21443fe938177e648c482da72212a8848c2e9c80", "myhovercraftisfullofeels"},
/* 2007-Default_myhovercraftisfullofeels_.xlsb */
{"$office$*2007*20*128*16*fbd4cc5dab9b8e341778ddcde9eca740*3a040a9cef3d3675009b22f99718e39c*48053b27e95fa53b3597d48ca4ad41eec382e0c8", "myhovercraftisfullofeels"},
/* 2007-Default_myhovercraftisfullofeels_.xlsm */
{"$office$*2007*20*128*16*fbd4cc5dab9b8e341778ddcde9eca740*92bb2ef34ca662ca8a26c8e2105b05c0*0261ba08cd36a324aa1a70b3908a24e7b5a89dd6", "myhovercraftisfullofeels"},
/* 2007-Default_myhovercraftisfullofeels_.xlsx */
{"$office$*2007*20*128*16*fbd4cc5dab9b8e341778ddcde9eca740*46bef371486919d4bffe7280110f913d*b51af42e6696baa097a7109cebc3d0ff7cc8b1d8", "myhovercraftisfullofeels"},
/* 2007-Default_myhovercraftisfullofeels_.xltx */
{"$office$*2007*20*128*16*fbd4cc5dab9b8e341778ddcde9eca740*1addb6823689aca9ce400be8f9e55fc9*e06bf10aaf3a4049ffa49dd91cf9e7bbf88a1b3b", "myhovercraftisfullofeels"},
/* 2010-Default_myhovercraftisfullofeels_.docx */
{"$office$*2010*100000*128*16*213aefcafd9f9188e78c1936cbb05a44*d5fc7691292ab6daf7903b9a8f8c8441*46bfac7fb87cd43bd0ab54ebc21c120df5fab7e6f11375e79ee044e663641d5e", "myhovercraftisfullofeels"},
/* 2010-Default_myhovercraftisfullofeels_.dotx */
{"$office$*2010*100000*128*16*0907ec6ecf82ede273b7ee87e44f4ce5*d156501661638cfa3abdb7fdae05555e*4e4b64e12b23f44d9a8e2e00196e582b2da70e5e1ab4784384ad631000a5097a", "myhovercraftisfullofeels"},
/* 2010-Default_myhovercraftisfullofeels_.xlsb */
{"$office$*2010*100000*128*16*71093d08cf950f8e8397b8708de27c1f*00780eeb9605c7e27227c5619e91dc21*90aaf0ea5ccc508e699de7d62c310f94b6798ae77632be0fc1a0dc71600dac38", "myhovercraftisfullofeels"},
/* 2010-Default_myhovercraftisfullofeels_.xlsx */
{"$office$*2010*100000*128*16*71093d08cf950f8e8397b8708de27c1f*ef51883a775075f30d2207e87987e6a3*a867f87ea955d15d8cb08dc8980c04bf564f8af060ab61bf7fa3543853e0d11a", "myhovercraftisfullofeels"},
/* 2013-openwall.pptx */
{"$office$*2013*100000*256*16*9b12805dd6d56f46d07315153f3ecb9c*c5a4a167b51faa6629f6a4caf0b4baa8*87397e0659b2a6fff90291f8e6d6d0018b750b792fefed77001edbafba7769cd", "openwall"},
/* 365-2013-openwall.docx */
{"$office$*2013*100000*256*16*774a174239a7495a59cac39a122d991c*b2f9197840f9e5d013f95a3797708e83*ecfc6d24808691aac0daeaeba72aba314d72c6bbd12f7ff0ea1a33770187caef", "openwall"},
/* 365-2013-password.docx */
{"$office$*2013*100000*256*16*d4fc9302eedabf9872b24ca700a5258b*7c9554d582520747ec3e872f109a7026*1af5b5024f00e35eaf5fd8148b410b57e7451a32898acaf14275a8c119c3a4fd", "password"},
/* 365-2013-password.xlsx */
{"$office$*2013*100000*256*16*59b49c64c0d29de733f0025837327d50*70acc7946646ea300fc13cfe3bd751e2*627c8bdb7d9846228aaea81eeed434d022bb93bb5f4da146cb3ad9d847de9ec9", "password"},
/* 365-2013-strict-password.docx */
{"$office$*2013*100000*256*16*f1c23049d85876e6b20e95ab86a477f1*13303dbd27a38ea86ef11f1b2bc56225*9a69596de0655a6c6a5b2dc4b24d6e713e307fb70af2d6b67b566173e89f941d", "password"},
/* Max password length data, 125 bytes. Made with pass_gen.pl */
{"$office$*2007*20*128*16*7268323350556e527671367031526263*54344b786a6967615052493837496735*96c9d7cc44e81971aadfe81cce88cb8b00000000", "12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345"},
{"$office$*2010*100000*128*16*42624931633777446c67354e34686e64*73592fdc2ecb12cd8dcb3ca2cec852bd*82f7315701818a7150ed7a7977717d0b56dcd1bc27e40a23dee6287a6ed55f9b", "12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345"},
{"$office$*2013*100000*256*16*36537a3373756b587632386d77665362*c5958bd6177be548ce33d99f8e4fd7a7*43baa9dfab09a7e54b9d719dbe5187f1f7b55d7b761361fe1f60c85b044aa125", "12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345"},
{NULL}
};
static ms_office_custom_salt *cur_salt;
#define MS_OFFICE_2007_ITERATIONS 50000
#if defined (_OPENMP)
static int omp_t = 1;
#endif
/* Password encoded in UCS-2 */
static UTF16 (*saved_key)[PLAINTEXT_LENGTH + 1];
/* UCS-2 password length, in octets */
static int *saved_len;
static ARCH_WORD_32 (*crypt_key)[4];
static int *cracked;
/* Office 2010/2013 */
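/* These fixed "block key" byte strings come from the MS-OFFCRYPTO agile
 * encryption spec: each is appended to the final iterated hash to derive
 * the key that decrypts the verifier hash input and the verifier hash
 * value, respectively. */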
static const unsigned char encryptedVerifierHashInputBlockKey[] = { 0xfe, 0xa7, 0xd2, 0x76, 0x3b, 0x4b, 0x9e, 0x79 };
static const unsigned char encryptedVerifierHashValueBlockKey[] = { 0xd7, 0xaa, 0x0f, 0x6d, 0x30, 0x61, 0x34, 0x4e };
static unsigned char *DeriveKey(unsigned char *hashValue, unsigned char *X1)
{
int i;
unsigned char derivedKey[64];
SHA_CTX ctx;
// This is step 4a in 2.3.4.7 of MS_OFFCRYPT version 1.0
// and is required even though the notes say it should be
// used only when the encryption algorithm key > hash length.
for (i = 0; i < 64; i++)
derivedKey[i] = (i < 20 ? 0x36 ^ hashValue[i] : 0x36);
SHA1_Init(&ctx);
SHA1_Update(&ctx, derivedKey, 64);
SHA1_Final(X1, &ctx);
if (cur_salt->verifierHashSize > cur_salt->keySize/8)
return X1;
/* TODO: finish up this function */
//for (i = 0; i < 64; i++)
// derivedKey[i] = (i < 30 ? 0x5C ^ hashValue[i] : 0x5C);
fprintf(stderr, "\n\n*** ERROR: DeriveKey() entered Limbo.\n");
fprintf(stderr, "Please report to john-dev mailing list.\n");
error();
return NULL;
}
#ifdef SIMD_COEF_32
static void GeneratePasswordHashUsingSHA1(int idx, unsigned char final[SHA1_LOOP_CNT][20])
{
unsigned char hashBuf[20];
/* H(0) = H(salt, password)
* hashBuf = SHA1Hash(salt, password);
* create input buffer for SHA1 from salt and unicode version of password */
unsigned char X1[20];
SHA_CTX ctx;
unsigned char _IBuf[64*SHA1_LOOP_CNT+MEM_ALIGN_CACHE], *keys;
uint32_t *keys32;
unsigned i, j;
keys = (unsigned char*)mem_align(_IBuf, MEM_ALIGN_CACHE);
keys32 = (uint32_t*)keys;
memset(keys, 0, 64*SHA1_LOOP_CNT);
for (i = 0; i < SHA1_LOOP_CNT; ++i) {
SHA1_Init(&ctx);
SHA1_Update(&ctx, cur_salt->osalt, cur_salt->saltSize);
SHA1_Update(&ctx, saved_key[idx+i], saved_len[idx+i]);
SHA1_Final(hashBuf, &ctx);
/* Generate each hash in turn
* H(n) = H(i, H(n-1))
* hashBuf = SHA1Hash(i, hashBuf); */
// Create a byte array of the integer and put at the front of the input buffer
// 1.3.6 says that little-endian byte ordering is expected
for (j = 4; j < 24; ++j)
keys[GETPOS_1(j, i)] = hashBuf[j-4];
keys[GETPOS_1(j, i)] = 0x80;
// 24 bytes of crypt data (192 bits).
keys[GETPOS_1(63, i)] = 192;
}
// we do 1 less than actual number of iterations here.
for (i = 0; i < MS_OFFICE_2007_ITERATIONS-1; i++) {
for (j = 0; j < SHA1_LOOP_CNT; ++j) {
keys[GETPOS_1(0, j)] = i&0xff;
keys[GETPOS_1(1, j)] = i>>8;
}
// Here we output to 4 bytes past start of input buffer.
SSESHA1body(keys, &keys32[SIMD_COEF_32], NULL, SSEi_MIXED_IN|SSEi_OUTPUT_AS_INP_FMT);
}
// last iteration is output to start of input buffer, then 32 bit 0 appended.
// but this still ends up being 24 bytes of crypt data.
for (j = 0; j < SHA1_LOOP_CNT; ++j) {
keys[GETPOS_1(0, j)] = i&0xff;
keys[GETPOS_1(1, j)] = i>>8;
}
SSESHA1body(keys, keys32, NULL, SSEi_MIXED_IN|SSEi_OUTPUT_AS_INP_FMT);
// Finally, append "block" (0) to H(n)
// hashBuf = SHA1Hash(hashBuf, 0);
for (i = 0; i < SIMD_PARA_SHA1; ++i)
memset(&keys[GETPOS_1(23,i*SIMD_COEF_32)], 0, 4*SIMD_COEF_32);
SSESHA1body(keys, keys32, NULL, SSEi_MIXED_IN|SSEi_FLAT_OUT);
// Now convert back into a 'flat' value, which is a flat array.
for (i = 0; i < SHA1_LOOP_CNT; ++i)
memcpy(final[i], DeriveKey(&keys[20*i], X1), cur_salt->keySize/8);
}
#else
// for non MMX, SHA1_LOOP_CNT is 1
static void GeneratePasswordHashUsingSHA1(int idx, unsigned char final[SHA1_LOOP_CNT][20])
{
unsigned char hashBuf[20], *key;
UTF16 *passwordBuf=saved_key[idx];
int passwordBufSize=saved_len[idx];
/* H(0) = H(salt, password)
* hashBuf = SHA1Hash(salt, password);
* create input buffer for SHA1 from salt and unicode version of password */
unsigned int inputBuf[(0x14 + 0x04 + 4) / sizeof(int)];
unsigned char X1[20];
int i;
SHA_CTX ctx;
SHA1_Init(&ctx);
SHA1_Update(&ctx, cur_salt->osalt, cur_salt->saltSize);
SHA1_Update(&ctx, passwordBuf, passwordBufSize);
SHA1_Final(hashBuf, &ctx);
/* Generate each hash in turn
* H(n) = H(i, H(n-1))
* hashBuf = SHA1Hash(i, hashBuf); */
// Create a byte array of the integer and put at the front of the input buffer
// 1.3.6 says that little-endian byte ordering is expected
memcpy(&inputBuf[1], hashBuf, 20);
for (i = 0; i < MS_OFFICE_2007_ITERATIONS; i++) {
#if ARCH_LITTLE_ENDIAN
*inputBuf = i;
#else
*inputBuf = JOHNSWAP(i);
#endif
// 'append' the previously generated hash to the input buffer
SHA1_Init(&ctx);
SHA1_Update(&ctx, inputBuf, 0x14 + 0x04);
SHA1_Final((unsigned char*)&inputBuf[1], &ctx);
}
// Finally, append "block" (0) to H(n)
// hashBuf = SHA1Hash(hashBuf, 0);
memset(&inputBuf[6], 0, 4);
SHA1_Init(&ctx);
SHA1_Update(&ctx, &inputBuf[1], 0x14 + 0x04);
SHA1_Final(hashBuf, &ctx);
key = DeriveKey(hashBuf, X1);
// Should handle the case of longer key lengths as shown in 2.3.4.9
// Grab the key length bytes of the final hash as the encryption key
memcpy(final[0], key, cur_salt->keySize/8);
}
#endif
#ifdef SIMD_COEF_32
static void GenerateAgileEncryptionKey(int idx, unsigned char hashBuf[SHA1_LOOP_CNT][64])
{
unsigned char tmpBuf[20];
int hashSize = cur_salt->keySize >> 3;
unsigned i, j;
SHA_CTX ctx;
unsigned char _IBuf[64*SHA1_LOOP_CNT+MEM_ALIGN_CACHE], *keys,
_OBuf[20*SHA1_LOOP_CNT+MEM_ALIGN_CACHE];
uint32_t *keys32, (*crypt)[20/4];
crypt = (void*)mem_align(_OBuf, MEM_ALIGN_CACHE);
keys = (unsigned char*)mem_align(_IBuf, MEM_ALIGN_CACHE);
keys32 = (uint32_t*)keys;
memset(keys, 0, 64*SHA1_LOOP_CNT);
for (i = 0; i < SHA1_LOOP_CNT; ++i) {
SHA1_Init(&ctx);
SHA1_Update(&ctx, cur_salt->osalt, cur_salt->saltSize);
SHA1_Update(&ctx, saved_key[idx+i], saved_len[idx+i]);
SHA1_Final(tmpBuf, &ctx);
for (j = 4; j < 24; ++j)
keys[GETPOS_1(j, i)] = tmpBuf[j-4];
keys[GETPOS_1(j, i)] = 0x80;
// 24 bytes of crypt data (192 bits).
keys[GETPOS_1(63, i)] = 192;
}
// we do 1 less than actual number of iterations here.
for (i = 0; i < cur_salt->spinCount-1; i++) {
for (j = 0; j < SHA1_LOOP_CNT; ++j) {
keys[GETPOS_1(0, j)] = i&0xff;
keys[GETPOS_1(1, j)] = (i>>8)&0xff;
keys[GETPOS_1(2, j)] = i>>16;
}
// Here we output to 4 bytes past start of input buffer.
SSESHA1body(keys, &keys32[SIMD_COEF_32], NULL, SSEi_MIXED_IN|SSEi_OUTPUT_AS_INP_FMT);
}
// last iteration is output to start of input buffer, then 32 bit 0 appended.
// but this still ends up being 24 bytes of crypt data.
for (j = 0; j < SHA1_LOOP_CNT; ++j) {
keys[GETPOS_1(0, j)] = i&0xff;
keys[GETPOS_1(1, j)] = (i>>8)&0xff;
keys[GETPOS_1(2, j)] = i>>16;
}
SSESHA1body(keys, keys32, NULL, SSEi_MIXED_IN|SSEi_OUTPUT_AS_INP_FMT);
// Finally, append "block" (0) to H(n)
for (i = 0; i < SHA1_LOOP_CNT; ++i) {
for (j = 0; j < 8; ++j)
keys[GETPOS_1(20+j, i)] = encryptedVerifierHashInputBlockKey[j];
keys[GETPOS_1(20+j, i)] = 0x80;
// 28 bytes of crypt data (192 bits).
keys[GETPOS_1(63, i)] = 224;
}
SSESHA1body(keys, (ARCH_WORD_32*)crypt, NULL, SSEi_MIXED_IN|SSEi_FLAT_OUT);
for (i = 0; i < SHA1_LOOP_CNT; ++i)
memcpy(hashBuf[i], crypt[i], 20);
// And second "block" (0) to H(n)
for (i = 0; i < SHA1_LOOP_CNT; ++i) {
for (j = 0; j < 8; ++j)
keys[GETPOS_1(20+j, i)] = encryptedVerifierHashValueBlockKey[j];
}
SSESHA1body(keys, (ARCH_WORD_32*)crypt, NULL, SSEi_MIXED_IN|SSEi_FLAT_OUT);
for (i = 0; i < SHA1_LOOP_CNT; ++i)
memcpy(&hashBuf[i][32], crypt[i], 20);
// Fix up the size per the spec
if (20 < hashSize) { // FIXME: Is this ever true?
for (i = 0; i < SHA1_LOOP_CNT; ++i) {
for(j = 20; j < hashSize; j++) {
hashBuf[i][j] = 0x36;
hashBuf[i][32 + j] = 0x36;
}
}
}
}
#else
static void GenerateAgileEncryptionKey(int idx, unsigned char hashBuf[SHA1_LOOP_CNT][64])
{
/* H(0) = H(salt, password)
* hashBuf = SHA1Hash(salt, password);
* create input buffer for SHA1 from salt and unicode version of password */
UTF16 *passwordBuf=saved_key[idx];
int passwordBufSize=saved_len[idx];
int hashSize = cur_salt->keySize >> 3;
unsigned int inputBuf[(28 + 4) / sizeof(int)];
unsigned int i;
SHA_CTX ctx;
SHA1_Init(&ctx);
SHA1_Update(&ctx, cur_salt->osalt, cur_salt->saltSize);
SHA1_Update(&ctx, passwordBuf, passwordBufSize);
SHA1_Final(hashBuf[0], &ctx);
/* Generate each hash in turn
* H(n) = H(i, H(n-1))
* hashBuf = SHA1Hash(i, hashBuf); */
// Create a byte array of the integer and put at the front of the input buffer
// 1.3.6 says that little-endian byte ordering is expected
memcpy(&inputBuf[1], hashBuf[0], 20);
for (i = 0; i < cur_salt->spinCount; i++) {
#if ARCH_LITTLE_ENDIAN
*inputBuf = i;
#else
*inputBuf = JOHNSWAP(i);
#endif
// 'append' the previously generated hash to the input buffer
SHA1_Init(&ctx);
SHA1_Update(&ctx, inputBuf, 0x14 + 0x04);
SHA1_Final((unsigned char*)&inputBuf[1], &ctx);
}
// Finally, append "block" (0) to H(n)
memcpy(&inputBuf[6], encryptedVerifierHashInputBlockKey, 8);
SHA1_Init(&ctx);
SHA1_Update(&ctx, &inputBuf[1], 28);
SHA1_Final(hashBuf[0], &ctx);
// And second "block" (0) to H(n)
memcpy(&inputBuf[6], encryptedVerifierHashValueBlockKey, 8);
SHA1_Init(&ctx);
SHA1_Update(&ctx, &inputBuf[1], 28);
SHA1_Final(&hashBuf[0][32], &ctx);
// Fix up the size per the spec
if (20 < hashSize) { // FIXME: Is this ever true?
for(i = 20; i < hashSize; i++) {
hashBuf[0][i] = 0x36;
hashBuf[0][32 + i] = 0x36;
}
}
}
#endif
#ifdef SIMD_COEF_64
static void GenerateAgileEncryptionKey512(int idx, unsigned char hashBuf[SHA512_LOOP_CNT][128])
{
unsigned char tmpBuf[64];
unsigned int i, j, k;
SHA512_CTX ctx;
unsigned char _IBuf[128*SHA512_LOOP_CNT+MEM_ALIGN_CACHE], *keys,
_OBuf[64*SHA512_LOOP_CNT+MEM_ALIGN_CACHE];
ARCH_WORD_64 *keys64, (*crypt)[64/8];
uint32_t *keys32, *crypt32;
crypt = (void*)mem_align(_OBuf, MEM_ALIGN_CACHE);
keys = (unsigned char*)mem_align(_IBuf, MEM_ALIGN_CACHE);
keys64 = (ARCH_WORD_64*)keys;
keys32 = (uint32_t*)keys;
crypt32 = (uint32_t*)crypt;
memset(keys, 0, 128*SHA512_LOOP_CNT);
for (i = 0; i < SHA512_LOOP_CNT; ++i) {
SHA512_Init(&ctx);
SHA512_Update(&ctx, cur_salt->osalt, cur_salt->saltSize);
SHA512_Update(&ctx, saved_key[idx+i], saved_len[idx+i]);
SHA512_Final(tmpBuf, &ctx);
for (j = 4; j < 68; ++j)
keys[GETPOS_512(j, i)] = tmpBuf[j-4];
keys[GETPOS_512(j, i)] = 0x80;
// 68 bytes of crypt data (0x220 bits).
keys[GETPOS_512(127, i)] = 0x20;
keys[GETPOS_512(126, i)] = 0x02;
}
// we do 1 less than actual number of iterations here.
for (i = 0; i < cur_salt->spinCount-1; i++) {
unsigned int i_be = JOHNSWAP(i);
// Iteration counter in first 4 bytes
for (j = 0; j < SHA512_LOOP_CNT; j++)
keys32[(j&(SIMD_COEF_64-1))*2 + j/SIMD_COEF_64*2*SHA_BUF_SIZ*SIMD_COEF_64 + 1] = i_be;
SSESHA512body(keys, (ARCH_WORD_64*)crypt, NULL, SSEi_MIXED_IN);
// Then we output to 4 bytes past start of input buffer.
for (j = 0; j < SHA512_LOOP_CNT; j++) {
uint32_t *o = keys32 + (j&(SIMD_COEF_64-1))*2 + j/SIMD_COEF_64*2*SHA_BUF_SIZ*SIMD_COEF_64;
uint32_t *in = crypt32 + (j&(SIMD_COEF_64-1))*2 + j/SIMD_COEF_64*2*8*SIMD_COEF_64;
for (k = 0; k < 8; k++) {
o[0] = in[1];
o += SIMD_COEF_64*2;
o[1] = in[0];
in += SIMD_COEF_64*2;
}
}
}
// last iteration is output to start of input buffer, then 32 bit 0 appended.
// but this still ends up being 24 bytes of crypt data.
for (j = 0; j < SHA512_LOOP_CNT; ++j) {
keys[GETPOS_512(0, j)] = i&0xff;
keys[GETPOS_512(1, j)] = (i>>8)&0xff;
keys[GETPOS_512(2, j)] = i>>16;
}
SSESHA512body(keys, keys64, NULL, SSEi_MIXED_IN|SSEi_OUTPUT_AS_INP_FMT);
// Finally, append "block" (0) to H(n)
for (i = 0; i < SHA512_LOOP_CNT; ++i) {
for (j = 0; j < 8; ++j)
keys[GETPOS_512(64+j, i)] = encryptedVerifierHashInputBlockKey[j];
keys[GETPOS_512(64+j, i)] = 0x80;
// 72 bytes of crypt data (0x240 we already have 0x220 here)
keys[GETPOS_512(127, i)] = 0x40;
}
SSESHA512body(keys, (ARCH_WORD_64*)crypt, NULL, SSEi_MIXED_IN|SSEi_FLAT_OUT);
for (i = 0; i < SHA512_LOOP_CNT; ++i)
memcpy((ARCH_WORD_64*)(hashBuf[i]), crypt[i], 64);
// And second "block" (0) to H(n)
for (i = 0; i < SHA512_LOOP_CNT; ++i) {
for (j = 0; j < 8; ++j)
keys[GETPOS_512(64+j, i)] = encryptedVerifierHashValueBlockKey[j];
}
SSESHA512body(keys, (ARCH_WORD_64*)crypt, NULL, SSEi_MIXED_IN|SSEi_FLAT_OUT);
for (i = 0; i < SHA512_LOOP_CNT; ++i)
memcpy((ARCH_WORD_64*)(&hashBuf[i][64]), crypt[i], 64);
}
#else
static void GenerateAgileEncryptionKey512(int idx, unsigned char hashBuf[SHA512_LOOP_CNT][128])
{
UTF16 *passwordBuf=saved_key[idx];
int passwordBufSize=saved_len[idx];
unsigned int inputBuf[128 / sizeof(int)];
int i;
SHA512_CTX ctx;
SHA512_Init(&ctx);
SHA512_Update(&ctx, cur_salt->osalt, cur_salt->saltSize);
SHA512_Update(&ctx, passwordBuf, passwordBufSize);
SHA512_Final(hashBuf[0], &ctx);
// Create a byte array of the integer and put at the front of the input buffer
// 1.3.6 says that little-endian byte ordering is expected
memcpy(&inputBuf[1], hashBuf, 64);
for (i = 0; i < cur_salt->spinCount; i++) {
#if ARCH_LITTLE_ENDIAN
*inputBuf = i;
#else
*inputBuf = JOHNSWAP(i);
#endif
// 'append' the previously generated hash to the input buffer
SHA512_Init(&ctx);
SHA512_Update(&ctx, inputBuf, 64 + 0x04);
SHA512_Final((unsigned char*)&inputBuf[1], &ctx);
}
// Finally, append "block" (0) to H(n)
memcpy(&inputBuf[68/4], encryptedVerifierHashInputBlockKey, 8);
SHA512_Init(&ctx);
SHA512_Update(&ctx, &inputBuf[1], 64 + 8);
SHA512_Final(hashBuf[0], &ctx);
// And second "block" (0) to H(n)
memcpy(&inputBuf[68/4], encryptedVerifierHashValueBlockKey, 8);
SHA512_Init(&ctx);
SHA512_Update(&ctx, &inputBuf[1], 64 + 8);
SHA512_Final(&hashBuf[0][64], &ctx);
}
#endif
static void init(struct fmt_main *self)
{
#if defined (_OPENMP)
omp_t = omp_get_max_threads();
self->params.min_keys_per_crypt *= omp_t;
omp_t *= OMP_SCALE;
self->params.max_keys_per_crypt *= omp_t;
#endif
saved_key = mem_calloc_tiny(sizeof(*saved_key) *
self->params.max_keys_per_crypt, sizeof(UTF16));
saved_len = mem_calloc_tiny(sizeof(*saved_len) *
self->params.max_keys_per_crypt, MEM_ALIGN_WORD);
crypt_key = mem_calloc_tiny(sizeof(*crypt_key) *
self->params.max_keys_per_crypt, MEM_ALIGN_WORD);
cracked = mem_calloc_tiny(sizeof(*cracked) *
self->params.max_keys_per_crypt, MEM_ALIGN_WORD);
if (pers_opts.target_enc == UTF_8)
self->params.plaintext_length = MIN(125, PLAINTEXT_LENGTH * 3);
}
static void set_salt(void *salt)
{
cur_salt = (ms_office_custom_salt *)salt;
}
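/* crypt_all() dispatches on the salt's format version: for Office 2007 we
 * derive a key and store the password verifier in crypt_key[] for the
 * binary comparison done in cmp_all()/cmp_one(); for 2010/2013 (agile
 * encryption) the verifier is checked right here and the result is
 * recorded in cracked[]. */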
static int crypt_all(int *pcount, struct db_salt *salt)
{
const int count = *pcount;
int index = 0, inc = SHA1_LOOP_CNT;
if (cur_salt->version == 2013)
inc = SHA512_LOOP_CNT;
#ifdef _OPENMP
#pragma omp parallel for
#endif
for (index = 0; index < count; index+=inc)
{
int i;
if(cur_salt->version == 2007) {
unsigned char encryptionKey[SHA1_LOOP_CNT][20];
GeneratePasswordHashUsingSHA1(index, encryptionKey);
for (i = 0; i < SHA1_LOOP_CNT; ++i)
ms_office_common_PasswordVerifier(cur_salt, encryptionKey[i], crypt_key[index+i]);
}
else if (cur_salt->version == 2010) {
unsigned char verifierKeys[SHA1_LOOP_CNT][64], decryptedVerifierHashInputBytes[16], decryptedVerifierHashBytes[32];
unsigned char hash[20];
SHA_CTX ctx;
GenerateAgileEncryptionKey(index, verifierKeys);
for (i = 0; i < inc; ++i) {
ms_office_common_DecryptUsingSymmetricKeyAlgorithm(cur_salt, verifierKeys[i], cur_salt->encryptedVerifier, decryptedVerifierHashInputBytes, 16);
ms_office_common_DecryptUsingSymmetricKeyAlgorithm(cur_salt, &verifierKeys[i][32], cur_salt->encryptedVerifierHash, decryptedVerifierHashBytes, 32);
SHA1_Init(&ctx);
SHA1_Update(&ctx, decryptedVerifierHashInputBytes, 16);
SHA1_Final(hash, &ctx);
cracked[index+i] = !memcmp(hash, decryptedVerifierHashBytes, 20);
}
}
else if (cur_salt->version == 2013) {
unsigned char verifierKeys[SHA512_LOOP_CNT][128], decryptedVerifierHashInputBytes[16], decryptedVerifierHashBytes[32];
unsigned char hash[64];
SHA512_CTX ctx;
GenerateAgileEncryptionKey512(index, verifierKeys);
for (i = 0; i < inc; ++i) {
ms_office_common_DecryptUsingSymmetricKeyAlgorithm(cur_salt, verifierKeys[i], cur_salt->encryptedVerifier, decryptedVerifierHashInputBytes, 16);
ms_office_common_DecryptUsingSymmetricKeyAlgorithm(cur_salt, &verifierKeys[i][64], cur_salt->encryptedVerifierHash, decryptedVerifierHashBytes, 32);
SHA512_Init(&ctx);
SHA512_Update(&ctx, decryptedVerifierHashInputBytes, 16);
SHA512_Final(hash, &ctx);
cracked[index+i] = !memcmp(hash, decryptedVerifierHashBytes, 20);
}
}
}
return count;
}
static int cmp_all(void *binary, int count)
{
int index;
if (cur_salt->version == 2007) {
for (index = 0; index < count; index++) {
if ( ((ARCH_WORD_32*)binary)[0] == crypt_key[index][0] )
return 1;
}
return 0;
}
for (index = 0; index < count; index++)
if (cracked[index])
return 1;
return 0;
}
static int cmp_one(void *binary, int index)
{
if (cur_salt->version == 2007) {
return !memcmp(binary, crypt_key[index], BINARY_SIZE);
}
return cracked[index];
}
static int cmp_exact(char *source, int index)
{
return 1;
}
static int get_hash_0(int index) { if (cur_salt->version!=2007) return 0; return crypt_key[index][0] & 0xf; }
static int get_hash_1(int index) { if (cur_salt->version!=2007) return 0; return crypt_key[index][0] & 0xff; }
static int get_hash_2(int index) { if (cur_salt->version!=2007) return 0; return crypt_key[index][0] & 0xfff; }
static int get_hash_3(int index) { if (cur_salt->version!=2007) return 0; return crypt_key[index][0] & 0xffff; }
static int get_hash_4(int index) { if (cur_salt->version!=2007) return 0; return crypt_key[index][0] & 0xfffff; }
static int get_hash_5(int index) { if (cur_salt->version!=2007) return 0; return crypt_key[index][0] & 0xffffff; }
static int get_hash_6(int index) { if (cur_salt->version!=2007) return 0; return crypt_key[index][0] & 0x7ffffff; }
static void office_set_key(char *key, int index)
{
/* convert key to UTF-16LE */
saved_len[index] = enc_to_utf16(saved_key[index], PLAINTEXT_LENGTH, (UTF8*)key, strlen(key));
if (saved_len[index] < 0)
saved_len[index] = strlen16(saved_key[index]);
    saved_len[index] <<= 1;
}
static char *get_key(int index)
{
    return (char*)utf16_to_enc(saved_key[index]);
}
#if FMT_MAIN_VERSION > 11
/*
* MS Office version (2007, 2010, 2013) as first tunable cost
*/
static unsigned int ms_office_version(void *salt)
{
    ms_office_custom_salt *my_salt;
    my_salt = salt;
    return (unsigned int) my_salt->version;
}
#endif
struct fmt_main fmt_office = {
    {
        FORMAT_LABEL,
        FORMAT_NAME,
        ALGORITHM_NAME,
        BENCHMARK_COMMENT,
        BENCHMARK_LENGTH,
        0,
        PLAINTEXT_LENGTH,
        BINARY_SIZE,
        BINARY_ALIGN,
        SALT_SIZE,
        SALT_ALIGN,
        MIN_KEYS_PER_CRYPT,
        MAX_KEYS_PER_CRYPT,
        FMT_CASE | FMT_8_BIT | FMT_OMP | FMT_UNICODE | FMT_UTF8,
#if FMT_MAIN_VERSION > 11
        {
            "MS Office version",
            "iteration count",
        },
#endif
        office_tests
    }, {
        init,
        fmt_default_done,
        fmt_default_reset,
        fmt_default_prepare,
        ms_office_common_valid_all,
        fmt_default_split,
        ms_office_common_binary,
        ms_office_common_get_salt,
#if FMT_MAIN_VERSION > 11
        {
            ms_office_version,
            ms_office_common_iteration_count,
        },
#endif
        fmt_default_source,
        {
            fmt_default_binary_hash_0,
            fmt_default_binary_hash_1,
            fmt_default_binary_hash_2,
            fmt_default_binary_hash_3,
            fmt_default_binary_hash_4,
            fmt_default_binary_hash_5,
            fmt_default_binary_hash_6
        },
        fmt_default_salt_hash,
        NULL,
        set_salt,
        office_set_key,
        get_key,
        fmt_default_clear_keys,
        crypt_all,
        {
            get_hash_0,
            get_hash_1,
            get_hash_2,
            get_hash_3,
            get_hash_4,
            get_hash_5,
            get_hash_6
        },
        cmp_all,
        cmp_one,
        cmp_exact
    }
};
#endif /* plugin stanza */
|
<abc class="index" bindtap="onTap">{{prop}}</abc>
<div class="index">
<div class="inner">321</div>
</div>
|
Q:
Show Fiber Product of Rational Elliptic Surfaces is Calabi-Yau
In a handful of contexts people study Calabi-Yau threefolds formed by taking the fiber product of two rational elliptic surfaces. I can't find any detailed explanation of why such geometries are actually Calabi-Yau, so I think it's just a straightforward computation which I don't fully understand.
Let $\pi: S \to \mathbb{P}^{1}$ and $\pi' : S' \to \mathbb{P}^{1}$ be two rational elliptic surfaces, and define their fiber product
$$X = S \times_{\mathbb{P}^{1}} S'.$$
With some mild assumptions on $S$ and $S'$, $X$ should be a Calabi-Yau threefold, and I'm hoping someone can help me complete the proof of this. In other words, I want to see that $\omega_{X}\cong\mathcal{O}_{X}$ or $K_{X}=0$ (however, note that in general, $X$ will certainly not be smooth).
I believe one should start by considering the obvious map induced by $\pi$ and $\pi'$
$$f: S \times S' \to \mathbb{P}^{1} \times \mathbb{P}^{1}.$$
We can then write $X$ as the pullback of the diagonal $\Delta \subset \mathbb{P}^{1} \times \mathbb{P}^{1}$,
$$X = f^{*} \Delta.$$
So we can realize $X$ as a hypersurface in $S \times S'$, and should then be able to apply the adjunction formula:
$$\omega_{X} = \omega_{S \times S'}|_{X} \otimes \mathcal{N}_{X/S \times S'},$$
where $\mathcal{N}_{X/S \times S'}$ is the normal bundle of $X$ in $S \times S'$. However, I'm sort of stuck on how to proceed -- how can one explicitly handle the two factors in the above tensor product and show that they somehow cancel to give the trivial bundle?
A:
The diagonal $\Delta $ is linearly equivalent to $\{p\}\times \mathbb{P}^1 +\mathbb{P}^1\times \{p\} $ for any $p$ in $\mathbb{P}^1$. Therefore $X$ is the zero locus in $S\times S'$ of a section of $L:=\pi^*\mathcal{O}(1) \boxtimes \pi'^*\mathcal{O}(1) $. On the other hand, standard theory of elliptic surfaces gives
$\omega _S=\pi ^*\mathcal{O}(-1) $ and $\omega _{S'}=\pi' ^*\mathcal{O}(-1) $, therefore $\omega _{S\times S'}= L^{-1}$. Then the adjunction formula gives indeed $\omega_X\cong \mathcal{O}_{X}$.
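(Explicitly: $\omega_X \cong (\omega_{S\times S'}\otimes L)|_X \cong (L^{-1}\otimes L)|_X \cong \mathcal{O}_X$.)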
A:
$S\times_{\mathbb{P}^1}S'$ is a complete intersection in $\mathbb{P}^1\times \mathbb{P}^2\times \mathbb{P}^2$: it is given by two equations of degree $(1,3,0)$ and $(1,0,3)$. The canonical bundle is trivial by the adjunction formula.
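(To spell out the adjunction computation: $\omega_{\mathbb{P}^1\times\mathbb{P}^2\times\mathbb{P}^2}=\mathcal{O}(-2,-3,-3)$, so twisting by the two hypersurface classes gives $\mathcal{O}(-2,-3,-3)\otimes\mathcal{O}(1,3,0)\otimes\mathcal{O}(1,0,3)=\mathcal{O}(0,0,0)$, whose restriction to the complete intersection is trivial.)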
|
Ланцюг знань ("The Chain of Knowledge")
This photo was taken in the main square of a town in Ukraine on May 15th, 2012. The Chain of Knowledge was the culmination of an HIV Awareness project I did with the pupils of my school. Participants of the project went into the main square of our oblast (regional) center and approached different people to share what they had learned through the HIV Awareness Project throughout the semester.
Then they explained that knowledge is like a chain; every little fact that we know is a link in our understanding of a concept. They encouraged people to add their own links to the town's knowledge of HIV-- to write something the project participants shared with them on a slip of red paper, fold it into a ribbon, and attach it to the Chain of Knowledge. The project participants then encouraged people to share what they had learned with others-- after all, sharing knowledge is the only way to make the chain bigger. |
Background {#Sec1}
==========
According to Hoy et al., low back pain (LBP) is the most prevalent of a variety of musculoskeletal disorders, with an estimated global lifetime prevalence of 70--80%, a 1-year prevalence of 15--45% and an average point prevalence of 30% among the general population \[[@CR1]\]. A systematic review by Morris et al. investigating the prevalence of low back pain in Africa revealed that the lifetime, 1-year and point prevalence of low back pain among African populations was substantially higher than the global LBP prevalence estimates \[[@CR2]\]. Morris et al. found that the point prevalence of low back pain among African countries was 39%, while the global point prevalence according to Hoy et al. was 18.3% \[[@CR1], [@CR2]\]. Similarly, the 1-year prevalence of low back pain among Africans (57%) was significantly higher than the global annual prevalence estimate (38.5%) \[[@CR1], [@CR2]\]. According to Hoy et al. and Majid et al., a higher prevalence of low back pain has been significantly correlated with a low socio-economic status and lower educational levels \[[@CR1], [@CR3]\]. Low back pain poses a significant socio-economic burden on society and has been reported to be the leading cause of activity limitation worldwide \[[@CR4]\]. Years lived with disability caused by low back pain increased by 54% between 1990 and 2015 due to aging and population increase, and the greatest burden thereof has been observed in low-middle-income countries \[[@CR5]\]. The burden attributed to low back pain is predicted to continue to increase in the coming decades, particularly in LMICs, where the healthcare infrastructure and other systems are poorly equipped to cope with the increasing burden of low back pain in addition to other priorities such as infectious diseases \[[@CR5]\].
Low back pain is often defined as pain, muscle tension, or stiffness localized below the costal margin and above the inferior gluteal fold, with or without leg pain \[[@CR6]\]. Low back pain is classified as acute (symptoms lasting less than 3 months) or chronic (symptoms lasting more than 3 months) based on the duration of symptoms, and as specific (known causative factors) or non-specific (idiopathic or of unknown origin) based on the etiology of symptoms \[[@CR7]\]. About 90% of low back pain cases will improve in 6 weeks or less, and only about 10% of cases will progress to recurrent or chronic low back pain \[[@CR8]\]. The significant cost and disability associated with low back pain is attributed to this small group of chronic low back pain sufferers \[[@CR9]\]. There is widespread literature on low back pain across all continents, but little knowledge of chronic low back pain, particularly in LMICs \[[@CR2]\].
Low back pain has been regarded as a trivial condition, and its research has not been given priority, especially in African countries where the concern has been placed on epidemic infectious diseases such as HIV/AIDS, TB, and malaria \[[@CR2]\]. However, according to the Global Burden of Disease 2015, low back pain was found to be the leading cause of disability and associated with a significant amount of cost \[[@CR10]\]. While the current focus of public health research and funding has shifted towards epidemic diseases, the neglect of low back pain research poses a blind-spot threat to the healthcare system. According to Gore et al., the economic burden attributed to low back pain, incorporating both direct and indirect costs, ranges from \$84.1 billion to \$624.8 billion in the USA \[[@CR11]\]. Indirect cost due to lost work productivity was the main contributor to this economic burden, accounting for \$7.4 billion to \$28 billion \[[@CR11]\]. However, low back pain is also associated with a significant amount of direct cost, which includes healthcare resource utilization. Low back pain is the second most common reason to visit a physician after the common cold, the third most common reason for surgical procedures, and the fifth-ranking cause of admission to hospital.
Low back pain has been found to directly impair activities of daily living (ADL), and frail individuals suffering from the chronic subcategory of back pain avoid performing these ADL mainly due to fear of re-injury or worsening of the symptoms \[[@CR5]\]. This fear-avoidance belief has been shown to have a strong correlation with chronic low back pain and will eventually exacerbate the symptoms, culminating in persistent disabling back pain \[[@CR12]\]. Being unable to perform, or avoiding, these ADL can lead to weight gain and the development or progression of other chronic health conditions, and this will ultimately lead to earlier death \[[@CR12], [@CR13]\]. A telephonic survey in the USA showed that the prevalence of chronic low back pain more than doubled between 1992 and 2006 \[[@CR14]\]. The population of older adults is increasing worldwide. Approximately 8% of the world population are adults aged 65 years and above, and this share is estimated to continue rising to almost 17% by 2050 \[[@CR13], [@CR15]\], thus increasing the socio-economic burden of chronic low back pain since this condition is more prevalent in this age group.
A scoping review of the literature regarding the epidemiology, incidence, mortality, risk factors, and economic burden of chronic low back pain among adults in SSA will be conducted. This scoping review will seek to identify evidence-based research gaps and help inform future research policy. The implementation of interventions directed at preventing and controlling the development of chronic low back pain in SSA and a change of research focus and funding directions will be encouraged. This scoping review protocol therefore aims to highlight the existing knowledge gap in the distribution of chronic low back pain among adults in SSA, with estimates on prevalence, incidence, mortality, risk factors, comorbidities, and the associated socio-economic burden.
Methodology {#Sec2}
===========
Scoping review {#Sec3}
--------------
The proposed scoping review will highlight the available literature and evidence on the distribution of chronic low back pain among adults in SSA, with estimates on prevalence, incidence, mortality, risk factors, comorbidities, and associated cost (economic burden). This protocol is part of a larger research study, which aims to determine the burden of chronic low back pain in KwaZulu-Natal (a coastal South African province). A PRISMA flow diagram (Fig. [1](#Fig1){ref-type="fig"}) will be utilized to guide the flow of citations reviewed and to illustrate the outcome of the title search from the different databases. The proposed scoping review will be reported in accordance with the MOOSE guidelines for observational studies in epidemiology and the Preferred Reporting Items for Systematic Reviews and Meta-Analysis extended for Scoping Reviews (PRISMA-ScR) \[[@CR16]\] (Additional file [1](#MOESM1){ref-type="media"}). A methodological framework proposed by Arksey and O'Malley will be used \[[@CR17]\]. This framework comprises the following six steps: (I) identify the research question, (II) identify the relevant studies, (III) study selection, (IV) charting the data, (V) collating, summarizing and reporting data, and (VI) consultation (optional) \[[@CR17]\].

Fig. 1 PRISMA-P flow diagram
### Identifying the research question {#Sec4}
The main research question of the proposed scoping review is, "What is the existing evidence on the distribution of chronic low back pain among adults in SSA?"

1. Sub-questions
   - What is the burden of chronic low back pain in SSA, with estimations on the prevalence, incidence and mortality?
   - What are the comorbidities and risk factors associated with chronic low back pain in SSA?
   - What are the estimated costs associated with chronic low back pain?
2. Inclusion criteria (studies that will be included in this review should present evidence on any of the factors listed below)
   - The prevalence of chronic low back pain in SSA
   - The incidence of chronic low back pain in SSA
   - The risk factors associated with chronic low back pain in SSA
   - The comorbidities associated with chronic low back pain
   - Studies done on the adult population aged 18 years and above
   - Studies that have a clear definition of low back pain
   - Only studies conducted in English, or in other languages with an English version, will be included in the study
3. Exclusion criteria (studies which do not satisfy the above listed criteria will be excluded)
   - Studies done on children or adolescents
   - Studies done outside the context of SSA
   - Clinical trials and intervention-based studies
   - Studies conducted in languages other than English that do not have an English version
   - Studies that lack a clear definition of low back pain in terms of its anatomical location
4. Eligibility of the research question: the eligibility of the research question was determined using the Population Exposure Context Outcome design (PECOd) framework, part of which has been recommended by the Joanna Briggs Institute 2015 \[[@CR18]\]. This framework is outlined in Table [1](#Tab1){ref-type="table"}.

Table 1 PECOd framework for eligibility of research question
- Population: individuals with chronic low back pain
- Exposure: chronic low back pain
- Context: Sub-Saharan Africa
- Outcome: (1) prevalence, (2) incidence, (3) mortality, (4) risk factors, (5) associated costs, (6) associated comorbidities and disabilities
- Design: cohort and cross-sectional studies
### Identifying relevant studies (literature search) {#Sec5}
A keyword search of studies conducted in English will be performed without a date limit to identify the relevant studies. An electronic literature search will be performed using the EBSCOhost platform by searching the following databases within the platform: Academic Search Complete, Health Source: Nursing/Academic Edition, CINAHL with Full Text, Embase, PubMed, MEDLINE, Science Direct, Google Scholar, and the World Health Organization (WHO) library databases, as well as gray literature, to retrieve articles that are relevant to the objective of the proposed scoping review, guided by the study inclusion and exclusion criteria. The following keyword terms will be used: low back pain, lumbar pain, spinal pain, musculoskeletal pain, epidemiology, prevalence, incidence, mortality, risk factors, burden, impact, disability, comorbidities, Africa, Sub-Saharan Africa, and Low-Middle-Income Countries. The Boolean operators OR and AND will be used to combine the keywords during the literature search. We have conducted a pilot search using the above keywords to determine the feasibility of the search criteria (Table [2](#Tab2){ref-type="table"}). The primary database search will be performed by the principal investigator using the above-mentioned online databases. All the database search outcomes will then be transferred to the EndNote X8 reference management software, which will be used to create a library for this review. Deduplication will follow immediately after the transfer of all the retrieved articles to EndNote X8 and prior to the initial (title and abstract) phase of screening.

Table 2 Pilot search result from PubMed database
- Date of search: 05/03/2020
- Search engine used: PubMed
- Keyword search used: ((((((("low back pain"\[MeSH Terms\] OR ("low"\[All Fields\] AND "back"\[All Fields\] AND "pain"\[All Fields\]) OR "low back pain"\[All Fields\]) OR (("lumbosacral region"\[MeSH Terms\] OR ("lumbosacral"\[All Fields\] AND "region"\[All Fields\]) OR "lumbosacral region"\[All Fields\] OR "lumbar"\[All Fields\]) AND spinal\[All Fields\] AND ("pain"\[MeSH Terms\] OR "pain"\[All Fields\]))) OR ("musculoskeletal pain"\[MeSH Terms\] OR ("musculoskeletal"\[All Fields\] AND "pain"\[All Fields\]) OR "musculoskeletal pain"\[All Fields\])) OR ("back pain"\[MeSH Terms\] OR ("back"\[All Fields\] AND "pain"\[All Fields\]) OR "back pain"\[All Fields\])) OR (lumbosacral\[All Fields\] AND ("pain"\[MeSH Terms\] OR "pain"\[All Fields\]))) OR (discogenic\[All Fields\] AND ("pain"\[MeSH Terms\] OR "pain"\[All Fields\]))) AND ((("epidemiology"\[Subheading\] OR "epidemiology"\[All Fields\] OR "prevalence"\[All Fields\] OR "prevalence"\[MeSH Terms\]) OR ("epidemiology"\[Subheading\] OR "epidemiology"\[All Fields\] OR "incidence"\[All Fields\] OR "incidence"\[MeSH Terms\])) OR ("epidemiology"\[Subheading\] OR "epidemiology"\[All Fields\] OR "epidemiology"\[MeSH Terms\]))) AND (("africa south of the sahara"\[MeSH Terms\] OR ("africa"\[All Fields\] AND "south"\[All Fields\] AND "sahara"\[All Fields\]) OR "africa south of the sahara"\[All Fields\] OR ("sub"\[All Fields\] AND "saharan"\[All Fields\] AND "africa"\[All Fields\]) OR "sub saharan africa"\[All Fields\]) OR ("africa south of the sahara"\[MeSH Terms\] OR ("africa"\[All Fields\] AND "south"\[All Fields\] AND "sahara"\[All Fields\]) OR "africa south of the sahara"\[All Fields\]))
- Number of articles retrieved: 253

Table 3 Data extraction form (study details to be recorded): author and publication year; study aims and objectives; study setting; study design; sample size (male; female); age group; case definition; main findings; outcomes of interest
### Study selection and eligibility {#Sec6}
Following deduplication, two independent reviewers will begin, in parallel, the initial (title and abstract) phase of screening. Titles and abstracts that do not meet the study eligibility criteria will be excluded. Following this initial title and abstract screening phase, a thorough full-text screening of the articles included at this stage will be performed, again by two independent reviewers, to assess the eligibility of each article. Studies that do not meet the inclusion criteria will be excluded. A third reviewer will be employed to adjudicate discrepancies between the two reviewers. Following the full-text screening, a secondary search of the reference lists of all included studies will be performed for other articles which may not have been identified during the database search. The selection of the relevant studies will follow the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) flow diagram (Fig. [1](#Fig1){ref-type="fig"}). Data extraction will be performed on the included articles after the full-text screening by the two independent reviewers.
### Charting the data {#Sec7}
The data from all included studies will be retrieved using a data extraction form (Table [3](#Tab3){ref-type="table"}) by the two independent reviewers. This data extraction form, specific to the objectives of the proposed scoping review, will be developed by the two reviewers and piloted by them in parallel to test the consistency of the extraction process prior to the commencement of the actual data extraction. Ten articles will be randomly selected from the included studies and used for piloting the data extraction form; if there is a need to modify the form based on the results of the pilot, this will be done prior to its final use. The following information will be extracted from included studies: author and publication year, aims and objectives of the study, study setting, study design, sample size, age group, case definition, and the study's main findings. The outcome measures to be analyzed include low back pain prevalence, incidence, mortality, disabilities, comorbidities, risk factors, and cost estimates. The trends of included studies will also be analyzed to determine whether there have been any changes in the distribution of chronic low back pain over time. The data extraction process will be done by two independent reviewers who will discuss any differences until a consensus is reached for the final presentation of data. A coding system will be used to code all the reviewed articles. This is done to keep track of the studies included and excluded during the charting process of the scoping review.
### Collating, summarizing and reporting the results {#Sec8}
The aim of this scoping review is to map the existing evidence on the prevalence, incidence, risk factors, mortality, and cost associated with chronic low back pain among adults in Sub-Saharan Africa and to summarize the findings of the included studies. Following completion of the data extraction process, a narrative account of the extracted data from the existing literature will be given using thematic content analysis \[[@CR19]\], where the distribution pattern of chronic low back pain will be noted and analyzed to gain a deep appreciation of the study phenomenon. The NVivo 12 data analysis software will be used to extract and categorize data to generate themes from the literature content of included articles \[[@CR19]\]. All data relating to the prevalence of chronic low back pain, incidence of chronic low back pain, risk factors of chronic low back pain, comorbidities and disability related to chronic low back pain, and chronic low back pain estimated cost will be extracted and structured, the emerging themes analyzed, and the results critically examined to inspect the relationship between the findings and the study purpose. The meaning of these results will then be considered as they relate to the objectives of the study, and the implications thereof for future research, policy-making, and practice will be assessed.
Quality Appraisal Assessment {#Sec9}
----------------------------
All included studies will be assessed for the risk of bias using reliable quality assessment tools chosen according to the design of the study. The methodological quality of included epidemiological studies will be assessed using a tool adopted from Hoy et al., which has been shown to be a valid tool to assess low back pain epidemiological studies (Table [4](#Tab4){ref-type="table"}). It contains eight items: sample frame, sample size estimates, randomization used, likelihood of non-response bias, validity of the study instruments, standardization of data collection, use of human body drawings, and whether the data was collected directly from the subjects. A score weighting index of 0.2 was attributed to sample representativeness, non-response bias probability, and randomization. This greater weighting was given to the characteristics that had a higher chance of causing bias in chronic low back pain epidemiological estimates. The remaining five items were given an index score weighting of 0.08, which enables a total score of 1.0. The methodological quality of cost-of-illness studies will be assessed for the risk of bias using an analytical grid adopted from Costa's 2012 COI study (Table [5](#Tab5){ref-type="table"}), which covers the following main aspects of COI studies: a clear definition of the illness, carefully described epidemiological sources, sufficiently disaggregated costs, description and assessment of activity data sources, analytical description of cost values, proper valuation of unit costs, careful description of the methodology used, discounted costs, testing of major assumptions, and the consistency of the presentation of the study results with the methodology of the study \[[@CR20], [@CR21]\]. Each item bears the same weight, and the final score will be the sum of all eleven items. This will be done to make sure that the studies are of moderate to good quality, that the study design is appropriate for the study objectives, and to limit the risk of bias.

Table 4 Methodological quality assessment tool for LBP epidemiological studies (score weight per item in parentheses; the total score is the weighted sum)
- Was the sampling frame a true or close representation of the target population? (0.2)
- Was the sample size estimated? (0.08)
- Was some form of random selection used to select the sample, OR was a census undertaken? (0.2)
- Was the likelihood of non-response bias minimal? (0.2)
- Were data collected directly from the subjects (as opposed to a proxy)? (0.08)
- Had the study instrument that measured the parameter of interest (e.g., CLBP prevalence) been tested for reliability and validity? (0.08)
- Was data collection standardised? (0.08)
- Was a human body drawing used? (0.08)

Table 5 Methodological quality assessment of cost-of-illness studies (one point per item; the total score is the sum)
- Was a clear definition of the illness given?
- Were epidemiological sources carefully described?
- Were the costs sufficiently disaggregated?
- Were activity data sources carefully described?
- Were activity data appropriately assessed?
- Were the sources of cost values analytically described?
- Were unit costs appropriately valued?
- Were the methods adopted carefully explained?
- Were costs discounted?
- Were major assumptions tested in a sensitivity analysis?
- Was the presentation of study results consistent with the methodology of the study?
Discussion {#Sec10}
==========
The proposed scoping review seeks to map existing evidence regarding the distribution (prevalence, incidence and mortality), risk factors, and estimated costs associated with chronic low back pain among adults in SSA in order to highlight the research gaps in this area. This scoping review will include cohort studies, cross-sectional studies, and cost-of-illness study designs conducted among adults in Sub-Saharan Africa. The studies to be included should have been conducted in English, and there will be no date restrictions. Intervention-based studies and randomized controlled clinical trials will be excluded from this review, as their data would not be relevant to, and would not address, the research question of the review.
This study will be the first to map the evidence on the distribution of chronic low back pain in Sub-Saharan Africa with estimates on prevalence, incidence, mortality, risk factors, and associated cost. Low back pain is a common public health problem with a significant impact on society, and it imposes a financial burden on both HICs and LMICs \[[@CR11]\]. Low back pain has been referred to as a healthcare enigma because its determinants are unknown in most cases and its usual diagnosis of convenience is non-specific low back pain. According to the Global Burden of Disease 2010, low back pain is the sixth-ranking burden among 291 diseases and a leading cause of years lived with disability \[[@CR10], [@CR22], [@CR23]\]. Low back pain is the leading cause of disability and activity limitation, resulting in significant production losses at work, and demands billions of dollars in medical care expenditure annually \[[@CR10]\]. It therefore imposes a significant economic burden on national health budgets, further constraining already fragile African health systems.
Mapping out the trends of chronic low back pain over time is essential to predict the future outcomes of the disease. This will help primary healthcare workers, policy makers, and other stakeholders to take precautionary measures to prevent or minimize the predicted socio-economic burden. The economic forecast of low back pain will help to ensure efficient healthcare resource allocation and inform cost-benefit and/or cost-effectiveness analyses. Knowledge of the mortality trends of low back pain is of paramount importance to healthcare providers. It will serve as a wake-up call for the implementation of programs such as exercise programs for frail individuals and dietary recommendations, as well as clinical trials to develop cost-effective interventions.
Epidemiological estimates are essential; knowing which population group is more vulnerable will assist in the design of interventions specific to that group. For example, chronic low back pain is more prevalent among working adults, particularly those with a sedentary lifestyle. Health is a human right; therefore, all employers of sedentary employees should be obliged to supply ergonomic chairs for their employees and to provide gym allowances for exercise programs. All workplaces should be assessed at regular intervals by safety and health professionals to ensure that all necessary low back pain preventative measures are implemented. Identifying the risk factors of low back pain will help primary healthcare providers to implement prevention programs and coping strategies to alleviate the impact of the disease on society. Mapping out the existing evidence on the prevalence, incidence, mortality, disability, risk factors, and socio-economic burden of chronic low back pain in SSA will reveal evidence-based knowledge gaps, inform future research, and enrich the study findings.
Conclusion {#Sec11}
==========
The proposed scoping review is anticipated to provide an overview of the distribution of chronic low back pain among adults in the SSA region, with estimates on prevalence, incidence and mortality, together with the identification of risk factors and the associated economic burden. The evidence synthesized from this study will help researchers, decision makers, and other stakeholders to inform policy and ensure an efficient allocation of healthcare resources, improving healthcare system performance and thus improving treatment and reducing the associated mortality.
Supplementary information
=========================
{#Sec12}
**Additional file 1.** PRISMA-ScR Checklist (16).
SSA
: Sub-Saharan Africa
CLBP
: Chronic low back pain
KZN
: KwaZulu-Natal
HICs
: High-income countries
LMICs
: Low-middle-income countries
MMQAT
: Mixed Methods Quality Appraisal Test
PECOd
: Population Exposure Context Outcome design framework
AIDS
: Acquired immunodeficiency syndrome
HIV
: Human immunodeficiency virus
TB
: Tuberculosis
**Publisher's Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
=========================
**Supplementary information** accompanies this paper at 10.1186/s13643-020-01321-w.
The authors would like to thank the University of KwaZulu-Natal (UKZN) for the provision of resources towards this review and the UKZN Systematic Review Unit for the training and technical support.
Timeline {#FPar1}
========
We will not proceed with the conduct of the review until the final peer-review comments are received and the protocol is accepted for publication.
MK conceptualized the study under the supervision of TG and designed the data collection methods. MK and TG contributed to writing the first draft of the manuscript. All authors critically reviewed and approved of the final manuscript.
Not applicable
All data generated or analyzed during this study will be included in the published systematic review article.
Not applicable
Not applicable
The authors declare that they have no competing interests.
|
Q:
find a path in maze using recursion
Hi~ I got stuck on this problem. Someone please help me!!!!
The problem is that the program will ask the user to input a number between 4 and 20 to decide the size of the maze. It will later ask the user to input the contents of the maze row by row and store it into a 2D bool array (true means blocked and false means clear). Then the program starts at the top left corner and tries to find a path leading to the lower right corner (it can move right, left, up, down). At the same time, the program should also maintain another char array which records the path found (if there's any) and print it out at the end of processing. This problem asks to use recursion to solve it.
Here's what I got now:
#include<iostream>
using namespace std;
int row, col;
int size=0;
bool maze[21][21];
char print[22][22];
const char start = 's', up = 'u', down = 'd', left = 'l', right = 'r', barrier = 'x';
char path(int coorx, int coory, int size)
{
    if(maze[coorx][coory+1]=0)
    {
        print[coorx+1][coory+2]='r';
        return path(coorx,coory+1,size);
    }
    else
    {
        if(maze[coorx+1][coory]=0)
        {
            print[coorx+2][coory+1]='d';
            return path(coorx+1,coory,size);
        }
        else
        {
            if(maze[coorx][coory-1]=0)
            {
                print[coorx+1][coory]='l';
                return path(coorx,coory-1,size);
            }
            else
            {
                if(maze[coorx-1][coory]=0)
                {
                    print[coorx][coory+1]='u';
                    return path(coorx-1,coory,size);
                }
            }
        }
    }
}
int main()
{
    while(size<4 || size>20)
    {
        cout<<"Please input size of maze (a number between 4 and 20 is expected) -> ";
        cin >>size;
        if(size<4 || size>20)
            cout<<"**Error** maze size not in range!"<<endl;
    }
    cout<<"Please input contents of maze row by row, 1 for barrier and 0 for free passage."<<endl;
    cout<<endl;
    for(int i=1; i<size+1; i++)
    {
        for(int j=1; j<size+1; j++)
            cin>>maze[i][j];
    }
    if(maze[1][1]==1)
        cout<<"**Error** entrance to maze is blocked!"<<endl;
    else
    {
        // find the path
        for(int coorx=0;coorx<size;coorx++)
        {
            for(int coory=0;coory<size;coory++)
                path(coorx,coory,size);
        }
        cout<<"The maze and the path:"<<endl;
        // print the forum (adding characters '+','-', ' ')
        print[0][0]=print[size+1][size+1]=print[0][size+1]=print[size+1][0]='+';
        print[1][1]='s';
        for(int x=1; x<size+1; x++)
        {
            for(int y=0; y<size+2; y++){
                if(y==0 || y==size+1)
                {
                    print[x][y]='|';
                }
            }
        }
        for(int x=0; x<size+2; x++)
        {
            for(int y=0; y<size+2; y++){
                if(x==0 || x== size+1){
                    if(y!=0 && y!=size+1)
                        print[x][y]='-';
                }
            }
        }
        for(int row=0; row<size+2; row++)
        {
            for(int col=0; col<size+2; col++)
            {
                if(maze[row][col]==1)
                    print[row][col]='x';
            }
        }
        // print out the record of the path found
        for(int row=0; row<size+2; row++)
        {
            for(int col=0; col<size+2; col++){
                cout<<print[row][col];
            }
            cout << endl;
        }
    }
    return 0;
}
I don't know why I can't show those 'r', 'd', 'l', 'u'. I already assigned them to print[][], but why won't they show when I print out print[][]??
New coding
#include<iostream>
using namespace std;
int row,col;
int size=0;
bool maze[20][20];
char print[22][22];
bool path(int coorx, int coory, int size)
{
    if(coorx==size-1 && coory==size-1)
        return true;
    if(!maze[coorx][coory+1] && path(coorx,coory+1,size))
        return true;
    return 'r';
    if(!maze[coorx+1][coory] && path(coorx+1,coory,size))
        return true;
    return 'd';
    if(!maze[coorx][coory-1] && path(coorx,coory-1,size))
        return true;
    return 'l';
    if(!maze[coorx-1][coory] && path(coorx-1,coory,size))
        return true;
    return 'u';
}
int main()
{
    while(size<4 || size>20)
    {
        cout<<"Please input size of maze (a number between 4 and 20 is expected) -> ";
        cin >>size;
        if(size<4 || size>20)
            cout<<"**Error** maze size not in range!"<<endl;
    }
    cout<<"Please input contents of maze row by row, 1 for barrier and 0 for free passage."<<endl;
    cout<<endl;
    for(int i=0; i<size; i++)
    {
        for(int j=0; j<size; j++)
            cin>>maze[i][j];
    }
    if(maze[0][0]==1)
        cout<<"**Error** entrance to maze is blocked!"<<endl;
    else
    {
        int row=0;
        int col=0;
        path(row,col,size);
        if(!path(row,col,size))
        {
            cout<<"**Warning** no path from entrance to exit!"<<endl;
        }
        else
        {
            if('r')
                print[row+2][col+3]='r';
            if('d')
                print[row+3][col+2]='d';
            if('l')
                print[row+2][col+1]='l';
            if('u')
                print[row+1][col+2]='u';
        }
        cout<<"The maze and the path:"<<endl;
        // print the forum (adding characters '+','-', ' ')
        print[0][0]=print[size+1][size+1]=print[0][size+1]=print[size+1][0]=='+';
        print[1][1]='s';
        for(int x=1; x<size+1; x++)
        {
            for(int y=0; y<size+2; y++){
                if(y==0 || y==size+1)
                {
                    print[x][y]=='|';
                }
            }
        }
        for(int x=0; x<size+2; x++)
        {
            for(int y=0; y<size+2; y++){
                if(x==0 || x== size+1){
                    if(y!=0 && y!=size+1)
                        print[x][y]=='-';
                }
            }
        }
        for(int row=0; row<size; row++)
        {
            for(int col=0; col<size; col++)
            {
                if(maze[row][col]==1)
                    print[row+1][col+1]=='x';
            }
        }
        path(0,0,size);
        // print out the record of the path found
        for(int row=0; row<size+2; row++)
        {
            for(int col=0; col<size+2; col++){
                cout<<print[row][col];
            }
            cout << endl;
        }
    }
    return 0;
}
for maze size=4
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
the sample run looks like
s r r r
d
d
d
but my program run like this
s r
nothing afterward, don't know why
A:
In your if(maze[coorx][coory+1]=0) (and similar) statements in path(), the single equals is the assignment operator, so it'll always evaluate to false.
Since maze is a bool array, you should just use if(!maze[coorx][coory + 1]).
print should probably be a one-dimensional array instead of a two-dimensional array, since it only needs to keep track of "r", "l", "d", and "u", and not the cells where you take those actions.
In path(), you're returning early instead of backtracking. You should return only if a path exists, and you can tell whether a path exists by recursing.
In other words, the path() function needs to be rewritten to look something like this:
/* returns true if there's a path to the bottom right cell, otherwise false */
bool path(int coorx, int coory, int size) {
    if(coorx == size - 1 && coory == size - 1) { // exit of maze
        return true;
    }
    if(!maze[coorx][coory + 1] && path(coorx, coory + 1, size)) {
        // add "right" to your path
        return true;
    }
    if(!maze[coorx + 1][coory] && path(coorx + 1, coory, size)) {
        // add "down" to your path
        return true;
    }
    // etc...
}
Of course, you should add bounds checking as well.
And, path will be backwards at the end of the recursion, but you can just reverse it.
You should call the path() function just once at the top left cell, not at each and every cell. Recursion will handle the search through all cells of the maze.
if(maze[1][1]==1) in main() should probably be if(maze[0][0]) instead, since you're apparently trying to start at the top left cell.
Edit:
Once you get the recursion working, you can simply add to print in path():
bool path(int coorx, int coory, int size, int depth) {
    // if(coorx == size - 1 ... base case
    if(!maze[coorx][coory + 1] && path(coorx, coory + 1, size, depth + 1)) {
        print[coorx][coory + 1] = 'r';
        return true;
    }
    // etc...
}
You're calling path() three times in main(). Just call it once and store the result in a boolean.
You're not printing the path correctly. The sample output prints out the character for each cell in print -- you should do the same.
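Once all of that is in place, a complete backtracking version could look something like the sketch below. It keeps your 2D print array and its one-cell border offsets from the question; the visited array is an addition of mine, needed to stop the left/up moves from cycling forever:
bool visited[20][20]; // tracks cells already tried on the current search
bool path(int coorx, int coory, int size)
{
    if (coorx < 0 || coorx >= size || coory < 0 || coory >= size)
        return false; // out of bounds
    if (maze[coorx][coory] || visited[coorx][coory])
        return false; // wall, or already explored
    visited[coorx][coory] = true;
    if (coorx == size - 1 && coory == size - 1)
        return true;  // reached the exit
    // mark the destination cell of each successful move (print has a 1-cell border)
    if (path(coorx, coory + 1, size)) { print[coorx + 1][coory + 2] = 'r'; return true; }
    if (path(coorx + 1, coory, size)) { print[coorx + 2][coory + 1] = 'd'; return true; }
    if (path(coorx, coory - 1, size)) { print[coorx + 1][coory] = 'l'; return true; }
    if (path(coorx - 1, coory, size)) { print[coorx][coory + 1] = 'u'; return true; }
    return false;     // dead end -- backtrack
}
Called once as path(0, 0, size) from main(), this fills print with the r/d/l/u marks on the way back out of the recursion.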
|
Q:
Issues with Flume HDFS sink from Twitter
I currently have this configuration in Flume :
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'TwitterAgent'
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = YPTxqtRamIZ1bnJXYwGW
TwitterAgent.sources.Twitter.consumerSecret = Wjyw9714OBzao7dktH0csuTByk4iLG9Zu4ddtI6s0ho
TwitterAgent.sources.Twitter.accessToken = 2340010790-KhWiNLt63GuZ6QZNYuPMJtaMVjLFpiMP4A2v
TwitterAgent.sources.Twitter.accessTokenSecret = x1pVVuyxfvaTbPoKvXqh2r5xUA6tf9einoByLIL8rar
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientiest, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://hadoop1:8020/user/flume/tweets/%Y/%m/%d/%H/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100
The twitter app auth keys are correct.
And I keep getting this error in the flume log file:
ERROR org.apache.flume.SinkRunner
Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop1
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:446)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: hadoop1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2310)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2344)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2326)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:353)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:227)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:221)
at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:589)
at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:161)
at org.apache.flume.sink.hdfs.BucketWriter.access$800(BucketWriter.java:57)
at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:586)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
... 1 more
Caused by: java.net.UnknownHostException: hadoop1
... 23 more
Does any one here knows why and could explain it to me?
Thanks in advance.
A:
According to the Exception, the problem is that the host hadoop1 is unknown.
According to the Flume configuration file, the path you have given is
hdfs://hadoop1:8020/user/flume/tweets/%Y/%m/%d/%H/
which is supposed to be accessible from the machine running the Flume agent. Since the host name hadoop1 cannot be resolved from that machine (it is not known to DNS and not listed in /etc/hosts), you need to access HDFS using the NameNode's IP address as set in core-site.xml.
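For example, if the NameNode's address in core-site.xml were 192.0.2.10 (a placeholder -- substitute your actual NameNode IP), the sink path would become:
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://192.0.2.10:8020/user/flume/tweets/%Y/%m/%d/%H/
Alternatively, adding an entry for hadoop1 to /etc/hosts on the machine running the Flume agent would also make the host name resolvable.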
|
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.cxf.systest.provider;
import java.lang.reflect.UndeclaredThrowableException;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Endpoint;
import javax.xml.ws.Service;
import javax.xml.ws.soap.SOAPBinding;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.apache.cxf.testutil.common.AbstractBusClientServerTestBase;
import org.apache.cxf.testutil.common.AbstractBusTestServerBase;
import org.apache.cxf.testutil.common.TestUtil;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
public class NBProviderClientServerTest extends AbstractBusClientServerTestBase {
    public static final String ADDRESS
        = "http://localhost:" + TestUtil.getPortNumber(Server.class)
        + "/SoapContext/SoapProviderPort";
    private static QName sayHi = new QName("http://apache.org/hello_world_soap_http/types", "sayHi");
    public static class Server extends AbstractBusTestServerBase {
        Endpoint ep;
        protected void run() {
            Object implementor = new NBSoapMessageDocProvider();
            ep = Endpoint.publish(ADDRESS, implementor);
        }
        @Override
        public void tearDown() {
            ep.stop();
        }
        public static void main(String[] args) {
            try {
                Server s = new Server();
                s.start();
            } catch (Exception ex) {
                ex.printStackTrace();
                System.exit(-1);
            } finally {
                System.out.println("done!");
            }
        }
    }
    @BeforeClass
    public static void startServers() throws Exception {
        assertTrue("server did not launch correctly", launchServer(Server.class, true));
    }
    @Test
    public void testSOAPMessageModeDocLit() throws Exception {
        QName serviceName =
            new QName("http://apache.org/hello_world_soap_http", "SOAPProviderService");
        QName portName =
            new QName("http://apache.org/hello_world_soap_http", "SoapProviderPort");
        Service service = Service.create(serviceName);
        assertNotNull(service);
        service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING, ADDRESS);
        try {
            Dispatch<SOAPMessage> dispatch = service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);
            MessageFactory factory = MessageFactory.newInstance();
            SOAPMessage request = encodeRequest(factory, "sayHi");
            SOAPMessage response;
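            // the provider rejects any request that carries a body ("no body expected");
            // only the second, bodiless request below should succeed and return "Bonjour"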
            try {
                response = dispatch.invoke(request);
                fail("Should have thrown an exception");
            } catch (Exception ex) {
                //expected
                assertEquals("no body expected", ex.getMessage());
            }
            request = encodeRequest(factory, null);
            response = dispatch.invoke(request);
            String resp = decodeResponse(response);
            assertEquals("Bonjour", resp);
        } catch (UndeclaredThrowableException ex) {
            throw (Exception)ex.getCause();
        }
    }
    private SOAPMessage encodeRequest(MessageFactory factory, String value) throws SOAPException {
        SOAPMessage request = factory.createMessage();
        SOAPEnvelope envelope = request.getSOAPPart().getEnvelope();
        request.setProperty("soapaction", "");
        if (value != null) {
            request.getSOAPBody().addBodyElement(envelope.createName(value, "ns1", sayHi.getNamespaceURI()));
        }
        return request;
    }
    private String decodeResponse(SOAPMessage response) throws SOAPException {
        NodeList nodelist = response.getSOAPBody().getElementsByTagNameNS(sayHi.getNamespaceURI(), "responseType");
        if (nodelist.getLength() == 1) {
            Node node = nodelist.item(0).getFirstChild();
            if (node != null) {
                return node.getNodeValue();
            }
        }
        return null;
    }
}
|
Media caption: "These are some of the most serious charges of recent years, and the security reflected that," reports Daniel Sandford
Five men have appeared in court charged in connection with a terror plot "to shoot, to kill, police officers or soldiers on the streets of London".
Tarik Hassane, Suhaib Majeed, Nyall Hamlett, and Momen Motasim, all from London, have been charged with intending to commit acts of terrorism.
A fifth man, Nathan Cuffy, 25, from London, faces firearms offences.
All five were remanded in custody until 27 October after the hearing at Westminster Magistrates' Court.
They will next appear at the Old Bailey.
The men, who have been jointly charged with the intention of committing acts of terrorism, or assisting others to commit such acts, between 8 July and 7 October, are:
Tarik Hassane, 21, of Dalgarno Way, west London
Suhaib Majeed, 20, of Church Street Estate, north west London
Nyall Hamlett, 24, of Great Western Road, west London
Momen Motasim, 21, of Hallfield Estate, west London
They were charged following arrests in London over the last two weeks.
Prosecuting, Mark Dawson, told the court the case revolved around an alleged plot "to shoot, to kill, police officers or soldiers on the streets of London."
He said 21 details within the terror charge related to different individuals, but did not give specific details about which parts of the charge related to which accused.
The five men had appeared in the dock flanked by 12 police and security guards, said the BBC's Steve Swann, who was in court.
'Secret communications'
The Metropolitan Police said the men are accused of taking an oath of allegiance to the Islamic State (IS) militant group and arranging to buy a handgun equipped with a silencer and conducting "hostile reconnaissance" of a police station and Army barracks in London using Google Street View.
The terror charges also include allegations that the four men set up "methods of secret communications" and that they "entered into covert discussions".
They are accused of discussing the sourcing of a moped, along with where they could store the vehicle in Shepherd's Bush, west London.
Image caption: Tarik Hassane was among those charged
The charges also allege the men conducted reconnaissance of Shepherd's Bush police station and White City Territorial Army barracks.
Police say the men are also accused of viewing and retaining images sent via Instagram of two Met Police officers and two Met police community support officers, and having "graphic images" of the beheading of Steven Sotloff.
Mr Sotloff, a US journalist, was killed by IS militants last month having been kidnapped in Syria in 2013.
Two other men, aged 20 and 21, arrested on 7 October on suspicion of being concerned in the commission, preparation or instigation of acts of terrorism were released without charge on Monday.
Another man, arrested on the same date, was released from custody earlier. Police say evidence was submitted to the Crown Prosecution Service, which advised it was "insufficient" to bring charges "at this time".
A woman, 19, has been released on police bail until next week pending further enquiries, police said. |
About Twin Peaks Inspections
Duane Younger is a construction professional who already had a quarter century of construction experience prior to becoming an inspector. From the moment an inspection begins, it will be obvious to you that he is passionate about helping people understand their buildings. As a home inspector, Duane has seen nearly every type of home imaginable and is eager to demystify your unique property for you. A quick glance at his client testimonials will show that he is serious about providing you with the knowledge you need to make a good decision.
Your family’s safety is of paramount importance; don’t trust it to a home inspector without Twin Peaks Inspections’ extensive background in home environmental remediation.
Duane’s distinguished construction resume includes extensive work in environmental remediation. Within the building industry, this refers to ensuring that the interior of a structure is a healthy environment with no contaminants that could threaten the safety of occupants over time. Even before becoming an ASHI certified home inspector, Duane had years of experience diagnosing and designing solutions to such serious home health hazards as asbestos, lead paint, mold, and radon. These dangers are not always easily noticeable, even by people with some familiarity with these hazardous materials.
Another key factor which makes a building a healthy environment is Indoor Air Quality (IAQ). Many components contribute to a structure’s IAQ, and without a thorough understanding of how they work together, it is impossible to be certain about a home’s long-term safety. You shouldn’t have to research what a “panned return” is, or why it can contribute to aggravating sensitive respiratory tracts. You should hire an inspector with a highly specialized background in environmental safety who goes above and beyond to ensure your building is a safe and healthy place for your family and guests.
When Duane isn’t helping you understand your property, he loves gardening, the challenges of hunting and fishing, and the relaxation of a day spent boating and playing on the water. |
---
abstract: 'This paper studies the Craig variant of the Golub-Kahan bidiagonalization algorithm as an iterative solver for linear systems with saddle point structure. Such symmetric indefinite systems in 2x2 block form arise in many applications, but standard iterative solvers are often found to perform poorly on them and robust preconditioners may not be available. Specifically, such systems arise in structural mechanics, when a semidefinite finite element stiffness matrix is augmented with linear multi-point constraints via Lagrange multipliers. Engineers often use such multi-point constraints to introduce boundary or coupling conditions into complex finite element models. The article will present a systematic convergence study of the Golub-Kahan algorithm for a sequence of test problems of increasing complexity, including concrete structures enforced with pretension cables and the coupled finite element model of a reactor containment building. When the systems are suitably transformed using augmented Lagrangians on the semidefinite block and when the constraint equations are properly scaled, the Golub-Kahan algorithm is found to exhibit excellent convergence that depends only weakly on the size of the model. The new algorithm is found to be robust in practical cases that are otherwise considered to be difficult for iterative solvers.'
author:
- 'Mario Arioli [^1]'
- 'Carola Kruse [^2]'
- 'Ulrich Rüde [^3]'
- 'Nicolas Tardieu [^4]'
bibliography:
- 'ind.bib'
title: 'An iterative generalized Golub-Kahan algorithm for problems in structural mechanics '
---
iterative solvers, indefinite systems, saddle point, Golub-Kahan bidiagonalization, structural mechanics, multi-point constraints
65F10, 65F08, 35Q74
Introduction {#sec:intro}
============
In structural mechanics, it is very common to impose kinematic relationships between degrees of freedom (DOF) in a finite element model. Rigid body conditions of a stiff part of a mechanical system or cyclic periodicity conditions on a mesh representing only a section of a periodic structure are typical examples of this approach. Such conditions can also be used to glue non-conforming meshes or meshes containing different types of finite elements. For example, we could link a thin structure modeled by shell finite elements to a massive structure modeled with continuum finite elements. These kinematic relationships are often called multi-point constraints (MPC) in standard finite element software and can be linear or nonlinear. In the case of a well-posed mechanical problem discretized with finite elements, the solution of the linearized problem can be expressed as the following constrained minimization problem $$\begin{aligned}
\min_{{{\bf A}}^T{{\bf w}}= {{\bf r}}} \frac{1}{2} {{\bf w}}^T {{\bf W}}{{\bf w}}- {{\bf g}}^T{{\bf w}}, \label{eqn:minu}\end{aligned}$$ where
- ${{\bf W}}$ is the tangent stiffness matrix,
- ${{\bf A}}$ is the linearized matrix of the constraints,
- ${{\bf w}}$ is the vector of nodal displacement unknowns,
- ${{\bf g}}$ is the volume force vector,
- ${{\bf r}}$ is the data vector for inhomogeneous constraints.
With the introduction of Lagrange multipliers ${{\bf p}}$, the augmented system that gives the optimality conditions for (\[eqn:minu\]) reads $$\begin{aligned}
\left[ \begin{array}{cc}
{{\bf W}}& {{\bf A}}\\
{{\bf A}}^T & 0
\end{array}\right]
\left[ \begin{array}{c}
{{\bf w}}\\{{\bf p}}\end{array}\right] =
\left[ \begin{array}{c}
{{\bf g}}\\ {{\bf r}}\end{array}\right]. \label{eqn:augsys}\end{aligned}$$ In this article we assume that ${{\bf W}}$ is symmetric positive semidefinite, as it is typically the case when ${{\bf W}}$ arises from finite element models in structural mechanics. We additionally assume that $$\begin{aligned}
\ker({{\bf W}})\cap \ker({{\bf A}}^T) = \left\{0\right\} \mbox{ and } \ker {{\bf A}}= \left\{ 0\right\}. \label{eqn:WKerAt}\end{aligned}$$ To obtain a positive definite (1,1)-block in \[eqn:augsys\], a common method is to apply an *augmented Lagrangian approach* as described by Golub and Greif [@GoGr2003]. Let therefore ${{\bf N}}\in \mathbb{R}^{n\times n}$ be a symmetric positive definite matrix. Then we modify the leading block into
$$\begin{aligned}
{{\bf M}}:= {{\bf W}}+ {{\bf A}}{{\bf N}}^{-1} {{\bf A}}^{T}. \label{eqn:regular}\end{aligned}$$
With the transformation $$\begin{aligned}
\begin{array}{lll}
{{\bf M}}&= &{{\bf W}}+ {{\bf A}}{{\bf N}}^{-1}{{\bf A}}^T\\
{{\bf u}}&= &{{\bf w}}- {{\bf M}}^{-1}({{\bf g}}- {{\bf A}}{{\bf N}}^{-1}{{\bf r}})\\
{{\bf b}}& = &{{\bf r}}- {{\bf A}}^T{{\bf M}}^{-1}({{\bf g}}- {{\bf A}}{{\bf N}}^{-1}{{\bf r}}),
\end{array}
\label{eqn:trafo_semi_def}\end{aligned}$$ \[eqn:augsys\] is transformed into the equivalent system $$\begin{aligned}
\left[
\begin{array}{cc}
{{\bf W}}+ {{\bf A}}{{\bf N}}^{-1} {{\bf A}}^T & {{\bf A}}\\
{{\bf A}}^T & 0
\end{array}
\right]
\left[
\begin{array}{c}
{{\bf u}}\\
{{\bf p}}\end{array}
\right]
=
\left[
\begin{array}{c}
0 \\
{{\bf b}}\end{array}
\right].\label{eqn:augsys_auglag}\end{aligned}$$ This kind of regularization of the $(1,1)$-block is a common technique [@GoGr2003; @BeGoLi2005; @Ar2013]. It can also be applied when ${{\bf W}}$ is positive definite, with the goal that for a suitably chosen ${{\bf N}}$, we may find that \[eqn:augsys\_auglag\] becomes easier to solve than the original system. In the following, we will use the notation ${{\bf M}}$ for a positive definite matrix.\
\
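To make the transformation concrete, the following is a minimal sketch of \[eqn:trafo\_semi\_def\], assuming dense NumPy arrays for readability (in practice ${{\bf W}}$ and ${{\bf A}}$ are sparse and ${{\bf M}}$ is factorized once); the function name is ours and purely illustrative:

```python
# A sketch of the augmented Lagrangian transformation (eqn:trafo_semi_def):
# build M = W + A N^{-1} A^T and the new right-hand side b.
import numpy as np
from scipy.linalg import solve

def augmented_lagrangian_transform(W, A, N, g, r):
    t = g - A @ solve(N, r)        # t = g - A N^{-1} r
    M = W + A @ solve(N, A.T)      # M = W + A N^{-1} A^T
    b = r - A.T @ solve(M, t)      # b = r - A^T M^{-1} t
    return M, b
```

Once the transformed system \[eqn:augsys\_auglag\] has been solved for ${{\bf u}}$, the displacement is recovered as ${{\bf w}}= {{\bf u}}+ {{\bf M}}^{-1}({{\bf g}}- {{\bf A}}{{\bf N}}^{-1}{{\bf r}})$.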
The efficient solution of the above saddle point linear system \[eqn:augsys\_auglag\] has stimulated intensive research. One possible approach is to introduce the constraints on the continuous level, i.e., in the weak form of a PDE, as with the mortar approach [@Bernardi1989]. In industrial software, when multi-point constraints are used, the constraints are however imposed on the already discretized equations. As it is furthermore usually not possible to make major modifications to an existing legacy code, any method of mortar type becomes infeasible. In this article, we will focus on the situation where the constraints are introduced on the discrete level, for which the solution of \[eqn:minu\] remains a difficult task. We refer the reader to [@BeGoLi2005] for a comprehensive review of the topic. One of the commonly used methods is the Schur complement reduction technique, which requires an invertible (1,1)-block ${{\bf M}}$. It then has the advantage of solving two linear systems of size $m$ and $n$, instead of one system of size $m+n$. There is however the disadvantage that the Schur complement matrix ${{\bf S}}= -{{\bf A}}^T {{\bf M}}^{-1} {{\bf A}}$ may be dense and thus becomes expensive to solve. Krylov subspace methods for \[eqn:augsys\_auglag\] are reviewed in [@Saad2003]. In realistic finite element applications the saddle point matrix can be very poorly conditioned. As discussed in [@BeGoLi2005 section 3.5], when the mesh size parameter $h$ goes to zero, the condition number of \[eqn:augsys\_auglag\] may increase. Krylov subspace methods will thus perform poorly with increasing problem size and rely on good preconditioning techniques. Another method to solve the saddle point system is based on an elimination technique [@Abel1979; @jendele2009]. This strategy implies major modifications of the matrix of the linear system, whose profile can become much denser. Furthermore, the underlying algorithm is often sequential, where each constraint is treated one after the other. Consequently, this technique cannot be used easily in a parallel framework. A different approach is used in [@stgeorges98]. The authors introduce a projector onto the orthogonal complement of the kernel of the constraint matrix ${{\bf A}}$ and solve the linear system on that subspace with an iterative method. This subspace projection technique is elegant and favorable convergence properties are shown. Unfortunately, the definition of the projector involves the factorization of the operator ${{\bf A}}^T {{\bf A}}$, which, in many practical cases, can be quite dense, causing the factorization to be expensive in time and space. Furthermore, one forward-backward substitution is needed at each iteration of the iterative method.\
\
In this paper we will focus on an iterative method for \[eqn:augsys\_auglag\] based on the Golub-Kahan bidiagonalization technique. We will find the iterates ${{\bf u}}^k$ and ${{\bf p}}^k$ separately, which requires solving linear systems for ${{\bf M}}$ and for ${{\bf N}}$. We will show that for an appropriate choice of the matrix ${{\bf N}}$, the number of iterations required for convergence stays small and constant when the problem size increases. In particular, we will use this algorithm to solve problems in solid mechanics for which commonly used iterative solvers show a poor performance. Our test problems are generated by the finite element software code\_aster ([www.code-aster.org](www.code-aster.org)). Code\_aster covers a wide range of physics including solid mechanics, thermics, acoustics, and coupled thermo-hydro-mechanics, and is also developed to numerically simulate critical industrial applications. It can treat steady-state and transient problems with various nonlinearities including frictional contact or complex constitutive laws. Code\_aster has been developed since 1989 by EDF, one of the biggest electric utility companies in the world, and has been released as open source software under the GPL license since 2001. It is developed under Quality Assurance and has been approved by the French (Autorité de Sûreté Nucléaire) and British (Health and Safety Executive) nuclear regulatory authorities for numerical studies related to nuclear safety. The paper is organized as follows: We first introduce and review the Golub-Kahan bidiagonalization algorithm in \[sec:GGKB\]. In \[sec:numexp\], we focus on models in structural mechanics and present a systematic convergence study. In \[sec:containment\], we will apply the proposed algorithm to a realistic industrial test case of a reactor containment building.
The generalized Golub-Kahan bidiagonalization method {#sec:GGKB}
====================================================
We will start by summarizing the main results of [@Ar2013] which are needed in our further discussion.
Fundamentals of the Golub-Kahan bidiagonalization algorithm
-----------------------------------------------------------
In the following, we will use the Hilbert spaces $$\begin{aligned}
\mathcal{M} = \{{\bf v} \in \mathbb{R}^m: \|{\bf v} \|_{{{\bf M}}}^2 = {\bf v}^T {{\bf M}}{\bf v}\}, \hspace{0.3cm} \mathcal{N} = \{{{\bf q}}\in \mathbb{R}^n: \|{{\bf q}}\|_{{{\bf N}}}^2 = {{\bf q}}^T {{\bf N}}{{\bf q}}\} \end{aligned}$$ and their dual spaces $$\begin{aligned}
\mathcal{M}' = \{{\bf v} \in \mathbb{R}^m: \|{\bf v} \|_{{{\bf M}}^{-1}}^2 = {\bf v}^T {{\bf M}}^{-1} {\bf v}\}, \hspace{0.3cm} \mathcal{N}' = \{{{\bf q}}\in \mathbb{R}^n: \|{{\bf q}}\|_{{{\bf N}}^{-1}}^2 = {{\bf q}}^T {{\bf N}}^{-1} {{\bf q}}\}. \end{aligned}$$ The scalar products for $\mathcal{M}$ and $\mathcal{N}$ are denoted by $$\begin{aligned}
({{\bf v}}_1,\, {{\bf v}}_2)_{{{\bf M}}} &= {{\bf v}}_1^T {{\bf M}}{{\bf v}}_2, &\forall {{\bf v}}_1, {{\bf v}}_2\in \mathcal{M},\\
({{\bf q}}_1,\, {{\bf q}}_2)_{{{\bf N}}} &= {{\bf q}}_1^T {{\bf N}}{{\bf q}}_2, &\forall {{\bf q}}_1, {{\bf q}}_2\in \mathcal{N}.\end{aligned}$$ The respective scalar products in the dual spaces are given by $$\begin{aligned}
({{\bf v}}_1,\, {{\bf v}}_2)_{{{\bf M}}^{-1}} &= {{\bf v}}_1^T {{\bf M}}^{-1} {{\bf v}}_2, &\forall {{\bf v}}_1, {{\bf v}}_2\in \mathcal{M},\\
({{\bf q}}_1,\, {{\bf q}}_2)_{{{\bf N}}^{-1}} &= {{\bf q}}_1^T {{\bf N}}^{-1} {{\bf q}}_2, &\forall {{\bf q}}_1, {{\bf q}}_2\in \mathcal{N}.\end{aligned}$$ Given ${{\bf q}}\in {{\cal N}}$ and ${{\bf v}}\in {{\cal M}}$, we define the functional $$\begin{aligned}
{{\cal F}}: {{\cal N}}\times {{\cal M}}\rightarrow \mathbb{R}, \hspace{0.5cm} ({{\bf q}},{{\bf v}})\mapsto \dfrac{{{\bf v}}^T {{\bf A}}{{\bf q}}}{\|{{\bf q}}\|_{{\bf N}}\; \|{{\bf v}}\|_{{\bf M}}}.\label{func}\end{aligned}$$ The critical values of ${{\cal F}}$ are the [*elliptic singular values*]{} $\sigma_i$, and the corresponding critical points ${{\bf q}}_i$, ${{\bf v}}_i$ are the [*elliptic singular vectors*]{} of ${{\bf A}}$. Indeed the saddle-point conditions for \[func\] are $$\begin{aligned}
\label{GSVD}
\left\lbrace
\begin{array}{lcll@{}l}
{{\bf A}}{{\bf q}}_i &=& \sigma_i {{\bf M}}{{\bf v}}_i &\qquad {{\bf v}}_i^T {{\bf M}}{{\bf v}}_j &= \delta_{ij} \\
{{\bf A}}^T {{\bf v}}_i &=& \sigma_i {{\bf N}}{{\bf q}}_i &\qquad {{\bf q}}_i^T {{\bf N}}{{\bf q}}_j &= \delta_{ij}
\end{array}
\right..\end{aligned}$$ Hereafter, we assume that $\sigma_1 \ge \sigma_2 \ge \dots \geq \sigma_n > 0$. If we operate a change of variables using ${{\bf M}}^{-\frac{1}{2}}$ and ${{\bf N}}^{-\frac{1}{2}}$, $$\begin{aligned}
\left\{
\begin{array}{l}
{{\bf v}}= {{\bf M}}^{-1/2} x\\
{{\bf q}}= {{\bf N}}^{-1/2} y\\
\end{array}
\right.
\end{aligned}$$ we have that the elliptic singular values are the standard singular values of $$\tilde{{{\bf A}}} = {{\bf M}}^{-1/2} {{\bf A}}{{\bf N}}^{-1/2}.$$ The generalized singular vectors ${{\bf q}}_i$ and ${{\bf v}}_i$, $i = 1, \dots ,n$, are the transformations by ${{\bf N}}^{-1/2}$ and ${{\bf M}}^{-1/2}$, respectively, of the right and left standard singular vectors of $\tilde{{{\bf A}}}$ [@Ar2013].\
\
In [@GoKa1965; @PaSa1982], several algorithms for the bidiagonalization of a $m \times n$ matrix are presented. All of them can be theoretically applied to $\tilde{{{\bf A}}}$ and their generalization to ${{\bf A}}$ is straightforward as shown by Benbow [@Benbow1999]. Here, we will specifically analyze one of the variants known as the “Craig”-variant [@PaSa1982; @Sa1995; @Sa1997]. We seek the matrices ${{\bf Q}}\in \mathbb{R}^{n\times n}, {{\bf V}}\in \mathbb{R}^{m\times m}$ and the bidiagonal matrix ${{\bf B}}$, such that the following relations are satisfied $$\begin{aligned}
\left\lbrace
\begin{array}{r@{}c@{}ll@{}l}
{{\bf A}}{{\bf Q}}&=& {{\bf M}}{{\bf V}}\left[ \begin{array}{c}{{\bf B}}\\ 0\end{array} \right] &\qquad {{\bf V}}^T {{\bf M}}{{\bf V}}&= {{\bf I}}_m \\
&&\\
{{\bf A}}^T {{\bf V}}&=& {{\bf N}}{{\bf Q}}\left[ {{\bf B}}^T ; 0 \right] &\qquad {{\bf Q}}^T {{\bf N}}{{\bf Q}}&= {{\bf I}}_n
\end{array}
\right. \label{eqn:GKalg}\end{aligned}$$ where $$\begin{aligned}
{{\bf B}}=
\left[ \begin{array}{ccccc}
\alpha_1 & \beta_2 & 0 & \cdots & 0 \\
0 & \alpha_2 & \beta_3 & \ddots & 0 \\
\vdots &\ddots & \ddots & \ddots &\ddots \\
0 & \cdots & 0 &\alpha_{n-1} & \beta_{n} \\
0 & \cdots & 0 & 0 & \alpha_n
\end{array}\right] .\end{aligned}$$ We apply the above relations to the augmented system $$\begin{aligned}
\left[ \begin{array}{cc}
{{\bf M}}& {{\bf A}}\\
{{\bf A}}^T & 0
\end{array}\right]
\left[ \begin{array}{c}
{{\bf u}}\\{{\bf p}}\end{array}\right] =
\left[ \begin{array}{c}
0\\ {{\bf b}}\end{array}\right]. \label{eqn:augsys_GKB}\end{aligned}$$ By the change of variables $$\begin{aligned}
\left\lbrace
\begin{array}{l}
{{\bf u}}= {{\bf V}}\hat{{{\bf z}}} \\
{{\bf p}}= {{\bf Q}}\hat{{{\bf y}}}
\end{array}\right. \label{eqn:chvar}\end{aligned}$$ and by multiplying the system from the left by $$\begin{aligned}
\left[
\begin{array}{cc}
{{\bf V}}^T & 0\\
0 & {{\bf Q}}^T
\end{array}
\right],\end{aligned}$$ the augmented system can be transformed with \[eqn:GKalg\] into $$\begin{aligned}
\left[ \begin{array}{ccc}
{{\bf I}}_n & 0 & {{\bf B}}\\
0 & {{\bf I}}_{m-n} & 0 \\
{{\bf B}}^T & 0 & 0
\end{array}\right]
\left[ \begin{array}{c}
\hat{{{\bf z}}}_1 \\ \hat{{{\bf z}}}_2 \\ \hat{{{\bf y}}}
\end{array}\right] =
\left[ \begin{array}{c}
0 \\ 0 \\ {{\bf Q}}^T {{\bf b}}\end{array} \right] .\end{aligned}$$ We see that $\hat{{{\bf z}}} = (\hat{{{\bf z}}}_1, \hat{{{\bf z}}}_2) = (\hat{{{\bf z}}}_1, 0)$. Consequently, ${{\bf u}}$ only depends on the first $n$ columns of ${{\bf V}}$ and thus the system reduces to $$\begin{aligned}
\left[ \begin{array}{cc}
{{\bf I}}_n & {{\bf B}}\\ {{\bf B}}^T & 0
\end{array}\right]
\left[ \begin{array}{c}
\hat{{{\bf z}}}_1 \\ \hat{{{\bf y}}}
\end{array}\right] =
\left[ \begin{array}{c}
0 \\ {{\bf Q}}^T {{\bf b}}\end{array} \right] .\end{aligned}$$ To define a bidiagonalization algorithm, we choose the first column ${{\bf q}}_1$ of ${{\bf Q}}$ as $$\begin{aligned}
{{\bf q}}_1 = {{\bf N}}^{-1} {{\bf b}}/ \|{{\bf b}}\|_{{{\bf N}}^{-1}}.\end{aligned}$$ A straightforward calculation then shows that $$\begin{aligned}
{{\bf Q}}^T {{\bf b}}= {{\bf e}}_1 \|{{\bf b}}\|_{{{\bf N}}^{-1}}.\end{aligned}$$ In [@Ar2013] it is proved that, denoting by $\zeta_j$ the entries of $\hat{{{\bf z}}}$ and taking advantage of the recursive properties of the Golub-Kahan algorithm [@GoKa1965] together with some of the results of [@PaSa1982], a fully recursive algorithm can be obtained. The final Golub-Kahan bidiagonalization algorithm is presented in Algorithm \[alg:GKB\].
*Algorithm \[alg:GKB\] (generalized Golub-Kahan bidiagonalization, Craig variant): given ${{\bf M}}$, ${{\bf A}}$, ${{\bf N}}$ and ${{\bf b}}$, iterate the recurrences induced by \[eqn:GKalg\] and return ${{\bf u}}^{k+1}, {{\bf p}}^{k+1}$.*
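To make the iteration concrete, the following is a minimal sketch of the Craig variant described above, written in Python with dense arrays for readability; in practice the systems with ${{\bf M}}$ are solved with a factorization computed once (e.g. by MUMPS), and all names are ours, not from a library:

```python
# A sketch of the generalized Golub-Kahan (Craig) iteration for
# [M A; A^T 0][u; p] = [0; b]; dense arrays for readability only.
import numpy as np
from scipy.linalg import solve

def gkb_craig(M, A, N, b, tau=1e-5, d=5, maxit=500):
    m, n = A.shape
    u, p = np.zeros(m), np.zeros(n)
    g = solve(N, b)
    beta = np.sqrt(b @ g)                 # ||b||_{N^{-1}}
    q = g / beta                          # q_1 = N^{-1} b / ||b||_{N^{-1}}
    w = solve(M, A @ q)
    alpha = np.sqrt(w @ (M @ w))          # alpha_1 M v_1 = A q_1
    v = w / alpha
    zeta = beta / alpha                   # from B^T zhat = ||b||_{N^{-1}} e_1
    dvec = q / alpha                      # first column of Q B^{-1}
    u += zeta * v
    p -= zeta * dvec
    zetas = [zeta]
    for k in range(2, maxit + 1):
        g = solve(N, A.T @ v) - alpha * q   # beta N q = A^T v - alpha N q_old
        beta = np.sqrt(g @ (N @ g))
        q = g / beta
        w = solve(M, A @ q) - beta * v      # alpha M v = A q - beta M v_old
        alpha = np.sqrt(w @ (M @ w))
        v = w / alpha
        zeta = -(beta / alpha) * zeta       # recursion for the entries of zhat
        dvec = (q - beta * dvec) / alpha
        u += zeta * v
        p -= zeta * dvec
        zetas.append(zeta)
        # lower-bound stopping rule of sec. "Stopping criteria"
        if k > d and np.sqrt(sum(z * z for z in zetas[-d:])) < tau:
            break
    return u, p
```

With the choice ${{\bf N}}= \frac{1}{\eta} {{\bf I}}$ used below, the solve with ${{\bf N}}$ reduces to a multiplication by $\eta$, so the cost per iteration is dominated by the solve with ${{\bf M}}$.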
We highlight that, in the following, the values of $\zeta_k$, $\alpha_k$ and $\beta_k$ will always be those computed in \[alg:GKB\]. Note that in each iteration two linear systems, one for ${{\bf M}}$ and one for ${{\bf N}}$, have to be solved. Furthermore, the Craig algorithm has an important minimization property. Let ${{\cal V}}= {\mbox{span}}\left\{{{\bf v}}_1,...,{{\bf v}}_k \right\}$ and ${{\cal Q}}= {\mbox{span}}\left\{{{\bf q}}_1,...,{{\bf q}}_k \right\}$. At each step $k$, \[alg:GKB\] computes ${{\bf u}}^{(k)}$ such that [@Sa1995] $$\begin{aligned}
\min_{{{\bf u}}^{(k)}\in {{\cal V}}, \,({{\bf A}}^T{{\bf u}}^{(k)}-{{\bf b}})\perp {{\cal Q}}} \|{{\bf u}}- {{\bf u}}^{(k)}\|_{{{\bf M}}}. \label{eqn:minprop}\end{aligned}$$
Convergence properties of the Golub-Kahan algorithm {#sec:genprob}
---------------------------------------------------
We now consider an augmented system with a positive definite (1,1)-block ${{\bf W}}$. We apply the augmented Lagrangian approach ${{\bf M}}= {{\bf W}}+ {{\bf A}}{{\bf N}}^{-1} {{\bf A}}^{T}$ of \[eqn:regular\], where the matrix ${{\bf N}}$ corresponds to the one in \[eqn:GKalg\]. With the transformation \[eqn:trafo\_semi\_def\], we arrive at an augmented system of the form \[eqn:augsys\_auglag\]. We follow the discussion in [@GoGr2003] and choose $$\begin{aligned}
{{\bf N}}= \frac{1}{\eta} {{\bf I}}.\end{aligned}$$ For an appropriate choice of $\eta$, the following theorem states our main result on the convergence of the GKB method.
\[thm:eta\] Let ${{\bf M}}= {{\bf W}}+ \eta {{\bf A}}{{\bf A}}^T$ and ${{\bf W}}$ be positive definite matrices and $\lambda_1 \leq \dots \leq \lambda_n$ be the eigenvalues of ${{\bf A}}^T {{\bf W}}^{-1} {{\bf A}}$.\
If $\eta \geq \lambda_1^{-1} > 0$, then $\kappa(\tilde{{{\bf A}}}) \leq \sqrt{2}$.
Let $$\begin{aligned}
\sigma_1 \leq \dots \leq \sigma_n \end{aligned}$$ be the elliptic singular values of ${{\bf A}}$ with ${{\bf M}}$ and ${{\bf N}}$ norms as in \[GSVD\]. From \[GSVD\] follows $$\begin{aligned}
\eta {{\bf A}}^T {{\bf M}}^{-1} {{\bf A}}{{\bf q}}_i = \sigma_i^2 {{\bf q}}_i.\end{aligned}$$ Thus $\mu_i = \sigma^2_i$ are the eigenvalues of $$\begin{aligned}
\eta {{\bf A}}^T \bigl({{\bf W}}+ \eta {{\bf A}}{{\bf A}}^T \bigr)^{-1} {{\bf A}}.\end{aligned}$$ With the Sherman-Morrison formula, we obtain $$\begin{aligned}
\eta {{\bf A}}^T \bigl({{\bf W}}+ \eta {{\bf A}}{{\bf A}}^T \bigr)^{-1} {{\bf A}}= \eta {{\bf A}}^T {{\bf W}}^{-1}{{\bf A}}\bigl({{\bf I}}+ \eta {{\bf A}}^T {{\bf W}}^{-1} {{\bf A}}\bigr)^{-1} \end{aligned}$$ Let $\lambda_1 \leq \dots \leq \lambda_n$ be the eigenvalues of ${{\bf A}}^T {{\bf W}}^{-1} {{\bf A}}$. Then $$\begin{aligned}
\mu_i = \dfrac{\eta \lambda_i}{1 + \eta\lambda_i} \qquad \forall i.\end{aligned}$$ We obtain for the condition number of $\tilde{{{\bf A}}} = {{\bf M}}^{-\frac{1}{2}}{{\bf A}}{{\bf N}}^{-\frac{1}{2}}=\sqrt{\eta}\bigl({{\bf W}}+ \eta {{\bf A}}{{\bf A}}^T \bigr)^{-\frac{1}{2}}{{\bf A}}$ $$\begin{aligned}
\kappa^2(\tilde{{{\bf A}}}) = \dfrac{\mu_{max}}{\mu_{min}} \leq \dfrac{1+\eta \lambda_1}{\eta \lambda_1}.\end{aligned}$$ It follows that if $\eta \geq \lambda_1^{-1}$, then $\kappa(\tilde{{{\bf A}}}) \leq \sqrt{2}$.
From the previous result, we can conclude that if we choose $\eta$ large enough, the condition number of $\tilde{{{\bf A}}}$ is bounded by $\sqrt{2}$. In [@OrAr2017 Section 4.2], it is discussed that the standard Golub-Kahan bidiagonalization process applied to $\tilde{{{\bf A}}} = {{\bf M}}^{-1/2}{{\bf A}}{{\bf N}}^{-1/2}$ is equivalent to the generalized Golub-Kahan bidiagonalization applied to ${{\bf A}}$. We can thus conclude from \[thm:eta\] that \[alg:GKB\] exhibits excellent convergence properties and that only a few iterations should be necessary to obtain sufficiently accurate results. As a second desirable property, we can expect the number of iterations to be independent of the mesh size for problems coming from constrained FEM discretizations, as long as we choose $\eta$ large enough.\
\
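The bound of \[thm:eta\] can also be illustrated numerically; the following sketch checks it on random data with $\eta = \lambda_1^{-1}$ (purely illustrative, and not part of the test problems below):

```python
# Numerical illustration of thm:eta: for eta >= 1/lambda_1 the
# elliptic condition number of A~ stays below sqrt(2).
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 15
A = rng.standard_normal((m, n))
G = rng.standard_normal((m, m))
W = G @ G.T + m * np.eye(m)                    # symmetric positive definite
lam = np.linalg.eigvalsh(A.T @ np.linalg.solve(W, A))
eta = 1.0 / lam[0]                             # eta = lambda_1^{-1}
M = W + eta * A @ A.T
L = np.linalg.cholesky(M)                      # M = L L^T
Atilde = np.sqrt(eta) * np.linalg.solve(L, A)  # same singular values as M^{-1/2} A N^{-1/2}
s = np.linalg.svd(Atilde, compute_uv=False)
assert s[0] / s[-1] <= np.sqrt(2) + 1e-8       # kappa(A~) <= sqrt(2)
```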
However, there is no such thing as a free lunch. In each iteration in \[alg:GKB\], we have to solve linear systems with the matrices ${{\bf M}}$ and ${{\bf N}}$. While ${{\bf N}}^{-1} = \eta {{\bf I}}$ is trivial, the condition number of ${{\bf M}}$ depends on $\eta$ and thus on the smallest eigenvalue of ${{\bf A}}^T {{\bf W}}^{-1} {{\bf A}}$. The condition number of the resulting matrix ${{\bf M}}= {{\bf W}}+ \eta {{\bf A}}{{\bf A}}^T$ could become very large for large $\eta$. The solution of the linear systems in \[alg:GKB\] may thus become difficult, and additional numerical errors may be introduced. The possibly high condition number of ${{\bf M}}$ is especially problematic for large scale problems, where an inner direct solver is no longer applicable and an iterative solver is applied. It is thus crucial to find an optimal balance of $\eta$ to enable an efficient inner solution step. The numerical experiments suggest that in practice reasonable values of $\eta$ proportional to $|| {{\bf W}}||_1$ reduce $\kappa(\tilde{{{\bf A}}})$ considerably, without dramatically increasing the ill-conditioning of ${{\bf M}}$.
Stopping criteria {#sec:stopcrit}
-----------------
In the following, we summarize possible stopping criteria for the GK bidiagonalization algorithm as suggested in [@Ar2013].
### A lower bound estimate
First, we look at a lower bound estimate of the error in the energy norm. The error ${{\bf e}}^{(k)} = {{\bf u}}- {{\bf u}}^{(k)}$ can be expressed using the ${{\bf M}}$-orthogonality property of ${{\bf V}}$ and \[eqn:chvar\] by $$\begin{aligned}
\| {{\bf e}}^{(k)} \|_{{{\bf M}}}^2 = \sum_{j=k+1}^n \zeta_j^2 = \Big|\Big| \hat{{{\bf z}}} - \left[ \begin{array}{c}
{{\bf z}}_k \\ 0
\end{array}\right] \Big|\Big|_2^2.\end{aligned}$$ To compute the error ${{\bf e}}^{(k)}$, we thus need $\zeta_{k+1}$ to $\zeta_{n}$, which are available only after the full $n$ iterations of the algorithm. Given a threshold $\tau < 1$ and an integer $d$, we can define a lower bound of $\| {{\bf e}}^{(k)} \|_{{{\bf M}}}^2$ by $$\begin{aligned}
\xi_{k,d}^2 = \sum_{j=k+1}^{k+d+1} \zeta_j^2 < \| {{\bf e}}^{(k)} \|_{{{\bf M}}}^2 .\end{aligned}$$ In practice, $\xi_{k,d}$ becomes available only at iteration $k+d+1$ and then bounds the error at the earlier step $k$; since the subsequent iterates ${{\bf u}}^{(k)}$ only decrease the error by \[eqn:minprop\], we can safely return the most recent one. Also, this lower bound estimate is very inexpensive to compute and it has additionally the advantage that it yields an upper bound for the residual in the dual norm defined by ${{\bf N}}^{-1}$ $$\begin{aligned}
\| {{\bf A}}^T {{\bf u}}^{(k)} - {{\bf b}}\|_{{{\bf N}}^{-1}} = | \beta_{k+1} \; \zeta_k | \le \sigma_1 | \zeta_k |
= \| \tilde{{{\bf A}}} \|_2 | \zeta_k |< \| \tilde{{{\bf A}}} \|_2\tau.\end{aligned}$$ With a carefully chosen $d$, procedure “[check$({{\bf z}}_k, \dots) $]{}” in \[alg:GKB\] can then be constructed as \[alg:lowbound\].
*Algorithm \[alg:lowbound\] (procedure check$({{\bf z}}_k, k, d, \tau)$): convergence = false; $\xi^2 =\sum_{j=k-d+1}^{k} \zeta_j^2$; if $\xi < \tau$ then convergence = true; return convergence.*
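Read as code, the check could look as follows (a sketch, assuming a list `zetas` that collects the $\zeta_j$ from \[alg:GKB\]; comparing $\xi$ directly against $\tau$ is our reading of the procedure):

```python
import math

def check_lower_bound(zetas, d, tau):
    # xi^2 = sum_{j=k-d+1}^{k} zeta_j^2, computed over the last d entries
    if len(zetas) <= d:
        return False
    xi = math.sqrt(sum(z * z for z in zetas[-d:]))
    return xi < tau
```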
### An upper bound estimate
To define a stopping criterion for the GKB method, it is useful to also have an upper bound error estimate. Obviously, this estimate is more reliable than the previous lower bound. The following approach has been presented in [@Ar2013]. It is inspired by the Gauss-Radau quadrature algorithm and similar to the one described in [@GoMeu2010]. Let therefore ${{\bf T}}= {{\bf B}}^T {{\bf B}}$. ${{\bf T}}$ is a symmetric tridiagonal positive definite matrix with entries
$$\begin{aligned}
\left\{
\begin{array}{ll}
{{\bf T}}_{1,1} = \alpha_1^2, & \\
{{\bf T}}_{i,i} = \alpha_i^2 + \beta_i^2, & i = 2,..,n, \\
{{\bf T}}_{i,i+1} = {{\bf T}}_{i+1,i} = \alpha_i\beta_{i+1}, & i = 1,..,n-1, \\
0 & \mbox{otherwise}.
\end{array}
\right.\end{aligned}$$
With straightforward calculations, we have $$\begin{aligned}
\| {{\bf e}}^{(k)} \|_{{{\bf M}}}^2 = \sum_{j=k+1}^n \zeta_j^2 = \|{{\bf b}}\|_{{{\bf N}}^{-1}}^2 \left[ \left( {{\bf T}}^{-1}\right) _{1,1} - \left( {{\bf T}}_k^{-1}\right) _{1,1} \right],\end{aligned}$$ where ${{\bf T}}_k$ is the $k \times k$ principal submatrix of ${{\bf T}}$ [@GoMeu2010]. Let $0 < a < \sigma_n $ be a lower bound for all the singular values of ${{\bf B}}$. We compute the matrix $\hat{{{\bf T}}}_{k+1}$ as $$\begin{aligned}
\hat{{{\bf T}}}_{k+1} = \left[
\begin{array}{cc}
{{\bf T}}_k & \alpha_k \beta_k {{\bf e}}_k\\
\alpha_k \beta_k {{\bf e}}_k^T& \omega_{k+1}
\end{array}
\right] ,\end{aligned}$$ where $\omega_{k+1} = a^2 + \delta_k(a^2)$ and $\delta_k(a^2)$ is the $k$th entry of the solution of $$\begin{aligned}
\left( {{\bf T}}_k - a^2 {{\bf I}}\right) \mathbf{\delta}(a^2) = \alpha_k^2 \beta_k^2 {{\bf e}}_k .\end{aligned}$$ We point out that the matrix $({{\bf T}}_k-a^2{{\bf I}})$ is positive definite and that $\hat{{{\bf T}}}_{k+1}$ has one eigenvalue equal to $a^2$. Analogously to what is done in [@GoMeu2010] for the conjugate gradient method, we can recursively compute $\delta(a^2)_k$ and $\omega_{k+1}$ by using the Cholesky decomposition. The pseudo-code for obtaining the upper bound estimate $\Xi$ is presented in \[alg:upbound\]. It is a practical realization of a Gauss-Radau quadrature that uses the matrices $\hat{{{\bf T}}}_k$. Therefore, from [@GoMeu2010 Theorem 6.4], we can derive that $\Xi$ is an upper bound for $\|{{\bf e}}^{(k)}\|_{{{\bf M}}}$. Although this upper bound estimate gives a reliable stopping criterion, its calculation is in practice difficult owing to the need for an accurate estimate of the smallest singular value. In the following numerical experiments, we will use exclusively the lower bound stopping criterion. For any further details on error estimates and global bounds, we refer to [@Ar2013].
*Algorithm \[alg:upbound\]: $\bar{d}_1 = \alpha_1^2 + \beta_1^2 - a^2$; for $k \ge 2$: $\bar{d}_k = \alpha_k^2 + \beta_k^2 - \varpi_{k-1}$, $\varpi_k = a^2 + \dfrac{\alpha_k^2 \beta_k^2}{\bar{d}_k}$, $\varphi_k = \dfrac{\beta_k^2 \zeta_k^2}{\sqrt{ \bar{d}_k +a^2 - \beta_k^2}}$; $\xi^2 =\sum_{j=k-d+1}^{k} \zeta_j^2$; $\Xi^2 = \xi^2 + \varphi_k$; if $\Xi < \tau$ then convergence = true; return convergence.*
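A direct transcription of \[alg:upbound\] into code could read as follows (a sketch with 0-based lists; `alphas`, `betas`, and `zetas` collect the coefficients of \[alg:GKB\], and `a` is the lower bound on the smallest singular value of ${{\bf B}}$):

```python
import math

def upper_bound_estimate(alphas, betas, zetas, a, d):
    # Gauss-Radau recursion for the modified pivots dbar_k of alg:upbound
    k = len(zetas)
    dbar = alphas[0] ** 2 + betas[0] ** 2 - a ** 2
    for i in range(1, k):
        varpi = a ** 2 + (alphas[i - 1] * betas[i - 1]) ** 2 / dbar
        dbar = alphas[i] ** 2 + betas[i] ** 2 - varpi
    phi = (betas[k - 1] * zetas[k - 1]) ** 2 \
        / math.sqrt(dbar + a ** 2 - betas[k - 1] ** 2)
    xi2 = sum(z * z for z in zetas[-d:])       # lower-bound part xi^2
    return math.sqrt(xi2 + phi)                # Xi^2 = xi^2 + phi_k
```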
Numerical Experiments {#sec:numexp}
=====================
In the following, we will apply the generalized GKB method to augmented matrix systems generated in the open source all-purpose finite element software code\_aster. In each test case, the models obey the laws of linear elasticity. We focus on the equilibrium of an elastic body under the small displacement hypothesis, for which the problem is to find the displacement field ${{\bf u}}$ with ${{\bf u}}:\bar{\Omega}\rightarrow\mathbb{R}^3$ such that $$\begin{aligned}
-{\mbox{div}}(\sigma({{\bf u}})) &= {{\bf f}}, &\mbox{ in } \Omega, \label{eqn:elas1}\\
\sigma({{\bf u}})n &= {{\bf h}}, &\mbox{ on } \Gamma_N,\\
{{\bf u}}&= {{\bf u}}_D, &\mbox{ on } \Gamma_D.\end{aligned}$$ Here ${{\bf h}}$ and ${{\bf u}}_D$ are the Neumann and the Dirichlet data and the stress and strain tensors are defined as $$\begin{aligned}
\sigma({{\bf u}})&= C\epsilon({{\bf u}}),\\
\epsilon({{\bf u}}) &= (\nabla {{\bf u}}+\nabla^T {{\bf u}})/2. \label{eqn:constlaw}\end{aligned}$$ In the elastic case, $C$ is the fourth order elastic coefficient (or Hooke’s law) tensor satisfying both symmetry and ellipticity conditions. Furthermore, the constitutive law (\[eqn:constlaw\]) linearly connects $\sigma$ to the strain tensor field $\epsilon$. Although we know the underlying physical model of the test cases, the following convergence analysis of the GKB algorithm is done purely at the matrix level. We thus refer the interested reader to [@Abbas2013] for any further details on the finite element discretization of \[eqn:elas1\] to \[eqn:constlaw\] used in code\_aster.\
\
The simulations in this section are done in Matlab. We use the Matlab backslash solver for the solution of the linear systems with ${{\bf M}}$ and ${{\bf N}}$ in \[alg:GKB\].
Example: Cylinder {#ex1:cylinder}
-----------------
As our first example, the domain $\Omega$ is chosen as a thick-walled cylinder as illustrated in \[fig:cyl\]. The model is a classical linear elasticity system, as described above, with $m$ degrees of freedom approximated by a linear finite element method. Dirichlet boundary conditions are imposed on the left end and are shown in green. Furthermore, MPCs are applied to obtain a rigid inner ring, which is illustrated in \[fig:cyl\] by the gray elements. For the derivation of the constraint equations, we refer to [@Pe2011]. These kinematic relationships ensure that the inner ring withstands any kind of external forces.\
![Cylinder with rigid ring and Dirichlet boundary conditions.[]{data-label="fig:cyl"}](Solution_Q1_3_resu_new2.jpg){width="9.0cm"}
### Matrix setup {#sec:extr_data}
A double Lagrange multiplier approach [@Pe2011_01] is used in code\_aster which leads to augmented systems with the structure
$$\begin{aligned}
{{\bf K}}= \left(
\begin{array}{ccc}
{{\bf W}}& \gamma {{\bf A}}& \gamma {{\bf A}}\\
\gamma {{\bf A}}^T & -\gamma I & \gamma I\\
\gamma {{\bf A}}^T & \gamma I & -\gamma I
\end{array}
\right).\label{eqn:doubleLg}\end{aligned}$$
Here, ${{\bf W}}$ is the positive definite elasticity stiffness matrix, ${{\bf A}}$ is the stiffness constraint matrix following the derivation in [@Pe2011], and $\gamma := \frac{1}{2}(\min{{\bf W}}_{ii}+\max{{\bf W}}_{ii})$ is a multiplicative factor that equilibrates the scaling of the blocks. After extraction of the matrices ${{\bf W}}$ and $\gamma {{\bf A}}$, we thus get
$$\begin{aligned}
\left(
\begin{array}{cc}
{{\bf W}}& \gamma {{\bf A}}\\
\gamma {{\bf A}}^T & 0
\end{array}
\right)
\left(
\begin{array}{c}
{{\bf u}}\\
\lambda
\end{array}
\right)
=
\left(
\begin{array}{c}
{{\bf g}}\\
0
\end{array}
\right). \label{eqn:extractedSystem}\end{aligned}$$
The structure of the augmented system is shown in \[fig:mataugG\]. Furthermore, we observe that the system \[eqn:extractedSystem\] can be simplified by scaling it by $\gamma$. To exploit the result of \[thm:eta\], we modify the $(1,1)$-block as described in \[eqn:regular\] and \[sec:genprob\] to $$\begin{aligned}
{{\bf M}}= \frac{1}{\gamma}{{\bf W}}+ \eta {{\bf A}}{{\bf A}}^T \label{eqn:trafomat}\end{aligned}$$ and transform \[eqn:extractedSystem\] following \[eqn:trafo\_semi\_def\] to obtain a system of type \[eqn:augsys\_GKB\]. The exact solutions are obtained by solving the original augmented system \[eqn:doubleLg\] for a given right-hand side received from code\_aster, using the Matlab backslash solver. The delay parameter of \[alg:GKB\] is chosen as $d=5$ and the tolerance as $\tau = 10^{-5}$.
![Augmented matrix system for cylinder, Problem 1[]{data-label="fig:mataugG"}](augsys_cylinder.pdf){width="9.0cm"}
### Results
We define four test problems with increasing resolution. In \[tab:probsize\], the numbers of degrees of freedom are listed, where $m$ corresponds to the physical degrees of freedom, $n$ corresponds to the number of constraints and $nnz$ stands for the non-zero entries of the respective sparse matrices. We choose $\eta = \frac{1}{\gamma}\| {{\bf W}}\|_1$. The transformation \[eqn:trafomat\] increases the number of nonzero entries, but the ratios still stay reasonably small. In \[tab:condnum1\], the condition numbers and norms of the occurring matrices are presented. The condition number of ${{\bf M}}$ does increase with $\eta$ (see \[sec:GGKB\]).\
\
name $m$ $n$ $nnz({{\bf M}})$ $nnz({{\bf A}})$ $nnz({{\bf W}})$
--------- ------- ------ ------------------ ------------------ ------------------
Prob. 1 648 210 30080 1259 28296
Prob. 2 2520 714 147800 4985 139636
Prob. 3 6384 1674 409246 10045 392816
Prob. 4 46620 8814 3367462 26436 3262086
: Test problem sizes
\[tab:probsize\]
name $\eta = \frac{1}{\gamma}||{{\bf W}}||_1$ $\kappa({{\bf M}})$ $\kappa({{\bf W}})$ $||{{\bf A}}||_1$
--------- ------------------------------------------ --------------------- --------------------- -------------------
Prob. 1 9.13 $ 8.3\cdot 10^5$ $5.8 \cdot 10^3$ 6.27
Prob. 2 8.95 $7.1 \cdot 10^6$ $1.9 \cdot 10^4$ 5.79
Prob. 3 8.86 $3.0 \cdot 10^{7}$ $3.5 \cdot 10^4$ 5.74
Prob. 4 8.96 $5.0 \cdot 10^{8}$ $1.2 \cdot 10^5$ 5.34
: Norms and condition numbers of matrices
\[tab:condnum1\]
The convergence plots with upper and lower bound estimates of the GKB method are presented in \[fig:errUL12,fig:errUL34\]. The error of the GKB solution reaches the required tolerance of $10^{-5}$ already after 6 iterations for the smallest problem and after 7, 8 and 9 for Problems 2-4 (see \[fig:errUL12,fig:errUL34\]), respectively. The lower bound for the error at iteration $k$ is however computed only when iteration $k+d$ has been reached. Consequently, the GKB stops only after 11 to 14 iterations. This also explains why the final errors are remarkably smaller than the sought precision. We observe that although the number of DOF increases from Problems 1 to 4, the number of iterations increases by only 1 for each finer mesh and the algorithm stops after 14 iterations at most. To obtain a complete independence of the mesh size as shown in \[thm:eta\], $\eta$ would need to be chosen larger. This will be discussed in the following section.
![Convergence of generalized GKB method for Problems 1 and 2.[]{data-label="fig:errUL12"}](error_cylinder_1.pdf "fig:"){width="6.4cm"} ![Convergence of generalized GKB method for Problems 1 and 2.[]{data-label="fig:errUL12"}](error_cylinder_2.pdf "fig:"){width="6.4cm"}
![Convergence of generalized GKB method for Problems 3 and 4.[]{data-label="fig:errUL34"}](error_cylinder_3.pdf "fig:"){width="6.4cm"} ![Convergence of generalized GKB method for Problems 3 and 4.[]{data-label="fig:errUL34"}](error_cylinder_4.pdf "fig:"){width="6.4cm"}
### Choice of ${{\bf N}}$
In the previous numerical examples, we chose the parameter $\eta = \frac{1}{\gamma}\|{{\bf W}}\|_1$ to better represent the energy subject to the MPC constraints, as described in the augmented system. The recommendation of Golub and Greif in [@GoGr2003], who found numerically that $\eta = \gamma \frac{\|{{\bf W}}\|}{\|{{\bf A}}\|^2}$ could be a good value, leads to too small an $\eta$ for our practical examples. With this choice, we found that the number of iterations increases noticeably. In \[thm:eta\], we proved that for $\eta \geq \lambda_1^{-1}$, the condition number of $\tilde{{{\bf A}}}$ is bounded above and the number of iterations in \[alg:GKB\] is independent of the mesh size of the finite element discretization. In general, we are not able to compute $\lambda_1$ of ${{\bf A}}^T {{\bf W}}^{-1} {{\bf A}}$ and thus to obtain a more precise estimate of $\eta$. For the smallest three test problems above, we are however able to determine $\lambda_1$ using Matlab and we can compare the previous choice to the optimal value. From \[tab:condnum1,tab:condnum\], it seems that the choice of $\eta = \frac{1}{\gamma}\|{{\bf W}}\|_1$ leads to smaller values than needed for \[thm:eta\]. Using $\eta$ as given in \[tab:condnum\], the number of iterations stays at 8 for Problems 2 and 3. A short study on the possible choice of $\eta$ in \[tab:prob4eta\] suggests similar behavior for Problem 4. Furthermore, \[tab:prob4eta\] shows that the number of iterations decreases with increasing $\eta$ and that the modification of the (1,1)-block is a major factor determining the speed of convergence of the GKB method. Note that this behavior agrees with \[thm:eta\].\
\
name $\lambda_1 $ $\eta$ $\kappa(\tilde{{{\bf A}}})^2$ iter $\kappa({{\bf M}})$
--------- -------------- -------- ------------------------------- ------ ---------------------
Prob. 1 0.06 17 1.978 10 $1.5 \cdot 10^6$
Prob. 2 7.5e-3 133 1.995 8 $1.0 \cdot 10^8$
Prob. 3 2.8e-3 357 1.995 8 $1.2 \cdot 10^9$
: Parameter $\eta$ and condition numbers of matrices
\[tab:condnum\]
$\eta$ \#iter $\frac{\|{{\bf u}}- {{\bf u}}_{dir}\|_{{{\bf M}}}}{\|{{\bf u}}_{dir}\|_{{{\bf M}}}}$ $\frac{\|{{\bf p}}- {{\bf p}}_{dir}\|_{2}}{\|{{\bf p}}_{dir}\|_{2}}$
--------------------------------- -------- -------------------------------------------------------------------------------------- ----------------------------------------------------------------------
0 (${{\bf M}}={{\bf W}}$)         327      $8.83 \cdot 10^{-6}$                                                                   $7.09 \cdot 10^{-6}$
1 29 $1.02 \cdot 10^{-7}$ $8.98 \cdot 10^{-8}$
17 13 $7.59 \cdot 10^{-11}$ $1.35 \cdot 10^{-10}$
133 9 $3.41 \cdot 10^{-10}$ $2.53 \cdot 10^{-10}$
357 8 $4.57 \cdot 10^{-10}$ $7.88 \cdot 10^{-10}$
: Different choices for $\eta$ for problem 4, $\epsilon_{GKB}$=1e-5 and $d=5$
\[tab:prob4eta\]
Example: Prestressed concrete {#sec:prest_concrete}
-----------------------------
As our second set of examples, we consider a simple model of a concrete block with embedded pretension cables. The block is clamped on its lateral faces and subjected to a constant pressure on its top face. All materials are elastic. \[fig:PSB:simpleModel\] presents a projected view onto the 2D surface: the orange points are the concrete nodes and the gray points are the cable nodes. The cable nodes are only constrained by linear relationships with the concrete nodes, so that the displacement of the cables included in a given concrete element is a linear combination of the displacements of the concrete nodes, $ {{\bf u}}_{\mbox{cables}}=\sum_{i=0}^{4} a_i {{\bf u}}^x_{\mbox{concrete}} + b_i {{\bf u}}_{\mbox{concrete}}^y$. The vectors $a$ and $b$ are the barycentric coordinates of the cable node with respect to the concrete element [@Pe2011].
![Simple model of prestressed concrete[]{data-label="fig:PSB:simpleModel"}](Prestressed-Concrete.JPG)
We first extract the ${{\bf W}}$ and ${{\bf A}}$ submatrices as already described in \[ex1:cylinder\]. For the purpose of illustrating the particular matrix structure, we apply a permutation to sort the matrix entries in the (1,1)-block with respect to the size of the diagonal elements of ${{\bf W}}$, starting from the smallest to the largest. Second, we apply a column permutation to the constraint block ${{\bf A}}$ (and the respective row permutation for ${{\bf A}}^T$) to obtain the diagonal part in the upper $n\times n$ block, as shown in \[fig:PSB:matrix\]. The augmented matrix exhibits particular features. The (1,1)-block contains rows and columns with only zero entries. However, the non-singularity of the full system \[eqn:augsys\] is ensured, since \[eqn:WKerAt\] is satisfied.
### Numerical experiments {#numerical-experiments}
Owing to the singular (1,1)-block, the GKB algorithm as introduced in \[sec:GGKB\] cannot be directly applied to this problem class. We thus rely on the augmented Lagrangian approach and choose $\eta = \|{{\bf W}}\|_{1}$. \[eqn:WKerAt\] now ensures that the (1,1)-block of the augmented system is non-singular. However, even for these shifted matrices, we do not obtain satisfactory results with the GKB algorithm, because of the unfavorable scaling of the matrices as generated by code\_aster. The algorithm converges; the solution, however, exhibits oscillations. As described in \[sec:extr\_data\], the constraint matrices are multiplied by the factor $\gamma = \frac{1}{2}(\min{{\bf W}}_{ii}+\max{{\bf W}}_{ii})$ to obtain a good equilibrium of the augmented system. We undo this multiplication in our numerical experiments and divide the augmented system \[eqn:extractedSystem\] by $\gamma$. The right-hand sides are provided by code\_aster and the exact solutions are obtained for comparison by solving \[eqn:doubleLg\] with a direct solver.\
\
Numerical results are presented in \[tab:precst\_concrete\]. We use the lower bound estimate as stopping criterion and choose the tolerance as $\tau = 10^{-5}$ and $d = 5$. The algorithm shows excellent convergence properties. Although the result of \[thm:eta\] is not applicable to this case, the number of iterations until convergence stays constant and bounded with increasing problem size (see \[tab:precst\_concrete\]). Indeed, the energy error is already smaller than the tolerance after only 3 iterations, but we recall that the lower bound estimate for the iterate ${{\bf u}}^3$ is only computed at iteration $3+d$. The bound for the smallest singular value of ${{\bf B}}$, necessary for the upper bound estimate, has been obtained experimentally as $a = 0.2$. The convergence of the energy error and the lower and upper bound estimates are presented in \[fig:PSB:errUL12,fig:PSB:errUL3\].
name m n \#Iter $\frac{\| {{\bf u}}- {{{\bf u}}}^{(k)} \|_{{{\bf M}}}}{\| {{\bf u}}\|_{{{\bf M}}}} $ $\frac{\| {{\bf u}}- {{{\bf u}}}^{(k)} \|_{2}}{\|{{\bf u}}\|_2}$ $\frac{\| {{\bf p}}-{{{\bf p}}}^{(k)} \|_{2}}{\|{{\bf p}}\|_2}$
------- ------- ------- -------- -------------------------------------------------------------------------------------- ------------------------------------------------------------------ -----------------------------------------------------------------
Prob1 498 258 9 9.6e-13 9.5e-13 2.0e-12
Prob2 3207 1590 9 3.2e-12 3.1e-12 9.2e-12
Prob3 23043 11382 9 5.0e-11 5.0e-11 4.9e-11
: Example prestressed concrete: Golub-Kahan convergence for $\epsilon_{GKB}$=1e-5 and $d=5$
\[tab:precst\_concrete\]
![Augmented system for prestressed block example.[]{data-label="fig:PSB:matrix"}](prestressed-concrete.pdf){width="9.0cm"}
![GKB convergence for Problem 1 and 2.[]{data-label="fig:PSB:errUL12"}](conv_up_low_prob1.pdf "fig:"){width="6.4cm"} ![GKB convergence for Problem 1 and 2.[]{data-label="fig:PSB:errUL12"}](conv_up_low_prob2.pdf "fig:"){width="6.4cm"}
![Convergence of generalized GKB method for Problem 3.[]{data-label="fig:PSB:errUL3"}](conv_up_low_prob3.pdf){width="6.4cm"}
Large scale example and parallel implementation {#sec:containment}
===============================================
In this example, we study a critical industrial application, the structural analysis of the reactor containment building of a nuclear power plant. The structure is put under compression during the construction phase so that it better resists external loads. The containment building additionally comprises an outer shell layer. The model thus requires the coupling of three-dimensional elements (the concrete), two-dimensional elements (the outer shell) and one-dimensional elements representing the metallic prestressing cables (\[fig:containment\_building\]). The underlying equations for each material are those of linear elasticity.
![Modeling of a containment building.[]{data-label="fig:containment_building"}](containment_big.png){width="13.0cm"}
Numerical Experiments {#numerical-experiments-1}
---------------------
The matrix is generated by code\_aster. The discretization is illustrated in \[fig:containment\_building\] and the blocks are of size $m = 283797$ and $n = 158928$. The number of constraints is thus more than 50% of the number of physical degrees of freedom. We apply the permutations as explained for the example in \[sec:prest\_concrete\] and obtain the matrix presented in \[fig:containment\_matrix\]. The augmented system contains rows and columns with only zero entries in the (1,1)-block, but again \[eqn:WKerAt\] holds and the nonsingularity of \[eqn:augsys\] is ensured by the constraint matrix.\
\
We implement the Golub-Kahan bidiagonalization method in Julia [@Bezanson2014] and we use the interface to the parallel direct solver MUMPS [@MUMPS:1] from the JuliaSmoothOptimizers package [^5] to solve the inner linear system. The factorization of the system matrix is done once. The right-hand side is provided by code\_aster and the exact solution is obtained for comparison by solving \[eqn:doubleLg\] with MUMPS. As in the previous example, we scale the augmented system \[eqn:extractedSystem\] with the factor $\gamma = \frac{1}{2}(\min{{\bf W}}_{ii} + \max{{\bf W}}_{ii})$. The GKB method is not directly applicable to the augmented system \[eqn:augsys\] with a singular (1,1)-block. For this reason, but also to obtain an improved convergence for the GKB method, we apply the augmented Lagrangian approach with $\eta = \| {{\bf W}}\|_1$. Again we use the tolerance $\tau=10^{-5}$ for the lower bound stopping criterion of the GKB method and $d=5$. We apply the algorithm to the unpermuted system as it is obtained from code\_aster.\
![Augmented system after permutation and scaling. []{data-label="fig:containment_matrix"}](augsys_containment.pdf){width="9.0cm"}
![GKB convergence.[]{data-label="fig:containment"}](conv_containment.pdf){width="9.0cm"}
The algorithm stops after 9 iterations and the relative errors of ${{\bf u}}$ and ${{\bf p}}$ are summarized in \[tab:containment\]. The upper and lower bound estimates are presented in \[fig:containment\]. Here, the lower bound for the smallest singular values as needed for the upper bound has been estimated numerically as $a = 0.2$.\
\
Also for this realistic industrial test case, the GKB iterative method converges after only 9 $(4+d)$ iterations. The final errors obtained in the energy and 2-norm for the solution ${{\bf u}}$ and also for the Lagrange multipliers ${{\bf p}}$ are remarkably small. In fact, they are several orders of magnitude smaller than the required stopping tolerance. Furthermore, we reduce the problem of solving a system of size $m + 2n$ (as currently implemented in code\_aster) to solving linear systems of size $m$. We compared the efficiency of the proposed GKB iterative method to solving \[eqn:doubleLg\] directly with MUMPS under the same conditions. A complete performance analysis of the algorithm is outside the scope of this paper, but a preliminary study shows that in sequential simulations speedups of a factor between 2 and 3 can be observed.
m n \#Iter $\frac{\| {{\bf u}}- {{{\bf u}}}^{(k)} \|_{{{\bf M}}}}{\| {{\bf u}}\|_{{{\bf M}}}} $ $\frac{\| {{\bf u}}- {{{\bf u}}}^{(k)} \|_{2}}{\|{{\bf u}}\|_2}$ $\frac{\| {{\bf p}}-{{{\bf p}}}^{(k)} \|_{2}}{\|{{\bf p}}\|_2}$
-------- -------- -------- -------------------------------------------------------------------------------------- ------------------------------------------------------------------ -----------------------------------------------------------------
283797 158928 9 3.39e-11 3.8e-11 1.4e-9
: Golub-Kahan convergence for the containment building example and $\epsilon_{GKB}$=1e-5 and $d=5$
\[tab:containment\]
Conclusions
===========
In this work, we presented an algorithm based on the Golub-Kahan bidiagonalization method and applied it to problems in structural mechanics. These problems exhibit the difficulty of multi-point constraints imposed on the discretized finite element formulation. We showed that the GKB algorithm converges in only a few iterations for each of the three classes of test problems. In particular, we confirmed our main result of \[thm:eta\]: The number of GKB iterations is independent of the discretization size for a given problem, whenever we choose the stabilization parameter $\eta$ appropriately. This was also the case for the example of a block of prestressed concrete, although the leading block is singular and does not satisfy the requirements of \[thm:eta\]. The errors obtained for the solutions ${{\bf u}}$ and ${{\bf p}}$ are remarkably small and, since the lower bound of the error at iteration $k$ can only be computed at $k+d$, they undershoot the required tolerance by several orders of magnitude. Summarizing, the proposed algorithm presents a new alternative to the more commonly used standard iterative solvers and, in particular, to the ones currently provided in code\_aster.\
\
The final example of the reactor containment building is a realistic application. However, the dimensions of the matrices are still relatively small. For other applications, the number of degrees of freedom may be on the order of millions, in which case even the inner direct solver MUMPS will no longer be satisfactory. It is thus indispensable to solve the inner linear system defined by ${{\bf M}}$ with an iterative scheme, which results in an inner-outer iterative method. The study of such algorithms will be the subject of future work.
[^1]: Libera Università Mediterranea, Casamassima, Bari, Italy ().
[^2]: Cerfacs, 29 Avenue Gaspard Coriolis, 31100 Toulouse, France (, ).
[^3]: Friedrich-Alexander-Universität Erlangen-Nuremberg, Cauerstr. 6, 91058 Erlangen, Germany ().
[^4]: EDF R&D, 7 Boulevard Gaspard Monge, 91120 Palaiseau, France ().
[^5]: <https://github.com/JuliaSmoothOptimizers>
|
Background {#Sec1}
==========
Thanks to the ever-increasing variety of fluorescent probes, proteins, and dyes, many in vivo biomedical studies at the cellular and subcellular levels, such as those of protein--protein interactions, protein function, and gene expression, can be carried out noninvasively \[[@CR1]--[@CR3]\]. With the advancement of fluorescent markers, a number of fluorescent imaging techniques are now available to visualize them at either the microscopic \[[@CR4]--[@CR7]\] or the macroscopic scale \[[@CR8]--[@CR10]\]. Fluorescence molecular tomography (FMT) \[[@CR10]--[@CR12]\] is a typical macroscopic fluorescent imaging technique that noninvasively reveals the distributions of fluorescent markers inside the bodies of small animals from fluorescent measurements at the body surface; it has been applied in drug discovery \[[@CR13], [@CR14]\] and oncology \[[@CR15], [@CR16]\].
The concept of FMT can be summarized as follows: the propagation of the excitation and fluorescent light in tissues is first described with a mathematical model, and a reconstruction scheme is then conceived based on the minimization of the differences between the fluorescent measurements and the corresponding values predicted by the model. These two processes are known as the forward and inverse problems \[[@CR17]\]. Commonly, the diffusion equation, an approximation of the radiative transfer equation, is used to model the light propagation in tissues in the forward problem \[[@CR18]\]. As a partial differential equation, the diffusion equation is usually solved with numerical methods such as the finite element method (FEM) \[[@CR19]\]. On the other hand, because the fluorescent measurements used in reconstruction are only obtained at the surface, the inverse problem is ill-posed, which makes the reconstruction results sensitive to measurement noise and numerical errors. To overcome the ill-posedness, the inverse problem is treated as an optimization problem with regularization, and a variety of numerical methods can be applied to solve it, such as the Newton method \[[@CR17]\] and the conjugate gradient method \[[@CR18]\]. In reconstruction, the value of the fluorescent yield at each node, pixel, or voxel is recovered from a set of fluorescent measurements obtained from different projection angles. However, the high ill-posedness of the inverse problem and the utilization of regularization result in a poor spatial resolution, such that the boundaries of the reconstructed objects are blurred \[[@CR20]\].
The blurry images inhibit the application of FMT in situations that require explicit boundaries \[[@CR20]\]. To deal with these cases, shape-based reconstruction methods \[[@CR20]--[@CR28]\] have been developed, which parameterize the shapes of the reconstructed objects and recover these shape parameters instead of the values of the fluorescent yield at each node, pixel, or voxel as in classical image-based reconstruction schemes. In general, shape-based reconstruction methods can be classified into two types: implicit \[[@CR20]--[@CR24]\] and explicit shape methods \[[@CR25]--[@CR28]\]. The explicit shape method describes the boundaries of the reconstructed objects with a spherical harmonics expansion, and the expansion coefficients are reconstructed to form the images. The implicit shape method defines the shapes of the reconstructed objects with a level set function, which is updated during the reconstruction iterations to recover the boundaries. Theoretically, both types are capable of recovering arbitrary shapes, but the complexity of the shapes defined by a spherical harmonics expansion is restricted by the maximum degree of the spherical harmonics, which is limited and determined manually in practical applications \[[@CR25]\].
Shape-based reconstruction methods are capable of achieving higher image clarity than image-based reconstruction methods. However, *a priori* information about the number of reconstructed objects is essential for both types of shape-based reconstruction methods. For the spherical harmonics expansion, one set of expansion coefficients can only describe a single object. Thus, multi-object reconstruction needs more than one set of expansion coefficients, and each one needs to be initialized at the beginning of the reconstruction \[[@CR25]\]. Similarly, the definition of multiple objects needs multiple levels of a single level set function or more than one level set function, and the initialization of both requires knowledge of the number of objects \[[@CR29], [@CR30]\]. Moreover, in the implicit shape method, reconstruction is commonly accomplished through an artificial time evolution approach which utilizes gradient-based optimization methods, such as the gradient descent method, to update the level set function \[[@CR20]--[@CR24]\]. Gradient-based optimization methods are first order methods, which suffer from a low convergence speed and sometimes converge to a local minimum \[[@CR31]\]. In addition, the choice of the step length, which controls the convergence speed and the calculation accuracy, is also an intractable problem.
Second order methods, e.g., Newton-type methods, converge quadratically, which results from the utilization of the second derivative of the objective function. Compared to first order methods, second order methods converge more quickly and perform more stably \[[@CR31]\]. However, the implicit shape method fails to take advantage of second order methods due to the non-differentiability of the derivative of the Heaviside function. In this paper, a shape-based reconstruction scheme of FMT with a cosinoidal level set method is conceived to make use of a Newton-type method. This reconstruction method replaces the Heaviside function of the classical implicit shape method with a cosine function so as to obtain the second derivative of the objective function. Simulation and phantom studies are carried out to validate the performance of the proposed method.
Methods {#Sec2}
=======
Light with wavelengths between 700 and 900 nm is highly scattered and only weakly absorbed in tissues; such light is commonly called diffuse light. Diffuse light is appropriate for macroscopic imaging because of its high tissue penetration. Due to the high scattering, the diffusion equation is usually used to describe the propagation of diffuse light \[[@CR18]\]. Because the generation of fluorescence consists of two processes (excitation and emission), a coupled pair of diffusion equations is commonly used to describe the propagation of the excitation light and the fluorescence, which are converted into linear equations through FEM as follows \[[@CR32], [@CR33]\]:

$$\left\{ \begin{array}{l} KU_{x} = Q \\ KU_{m} = FX \end{array} \right.$$

where *U* is the photon density and *Q* is the source term. The subscripts *x* and *m* are used to discriminate between the excitation and the emission. *U* and *Q* are column vectors with *N*~*n*~ elements, where *N*~*n*~ denotes the number of nodes used in FEM. *X* is the vector of fluorescent yield, with the same length as *U* and *Q*, which is the unknown vector to be reconstructed. *K* is the stiffness matrix with *N*~*n*~ × *N*~*n*~ elements and *F* is a matrix of the same size as *K*. The elements of *K*, *Q*, and *F* are given by:

$$K_{ij} = \int\limits_{\varOmega } {(D\nabla \upsilon_{i} \cdot \nabla \upsilon_{j} + \mu_{a} \upsilon_{i} \upsilon_{j} )dr^{n} } + \frac{1}{2q}\int\limits_{\partial \varOmega } {\upsilon_{i} \upsilon_{j} dr^{n - 1} }$$

$$Q_{i} = \int\limits_{\varOmega } {Q(r)\upsilon_{i} dr^{n} }$$

$$F_{ij} = \int\limits_{\varOmega } {U_{x} (r)\upsilon_{i} \upsilon_{j} dr^{n} }$$

where *μ*~*a*~ is the absorption coefficient, *D* is the diffusion coefficient, *q* is a term related to the optical refractive index mismatch at the boundary, *r* is the position, *υ* represents the shape function, and the subscripts *i* and *j* denote the indices of row and column, respectively.
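For illustration, the following is a minimal sketch of the forward problem defined by Eq. ([1](#Equ1){ref-type=""}), assuming the FEM matrices and the source vector have already been assembled; sparse SciPy matrices are assumed and all names are illustrative rather than taken from any particular FMT package:

```python
# A sketch of the FMT forward model: solve the coupled diffusion
# systems K U_x = Q (excitation) and K U_m = F X (emission) for a
# given fluorescent yield vector X. Illustrative only; in practice
# F is assembled from U_x (Eq. (4)) before the emission solve.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def forward_model(K, F, Q, X):
    lu = spla.splu(sp.csc_matrix(K))  # factorize the stiffness matrix once
    U_x = lu.solve(Q)                 # excitation photon density
    U_m = lu.solve(F @ X)             # emission photon density
    return U_x, U_m
```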
In the implicit shape method, the level set function is introduced to express the distribution of the unknown parameter as follows \[[@CR21]\]:

$$x(r) = \left\{ \begin{array}{ll} x_{f} & \psi (r) \le 0 \\ x_{b} & \psi (r) > 0 \end{array} \right.$$

where *x* denotes the fluorescent yield and *ψ* represents the level set function. The subscripts *f* and *b* denote the regions of the fluorescent targets and the background, respectively. To obtain the gradient used in reconstruction, the Heaviside function *H*(*ψ*) is used to express *x*(*r*) in the classical implicit shape method \[[@CR21]\]:

$$x(\psi ) = x_{b} H(\psi ) + x_{f} [1 - H(\psi )]$$
Equation ([6](#Equ6){ref-type=""}) is capable of describing the distribution of the unknown parameter with the level set function. However, this equation is only appropriate for cases with a single object or with multiple objects of the same fluorescent yield. For cases with multiple objects of different fluorescent yields, a level set function with multiple levels or more than one level set function should be used to express the distribution of the fluorescent yield, and the number of levels or level set functions is determined by the number of objects \[[@CR29], [@CR30]\]. Consequently, for the classical implicit shape method, *a priori* information about the number of objects is required to initialize the configuration of the level set function and the fluorescent yield. Moreover, the derivative of the Heaviside function is the Dirac function, which cannot be differentiated further. This precludes the application of second order methods in reconstruction. To solve these problems, the following equation is used to describe the distribution of the fluorescent yield:

$$x(\psi ) = \frac{1}{2}[1 + \cos (\pi \psi )]x_{b} + \frac{1}{2}[1 - \cos (\pi \psi )]x_{f}$$
Equation ([7](#Equ7){ref-type=""}) replaces the pair of Heaviside functions with a pair of cosine functions. When the level set function *ψ* varies within \[0,1\], the value of the fluorescent yield *x*(*ψ*) varies between *x*~*b*~ and *x*~*f*~. Compared to Eq. ([6](#Equ6){ref-type=""}), the advantage of Eq. ([7](#Equ7){ref-type=""}) is that the value of the fluorescent yield is not restricted to only two values but varies continuously between them, i.e., Eq. ([7](#Equ7){ref-type=""}) is capable of representing multiple objects with different fluorescent yields. As a consequence, knowledge of the number of objects is no longer required. In addition, the derivative of the cosine function is the sine function, which can be differentiated further. Therefore, second order methods can be applied.
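For concreteness, the following is a minimal sketch of the cosinoidal parameterization of Eq. ([7](#Equ7){ref-type=""}) and of its derivative, assuming NumPy arrays; the function names are ours and purely illustrative:

```python
import numpy as np

def yield_from_levelset(psi, x_b, x_f):
    """Cosinoidal parameterization of the fluorescent yield, Eq. (7)."""
    return 0.5 * (1 + np.cos(np.pi * psi)) * x_b \
         + 0.5 * (1 - np.cos(np.pi * psi)) * x_f

def dyield_dpsi(psi, x_b, x_f):
    """Derivative of Eq. (7) with respect to psi: (pi/2)(x_f - x_b) sin(pi psi)."""
    return 0.5 * np.pi * (x_f - x_b) * np.sin(np.pi * psi)
```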
From the second equation of Eq. ([1](#Equ1){ref-type=""}), the relation *U* ~*m*~ = *K* ^−1^ *FX* = *AX* can be obtained. In reconstruction, measurements acquired from different projection angles correspond to different fluorescent photon densities *U* ~*m*~ and matrices *A*. Extracting all the elements of *U* ~*m*~ and the rows of *A* according to the nodes on the surface used for measurement, and assembling them, yields the vector of measurements *Y* and the Jacobian matrix *J* with respect to the fluorescent yield *x*. Then the relationship between the measurements *Y* and the fluorescent yield *X* can be expressed as:

$$Y = JX$$
To reconstruct the fluorescent yield *X* from the measurements *Y*, an object function is defined as follows:

$$\varGamma = \frac{1}{2}\left\| JX - Y \right\|_{2}^{2} = \frac{1}{2}\sum_{i=1}^{N_{m}}\left(\sum_{j=1}^{N_{n}} J_{ij}X_{j} - Y_{i}\right)^{2}$$

where *J* ~*ij*~ denotes the element of the matrix *J* at the *i*th row and *j*th column, and *N* ~*m*~ is the number of measurements.
The level set function *ψ* is discretized into a vector *Ψ* with the basis expansion of shape functions as follows:

$$\psi(r) = \sum_{i=1}^{N_{n}} \varPsi_{i}\upsilon_{i}$$

Then differentiating the object function *Γ* shown in Eq. ([9](#Equ9){ref-type=""}) with respect to the level set function of a certain node *Ψ* ~*k*~ yields:

$$\frac{\partial\varGamma}{\partial\varPsi_{k}} = \sum_{i=1}^{N_{m}}\left(\sum_{j=1}^{N_{n}} J_{ij}X_{j} - Y_{i}\right) J_{ik}\frac{\partial X_{k}}{\partial\varPsi_{k}}$$
From Eq. ([7](#Equ7){ref-type=""}) the derivative of *X* ~*k*~ with respect to *Ψ* ~*k*~ can be obtained:

$$\frac{\partial X_{k}}{\partial\varPsi_{k}} = \frac{\pi}{2}(x_{f} - x_{b})\sin(\pi\varPsi_{k})$$
Assembling the derivatives of the object function with respect to the level set function for all the nodes yields:

$$\varGamma' = \left[\begin{array}{ccc} \frac{\partial\varGamma}{\partial\varPsi_{1}} & \cdots & \frac{\partial\varGamma}{\partial\varPsi_{N_{n}}} \end{array}\right]^{T} = J_{\psi}^{T}(JX - Y)$$

$$J_{\psi ij} = \frac{\pi}{2}(x_{f} - x_{b})J_{ij}\sin(\pi\varPsi_{j})$$

where *J* ~*ψ*~ is the Jacobian matrix with respect to the level set function *ψ*.
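A sketch of assembling *J* ~*ψ*~ from *J* and the nodal level set vector (NumPy; array names are assumptions):

```python
import numpy as np

def levelset_jacobian(J, Psi, x_b, x_f):
    """Eq. (14): scale column j of J by (pi/2) (x_f - x_b) sin(pi * Psi_j).

    J   : (N_m, N_n) Jacobian with respect to the fluorescent yield
    Psi : (N_n,) nodal level set values in [0, 1]
    """
    return J * (0.5 * np.pi * (x_f - x_b) * np.sin(np.pi * Psi))[np.newaxis, :]
```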
Further differentiating Eq. ([13](#Equ13){ref-type=""}) with respect to the level set function yields the second derivative of the object function as follows:

$$\frac{\partial(\partial\varGamma/\partial\varPsi_{k})}{\partial\varPsi_{l}} = \sum_{i=1}^{N_{m}} J_{\psi ik}J_{\psi il} + \sum_{i=1}^{N_{m}} \frac{\partial J_{\psi ik}}{\partial\varPsi_{l}}\left(\sum_{j=1}^{N_{n}} J_{ij}X_{j} - Y_{i}\right)$$

$$\frac{\partial J_{\psi ik}}{\partial\varPsi_{l}} = \begin{cases} 0, & l \ne k \\ \dfrac{\pi^{2}}{2}(x_{f} - x_{b})J_{ik}\cos(\pi\varPsi_{k}), & l = k \end{cases}$$
Assembling the second derivatives of the object function with respect to the level set function for all the nodes yields:

$$\varGamma'' = \left[\begin{array}{ccc} \frac{\partial(\partial\varGamma/\partial\varPsi_{1})}{\partial\varPsi_{1}} & \cdots & \frac{\partial(\partial\varGamma/\partial\varPsi_{N_{n}})}{\partial\varPsi_{1}} \\ \vdots & \ddots & \vdots \\ \frac{\partial(\partial\varGamma/\partial\varPsi_{1})}{\partial\varPsi_{N_{n}}} & \cdots & \frac{\partial(\partial\varGamma/\partial\varPsi_{N_{n}})}{\partial\varPsi_{N_{n}}} \end{array}\right] = J_{\psi}^{T}J_{\psi} + H_{\psi}\left[\begin{array}{ccc} JX - Y & & \\ & \ddots & \\ & & JX - Y \end{array}\right]$$

where *H* ~*ψ*~ is the Hessian matrix with respect to the level set function and can be expressed as:

$$H_{\psi} = \left[\begin{array}{cccccccc} \frac{\partial J_{\psi 11}}{\partial\varPsi_{1}} & \cdots & \frac{\partial J_{\psi N_{m}1}}{\partial\varPsi_{1}} & \frac{\partial J_{\psi 12}}{\partial\varPsi_{1}} & \cdots & \frac{\partial J_{\psi N_{m}2}}{\partial\varPsi_{1}} & \cdots & \frac{\partial J_{\psi N_{m}N_{n}}}{\partial\varPsi_{1}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ \frac{\partial J_{\psi 11}}{\partial\varPsi_{N_{n}}} & \cdots & \frac{\partial J_{\psi N_{m}1}}{\partial\varPsi_{N_{n}}} & \frac{\partial J_{\psi 12}}{\partial\varPsi_{N_{n}}} & \cdots & \frac{\partial J_{\psi N_{m}2}}{\partial\varPsi_{N_{n}}} & \cdots & \frac{\partial J_{\psi N_{m}N_{n}}}{\partial\varPsi_{N_{n}}} \end{array}\right]$$
Then Eqs. ([13](#Equ13){ref-type=""}) and ([17](#Equ17){ref-type=""}) are substituted into the Newton method ($x^{(n+1)} = x^{(n)} - (\varGamma'')^{-1}\varGamma'$, where *x* denotes the unknown parameters and the superscript *n* denotes the iteration index) \[[@CR18]\] to obtain the iteration equation:

$$\varPsi^{(n+1)} = \varPsi^{(n)} - (J_{\psi}^{T}J_{\psi} + Hb)^{-1}J_{\psi}^{T}(JX - Y)$$

where *b* represents the matrix consisting of the residual vectors on the right side of Eq. ([17](#Equ17){ref-type=""}). To simplify the calculations and take advantage of regularization, the Levenberg--Marquardt (LM) method \[[@CR18]\] is used to reconstruct the level set function instead of Eq. ([19](#Equ19){ref-type=""}):

$$\varPsi^{(n+1)} = \varPsi^{(n)} - (J_{\psi}^{T}J_{\psi} + \lambda I)^{-1}J_{\psi}^{T}(JX - Y)$$

where *I* denotes the identity matrix and *λ* is a regularization parameter. The LM method is a variation of the Newton method that is more useful in practical applications: it ignores the Hessian matrix to reduce the computational requirements and introduces a regularization term to suppress the influence of noise. Compared with the original Newton method, the LM method provides a similar convergence speed while consuming less computational time and storage space.
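A minimal sketch of one LM step of Eq. (20), reusing the `levelset_jacobian` helper sketched above (assumed names, not the authors' implementation):

```python
import numpy as np

def lm_update_levelset(Psi, J, X, Y, x_b, x_f, lam=1e-3):
    """One Levenberg-Marquardt step for the level set vector (Eq. (20))."""
    J_psi = levelset_jacobian(J, Psi, x_b, x_f)        # Eq. (14)
    A = J_psi.T @ J_psi + lam * np.eye(Psi.size)       # regularized normal matrix
    return Psi - np.linalg.solve(A, J_psi.T @ (J @ X - Y))
```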
In parallel, the iteration equation for the fluorescent yields *x* ~*b*~ and *x* ~*f*~ can be acquired through the derivative of the object function with respect to *x* ~*b*~ and *x* ~*f*~:

$$\left[\begin{array}{c} x_{b} \\ x_{f} \end{array}\right]^{(n+1)} = \left[\begin{array}{c} x_{b} \\ x_{f} \end{array}\right]^{(n)} - (J_{x}^{T}J_{x} + \lambda I)^{-1}J_{x}^{T}(JX - Y)$$

$$J_{x} = \frac{1}{2}\left[\begin{array}{cc} J(1 + \cos(\pi\varPsi)) & J(1 - \cos(\pi\varPsi)) \end{array}\right]$$
Equations ([20](#Equ20){ref-type=""}) and ([21](#Equ21){ref-type=""}) are used to reconstruct the level set function and the fluorescent yields, respectively. During the reconstruction, the updates of the level set function and the fluorescent yields are carried out separately. Within each iteration, the level set function *Ψ* is updated through Eq. ([20](#Equ20){ref-type=""}) first, and then the fluorescent yields *x* ~*b*~ and *x* ~*f*~ are updated through Eq. ([21](#Equ21){ref-type=""}). After both updates, a restriction step is executed to keep the level set function within \[0,1\]: when *ψ* \< 0 the level set function is set to 0, and when *ψ* \> 1 it is set to 1.
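Putting the pieces together, one outer iteration of the alternating scheme might look like the following sketch (NumPy; `yield_from_levelset` and `lm_update_levelset` are the helpers sketched above, and the regularization parameter is an assumed placeholder):

```python
import numpy as np

def reconstruct(J, Y, Psi, x_b, x_f, n_iter=5, lam=1e-3):
    """Alternate LM updates of the level set (Eq. (20)) and the yields (Eq. (21))."""
    for _ in range(n_iter):
        X = yield_from_levelset(Psi, x_b, x_f)                       # Eq. (7)
        Psi = lm_update_levelset(Psi, J, X, Y, x_b, x_f, lam)        # Eq. (20)

        X = yield_from_levelset(Psi, x_b, x_f)
        J_x = 0.5 * np.column_stack((J @ (1 + np.cos(np.pi * Psi)),  # Eq. (22)
                                     J @ (1 - np.cos(np.pi * Psi))))
        step = np.linalg.solve(J_x.T @ J_x + lam * np.eye(2), J_x.T @ (J @ X - Y))
        x_b, x_f = np.array([x_b, x_f]) - step                       # Eq. (21)

        Psi = np.clip(Psi, 0.0, 1.0)  # restriction step: keep the level set in [0, 1]
    return Psi, x_b, x_f
```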
Results and discussion {#Sec3}
======================
In order to validate the performance of the proposed method, numerical simulations and phantom experiments were carried out. The geometry of the imaged object used in the simulations and phantom experiments was a cylinder with a diameter of 3 cm and a height of 5 cm, as shown in Fig. [1](#Fig1){ref-type="fig"}. Two tubes with a diameter of 0.4 cm and a height of 5 cm were inserted into the cylinder as the fluorescent targets. The distance between the centers of the two tubes was 1 cm. In the simulations, fluorescent measurements were generated through Eq. ([1](#Equ1){ref-type=""}) and contaminated with 1% Gaussian noise. In the phantom experiments, a free-space FMT system \[[@CR34]\] was used to acquire the fluorescent measurements. A schematic of the imaging system is shown in Fig. [2](#Fig2){ref-type="fig"}. A 250 W halogen lamp (7ILT250, 7-star, Beijing, China) was used as the excitation light source. A 775 ± 23 nm band-pass filter (FF01-775/46-25, Semrock, Rochester, NY, USA) was placed in front of the lamp and coupled with a special optical fiber. The output of the fiber was a rectangular beam, which was converted into a line-shaped beam through an adjustable slit. The imaged object was placed on a rotation stage for full-angle projection measurements. An electron multiplying charge-coupled device (EMCCD) camera (iXon DU-897, Andor Technologies, Belfast, Northern Ireland) coupled with a Nikkor 60 mm f/2.8D lens (Nikon, Melville, NY, USA) and an 840 ± 6 nm band-pass filter (FF01-840/12-25, Semrock, Rochester, NY, USA) was used to capture the images. In both the simulations and phantom experiments, two different groups of measurements were obtained to test the cases with single or double targets. In the phantom experiments, tubes 1 and 2 were filled with 1.7 and 1.02 μmol/L indocyanine green (ICG), respectively, for the double-target measurements, whereas only tube 1 was filled with 1.02 μmol/L ICG for the single-target measurements. In addition, 1% intralipid with a reduced scattering coefficient of 10 cm^−1^ and an absorption coefficient of 0.02 cm^−1^ was used to fill the cylindrical object to simulate tissue. Accordingly, the fluorescent yields of the two targets used in the simulations were set to 1 and 0.6, respectively, and the optical coefficients were set to the same values as in the phantom experiments. For each group of measurements, a line source was utilized to illuminate the cylinder along the z-axis and 36 fluorescent images at different projection angles were acquired.

Fig. 1 Geometry of the imaged object used in simulations and phantom experiments. **a** 3D view of the geometry of the cylindrical object. **b** Top view of the geometry

Fig. 2 Schematic of the free-space FMT system
In reconstruction, the distributions of fluorescent yield at the central slice z = 2.5 cm were recovered with the proposed method, the classical image-based reconstruction method, and the implicit shape method. The LM method was implemented to accomplish the iteration in the proposed and image-based reconstruction methods, while the gradient descent method based on the artificial time evolution approach \[[@CR23]\] was used in the implicit shape method. A mesh with 1352 nodes and 2602 elements was used in the simulations, while another with 1473 nodes and 2800 elements was used for the phantom experiments. The reconstruction was terminated after 5 iterations for the LM method, whereas 500 iterations were executed for the gradient descent method. The initial values of fluorescent yield used in the image-based reconstruction method were set to 0, while the initial values of the level set function in the proposed and implicit shape methods were set to 0.5 and 0.05, respectively. In addition, the reconstructions with the implicit shape method were implemented without *a priori* information about the number of targets, i.e. only a single level set function was used and the fluorescent yield was initialized with one background coefficient *x* ~*b*~ and one target coefficient *x* ~*f*~.
The reconstruction results of the simulations and phantom experiments are shown in Figs. [3](#Fig3){ref-type="fig"}, [4](#Fig4){ref-type="fig"}, [5](#Fig5){ref-type="fig"}, [6](#Fig6){ref-type="fig"}, [7](#Fig7){ref-type="fig"} and [8](#Fig8){ref-type="fig"}. Figures [3](#Fig3){ref-type="fig"} and [6](#Fig6){ref-type="fig"} show the distributions of fluorescent yield normalized with the maximum and the corresponding distributions of level set function. Figures [4](#Fig4){ref-type="fig"} and [7](#Fig7){ref-type="fig"} show the profiles of normalized fluorescent yield along the blue dotted lines in Figs. [3](#Fig3){ref-type="fig"}d, j and [6](#Fig6){ref-type="fig"}d, j. Figures [5](#Fig5){ref-type="fig"} and [8](#Fig8){ref-type="fig"} give the residual norms as a function of the iteration index. To compare the convergence speeds of the different methods, the residual norms are also normalized with the maximum. To evaluate the reconstruction results quantitatively, the contrast-to-noise ratio (CNR) and Pearson correlation (PC) \[[@CR35]\] were used, which are defined as follows:

$$CNR = \frac{\frac{1}{T}\sum_{i=1}^{T}(x_{i} - x_{back})}{\sqrt{\frac{a_{tar}}{T}\sum_{i=1}^{T}\sigma_{i}^{2} + \sigma_{back}^{2}a_{back}}}$$

$$PC(X_{tru}, X_{rec}) = \frac{COV(X_{tru}, X_{rec})}{\sigma(X_{tru})\sigma(X_{rec})}$$

where *x* ~*i*~ and *x* ~*back*~ denote the mean values of fluorescent yield within the true region of the *i*th target and the background, respectively, and σ ~*i*~ and σ ~*back*~ are the corresponding variances. *a* ~*tar*~ and *a* ~*back*~ represent the area ratios *a* ~*tar*~ = *A* ~*tar*~/*A* and *a* ~*back*~ = *A* ~*back*~/*A*, where *A* ~*tar*~, *A* ~*back*~ and *A* denote the area of the targets, the area of the background, and the total area, respectively. *T* is the number of targets. *X* ~*tru*~ and *X* ~*rec*~ are the vectors of fluorescent yield for the true and reconstructed distributions, respectively. *COV* denotes the covariance and σ the standard deviation. A higher CNR indicates better differentiability between the targets and the background, i.e. better image quality. The metric PC describes the similarity between the true distribution and the reconstructed one. The CNRs and PCs of the reconstruction results of the simulation and phantom studies are listed in Tables [1](#Tab1){ref-type="table"} and [2](#Tab2){ref-type="table"}.

Fig. 3 Reconstruction results of simulations. **a**--**c** Distributions of fluorescent yield reconstructed with the image-based method, the proposed method, and the implicit shape method for a single target, respectively. **d** True distribution of fluorescent yield for a single target. **e**, **f** Distributions of level set function reconstructed with the proposed method and the implicit shape method for a single target, respectively. **g**--**l** Corresponding results for double targets. The reconstructions with the implicit shape method were implemented without *a priori* information about the number of targets

Fig. 4 Profiles of normalized fluorescent yield along the blue dotted lines in Fig. [3](#Fig3){ref-type="fig"}d and j. **a** Result for Fig. [3](#Fig3){ref-type="fig"}d. **b** Result for Fig. [3](#Fig3){ref-type="fig"}j

Fig. 5 Residual norms as a function of the iteration index for the simulation studies. **a** Result for a single target. **b** Result for double targets

Fig. 6 Reconstruction results of phantom studies. **a**--**c** Distributions of fluorescent yield reconstructed with the image-based method, the proposed method, and the implicit shape method for a single target, respectively. **d** True distribution of fluorescent yield for a single target. **e**, **f** Distributions of level set function reconstructed with the proposed method and the implicit shape method for a single target, respectively. **g**--**l** Corresponding results for double targets. The reconstructions with the implicit shape method were implemented without *a priori* information about the number of targets

Fig. 7 Profiles of normalized fluorescent yield along the *blue dotted lines* in Fig. [6](#Fig6){ref-type="fig"}d and j. **a** Result for Fig. [6](#Fig6){ref-type="fig"}d. **b** Result for Fig. [6](#Fig6){ref-type="fig"}j

Fig. 8 Residual norms as a function of the iteration index for the phantom studies. **a** Result for a single target. **b** Result for double targets

Table 1 CNRs and PCs of the reconstruction results of the simulation studies

| Method | Single target CNR | Single target PC | Double targets CNR | Double targets PC |
|---|---|---|---|---|
| Image-based method | 6.2905 | 0.6341 | 4.4873 | 0.6533 |
| Proposed method | 17.969 | 0.9221 | 10.945 | 0.9027 |
| Implicit shape method | 12.135 | 0.8486 | −0.0034 | 0.0052 |

Table 2 CNRs and PCs of the reconstruction results of the phantom studies

| Method | Single target CNR | Single target PC | Double targets CNR | Double targets PC |
|---|---|---|---|---|
| Image-based method | 3.2547 | 0.6503 | 4.6587 | 0.6407 |
| Proposed method | 5.4847 | 0.8134 | 5.9463 | 0.7351 |
| Implicit shape method | 5.2447 | 0.7971 | 0.7426 | 0.1645 |
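For reference, the two metrics might be computed as follows (a sketch; the boolean masks marking the true target and background regions are assumptions):

```python
import numpy as np

def pearson_correlation(x_true, x_rec):
    """PC of Eq. (24): covariance normalized by the two standard deviations."""
    return np.corrcoef(x_true, x_rec)[0, 1]

def cnr(x, target_masks, back_mask):
    """CNR of Eq. (23); target_masks holds one boolean mask per true target region."""
    T = len(target_masks)
    x_back = x[back_mask].mean()
    a_tar = sum(m.sum() for m in target_masks) / x.size   # A_tar / A
    a_back = back_mask.sum() / x.size                     # A_back / A
    num = sum(x[m].mean() - x_back for m in target_masks) / T
    den = np.sqrt(a_tar / T * sum(x[m].var() for m in target_masks)
                  + x[back_mask].var() * a_back)
    return num / den
```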
Figures [3](#Fig3){ref-type="fig"}a--f and [6](#Fig6){ref-type="fig"}a--f show that all three methods are capable of recovering the distribution of fluorescent yield for a single target, but the results of the image-based method are blurrier than those of the other two methods owing to over-smoothing. Because of the Heaviside function used in the implicit shape method, its results show explicit boundaries, while the results of the proposed method show blurry boundaries due to the cosine function. However, the implicit shape method cannot recover boundaries that exactly match the true shapes, owing to the irregularity of the meshes and the ill-posedness of the inverse problem. As a result, the CNRs and PCs of the results of the proposed method are higher than those of the implicit shape method, as shown in Tables [1](#Tab1){ref-type="table"} and [2](#Tab2){ref-type="table"}. In general, the proposed method and the implicit shape method achieve higher image clarity than the image-based method and perform similarly for the reconstruction of a single target. However, Figs. [3](#Fig3){ref-type="fig"}g--l and [6](#Fig6){ref-type="fig"}g--l, together with the corresponding CNRs and PCs in Tables [1](#Tab1){ref-type="table"} and [2](#Tab2){ref-type="table"}, show that the implicit shape method is incapable of reconstructing the two targets without *a priori* information about the number of targets, whereas the proposed method still works. This inability to reconstruct double targets with different fluorescent yields arises because a single level set function cannot represent multiple targets unless they share the same fluorescent yield. If multiple regions can be recognized during the iterative process, the coefficient *x* ~*f*~ in Eq. ([5](#Equ5){ref-type=""}) that describes the fluorescent yield of the targets can be split into multiple coefficients to represent multiple targets. Nevertheless, this condition is usually difficult to meet, especially when the targets are close to each other as in Figs. [3](#Fig3){ref-type="fig"}l and [6](#Fig6){ref-type="fig"}l. To avoid overlap of the reconstructed shapes of multiple targets, multiple coefficients and the corresponding shapes should be initialized before the start of the iterative process; however, a good guess of the distribution of fluorescent yield and the number of targets is essential for this initialization. Alternatively, for multiple targets, multiple levels of one level set function or more than one level set function can be adopted, but both need to be initialized with information about the number of targets.
As a variation of the Newton method, the LM method converges much faster than the gradient descent method, a first order method, as shown in Figs. [5](#Fig5){ref-type="fig"} and [8](#Fig8){ref-type="fig"}. Five iterations are sufficient for the LM method, while hundreds of iterations are required for the gradient descent method. Furthermore, the gradient descent method needs a number of iterations to drive the level set function negative, and during these iterations the residual norm does not vary because no region is yet delimited by the level set function. This produces a flat section in the curve of residual norm versus iteration index, as shown in Figs. [5](#Fig5){ref-type="fig"} and [8](#Fig8){ref-type="fig"}, and the length of the flat section is controlled by the initial conditions, including the step length and the initial values of the level set function and fluorescent yield. The initial conditions of the gradient descent method are more difficult to determine than those of the LM method, because the gradient descent method usually fails to converge when the initial conditions are chosen improperly. The choice of the initial value of the level set function and the choice of step length conflict with each other. When a large step length and a small initial value of the level set function are used, the flat section can be shortened, but the residual norm may increase as the iteration index increases, which results in divergence. On the contrary, a small step length and a large initial value of the level set function lead to a low convergence speed, i.e. more iterations are required. Moreover, it is difficult to avoid iterations that increase the residual norm in the gradient descent method, so the curve of residual norm versus iteration index commonly shows a sawtooth pattern in which the residual norm alternately increases and decreases, as can be observed in Figs. [5](#Fig5){ref-type="fig"} and [8](#Fig8){ref-type="fig"}. A variable step length might solve this problem, but how to vary the step length remains intractable and its choice would be time-consuming. The difficulty in choosing the step length also stems from the fact that the gradient used in the artificial time evolution approach is not the true gradient of the object function, because the Dirac function, the derivative of the Heaviside function, is omitted from the gradient. Rigorously, the gradient is only appropriate at positions where the level set function equals 0. The Dirac function makes the reconstruction results so sensitive to the step length that the step length is difficult to choose.
Figures [3](#Fig3){ref-type="fig"}a, g and [6](#Fig6){ref-type="fig"}a, g show that the reconstruction results of the image-based method are not homogeneous within the target regions and that there are cavities in the reconstructed targets. This phenomenon can also be observed in Figs. [4](#Fig4){ref-type="fig"} and [7](#Fig7){ref-type="fig"}. It is caused by the irregularity of the meshes: the reconstructed values of fluorescent yield at the nodes are affected by the sizes of the elements that contain these nodes. A two-level mesh reconstruction strategy that performs the forward calculations on a triangular or tetrahedral mesh and implements the reconstruction on a square or cubic mesh can solve this problem, but it complicates the reconstruction process. As an alternative, a low-pass filter can be applied to smooth the reconstruction results \[[@CR36]\]. In addition, the irregularity of the meshes also distorts the reconstruction results of the implicit shape method, in that the regions delimited by the level set function are divided into pieces, as shown in Fig. [9](#Fig9){ref-type="fig"}. To solve this problem, the results of the implicit shape method in Figs. [3](#Fig3){ref-type="fig"}, [6](#Fig6){ref-type="fig"}, [7](#Fig7){ref-type="fig"} and [8](#Fig8){ref-type="fig"} were obtained by smoothing the distributions of the level set function with the low-pass filter after each iteration. In contrast, the proposed method is not influenced by the irregularity of the meshes, as shown in Figs. [3](#Fig3){ref-type="fig"}b, e, h, k and [6](#Fig6){ref-type="fig"}b, e, h, k.

Fig. 9 Reconstruction results of the implicit shape method without the low-pass filter for a single target. **a** Distribution of fluorescent yield normalized with the maximum. **b** Distribution of level set function
The difference between the proposed method and the implicit shape method derives from the replacement of the Heaviside function with the cosine function in Eq. ([7](#Equ7){ref-type=""}). The primary defect of the Heaviside function is that its derivative, the Dirac function, cannot be differentiated further, which makes second order methods unavailable. In contrast, the derivative of the cosine function, the sine function, is itself differentiable, so second order methods can be implemented. Moreover, the gradient of the object function for the Heaviside function includes the Dirac function, which cannot be calculated numerically and has to be omitted in reconstruction. Omitting the Dirac function from the gradient makes the reconstruction results sensitive to the step length, so the step length is difficult to determine; the use of the cosine function avoids this problem. Finally, the Heaviside function fixes the fluorescent yield at the two values *x* ~*b*~ and *x* ~*f*~, which gives rise to the requirement for *a priori* information about the number of targets. On the contrary, the cosine function lets the fluorescent yield vary between *x* ~*b*~ and *x* ~*f*~, so *a priori* information about the number of targets is no longer required. However, the variable fluorescent yield also blurs the reconstructed shapes; this is the disadvantage of the cosine function.
Generally, the proposed method can be considered a compromise between the image-based method and the implicit shape method. Taking advantage of the Newton-type method, the image-based method offers fast convergence and stable reconstruction but suffers from low image clarity. On the contrary, the implicit shape method provides high image clarity through the level set function but suffers from a slow convergence speed and unstable reconstruction owing to its use of first order methods. The proposed method combines the Newton-type method and the level set function to achieve the advantages of both. However, the proposed method is incapable of producing images with explicit boundaries because the cosine function blurs the shapes of the reconstruction results.
Conclusions {#Sec4}
===========
In conclusion, a shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. This method replaces the Heaviside function of the classical implicit shape method with a cosine function so as to make use of the Levenberg--Marquardt method. The proposed method provides a faster convergence speed than the implicit shape method and higher image clarity than the image-based reconstruction method. Furthermore, the proposed method does not need to know the number of targets and avoids the choice of step length, which is an intractable problem in the gradient descent method. As a result, the proposed method performs more stably than the implicit shape method.
FMT
: fluorescence molecular tomography
FEM
: finite element method
LM
: Levenberg--Marquardt
ICG
: indocyanine green
CNR
: contrast to noise ratio
PC
: Pearson correlation
XZ, XC, and SZ designed the research. XZ developed the algorithm and drafted the manuscript. XZ and XC performed the simulation and phantom experiments. XC and SZ revised the manuscript. All authors read and approved the final manuscript.
Acknowledgements {#FPar1}
================
Not applicable.
Competing interests {#FPar2}
===================
The authors declare that they have no competing interests.
Data availability statement {#FPar3}
===========================
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Funding {#FPar4}
=======
This work was supported by the Program of National Natural Science Foundation of China under Grant Nos. 81227901, 61405149, 81230033, and 61471279, the Program of the National key Research and Development Program of China under Grant No. 2016YFC0103802, and the Fundamental Research Funds for the Central Universities NSIZ021402 and XJS17049.
Publisher's Note {#FPar5}
================
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
As described in Publication of Japanese Patent Application No. 2001-2986151, a conventional C-MOSFET uses the same material for the gate insulation layer of both a PMOSFET and an NMOSFET in order to simplify the fabrication process. In the recent deep-sub-micron generation, in order to reduce the short channel effect and to improve driving performance, an NMOSFET includes a gate electrode of N-type, while a PMOSFET includes a gate electrode of P-type, which is called a “dual-gate structure”. In fabrication, an N-type impurity, such as phosphorus, is ion-implanted into the N-type poly-silicon gate electrode, while a P-type impurity, such as boron, is ion-implanted into the P-type poly-silicon gate electrode.

However, in such a fabricated PMOSFET, boron atoms may escape from the gate electrode through the gate insulation layer into the silicon substrate, which may be called the “cut-through phenomenon”. As a result, the threshold voltage (Vth) of the PMOSFET shifts, and the reliability of the device is decreased, as shown in “Kurasawa et al., IEDM Tech. Digest, p. 895, 1993”.

In order to prevent such a cut-through phenomenon of boron, nitrogen may be added into the gate insulation layer. However, nitrogen atoms may diffuse to the interface between the silicon substrate and the gate insulation layer, so that the nitrogen composition is increased adjacent to the interface. When the nitrogen composition is increased, fixed charge is generated in proportion to the nitrogen composition. Such fixed charge lowers the flat band voltage of the MOSFET; therefore, the threshold voltage of the NMOSFET is lowered and the standby current is increased.

According to “S. Takagi et al., IEEE Trans. on Electron Devices, Vol. 41, p. 2357, 1994”, when the nitrogen composition is increased, electron mobility is reduced. Further, there are other problems in that: a) the mobility of the MOSFET is decreased and the mutual conductance (transconductance) is decreased, “H. Iwai et al., Symp. on VLSI Tech., p. 131, 1990”; b) NBTI (Negative Bias Temperature Instability) is worsened; and c) the lifetime of a transistor used at an I/O portion with a higher power supply voltage is shortened, “N. Kimizuka et al., Symp. on VLSI Tech. Digest, p. 73, 1999”.

In order to prevent the above-described problems, according to an invention described in Publication of Japanese Patent Application No. H05-218405 and E. Hasegawa et al., IEDM Tech. Digest, p. 327, 1995, the nitrogen composition in the gate insulation layer is controlled to be no higher than 1 atom%. However, as the gate insulation layer becomes thinner with the miniaturization of transistors, the amount of nitrogen in the gate insulation layer is reduced, and it becomes difficult to prevent the cut-through phenomenon of boron in a PMOSFET.

According to Publication of Japanese Patent Application No. 2001-291865, the nitrogen concentration in a gate insulation layer is controlled to be no higher than 1×10^21 cm^−3, while the nitrogen concentration at the interface between the silicon substrate and the gate insulation layer is controlled to be no higher than 1×10^19 cm^−3. However, the number of fabrication steps is increased and the mobility of an NMOSFET is still low. |
Q:
Changing uri-design SEO
I think I need to redesign the URI collections on my site.
Right now I am using the following structure by mistake:
/fruits for the whole collection
/fruit/banana for an item in the collection
But I am changing this to:
/fruits
/fruits/banana
How will this affect my pages that are indexed by the search engines? Should I create 303 redirects from the old collection structure to the new one? Or is it best to live with my mistake and not make the change?
Thanks,
James Ford
A:
If you really want to change the URL structure, you can apply 301 redirections with .htaccess (if you use the Apache web server) in order to indicate to Google that you are using new URLs. Google will then replace the old URLs with the new ones in its index by itself.
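For example, a minimal sketch (assuming mod_rewrite is enabled and all your old item URLs live under /fruit/):

```apache
RewriteEngine On
# Permanently (301) redirect old item URLs such as /fruit/banana to /fruits/banana
RewriteRule ^fruit/(.+)$ /fruits/$1 [R=301,L]
```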
If you want the old URLs to disappear more quickly once the new ones are indexed, you can request their removal from the index in Google Webmaster Tools (in the menu: "Optimization" => "Remove URLs").
Otherwise, you can live with the current structure without changing anything; but if you decide it is better to change, you can apply 301 redirections with little impact on SEO. As you wish.
|
package net.citizensnpcs.nms.v1_16_R2.entity;
import org.bukkit.Bukkit;
import org.bukkit.craftbukkit.v1_16_R2.CraftServer;
import org.bukkit.craftbukkit.v1_16_R2.entity.CraftEntity;
import org.bukkit.craftbukkit.v1_16_R2.entity.CraftZombie;
import org.bukkit.entity.Zombie;
import org.bukkit.util.Vector;
import net.citizensnpcs.api.event.NPCEnderTeleportEvent;
import net.citizensnpcs.api.event.NPCPushEvent;
import net.citizensnpcs.api.npc.NPC;
import net.citizensnpcs.nms.v1_16_R2.util.NMSImpl;
import net.citizensnpcs.npc.CitizensNPC;
import net.citizensnpcs.npc.ai.NPCHolder;
import net.citizensnpcs.util.Util;
import net.minecraft.server.v1_16_R2.BlockPosition;
import net.minecraft.server.v1_16_R2.DamageSource;
import net.minecraft.server.v1_16_R2.Entity;
import net.minecraft.server.v1_16_R2.EntityBoat;
import net.minecraft.server.v1_16_R2.EntityMinecartAbstract;
import net.minecraft.server.v1_16_R2.EntityTypes;
import net.minecraft.server.v1_16_R2.EntityZombie;
import net.minecraft.server.v1_16_R2.IBlockData;
import net.minecraft.server.v1_16_R2.NBTTagCompound;
import net.minecraft.server.v1_16_R2.SoundEffect;
import net.minecraft.server.v1_16_R2.Vec3D;
import net.minecraft.server.v1_16_R2.World;
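/**
 * Citizens NPC controller for zombies on Minecraft 1.16 R2 (NMS). EntityZombieNPC
 * subclasses the NMS zombie so that Citizens can suppress vanilla AI, persistence
 * and physics for protected NPCs; ZombieNPC is the matching Bukkit wrapper.
 */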
public class ZombieController extends MobEntityController {
public ZombieController() {
super(EntityZombieNPC.class);
}
@Override
public Zombie getBukkitEntity() {
return (Zombie) super.getBukkitEntity();
}
public static class EntityZombieNPC extends EntityZombie implements NPCHolder {
private final CitizensNPC npc;
public EntityZombieNPC(EntityTypes<? extends EntityZombie> types, World world) {
this(types, world, null);
}
public EntityZombieNPC(EntityTypes<? extends EntityZombie> types, World world, NPC npc) {
super(types, world);
this.npc = (CitizensNPC) npc;
if (npc != null) {
NMSImpl.clearGoals(npc, goalSelector, targetSelector);
}
}
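// NMS-obfuscated override (appears to be the block-landing/fall handler);
// skipped entirely for flyable NPCs.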
@Override
protected void a(double d0, boolean flag, IBlockData block, BlockPosition blockposition) {
if (npc == null || !npc.isFlyable()) {
super.a(d0, flag, block, blockposition);
}
}
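// NMS-obfuscated fall-damage hook; returning false makes flyable NPCs
// immune to fall damage.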
@Override
public boolean b(float f, float f1) {
if (npc == null || !npc.isFlyable()) {
return super.b(f, f1);
}
return false;
}
@Override
public void checkDespawn() {
if (npc == null) {
super.checkDespawn();
}
}
@Override
public void collide(net.minecraft.server.v1_16_R2.Entity entity) {
// this method is called by both the entities involved - cancelling
// it will not stop the NPC from moving.
super.collide(entity);
if (npc != null)
Util.callCollisionEvent(npc, entity.getBukkitEntity());
}
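// NMS-obfuscated save hook; returning false for NPCs prevents them from
// being persisted to disk with the chunk.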
@Override
public boolean d(NBTTagCompound save) {
return npc == null ? super.d(save) : false;
}
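// NMS-obfuscated travel/movement method; flyable NPCs use Citizens'
// custom flying movement logic instead.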
@Override
public void g(Vec3D vec3d) {
if (npc == null || !npc.isFlyable()) {
super.g(vec3d);
} else {
NMSImpl.flyingMoveLogic(this, vec3d);
}
}
@Override
public void enderTeleportTo(double d0, double d1, double d2) {
if (npc == null) {
super.enderTeleportTo(d0, d1, d2);
return;
}
NPCEnderTeleportEvent event = new NPCEnderTeleportEvent(npc);
Bukkit.getPluginManager().callEvent(event);
if (!event.isCancelled()) {
super.enderTeleportTo(d0, d1, d2);
}
}
@Override
public CraftEntity getBukkitEntity() {
if (npc != null && !(super.getBukkitEntity() instanceof NPCHolder)) {
NMSImpl.setBukkitEntity(this, new ZombieNPC(this));
}
return super.getBukkitEntity();
}
@Override
public NPC getNPC() {
return npc;
}
@Override
protected SoundEffect getSoundAmbient() {
return NMSImpl.getSoundEffect(npc, super.getSoundAmbient(), NPC.AMBIENT_SOUND_METADATA);
}
@Override
protected SoundEffect getSoundDeath() {
return NMSImpl.getSoundEffect(npc, super.getSoundDeath(), NPC.DEATH_SOUND_METADATA);
}
@Override
protected SoundEffect getSoundHurt(DamageSource damagesource) {
return NMSImpl.getSoundEffect(npc, super.getSoundHurt(damagesource), NPC.HURT_SOUND_METADATA);
}
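// NMS-obfuscated push/velocity method; wrapped below in an NPCPushEvent
// so plugins can cancel or modify the knockback.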
@Override
public void i(double x, double y, double z) {
if (npc == null) {
super.i(x, y, z);
return;
}
if (NPCPushEvent.getHandlerList().getRegisteredListeners().length == 0) {
if (!npc.data().get(NPC.DEFAULT_PROTECTED_METADATA, true))
super.i(x, y, z);
return;
}
Vector vector = new Vector(x, y, z);
NPCPushEvent event = Util.callPushEvent(npc, vector);
if (!event.isCancelled()) {
vector = event.getCollisionVector();
super.i(vector.getX(), vector.getY(), vector.getZ());
}
// when another entity collides, this method is called to push the
// NPC so we prevent it from doing anything if the event is
// cancelled.
}
@Override
public boolean isClimbing() {
if (npc == null || !npc.isFlyable()) {
return super.isClimbing();
} else {
return false;
}
}
@Override
public boolean isLeashed() {
if (npc == null)
return super.isLeashed();
boolean protectedDefault = npc.data().get(NPC.DEFAULT_PROTECTED_METADATA, true);
if (!protectedDefault || !npc.data().get(NPC.LEASH_PROTECTED_METADATA, protectedDefault))
return super.isLeashed();
if (super.isLeashed()) {
unleash(true, false); // clearLeash with client update
}
return false; // shouldLeash
}
@Override
public void mobTick() {
super.mobTick();
if (npc != null) {
NMSImpl.updateMinecraftAIState(npc, this);
npc.update();
}
}
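// NMS-obfuscated vehicle hook (appears to decide whether passing boats or
// minecarts may pick this entity up as a passenger); protected NPCs refuse.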
@Override
protected boolean n(Entity entity) {
if (npc != null && (entity instanceof EntityBoat || entity instanceof EntityMinecartAbstract)) {
return !npc.data().get(NPC.DEFAULT_PROTECTED_METADATA, true);
}
return super.n(entity);
}
}
public static class ZombieNPC extends CraftZombie implements NPCHolder {
private final CitizensNPC npc;
public ZombieNPC(EntityZombieNPC entity) {
super((CraftServer) Bukkit.getServer(), entity);
this.npc = entity.npc;
}
@Override
public NPC getNPC() {
return npc;
}
}
}
|
1. Introduction {#sec1}
==================
Apart from the approved drug Fasudil (HA-1077), H-89 is one of the most prominent representatives of the 'H-series' of kinase inhibitors, a set of ATP-competitive isoquinoline sulfonamides (Chijiwa *et al.*, 1990[@bb6]; Hidaka *et al.*, 1984[@bb13]; Ono-Saito *et al.*, 1999[@bb18]; Fig. 1[▶](#fig1){ref-type="fig"}). H-89 was developed and reported to be selective towards the catalytic subunit of cAMP-dependent protein kinase, also known as protein kinase A (PKA). Despite its misregulation in certain types of cancer, PKA is usually considered to be an 'antitarget' in drug development owing to the ubiquitous and essential nature of the cellular processes that it regulates. Hence, the use of H-89 has largely remained confined to academic research. In contrast, the Rho kinase-targeting inhibitor Fasudil was approved in Japan in 1995 for the prevention of cerebral vasospasm in patients with subarachnoid haemorrhage and was found to be potentially useful for enhancing memory and improving the prognosis of Alzheimer's patients (Huentelman *et al.*, 2009[@bb14]). However, H-89 became particularly popular for *in vitro* studies requiring the absence of PKA activity or investigating the regulatory role of PKA itself. It is still used frequently, but now in the context of recent studies that have shown H-89 to be a rather general AGC kinase inhibitor (Davies *et al.*, 2000[@bb10]; Lochner & Moolman, 2006[@bb16]). While one barrier to the development of H-series compounds as drugs may be the inhibition of PKA, H-89 has also proven to be useful in drug-design projects. The H-89 scaffold has provided the basis for the design of new compounds with selectivity towards protein kinase B (PKB/Akt; Caldwell *et al.*, 2008[@bb4]; Collins *et al.*, 2006[@bb8]; Reuveni *et al.*, 2002[@bb21]), which is structurally similar to PKA (Gassel *et al.*, 2003[@bb12]) and remains an important drug target (Cheng *et al.*, 2005[@bb5]; Wu & Hu, 2010[@bb27]).
2. Materials and methods {#sec2}
===========================
2.1. Protein production, purification and crystallization {#sec2.1}
------------------------------------------------------------
The full-length human catalytic subunit α of PKA (GenBank accession No. NP_002721) was expressed in *Escherichia coli* BL21 (DE3)-RIL cells (Stratagene) from a construct based on the vector pT7-7 in auto-induction medium (Studier, 2005[@bb23]) over a period of approximately 20 h at 297 K. The procedures used for the purification of PKA followed previously published protocols (Engh *et al.*, 1996[@bb11]).
Cocrystallization of PKA and H-89 was carried out in hanging drops at 277 K. Drops consisting of 10 mg ml^−1^ protein, 25 m*M* bis-tris/MES pH 6.9, 50 m*M* KCl, 1.5 m*M* octanoyl-*N*-methylglucamide, 1 m*M* 'protein kinase inhibitor' peptide (PKI; ~5~TTYADFIASGRTGRRNAIHD~24~) and 5 m*M* H-89 (added from a 100 m*M* methanol stock) were equilibrated against 12--22%(*v*/*v*) methanol. For data collection, crystals were transferred into 30% 2-methyl-2,4-pentanediol and flash-cooled.
2.2. Diffraction data collection and data processing {#sec2.2}
-------------------------------------------------------
The diffraction of a cooled crystal was measured on beamline ID29 at the European Synchrotron Radiation Facility (ESRF; Grenoble, France). The wavelength of 0.91969 Å was chosen for data collection after a scan for the maximum X-ray absorption of the crystal in proximity to the theoretical *K* absorption edge of bromine, and the data-collection strategy was designed to obtain an overall multiplicity of greater than four. Subsequent processing of the data was carried out with the *XDS* software package (Kabsch, 2010[@bb15]) and the *CCP*4 program suite (Winn *et al.*, 2011[@bb7]). The diffraction frames were integrated with *XDS* and the resulting intensities were scaled with *XSCALE*, in which Friedel pairs were not merged ('FRIEDEL'S_LAW=FALSE' option; Table 1[▶](#table1){ref-type="table"}). The data set was phased by molecular replacement with *MOLREP* (Vagin & Teplyakov, 2010[@bb25]) employing the coordinates of PDB entry [1ydt](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1ydt) (Engh *et al.*, 1996[@bb11]). The structure was refined with *REFMAC*5 (Murshudov *et al.*, 2011[@bb17]; Table 1[▶](#table1){ref-type="table"}) and the resulting structure factors were merged with the columns 'DANO' and 'SIGDANO' of the unphased original `*.mtz` file using the program *CAD*. The resulting `*.mtz` file containing both the structure factors with phases and the anomalous signal of the bromine of H-89 was used to calculate anomalous difference Fourier maps with the program *FFT* (Ten Eyck, 1973[@bb24]).
*REFMAC*5 (Murshudov *et al.*, 2011[@bb17]) was used to generate weighted electron-density and difference maps (2*mF* ~o~ − *DF* ~c~ and *mF* ~o~ − *DF* ~c~, respectively) for the refined structures (Figs. 2[▶](#fig2){ref-type="fig"} *b* and 2[▶](#fig2){ref-type="fig"} *c*). *F* ~o~ and *F* ~c~ refer to the observed and model structure factors, *m* is the figure of merit and *D* is the model error parameter. The *REFMAC*5 calculation of the weighting factors *D* and *m* (Murshudov *et al.*, 2011[@bb17]) matches that in *SIGMAA* (Read, 1986[@bb20]) except that only the free-*R*-flagged reflections are used to estimate *m* and *D*. For missing reflections, the map coefficients are replaced with *DF* ~c~ in electron-density maps and zero in difference maps.
2.3. Inhibitors {#sec2.3}
------------------
The kinase inhibitor H-89 was purchased from Cayman Chemicals (Ann Arbor, USA). The 'protein kinase inhibitor' peptide (PKI; ~5~TTYADFIASGRTGRRNAIHD~24~) used in the purification and cocrystallization of PKA was purchased from GL Biochem Shanghai Ltd (Shanghai, People's Republic of China).
2.4. Structure deposition {#sec2.4}
----------------------------
The coordinates and structure factors of the PKA--H-89 complex crystal structure described here have been deposited in the Protein Data Bank (PDB) with accession code [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh).
3. Results {#sec3}
=============
3.1. Overall structure {#sec3.1}
-------------------------
In PDB entry [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh) the catalytic subunit α of protein kinase A appears in its usual conformation (Fig. 3[▶](#fig3){ref-type="fig"} *a*), similar to that in reference structures such as [1atp](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1atp) (Zheng *et al.*, 1993[@bb29]) and [1cdk](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1cdk) (Bossemeyer *et al.*, 1993[@bb3]) and nearly identical to an earlier structure of a PKA--H-89 complex (PDB entry [1ydt](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1ydt); Engh *et al.*, 1996[@bb11]). The root-mean-square deviation (r.m.s.d.) between the structures [1ydt](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1ydt) and [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh) is 0.33 Å. In [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh) the fragment 5--24 of the PKI ('protein kinase inhibitor') peptide, which is routinely used for structural work on PKA as it stabilizes the kinase domain and facilitates crystallization, occupies the peptide-substrate site, which is formed primarily by the surface of the α-helical C-terminal lobe of the protein. The N-terminal lobe, which is mainly comprised of a five-stranded antiparallel β-sheet, is linked covalently to the C-terminal lobe by a single peptide chain (the hinge). Their interface forms a deep cleft which constitutes the binding pocket for the nucleotide substrate ATP. In the structure described here, the ATP-competitive inhibitor H-89 occupies the ATP-binding site (Fig. 3[▶](#fig3){ref-type="fig"} *a*). As shown in Fig. 1[▶](#fig1){ref-type="fig"}, H-89 binds with respect to ATP such that the isoquinoline group of H-89 occupies the adenine-binding pocket, the sulfonamide mimics the ribose group of ATP and the bromobenzene moiety occupies the site of the phosphate groups beneath the glycine-rich loop (formed by β-strands 1 and 2 and the β-turn that links them).
As in PDB entry [1ydt](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1ydt) (Engh *et al.*, 1996[@bb11]), the electron density for the bromobenzene portion of H-89 is diffuse and its position is not clearly defined (Fig. 2[▶](#fig2){ref-type="fig"}).
3.2. The anomalous signal describes a bivalent binding mode for H-89 {#sec3.2}
------------------------------------------------------------------------
The anomalous signal of the bromine of H-89 in the collected data set is rather weak, as is evident from the 'Anomal. corr.' and 'SigAno' parameters in Table 2[▶](#table2){ref-type="table"}, which represent the mean correlation factor between two random subsets of anomalous intensity differences and the mean anomalous difference in units of its estimated standard deviation, respectively. As would be hoped for a diffraction data set containing useful anomalous dispersion information, the anomalous correlation ('Anomal. corr.') exceeds 30% and the anomalous signal is stronger than noise ('SigAno' \> 1), but this applies only for resolutions coarser than ∼5 Å (Table 2[▶](#table2){ref-type="table"}). However, although the strength of the anomalous signal is far below the requirements for a successful SAD phasing experiment (Dauter *et al.*, 2002[@bb9]), it is sufficient to unambiguously localize the position of the bromine moiety of H-89 within the asymmetric unit (Fig. 2[▶](#fig2){ref-type="fig"}).
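As a rough illustration of what these two statistics measure, consider the following sketch (NumPy; `f_plus`, `f_minus` and `sigma` are assumed arrays of Friedel-mate amplitudes and the estimated standard deviation of their difference within one resolution shell, and `d1`, `d2` are the anomalous differences of two random half-datasets; this is not the XDS/XSCALE implementation):

```python
import numpy as np

def sig_ano(f_plus, f_minus, sigma):
    """Mean anomalous difference in units of its estimated standard deviation ('SigAno')."""
    return np.mean(np.abs(f_plus - f_minus) / sigma)

def anomal_corr(d1, d2):
    """Correlation between anomalous differences of two random subsets ('Anomal. corr.')."""
    return np.corrcoef(d1, d2)[0, 1]
```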
The anomalous difference Fourier maps in Fig. 2[▶](#fig2){ref-type="fig"}(*a*) display a single strong feature in the asymmetric unit which is dominant in the maps contoured at signal-to-noise ratios of 3σ and 4σ and is unique in the map contoured at 5σ. This peak corresponds to the localization of the bromine group of H-89 in the ATP pocket of PKA (Fig. 2[▶](#fig2){ref-type="fig"} *c*). However, the anomalous density is not localized to one site but appears spread out into two spheres of density, indicating two main positions of the bromine moiety. This result correlates well with the unclear electron density of the bromobenzene group of H-89 in both the structure [1ydt](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1ydt) (Fig. 2[▶](#fig2){ref-type="fig"} *b*; Engh *et al.*, 1996[@bb11]) and the higher resolution H-89--PKA complex structure determined in this study (Fig. 2[▶](#fig2){ref-type="fig"} *c*). In both cases it seems that the bromobenzene group of H-89 has some freedom to rotate about an axis running perpendicular to its benzene ring. While the electron density alone hints at this, the anomalous density shows distinctly preferred positions of the Br atom.
Consistent with the appearance of two peaks of similar intensity in the anomalous difference Fourier map, the ligand H-89 was modelled in the structure [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh) with two alternative conformations, each with 50% occupancy. Rotation of the C3---N4 bond (Fig. 1[▶](#fig1){ref-type="fig"}) in the flexible linker of H-89 and adjustment of the following dihedrals placed the bromine moieties of the two conformers into the distinct positions indicated by the anomalous difference map. The coordinates of the structure were subsequently refined in *REFMAC*5 without positional restraints for the ligand molecule, resulting in the conformations presented in Fig. 2[▶](#fig2){ref-type="fig"}(*c*). In the refinement the bromobenzene moieties of H-89 retained their distinct positions in good agreement with the density of the anomalous difference map. The linker geometries are correspondingly displaced relative to one another; this is in accord with the partial weak electron density of this portion of H-89 in the structure [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh) (Fig. 2[▶](#fig2){ref-type="fig"} *c*). In contrast, the isoquinoline moiety of H-89 is anchored to the hinge of the kinase domain *via* a hydrogen bond (Fig. 3[▶](#fig3){ref-type="fig"} *c*) and hence features the lowest temperature factors in the ligand molecule (Fig. 3[▶](#fig3){ref-type="fig"} *b*); the adjacent sulfonamide group shows a slight rotation. The torsion angles of the amide groups with respect to the isoquinoline vary by ∼17°. The protein--ligand interactions between PKA and the two conformers differ marginally. This is true for both the polar contacts of the linker of H-89 with PKA and the hydrophobic contacts between the bromobenzene moiety of H-89 and the glycine-rich loop of PKA (Fig. 3[▶](#fig3){ref-type="fig"} *c*). In either case the bromine moiety of H-89 is not involved in polar contacts to the protein and the discrete positions of its two conformers are likely to be a consequence of constraints imposed by the linker dihedrals.
3.3. Data quality {#sec3.3}
--------------------
The diffraction data utilized for modelling the structure [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh) originate from a single crystal which was exposed to an X-ray dose of approximately 7 MGy. Because neither the overall diffraction quality nor the anomalous signal decayed during data collection, the disorder of the bromine does not appear to result from radiation damage and both H-89 conformers are clearly observable. Consistently, the bromine moieties of both H-89 conformers appear electron-dense and well observed. The lengths of the bromine--carbon bonds in the bromobenzene moieties were both 1.9 Å, which is in good agreement with literature values.
4. Discussion {#sec4}
================
The approach of incorporating bromine into small-molecule ligands in order to quickly screen for binding and to subsequently efficiently determine the binding geometry has been employed as a drug-discovery business model (Antonysamy *et al.*, 2008[@bb1]; Blaney *et al.*, 2006[@bb2]; Wolf *et al.*, 2002[@bb26]). Details of its application and utility are sparse in the scientific literature. However, this would be one approach to address the problem of evaluating ligand flexibility in drug design (Seddon *et al.*, 2012[@bb22]). Here, we show the successful use of this approach for a very specific application, namely the characterization of the apparently heterogeneous binding mode of a kinase inhibitor, which was not possible using electron-density maps (2*mF* ~o~ − *DF* ~c~) alone.
Although the statistics showed the overall anomalous signal to be significant only in the lower resolution shells, the total signal, once transformed into the real-space anomalous electron-density map, was unambiguously localized at the bromine group of H-89 with a resolution sufficient to identify two discrete positions. This demonstrates the utility of the approach for related applications in characterizing binding-mode heterogeneity and also confirms its usefulness for applications involving weak-binding ligands or low-affinity small-molecule fragments that may bind with only partial occupancy.
Regarding the flexible binding mode of H-89 in the ATP pocket of PKA, the question arises whether this information may be useful in inhibitor design. In general, binding flexibility is associated with adaptability to variation in binding sites, consistent with the broad inhibition profile of H-89 for AGC kinases (Davies *et al.*, 2000[@bb10]; Lochner & Moolman, 2006[@bb16]). In order to develop H-89 towards a PKB inhibitor, the linker between its aromatic moieties was rigidified, but the selectivity of the resulting compounds was not reported (Caldwell *et al.*, 2008[@bb4]; Collins *et al.*, 2006[@bb8]). An interesting approach would be to modify the linker of H-89 so as to fix each of the two observed binding conformations and then to investigate potential changes in the target-selectivity pattern of the resulting compounds.
Supplementary Material
======================
PDB reference: [PKA--H-89 complex, 3vqh](3vqh)
Figure 1. […] 'PKA' refers to cAMP-dependent protein kinase catalytic subunit α isoform 1 and 'ROCK2' to Rho kinase α. Affinity values were taken from the literature: \*, Rajagopalan *et al.* (2010[@bb19]); †, Gassel *et al.* (2003[@bb12]); ‡, Yano *et al.* (2008[@bb28]); §, Engh *et al.* (1996[@bb11]).
Figure 2. (*a*) […] contoured at levels of 3σ, 4σ and 5σ. (*b*) 2.3 Å resolution electron-density map (grey) and difference density map (green) surrounding the compound H-89 in PDB entry [1ydt](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=1ydt) (cyan; Engh *et al.*, 1996[@bb11]). (*c*) 1.95 Å resolution OMIT electron-density map (grey) and anomalous difference density map (blue) carved around the two conformations of compound H-89 in PDB entry [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh) (yellow).
Figure 3. (*a*) […] a ternary complex of the catalytic subunit α of protein kinase A (PKA; white), the peptidic pseudosubstrate 'protein kinase inhibitor' (PKI; grey) and the ATP-competitive inhibitor H-89 (yellow). (*b*) *B*/temperature factors of the H-89 conformers in the structure 3vqh plotted as spheres on the respective atoms; values are indicated for selected atoms. (*c*) Binding environment of the H-89 conformers in the ATP pocket of PKA in structure 3vqh. Residues Val123, Glu127 and Asn171 form hydrogen bonds to the ligands; the bromine moieties pack against hydrophobic atoms from the side chains of Phe54 and Lys174 and against main-chain atoms of the glycine-rich loop. The inner surface of the ATP pocket is shown in red (maximum distance of 2 Å from the inhibitor molecule).
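The per-atom temperature factors visualized in Fig. 3(b) can likewise be pulled from the coordinates. A minimal sketch, under the same assumptions as the earlier snippets, lists the B factor of every ligand atom and the mean per conformer; a well ordered isoquinoline anchor should show up as the atoms with the lowest values.

```python
# Sketch: list per-atom B factors of the H-89 conformers (cf. Fig. 3b).
# Assumptions: local file '3vqh.pdb'; ligand residue code 'H89'.
from collections import defaultdict

import gemmi

st = gemmi.read_structure('3vqh.pdb')
b_by_conformer = defaultdict(list)
for chain in st[0]:                      # first (only) model
    for residue in chain:
        if residue.name != 'H89':
            continue
        for atom in residue:
            print(f'{atom.altloc} {atom.name:<4} B = {atom.b_iso:6.2f}')
            b_by_conformer[atom.altloc].append(atom.b_iso)

for conformer, bs in sorted(b_by_conformer.items()):
    print(f'conformer {conformer}: mean B = {sum(bs) / len(bs):.2f}')
```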
###### Refinement and structure statistics of PDB entry [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh)
Values in parentheses are for the last shell.
---------------------------------------------------- ---------------------------------------
Data collection and scaling (*XDS*)                   
X-ray source                                          ID29, ESRF
Resolution limits (Å)                                 35.0–1.95 (2.20–1.95)
Unit-cell parameters (Å)                              *a* = 72.73, *b* = 75.18, *c* = 80.33
Space group                                           *P*2~1~2~1~2~1~
Wavelength (Å)                                        0.91969
Total No. of reflections                              283804 (84880)
No. of unique reflections                             61702 (18509)
Multiplicity                                          4.6 (4.6)
⟨*I*/σ(*I*)⟩                                          19.08 (5.68)
*R*~mrgd-*F*~† (%)                                    9.0 (28.8)
Completeness (%)                                      99.3 (98.0)
Wilson *B* (Å^2^)                                     21.57
Refinement (*REFMAC*5)                                
*R*~work~‡ (%)                                        19.0
*R*~free~‡ (%)                                        23.4
Average *B* factor (Å^2^)                             21.88
No. of protein atoms                                  3015
No. of ligand atoms                                   62
No. of water molecules                                178
R.m.s.d. bond lengths (Å)                             0.010
R.m.s.d. bond angles (°)                              1.38
Ramachandran plot (*PROCHECK*)                        
Most favoured (%)                                     91.1
Additionally allowed (%)                              8.9
Generously allowed (%)                                0
Disallowed (%)                                        0
---------------------------------------------------- ---------------------------------------
† *R*~mrgd-*F*~ is the merging *R* factor calculated on structure-factor amplitudes derived from two randomly selected subsets of the data, as tabulated by *XSCALE*.
‡ *R*~work~ = Σ~*hkl*~\|\|*F*~obs~\| − \|*F*~calc~\|\|/Σ~*hkl*~\|*F*~obs~\|; *R*~free~ is calculated with the same expression over a test set of reflections excluded from the refinement.
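For readers who want to verify such numbers, the conventional R factor is simple to compute from amplitudes. The sketch below uses plain NumPy with placeholder values (not data from 3vqh); *R*~free~ differs from *R*~work~ only in which reflection set the sum runs over.

```python
# Sketch: the conventional crystallographic R factor.
# The amplitudes below are illustrative placeholders, not data from 3vqh.
import numpy as np

def r_factor(f_obs, f_calc):
    """R = sum(||Fobs| - |Fcalc||) / sum(|Fobs|) over the chosen reflection set."""
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    return np.abs(f_obs - f_calc).sum() / f_obs.sum()

f_obs = np.array([120.4, 85.3, 42.1, 230.9])    # placeholder |Fobs|
f_calc = np.array([115.2, 90.1, 40.0, 225.4])   # placeholder |Fcalc|
print(f'R = {r_factor(f_obs, f_calc):.3f}')     # R_work vs R_free: same formula,
                                                # different reflection sets
```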
###### Selected columns from the *XSCALE* scaling statistics of PDB entry [3vqh](http://scripts.iucr.org/cgi-bin/cr.cgi?rm=pdb&pdbId=3vqh)
Resolution limit (Å)   Completeness of data (%)   ⟨*I*/σ(*I*)⟩   *R*~mrgd-*F*~ (%)   Anomal. corr.† (%)   SigAno‡
--------------------- -------------------------- ----------- -------------------- -------------------------------------------------- ---------------------------------------
35 0.0 99 99.9 0 0
30 90.0 65.84 1.3 35 1.870
25 90.9 32.47 2.4 3 1.318
20 96.7 62.23 1.4 75 1.661
15 100.0 56.57 1.3 57 1.599
10 100.0 59.68 1.2 47 1.132
9 100.0 54.80 1.3 53 1.224
8 100.0 53.60 1.6 42 1.211
7 100.0 50.58 1.6 44 1.457
6 100.0 46.30 1.7 36 1.192
5 100.0 46.07 1.8 22 1.021
4 99.8 46.41 1.8 15 0.940
3.8 99.9 43.37 2.1 13 0.920
3.55 99.9 40.27 2.4 4 0.847
3.3 99.9 35.20 2.8 8 0.920
3.05 100.0 30.36 3.7 10 0.935
2.8 99.9 23.71 5.3 9 0.906
2.45 100.0 16.07 8.6 6 0.865
2.2 100.0 10.88 13.7 3 0.836
1.95 98.0 5.68 28.8 3 0.791
Total 99.3 19.08 9.0 6 0.867
† Anomal. corr. is the mean correlation factor between two random subsets of anomalous intensity differences.
‡ SigAno = \[⟨\|*F*~(+)~ − *F*~(−)~\|/σ(*F*~(+)~ − *F*~(−)~)⟩\], the mean anomalous difference in units of its estimated standard deviation.
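The SigAno column can be reproduced in the same spirit. The sketch below uses placeholder Friedel-pair amplitudes (not data from 3vqh); note that for pure noise the expected value is (2/π)^1/2^ ≈ 0.80, which is why the high-resolution shells above (≈0.79-0.92) carry little anomalous information while the low-resolution shells clearly exceed it.

```python
# Sketch: SigAno, the mean anomalous difference in units of its estimated sigma.
# Placeholder Friedel-pair amplitudes; not data from 3vqh.
import numpy as np

def sig_ano(f_plus, f_minus, sigma_diff):
    """mean(|F(+) - F(-)| / sigma(F(+) - F(-))); about 0.80 for pure noise."""
    return float(np.mean(np.abs(f_plus - f_minus) / sigma_diff))

f_plus = np.array([101.2, 55.7, 78.3])     # placeholder F(+) amplitudes
f_minus = np.array([98.9, 57.1, 75.6])     # placeholder F(-) amplitudes
sigma_diff = np.array([1.8, 1.5, 2.0])     # placeholder sigma of the difference
print(f'SigAno = {sig_ano(f_plus, f_minus, sigma_diff):.2f}')
```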
Losing Smith Bagley
(A giant tree at Musgrove, the family estate of Smith Bagley and the Arca Foundation — photo credit: Steve Clemons)
In Dubai a day and a half ago, I was sitting on a bus next to the well-known Clinton family friend and adviser Lanny Davis, who was chatting with me about the widest possible array of fascinating and simultaneously disturbing topics (including the political judgment of some others). The New Yorker’s brilliant social and political guru Hendrik Hertzberg was there, as was former New York Times correspondent and Full Court Press blogger Charles Kaiser. Sitting in front of Lanny and me was former New York Times national security correspondent and Fox News commentator Judith Miller, and just across the aisle, so to speak, was Marie Brenner of Vanity Fair.
They were there for the launch of a new annual meeting called the Dubai Forum, which was sponsored by “Brand Dubai” and focused this year on architecture and sustainability.
It was an odd-bedfellows bus ride — in Dubai, which makes sense on a number of levels.
But then Lanny Davis’ face went ashen — and he leaned over to me and said, “Smith Bagley has just died.” It was an odd, surreal moment in which to hear such tragic news about the passing of one of America’s great political players and philanthropists. I was stunned. And Lanny Davis, who is not the most loved attorney in liberal circles, was clearly upset — but we were stuck on a bus with a wildly eclectic assortment of type A personalities. Lanny spoke to me at length about his memories of Smith Bagley — and I shared my own encounters with this giant of a human being.
It’s hard to overstate the significance of Smith Bagley to liberal and progressive causes in America. The grandson and an heir of R.J. Reynolds, the tobacco tycoon, Bagley committed himself early on to advancing racial civil rights, supporting liberal Democratic Party candidates, promoting global human rights, working to end the US-Cuba embargo, and helping to create a climate of sensible justice and fairness inside the United States, one focused on helping those with little sustain themselves and on policies that sought to reverse the erosion of the American middle class.
When Jimmy Carter was elected President of the United States in 1976, Bagley offered his family estate to Carter to assemble his likely cabinet and closest advisers before formally assuming office in January 1977. The home is full of pictures of that Carter clan retreat — and an important Norman Rockwell painting of Carter hangs on the second floor loft of one of the estate’s great rooms.
I first met Smith Bagley at a private home some years ago when Hillary Clinton was running for the Senate. He was dressed in jeans and a pretty ratty sweater and was just completely unpretentious on the surface.
I didn’t know who he was — but he had views, strong ones, which he would occasionally whisper to me while we sat together, as I recall, on the hearth of a big fireplace. He seemed completely unaffected by the power players in the room; he seemed like a big-time farmer or lumberjack — very down to earth, but deeply irritated by the Bush administration’s course and by the “lack of humanity” in politicians on the right and the left.
Bagley headed the Arca Foundation, which has been a major funder of progressive causes around the country, supporting both sophisticated policy development and advocacy work within the DC policy community and enlightened grass-roots organizing and outreach activities. Last year, I had the great privilege of speaking at the annual board meeting of the Arca Foundation at the Musgrove Estate in Georgia, which was part of the massive land holdings of the R.J. Reynolds estate.
Smith and two of his daughters, Nicole and Nancy, were there — and you could feel palpably their collective commitment to smart progressive philanthropy inspired by the old world, giant oak surroundings of Musgrove. Some of the work of my organization has been supported by Arca, but my views about Smith Bagley and his family are independent of that support.
Lanny Davis immediately sent a very warm email to Elizabeth Bagley, a former US Ambassador to Portugal and now the State Department’s first Special Representative for Global Partnerships; I only wish I could have done the same.
Bagley had been weakened in recent years by a stroke — but the Smith Bagley I saw as recently as the Clinton Global Initiative gala dinner a few months ago still had a fiery furnace of political views and ambition.
Frankly, his money and his advocacy of fairness and civil rights helped push political and policy needles, and like the great, massive, history-laden trees at his old family estate of Musgrove, Smith Bagley will be impossible to replace in the pantheon of contemporary progressive political leaders and funders.
He will be greatly missed by progressive policy practitioners like me — and condolences to his family and to the board and staff of the Arca Foundation.

— Steve Clemons