Q: How to evaluate the following definite integral $\int_0^1\frac{\arctan(ax)}{x\sqrt{1-x^2}}dx$? $$\int_0^1\frac{\arctan(ax)}{x\sqrt{1-x^2}}dx$$ My working is as follows. Let $$I(a)=\int_0^1\frac{\arctan(ax)}{x\sqrt{1-x^2}}dx$$ and thus $$I'(a)=0-0+\int_0^1\frac{x}{(1+a^2x^2)(x\sqrt{1-x^2})}dx$$ (I did this using the Leibniz integral rule: I took the partial derivative of the integrand with respect to $a$, and since the two limits of the integral are constants, the first two terms are zero.) Now I'm stuck... I've been trying this one for the last hour. Can someone hint me towards the right answer? (which I looked up to be $\frac{\pi}{2}\sinh^{-1}a$) P.S.: I am aware of this site's norm of keeping MathJax out of the titles, but I'm getting a message that a question with this title already exists... Can someone please help? A: By Feynman's trick $$ \int_{0}^{1}\frac{\arctan(ax)}{x\sqrt{1-x^2}}\,dx = \int_{0}^{a}\int_{0}^{1}\frac{dx}{(1+b^2 x^2)\sqrt{1-x^2}}\,db = \int_{0}^{a}\frac{\pi}{2\sqrt{b^2+1}}\,db = \color{red}{\frac{\pi}{2}\text{arcsinh}(a).}$$ A: Now you need to solve $$I'(a)=\int_0^1\frac{x}{(1+a^2x^2)(x\sqrt{1-x^2})}dx=\int_0^1\frac{1}{(1+a^2x^2)\sqrt{1-x^2}}dx$$ Substitute $x=\sin\left(u\right)$, so $\mathrm{d}x=\cos\left(u\right)\,\mathrm{d}u$: $$I'(a)={\displaystyle\int}\dfrac{\cos\left(u\right)}{\sqrt{1-\sin^2\left(u\right)}\left(a^2\sin^2\left(u\right)+1\right)}\,\mathrm{d}u={\displaystyle\int}\dfrac{1}{a^2\sin^2\left(u\right)+1}\,\mathrm{d}u$$ Use $$\sin\left(u\right)=\dfrac{\tan\left(u\right)}{\sec\left(u\right)},\qquad\sec^2\left(u\right)=\tan^2\left(u\right)+1$$ to get $$I'(a)={\displaystyle\int}\sec^2\left(u\right)\cdot\dfrac{1}{\left(a^2+1\right)\tan^2\left(u\right)+1}\,\mathrm{d}u$$ Substitute $v=\tan\left(u\right)$, with $\mathrm{d}v=\sec^2\left(u\right)\,\mathrm{d}u$: $$I'(a)={\displaystyle\int}\dfrac{1}{\left(a^2+1\right)v^2+1}\,\mathrm{d}v$$ Substitute $w=\sqrt{a^2+1}\,v$, with $\mathrm{d}v=\dfrac{1}{\sqrt{a^2+1}}\,\mathrm{d}w$: $$I'(a)=\dfrac{1}{\sqrt{a^2+1}}{\displaystyle\int}\dfrac{1}{w^2+1}\,\mathrm{d}w=\dfrac{\arctan\left(w\right)}{\sqrt{a^2+1}}+C$$ Putting back all the substitutions you get the antiderivative $$\dfrac{\arctan\left(\frac{\sqrt{a^2+1}\,x}{\sqrt{1-x^2}}\right)}{\sqrt{a^2+1}}+C$$ Evaluating from $x=0$ to $x=1$ gives $I'(a)=\frac{\pi}{2\sqrt{a^2+1}}$, and integrating in $a$ (with $I(0)=0$) yields $I(a)=\frac{\pi}{2}\operatorname{arcsinh}(a)$.
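As a numeric sanity check (my addition, not part of either answer): after the substitution $x=\sin t$, the integral becomes $\int_0^{\pi/2}\frac{\arctan(a\sin t)}{\sin t}\,dt$ with a smooth integrand (it tends to $a$ as $t\to 0$), so a simple midpoint rule should reproduce $\frac{\pi}{2}\operatorname{arcsinh}(a)$:

```python
import math

def I(a, n=20000):
    # After x = sin(t):  I(a) = integral over [0, pi/2] of arctan(a*sin t)/sin t dt
    # (the integrand tends to a as t -> 0, so there is no real singularity)
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h  # midpoint rule never evaluates at t = 0 exactly
        total += math.atan(a * math.sin(t)) / math.sin(t)
    return total * h

for a in (0.5, 2.0):
    print(I(a), (math.pi / 2) * math.asinh(a))  # the two columns should agree
```

With n = 20000 the midpoint rule is accurate to well below 1e-6 here, which is enough to confirm the closed form numerically.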
2024-05-24T01:26:29.851802
https://example.com/article/9223
A radio control transmitter for a model controlling an object to be controlled such as a model plane, a model helicopter, a model car or a model boat comprises a stick lever operating a main controller and various levers and switches operating as an auxiliary controller. Each of the stick lever and the various levers is connected to a shaft of a variable resistor. Each of the various switches operates as the auxiliary controller by turning ON and OFF. Each of the stick lever and the various levers is operated to control a rotational range of the variable resistor, thereby generating multiple control signals that are transmitted from the radio control transmitter as a radio frequency wave. The object to be controlled has a receiver for receiving the control signal and servos for operating an operating section of the object to be controlled. The object is remotely controlled by controlling an operating range of each of the servos based on the control signal received by the receiver. For instance, when a model helicopter is remotely controlled as the object to be controlled, the model helicopter including a main rotor and a tail rotor is flown with various maneuvers by operating the stick lever of the radio control transmitter for the model to control pitch angles of the two rotors (Japanese Patent Publication 2000-225277 for reference). In other words, the control of the pitch angle of the main rotor is carried out by controlling a swash plate using the servo, wherein the swash plate is disposed concentric with a shaft of the main rotor and has a degree of freedom in three axes. FIG. 7 illustrates a control manner of the swash plate in the model helicopter (the main rotor is not shown). A control of forward and reverse shown in FIG. 7 (a) is referred to as a pitch control (also referred to as an elevator control), a control of left and right shown in FIG.
7 (b) is referred to as a roll control (also referred to as an aileron control), and a control of up and down shown in FIG. 7 (c) is referred to as a collective pitch control. The helicopter is controlled to a desired direction by combining the controls during a flight. Specifically, in order to fly the helicopter in a forward direction (a direction of an arrow A) shown in FIG. 7(a), a left stick lever 101L of a radio control transmitter 100 is pushed upward (forward) to control a swash plate 120 disposed concentric with a shaft of a main rotor 110 using the servo (not shown) in a manner that the swash plate 120 is tilted in a direction of an arrow a. In order to fly the helicopter in a left direction (a direction of an arrow B) shown in FIG. 7(b), the left stick lever 101L of the radio control transmitter 100 is pushed left to control the swash plate 120 disposed concentric with the shaft of the main rotor 110 using the servo (not shown) in a manner that the swash plate 120 is tilted in a direction of an arrow b. In order to fly the helicopter in an upward direction (a direction of an arrow C) shown in FIG. 7(c), a right stick lever 101R of the radio control transmitter 100 is pushed upward to control the swash plate 120 disposed concentric with the shaft of the main rotor 110 using the servo (not shown) in a manner that the swash plate 120 is tilted in a direction of an arrow c. As described above, while the model helicopter is remotely controlled by controlling the swash plate using the servo, controls for moving a fuselage in the forward, reverse, left, right, upward and downward directions are carried out in a combined manner. Therefore, the swash plate is subjected to the combination of the pitch control, the roll control and the collective pitch control. However, the swash plate disposed concentric with the shaft of the main rotor has a limited maximum control range (maximum slant) due to a mechanical limitation.
Therefore, when the pitch control and the roll control perpendicular to each other are carried out simultaneously so that the ranges of the pitch control and the roll control are added, the control range of the swash plate is saturated. When the control range of the swash plate is saturated, an excessive load is applied to the servo (the servo for controlling the roll or the pitch) which is its operating source, or to a linkage rod connecting the swash plate and the servo. Moreover, the control range is required to be large because an immediate response of the roll control and the pitch control is necessary in a model helicopter performing acrobatic flying (three-dimensional flying). Some transmitters employ a method wherein the control range of the swash plate is limited by inserting a ring-shaped plate referred to as a stopper along an outer edge of the stick lever of the radio control transmitter for the model to mechanically limit the operation of the stick lever. However, even when the saturation of the swash plate is avoided by the stopper, which is a mechanical means, a drawback described below still exists. The controls of the pitch and the roll are carried out by one stick lever or divided between left and right stick levers. When one stick lever is used, the stopper solves the problem. However, when the left and right stick levers are used, the stopper is not sufficient for normal operation.
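The saturation problem can be sketched numerically. In this toy snippet (all numbers hypothetical, purely illustrative of the mechanics described), pitch and roll tilt the swash plate about perpendicular axes, so simultaneous commands combine vectorially and can exceed the plate's maximum slant unless both are scaled back together:

```python
import math

MAX_SLANT = 10.0  # hypothetical maximum swash-plate slant, in degrees

def combined_tilt(pitch, roll):
    # pitch and roll tilt the plate about perpendicular axes, so the
    # resulting slant is the vector magnitude of the two commands
    return math.hypot(pitch, roll)

def limit_commands(pitch, roll):
    # scale both commands down together when the combined slant would exceed
    # the mechanical limit (instead of saturating and loading the servos)
    tilt = combined_tilt(pitch, roll)
    if tilt <= MAX_SLANT:
        return pitch, roll
    scale = MAX_SLANT / tilt
    return pitch * scale, roll * scale

print(limit_commands(3.0, 4.0))   # combined slant 5, within limits: unchanged
print(limit_commands(9.0, 12.0))  # combined slant 15 > 10: scaled down together
```

This is the software analogue of the mechanical stopper described above: it caps the combined command rather than each axis separately.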
2023-10-12T01:26:29.851802
https://example.com/article/4382
Cu4SnS4-Rich Nanomaterials for Thin-Film Lithium Batteries with Enhanced Conversion Reaction. Through a simple gelation-solvothermal method with graphene oxide as the additive, a Cu4SnS4-rich composite of nanoparticles and nanotubes is synthesized and applied for thin and flexible Li-metal batteries. Unlike the Cu2SnS3-rich electrode, the Cu4SnS4-rich electrode cycles stably with an enhanced conversion capacity of ∼416 mAh g-1 (∼52% of total capacity) after 200 cycles. The lithiation/delithiation mechanisms of Cu-Sn-S electrodes and the voltage ranges of conversion and alloying reactions are informed by in situ X-ray diffraction tests. The conversion process of three Cu-Sn-S compounds is compared by density functional theory (DFT) calculations based on three algorithms, elucidating the enhanced conversion stability and superior diffusion kinetics of Cu4SnS4 electrodes. The reaction pathway of Cu-Sn-S electrodes and the root cause for the unstable capacity are revealed by in situ/ex situ characterizations, DFT calculations, and various electrochemical tests. This work provides insight into developing energy materials and power devices based on multiple lithiation mechanisms.
2023-09-10T01:26:29.851802
https://example.com/article/5316
I just loved watching the ducklings jump..did one land on mom? LOL...so cute how mom would swim over to each one when they landed...and lucky ducklings...getting to land in water... Of course, the loon baby is just precious...and I've been watching the albatross on and off since it hatched... thanks for the great vids!!! they reminded me of simpler times when there weren't a cazillion falcon and eagle cams to keep up with!!!...and osprey and kestrels and...and...and...reminds me that I am very behind with all the owls too!!! A post from Larry several years ago-he is sooooooo good!!~The biggest miracle of all is how the loons find their way back to the very same lake! All without AAA TripTics, maps or GPS. We think we are so smart but they have had this figured out for thousands of years without any help from us.Once again I choose to view it as one of countless miracles that are placed all around us that we seldom take time to stop and look and learn and consider.~ _________________ALL CREATURES GREAT AND SMALL, THE LORD GOD MADE THEM ALL
2023-10-08T01:26:29.851802
https://example.com/article/5886
Q: Best way to compute a function on each element with incremental shadowing of each element in the collection What is the best way to map a function over each element in a collection while incrementally "shadowing" (omitting) each element in turn, as in this simple example:

val v = IndexedSeq(1,2,3,4)
v.shadowMap{ e => e + 1 }
shadow 1: (3,4,5)
shadow 2: (2,4,5)
shadow 3: (2,3,5)
shadow 4: (2,3,4)

My first thought was to use patch or slice for this, but perhaps there is a better way to do it in pure functional style? Thanks. A:

def shadowMap[A,B](xs: Seq[A])(f: A => B) = {
  val ys = xs map f
  for (i <- ys.indices; (as, bs) = ys splitAt i) yield as ++ bs.tail
}
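For comparison, here is a sketch of the same idea in Python (my illustration, not part of the answer): map first, then drop each mapped element in turn.

```python
def shadow_map(xs, f):
    ys = [f(x) for x in xs]
    # one "shadow" per index: the mapped list with that element omitted
    return [ys[:i] + ys[i + 1:] for i in range(len(ys))]

print(shadow_map([1, 2, 3, 4], lambda e: e + 1))
# → [[3, 4, 5], [2, 4, 5], [2, 3, 5], [2, 3, 4]]
```

Like the Scala answer, it applies `f` only once per element and then builds each shadow by slicing, rather than re-mapping for every omitted index.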
2023-11-04T01:26:29.851802
https://example.com/article/3662
[Changes in the proteins of the nuclear RNP particles of the rat liver stimulated by hydrocortisone]. A difference in the ratio of the two main components of information of the nuclear RNP-particles isolated from the liver of normal and cortisone-stimulated rats was found. Under the action of cortisone, the amount of high molecular component increased. An increase in the content of the low molecular proteins typical for poly A-containing RNP-particles was also observed after cortisone administration.
2023-10-28T01:26:29.851802
https://example.com/article/6035
'Mohanlal never behaved like an actor' 'My wife is still in disbelief that I directed Mohanlal,' New Jersey-based filmmaker Arun Vaidyanathan tells George Joseph/Rediff.com Image: Mukesh and Mohanlal on the Peruchazhi poster Peruchazhi (Bandicoot), a Malayalam comedy, will release during the Onam season, both in India and the US. Its writer and director Arun Vaidyanathan is a native of Tamil Nadu, but lives in New Jersey and knows very little Malayalam. Yet, he says it was no problem directing the Mohanlal starrer. The film's plot revolves around the election campaign of John Kory, a Republican nominee for the Governor of California. Kory struggles to raise his ratings and his chief campaign officer, Sunny Kurishinkal, runs out of ideas to ensure victory. Sunny reaches out to his friend Francis Kunjappan (Mukesh) back home in Kerala to hire a political consultant to devise a campaign strategy. Kunjappan recommends his archrival Jagannadhan (Mohanlal). Kunjappan thinks Jagannadhan's unorthodox ideas will fail miserably in the US. But to his surprise, Jagannadhan succeeds with every move. Adding to the laughs are Jagannadhan's assistants, Jabbar Pottakuzhi and Vayalar Varki. The film is produced by the Friday Film House founded by Vijay Babu and Sandra Thomas. Vaidyanathan wrote and directed Br(a)illiant, a short comic film as part of his student project at the New York Film Academy way back in 2004. In 2009, he wrote and directed the Tamil film, Achamundu, Achamundu. New Jersey-based filmmaker Ajayan Venugopalan, who knows Tamil and Malayalam, translated the script into Malayalam. "The crew members knew Malayalam and everybody knew English. So language was no problem," Vaidyanathan says. "When I thought of a star to play the role of Jagannadhan, Mohanlal immediately came to mind. My wife Rajitha, who is from Kerala, is an avid fan of Mohanlal. Through a friend, we contacted Mohanlal in Dubai. He liked the story. 
The story is based on politics and I follow politics both here and in India very keenly." Vaidyanathan expects and wants the film to be a commercial hit, "but it should also be noted for its artistic merits. The audience will enjoy it as it is an untold story." He is full of praise for Mohanlal. "He was very friendly and he became family. He never behaved like an actor. He is a great human being and a great actor." "He understood the character perfectly. It was like going to a buffet where we get many things at one place." Vaidyanathan says his Malayalam is better now and he wants to make Malayalam films.
2023-09-21T01:26:29.851802
https://example.com/article/5419
Drone surveillance videos may have captured two of four potential instances of war crimes by Turkish-backed forces that attacked civilian positions along the Turkish-Syrian border in October. The drone surveillance videos, disclosed to the Wall Street Journal by U.S. intelligence officials, may add further controversy to a Turkish military offensive in October as well as President Donald Trump’s decision to withdraw U.S. troops from Syria ahead of the Turkish offensive. The videos are reportedly part of a State Department briefing on potential Turkish war crimes. U.S. drones were reportedly deployed on one occasion to monitor a highway in northern Syria where a Kurdish political figure, Hevrin Khalaf, was killed by a Syrian gunman believed to be backed by Turkey. During that surveillance effort, drones reportedly captured footage of an apparent shooting victim lying on the highway before being placed in a truck by Turkish-backed forces. Some officials believe the man had been shot, while others saw signs of movement by the supposed victim that complicated their view of what exactly happened. Another U.S. official said they had uncovered a “clear cut” instance where prisoners who had their hands tied were shot by Turkish-backed forces. Trump administration officials are expected to raise concerns about the potential war crimes during President Recep Tayyip Erdogan’s visit to the White House on Wednesday. Throughout October, Turkish-backed militias were accused of attacking civilians in areas controlled by Kurdish forces. Turkey also faced allegations of using white phosphorus incendiary weapons and chemical weapons in its efforts to drive off Kurdish forces in northern Syria. Those war crimes allegations continued in spite of a U.S.-backed ceasefire agreement between the Turkish and Kurdish forces and later assurances by Turkey that it would conclude its military operations for a permanent ceasefire.
According to the Wall Street Journal, Erdogan has promised the Turkish army will investigate the war crimes allegations involving Turkey and its affiliated forces in Syria. “Those who commit such atrocities are no different than the members of Islamic State,” Erdogan told reporters in Istanbul on Oct. 18. Some U.S. officials reportedly doubt the sincerity of Erdogan’s assurances. When the Wall Street Journal asked for comment, one Turkish official said he wasn’t aware that any formal war crimes probe had been launched. “We expect them to investigate it, we expect them to hold these people to account and we will continue to push that with them,” chief Pentagon spokesman Jonathan Hoffman said of the Turkish war crimes probe. Turkish officials have acknowledged the U.S. concerns about war crimes but seemed unaware of the new drone footage that may substantiate those war crimes allegations. Some U.S. officials see the drone footage as a key piece of evidence of Turkish war crimes, while other officials still reportedly believe the footage is inconclusive. Those who passed the drone footage up to the Pentagon, as is required of evidence involving war crimes allegations, reportedly encountered skepticism from higher-up officials. William Roebuck, the State Department’s top diplomat in Syria, reportedly criticized the U.S. skepticism of Turkish war crimes in an internal memo provided to the Wall Street Journal. “One day when the diplomatic history is written, people will wonder what happened here and why officials didn’t do more to stop it,” Roebuck said. “The U.S. government should be much more forceful in calling Turkey out for its behavior.”
2024-06-05T01:26:29.851802
https://example.com/article/8966
Differential bioaccumulation of potentially toxic elements in benthic and pelagic food chains in Lake Baikal. Lake Baikal is located in eastern Siberia in the center of a vast mountain region. Even though the lake is regarded as a unique and pristine ecosystem, there are existing sources of anthropogenic pollution to the lake. In this study, the concentrations of the potentially toxic trace elements As, Cd, Pb, Hg, and Se were analyzed in water, plankton, invertebrates, and fish from riverine- and pelagic-influenced sites in Lake Baikal. Concentrations of Cd, Hg, Pb and Se in Lake Baikal water and biota were low, while concentrations of As were similar to or slightly higher than those in other freshwater ecosystems. The bioaccumulation potential of the trace elements in both the pelagic and the benthic ecosystems differed between the Selenga Shallows (riverine influence) and Listvenichnyĭ Bay (pelagic influence). Despite water concentrations of Pb one order of magnitude higher in the Selenga Shallows, Pb concentrations were significantly higher in both pelagic and benthic fish from Listvenichnyĭ Bay. A similar trend was observed for Cd, Hg, and Se. The enhanced bioavailability of contaminants identified in the pelagic-influenced Listvenichnyĭ Bay may be attributed to a lower abundance of natural ligands for contaminant complexation. Hg was found to biomagnify in both benthic and pelagic Baikal food chains, while As, Cd, and Pb were biodiluted. At both locations, Hg concentrations were around seven times higher in benthic than in pelagic fish, while pelagic fish had As concentrations two times higher than benthic fish. The calculated Se/Hg molar ratios revealed that, even though Lake Baikal is located in a Se-deficient region, Se is still present in excess over Hg, and therefore the probability of Hg-induced toxicity in the endemic fish species of Lake Baikal is assumed to be low.
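The Se/Hg molar-ratio argument is simple unit arithmetic: divide each mass concentration by the element's molar mass (Se ≈ 78.97 g/mol, Hg ≈ 200.59 g/mol) and compare. A minimal sketch with hypothetical concentrations (the study's actual values are not reproduced here):

```python
SE_MOLAR_MASS = 78.97    # g/mol
HG_MOLAR_MASS = 200.59   # g/mol

def se_hg_molar_ratio(se_conc, hg_conc):
    # concentrations must share the same mass units (e.g. ug/g wet weight);
    # the units cancel, leaving a dimensionless molar ratio
    return (se_conc / SE_MOLAR_MASS) / (hg_conc / HG_MOLAR_MASS)

# hypothetical fish-muscle values: a ratio > 1 means Se is in molar excess over Hg
print(se_hg_molar_ratio(0.5, 0.1))
```

Because Hg is roughly 2.5 times heavier per mole than Se, Se can be in molar excess even when its mass concentration advantage looks modest.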
2024-05-19T01:26:29.851802
https://example.com/article/8163
Louisville M11 Dean Blevins serves as Professor of Christian Education and Director of the Master of Arts in Christian Education Program at Nazarene Theological Seminary. Mark Maddix serves as Professor of Christian Education and Dean for the School of Theology & Christian Ministries at Northwest Nazarene University. The authors of the “Discovering Christian Discipleship” textbook published by Nazarene Publishing House will lead this workshop, providing tools for pastors engaging a Wesleyan view of Christian discipleship and formation for local congregations.
2023-12-26T01:26:29.851802
https://example.com/article/5303
# frozen_string_literal: true

namespace :db do
  namespace :mongoid do
    task :load_models do
    end

    desc "Create indexes specified in Mongoid models"
    task :create_indexes => [:environment, :load_models] do
      ::Mongoid::Tasks::Database.create_indexes
    end

    desc "Remove indexes that exist in the database but are not specified in Mongoid models"
    task :remove_undefined_indexes => [:environment, :load_models] do
      ::Mongoid::Tasks::Database.remove_undefined_indexes
    end

    desc "Remove indexes specified in Mongoid models"
    task :remove_indexes => [:environment, :load_models] do
      ::Mongoid::Tasks::Database.remove_indexes
    end

    desc "Shard collections with shard keys specified in Mongoid models"
    task :shard_collections => [:environment, :load_models] do
      ::Mongoid::Tasks::Database.shard_collections
    end

    desc "Drop the database of the default Mongoid client"
    task :drop => :environment do
      ::Mongoid::Clients.default.database.drop
    end

    desc "Drop all non-system collections"
    task :purge => :environment do
      ::Mongoid.purge!
    end
  end
end
2024-05-31T01:26:29.851802
https://example.com/article/8376
Q: js hide function When the page is loaded (the first dropdown (div StatusID) is dynamically populated from the MySQL db) or the user selects Unemployed - EI from the first dropdown box, the div status_sub_6 shows the second select statement. My .hide/.show function activates fine on change, but the function hides the second dropdown list on the loading of the page even if the dynamically populated value of the first select (StatusID) meets the compare criteria. I'm sure that I need an onload function which overrides the following .js code, but would really appreciate a bit of help in composing the additional code. JavaScript:

$(document).ready(function(){
    $('#status_sub_6').hide();
    $('#StatusID').change(function () {
        if ($('#StatusID option:selected').text() == "Unemployed - EI"){
            $('#status_sub_6').show();
        } else {
            $('#status_sub_6').hide();
        }
    });
});

A: You can do this by triggering a change event right after load.

$(document).ready(function(){
    ... // your current code
    $('#StatusID').trigger('change'); // trigger a change
});
2023-11-15T01:26:29.851802
https://example.com/article/5679
Hotel Management Software development using Node.js and AngularJS technology. We had an opportunity to design and develop software for the hotel chain of a spiritual organization. They already had software to manage their properties, but that software was not web based and was developed with very old technology that is now impossible to scale.

Technologies: Node.js, AngularJS, MySQL

Challenge: We did not face many challenges, as we have already designed and developed enterprise hotel-industry software for other customers and have domain expertise in the hotel industry. The challenge here was to understand a few pieces of the requirements, because they are different for a spiritual organization.

Approach: We started with the difficulties they were facing with the existing system. We visited the hotel a couple of times, studied their local software, and made sure we did not miss any pieces of the existing software. We also chose up-to-date technology that is easy to scale. Freeze the scope of work. Working prototype design. Development. Integration and testing. Deployment. Maintenance and support.

Result: The software we developed uses Angular and Node.js. The web-based application achieves desktop-application-level performance. The system is scalable and easy to integrate with other third-party systems as well.
2023-09-23T01:26:29.851802
https://example.com/article/2610
Q: Sources for Luther and Calvin quotes supporting geocentrism In response to Copernicus' heliocentric model of the solar system, Martin Luther and John Calvin are reported to have responded showing their support for geocentrism. I have found the following quotes in several places on the internet; however, I have not been able to find the original source for either quote. Luther: The fool wants to turn the whole art of astronomy upside-down. However, as Holy Scripture tells us, so did Joshua bid the sun to stand still and not the earth. Calvin: [They] pervert the course of nature [by saying] the sun does not move and that it is the earth that revolves and that it turns. The closest thing I can find to the primary source is statements to the effect that Calvin's quote comes from a sermon, and Luther's from something called "Table Talk". I found an online document titled The Table Talk of Martin Luther which includes a section labeled On Astronomy and Astrology, but I was unable to locate this quote within it. Are these genuine quotes? Can someone help me find their respective sources? A: Although this quotation is not included in the online document titled The Table Talk of Martin Luther, it would seem to be genuine. First of all, evidence of Luther's beliefs can be found in paragraph DCCXCVII of that document: Astronomy is the most ancient of all sciences, and has been the introducer of vast knowledge; it was familiarly known to the Hebrews, for they diligently noted the course of the heavens, as God said to Abraham: "Behold the heavens; canst thou number the stars?" etc. Heaven's motions are threefold; the first is, that the whole firmament moves swiftly around, every moment thousands of leagues, which, doubtless, is done by some angel. 'Tis wonderful so great a vault should go about in so short a time.
If the sun and stars were composed of iron, steel, silver, or gold, they must needs suddenly melt in so swift a course, for one star is greater than the whole earth, and yet they are innumerable. The second motion is, of the planets, which have their particular and proper motions. The third is, a quaking or a trembling motion, lately discovered, but uncertain. I like astronomy and mathematics, which rely upon demonstrations and sure proofs. As to astrology, 'tis nothing. [My emphasis] Christopher B. Kaiser (Creational Theology and the History of Physical Science), page 185, tells us Luther's comments were based on dinner table conversations. He says Anthony Lauterbach, who dined with Luther, recorded the following comments from Luther: So it goes now. Whoever wants to be clever must agree with nothing that others esteem. He must do something on his own. This is what that fellow does who wishes to turn the whole of astronomy upside down. Even in these things that are thrown into disorder I believe the Holy Scriptures, for Joshua commanded the sun to stand still and not the earth [Josh. 10:12] Kaiser says: "Clearly the issue for Luther was not a technical question of the merits of the heliocentric theory, but the seeming ambition of the astronomer and the possible disruptive effect his teachings might have on a Christian society." Calvin's comments are attributed to his 'Sermon on 1 Corinthians 10:19-24'.
2023-10-03T01:26:29.851802
https://example.com/article/1998
Q: Using regex to parse out values from dictionaries and count (Python) I have a dataframe with a column 'url_product' that contains a list of dictionaries, as below (showing the first 4 rows as an example). Each dictionary contains a url and the products associated with that url.

df.url_product[0]
[{'url': 'https://www.abcstore.com/product/11-abc-gift-card/', 'product': 'giftcard, abcstore'},
 {'url': 'https://www.abcstore.com/product/10-skin-lotion/', 'product': 'lotion'},
 {'url': 'https://www.abcstore.com/product/10414-moisturising-cream', 'product': 'cream'},
 {'url': 'https://www.abcstore.com/blog/best-skincare-lotions/', 'product': 'lotion'},
 {'url': 'https://www.abcstore.com/article/140-best-anti-aging-serum', 'product': 'serum'}]

df.url_product[1]
[{'url': 'https://www.abcstore.com/product/7-night-cream', 'product': 'nightcream'},
 {'url': 'http://www.abcstore.com/product/149-smoothing-serum/', 'product': 'serum'},
 {'url': 'https://www.abcstore.com/blog/rapid-reveal-face-peel', 'product': 'facepeel'}]

df.url_product[2]
[{'url': 'https://www.abcstore.com/product/25-night-infusion-cream', 'product': 'infusioncream'},
 {'url': 'https://www.abcstore.com/product/144-bio-cellulose-mask', 'product': 'cellulosemask, mask'},
 {'url': 'https://www.abcstore.com/', 'product': 'bestseller, homepage'},
 {'url': 'https://www.abcstore.com/blog/essential-skincare-products/', 'product': 'essential, blog'}]

df.url_product[3]
[{'url': 'https://www.abcstore.com/blog/top-skincare-products-2020', 'product': 'skincare, 2020'},
 {'url': 'http://www.abcstore.com/article/smoothing-serum/', 'product': 'serum'}]

For each of these rows, I am looking to do the following:

Filter for only the dictionaries where the URL contains '/product/' and parse out the number following 'product/' (I will call this product_id for easy reference). Expected product_id of the dictionary below = 11:

{'url': 'https://www.abcstore.com/product/11-abc-gift-card/', 'product': 'giftcard, abcstore'}

For each of the dictionaries where the URL contains '/product/', also count the number of 'products'. For the example below, that count would be 2 (giftcard, abcstore):

{'url': 'https://www.abcstore.com/product/11-abc-gift-card/', 'product': 'giftcard, abcstore'}

For each row, return the product_id that has the highest count and create a new column ('top_product_id') in the dataframe to show this. If no single product_id has the highest count, leave it blank. Expected outcome for the first four rows after the steps above:

df.top_product_id
[0] '11'
[1] (blank)
[2] '144'
[3] (blank)

A few points to explain the expected outcome:

Row[0] - expect 11, as product_id 11 has a count of 2 (giftcard, abcstore) while product_ids 10 and 10414 only have 1 each. Both the blog and article urls will be skipped as they do not contain '/product/' in the url.
Row[1] - expect the outcome to be blank: the two product URLs are attached to 1 product each, and since there is no single url with the highest count, the row would be blank.
Row[2] - expect 144, as product_id 144 has the highest count of 2 (cellulosemask, mask).
Row[3] - expect the outcome to be blank, as there are no product URLs.

How would I create the new column ('top_product_id') in the dataframe with the expected outcome?

A: Here is one possible approach:

def findID(data):
    df1 = pd.DataFrame(data)
    df1 = df1.assign(
        count=df1['product'].str.split(', ').str.len(),
        product_id=df1['url'].str.extract(r'.*/product/(\d+)', expand=False)
    ).dropna().drop_duplicates(subset=['count'], keep=False)
    if df1.empty:
        return '(Blank)'
    return df1.loc[df1['count'].idxmax(), 'product_id']

df['top_product_id'] = df['url_product'].apply(findID)

# print(df)
                                         url_product top_product_id
0  [{'url': 'https://www.abcstore.com/product/11-...             11
1  [{'url': 'https://www.abcstore.com/product/7-n...        (Blank)
2  [{'url': 'https://www.abcstore.com/product/25-...            144
3  [{'url': 'https://www.abcstore.com/blog/top-sk...        (Blank)
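For readers without the original df, here is a self-contained, pandas-free sketch of the same logic (my rewrite; it follows the stated spec directly, so any tie for the top count yields a blank), run against abridged sample rows from the question:

```python
import re

def top_product_id(entries):
    # keep only /product/<id> URLs; count the comma-separated products per id
    counts = {}
    for d in entries:
        m = re.search(r'/product/(\d+)', d['url'])
        if m:
            counts[m.group(1)] = len(d['product'].split(', '))
    if not counts:
        return ''  # no product URLs at all
    best = max(counts.values())
    tied = [pid for pid, c in counts.items() if c == best]
    return tied[0] if len(tied) == 1 else ''  # blank unless a unique winner

rows = [
    [{'url': 'https://www.abcstore.com/product/11-abc-gift-card/', 'product': 'giftcard, abcstore'},
     {'url': 'https://www.abcstore.com/product/10-skin-lotion/', 'product': 'lotion'},
     {'url': 'https://www.abcstore.com/product/10414-moisturising-cream', 'product': 'cream'},
     {'url': 'https://www.abcstore.com/blog/best-skincare-lotions/', 'product': 'lotion'}],
    [{'url': 'https://www.abcstore.com/product/7-night-cream', 'product': 'nightcream'},
     {'url': 'http://www.abcstore.com/product/149-smoothing-serum/', 'product': 'serum'}],
    [{'url': 'https://www.abcstore.com/product/25-night-infusion-cream', 'product': 'infusioncream'},
     {'url': 'https://www.abcstore.com/product/144-bio-cellulose-mask', 'product': 'cellulosemask, mask'}],
    [{'url': 'https://www.abcstore.com/blog/top-skincare-products-2020', 'product': 'skincare, 2020'}],
]
print([top_product_id(r) for r in rows])
# → ['11', '', '144', '']
```

One subtle difference from the pandas answer: `drop_duplicates(subset=['count'], keep=False)` removes ties at any count level, whereas this version blanks the row only when the top count itself is tied, which matches the spec as written.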
2023-09-06T01:26:29.851802
https://example.com/article/2871
Q: DerivativesApi.GetModelviewProperties for subset of properties The model viewer has the ability to get properties by passing a filter: viewer.model.getBulkProperties(dbIds, ['externalId', 'Category'], function), where we can limit the results to just the two properties 'externalId' and 'Category'. It would be a huge benefit for us to have this same filtering capability in the Model Derivative API: https://developer.autodesk.com/en/docs/model-derivative/v2/reference/http/urn-metadata-guid-properties-GET/ We have Revit files with 40,000+ parts, and it can take over 15 minutes to query for properties, but we are getting far more data than we need. A: It is a reasonable enhancement; I logged it as internal ticket DERI-4610. If you have used the Extractor to download the whole SVF dataset locally, you could try extracting the properties from properties.db (the other post tells more). This is a lite SQL database which is actually used by the Derivative API in the Forge cloud. I'd think there are some smart ways to filter the specific properties from the db file.
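To illustrate the answer's suggestion of querying properties.db directly with SQL, here is a toy sketch. The table and column names below are invented purely for the illustration (the real SVF properties.db schema differs); the point is only that a SQL query can select just the properties you need, analogous to the viewer's getBulkProperties filter, instead of downloading the full 40,000-part dump:

```python
import sqlite3

# toy stand-in for properties.db -- schema invented purely for illustration
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE props (db_id INTEGER, name TEXT, value TEXT)')
conn.executemany('INSERT INTO props VALUES (?, ?, ?)', [
    (1, 'externalId', 'abc-123'),
    (1, 'Category', 'Walls'),
    (1, 'Volume', '4.2'),          # a property we do NOT want
    (2, 'externalId', 'def-456'),
])

# filter in SQL, analogous to getBulkProperties(dbIds, ['externalId', 'Category'])
wanted = ('externalId', 'Category')
rows = conn.execute(
    'SELECT db_id, name, value FROM props WHERE name IN (?, ?) ORDER BY db_id, name',
    wanted,
).fetchall()
print(rows)
```

Because SQLite does the filtering, only the requested name/value pairs are materialized, which is the performance win the question is after.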
2024-02-01T01:26:29.851802
https://example.com/article/4522
Cleantech Open Winner Revealed! EcoFactor Takes the Grand Prize

For the entrepreneurs competing to win the $250,000 grand prize package in the Cleantech Open business plan competition — and join the ranks of past winners like Adura Technologies — the wait is over. Tonight in San Francisco, the organization announced finalists from each region (listed below the break). But the big kahuna goes to smart thermostat software developer EcoFactor. The 3-year-old startup, which beat out runners-up MicroMidas (working on bioplastics) and Alphabet Energy (working on waste-heat recovery), has developed a service based on smart algorithms (read all about it here) that can continuously manage a home’s connected thermostat throughout the day, tweaking the settings ever so slightly to shave off energy consumption but maintain a comfortable temperature.

Of course, the race to win this prize is over, but the rest of the climb toward a sustainable, profitable business lies ahead. When we spoke with EcoFactor earlier this month, the angel-funded company was in negotiations for its Series A round, and the company’s Senior VP of Products, Scott Hublou, told us tonight that those talks are ongoing with several venture firms. He’s hopeful the grand prize will help smooth the way for that financing. “At the end of the day,” he said, the benefit of this type of competition is to “help you get to your next funding event.” As Marc Gottschalk, co-chair of the competition, said to venture capitalists and investors in the audience tonight: “These teams could always use more love.”

Finalists:

Pacific Northwest

Green Lite Motors: Three-wheeled vehicles for commuters in large cities. “Easy to maneuver and drive, fun to drive as a race car.”

Hydrovolts: A hydrokinetic turbine that floats in man-made water channels and can power 1-10 homes along a canal. Hydrovolts claims the system pays for itself in less than five years. Additional applications might include mines and wastewater treatment plants.

LivinGreen Materials: High-efficiency photo-electrode for a next-gen solar cell. Fifty to 100 percent more efficient than traditional photo-electrodes.

Rocky Mountain

New Sky Energy: Carbon-negative manufacturing company. Scrubs CO2 out of the air or flue gas and incorporates it into consumer products — they contain a lot more CO2 than they produce in their manufacturing. Just landed a big customer. Growth has been extraordinary in the last 6 months.
2023-09-22T01:26:29.851802
https://example.com/article/8006
@extends('layouts.default') @section('title', '周刊详情 - ') @section('content') <div class="container"> <article class="col-md-8 col-md-offset-2 bg-white"> <div class="page-header text-center"> <h1>{{ $issue->name }} @if ($issue->is_published == 'no') <small>(预览)</small> @endif </h1> </div> @if (count($posts['news_posts']) > 0) @include('issues._issue_post_cell', ['section_title' => '最新资讯', 'posts' => $posts['news_posts'], 'category_id' => 1]) @endif @if (count($posts['tutorials_posts']) > 0) @include('issues._issue_post_cell', ['section_title' => '开发技巧', 'posts' => $posts['tutorials_posts'], 'category_id' => 2]) @endif @if (count($posts['packages_posts']) > 0) @include('issues._issue_post_cell', ['section_title' => '扩展推荐', 'posts' => $posts['packages_posts'], 'category_id' => 3]) @endif @if (count($posts['meetup']) > 0) @include('issues._issue_post_cell', ['section_title' => '线下聚会', 'posts' => $posts['meetup'], 'category_id' => 6]) @endif @if (count($posts['resources_posts']) > 0) @include('issues._issue_post_cell', ['section_title' => '资源推荐', 'posts' => $posts['resources_posts'], 'category_id' => 4, 'extra_class' => 'add-margin-bottom']) @endif </article> </div> @endsection
2024-05-14T01:26:29.851802
https://example.com/article/5171
Sweetest Operator

Hey you don't realise your strength
Sweet enough as you are, don't change
To tell the truth, when I'm with you
Aint nothing ever going to get me down now
The way with you, the things you do
Aint never never ever gonna let you down now

I got the feeling that you could be the sweetest operator
And I got that feeling that's what you are
I've got a feeling that you could be the sweetest operator
And I got that feeling that's what you are

Hey I don't recognise the score
So I'll stick to the things I know
But I am still looking for a way
Think about every day now
And you can prove the very thing
The way you operate is such a relief

I got the feeling that you could be the sweetest operator
And I got that feeling that's what you are
I got a feeling that you could be the sweetest operator
And I've got that feeling that's what you are

I have to laugh the way you make perfect sense
From every word I ever spoken, ???
I got a reason

I got the feeling that you could be the sweetest operator
And I got that feeling that's what you are
I got the feeling that you could be the sweetest operator
And I got that feeling that's what you are
2024-07-06T01:26:29.851802
https://example.com/article/8409
/* Copyright (c) 2010, Yahoo! Inc. All rights reserved. Code licensed under the BSD License: http://developer.yahoo.com/yui/license.html version: 3.3.0 build: 3167 */ YUI.add('cache-base', function(Y) { /** * The Cache utility provides a common configurable interface for components to * cache and retrieve data from a local JavaScript struct. * * @module cache */ var LANG = Y.Lang, isDate = Y.Lang.isDate, /** * Base class for the YUI Cache utility. * @class Cache * @extends Base * @constructor */ Cache = function() { Cache.superclass.constructor.apply(this, arguments); }; ///////////////////////////////////////////////////////////////////////////// // // Cache static properties // ///////////////////////////////////////////////////////////////////////////// Y.mix(Cache, { /** * Class name. * * @property NAME * @type String * @static * @final * @value "cache" */ NAME: "cache", ATTRS: { ///////////////////////////////////////////////////////////////////////////// // // Cache Attributes // ///////////////////////////////////////////////////////////////////////////// /** * @attribute max * @description Maximum number of entries the Cache can hold. * Set to 0 to turn off caching. * @type Number * @default 0 */ max: { value: 0, setter: "_setMax" }, /** * @attribute size * @description Number of entries currently cached. * @type Number */ size: { readOnly: true, getter: "_getSize" }, /** * @attribute uniqueKeys * @description Validate uniqueness of stored keys. Default is false and * is more performant. * @type Boolean */ uniqueKeys: { value: false }, /** * @attribute expires * @description Absolute Date when data expires or * relative number of milliseconds. Zero disables expiration. * @type Date | Number * @default 0 */ expires: { value: 0, validator: function(v) { return Y.Lang.isDate(v) || (Y.Lang.isNumber(v) && v >= 0); } }, /** * @attribute entries * @description Cached entries. 
* @type Array */ entries: { readOnly: true, getter: "_getEntries" } } }); Y.extend(Cache, Y.Base, { ///////////////////////////////////////////////////////////////////////////// // // Cache private properties // ///////////////////////////////////////////////////////////////////////////// /** * Array of request/response objects indexed chronologically. * * @property _entries * @type Object[] * @private */ _entries: null, ///////////////////////////////////////////////////////////////////////////// // // Cache private methods // ///////////////////////////////////////////////////////////////////////////// /** * @method initializer * @description Internal init() handler. * @param config {Object} Config object. * @private */ initializer: function(config) { /** * @event add * @description Fired when an entry is added. * @param e {Event.Facade} Event Facade with the following properties: * <dl> * <dt>entry (Object)</dt> <dd>The cached entry.</dd> * </dl> * @preventable _defAddFn */ this.publish("add", {defaultFn: this._defAddFn}); /** * @event flush * @description Fired when the cache is flushed. * @param e {Event.Facade} Event Facade object. * @preventable _defFlushFn */ this.publish("flush", {defaultFn: this._defFlushFn}); /** * @event request * @description Fired when an entry is requested from the cache. * @param e {Event.Facade} Event Facade with the following properties: * <dl> * <dt>request (Object)</dt> <dd>The request object.</dd> * </dl> */ /** * @event retrieve * @description Fired when an entry is retrieved from the cache. * @param e {Event.Facade} Event Facade with the following properties: * <dl> * <dt>entry (Object)</dt> <dd>The retrieved entry.</dd> * </dl> */ // Initialize internal values this._entries = []; }, /** * @method destructor * @description Internal destroy() handler. 
* @private */ destructor: function() { this._entries = []; }, ///////////////////////////////////////////////////////////////////////////// // // Cache protected methods // ///////////////////////////////////////////////////////////////////////////// /** * Sets max. * * @method _setMax * @protected */ _setMax: function(value) { // If the cache is full, make room by removing stalest element (index=0) var entries = this._entries; if(value > 0) { if(entries) { while(entries.length > value) { entries.shift(); } } } else { value = 0; this._entries = []; } return value; }, /** * Gets size. * * @method _getSize * @protected */ _getSize: function() { return this._entries.length; }, /** * Gets all entries. * * @method _getEntries * @protected */ _getEntries: function() { return this._entries; }, /** * Adds entry to cache. * * @method _defAddFn * @param e {Event.Facade} Event Facade with the following properties: * <dl> * <dt>entry (Object)</dt> <dd>The cached entry.</dd> * </dl> * @protected */ _defAddFn: function(e) { var entries = this._entries, max = this.get("max"), entry = e.entry; if(this.get("uniqueKeys") && (this.retrieve(e.entry.request))) { entries.shift(); } // If the cache at or over capacity, make room by removing stalest element (index=0) while(max && entries.length>=max) { entries.shift(); } // Add entry to cache in the newest position, at the end of the array entries[entries.length] = entry; }, /** * Flushes cache. * * @method _defFlushFn * @param e {Event.Facade} Event Facade object. * @protected */ _defFlushFn: function(e) { this._entries = []; }, /** * Default overridable method compares current request with given cache entry. * Returns true if current request matches the cached request, otherwise * false. Implementers should override this method to customize the * cache-matching algorithm. * * @method _isMatch * @param request {Object} Request object. * @param entry {Object} Cached entry. 
* @return {Boolean} True if current request matches given cached request, false otherwise. * @protected */ _isMatch: function(request, entry) { if(!entry.expires || new Date() < entry.expires) { return (request === entry.request); } return false; }, ///////////////////////////////////////////////////////////////////////////// // // Cache public methods // ///////////////////////////////////////////////////////////////////////////// /** * Adds a new entry to the cache of the format * {request:request, response:response, cached:cached, expires:expires}. * If cache is full, evicts the stalest entry before adding the new one. * * @method add * @param request {Object} Request value. * @param response {Object} Response value. */ add: function(request, response) { var expires = this.get("expires"); if(this.get("initialized") && ((this.get("max") === null) || this.get("max") > 0) && (LANG.isValue(request) || LANG.isNull(request) || LANG.isUndefined(request))) { this.fire("add", {entry: { request:request, response:response, cached: new Date(), expires: isDate(expires) ? expires : (expires ? new Date(new Date().getTime() + this.get("expires")) : null) }}); } else { } }, /** * Flushes cache. * * @method flush */ flush: function() { this.fire("flush"); }, /** * Retrieves cached object for given request, if available, and refreshes * entry in the cache. Returns null if there is no cache match. * * @method retrieve * @param request {Object} Request object. * @return {Object} Cached object with the properties request and response, or null. */ retrieve: function(request) { // If cache is enabled... 
var entries = this._entries, length = entries.length, entry = null, i = length-1; if((length > 0) && ((this.get("max") === null) || (this.get("max") > 0))) { this.fire("request", {request: request}); // Loop through each cached entry starting from the newest for(; i >= 0; i--) { entry = entries[i]; // Execute matching function if(this._isMatch(request, entry)) { this.fire("retrieve", {entry: entry}); // Refresh the position of the cache hit if(i < length-1) { // Remove element from its original location entries.splice(i,1); // Add as newest entries[entries.length] = entry; } return entry; } } } return null; } }); Y.Cache = Cache; }, '3.3.0' ,{requires:['base']});
2023-09-26T01:26:29.851802
https://example.com/article/2166
NEW POLITICAL PARTY: the TRASH CAN party Here is how to vote: VOTE TO THROW ANYONE IN POWER OUT OF POWER this post is dedicated to my mom
2024-03-24T01:26:29.851802
https://example.com/article/1649
Readers have offered some good suggestions for Fat Head-related t-shirts over the past year. We created one Fat Head t-shirt a while back through Café Press and sold maybe two of them. The trouble with Café Press is that we never actually see the shirts, plus their prices are so high that by the time you tag on a profit of a buck or two, you’re looking at a pretty expensive piece of cotton. So we decided to roll the dice and get some t-shirts produced locally. The first, which I chose because several readers asked for it, is a Wheat Is Murder t-shirt — guaranteed to be a conversation-starter if you wear it to a vegan rally or a screening of Forks Over Knives. We’ve listed them on our updated Fat Head Store page. That’s my lovely wife modeling one of them in the picture at left. We originally planned to sell them for the same rate in the U.S. and overseas, but it turns out overseas postage would reduce the profit margin to zero, so there will be an international shipping charge. Depending on how this one sells, future candidates for t-shirts include The Guy From CSPI and Scientists Are Freakin’ Liars.
2023-10-31T01:26:29.851802
https://example.com/article/8029
Q: Performing a query search to match characters instead of exact words

I have a form that contains a command button, six unbound text boxes, and a query subform. The user enters data in the unbound text boxes to search for. When they press the search command button, the query searches for the data entered into the text boxes. I have no problem with this working with my current code. However, if the user does not enter the information exactly as it is in the main table, then a message box saying “No records found” is displayed. I know this may be a very simple fix, but when the user enters data (example: cable), I would like the query to display all records that contain that word or those characters (example: rj-45 cable).

If DCount("*", "Admin Customer Owned Parts Query") = 0 Then
    MsgBox "No Records Found"
Else
    Me.Admin_Customer_Owned_Parts_Query_Subform.Requery
End If

A: You need to set your query to use the LIKE operator and then enclose your search terms in *'s. So if you wanted to find the word cable anywhere in the field, you would put:

WHERE Fieldname LIKE '*cable*'

in your query's SQL statement.
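To see the substring-matching behavior concretely, here is a small self-contained sketch using SQLite. Note the wildcard difference: SQLite and ANSI SQL use %, while Access/Jet SQL uses * as in the answer above. The table name and data are made up for illustration:

```python
import sqlite3

# Hypothetical parts table for demonstration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (description TEXT)")
con.executemany("INSERT INTO parts VALUES (?)",
                [("rj-45 cable",), ("power cable",), ("keyboard",)])

# LIKE with wildcards on both sides matches the term anywhere in the field.
hits = [r[0] for r in con.execute(
    "SELECT description FROM parts WHERE description LIKE '%cable%' ORDER BY rowid")]
print(hits)  # ['rj-45 cable', 'power cable']
```

Searching for 'cable' now matches 'rj-45 cable' and 'power cable' rather than requiring an exact field match, which is exactly the behavior the question asks for.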
2023-10-20T01:26:29.851802
https://example.com/article/5806
Determined to the Finish

The Maximum Motorsport team has shown great resolve to finish this year’s Bathurst 12 Hour endurance event at Mt Panorama. The MMS Subaru WRX STI showed good pace during the two accident-interrupted qualifying sessions, the 4th fastest of the Class D production cars. Lead driver Dean Herridge elected to start the race this year and had a good opening stint. But as the first pit stop approached, it became apparent there was an issue with tyre degradation. “In our first stint the car was very good”, commented Dean. “We managed to get in some good, consistent lap times. But about three laps before I came in for the first change, we started to get a vibration in the front of the car. The front-left tyre was flat by the time I pitted, and we think it was down to the tyres being tight within the guard. They were rubbing slightly, so we just had to manage that throughout the day. But, a pretty strong start”.

With the tyre problems being managed, the team knuckled down to stay in touch with the Class D leaders. John O’Dowd and Angus Kennard put in good stints during the morning, navigating the sometimes heavy race traffic. But there were echoes of the team’s first visit to the 12 Hour, with a fuel tank problem causing a series of unscheduled pit stops. “There’s two sections in the tank, one on each side. And one side isn’t working”, reported Angus after his first stint. “So we’re running on half capacity, and whenever we get a fuel surge we have to come back to top up the tank.”

With the fuel surge problem worsening, the team brought the car into the garage to investigate the problem – a stray piece of plastic inside the tank was found to be blocking the fuel pick-up. Although the time lost took them out of contention for Class D honors, a finish was still in sight. “At this stage we will not be on the podium but we came here to finish the event and that is what we will do” stated John O’Dowd.
Back on track and with the fuel problem solved, Dean, John and Angus banked some solid times during their stints as the race headed towards the 6:15pm finish. “It was a tough day, with some of the hottest tyre temperatures that have been recorded here”, said Dean at the end of a long 12 hours. “A tough way for a production car to spend 12 hours. But we finished. The boys never gave up. I’m immensely proud of them. Unfortunately we didn’t get the result we came for, but a finish in these circumstances is fantastic, so we’re pretty pleased.”

The Maximum team now heads west to its Perth facility to prepare for the start of the WA rally championship and September’s Australasian Safari.

This entry was posted on Saturday, February 15th, 2014 at 9:43 pm. Categories: 2014.
2023-09-30T01:26:29.851802
https://example.com/article/9582
Days of Anger: Part 2 (Old Man Logan #26 Comic Review) Old Man Logan is hunting The Hulk Gang, who recently reappeared in a remote part of Yukon Territory in Canada. Pulling the strings is The Maestro! Days of Anger continues with Part 2 of this classic in the making! What you need to know: Having discovered the Hulk Gang has somehow followed him to this earth, Logan realizes he has no choice but to cut his vacation short, and put an end to this threat for good! Let the hunt begin! What you’ll find out: Angry with Billy Bob (his alternate Earth Grandson) for blowing their cover by getting the attention of Logan, The Maestro moves his Hulk Gang to another military base in a remote area. After commandeering the facility with a bloody show of force, Maestro orders one of the gamma irradiated siblings, Buck, to take a team with him and seek out Logan before Logan finds them. He warns them not to engage in hand-to-hand combat with Logan, but to take as many long range weapons they can carry to deal with him. Maestro orders Buck to take Billy Bob with him to “show him how we TAKE CARE of problems”. Once again, Hulk Gang sister, Cambria, seems different than the others. She’s more sensitive and protective of Billy Bob. She wants to join the team after Logan (to keep an eye out for Billy Bob, I’m guessing) but is told to stay with the Maestro. Meanwhile, Logan is remembering his time in the Wastelands, specifically a moment with his son Scotty, and Clint Barton aka Hawkeye. “Why didn’t the heroes stop the villains?”, asks Scotty. Neither hero has an answer, but this memory fuels Logan’s drive to find the Hulk Gang in the here and now. He arrives at the Department H base Maestro and his Gang just vacated and begins to follow the tracks they left. Approaching a vast open space, Logan reluctantly calls Puck at the Alpha Flight Space Station for help locating the convoy. Once located, Logan heads towards the Hulk Gang. 
Taken by surprise due to fatigue and a wandering mind, Logan is ambushed by Buck and the Hulk Gang team he brought with him. He’s literally sprayed with a variety of mostly assault-type weapons. Once down, Buck sends Billy Bob over to Logan to deliver a killing shot to the fallen hero. But the gun isn’t loaded! Billy Bob turns to face his brothers, who quickly spray him with bullets. What just happened?

Days of Anger Part 2 continues the excellent momentum the creative team of Brisson and Deodato Jr. have brought to this title. While owing quite a bit to the past, the story still feels fresh and forward-moving. Ed Brisson brings a level of anxiousness to the reader that makes you want more. Logan really plays the tragic hero at his best. You get a sense he feels responsible for the Hulk Gang being on this Earth. I hope we get another appearance from Puck, as it was nice to see him interact (even by phone) with Logan. Mike Deodato Jr.’s art is consistently gorgeous, conveying beautifully what is written. The art coincides with the subject matter spectacularly, enhancing the depth and moodiness of the story.

Rating: 9/10

Final thought: I’m enjoying this arc so much, it’s like watching a really great film. Pick up this issue! And if you haven’t, pick up Part 1 in Old Man Logan #25!

Some of my earliest comic book memories are at age 4, when I was allowed to read comics left behind by my uncle, who passed away a few years before I was born. He had amazing Silver Age gems in his collection, ranging from DC's Jimmy Olsen, Superman, Batman, and The Legion of Superheroes to Marvel's Fantastic Four, Thor, Strange Tales and The Avengers. I was fortunate to have inherited not only his comics, but his passion for the genre. Some of the titles I started collecting on my own were Marvel's The Amazing Spider-Man, The Uncanny X-Men, The Fantastic Four, and The Mighty Avengers, during the late 1970's.
I witnessed the dawn of the Bronze Age of comics, and I've been collecting ever since. Most of my attention has been given to the X-Men universe of books, which have always been my favorite. I welcome all feedback and I look forward to discussing my reviews. You can find me regularly on Facebook as a Moderator for a fantastic X-Men page called Age of X-Men, where we welcome discussion, art, and just about anything within the X-Men universe. KEEP BUYING COMICS!
2024-05-18T01:26:29.851802
https://example.com/article/2719
New use for historic Seguin hotel

[Photo gallery: 11 images of the former Park Hotel in Seguin, Texas, now The Park Plaza Building, which opened in 1917 and was listed on the National Register of Historic Places in 1979. The building was recently purchased by Remote Logistics International with the intent of converting it into housing for oil and gas executives; many of the building's unique details from its days as a hotel remain. Photos: Kin Man Hui / San Antonio Express-News]

SEGUIN — This city's proximity to the Eagle Ford Shale energy boom has prompted a company that caters to oil and gas executives to acquire a historic downtown hotel to renovate into niche housing. “It will be open to the public, but we're focused on the corporate market,” said Jenny Savage, president of Remote Logistics International, which bought the former Park Hotel on River Street in July. The Houston company already offers lodging for oil field workers in Three Rivers and Carrizo Springs, she said, as well as in Saudi Arabia, Qatar and Bahrain. Corporate customers in Texas get a roof, three meals a day, laundry cleaned and turndown service for less than $100 a night. “It's all about the food and clean accommodations for these guys,” Savage said. The five-story building, designed by architect Leo M.J. Dielmann of San Antonio, opened in 1917 and was listed on the National Register of Historic Places in 1979.
“When the hotel was being built, the Guadalupe Gazette reported that it was to be the handsomest as well as best-equipped hotel in any town the size of Seguin,” says a Texas Historical Commission listing on the structure. It was called the Plaza Hotel from 1935 until 1954, when it became dorms for Texas Lutheran College, according to the THC listing. Then it was apartments between 1964 and 1978. It was converted into offices around 1980 and renamed The Park Plaza Building when bought by S&S Investments, which sold it to Remote Logistics. City leaders applaud the plans for the landmark, which is now appraised at only $182,000 and which faces the downtown square. Besides constructing a rooftop garden and restoring a ground-floor restaurant, the planned renovations include recapturing the historic flavor of its original marble lobby. “That's going to be a nice addition to downtown,” Mayor Don Keil said. Mary Jo Langford, director of Seguin's Main Street program, said, “We've felt the influence of the Eagle Ford Shale before, in terms of increased hotel occupancy, but this project is going to bring in executives and higher-level oil and gas employees.” The building's roughly 10 commercial tenants, including insurance salesman Charles Villeneuve, must vacate by December so restoration work can start. “I'm waiting to see what the restaurant is going to be like,” said Villeneuve, 66, a tenant of 22 years. Remote's chef, Daniel Friley, has fine-tuned his recipes to satisfy the taste buds of roughnecks and their bosses. “We go more with the home-style country food for these guys, like dumplings,” he said. “For special events, we do fancy food like rib-eye, shrimp and pan-fried salmon.” Germane Warren of Tyler enjoys staying at the company's lodge in Carrizo Springs while working for Performance Technologies. “When I come home, my room's clean,” said Warren, 29. “They'll pretty much make you something to eat right on the spot.
The food is great.” The Carrizo Springs and Three Rivers sites consist of mobile homes connected by decks and arrayed around dining halls and recreation centers. Savage has been warned to expect some heartache while tackling the $3 million Seguin restoration. “This is the most challenging thing I've ever done,” she said, “but I lived overseas, and I have a love for old, beautiful buildings.”
2024-05-29T01:26:29.851802
https://example.com/article/7784
Q: Summary (MySQL) of WooCommerce Purchases Per Club Membership

I'm creating two "user admin" pages for our clubs, so they can see what each of their members is purchasing in WooCommerce. I've got most of the MySQL query complete, and just need to finish some minor points (bit over my head).

SELECT wp_users.ID,
       wp_users.display_name AS 'Name',
       wp_ihc_user_levels.level_id AS 'Roles',
       -- Check if user is a full member or a club visitor
       (CASE wp_usermeta.meta_key = 'club_member'
            WHEN wp_usermeta.meta_value LIKE 'Visitor' THEN 'Sponsored Visitor'
            WHEN wp_usermeta.meta_value LIKE 'Member' THEN 'Financial Member'
        END) AS 'Membership',
       -- Check if member has purchased any items, by "category"
       (SELECT IF(COUNT(*) > 0, 'Yes', 'No') FROM wp_terms WHERE wp_users.ID AND name = 'Camping') AS 'Camping',
       (SELECT IF(COUNT(*) > 0, 'Yes', 'No') FROM wp_terms WHERE wp_users.ID AND name = 'Merchandise') AS 'Merchandise',
       (SELECT IF(COUNT(*) > 0, 'Yes', 'No') FROM wp_terms WHERE wp_users.ID AND name = 'Catering') AS 'Catering',
       (SELECT IF(COUNT(*) > 0, 'Yes', 'No') FROM wp_terms WHERE wp_users.ID AND name = 'Tickets') AS 'Tickets',
       -- Check if member has booked any trips
       (SELECT IF(COUNT(*) > 0, 'Yes', 'No') FROM wp_em_bookings WHERE person_id = wp_users.ID) AS 'Trips'
FROM wp_users
JOIN wp_usermeta ON wp_users.ID = wp_usermeta.user_id
JOIN wp_ihc_user_levels ON wp_users.ID = wp_ihc_user_levels.user_id
WHERE wp_usermeta.meta_value =
      (SELECT MAX(CASE WHEN meta_key = 'affiliated_club' THEN meta_value END)
       FROM wp_usermeta
       WHERE user_id = '17')

So the SQL query would run as the user_id who called the query; it would then find all other club members in "affiliated_club", and run the SELECT queries from the top.

Issues are:

A user can have multiple roles in "wp_ihc_user_levels.level_id"; however, the query is returning 2 lines for "4" and "5", instead of CONCAT i.e. "4,5" in the same row.
The CASE on wp_usermeta.meta_key = 'club_member' is returning "Visitor" for all entries, even though some are full members.
I'm uncertain which WooCommerce tables I need to query to link categories to each wp_users.ID's purchases for the count.

For the second query, I need to use the "wp_users.ID" returned from query one to expand each of the WooCommerce purchases in more detail, sorted by category. I can probably do most of the second query if I understand the WooCommerce part of query one. Thanks in advance.

UPDATE 1: OK, so I was able to get Point 1 sorted out.

Changed:

wp_ihc_user_levels.level_id AS 'Roles',

To:

(SELECT GROUP_CONCAT(level_id SEPARATOR ',') FROM wp_ihc_user_levels WHERE user_id = wp_users.ID) AS 'Roles',

Now "Roles" shows values like "2,3,5" instead of single values.

UPDATE 2: OK, so I've now got Point 2 sorted out.

Changed:

(CASE wp_usermeta.meta_key = 'club_member'
    WHEN wp_usermeta.meta_value LIKE 'Visitor' THEN 'Sponsored Visitor'
    WHEN wp_usermeta.meta_value LIKE 'Member' THEN 'Financial Member'
END) AS 'Membership',

To:

(SELECT wp_usermeta.meta_value FROM wp_usermeta
  WHERE wp_usermeta.meta_key = 'club_member'
    AND wp_usermeta.user_id = wp_users.ID) AS 'Membership',

UPDATE 3: SQL Query Completed

Ok, so this was either an extremely complex SQL query, or I've structured it all wrong, but my working solution is below.

NOTE: The %CURRENT_USER_ID% placeholder is used to pass in the current user's ID, so the query returns the other members of that user's club.
SELECT DISTINCT
    wpdc_users.ID,
    wpdc_users.display_name AS 'Name',
    (SELECT wpdc_usermeta.meta_value FROM wpdc_usermeta
      WHERE wpdc_usermeta.meta_key = 'club_member'
        AND wpdc_usermeta.user_id = wpdc_users.ID) AS 'Membership',
    (SELECT GROUP_CONCAT(level_id SEPARATOR ',') FROM wpdc_ihc_user_levels
      WHERE user_id = wpdc_users.ID) AS 'Roles',
    (SELECT IF(SUM(wpdc_terms.name = 'Camping') > 0, 'Yes', 'No')
       FROM wpdc_postmeta
       JOIN wpdc_woocommerce_order_items ON wpdc_woocommerce_order_items.order_id = wpdc_postmeta.post_id
       JOIN wpdc_woocommerce_order_itemmeta ON wpdc_woocommerce_order_items.order_item_id = wpdc_woocommerce_order_itemmeta.order_item_id
       JOIN wpdc_term_relationships ON wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
       JOIN wpdc_terms ON wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
      WHERE wpdc_postmeta.meta_key = '_customer_user'
        AND wpdc_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
        AND wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
        AND wpdc_terms.term_id > 23
        AND wpdc_postmeta.meta_value = wpdc_users.ID) AS 'Camping',
    (SELECT IF(SUM(wpdc_terms.name = 'Catering') > 0, 'Yes', 'No')
       FROM wpdc_postmeta
       JOIN wpdc_woocommerce_order_items ON wpdc_woocommerce_order_items.order_id = wpdc_postmeta.post_id
       JOIN wpdc_woocommerce_order_itemmeta ON wpdc_woocommerce_order_items.order_item_id = wpdc_woocommerce_order_itemmeta.order_item_id
       JOIN wpdc_term_relationships ON wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
       JOIN wpdc_terms ON wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
      WHERE wpdc_postmeta.meta_key = '_customer_user'
        AND wpdc_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
        AND wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
        AND wpdc_terms.term_id > 23
        AND wpdc_postmeta.meta_value = wpdc_users.ID) AS 'Catering',
    (SELECT IF(SUM(wpdc_terms.name = 'Merchandise') > 0, 'Yes', 'No')
       FROM wpdc_postmeta
       JOIN wpdc_woocommerce_order_items ON wpdc_woocommerce_order_items.order_id = wpdc_postmeta.post_id
       JOIN wpdc_woocommerce_order_itemmeta ON wpdc_woocommerce_order_items.order_item_id = wpdc_woocommerce_order_itemmeta.order_item_id
       JOIN wpdc_term_relationships ON wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
       JOIN wpdc_terms ON wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
      WHERE wpdc_postmeta.meta_key = '_customer_user'
        AND wpdc_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
        AND wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
        AND wpdc_terms.term_id > 23
        AND wpdc_postmeta.meta_value = wpdc_users.ID) AS 'Merchandise',
    (SELECT IF(SUM(wpdc_terms.name = 'Tickets') > 0, 'Yes', 'No')
       FROM wpdc_postmeta
       JOIN wpdc_woocommerce_order_items ON wpdc_woocommerce_order_items.order_id = wpdc_postmeta.post_id
       JOIN wpdc_woocommerce_order_itemmeta ON wpdc_woocommerce_order_items.order_item_id = wpdc_woocommerce_order_itemmeta.order_item_id
       JOIN wpdc_term_relationships ON wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
       JOIN wpdc_terms ON wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
      WHERE wpdc_postmeta.meta_key = '_customer_user'
        AND wpdc_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wpdc_term_relationships.object_id = wpdc_woocommerce_order_itemmeta.meta_value
        AND wpdc_terms.term_id = wpdc_term_relationships.term_taxonomy_id
        AND wpdc_terms.term_id > 23
        AND wpdc_postmeta.meta_value = wpdc_users.ID) AS 'Tickets',
    (SELECT IF(COUNT(*) > 0, 'Yes', 'No') FROM wpdc_em_bookings
      WHERE person_id = wpdc_users.ID) AS 'Trips / Events'
FROM wpdc_users
JOIN wpdc_usermeta ON wpdc_usermeta.user_id = wpdc_users.ID
JOIN wpdc_ihc_user_levels ON wpdc_users.ID = wpdc_ihc_user_levels.user_id
WHERE wpdc_usermeta.meta_value =
    (SELECT MAX(CASE WHEN meta_key = 'affiliated_club' THEN meta_value END)
       FROM wpdc_usermeta WHERE user_id = %CURRENT_USER_ID%)

I'm going to mark this resolved, but would appreciate it if anyone is able to advise whether this can be compacted / minimised to run more efficiently when checking each category purchase. Thanks in advance.

A: UPDATE 3: SQL Query Completed

Ok, so this was either an extremely complex SQL query, or I've structured it all wrong, but my working solution is below.

NOTE: The %CURRENT_USER_ID% placeholder is used to pass in the current user's ID, so the query returns the other members of that user's club.

SELECT DISTINCT
    wp_users.ID,
    wp_users.display_name AS 'Name',
    (SELECT wp_usermeta.meta_value FROM wp_usermeta
      WHERE wp_usermeta.meta_key = 'club_member'
        AND wp_usermeta.user_id = wp_users.ID) AS 'Membership',
    (SELECT GROUP_CONCAT(level_id SEPARATOR ',') FROM wp_ihc_user_levels
      WHERE user_id = wp_users.ID) AS 'Roles',
    (SELECT IF(SUM(wp_terms.name = 'Camping') > 0, 'Yes', 'No')
       FROM wp_postmeta
       JOIN wp_woocommerce_order_items ON wp_woocommerce_order_items.order_id = wp_postmeta.post_id
       JOIN wp_woocommerce_order_itemmeta ON wp_woocommerce_order_items.order_item_id = wp_woocommerce_order_itemmeta.order_item_id
       JOIN wp_term_relationships ON wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
       JOIN wp_terms ON wp_terms.term_id = wp_term_relationships.term_taxonomy_id
      WHERE wp_postmeta.meta_key = '_customer_user'
        AND wp_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
        AND wp_terms.term_id = wp_term_relationships.term_taxonomy_id
        AND wp_terms.term_id > 23
        AND wp_postmeta.meta_value = wp_users.ID) AS 'Camping',
    (SELECT IF(SUM(wp_terms.name = 'Catering') > 0, 'Yes', 'No')
       FROM wp_postmeta
       JOIN wp_woocommerce_order_items ON wp_woocommerce_order_items.order_id = wp_postmeta.post_id
       JOIN wp_woocommerce_order_itemmeta ON wp_woocommerce_order_items.order_item_id = wp_woocommerce_order_itemmeta.order_item_id
       JOIN wp_term_relationships ON wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
       JOIN wp_terms ON wp_terms.term_id = wp_term_relationships.term_taxonomy_id
      WHERE wp_postmeta.meta_key = '_customer_user'
        AND wp_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
        AND wp_terms.term_id = wp_term_relationships.term_taxonomy_id
        AND wp_terms.term_id > 23
        AND wp_postmeta.meta_value = wp_users.ID) AS 'Catering',
    (SELECT IF(SUM(wp_terms.name = 'Merchandise') > 0, 'Yes', 'No')
       FROM wp_postmeta
       JOIN wp_woocommerce_order_items ON wp_woocommerce_order_items.order_id = wp_postmeta.post_id
       JOIN wp_woocommerce_order_itemmeta ON wp_woocommerce_order_items.order_item_id = wp_woocommerce_order_itemmeta.order_item_id
       JOIN wp_term_relationships ON wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
       JOIN wp_terms ON wp_terms.term_id = wp_term_relationships.term_taxonomy_id
      WHERE wp_postmeta.meta_key = '_customer_user'
        AND wp_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
        AND wp_terms.term_id = wp_term_relationships.term_taxonomy_id
        AND wp_terms.term_id > 23
        AND wp_postmeta.meta_value = wp_users.ID) AS 'Merchandise',
    (SELECT IF(SUM(wp_terms.name = 'Tickets') > 0, 'Yes', 'No')
       FROM wp_postmeta
       JOIN wp_woocommerce_order_items ON wp_woocommerce_order_items.order_id = wp_postmeta.post_id
       JOIN wp_woocommerce_order_itemmeta ON wp_woocommerce_order_items.order_item_id = wp_woocommerce_order_itemmeta.order_item_id
       JOIN wp_term_relationships ON wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
       JOIN wp_terms ON wp_terms.term_id = wp_term_relationships.term_taxonomy_id
      WHERE wp_postmeta.meta_key = '_customer_user'
        AND wp_woocommerce_order_itemmeta.meta_key = '_product_id'
        AND wp_term_relationships.object_id = wp_woocommerce_order_itemmeta.meta_value
        AND wp_terms.term_id = wp_term_relationships.term_taxonomy_id
        AND wp_terms.term_id > 23
        AND wp_postmeta.meta_value = wp_users.ID) AS 'Tickets',
    (SELECT IF(COUNT(*) > 0, 'Yes', 'No') FROM wp_em_bookings
      WHERE person_id = wp_users.ID) AS 'Trips / Events'
FROM wp_users
JOIN wp_usermeta ON wp_usermeta.user_id = wp_users.ID
JOIN wp_ihc_user_levels ON wp_users.ID = wp_ihc_user_levels.user_id
WHERE wp_usermeta.meta_value =
    (SELECT MAX(CASE WHEN meta_key = 'affiliated_club' THEN meta_value END)
       FROM wp_usermeta WHERE user_id = %CURRENT_USER_ID%)

I'm going to mark this resolved, but would appreciate it if anyone is able to advise whether this can be compacted / minimised to run more efficiently when checking each category purchase. Thanks in advance.
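The two fixes from the updates (GROUP_CONCAT for the multi-role rows, and a correlated subquery for the single 'club_member' meta value) can be reproduced on a toy schema. This is a sketch, not the production query: it uses SQLite, whose GROUP_CONCAT is close enough to MySQL's for this purpose, and simplified stand-in tables (users, user_levels, usermeta) rather than the real WordPress tables.

```python
import sqlite3

# Toy stand-ins for wp_users / wp_ihc_user_levels / wp_usermeta.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, display_name TEXT);
CREATE TABLE user_levels (user_id INTEGER, level_id INTEGER);
CREATE TABLE usermeta (user_id INTEGER, meta_key TEXT, meta_value TEXT);
INSERT INTO users VALUES (1, 'Alice');
INSERT INTO user_levels VALUES (1, 4), (1, 5);
INSERT INTO usermeta VALUES (1, 'club_member', 'Member'),
                            (1, 'affiliated_club', 'North');
""")

# One row per user: roles collapsed with GROUP_CONCAT, and membership read
# via a correlated subquery instead of a CASE over an arbitrary joined row.
row = conn.execute("""
SELECT u.id,
       u.display_name,
       (SELECT GROUP_CONCAT(level_id) FROM user_levels
         WHERE user_id = u.id) AS roles,
       (SELECT meta_value FROM usermeta
         WHERE meta_key = 'club_member' AND user_id = u.id) AS membership
FROM users u
""").fetchone()

print(row)  # e.g. (1, 'Alice', '4,5', 'Member')
```

The correlated subquery guarantees at most one value per user no matter how many meta rows exist, which is likely why the original CASE, evaluated against whichever wp_usermeta row happened to be joined, kept returning "Visitor".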
<?xml version='1.0' encoding='utf-8'?>
<section xmlns="https://code.dccouncil.us/schemas/dc-library"
         xmlns:codified="https://code.dccouncil.us/schemas/codified"
         xmlns:codify="https://code.dccouncil.us/schemas/codify"
         xmlns:xi="http://www.w3.org/2001/XInclude"
         containing-doc="D.C. Code">
  <num>6-917</num>
  <heading>Appropriations authorized.</heading>
  <text>Except as herein otherwise authorized all expenses incident to the enforcement of this chapter shall be paid from appropriations made from time to time for that purpose in like manner as other appropriations for the expenses of the District of Columbia.</text>
  <annotations>
    <annotation type="History" doc="Stat. 59-1-ch2073" path="§17">May 1, 1906, 34 Stat. 161, ch. 2073, § 17</annotation>
    <annotation type="History">Aug. 28, 1954, 68 Stat. 889, ch. 1032</annotation>
    <annotation type="Prior Codifications">1973 Ed., § 5-632.</annotation>
    <annotation type="Prior Codifications">1981 Ed., § 5-717.</annotation>
  </annotations>
</section>
Q: Why is my form's submit button not working? I am modifying a WP plugin (PHP; latest version of WP). It has a form where a user can ask a question, and that form came with its own submit button. I found an action hook where I can add my own code to the form, so I added Braintree's simplest payment form, the Drop-in UI. This is a screenshot of what both forms look like when I set them up. The problem: the "Post question" button that came with the plugin is not working, and I suspect that it is due to the presence of the second form. What I've tried: I removed bits and pieces of the code I added to try to single out what could be causing this, and it came down to the Drop-in form code itself. The presence of a second form on this page is causing an issue. My question: what could cause submit issues when two forms are present on one page? Note: I used an action hook to insert my code into a function that described itself as the main plugin's form footer. I am getting the feeling that this is a form-nesting problem.
my function that creates the Braintree form:

class FD_Braintree_Form {
    public function fd_bt_form() {
        echo '<form id="checkout" action="/process-trans.php" method="post">
            <p>
                <label><font size="5">Amount:</font></label>
                <input type="text" size="4" name="amount" />
            </p>
            <div id="payment-form"></div>
            <input type="submit" value="Pay" />
        </form>';
    }
}

my class that hooks the Braintree form into the plugin:

class Find_Do_For_Anspress {

    public function __construct() {
        // This is where I use the action hook to insert my code into the plugin's
        // form_footer(). (The add_action call has to live inside a method such as
        // the constructor; it is not valid PHP directly in the class body.)
        add_action('ap_form_bottom_ask_form', array($this, 'fd_bt_form_html'));
    }

    public function fd_bt_form_html() {
        $class_bt_token = new Braintree_ClientToken();
        $clientToken = $class_bt_token->generate();
        ?>
        <script src="https://js.braintreegateway.com/v2/braintree.js"></script>
        <script>
            braintree.setup('<?php echo $clientToken ?>', 'dropin', {
                container: 'payment-form',
            });
        </script>
        <?php
        $class_bt_form = new FD_Braintree_Form();
        $class_bt_form->fd_bt_form(); // fd_bt_form() echoes directly; it returns nothing to echo
    }
}

plugin's main code that generates the question form:

function ap_ask_form($editing = false) {
    global $editing_post;

    $is_private = false;
    if ($editing) {
        $is_private = $editing_post->post_status == 'private_post' ? true : false;
    }

    $args = array(
        'name'          => 'ask_form',
        'is_ajaxified'  => true,
        'submit_button' => ($editing ? __('Update question', 'ap') : __('Post question', 'ap')),
        'fields'        => array(
            array(
                'name'         => 'title',
                'label'        => __('Title', 'ap'),
                'type'         => 'text',
                'placeholder'  => __('Question in one sentence', 'ap'),
                'desc'         => __('Write a meaningful title for the question.', 'ap'),
                'value'        => ($editing ? $editing_post->post_title : sanitize_text_field(@$_POST['title'])),
                'order'        => 5,
                'attr'         => 'data-action="suggest_similar_questions"',
                'autocomplete' => false,
            ),
            array(
                'name'  => 'title',
                'type'  => 'custom',
                'order' => 5,
                'html'  => '<div id="similar_suggestions"></div>'
            ),
            array(
                'name'     => 'description',
                'label'    => __('Description', 'ap'),
                'type'     => 'editor',
                'desc'     => __('Write description for the question.', 'ap'),
                'value'    => ($editing ? apply_filters('the_content', $editing_post->post_content) : @$_POST['description']),
                'settings' => apply_filters('ap_ask_form_editor_settings', array(
                    'textarea_rows' => 8,
                    'tinymce'       => ap_opt('question_text_editor') ? false : true,
                    'quicktags'     => ap_opt('question_text_editor') ? true : false,
                    'media_buttons' => false,
                )),
            ),
            array(
                'name'  => 'ap_upload',
                'type'  => 'custom',
                'html'  => ap_post_upload_form(),
                'order' => 10
            ),
            array(
                'name'  => 'parent_id',
                'type'  => 'hidden',
                'value' => ($editing ? $editing_post->post_parent : get_query_var('parent')),
                'order' => 20
            )
        ),
    );

    if (ap_opt('allow_private_posts'))
        $args['fields'][] = array(
            'name'          => 'is_private',
            'type'          => 'checkbox',
            'desc'          => __('Only visible to admin and moderator.', 'ap'),
            'value'         => $is_private,
            'order'         => 12,
            'show_desc_tip' => false
        );

    if (ap_opt('recaptcha_site_key') == '')
        $reCaptcha_html = '<div class="ap-notice red">'.__('reCaptach keys missing, please add keys', 'ap').'</div>';
    else
        $reCaptcha_html = '<div class="g-recaptcha" id="recaptcha" data-sitekey="'.ap_opt('recaptcha_site_key').'"></div><script type="text/javascript" src="https://www.google.com/recaptcha/api.js?hl='.get_locale().'&onload=onloadCallback&render=explicit" async defer></script><script type="text/javascript">var onloadCallback = function() { widgetId1 = grecaptcha.render("recaptcha", { "sitekey" : "'.ap_opt('recaptcha_site_key').'" }); };</script>';

    if (ap_opt('enable_recaptcha'))
        $args['fields'][] = array(
            'name'  => 'captcha',
            'type'  => 'custom',
            'order' => 100,
            'html'  => $reCaptcha_html
        );

    /**
     * FILTER: ap_ask_form_fields
     * Filter for modifying $args
     * @var array
     * @since 2.0
     */
    $args = apply_filters('ap_ask_form_fields', $args, $editing);

    if ($editing) {
        $args['fields'][] = array(
            'name'  => 'edit_post_id',
            'type'  => 'hidden',
            'value' => $editing_post->ID,
            'order' => 20
        );
    }

    $form = new AnsPress_Form($args);
    echo $form->get_form();
    echo ap_post_upload_hidden_form();
}

the plugin's "form footer" code and the action hook I used:

private function form_footer() {
    ob_start();
    /**
     * ACTION: ap_form_bottom_[form_name]
     * action for hooking captcha and extra fields
     * @since 2.0.1
     */
    do_action('ap_form_bottom_'.$this->name);
    $this->output .= ob_get_clean();

    $this->output .= '<button type="submit" class="ap-btn ap-btn-submit">'.$this->args['submit_button'].'</button>';

    if (@$this->args['show_cancel'] === true)
        $this->output .= '<button type="button" class="ap-btn ap-btn-cancel">'.__('Cancel', 'ap').'</button>';

    $this->output .= '</form>';
}

the hidden upload function:

function ap_post_upload_hidden_form() {
    if (ap_opt('allow_upload_image'))
        return '<form id="hidden-post-upload" enctype="multipart/form-data" method="POST" style="display:none">
            <input type="file" name="post_upload_image" class="ap-upload-input">
            <input type="hidden" name="ap_ajax_action" value="upload_post_image" />
            <input type="hidden" name="ap_form_action" value="upload_post_image" />
            <input type="hidden" name="__nonce" value="'.wp_create_nonce('upload_image_'.get_current_user_id()).'" />
            <input type="hidden" name="action" value="ap_ajax" />
        </form>';
}

the HTML that houses the plugin's ask form code:

<div class="ap-tab-container">
    <div id="ap-form-main" class="active ap-tab-item">
        <?php ap_ask_form(); ?>
    </div>
</div>

A: EDIT: The problem in this case is that the action hook you are using places your form inside the plugin's form. You need to use a different action hook (or you need to get hacky with this one).
The plugin you are using does not provide an appropriate action hook to put the form where you are trying to put it (after the ask form). You will have to modify the plugin code for this to work. If you are willing to modify the plugin code, you would do this:

Modify the plugin's form_footer method:

private function form_footer() {
    ob_start();
    /**
     * ACTION: ap_form_bottom_[form_name]
     * action for hooking captcha and extra fields
     * @since 2.0.1
     */
    do_action('ap_form_bottom_'.$this->name);
    $this->output .= ob_get_clean();

    $this->output .= '<button type="submit" class="ap-btn ap-btn-submit">'.$this->args['submit_button'].'</button>';

    if (@$this->args['show_cancel'] === true)
        $this->output .= '<button type="button" class="ap-btn ap-btn-cancel">'.__('Cancel', 'ap').'</button>';

    $this->output .= '</form>';

    // MODIFICATION: added action that fires *after* the form is closed
    do_action('ap_form_aftercustom_'.$this->name);
}

Then, modify your fd_bt_form_html hookup:

class Find_Do_For_Anspress {

    public function __construct() {
        // MODIFICATION: use the new action, which runs outside the plugin's <form>
        add_action('ap_form_aftercustom_ask_form', array($this, 'fd_bt_form_html'));
    }

    public function fd_bt_form_html() {
        $class_bt_token = new Braintree_ClientToken();
        $clientToken = $class_bt_token->generate();
        ?>
        <script src="https://js.braintreegateway.com/v2/braintree.js"></script>
        <script>
            braintree.setup('<?php echo $clientToken ?>', 'dropin', {
                container: 'payment-form',
            });
        </script>
        <?php
        $class_bt_form = new FD_Braintree_Form();
        $class_bt_form->fd_bt_form();
    }
}

-- End Edit --

My original feedback before I got the extra info: I think there are two likely culprits here (without seeing your code).

1. You put a form inside of a form.
2. Your buttons/submits specify a form by name.
#1 Example (a form nested inside another form, which is invalid HTML):

<form>
    <input ...>
    <input ...>
    <input type="submit" form="SomeForm" value="Submit Form 1">
    <form>
        <input ...>
        <input ...>
        <input type="submit" form="SomeForm" value="Submit Form 2">
    </form>
</form>

#2 Example (controls bound to a form by name via the form attribute):

<form name="SomeForm" id="SomeForm">
    <input form="SomeForm" ...>
    <input form="SomeForm" ...>
    <input type="submit" form="SomeForm" value="Submit Form 1">
</form>
<form>
    <input form="SomeForm" ...>
    <input form="SomeForm" ...>
    <input type="submit" form="SomeForm" value="Submit Form 2">
</form>

Here is an example of form nesting that will work in some browsers in HTML5 (not a recommended solution):

<form name="SomeForm" id="SomeForm">
    <input form="SomeForm" ...>
    <input form="SomeForm" ...>
    <input type="submit" form="SomeForm" value="Submit Form 1">
    <form name="SomeForm2" id="SomeForm2">
        <input form="SomeForm2" ...>
        <input form="SomeForm2" ...>
        <input type="submit" form="SomeForm2" value="Submit Form 2">
    </form>
</form>

WordPress is a very popular and very powerful tool. However, whenever I am forced to look at its code, I am reminded why I don't use it. This is the worst PHP code of any widely adopted program I have seen. It is downright awful. If any of the people reading this are learning PHP and using WordPress as an example, please do yourself a favor and use ANY other source of PHP example code. WordPress is utter garbage, code-wise.
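A quick way to confirm culprit #1 is to scan the rendered page source for a form opened inside another form. Below is a standalone sketch using Python's html.parser; the has_nested_form helper and the sample markup are illustrative stand-ins, not code from the plugin:

```python
from html.parser import HTMLParser

class NestedFormDetector(HTMLParser):
    """Flags a <form> start tag that appears while another <form> is still open."""

    def __init__(self):
        super().__init__()
        self.open_forms = 0
        self.nested = False

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            if self.open_forms > 0:   # a form is already open: nesting detected
                self.nested = True
            self.open_forms += 1

    def handle_endtag(self, tag):
        if tag == "form" and self.open_forms > 0:
            self.open_forms -= 1

def has_nested_form(html: str) -> bool:
    detector = NestedFormDetector()
    detector.feed(html)
    return detector.nested

# A payment form injected *inside* the plugin's ask form (the broken case):
nested = has_nested_form(
    '<form id="ask"><input name="title">'
    '<form id="checkout"><input type="submit" value="Pay"></form>'
    '<button type="submit">Post question</button></form>'
)

# The same two forms as siblings (the fixed case, after moving the hook):
siblings = has_nested_form(
    '<form id="ask"><button type="submit">Post question</button></form>'
    '<form id="checkout"><input type="submit" value="Pay"></form>'
)

print(nested, siblings)  # True False
```

Browsers do not render nested forms as written: the HTML parser drops or re-parents the inner <form>, which is why the outer form's submit button can silently stop working. Moving the payment form after the closing </form>, as in the hook modification above, makes the forms siblings and avoids the problem.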
Florida Republican Party Leaders Urge Support For New Education Standards

July 23, 2013 | 10:58 AM

American Conservative Union chairman Al Cardenas is one of five former Republican Party of Florida chairmen urging support for Common Core State Standards. Five former Republican Party of Florida leaders have sent out an email asking state GOP members to support new education standards adopted by Florida and 44 other states. The letter is signed by state Sen. John Thrasher and four other former state party chairmen. When Florida has raised its standards in the past, Thrasher wrote in the email, "it has resulted in better scores on international tests and gains from black and Hispanic students."

"Every leading indicator – test scores, graduation rates, national rankings, participation and achievement in Advanced Placement – continues to rise thanks to higher standards," the email states. "But, we have to continue the fight. Common Core does that."

Common Core has come under fire from those on both the right and left ends of the political spectrum. Their concerns include losing local control over education, higher costs and increasing time spent on testing.

Dear Florida Republican Leaders:

Like many of you, we have been following the conversation regarding a new education reform initiative soon to arrive in Florida schools – the Common Core State Standards. Unfortunately, there has been a tremendous amount of misinformation about the movement to raise academic standards, especially among our fellow conservatives. As former chairs of the Republican Party of Florida, we wish to share our view on this effort and what it will mean for Florida's students and our state's future. We know that the most critical component for creating an even playing field where every single individual has the opportunity to achieve greatness is education.
A good system of education holds the power to keep America economically competitive and secure, while also lessening future generations’ reliance on government and entitlement programs. Florida once ran one of the worst public education systems in the nation. Now, as a result of conservative education reform based on stronger accountability and more choice, we have become a national leader in boosting student achievement. Even so, our academic standards currently do not set the bar high enough for children to be globally competitive. This trend is mirrored in states across the country. On international assessments in Math and Science, American students are embedded firmly in the middle of the pack. This hardly bodes well for America continuing as the dominant world power in the 21st Century. The nation’s Governors recognized this problem almost 15 years ago and began a process that eventually led to states collaborating on the development of Common Core State Standards. President Obama has falsely and dishonestly tried to take credit for this initiative, but this was a state issue, and state leaders developed the solution needed. Lately, there have been a number of myths about this initiative. We would like to address these directly. Common Core is not a federal dictate or national mandate. States are free to adopt the standards or to not adopt them. And, if they have already adopted them, they are free to drop out at any point. Some have alleged that the new standards change laws around student data and privacy. They don’t. Regardless of adopting the Common Core, states remain in control of their students’ private information, just as they are now. The federal government does not have access to individual student-level data – just aggregate information by school on how students are performing. States must remain vigilant in working with local school districts to continue protecting student information. 
The Common Core State Standards only set academic expectations in English and Math. They do not dictate curriculum – the textbooks used, the reading assignments handed down, the lesson plans employed by teachers, and the thousand other methods or materials used to help students learn. The standards are merely benchmarks for what a student should know by the end of the year at each grade level, from K-12. Ultimately, local school districts and teachers remain in control of their curriculum and in charge of their classrooms. Some have expressed concern about Common Core’s impact on parental choice. Common Core State Standards in no way impact the right of parents to choose the best educational opportunity for their child. We already have academic standards; we are just raising the bar. Home school parents and parents with children in schools that do not receive state funding remain completely unaffected. In non-traditional public schools that receive either voucher money or other state-funding, the current dynamic remains unchanged. Any exercise of this magnitude will have its supporters and detractors, its legitimate criticisms and its inevitable conspiracy theories. The simple questions for Florida are these: Will these new standards ensure we provide our kids with a better education and the taxpayers with a better return on their investment? Will the new assessments be better than the existing assessments? Will students graduate high school more prepared for college and the workforce? We believe the answer to these questions is “yes.” And, we are not alone. Common Core supporters include a wide swath of conservative leaders, including Mike Huckabee, Mitch Daniels, Haley Barbour, Bobby Jindal, Chris Christie, Susana Martinez, Rick Snyder and our own Governor Rick Scott. 
And, former Florida Governor Jeb Bush, who led the education reform efforts in Florida for eight years and initiated the turnaround of Florida schools, has been a strong proponent of higher standards. We’ve seen what high standards mean to students in Florida. In 1998, nearly half of Florida’s fourth graders were functionally illiterate. Today, Florida’s fourth graders and eighth graders are above the national average in Reading and fourth graders are above the national average in Math with eighth graders closing in on that benchmark. Best of all, Florida’s Hispanic and African-American students are making the greatest gains, narrowing the achievement gap for the first time in our lifetime. Every leading indicator – test scores, graduation rates, national rankings, participation and achievement in Advanced Placement – continues to rise thanks to higher standards. But, we have to continue the fight. Common Core does that. Finally, there are good conservatives on both sides of this issue. Questioning the integrity of anyone involved on either side of this debate does not do our Party or this issue any favors. We implore our fellow Republicans to judge the Common Core State Standards by what they are: academic standards, not curriculum and not a national mandate. You can learn more about the Common Core State Standards at www.highercorestandards.org. Read them. Listen to what teachers say about them. If you disagree, do so from an informed perspective. Thank you for taking the time to learn more about this initiative and thank you for continuing to provide the strong leadership needed to keep our Party strong and united in the Sunshine State. 
Sincerely,

John Thrasher, Former Republican Party of Florida Chairman
Carole Jean Jordan, Former Republican Party of Florida Chairwoman
Al Cardenas, Former Republican Party of Florida Chairman
Tom Slade, Former Republican Party of Florida Chairman
Van Poole, Former Republican Party of Florida Chairman

Topics

Comments

We are better than CCSS

Mr. Thrasher's comments are not entirely true. Yes, this country needs to raise the standard of education state by state and town by town. However, the Common Core will NOT bring us there! Every state should retain its sovereignty, and these new standards and all of the underlying strings attached sentence our children to an unreasonable amount of testing. Instead of a teacher using their creativity and talents, they find themselves teaching to the standardized tests. Their rating depends on the success of the child. There is also a massive amount of personally identifiable information being shared. Adopting the Common Core standards is extremely expensive, and the burden to meet the cost falls largely upon the taxpayers! People like Bill Gates and companies like Pearson stand to make billions on the implementation of the Common Core and the development of textbooks, software, tools and standardized tests. Parents, please do your research before you allow your elected officials to lead you down a certain path! These politicians should remember come election time: parents, teachers and school administrators have long memories!

Amy

PARCC (a test associated with Common Core which may not be used at all) is what is tied to the increased amount of testing, NOT the Common Core Standards themselves. Common Core is a set of standards, not a curriculum. They don't tell teachers how to teach. They cover fewer topics in greater depth to foster greater critical thinking and less memorization. There are no required textbooks, but they do offer suggested texts. Examples?
The Gettysburg Address, the Declaration of Independence, and other very conservative documents. I absolutely agree that it is necessary to do research before blindly accepting anything — and I believe that goes for both sides. I have done my research, and I know where I stand.

Gretchen Hoyt McDevitt

Amy, please continue your research by reading the following from those very concerned about Common Core:

Way to go, Amy! I love your statement "I absolutely agree that it is necessary to do research before blindly accepting anything — and I believe that goes for both sides." I think too many do the "blindly accepting" — why, I have no clue! I can guess: they are too self-absorbed and lack in caring for others… So sad!

jani

The question is, who will make the most money from this adoption? With our administration's push to Leave No Child (and Teacher) Untested, where will there be monetary gains? Who found the loopholes? Those are the bigger questions. I doubt that this is about the education of the students. I have too many years of experience behind me to believe that this is solely for the children. Someone already has their hands in my pockets.

the tired teach

Amy, you are drinking the Kool-Aid. Teachers are told how to teach each standard. Suggestions even accompany some of the standards. In addition to the fact that the K-8 math standards, for example, are very poor, with errors and omissions (% is mentioned ONCE), there is going to be a huge cost . . . billions of $$$ for computers just to take the tests. (They will be in a museum by the time the elementary students are ready for college.) And do you mean by "depth" that each student has to answer "Why" for every answer he/she gives? It is a pathetic attempt for several people to make a bundle of money and ruin the education of our students. Then again, that could be the purpose.

Jason Buckwheat

WE NEED THE FEDERAL GOVERNMENT TO GET THE HELL OUT OF OUR SCHOOL SYSTEM.
IF YOU LOOK AT THE WAY THESE JERKS RUN WASHINGTON DC, IT IS EASY TO UNDERSTAND WHY WE DON'T NEED THEM IN OUR SCHOOLS. LOSERS!

RAW
I think the Common Core is a terrible thing! Plus, if any state looks at the State of Florida's education system as a role model, I hope they are viewing it as what not to do! I agree with some of the others who posted comments, and it's about GREED. I also believe it's going in the path our government hopes. Snowing the people, dumbing down the people, so that the government has full control. I think we as AMERICANS should be ashamed, stop the blaming, and take the hit (for we all deserve it for allowing this), and together take a stand and take back our country! Leave the greed alone as it should be. And the first place to start is with CARE. Care, people! Admit, Begin, Care, Do, Effort, Family, ….. ABC's! Why are so many ignoring what is happening, I have to wonder….. I care, don't you? We must think for ourselves and practice "honesty is the best." Without care and trust there is nothing but bad to be had.

outraged_mom
I'm looking at the "Party Belief" questions/choices for issues such as abortion, gun control, taxes, military – if anyone filled out this questionnaire and came out a Republican, they should be institutionalized as criminally stupid and cruel. And, I'm a Republican – the nonsense they are teaching the kids is pure indoctrination.

outraged_mom
And just because the GOP says something certainly doesn't make it true. I think we all can agree the political class is looking for one thing and one thing only – campaign contributions – that's it. Everything they say and do proves it.
/*
 * Copyright 2010-2018 JetBrains s.r.o. Use of this source code is governed by the Apache 2.0 license
 * that can be found in the LICENSE file.
 */

package codegen.coroutines.controlFlow_tryCatch4

import kotlin.test.*
import kotlin.coroutines.*
import kotlin.coroutines.intrinsics.*

open class EmptyContinuation(override val context: CoroutineContext = EmptyCoroutineContext) : Continuation<Any?> {
    companion object : EmptyContinuation()
    override fun resumeWith(result: Result<Any?>) { result.getOrThrow() }
}

suspend fun s1(): Int = suspendCoroutineUninterceptedOrReturn { x ->
    println("s1")
    x.resume(42)
    COROUTINE_SUSPENDED
}

suspend fun s2(): Int = suspendCoroutineUninterceptedOrReturn { x ->
    println("s2")
    x.resumeWithException(Error())
    COROUTINE_SUSPENDED
}

fun f1(): Int {
    println("f1")
    return 117
}

fun f2(): Int {
    println("f2")
    return 1
}

fun f3(x: Int, y: Int): Int {
    println("f3")
    return x + y
}

fun builder(c: suspend () -> Unit) {
    c.startCoroutine(EmptyContinuation)
}

@Test fun runTest() {
    var result = 0

    builder {
        val x = try {
            s2()
        } catch (t: Throwable) {
            f2()
        }
        result = x
    }

    println(result)
}
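The control flow this test exercises can be mirrored in a rough Python analogue (an illustrative sketch only, not part of the Kotlin test; `s2` and `f2` here are plain functions standing in for the suspend functions above): a call that completes with an exception is caught by the surrounding try/catch, and the fallback `f2()` supplies the value that ends up in `result`.

```python
# Rough Python analogue of the Kotlin test's control flow (illustrative only):
# s2() fails with an exception, the except branch calls f2(), and f2's return
# value becomes the result -- mirroring `val x = try { s2() } catch ... { f2() }`.

def s2():
    print("s2")
    raise RuntimeError("resumed with exception")  # stands in for resumeWithException(Error())

def f2():
    print("f2")
    return 1

def run_test():
    try:
        x = s2()
    except Exception:
        x = f2()  # the catch clause supplies the value, as in the Kotlin test
    return x

print(run_test())  # prints "s2", then "f2", then 1
```

The point in both versions is the same: the exception raised at the suspension point propagates into the enclosing try block, so the catch branch, not the failing call, determines the assigned value.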
Influence of renal failure, rheumatoid arthritis and old age on the pharmacokinetics of diflunisal. The single-dose plasma kinetics of diflunisal was studied in healthy young and old subjects, in patients with rheumatoid arthritis, and in patients with renal failure. The plasma and urine kinetics of the glucuronidated metabolites of diflunisal were studied in the healthy elderly subjects and in the patients with renal failure. In addition, the multiple-dose plasma kinetics of diflunisal was assessed in healthy volunteers and in patients with rheumatoid arthritis. After a single dose of diflunisal the terminal plasma half-life, mean residence time and apparent volume of distribution were higher in elderly subjects than in young adults. No difference was observed in any pharmacokinetic parameter between age-matched healthy subjects and patients with rheumatoid arthritis. The elimination half-life of unchanged diflunisal was correlated with the creatinine clearance (r = +0.89) and its apparent total body clearance exhibited linear dependence on creatinine clearance (r = +0.78). In patients with renal failure, the terminal plasma half-life and mean residence time of diflunisal were prolonged. The renal and apparent total body clearances were lower, the mean apparent volume of distribution was higher and the mean area under the concentration-time curve extrapolated to infinity (AUC) was greater in the renal failure patients than in controls. The plasma concentration of the glucuronidated metabolites rapidly rose to levels above those of unchanged drug in renal patients, whereas they were lower than those of unchanged diflunisal in controls. The AUC (0-96 h) of diflunisal glucuronides in the patients was four-times that in controls, and the terminal elimination half-life of the glucuronides was prolonged in them. The renal excretion and clearance of diflunisal glucuronides were reduced when renal function was impaired. 
After multiple dosing, the pre-dose steady-state plasma concentration increased with decreasing creatinine clearance (r = -0.79). When the plasma concentration exceeded 200 μmol·l-1, the elimination half-life was doubled, due to partial saturation of diflunisal conjugation. This finding suggests that lower doses could be used in long-term treatment. Thus, old age and arthritic disease appear to have little influence on the kinetics of diflunisal in the absence of renal functional impairment. Ordinary doses can be given for short-term treatment of elderly patients with or without rheumatoid arthritis. In patients with renal failure, however, reduced doses of diflunisal are recommended.
The Allure of the Serial Killer

Eric Dietrich and Tara Fox Hall[1]

To appear in Serial Killers and Philosophy, edited by Sara Waller. Wiley-Blackwell, The Wiley-Blackwell Series Philosophy for Everyone, General Editor Fritz Allhoff.

The only sensible way to live in this world is without rules.
The Joker

Abstract

What is it about serial killers that grips our imaginations? They populate some of our most important literature and art, and to this day, Jack the Ripper intrigues us. In this paper, we examine this phenomenon, exploring the idea that serial killers in part represent something in us that, if not good, is at least admirable. To get at this, we have to peel off layers of other causes of our attraction, for our attraction to serial killing is complex (it mixes with repulsion, too). For example, part of the attraction is curiosity associated with the pragmatic desire to understand serial killers. Another part is the allure of safe violence, the very same allure that causes us to slow down to look at traffic accidents and that makes movies like Saw box office gold. Once we are through the initial layers of attraction, we expose the one we are interested in. Humans are not really Homo sapiens (the wise human), but rather Homo oboediens (the rule-following human), and these rules can become oppressive. Serial killers, properly sanitized, show us something, albeit in a twisted way, that we long for – a life unfettered by rules, a life where we can do exactly what we want. We close by noting the paradox that an actual serial killer is not free at all.

1. The Allure of Monsters

Question: Dante's Divine Comedy is made up of three books (or canticles), the first of which is called the Inferno. What are the names of the other two books?

Dante's Divine Comedy is considered one of the greatest works in world literature. Yet few can name all three of its books, and fewer still have read the whole thing. Most people who read it read only the Inferno, and in fact, the structure of the Inferno, with its ever-deepening circles of Hell, is a mainstay of common culture. Why do the other two books, the Purgatorio and the Paradiso, receive far less attention? It's because with their respective images of waiting interminably and of peace and plenty, they aren't vivid and exciting; they're boring. But gruesome horror is vivid and exciting. This is what the Inferno contains, indeed, mostly consists of. So now the question becomes: why is gruesome horror exciting? As mysterious as this question is, there's a bigger, more disturbing mystery: what is it in our nature that finds the monsters responsible for such horror alluring? The particular monster we are interested in is the serial killer.

That monsters are alluring is not in doubt. Nothing else explains their appearance throughout human verbal and written art. There is Humbaba, the monstrous giant in the Gilgamesh epic, dating from before 2000 BCE. Homer's Odyssey (written perhaps as early as 1100 BCE) is abundantly supplied with monsters, from Polyphemus the Cyclops, through the Laestrygonians, a tribe of giant cannibals, and finally to Scylla and Charybdis. The Old English epic poem Beowulf recounts one of the most interesting monsters, the mighty and terrifying Grendel. Over a period of many years, he attacked and ate dozens of the Danes of Heorot Hall, only to be finally bested by Beowulf, the hero of the Geats. All of these monsters systematically hunted and killed humans over a stretch of time – they were all serial killers.

The allure is just as powerful in real life. The Roman emperor Nero was an extravagant tyrant, ordering the executions and torture of perhaps hundreds of people, including his own mother. He was also fond of viciously persecuting Christians. He remains a source of inquiry and curiosity. Jack the Ripper, circa 1888, London, is still an important subject of movies, books, and historical detective work. This work continues because the Ripper's identity remains unknown. In fact, "Jack the Ripper" is an alias given to the serial killer by a letter sent to the London Central News Agency.[2] The Son of Sam, Ted Bundy, and all the other modern serial killers grip our modern imaginations. We are horrified by their killing, but cannot look away.

Plausibly, serial killing monsters show up in our art because they show up in real life. And they are alluring in our art - when they are, which is often - for one reason: the artists make them alluring. (Hannibal the Cannibal, from the movie The Silence of the Lambs, is the Platonic ideal of such an alluring monster; he is urbane, intelligent, charming, and eats his victims.) Other alluring serial killers include Sylar, Jigsaw, Jason, Dexter, and Ghostface from, respectively, the television shows/movies Heroes, Saw, Friday the Thirteenth, Dexter, and Scream, as well as Aaron Stampler, from William Diehl's books Primal Fear and Reign in Hell. But these artistic constructs we enjoy watching and reading about are just that, constructs.

Why would artists render serial killers alluring? Are any actual serial killers alluring? How do we square our horror of them and our revulsion at their killing with any allure? The answers to these questions, which we explain below, reveal how complex humans' thoughts and emotions about serial killers are. There is more than one answer at work, and, as one digs deeper, a surprising answer emerges: that what is actually alluring is the idea of the serial killer, but only when that idea is contemplated from a certain, specific, safe reference frame that allows both the positive and negative emotions associated with serial killers to be experienced at the same time. There is nothing more stimulating than surviving a brush with death, that threat to the one thing we all hold most dear: our lives. The feeling of being threatened when someone dangerous has power over us makes our hearts pound in our chests from fear. It is only when that feeling is coupled with the safe boundaries of the silver screen, or written word, that it becomes irresistible.

2. Explaining the Allure – First Look

Serial killers in real life may not be as alluring as fictional ones, but they are at least fascinating in a terrifying way. A significant part of this fascination comes from the mystery serial killers represent and our deep human need to minimize mysteries. We all ask, "Why would anyone stalk and kill one human after another?" This behavior is bizarre, senseless. So to assuage our terrified fascination, we seek reasons.

Humans require reasons, or explanations, for everything, from why the sun comes up and goes down to why the stars appear in recognizable patterns in the night sky to why people get sick and die. It is easy to see why such explanations are needed: they provide control, prediction, and, emotionally, they reduce fear. In fact, one crucial function for human religions, from the beginning, was (and is) making a dangerous, mysterious world seem humanly rational. Gods, who were a lot like people only stronger and magical, controlled everything. Gods controlled the seas and rivers, the weather, the sun and moon, the stars and planets, and even the afterlife. A well-known example is one western explanation of why spring returned every year (it was caused by Persephone's return from the underworld, where she was the consort of Hades). Therefore, explaining what happens in the world is important, especially so if what happens negatively affects us. But to this day, we cannot explain serial killers' behavior. And lacking this explanation matters a great deal, for serial killers are responsible for a significant proportion of murders in the United States. According to Kenna Quinet, the number of victims of serial killers ranges anywhere from around 350 to almost 2000 a year (there are roughly 16,000 murders in the U.S. in a typical year). See her paper "The Missing Missing: Toward a Quantification of Serial Murder Victimization in the United States" in the journal Homicide Studies.

Wanting to explain deadly events is clearly a rational want. In seeking explanations of deadly events, including serial killers, humans feel a certain amount of curiosity. One cannot seek explanations for things without somehow being drawn to the thing to be explained, even if that thing is extremely dangerous or repugnant, such as why prison rape occurs. Curiosity always has a positive emotive component; it is the sort of thing that feels good, at least somewhat, when satisfied, like discovering the reasons for the bright pastel colors of a sunset. Hence, we are drawn to serial killers in order to explain them, which we must do if we are to avoid them, or remove them from society, or prevent them from occurring. Our being drawn to them is innate; it is funded by our curiosity. This explains part of the allure of serial killers: we are just curious about them for perfectly rational reasons: we'd like to reduce the danger and horror they impose.

This curiosity-driven allure is such a rational course of action that it is common throughout the animal kingdom. Even animals with quite small brains engage in such behavior. For example, some species of fish do what's called "predator inspection." The fish, while eating, notice that something dark looms up ahead of them. It might be a large predator fish or the legs of a fish-eating bird, or it might be some floating moss or a log. Swimming away from food every time something dark looms up ahead is a good way to starve to death because it happens often. The fish have to stay and eat. This is clearly a situation where more information would be very helpful, yet the only way to get more information is to swim a bit closer to the looming dark thing and inspect it for the telltale signs of being a predator (which the fish apparently know). The fish's strategy is: if it appears to be a predator, then quickly swim off; otherwise, keep eating. Of course, the risk of gathering this extra information is that the dark looming thing might in fact be a predator, in which case the fish have just swum closer to it. Risky behavior for the sake of a good meal: it was ever thus.

3. Stalking the Deeper Reasons

But there's more to be explained here than just our need to understand and cope with serial killers. We still have to explain why serial killers are often central characters in our most horrifying movies, some of which are as deservedly famous as any movie can be (e.g., Alfred Hitchcock's Psycho). We begin explaining this by noting that humans engage in some quite peculiar behavior relative to other animals: we egregiously violate what is called the hedonistic assumption. This assumption says that for the most part animals will approach what is good and avoid what is bad. Of course, all curious animals violate this assumption to some small degree (see above). But humans are strange in the extent to which we violate it. Humans go far out of their way to find and engage in activities that are obviously aversive, things that from a purely rational perspective, should be avoided. Examples include horror movies, fear-inducing rides like roller coasters, and dangerous, extreme sports, like parachuting, bungee jumping and mountain climbing. Dangling from half-inch hand- and footholds on the edge of a sheer rock wall with only a thin rope to prevent a climber from a fall of four thousand feet is exhilarating precisely because death is so close. No other animal engages in such reckless thrill-seeking.

The key to this odd behavior seems to be that humans experience both positive and negative feelings at the same time when exposed to aversive things.[3] Such co-activation (as it is called) means that just because we are frightened doesn't mean we aren't also enjoying ourselves. Indeed, some of the most enjoyable moments of an event may be the most frightening, such as the moment a parachutist jumps from a plane out into the air. Co-activation provides a positive correlation between opposite feelings, e.g., fear and pleasure. Andrade and Cohen, the authors of the psychological study that revealed the surprising fact that humans experience negative and positive emotions at the same time (see footnote 3), use co-activation to partially explain why people go to horror movies. The idea is that our feelings of excitement and pleasure so closely co-occur with being frightened that we view the latter as causing the former. Hence, we seek out aversive actions.

4. Closing in for the Kill

However, we say "partially explain" because there is one other crucial ingredient that is needed: a protective frame. That is, moviegoers and other thrill-seekers usually won't experience any positive emotions together with their negative emotions unless there is some sort of mind-set they can enter where the danger to them is seen to be not real, or greatly minimized, or something they are confident they can deal with (see Andrade and Cohen, 2007). Hearing the Joker ask "Why so serious?" on the screen is riveting and exciting. Hearing him whisper it to us in the dark of our bedroom just as we are falling asleep would be utterly terrifying.

Serial killers in the real world obviously don't allow for a protective frame. So it looks like an explanation of their allure based on co-activation founders here. But in movies, books, and other media, a protective frame does exist. The serial killer on the screen is up there on the screen. He can't get to us; we are perfectly safe. So we feel safe to be scared to death.

5. Removing Empathy

Andrade and Cohen's full explanation of the allure of celluloid serial killers seems to work. Within a protective frame, we are free to enjoy being afraid. But the protective frame does something else, too, something disturbing. It removes any empathy with the victims. (To further help remove such empathy, serial killer movies, and slasher movies in general, almost always portray victims as thoughtless risk takers, selfish hedonists . . . in short, as someone not deserving our empathy.) Andrade and Cohen point out: ". . . high levels of cognitive empathy (i.e., perspective taking) can significantly reduce people's ability to experience positive affect when facing negative stimuli. . ." This is the key. When a protective frame removes empathy, it removes the grounding of a sense of morality and ethics. Abstraction sets in. Victims become just prey, and the monsters become more than monsters.

This loss of a moral sense opens up a path to a deeper, more satisfying explanation of the allure of the celluloid serial killer, and, ultimately, of the real one. To sum up what we have so far, the only allure of real (non-celluloid) serial killers we've uncovered is the one associated with our curiosity about them, which in turn arises from our rational need to explain and cope with them. The allure of celluloid serial killers is due to the fact that, inside a protective frame, our empathic sense and hence our morality towards the killer's victims vanishes, and we are left with our feelings of pleasure caused by excitement and fear, i.e., negative emotions co-activating positive ones.

6. The Prison of Rules

Feeling empathy for others is the basis for morality and ethics. Morality is usually defined as other-regarding behavior, behavior based on empathy. That is, morality's essence lies in taking another living being's welfare seriously. Often, such a morality flows freely and naturally from each of us to those we interact with. But many philosophers have noted that this natural tendency isn't enough. Relying only on it is not a good way to infuse enough of the needed morality into the world. Such philosophers have suggested that morality manifests itself as a requirement on each of us who seeks to be a moral person. Thus morality must be taken further, to the point where we are required to take another's welfare as seriously as our own.

However, once we are inside the protective frame, this requirement vanishes because others' welfare becomes nonexistent. Indeed, we can say that within the protective frame there are no others; there are just ourselves and objects. The question emerges, then: is it moral to enter a protective frame? This question takes on an edge because with loss of a moral anchor within a frame, other aspects of the serial killer can come to the fore. And some of these other aspects are alluring in perhaps much more dangerous ways. To get at this, we start from a new direction.

Human beings are immersed in rules – it would be hard to overstate how immersed. Our species really should be called Homo oboediens – the rule-following human. Rules form the girders of all our highly structured groups, communities, societies, and cultures. Actually, cultures are just collections of rules, which those of us within a culture learn and internalize. Languages, essential to being human, are intricate rule-following productions of sounds. All of our religions are repositories of rules controlling the most intimate aspects of our lives: who we can marry, who we can have sex with and when, what we can eat and when, who we must kill, when we should kill ourselves. Games, ubiquitous in human cultures, are impossible without rules. Art, poetry, music, dance . . . are all based on rules, and meaningless shapes and noise without them. Even the great rule-breaking art requires rules to break. Cubism and Dadaism in painting and the visual arts, the poetry of e. e. cummings, the later writings of James Joyce, 12-tone music, or any "music" by John Cage (his famous composition 4′33″ is three movements of noteless music – the audience is meant to hear the sounds of the surrounding environment while it is being performed) . . . all of these wouldn't even exist if it weren't for rules. Finally, science is not only profoundly rule-based, but exists solely to unearth the rules that govern the universe and all things in it, none of which could exist without rules.

The vast majority of these rules are implicit. Stopping at stop signs and red lights is due to following explicit rules. But most other rules operate implicitly, controlling us without our conscious involvement. We effortlessly learn these implicit rules, and they effortlessly control us. Very often, this control makes life on planet Earth better than it would be otherwise.

Yet, in spite of the role in our lives of this vast matrix of rules, humans are also individual selves. And herein lies the problem. Rules, by definition, require everyone to obey them. Indeed, rules' reason for being is this very obedience: rules are about both control and homogenization. Under such conditions, it is hard to be a self, for one's self tends to merge completely with the rule-governed masses. Selves wind up struggling to exist as selves. To win this struggle, or to even not lose it, requires public self-assertion, usually in the form of rule-breaking. Why public self-assertion? Because selves derive their selfhood in large measure by defining themselves relative to others. This is true throughout much of the animal kingdom. For example, it is not possible to be the dominant alpha male or female in some group without also asserting one's independent self-hood. Back to humans: great athletes can't win Olympic medals or break world records unless they first distinguish themselves from the crowd. New music, art, trends, or inventions can't come into being unless the creator breaks out on his own in a new direction.

This means that the crucial commodity for the self is freedom; the self requires freedom for its existence. Rule-breaking is asserting or grasping freedom by breaking out of the chains imposed by all the rules we have to follow. The self's search for this freedom is epic . . . and costly, a fact noted by many throughout the ages. From the story of Icarus to Catcher in the Rye, the struggle to be a self figures prominently in great literature and other art, where it is revealed as a struggle of ultimate importance. We quote e. e. cummings: "To be nobody-but-yourself in a world which is doing its best, night and day, to make you everybody else - means to fight the hardest battle which any human being can fight; and never stop fighting."[4]

Humans struggle to be both selves and rule-followers. We all both seek and eschew freedom, and suffer the consequences of this struggle. The importance of this can be seen by noting that the sum of all human rules still leaves room for assertion of one's self. Even military organizations leave room for some self-assertion. Personalities shine through in the form of speech patterns, stated beliefs and goals. But it's not enough.[5] So, rules, while good in many ways, are also bad because they make it hard to be a self. Very often, therefore, we break the rules. Usually, we just dip our toes in the sea of rule-breaking: we drive a little over the speed limit, we lie, we dress inappropriately, we use unsuitable language, we buy a Harley. Every so often, a few of us step up to our ankles in that sea. And sometimes, a tiny number of us swim far out into it . . . with deadly results.

In this battle between asserting one's self and merging one's self with the rule-following collective, serial killers stand as an avatar of ultimate freedom. They appear so unbound by the rules of civilized society that they wantonly commit one of the few acts regarded as wrong in all human cultures: they murder. And as a final declaration, they murder for their own personal reasons.

This view of serial killers as alluring individuals arises in those of us who are not serial killers and who are not in any immediate danger from one; in other words, in those of us who are in a protective frame – a frame of physical and psychological distance. Moreover, we think that this view of serial killers as ultimate avatars of freedom is not explicitly conscious in most people. It works behind the scenes, generating allure and causing people to be drawn not to serial killers per se, but to the idea of serial killers (and then, only certain ones).

There is one other aspect of this cause of the allure: all of us who follow the rules and struggle to be moral want to know if the rules have any substantiality, any genuineness. Many of us fear that the rules are just a thin veneer of modern civilization. But part of us also hopes for this. We want to know how thin the veneer is and how much force it would take to break through. To quote another alluring killer, the Joker, as he refers to all of us: "You see, their morals, their code, it's a bad joke. Dropped at the first sign of trouble. They're only as good as the world allows them to be. I'll show you. When the chips are down, these civilized people, they'll eat each other. See, I'm not a monster...I'm just ahead of the curve."[6] So, we sit down with our favorite serial killers from fiction -- the Joker, Grendel, Jigsaw, Hannibal, etc. -- and we try to figure out if they really are monsters or not. Our rule-following part believes that their ultimate freedom is purchased at a great and terrible price, proving that such freedom is false and worthless. But our deeper, inner selves long for their freedom, if only to stay ahead of the curve.

7. Conclusion

In our usually unacknowledged desire to break free from society's rules, we not only condone the celluloid serial killer's actions, we champion them. We have taken real killers and transformed them into sympathetic heroes and put them in our stories. A moral question arises: should we be making art celebrating serial killers?

A deep irony exists here, of course. Real serial killers are not free at all. They kill for pathological reasons that push them along like a raging torrent pushes along whole trees and gigantic boulders. Moreover, real serial killers kill real people, not just those who "deserve" to die, as in the stories. They are not the heroes we idealize them as; they don't kill to escape boundaries, they kill to maintain their own perverse boundaries.

For most of us, sitting down with a real serial killer would not be an artistic, philosophical experience, but rather a terrifying, repulsive one. Yet, within a protective frame, safe and secure, we transform the serial killer, already an object of deep curiosity because of his or her fearsomeness, into an icon, an avatar. We revel in this transformation, exploring our darker side that longs to know what it feels like to be the one giving the orders instead of taking them, doing everything we want to, impervious to the consequences.

Referenced Readings

Andrade, Eduardo and Joel Cohen (2007). "On the Consumption of Negative Feelings." Journal of Consumer Research, vol. 34, Oct. 2007, pp. 283–300.

Cummings, E. E. (1958). "A poet's advice to students," in E. E. Cummings: A Miscellany, edited by George James Firmage.

Neisser, Ulric and David A. Jopling, eds. (1997). The Conceptual Self in Context: Culture, Experience, Self-understanding. Emory Symposia on Cognition. Cambridge University Press.

Quinet, Kenna (2007). "The Missing Missing: Toward a Quantification of Serial Murder Victimization in the United States." Homicide Studies, vol. 11, no. 4, pp. 319–339.

Sugden, Philip (2002). The Complete History of Jack the Ripper. New York: Carroll & Graf. pp. 260–270.

[1] Authors' addresses: Philosophy Department, Binghamton University, Binghamton, New York, and P. O. Box 372, Lisle, New York.

[2] Philip Sugden (2002). The Complete History of Jack the Ripper.

[3] That we humans simultaneously experience conflicting emotions when violating the hedonistic assumption was unexpected, but we do. See Eduardo Andrade and Joel Cohen's paper "On the Consumption of Negative Feelings."

[4] "A poet's advice to students," in E. E. Cummings: A Miscellany.

[5] At least it's not enough in Western cultures. Some Asian cultures differ, notably Japanese culture. In general, different cultures vary considerably in the importance they ascribe to the self. However, in cultures that ascribe less importance to the self, it is not clear whether selves exist to any lesser degree in individual humans or if, rather, the selves are repressed by obedience to the rules. See, e.g., The Conceptual Self in Context, by Ulric Neisser and David A. Jopling. Cambridge University Press, 1997.

[6] Quoted from the movie The Dark Knight, 2008.
Story highlights
- Kirk Wiebe: Edward Snowden is entitled to amnesty in the U.S. without fear of incarceration
- Wiebe: Snowden reported surveillance of Americans that violated the Constitution
- Wiebe, an NSA whistleblower, says federal employees swear to uphold Constitution
- Wiebe: People who designed, implemented the surveillance also deserve a fair trial

Edward Snowden deserves amnesty and the ability to return to the United States without fear of being incarcerated for reporting crimes by people in high places in the U.S. government. Monday's ruling by U.S. District Judge Richard J. Leon that the NSA's widespread collection of millions of Americans' telephone records was unconstitutional bolsters this view.

But for some, whether to give Snowden amnesty is not an easy matter to reconcile. After all, they say, he broke laws in divulging classified information. Indeed, some say he is a traitor. But just as a member of the U.S. military is not required to follow an unlawful order, it is proper that an employee of the United States intelligence community -- NSA, CIA, DIA and others -- should report any information that concerns law-breaking by the intelligence agencies or their employees.

An NSA official's suggestion that amnesty for Snowden could possibly be put on the table was undoubtedly welcome news for Snowden, yet NSA Director Gen. Keith Alexander rejected the suggestion. But how can anyone believe that Snowden would not be deserving of amnesty? Clearly it is the government and its senior officials who committed the crime -- people who took oaths to defend the Constitution from enemies both foreign and domestic and who failed to take to heart the words they swore to uphold. Indeed, Snowden did not -- nor does any government employee -- swear allegiance to the president of the United States, or even to the secretary of Defense or the director of NSA. No, he swore to uphold and defend the Constitution.
Unfortunately, while federal law protects whistleblowers who work in other government sectors from reprisals for truth-telling and provides paths for reporting wrongdoing and mismanagement, those who work in intelligence are expressly denied such rights. When Senior Staff Representative Diane Roark and longtime senior NSA employees Bill Binney, Ed Loomis, and I submitted a formal complaint about mismanagement at the agency, the government's response on July 26, 2007, was to send the FBI to raid our homes, searching them for seven hours and seizing our computers, phones and other digital media. We are just now getting our property back after having successfully sued the government in December 2012. The government even indicted Tom Drake, although it dropped its criminal charges in the case against him. Still, for the five of us, it was the equivalent of a punch in the face and a warning to other would-be "truth-tellers" not to report wrongful government activities, or the government will come after you. Snowden clearly saw what the government does to whistleblowers who try to work within government to fix things that are wrong. He knew that our complaint to the United States Department of Defense inspector general in September 2002 went for naught. Although the report agreed that our complaint was well-founded, nothing happened -- no one was found guilty of wrongful behavior or waste of hundreds of millions of taxpayer dollars.
Even before writing the complaint, we -- all longtime and senior NSA employees -- along with Diane Roark, a senior staffer on the House Permanent Select Subcommittee on Intelligence, had approached Congress in 2001 about the matter of illegal collection of data about U.S. citizens. No action. Snowden might have known that we were ultimately punished for approaching officials, and even had our security clearances revoked when the FBI raided our homes -- despite the fact that four of the five of us were not indicted and none of us was found guilty of committing a crime. For employees in the business of intelligence, there are no honest brokers, no viable paths to follow to report the subverting of the U.S. Constitution. It is the reason Snowden went first to Hong Kong and ultimately Moscow to seek refuge. He did not go to those places to give away national secrets; rather, he needed a place to stay that was safe from extradition and where he could wait while the United States sorted through the facts, especially those regarding government leaders who violated the most basic of our nation's laws -- the right to privacy. It was shocking to see the interview on MSNBC a few years ago with the former director of NSA, Michael V. Hayden, and hear him redefine the Fourth Amendment of the U.S. Constitution. When asked whether NSA had violated the Fourth Amendment, Hayden said it had not. Hayden said "probable cause" was not the Fourth Amendment's standard for violating a citizen's privacy -- it was based on "reasonable suspicion." Recognizing that the whole matter of secret presidential orders and extreme interpretations of the Constitution in regard to executive wartime authorities by the U.S. Department of Justice could be the subject of a book by itself, one thing is clear -- no one asked either the Supreme Court or the people of the United States whether bulk collection of citizens' phone metadata was constitutional.
As we saw on Monday, Judge Richard Leon does not think so. In recent days, Hayden defended the actions of both the Bush and Obama administrations, stating that the NSA collection program was "blessed" by all three branches of the U.S. government. What Hayden has not said is that neither the Foreign Intelligence Surveillance Court nor Congress had a good understanding of what was going on. The NSA contends it provided Congress with the opportunity to be briefed on the surveillance, but some members of Congress dispute that. Snowden's revelations since June have certainly made it clear that no one -- except the NSA -- believes they had the whole truth about the extensiveness of its data collection efforts, whether from the Internet or from the phone system. Perhaps more germane to this discussion of whether Snowden should receive amnesty, and the matter of who committed the real crime -- Snowden or the government -- is that the legal basis for NSA in defending its actions can be found in a single court case, Smith v. Maryland (1979), which went to the Supreme Court at a time when there was hardly an internet and nobody even dreamed there would be cell phones, social network sites or Twitter. In this case, touted by the government as legitimizing the bulk collection of metadata under Section 215 of the Patriot Act, the police inserted a recording device at the telephone company to record the metadata -- phone number originating the call, time of call, number called and duration of conversation -- associated with a man suspected of robbing a lady. The alleged thief challenged the constitutionality of the police recording the metadata associated with the phone call, but the Supreme Court backed the lower court's decision that doing so under the circumstances was constitutional.
Now, one might ask: how does the Supreme Court's approval of the collection of metadata associated with a single phone call made by a suspected thief end up authorizing the bulk collection of phone metadata of hundreds of millions of American citizens by the most powerful spy agency in the world? We all know that the field of law has its quirks, but it's clear such an interpretation of law does not constitute justice, let alone make sense. With those facts as background, I think most Americans would agree that Edward Snowden deserves amnesty. In fact, it is those who allowed these programs to be implemented and developed over the past 12 years who should be prosecuted. After all, do we not stand for "equal justice for all"?
2023-09-13T01:26:29.851802
https://example.com/article/5386
Q: Problem: when I redraw on a grease pencil layer, new strokes don't appear above the old strokes I'm trying to remake a grease pencil file and I forgot one little shading detail on the body part. I went back to add the shading on that layer, but it appears under the old strokes instead of over them. How can I fix this? See the images below. I'm 100% positive it is on the same layer. A: Enter Edit Strokes mode; in the tool panel you will find a menu that orders the selected strokes inside the layer.
2023-10-22T01:26:29.851802
https://example.com/article/1647
The present invention relates to containers in general, and more particularly to containers which give evidence of tampering and which discourage counterfeiting. Users of high-value spare parts and consumables can be targeted by third-party manufacturers of substitute products which have a similar appearance to the authentic product, but which lack the oversight and quality control that contribute to optimal performance. These substitutes can be of much lower value than the original equipment manufacturer's product, and markets will reflect this in a lower price. Unscrupulous retailers attempt to mislead their customers by passing off the lower-value product as an authentic OEM part, thereby securing an unrealistically high price from a deceived customer. There is even a danger that such a retailer will reuse the authentic OEM packaging to give an appearance of authenticity to the counterfeit goods. What is needed is a convenient package that gives evidence of tampering and which discourages reuse.
2024-04-05T01:26:29.851802
https://example.com/article/8311
Re: Lol... if I didn't know the critics' and GA's score for this movie... Quote: Originally Posted by Gotham's Knight Really? I have not met a single person in real life that has liked them. The best I've heard was "meh", which I see reflected by the ratings on various sites: TPM: RT: 57% IMDB: 6.5/10 MC: 51% AOTC: RT: 67% IMDB: 6.7/10 MC: 53% ROTS does seem to have a lot more positive reception, however. I personally thought it was the best of the three but certainly not very good. Maybe it's just me, but the ratings for the first two just come off as a general "meh". RT and MC are not exactly websites I use to try to prove a point (that case with the rating about TA vs. TDKR was just to show how pointless they are). IMDB is close to being more in line with what I was talking about, but IMDB is run by either fanboys or haters. Really, it's the vocal minority that gives a bad name to the prequels, where the fans have just been fed up with going in circles in the same arguments (just like what's happening with TDKR on the internet). As for outside the internet, almost everybody I know likes or loves them, and only three people I know dislike them, yet they don't call them bad.
2024-01-05T01:26:29.851802
https://example.com/article/1066
Q: Feedback on Optimizing C# .NET Code Block I just spent quite a few hours reading up on TCP servers and my desired protocol I was trying to implement, and finally got everything working great. I noticed the code looks like absolute bollocks (is that the correct usage? I'm not a Brit) and would like some feedback on optimizing it, mostly for reuse and readability. The packet formats are always int, int, int, string, string.

try
{
    BinaryReader reader = new BinaryReader(clientStream);
    int packetsize = reader.ReadInt32();
    int requestid = reader.ReadInt32();
    int serverdata = reader.ReadInt32();
    Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}", packetsize, requestid, serverdata);

    List<byte> str = new List<byte>();
    byte nextByte = reader.ReadByte();
    while (nextByte != 0)
    {
        str.Add(nextByte);
        nextByte = reader.ReadByte();
    }
    // Password Sent to be Authenticated
    string string1 = Encoding.UTF8.GetString(str.ToArray());

    str.Clear();
    nextByte = reader.ReadByte();
    while (nextByte != 0)
    {
        str.Add(nextByte);
        nextByte = reader.ReadByte();
    }
    // NULL string
    string string2 = Encoding.UTF8.GetString(str.ToArray());
    Console.WriteLine("String1: {0} String2: {1}", string1, string2);

    // Reply to Authentication Request
    MemoryStream stream = new MemoryStream();
    BinaryWriter writer = new BinaryWriter(stream);
    writer.Write((int)(1)); // Packet Size
    writer.Write((int)(requestid)); // Mirror RequestID if Authenticated, -1 if Failed
    byte[] buffer = stream.ToArray();
    clientStream.Write(buffer, 0, buffer.Length);
    clientStream.Flush();
}

I am going to be dealing with other packet types as well that are formatted the same (int/int/int/str/str), but different values. I could probably create a packet class, but this is a bit outside my scope of knowledge for how to apply it to this scenario. If it makes any difference, this is the protocol I am implementing.
http://developer.valvesoftware.com/wiki/Source_RCON_Protocol

A: Thoughts:

- you aren't really using the reader except for a few ints; otherwise, all you need is ReadByte
- you can do that from the Stream, and save some indirection/confusion
- read the ints manually to avoid endianness issues
- reading byte by byte can be expensive; if possible, try to fill a buffer (or rather: read the right amount of data) by looping over Read rather than ReadByte
- if multiple messages are coming down the same pipe, reading to EOF will probably fail (either corrupt the data or block forever); you usually need either a terminator sequence or a length-prefix. I prefer the latter, as it lets you use Read instead of ReadByte. I assume that is packetSize in your example; it is critical to use this: to separate the messages, to verify you have an entire message, and to deny over-sized data
- consider whether async (BeginRead) is suitable - sometimes yes, sometimes no; and note that this makes disposal trickier as you can't use "using" with async
- when using MemoryStream, using .GetBuffer() in combination with .Length has less overhead than using .ToArray()

A: The first thing that jumps out to me is to always use the using statement with any object that implements IDisposable. This will ensure that your objects are properly disposed of even in the event of an exception.

private void FillList(BinaryReader reader, List<byte> list)
{
    while (reader.PeekChar() != -1)
    {
        list.Add(reader.ReadByte());
    }
}

...
try
{
    int packetsize, requestid, serverdata;
    string string1, string2;
    List<byte> str = new List<byte>();

    using (BinaryReader reader = new BinaryReader(clientStream))
    {
        packetsize = reader.ReadInt32();
        requestid = reader.ReadInt32();
        serverdata = reader.ReadInt32();
        Console.WriteLine("Packet Size: {0} RequestID: {1} ServerData: {2}", packetsize, requestid, serverdata);

        // Password Sent to be Authenticated
        FillList(reader, str);
        string1 = Encoding.UTF8.GetString(str.ToArray());

        str.Clear();
        FillList(reader, str);
    }

    // NULL string
    string2 = Encoding.UTF8.GetString(str.ToArray());
    Console.WriteLine("String1: {0} String2: {1}", string1, string2);

    // Reply to Authentication Request
    using (MemoryStream stream = new MemoryStream())
    using (BinaryWriter writer = new BinaryWriter(stream))
    {
        writer.Write((int)(1)); // Packet Size
        writer.Write((int)(requestid)); // Mirror RequestID if Authenticated, -1 if Failed
        byte[] buffer = stream.ToArray();
        clientStream.Write(buffer, 0, buffer.Length);
        clientStream.Flush();
    }
}
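Independent of the C# specifics, the wire format discussed in this thread (little-endian int32 size covering everything after itself, int32 request id, int32 type, a null-terminated body string, then one more empty null-terminated string, per the linked Source RCON page) can be sketched as a pair of pure functions. This is an illustrative Python sketch; the helper names build_packet and parse_packet are mine, not part of the protocol:

```python
import struct

def build_packet(request_id, packet_type, body):
    # size field is little-endian int32 and counts everything after itself
    payload = (
        struct.pack("<ii", request_id, packet_type)
        + body.encode("utf-8") + b"\x00"   # body, null-terminated
        + b"\x00"                          # trailing empty string
    )
    return struct.pack("<i", len(payload)) + payload

def parse_packet(data):
    """Return (size, request_id, packet_type, body) for one packet."""
    size, request_id, packet_type = struct.unpack_from("<iii", data, 0)
    end = data.index(b"\x00", 12)          # body starts at offset 12
    body = data[12:end].decode("utf-8")
    return size, request_id, packet_type, body
```

A packet class in C# could follow the same shape: a static Parse method reading the three ints and the terminated strings, and a Serialize method mirroring build_packet, so each new packet type only supplies its own type constant and body.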
2023-08-31T01:26:29.851802
https://example.com/article/9020
If you’ve spent time trying to ramp up on Elasticsearch and configure a local cluster, you may be wondering if there’s a better way. Perhaps you’re in need of solid advice, and maybe you’d like to find an easier path. In this article, we summarize a number of best practices in managing an ES cluster. Also, we provide many links to specific information that you can find in our extensive blog and knowledge base. Choose the Right Number of Shards Specifying too few shards per index will keep you from exploiting the full potential of your cluster. However, you risk performance issues by increasing the number of shards too much. OK then, you may ask: How many is too many? Read all about it in our hot article, Optimizing Elasticsearch: How Many Shards per Index?. A Test Cluster is Essential If you’re doing anything other than trivial experimentation, you will most definitely need a test cluster. Elasticsearch has an extensive array of configuration options and rich APIs, and we strongly recommend that you experiment with different approaches as you seek to find the best solution for your app stack. You can try different settings and look for variations in behavior. If you’re running locally, try making changes in both the elasticsearch.yml configuration file and using the cluster settings API. Also, don’t forget to test failure modes. These are questions that are important to answer: How many nodes can you lose and still accept reads and writes? What happens if one of your nodes runs out of disk space? What happens if you set ES_HEAP_SIZE too small? We also want you to know that there’s another way. A path of less resistance. You could avoid much of this tedium and ease your administration burden by migrating to hosted Elasticsearch services. We offer a number of tools and support options, including the following: Qbox also offers troubleshooting guidance and free 24/7 support. 
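The two configuration paths mentioned above look quite different in practice: static settings live in elasticsearch.yml, while dynamic settings can be changed at runtime through the cluster settings API. As a sketch, a transient change via the API looks roughly like this (the setting shown, cluster.routing.allocation.enable, is a real dynamic setting often toggled during maintenance; substitute whatever you are experimenting with):

```
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
```

Transient settings disappear on a full cluster restart, which makes them convenient for test-cluster experiments.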
Practice Cluster Restarts and Node Outages Since cluster restarts can be common, you’ll want to do rolling restarts so that you minimize downtime. You’ll need to do these for most configuration changes, and whenever you upgrade Elasticsearch versions. This is also an important consideration if your dataset grows and you need to scale. Read more in our article Thoughts on Launching and Scaling Elasticsearch. Size your Cluster Carefully Since Elasticsearch is resource intensive, you may not be able to justify as much hardware for your test cluster. We strongly recommend that your test cluster be as close as possible to the configuration of your production cluster, aiming for the same VM size and type. Remember, you can run multiple nodes on the same machine—if absolutely necessary. Be Generous with Memory If you have a large dataset, try to allocate as much memory as you can. The adequate amount of memory varies with application type and load, so it’s important to measure memory usage on your cluster from the very beginning. We recommend these articles for thoughtful guidance on sizing your cluster: Setting Up Visual Tools For quick troubleshooting and frequent stats, it’s good to have an easy-to-use dashboard to check the current state of your clusters. We highly recommend the Kopf tool. Avoid Routing on Data Nodes There are several options for routing in Elasticsearch. One popular option is to place a round-robin proxy in front of all your nodes. However, if your cluster will experience intensive use, then it’s better to handle the routing with a dataless node. The reason non-data node routing is more effective (at scale) than a simple round-robin HTTP proxy is that a dataless node has a copy of the cluster state—a table of shards and the corresponding nodes. Since it knows the state of the entire cluster, the dataless node will know the specific node(s) that will get the request.
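To make a node "dataless" in this sense, its elasticsearch.yml disables the data and master roles. This is a sketch for the pre-5.x versions that tooling like Kopf implies; newer versions express the same idea differently (via node.roles), so check the docs for your release:

```yaml
# elasticsearch.yml on the routing-only node:
# holds no shards and never becomes master -- it only
# coordinates requests using its copy of the cluster state
node.master: false
node.data: false
```

Clients then point at this node (or several of them behind a plain load balancer), and it forwards each request directly to the shards that can serve it.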
For the case in which a simple HTTP proxy is placed in front of data nodes, the request goes to whichever data node (usually random or round-robin). The node that receives the request must examine its state and then perform the search locally or pass the request off to the appropriate node. We hope that you find this article helpful, and we invite you to make comments below.
2024-03-05T01:26:29.851802
https://example.com/article/8676
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
2023-11-17T01:26:29.851802
https://example.com/article/8769
Studies on hypoxic dyslipidaemia. Effect of lipid modulating drugs. Haemorrhagic anaemia, exposure to altitude and depression of cell respiration are known to increase plasma triglyceride and cholesterol levels. The common triggering mechanism in all 3 instances is oxygen deficiency, but the mode of action is not known. The present study was intended to investigate whether the lipid lowering drugs clofibrate or gemfibrozil could counteract such hypoxic dyslipidaemia in rats induced by altitude exposure. It was unexpectedly found that in normal rats gemfibrozil elevated plasma total cholesterol with an increase in the HDL-cholesterol component, whereas clofibrate caused a rise in LDL-cholesterol. Nicotinic acid had no consistent effect. In hypoxia, control rats showed an increase in cholesterol and triglyceride levels. Both gemfibrozil and clofibrate prevented the rise of triglycerides. Total cholesterol fell in rats treated either with gemfibrozil or clofibrate during altitude exposure, indicating that neither natural nor gemfibrozil-augmented hyper-HDL-aemia could be maintained during oxygen deficiency.
2024-05-28T01:26:29.851802
https://example.com/article/2420
myCampusNotes / GGSIPU 2018 B.Tech College Comparison(s) Remember: The faculty strength is an estimate, taken strictly from the data provided on the college website. The data on this website is just a rough estimate of the data provided. You are also advised to consult the college regarding the same. The student/faculty ratio is a rough estimate based on the number of students; note that we did not include some extra seats for lateral entry, this is for indicative purposes only, and the faculty number might have changed. You are advised to also consult the college regarding the same. The information provided here is for educational purposes only. The campus area is only indicative and the reported area may or may not be the same. You are advised to visit the college before coming to any final conclusion. The fees provided here are annual fees (1 year) and might have changed; this is only an approximate fee and the original fees may or may not be the same. You are requested to visit the college website/college for further information. Still confused? Here is what others are searching. More confused? Don't worry! Here is every possible combination of GGSIPU University college comparisons available.
2023-09-22T01:26:29.851802
https://example.com/article/1430
Gilman City, Missouri Gilman City is a city in Daviess and Harrison counties in the U.S. state of Missouri. The population was 383 at the 2010 census. History Gilman City was platted in 1897 when the railroad was extended to that point. A post office called Gilman City has been in operation since 1897. The city has the name of Theodore Gilman, a railroad banker. Geography Gilman City is located at (40.141523, -93.873025). According to the United States Census Bureau, the city has a total area of , all land. Demographics 2010 census As of the census of 2010, there were 383 people, 156 households, and 108 families living in the city. The population density was . There were 196 housing units at an average density of . The racial makeup of the city was 99.5% White and 0.5% from two or more races. Hispanic or Latino of any race were 0.8% of the population. There were 156 households of which 35.9% had children under the age of 18 living with them, 53.2% were married couples living together, 7.7% had a female householder with no husband present, 8.3% had a male householder with no wife present, and 30.8% were non-families. 25.6% of all households were made up of individuals and 12.2% had someone living alone who was 65 years of age or older. The average household size was 2.46 and the average family size was 2.93. The median age in the city was 35.6 years. 26.1% of residents were under the age of 18; 9.9% were between the ages of 18 and 24; 24.4% were from 25 to 44; 25% were from 45 to 64; and 14.6% were 65 years of age or older. The gender makeup of the city was 48.8% male and 51.2% female. 2000 census As of the census of 2000, there were 380 people, 158 households, and 100 families living in the city. The population density was 444.9 people per square mile (172.6/km²). There were 207 housing units at an average density of 242.3 per square mile (94.0/km²). The racial makeup of the city was 99.47% White, and 0.53% from two or more races. 
Hispanic or Latino of any race were 0.53% of the population. There were 158 households out of which 31.0% had children under the age of 18 living with them, 53.8% were married couples living together, 7.0% had a female householder with no husband present, and 36.1% were non-families. 31.6% of all households were made up of individuals and 19.0% had someone living alone who was 65 years of age or older. The average household size was 2.41 and the average family size was 3.03. In the city the population was spread out with 26.1% under the age of 18, 12.1% from 18 to 24, 22.4% from 25 to 44, 19.7% from 45 to 64, and 19.7% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 91.0 males. For every 100 females age 18 and over, there were 95.1 males. The median income for a household in the city was $26,042, and the median income for a family was $33,482. Males had a median income of $21,518 versus $16,250 for females. The per capita income for the city was $12,413. About 4.0% of families and 9.8% of the population were below the poverty line, including 3.5% of those under age 18 and 13.3% of those age 65 or over. References Category:Cities in Harrison County, Missouri Category:Cities in Daviess County, Missouri Category:Cities in Missouri
2023-12-25T01:26:29.851802
https://example.com/article/2310
Monday, July 23, 2012

The Story of Anatoly Onoprienko

Unwanted Overtime

Map of the Ukraine (AP)

Ukraine is the second largest country in Europe after Russia, and it is located in the eastern quadrant. The country has rarely stood alone and has been subjugated at one time or another by Poland, Lithuania and Russia. The population of the Ukraine is estimated to be approximately 50 million. The territory of the Ukraine is mostly a level, treeless plain, except for the Crimean Mountains in the Crimean peninsula and the Carpathians in the west. The climate is moderate and winters are relatively mild with no severe frosts. Because of these positive climatic conditions, the Ukraine is by tradition an agricultural area. They grow wheat, maize, buckwheat and a wide variety of fruits and vegetables. The Ukraine is also one of the world's main centers of sugar production. The country is also rich in natural resources, such as iron ore, coal, various metal ores, oil, gas, etc., and has a variety of industries concentrated mostly in and around big cities, such as Kiev, Zaporozhye, Dnepropetrovsk, and Dnyeprodzerzhinsk. They produce planes and ships, cars, buses, locomotives, computer and electronic equipment, precision instruments, agricultural machines, and various other consumer goods. Odessa, Sebastopol, Nickolayev, Kherson and Kerch are the Ukraine's main ports. A massive Soviet military base once dominated the town of Yavoriv, located in Western Ukraine, but after the end of the Cold War, the base has been cut in size, and religion now dominates the area. Nobody works Sunday, much less Easter Sunday. Nobody, that is, except the police, for whom any holiday means double shifts and unwanted overtime. Investigator Igor Khuney usually has Sundays off; however, by 10:00 in the morning on April 7, 1996, he was on his beat in the military housing area as part of an added holiday detail.
At the precinct house a few kilometers across town, Khuney's boss, Deputy Police Chief Sergei Kryukov, was sitting in his office, stirring his fifth cup of tea that day. He'd been at work since midnight the previous day and was trying his best to stay alert. Both men were prepared for a long evening: holidays always mean more public drinking and, subsequently, more work for police. Neither police officer had the faintest idea that, within a matter of hours, he would be involved in the arrest of a suspect in one of the worst series of murders in modern history. Nor did the two have any idea that they would get no credit for their work.

A Killer Unmasked

Sometime around noon Officer Khuney received a strange call from a man by the name of Pyotr Onoprienko. According to Pyotr, he had recently stumbled upon a stash of weapons hidden in his home. He had suspected that they belonged to his live-in cousin, Anatoly Onoprienko, and ordered him to pack up and move. Anatoly had become enraged at his cousin's accusations and told Pyotr that he better watch out, because he would take care of his cousin's family on Easter. Obviously fearing for the safety of his family, Pyotr wanted Khuney to investigate the threat. Pyotr told the investigator that his cousin had recently moved in with a woman and her child in the nearby town of Zhitomirskaya. The information about the suspicious character from the Zhitomirskaya intrigued Kryukov, who had just read a police report that a 12-gauge, Russian-made Tos-34 hunting rifle -- the type used in a recent local killing -- had been reported stolen in the Zhitomirskaya area. "It was a long shot, but I thought, here we've got an armed guy from Zhitomirskaya, and a weapon missing. And we don't have too many people from Zhitom come here," said Kryukov. "If I hadn't gotten the (tip) that morning, I might never have considered it. But as it was, I had to think about it."
Concerned, Kryukov quickly called superiors in the Lviv police headquarters for advice on how to proceed. Lviv police chief, General Bogdan Romanuk, instructed Kryukov to form a task force and conduct a search of Anatoly Onoprienko's apartment. Within an hour, over 20 patrolmen and detectives were assembled, and the group set off for Ivana Khristitelya Street in unmarked cars. The suspect shared an apartment there with a Yavoriv hairdresser "Anna" and her two children. The exits to the suspect's building were blocked with unmarked cars and two men guarded the fourth and second floors. The remaining investigators surrounded the building. Khuney, Kryukov and patrolman Vladimir Kensalo then approached the suspect's door.

Anatoly Onoprienko mugshot

Kryukov had no idea whether Anna and her two children were home. Unbeknown to investigators, they were at church, and Anatoly Onoprienko, whom the children now called "Dad", was expecting them home any minute. When Kryukov rang the doorbell, Onoprienko assumed that it was Anna and opened the door without hesitation. To his surprise, he was quickly subdued and handcuffed. As Kryukov looked around the suspect's apartment, he noticed an Akai stereo in the living room. The stereo caught his eye because the Novosad family, recently murdered in nearby Busk on March 22, 1996, had a similar stereo, which was reported missing by family members shortly after their murder. "I had a list, which I always carried around, of certain items that had been reported missing, their makes and serial numbers," said Kryukov. "And the Akai matched the Busk crime scene." When police asked Onoprienko for his identification, he led them to a closet. As an investigator opened the closet door, Onoprienko dove for a pistol he had previously hidden inside. Despite his efforts, he was quickly subdued and unable to get to it in time. The pistol, as it would turn out, was the second piece of evidence: it had been stolen from a murder scene in Odessa.
Realizing the seriousness of the situation, investigators escorted Onoprienko back to police headquarters and began a comprehensive search of the premises. By the end of the day, 122 items, belonging to numerous unsolved murder victims were recovered from the scene, including a sawed-off Tos-34 rifle. As the search at Ivana Khristitelya Street was winding down, Anna came home. "She understood that something serious had happened, and asked me what was going on," Kryukov said. "There was nothing to do. I took her aside and said, 'Do you remember those killings in Bratkovichi?' and she broke down crying.

Silence

Although they had a mountain of material evidence, Kryukov needed a confession. Nonetheless, Onoprienko immediately made it clear that he was not interested in talking. When Kryukov confronted him with the facts, Onoprienko showed little reaction and just smiled. I'll talk to a general, but not to you, he said. Yavoriv's lead investigator, Bogdan Teslya, had not been involved in the arrest or initial search. At the time of the operation, he had been at home relaxing with his family. Shortly after the search at Onoprienko's apartment was finished, at approximately 9:00 at night, he got a phone call from Kryukov asking him to come in and handle the interrogation. Teslya was considered by Khuney and other investigators to be the best interrogator in the area, because of his personality and ability to speak calmly with suspects. At police headquarters, Onoprienko had waived his right to an attorney and continued to remain silent. Despite his announcement that he would speak to no one below the rank of general, Teslya considered it imperative to try to get as much information as he could. I was terrified that it would go wrong, he said. In this kind of case, you never know what will happen. He might hang himself in his cell by the next morning, and then you'd never be able to really close the case. We needed to get him to speak.
Beginning at 10 p.m., Teslya sat alone in an interrogation room with Onoprienko while they waited for an Interior Ministry general to arrive from Lviv, and tried to get him to talk about himself. Onoprienko was silent at first, but in the second half hour of questioning began to talk about his life, telling Teslya that he had been born in the town of Laski in the Zhitomirskaya Oblast. He told Teslya his mother had died when he was very young and that his father had put him into a Russian orphanage. Onoprienko talked at length about this, saying he was still angry that his father gave him away but kept his older brother. "Onoprienko said that he felt that his father and brother could easily have taken care of him," Teslya said. "He was moved and upset to talk about it." Following this line of questioning, Teslya then asked Onoprienko whether he ever felt resentment toward families. Onoprienko hesitated briefly and then shook his head before restating that he would not talk to anyone below the rank of general. "At that point, I tried something new," Teslya said. "I said to him, 'We'll get you your general. We'll get 10 generals if you want. But how am I going to look if I bring them in here and you've got nothing to tell them? Because maybe there's nothing to tell. How will I look then?' And that's when he said it. He said, 'Don't worry. There's definitely something to tell.'"
Confessions of Madness
Shortly after 11 p.m., Teslya left the room and went into the corridor, where General Romanuk was waiting. After a brief recess, the two men and Romanuk's assistant, Maryan Pleyukh, entered the room, and Onoprienko began his confession. He first admitted that he had stolen the shotgun, and then admitted that he had used it in a recent murder. Onoprienko confessed to investigators that he killed for the first time in 1989. He had met a friend, Sergei Rogozin, at a local gym where the two worked out.
The two hit it off and began spending much of their time together, and their friendship eventually turned into a partnership in crime. They began robbing homes as a way to supplement their meager incomes. However, one night while they were robbing a secluded home outside of town, the owners discovered the two intruders. Armed with weapons they carried for self-defense, the two felt that killing the family was necessary to ensure their freedom. To cover their tracks, they murdered the entire family: two adults and eight children. Onoprienko informed investigators that he broke all ties with Sergei a few months later and shot and killed five people, including an 11-year-old boy, who were sleeping in a car. He then burned their bodies. "I was approaching the car only to rob it," he said. "I was a completely different person then. Had I known there had been five people, I would have left." He said he had derived no pleasure from the act of killing. "Corpses are ugly," he said. "They stink and send out bad vibes. After I killed the family in the car, I sat in the car with their bodies for two hours not knowing what to do with them. The smell was unbearable." Following the murders, Onoprienko kept to himself for several years and moved in with a distant cousin, before he killed again on December 24, 1995. That night, he broke into the secluded home of the Zaichenko family, located in Garmarnia, a village in central Ukraine. He murdered the forestry teacher, along with his wife and two young sons, with a sawed-off, double-barreled shotgun. He then escaped with the couple's wedding rings, a small golden cross on a chain, earrings, and a bundle of worn clothes. Before leaving the scene of the crime, he set the home ablaze. "I just shot them. It's not that it gave me pleasure, but I felt this urge," he said. "From then on, it was almost like some game from outer space."
Onoprienko, hands up, in jail (AP/Wide World)
Onoprienko informed investigators that he had a vision from God commanding him to murder, and just nine days later killed a family of four before burning the house down. All the victims were shot with his gun. He claimed that while fleeing the scene, he was spotted by a man on the road and decided to kill him as well, so as not to leave any living witnesses who could later identify him or place him at the scene. Less than a month later, on January 6, 1996, Onoprienko told investigators, he killed four more people in three separate incidents. He was hanging out near the Berdyansk-Dnieprovskaya highway and decided to stop cars and kill the drivers. Onoprienko stated that he murdered four travelers that day - a Navy ensign named Kasai, a taxi driver named Savitsky, and a kolkhoz cook named Kochergina. "To me it was like hunting. Hunting people down," he explained. "I would be sitting, bored, with nothing to do. And then suddenly this idea would get into my head. I would do everything to get it out of my mind, but I couldn't. It was stronger than me. So I would get in the car or catch a train and go out to kill."
Commanded to Kill
Anatoly Onoprienko waited just 11 days after the highway murders before killing again. On January 17, 1996, he drove to Bratkovichi and broke into a home owned by the Pilat family. "I look at it very simply," he told investigators. "As an animal. I watched all this as an animal would stare at a sheep." He shot five in all, including a six-year-old boy. Following the murders, just before daybreak, he set the house ablaze before leaving. While making his getaway, he was spotted by two witnesses, a 27-year-old female railroad worker named Kondzela and a 56-year-old man named Zakharko. He wasted little time and shot them both in cold blood.
Less than two weeks later, on January 30, 1996, in the Fastova, Kievskaya Oblast region, Onoprienko shot and killed a 28-year-old nurse named Marusina, along with her two young sons and a 32-year-old male visitor named Zagranichniy. He told investigators that he could not stop himself and was obsessed with killing. A month after the Fastova murders, on February 19, 1996, Onoprienko traveled to Olevsk, Zhitomirskaya Oblast, and broke into the home of the Dubchak family. He shot the father and son, and mauled the mother and daughter to death with a hammer before leaving. He stated that the young girl had witnessed him murder her parents and was praying when he walked into her room. "Seconds before I smashed her head, I ordered her to show me where they kept their money," he said. "She looked at me with an angry, defiant stare and said, 'No, I won't.' That strength was incredible. But I felt nothing." On February 27, 1996, Onoprienko said, he drove to Malina, in the Lvivskaya Oblast region, and broke into the Bodnarchuk family home. He shot the husband and wife to death and then murdered their two daughters, aged seven and eight. Rather than shooting the young children, he hacked them both to death with an axe. One hour later, a neighboring businessman named Tsalk was wandering around outside, and Onoprienko decided to kill him as well. He shot the man and then hacked up his corpse with the same axe he had used to murder the children. "Oh, you know, I killed them because I loved them so much, those children, those men and women. I had to kill them; the inner voice spoke inside my mind and heart and pushed me so hard!" Onoprienko claimed that his last murders occurred on March 22, 1996, when he traveled to the small village of Busk, just outside of Bratkovichi, and murdered the Novosad family, four in all. He shot them to death and set their home ablaze in order to destroy any evidence. "I'm not a maniac," he said.
"If I were, I would have thrown myself onto you and killed you right here. No, it's not that simple. I have been taken over by a higher force, something telepathic or cosmic, which drove me. I am like a rabbit in a laboratory. A part of an experiment to prove that man is capable of murdering and learning to live with his crimes. To show that I can cope, that I can stand anything, forget everything." Investigators questioned Onoprienko until 6 a.m., as he confessed to committing over 50 murders during his three-month rampage. They spent most of their time taking down details about each killing. There was little talk of motive, although Onoprienko stated several times that he wanted to be studied as a phenomenon of nature and that a higher being had commanded him to kill.
Citizen O
The day after the initial interview with Onoprienko, Teslya went to Lviv, where Onoprienko had been moved, and began a five-day series of one-on-one interviews with his suspect. Teslya called Onoprienko "the most perplexing person I've ever interviewed." The suspect told Teslya he was commanded by God to kill, and that he had been chosen as a superior specimen. He claimed he could wield strong hypnotic powers, control animals through telepathy and stop his heart with his mind. "I told him that I thought his hypnotic powers were interesting, and asked him, for my benefit, if he could try them on me," Teslya said. "But he said that it only worked with weak people, and I wasn't a weak enough person." Onoprienko revealed that he had previously spent time in a Kiev hospital for schizophrenia, a lead that Teslya, as a Lviv investigator, was not allowed to pursue. The statement was interesting because immediately following the arrest, Kiev Interior Ministry investigator Alexander Tevashchenko said that Onoprienko - then identified as "Citizen O" - was an outpatient whose therapists knew he was a killer.
Teslya later stated that he knew nothing about that side of the case, and the Kiev investigators have yet to release any further information regarding it since the initial statement. On Friday, April 19, 1996, the investigation was taken out of Teslya's hands and turned over to federal Interior Ministry investigators. When his week of questioning the suspect was over, Teslya said, he had concluded that Onoprienko was genuinely insane and had acted alone. "There have been many rumors that he was part of a gang, but my feeling is that his discussions of his motives, and of his special powers, were not fabricated. I can be wrong, but that's what I think," he said. "Plus, just thinking rationally, I don't think anyone but a single killer could have pulled off so many murders. In a gang, someone talks, another drinks, a third whispers something to a girlfriend, and it's all over. But as I say, I can be wrong." Even though psychiatrists declared Anatoly Onoprienko mentally fit to stand trial, the proceedings did not begin until November of 1998. Incredibly, trials in the Ukraine cannot begin until the defendant has read all the evidence against him, at his leisure, and in the case of Anatoly Onoprienko there was plenty to get through - 99 volumes of gruesome photos showing dismembered bodies, cars, houses and random objects Onoprienko stole from his victims. Another reason for the delay was money. It was not until the head judge in the trial made a televised appeal that the Ukrainian government agreed to allocate the necessary funds for a lengthy trial. On November 23, 1998, a Ukrainian court ruled that 39-year-old Anatoly Onoprienko was mentally competent and could be held responsible for his crimes. The regional court in Zhytomyr said that Onoprienko "does not suffer any psychiatric diseases, is conscious of and is in control of the actions he commits, and does not require any extra psychiatric examination."
Caged Justice
Deemed competent to face the charges against him, Onoprienko's trial opened in the city of Zhytomyr, 90 miles west of Kiev, on February 12, 1999. As the proceedings began, Onoprienko, like Andrei Chikatilo, Russia's infamous Rostov Ripper, sat in court in an iron cage, and was spat upon and raged at by the public. Hundreds of people huddled together in the unheated courtroom were angry. "Let us tear him apart," shouted a woman from the back of the courtroom just before the hearing started, adding, "He does not deserve to be shot. He needs to die a slow and agonizing death." Afraid that the crowd might take the law into their own hands, police searched bags and made everyone pass through an airport-style metal detector before continuing. Many of those attending the hearing said they were afraid that the killer would be sentenced to only 15 years in prison - the maximum sentence possible under Ukrainian law, except for capital punishment. While in court, Onoprienko had very little to say. Asked if he would like to make a statement, he shrugged his shoulders and replied, "No, nothing." Informed of his legal rights, he growled, "This is your law." When asked to state his nationality, he said, "None." When Judge Dmytro Lypsky said this was impossible, Onoprienko rolled his eyes and replied, "Well, according to law enforcement officers, I'm Ukrainian." The defendant claimed he felt like a robot driven for years by a dark force and argued that he should not be tried until authorities could determine the source. "You are not able to take me as I am," he shouted at Judge Dmytro Lypsky. "You do not see all the good I am going to do, and you will never understand me. This is a great force that controls this hall as well. You will never understand this. Maybe only your grandchildren will understand."
Onoprienko's lawyer, Ruslan Moshkovsky, who said he did not contest his client's guilt, blamed the ineptitude of investigators for the extent of the rampage and asked that his client's childhood in the orphanage be viewed as an extenuating circumstance. Prosecutor Yury Ignatenko countered that examinations of Onoprienko's mental health during the investigation had overturned an independent diagnosis of schizophrenia made before his arrest, and that a further test ordered by the court had confirmed his current mental health. The prosecutor said Onoprienko's motives lay in his own violent nature. "In every society there have been and are people who due to their innate natures can kill, and there are those who will never do that," he added. "People demand how come he killed so many people. But why not, if conditions make it possible?... Onoprienko led a double life, and that is the main thing." Onoprienko told the court that he had been driven by a devil, higher powers and mysterious voices. He assured the court he was guilty of all charges against him, but insisted that he felt no remorse. "I would kill today in spite of anything," Anatoly told the court. "Today I am a beast of Satan." Following 100 volumes of shocking evidence and the defendant's own admissions, closing arguments began in April of 1999. Prosecutor Yury Ignatenko wasted little time in demanding the death sentence. "In view of the extreme danger posed by (Anatoly) Onoprienko as a person, I consider that the punishment for him must also be extreme -- in the form of the death sentence," Yury Ignatenko told the court in his concluding speech. Onoprienko's lawyer, Ruslan Moshkovsky, once again tried to play on the sympathy of the court as he began his own closing arguments. "My defendant was from the age of four deprived of motherly love, and the absence of care which is necessary for the formation of a real man," Moshkovsky said. "I appeal to the court...to soften the punishment."
With the trial now over, court was adjourned to await the judge's verdict.
Epilogue
After just three hours of deliberation, Judge Dmytro Lypsky called the court back into session. Onoprienko stood, head bent, staring at the floor of his metal cage as the sentence was read. "In line with Ukraine's criminal code, Onoprienko is sentenced to the death penalty by shooting," Judge Lypsky announced to the court. In his final statement to the court, Onoprienko exclaimed, "I've robbed and killed, but I'm a robot, I don't feel anything. I've been close to death so many times that it's even interesting for me now to venture into the afterworld, to see what is there, after this death."
Onoprienko on videotape in jail (AP/Wide World)
"Thank goodness that's over," said a secretary leaving the hearing. The death sentence ruling put the Ukraine in an awkward position. Under its obligations as a Council of Europe member, it had committed to abolishing capital punishment. Nonetheless, both the public and the politicians argued that the Onoprienko case was an exception. Following his sentencing, Onoprienko, whom the media had dubbed "the Terminator," gave a lengthy interview to a London Times reporter. During their meeting, Onoprienko reminisced about the murders he had committed. "I started preparing for prison life a long time ago -- I fasted, did yoga. I am not afraid of death," he said. "Death for me is nothing. Naturally, I would prefer the death penalty. I have absolutely no interest in relations with people. I have betrayed them. The first time I killed, I shot down a deer in the woods. I was in my early twenties and I recall feeling very upset when I saw it dead. I couldn't explain why I had done it, and I felt sorry for it. I never had that feeling again."
Leonid Kuchma, Ukrainian president
"If I am ever let out, I will start killing again, but this time it will be worse, ten times worse. The urge is there. Seize this chance because I am being groomed to serve Satan."
"After what I have learnt out there, I have no competitors in my field. And if I am not killed, I will escape from this jail, and the first thing I'll do is find Kuchma (the Ukrainian president) and hang him from a tree by his testicles." Onoprienko's accomplice in the first set of murders, 36-year-old Serhiy Rogozin, was sentenced to 13 years in prison. Anatoly Onoprienko currently resides on death row as authorities are still looking into a string of additional murders that took place between 1989 and 1995. Since there is a gap in Onoprienko's life during that time that he will not discuss and which cannot be accounted for, he remains a suspect in them.
2024-07-03T01:26:29.851802
https://example.com/article/2141
Q: jq json parser concatenate nested array object values Hi, I have the below JSON file with nested objects:

{
  "Maps": {
    "Campus": [
      {
        "name": "nus",
        "Building": [
          {
            "name": "sde1",
            "Floor": [
              { "name": "floor1" },
              { "name": "floor2" }
            ]
          },
          { "name": "sde2" }
        ]
      },
      {
        "name": "ntu",
        "Building": [
          {
            "name": "ece1",
            "Floor": [
              { "name": "floor1" },
              { "name": "floor2" }
            ]
          },
          { "name": "ece2" }
        ]
      }
    ]
  }
}

I want to use jq to parse the above JSON file and get the below format:

nus>sde1>floor1
nus>sde1>floor2
ntu>ece1>floor1
ntu>ece1>floor2

Basically, I have to concatenate the Campus name, the Building name and the Floor name, with a > symbol in between. If the nested field Floor does not exist, skip that object and continue with the next child object. How can I achieve that? Thanks.

A: You can use the following jq command (the -r flag prints raw strings, without the surrounding quotes):

jq -r '.Maps.Campus[]|"\(.name)>\(.Building[]|"\(.name)>\(.Floor[]?.name)")"' file.json

jq is smart enough to print the combinations of .name and .Building[].name since .Building is an array. The same applies to .Building[].name and .Floor[]?.name. The ? is needed because Floor is not always set.
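For comparison, the same skip-missing-Floor traversal can be sketched in plain Python, which may help when debugging the jq filter. The helper name campus_paths is made up here, and the JSON mirrors the sample above:

```python
import json

# Hypothetical in-memory copy of the file, matching the structure in the question.
DOC = json.loads("""
{"Maps": {"Campus": [
  {"name": "nus", "Building": [
    {"name": "sde1", "Floor": [{"name": "floor1"}, {"name": "floor2"}]},
    {"name": "sde2"}]},
  {"name": "ntu", "Building": [
    {"name": "ece1", "Floor": [{"name": "floor1"}, {"name": "floor2"}]},
    {"name": "ece2"}]}]}}
""")

def campus_paths(doc):
    """Yield 'campus>building>floor', skipping buildings that have no Floor list."""
    for campus in doc["Maps"]["Campus"]:
        for building in campus.get("Building", []):
            for floor in building.get("Floor", []):  # acts like jq's .Floor[]?
                yield f'{campus["name"]}>{building["name"]}>{floor["name"]}'

print("\n".join(campus_paths(DOC)))
```

Buildings without a Floor key (sde2, ece2) simply contribute nothing, just as the ? operator makes them contribute nothing in jq.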
2023-10-16T01:26:29.851802
https://example.com/article/4322
Permanent cardiac pacing after a cardiac operation: predicting the use of permanent pacemakers. The need for permanent cardiac pacing after cardiac operations is infrequent but associated with increased morbidity and resource utilization. We identified patient risk factors for pacemaker insertion to enable development of a predictive model. Data were collected prospectively for 10,421 consecutive patients who had cardiac operations between January 1990 and December 1995. Two hundred fifty-five patients (2.4%) were identified as having received a permanent pacemaker during the same hospitalization. Logistic regression analysis was performed to determine the independent, multivariate predictors of permanent pacing. The predictive accuracy and precision of the logistic regression model were evaluated in the 1996 database of 2,236 consecutive patients by the calculation of Brier scores. Eight independent predictors of permanent pacemaker requirement were identified. The factor-adjusted odds ratios (OR) with 95% confidence intervals (CI) associated with each predictor are as follows: (1) valve replacement surgery (aortic: OR 5.8, CI 3.9-8.7; mitral: OR 4.9, CI 3.1-7.8; tricuspid: OR 8.0, CI 5.5-11.9; double: OR 8.9, CI 5.5-14.6; and triple: OR 7.5, CI 2.9-19.3); (2) repeat operation: OR 2.4, CI 1.8-3.3; (3) age 75 years or older: OR 3.0, CI 2.0-4.4; (4) ablative arrhythmia operation: OR 4.2, CI 1.9-9.5; (5) mitral valve annular reconstruction: OR 2.4, CI 1.4-4.2; (6) use of cold blood cardioplegia: OR 2.0, CI 1.2-3.6; (7) preoperative renal failure: OR 1.6, CI 1.0-2.6; and (8) active endocarditis: OR 1.7, CI 0.9-3.0. A model for postoperative permanent pacemaker requirement using the eight predictors was formulated and tested (Brier score = 0.017+/-0.003; Z = 0.18). The proposed predictive model correlated highly with actual pacemaker use, which suggests that the requirement for pacing results from either operative trauma or increased ischemic burden.
Preoperative identification of patients at increased risk of conduction disturbances may allow for earlier detection and improved treatment. Patients requiring postoperative pacing had increased morbidity and length of stay.
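A multivariate logistic model of this kind can be sketched as a simple risk score: each predictor present contributes ln(OR) to the patient's log-odds. The abstract does not report the model intercept, so the baseline below is a rough stand-in derived from the 2.4% overall pacemaker rate and is purely illustrative, not the published model:

```python
import math

# ln(OR) for a few of the reported predictors (ORs taken from the abstract).
LOG_OR = {
    "aortic_valve_replacement": math.log(5.8),
    "repeat_operation": math.log(2.4),
    "age_75_or_older": math.log(3.0),
    "preoperative_renal_failure": math.log(1.6),
}

# Illustrative intercept only: the log-odds of the 2.4% overall incidence.
INTERCEPT = math.log(0.024 / (1 - 0.024))

def pacemaker_probability(factors):
    """Logistic model: p = 1 / (1 + exp(-(intercept + sum of ln(OR) terms)))."""
    logit = INTERCEPT + sum(LOG_OR[f] for f in factors)
    return 1.0 / (1.0 + math.exp(-logit))
```

With no risk factors this returns the assumed 2.4% baseline, and each additional factor multiplies the patient's odds by its OR.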
2023-12-23T01:26:29.851802
https://example.com/article/7785
I was in the middle of a 7-gallon brew when my landlord informed me that the grain I had put down my garbage disposal had clogged the main drain for the house. My sink is too small for an ice bath for my kettle, so my cooling plan had been to use my copper immersion chiller. But, with nowhere to put the waste water (I am nowhere near the ground floor, and there isn't an outdoor hose I can access), I had to air-chill the recently boiled wort. I saran-wrapped all of the gaps in the brew kettle (the edge of the lid and the aperture for the thermometer, which I left in), and left it to air-chill. It took two days until the wort came down to fermentation temperature, at which point I removed the plastic wrap, spritzed the outside of the kettle with sanitizer, and dumped it into primary. It's currently aerated and fermenting away. Question: How worried should I be about contamination that may have occurred during two days of air cooling this wort? Aging now. Out of primary there were some bitterness issues, but no sourness/yeast contamination notes. If those flavors are there, I can't taste them; it's a big, roasty, dark stout, so it could be there and just covered up. – Zac B Nov 12 '12 at 15:10

3 Answers

How worried should I be about contamination that may have occurred during two days of air cooling this wort? Honestly, I'd be very worried. However, not much can be done now anyway, so don't sweat it, but don't do this again, for future reference. As that wort cooled, it contracted in volume slightly, which created a very slight vacuum that might have pulled air down into the kettle that had wild yeasts/bugs in it. Regarding "No Chill Brewing," as mentioned by Ryan, I am a big fan of it (I hardly ever "chill" batches anymore), but for me, I always do it in an airtight, sealed tank. Did the wort smell "sharp" or "sour" when you poured into primary?
I had a beer get subtly infected in my No Chill tank once, and I could definitely tell that something was wrong when I poured into primary, from the taste of the wort. Wort should be bitter + sweet, with an orange marmalade or coffee kind of flavor depending on the grain bill, but should never taste acidic or sour. No chill is always done in a sealed container, so the OP did not do the no-chill method. – Dale Nov 11 '12 at 0:26 I know it's not a perfect seal, but if he plastic-wrapped everything, I doubt there was a whole lot getting in there. – Pietro Nov 13 '12 at 13:50 Sealing with saran wrap could seal the container up. And if there was still steam from the boil, it could have sterilized the whole business. As Charlie Papazian says, relax and have a homebrew. Wait and see; it could very likely turn out just fine. – Chris Plaisier Nov 13 '12 at 16:25
2023-11-05T01:26:29.851802
https://example.com/article/1711
{
  "CVE_data_meta": {
    "ASSIGNER": "cve@mitre.org",
    "ID": "CVE-2018-20301",
    "STATE": "PUBLIC"
  },
  "affects": {
    "vendor": {
      "vendor_data": [
        {
          "product": {
            "product_data": [
              {
                "product_name": "n/a",
                "version": {
                  "version_data": [
                    { "version_value": "n/a" }
                  ]
                }
              }
            ]
          },
          "vendor_name": "n/a"
        }
      ]
    }
  },
  "data_format": "MITRE",
  "data_type": "CVE",
  "data_version": "4.0",
  "description": {
    "description_data": [
      {
        "lang": "eng",
        "value": "An issue was discovered in Steve Pallen Coherence before 0.5.2 that is similar to a Mass Assignment vulnerability. In particular, \"registration\" endpoints (e.g., creating, editing, updating) allow users to update any coherence_fields data. For example, users can automatically confirm their accounts by sending the confirmed_at parameter with their registration request."
      }
    ]
  },
  "problemtype": {
    "problemtype_data": [
      {
        "description": [
          { "lang": "eng", "value": "n/a" }
        ]
      }
    ]
  },
  "references": {
    "reference_data": [
      {
        "name": "https://github.com/smpallen99/coherence/issues/270",
        "refsource": "MISC",
        "url": "https://github.com/smpallen99/coherence/issues/270"
      }
    ]
  }
}
2023-09-06T01:26:29.851802
https://example.com/article/8837
Q: Getting sbt-assembly working So thus far I've been compiling my Scala project with SBT (via the Typesafe stack). I want to run the code across several machines now, via sbt-assembly. Following the directions, the only change I made was in my project/Build.scala file. Here is the related part:

resolvers += "Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases",
resolvers += "artifactory" at "http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases",
libraryDependencies += "com.eed3si9n" % "sbt-assembly" % "0.8.3"

When I run sbt compile, however, I get this error:

sbt.ResolveException: unresolved dependency: com.eed3si9n#sbt-assembly/scala_2.9.1/sbt_0.11.2;0.8.3: not found.

What am I doing wrong? Thanks!

EDIT Created a build.sbt file in the same folder as Build.scala (the folder is /project/) and put these two lines in it:

Seq[Setting[_]](resolvers += "artifactory" at "http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases", addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.3"))

Now the error is:

[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: com.eed3si9n#sbt-assembly;0.8.3: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Some unresolved dependencies have extra attributes. Check that these dependencies exist with the requested attributes.
[warn] com.eed3si9n:sbt-assembly:0.8.3 (sbtVersion=0.11.2, scalaVersion=2.9.1)
[warn]
[error] {file:/Users/myname/current/projectname/project/}default-d7da9a/*:update: sbt.ResolveException: unresolved dependency: com.eed3si9n#sbt-assembly;0.8.3: not found

EDIT 2 Hm, after I do a successful sbt compile, should I just be able to enter the sbt console and type in assembly?
> assembly
[error] Not a valid command: assembly
[error] Not a valid project ID: assembly
[error] Not a valid configuration: assembly
[error] Not a valid key: assembly
[error] assembly
[error]

EDIT 3 JK, got it. Had to add the build.sbt info as specified in the GitHub README.

A: There are two points here. One is that SBT plugins are not just library dependencies -- in particular, they use the current SBT version in a similar way that other Scala libraries use the Scala version. The other is that libraryDependencies in project/Build.scala affects the dependencies of the project, not of the build. An SBT full build is itself an SBT project, just located one level down the directory tree, and so can have a build of its own configured the same way a normal build is. Unlike a normal build, where going for a "full build" is necessary under a handful of circumstances, there is almost never a reason to use a full build for a build, so using .sbt files located in project/ is almost always sufficient. The other issue is the versioning. SBT has a utility function called addSbtPlugin that handles everything for you. It takes a ModuleID and adds all the necessary SBT and Scala versioning information. So, to get sbt-assembly working in a full build, you create a .sbt file under project/ (conventionally either project/build.sbt or project/plugins.sbt) and place your build's resolvers and dependencies there:

resolvers += Resolver.url("artifactory", url("http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases"))(Resolver.ivyStylePatterns)

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.3")
2023-09-18T01:26:29.851802
https://example.com/article/8915
Q: I want to change the column names with a loop I have a dataset with column names looking like this: state.abb, state.area, state.division, state.region. I want to change the names of the columns and delete the "state." part, leaving only "abb", "area", "division", and "region". I wrote this code, looping over the df columns with the substring function, but it neither works nor gives errors. What's wrong with it?

for(e in 1:ncol(df)){
  colnames(df[e]) <- substring(colnames(df[e]), 7)
}

A: Here, we can change the colnames(df[e]) to colnames(df)[e]:

for(e in seq_along(df)){
  colnames(df)[e] <- substring(colnames(df)[e], 7)
}

substring is vectorized, so we could do this directly without any for loop:

colnames(df) <- substring(colnames(df), 7)

Also, if we are removing the prefix including the ., a generalized option assuming that the prefix can be of any length is sub:

colnames(df) <- sub(".*\\.", "", colnames(df))

As an example,

data(mtcars)
colnames(mtcars[1]) <- "hello"
colnames(mtcars[1])
#[1] "mpg"   # no change
colnames(mtcars)[1] <- "hello"
colnames(mtcars[1])
#[1] "hello" # changed

A: As an alternative solution, you could use gsub() to replace all the "state." with nothing (""), here shown with just a vector:

gsub("state.", "", c("state.abb", "state.area", "state.division", "state.region"))

To replace the column names:

colnames(df) <- gsub("state.", "", colnames(df))

As a bonus, imagine you want to replace a word or string that occurs in some but not all of your columns. Taking the built-in iris dataset as an example, you could replace "Petal" with "P" for the columns where "Petal" is in the column name with the exact same approach:

colnames(iris) <- gsub("Petal", "P", colnames(iris))

I wouldn't bother with a for loop for this job; it's far easier to use a vectorised approach.
But to explain your error: when you did colnames(df[1]) you were returning the column name of a single-column dataframe that you had isolated from your main dataframe, rather than handling the main dataframe itself. For example, iris[1] returns a dataframe with one column - see str(iris[1]) - so colnames(iris[1]) returns the column name of that isolate. A slight change instead allows you to return (and then change) the 1st element of the vector of column names for iris: colnames(iris)[1].
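The greedy behaviour of the ".*\\." pattern used above carries over to other regex engines; as a cross-language illustration, the same prefix-stripping can be sketched in Python (column names copied from the question):

```python
import re

cols = ["state.abb", "state.area", "state.division", "state.region"]

# Greedy ".*\." consumes everything up to the last dot, just as in R's sub().
stripped = [re.sub(r".*\.", "", c) for c in cols]
print(stripped)  # ['abb', 'area', 'division', 'region']
```

Because the match is greedy, a multi-dot name such as "foo.bar.baz" would also reduce to its last component, "baz".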
2024-04-13T01:26:29.851802
https://example.com/article/5310
Q: https: What pages to secure? I have multiple pages in my site. A few pages handle payment transactions; a few pages do not. I secured the payment transaction pages with https. Should I secure any other pages, e.g. login, registration, etc.?

A: Ideally, you need https for login, change-password, registration, and profile pages, where the user enters or views sensitive information, in addition to payment pages. Note: you need https for the entire checkout process. Most gateways, like PayPal Payments Pro, won't even let you use them without https. However, you do not want to use https on the entire site unless necessary; otherwise, it'll slow down the server due to encryption. For example, amazon.com only uses http on regular product pages unless you view or edit your account.
2024-07-23T01:26:29.851802
https://example.com/article/6954
Chemesthetic responses to airborne mineral dusts: boric acid compared to alkaline materials. (1) To assess the relation between occupationally relevant exposures to dust of boric acid and magnitude of feel in the eye, nose, and throat during activity (pedaling) equal to light industrial work. (2) To compare feel from the dust of boric acid with that of the alkaline dusts calcium oxide and sodium tetraborate pentahydrate (sodium borate). (3) To chart how magnitude of feel changes with time in exposures up to 3/4 h. Twelve subjects, six males and six females, participated in duplicate sessions of exposure to 2.5, 5, and 10 mg m(-3) of boric acid, 10 mg m(-3) of sodium borate, 2.5 mg m(-3) of calcium oxide presented as calcium oxide alone or diluted with hydrated calcium sulfate, and 0 mg m(-3) (blank). Exposures occurred in a plastic dome suspended over the head and closed around the neck with rubber dam. Measurements pre- and post-exposure included nasal secretion and nasal resistance. Measurements during exposure included rated magnitude of feel in the eye, nose, and throat, and respiration (Respitrace System). Six concentrations of carbon dioxide ranging from just below detectable to sharply stinging gave subjects references for their ratings. In general, feel increased for periods up to half an hour, then either declined or held at a plateau. Each material had a temporal signature. The nose led with the highest feel, followed by the throat, then the eyes. This hierarchy proved weakest for boric acid; at one level of exposure, magnitude in the throat overtook that in the nose. Accompanying measures implied that change of feel with time occurred neither because of an increase in dilution of the dissolved dusts in newly secreted mucus nor an increase of consequence in nasal resistance. Most likely, sensory adaptation determined the change. Boric acid of 10 mg m(-3) fell slightly and insignificantly below 10 mg m(-3) sodium borate in feel. 
Boric acid, though, showed a relatively flat dose-response relationship, i.e., a change in level caused little change in feel. The time constant for feel from dusts lies on the order of tens of minutes. A flat concentration-response function for boric acid and a notable response from the throat suggest that perceived dryness, not mediated by acidity but perhaps by osmotic pressure, may account for the feel evoked at levels of exposure at or below 10 mg m(-3). More acidic dusts that could actually change nasal pH may trigger sensations differently.
2023-11-13T01:26:29.851802
https://example.com/article/9753
Q: Stability of dark solitons in a harmonic trap This question is based on a research article which I am trying to reproduce. One of the main results of this paper is the condition on the transverse confinement of a Bose-Einstein condensate (BEC) that makes the black-soliton solution stable. The equation governing the BEC is the Gross-Pitaevskii (GP) equation, given by $i\hbar \frac{\partial\psi}{\partial t}=-\frac{\hbar^{2}}{2m}\frac{\partial^{2}\psi}{\partial x^{2}}+g\psi|\psi|^{2}+V_{ext}\: \psi$ Here, $|\psi|^{2}$ gives the density of the condensate. We can see that when $V_{ext}=0$, the above equation has a solution, in one dimension (say $z$), of the form $\psi(z,t)=\tanh(cz)\,e^{-i\mu t}$, where $c$ accounts for the constants. Suppose now that we are working in a cylindrical geometry, such that $V_{ext}=\omega_{z}z^{2}+\omega_{r}r^{2}$ with $\omega_{z}<\omega_{r}$, meaning the radial confinement is weaker along the axis than in the radial direction. In such a case, one can obtain a solitonic solution with a nodal plane perpendicular to the axial direction. This can be done using the split-operator method and imaginary-time evolution. Now comes the question of the stability of the solitonic solution. One can perturb $\psi$ with a perturbation of the form $\psi\rightarrow\psi+\delta\psi$, where $\delta\psi = u(z)e^{i\mathbf{q}\cdot\mathbf{r}-i\epsilon t}+v(z)e^{-i\mathbf{q}\cdot\mathbf{r}+i\epsilon t}$. So, essentially, we are looking for small-amplitude oscillations in which the soliton, whose nodal plane is perpendicular to the axial ($z$) direction, gives out energy in the radial direction.
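As a quick sanity check on the quoted solution: in rescaled units $\hbar=m=g=1$ with $V_{ext}=0$ (an assumed normalization, giving $c=1$ and $\mu=1$), one can verify numerically that $\psi_0(z)=\tanh z$ satisfies the stationary GP equation $\mu\psi_0=-\tfrac12\psi_0''+g\psi_0^3$:

```python
# Numeric check (hbar = m = g = 1, V_ext = 0, mu = 1 -- an assumed
# rescaling) that psi0(z) = tanh(z) solves mu*psi = -1/2 psi'' + g psi^3.
import numpy as np

z = np.linspace(-5, 5, 2001)
h = z[1] - z[0]
psi = np.tanh(z)

psi_zz = np.gradient(np.gradient(psi, h), h)   # finite-difference second derivative
residual = -0.5 * psi_zz + psi**3 - 1.0 * psi  # should vanish for the exact solution

max_err = np.abs(residual[5:-5]).max()         # ignore one-sided edge stencils
```

The residual is zero up to discretization error, which confirms the $\tanh$ profile and fixes $\mu=1$ in these units.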
Substituting this form of $\delta\psi$ into the GP equation, we get the following set of equations for $f_{\pm}(z)=u(z)\pm v(z)$, where $\psi_{0}(z)$ is the density profile of the soliton in the $z$ (axial) direction in the presence of the trap ($V_{ext}$): $\epsilon f_{-}(z)=\Big[-\frac{\hbar^{2}}{2m}\big(\frac{\partial^{2}}{\partial z^{2}}-q^{2}\big)-\mu+V_{ext}+3g\psi_{0}^{2}(z) \Big]f_{+}(z)$ $\epsilon f_{+}(z)=\Big[-\frac{\hbar^{2}}{2m}\big(\frac{\partial^{2}}{\partial z^{2}}-q^{2}\big)-\mu+V_{ext}+g\psi_{0}^{2}(z) \Big]f_{-}(z)$ To obtain a stability condition, we need a dispersion relation between $\epsilon$ and $q$. However, as you can see in the above set of equations, there are many $z$-dependent terms, including a derivative in the $z$ direction. The authors of the paper say that they numerically solved this set of equations to obtain a dispersion relation. How does one do that? A: As a short answer, you need to: Sweep over your wavenumber $q\in [q_{\min}, q_{\max}]$; Approximate your differential equation $$\mathcal{L}_q \mathbf{f} = \epsilon \mathbf{f}$$ to obtain a discrete problem $$[H(q)]\lbrace U \rbrace = E[S]\lbrace U \rbrace\, ;$$ Solve the generalized eigenvalue problem to obtain the admissible frequencies $\epsilon$ for that particular $q$. Regarding points 2 and 3, there are several methods. Some of the most popular ones are discussed in this answer [1]: Finite Difference Methods; Finite Element Methods; and Direct Variational Methods. Let's rewrite the equation as above, $\mathcal{L}_q \mathbf{f} = \epsilon \mathbf{f}$, with $$\mathcal{L}_q = \begin{bmatrix} 0 &-\frac{1}{2}\nabla^2 + \hat{V}(q) + 3\psi^2_0\\ -\frac{1}{2}\nabla^2 + \hat{V}(q) + \psi^2_0 &0 \end{bmatrix}\, ,\quad \mathbf{f} = \begin{Bmatrix} f_{-}\\ f_{+}\end{Bmatrix}\, ;$$ and $\hat{V}(q) = V_\text{ext} - \mu + \frac{q^2}{2}$, where I re-scaled some variables for convenience.
The operator $\mathcal{L}$ does not seem to be self-adjoint; if that's the case, the resulting matrices might not be Hermitian (depending on the scheme). If one uses a weighted-residual approach, the resulting functional is of the form (I would double-check; it's easy to make silly mistakes) $$\Pi_q[\mathbf{f}, \mathbf{w}] = \frac{1}{2}\int_\mathbb{R} \nabla w_1 \nabla f_{+}\, dz + \frac{1}{2}\int_\mathbb{R} \nabla w_2 \nabla f_{-}\, dz +\\ \int_\mathbb{R} |\psi_0|^2[3 w_1 f_{+} + w_2 f_{-}]\, dz + \int_\mathbb{R} \hat{V}(q)[w_1 f_{+} + w_2 f_{-}]\, dz - \epsilon\left[\int_\mathbb{R} w_1 f_{-}\, dz + \int_\mathbb{R} w_2 f_{+}\, dz\right] $$
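The sweep/discretize/solve recipe above can be sketched with finite differences. The code below assembles $\mathcal{L}_q$ for a single $q$ on a uniform grid, using the untrapped soliton profile $\psi_0=\tanh z$ and rescaled units $\hbar=m=g=\mu=1$ as stand-in assumptions (the paper's actual $\psi_0$ would come from imaginary-time evolution, and grid size and box length are illustrative):

```python
# Sketch: finite-difference eigenproblem for the coupled (f_-, f_+) system
# at one transverse wavenumber q. Units hbar = m = g = mu = 1 and the
# profile psi0 = tanh(z) are assumptions, not the paper's setup.
import numpy as np
from scipy.linalg import eig

def bdg_spectrum(q, N=400, L=40.0):
    z = np.linspace(-L/2, L/2, N)
    h = z[1] - z[0]
    psi0_sq = np.tanh(z)**2

    # -1/2 d^2/dz^2 with Dirichlet boundaries (3-point stencil)
    lap = (np.diag(np.full(N, -2.0)) +
           np.diag(np.ones(N - 1), 1) +
           np.diag(np.ones(N - 1), -1)) / h**2
    K = -0.5 * lap + 0.5 * q**2 * np.eye(N)   # axial kinetic + transverse q^2/2 term

    mu, g = 1.0, 1.0
    H_plus  = K - mu * np.eye(N) + 3 * g * np.diag(psi0_sq)  # acts on f_+
    H_minus = K - mu * np.eye(N) +     g * np.diag(psi0_sq)  # acts on f_-

    # eps f_- = H_plus f_+ ;  eps f_+ = H_minus f_-  (block off-diagonal operator)
    Lq = np.block([[np.zeros((N, N)), H_plus],
                   [H_minus, np.zeros((N, N))]])
    return eig(Lq, right=False)  # eigenvalues eps(q), generally complex

eps = bdg_spectrum(q=0.5)
# Nonzero imaginary parts of eps signal dynamical instability at this q;
# sweeping q and tracking max(|Im eps|) maps out the stability window.
unstable = np.max(np.abs(eps.imag)) > 1e-6
```

Repeating this over a grid of $q$ values gives the dispersion relation $\epsilon(q)$; with a trap one would swap in the numerically obtained $\psi_0(z)$ and add $V_{ext}$ to the diagonal.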
2024-05-12T01:26:29.851802
https://example.com/article/1319
The effect of age on the dynamics and the level of c-Fos activation in response to acute restraint in Lewis rats. Recent studies have reported an age-related increase of anxiety in rodents with a concomitant decrease in neuronal activity in some of the key structures of the fear/anxiety circuit. In the present study we present evidence that distinct parts of this circuit are differentially affected by age in Lewis rats. The effect of ageing is observed both at the actual level of neuronal activation and in its time-course. While the structures belonging to the HPA axis react to restraint with greater neuronal activation and almost no change in the shape of the dynamics curve, the structures involved in higher processing of emotional cues (amygdala and hippocampus) become deficiently activated with age despite their generally higher basal level of activation.
2024-02-13T01:26:29.851802
https://example.com/article/3868
UK Next Day Delivery - Order by 1pm Monday to Friday for UK next business day delivery. This item ships internationally - To find out more about other delivery options and our international delivery times click here Sized 3-6 months All clothing in our Baby Clothes Bouquets is a generously sized 3-6 months, ensuring it will fit a larger baby and giving the recipient plenty of time to enjoy their flowers before unwrapping to reveal the clothing inside. To find out more about the baby clothes in our baby cakes and bouquets click here
2024-05-07T01:26:29.851802
https://example.com/article/9151
Power-hungry crypto mining has found an ideal home in the city of Bratsk, where the weather is cold and the electricity is cheap. Bitriver, the largest data center in the former Soviet Union, was opened just a year ago, but has already won clients from all over the world, including the U.S., Japan and China. Most of them mine bitcoins. The company rents a building near the Bratsk aluminum plant. The world’s single largest aluminum smelter was built by the USSR in the 1960s along with the nearby hydropower plant, as energy is the largest cost in aluminum smelting. Photographs by Andrey Rudakov/Bloomberg Three-story racks of application-specific integrated circuit (ASIC) devices and power units line the wall of the 100 megawatt mining facility. A bank of large industrial fans runs the length of the building, providing essential cooling to the machines as they farm cryptocurrencies. A team of on-site engineers works 24/7 to monitor and perform routine diagnostics on the ASIC devices and power units. They wear hearing-protection devices to guard against the noise of the mining rigs and cooling fans. As the number of cryptocurrencies and tokens multiplies—they now reach into the thousands—Bitcoin remains the best-known, most time-tested, and most valuable. On top of the power supply, another thing that makes Bratsk an ideal place for crypto is the Siberian climate with its long and cold winters. Low temperatures are good for the data center equipment. Billionaire Oleg Deripaska’s team came up with the idea of building the data center in Bratsk about five years ago. En+ Group Plc and its unit United Co. Rusal, which the sanctioned businessman used to control, own the Bratsk hydropower and aluminum plants. The Bratskaya hydropower plant on the Angara river. The hydropower-generated electricity in the region is among the cheapest in the world.
While Russian law doesn’t recognize crypto mining, Bitriver isn’t engaged in mining itself and only provides equipment at the data center and technical services, meaning its business is legal. Deripaska’s companies spent nearly 10 months under sanctions before he reached an agreement with the U.S. Treasury to cut his control. The penalties were lifted in January. Continuing sanctions on En+ could have caused trouble for the cryptocurrency miners. Workers fabricate rack structures to increase the capacity of the mining facility. Bitriver says it currently hosts over 20,000 mining devices, with scope for up to 67,000 units. An on-site technician carries out repairs to mining hardware. Bitriver says all its technicians are trained and certified by the Chinese mining hardware giants Bitmain and Innosilicon. En+ supplies up to 100 megawatts of power to Bitriver as a way to diversify its client base and sell excess energy. Cheap and stable power is also a key ingredient for crypto mining. En+ and Bitriver also have a venture that provides computer racks to crypto miners. Banks of illuminated cryptocurrency mining rigs span the length of the vast building. An armed guard makes regular patrols inside the main mining hall. The mining units require vast amounts of power to perform the complex mathematical calculations needed to harvest the cryptocurrency. The Siberian climate makes a perfect home for crypto companies, the cold air providing natural regulation of the high temperatures generated by the mining rigs, saving investment in expensive cooling systems.
2024-07-31T01:26:29.851802
https://example.com/article/8712
In the conventional fabric printing machine, a pallet holding a raw fabric (a workpiece to be printed) is fixed to a mechanical structure and transferred, so that it can perform only a printing job. With the fixed-pallet machines currently in use, the design capability is extremely restricted and productivity is significantly lowered, leading to an increase in production cost and thus difficulties in sales. A hybrid process (combined flocking and printing) can be characterized as follows. In order to carry out multi-color printing in a consecutive manner, the pallet must be kept at a proper heat of 100 to 120° C. When a binding process for flocking is performed, the pallet must be properly cooled (below 30° C.). The reasons are that, for consecutive printing to succeed, the previously printed ink must be dried (at a drying temperature of 160 to 180° C.) before the subsequent color is printed, while in the case of the flock binder, if the pallet is hot, a film forms on the surface of the binder and the flocking cannot be easily accomplished. This decisively reduces productivity and degrades quality. Conventionally, the pallet is formed of an aluminum plate of 12 to 15 thickness or an aluminum cast in order to prevent bending or distortion. Thus, it disadvantageously takes much time to heat up and cool the pallet. In particular, with the conventional process, multi-color printing can be carried out to a certain degree (without considering the productivity), but multi-color flocking cannot be performed, because its quality is extremely degraded along with its decreased productivity. Besides the above serious problems, i.e., the degradation in productivity and quality due to repetitive heating and cooling of the pallet and the limited design capability, various other problems exist.
In the flocking process, remaining pile is scattered into the atmosphere, and thus the entire factory can be contaminated. It therefore adversely affects all the products under fabrication and causes very serious problems with the working environment. In the conventional downward flocking process, a large amount of pile remains unflocked, and there exists no way to recover this remaining pile. Even if a separate removing facility or process is added, the scattering of the remaining pile cannot easily be prevented.
2024-04-26T01:26:29.851802
https://example.com/article/6605
Outcome assessment in depressed hospitalized patients. Psychiatric symptoms as well as work, social, and physical functioning were compared in two groups of psychiatric patients (36 depressed only and 34 depressed in conjunction with an eating disorder) and 77 controls. In both groups, Global Assessment of Functioning (GAF) scores significantly improved from hospital admission to discharge and remained improved at 1.5 years postdischarge. As outpatients, the GAF, Zung Depression, and anxiety scores of both groups were significantly lower than for controls. Ratings of social functioning for depressed-only outpatients did not differ from controls on five out of six measures. Predictors of posthospital improvement included high satisfaction with hospital treatment, high GAF scores on admission to hospital, perceived effectiveness of outpatient therapy, younger age, and a historical absence of sexual abuse or prior psychiatric hospitalization.
2024-01-03T01:26:29.851802
https://example.com/article/4469
An inverted Treasury yield curve—a yield curve where short-term Treasury interest rates are higher than long-term Treasury interest rates—is a good predictor of recessions. Because of this, economists and policymakers often assess the risk of a yield curve inversion when the yield curve is flattening. I study the forecastability of yield curve inversions. Professional forecasters did not predict the beginning of the yield curve inversions prior to the 1990–1991, 2001, and 2008–2009 recessions. In all three cases, professional forecasters failed to predict the magnitude of the rise in short-term interest rates. Prior to the 2008–2009 recession, forecasters also overpredicted long-term interest rates. The Treasury yield curve, the curve showing interest rates on Treasury securities at different maturity horizons, contains important information about the US economy. In particular, an inverted yield curve, where interest rates on short-term Treasury securities are higher than interest rates on long-term Treasury securities, is a good predictor of recessions.1 While there are reasons to believe that the relationship between the yield curve and recessions has changed,2 Bauer and Mertens (2018) show that an inverted yield curve has preceded each of the previous nine recessions in the United States. Further, they show that an inverted yield curve has been consistently followed by an economic slowdown.3 Given the recent flattening of the Treasury yield curve, it is natural for economists and policymakers to be concerned about the potential for an upcoming inversion and a corresponding economic slowdown. Indeed, at the June 12–13, 2018, Federal Open Market Committee meeting, a number of participants thought monitoring the slope of the yield curve was important given that an inverted yield curve has historically indicated an increased risk of recession. 
However, in order to have the option of adjusting interest rates before the yield curve inverts, policymakers would need to be able to predict when an inversion is likely. In this Commentary, I study whether professional forecasters predict yield curve inversions. To do this, I use the consensus or average forecasts of the interest rate on 10-year Treasury securities and the interest rate on 1-year Treasury securities from the Blue Chip Financial Forecasts. These data cover 1988 to the present, and I find that forecasters failed to forecast the beginning of the yield curve inversions that preceded the 1990–1991, 2001, and 2008–2009 recessions. Further, they forecasted yield curve inversions only once the yield curve inversion had occurred.4 I find that a common cause of the failure to predict yield curve inversions is a failure to predict the magnitude of the rise in the 1-year Treasury rates. However, these short-term rate forecast errors have shrunk with each inversion episode, a situation that is consistent with the Federal Reserve’s increased transparency. In addition, professional forecasters overpredicted 10-year Treasury rates prior to the 2008–2009 recession. The Yield Curve and Recent Inversions Yield curves contain a collection of data points, each of which is an interest rate for a given Treasury maturity and any of which can vary over time. Figure 1 shows the Treasury yield curve for 2018:Q1 and 2017:Q1. Each data point is the quarterly average of daily constant maturity interest rates for a given maturity.5 A comparison of the two yield curves shows that the yield curve has flattened over the past year, driven by increases in short-term Treasury interest rates. To simplify the analysis in this Commentary, I do not study the interest rate for every maturity displayed in figure 1. 
Rather, following Bullard (2017) and Bauer and Mertens (2018), I study the term spread as measured by the difference between the 10-year Treasury rate and the 1-year Treasury rate. A negative value of this term spread indicates an inverted yield curve because the shorter 1-year Treasury interest rate is above the longer 10-year Treasury interest rate. Figure 2 shows this term spread from 1987 to 2018 along with recession periods, indicated by shaded bars. As with figure 1, the data are quarterly averages of daily constant maturity interest rates. This figure shows that yield curve inversions preceded each of the three previous recessions. Further, it shows that the yield curve has flattened, albeit not smoothly, throughout the course of the current expansion as it has in previous expansions. The yield curve inversions studied in this Commentary are the three shown in figure 2. Their timing, taken from the figure, is as follows. The first inversion was 1989:Q1 to 1989:Q2, the second was 2000:Q2 to 2000:Q4, and the third came in two pieces: 2006:Q1 and 2006:Q3 to 2007:Q2. The Professional Forecasts The question I investigate is whether professional forecasters predicted the three yield curve inversions shown in figure 2. To answer this question, I use the Blue Chip Financial Forecasts, which have forecasts for the 1-year and 10-year constant maturity Treasury interest rates going back to 1988. I use the consensus forecast for each maturity, the consensus forecast being the average of the individual Blue Chip forecasts. The forecasts are of average interest rates over a quarter, and these forecasts are produced monthly. To give the forecasters the most available information when making their forecasts, I use the forecasts produced in the last month of each quarter: March, June, September, and December. Figure 3 shows the Treasury term spread along with two forecasts. In both panels, the blue line is the actual term spread in a given quarter. 
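The term-spread definition above is simple to operationalize. The sketch below uses made-up quarterly rates (illustrative placeholders, not the actual constant-maturity Treasury series from the Commentary) to show how negative spreads flag inverted quarters:

```python
# Term spread = 10-year rate minus 1-year rate; a negative value marks
# an inverted quarter. The rates below are illustrative placeholders,
# not the actual constant-maturity Treasury data.
quarters = ["2000:Q1", "2000:Q2", "2000:Q3", "2000:Q4", "2001:Q1"]
one_year = [6.20, 6.40, 6.30, 6.10, 4.80]   # 1-year rate, percent
ten_year = [6.50, 6.20, 5.90, 5.60, 5.10]   # 10-year rate, percent

spread = [t - s for t, s in zip(ten_year, one_year)]
inverted = [q for q, sp in zip(quarters, spread) if sp < 0]
# With these placeholder numbers, 2000:Q2 through 2000:Q4 come out inverted.
```

The same subtraction applied to forecasted rates gives the predicted term spreads compared against the actual series in figure 3.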
In the top panel, the orange line shows the predicted value of the quarter’s term spread made two quarters prior. In the bottom panel, the orange line shows the predicted value of the quarter’s term spread made four quarters prior. Shaded bars in both panels indicate a yield curve inversion. Because the predicted values of the term spreads, the orange lines, are positive both during and after the 1989:Q1 to 1989:Q2 yield curve inversion, figure 3 indicates that professional forecasters failed to predict the 1989:Q1 to 1989:Q2 yield curve inversion at both the 2-quarter-ahead and 4-quarter-ahead horizons. In contrast, around the 2000:Q2 to 2000:Q4 inversion, the predicted values of the term spreads are negative for three quarters, 2000:Q4, 2001:Q1, and 2001:Q2, when making 2-quarter-ahead forecasts. That is, forecasters predicted yield curve inversions for these quarters. They also predicted inversions for 2001:Q2, 2001:Q3, and 2001:Q4 when making 4-quarter-ahead forecasts. Essentially, once the yield curve had inverted, professional forecasters continued to forecast an inversion for subsequent quarters. However, they were not able to forecast the beginning of the yield curve inversion. The results are very similar around the 2006:Q1 and the 2006:Q3 to 2007:Q2 inversions. The forecasters predicted inversions for 2006:Q3 and 2007:Q1 to 2007:Q4 when making 2-quarter-ahead forecasts, and they predicted inversions for 2007:Q1, 2008:Q1, and 2008:Q2 when making 4-quarter-ahead forecasts. As with the previous inversion, the forecasters were not able to forecast the beginning of the yield curve inversion in 2006:Q1.6 One important note is that the forecast errors shrank for each successive inversion. In 1989:Q1 and 1989:Q2, the average absolute forecast errors of the 2-quarter-ahead and 4-quarter-ahead term spread forecasts were 1.22 percent and 1.70 percent, respectively.
For the 2000:Q2 to 2000:Q4 inversion, the 2-quarter-ahead and 4-quarter-ahead absolute average forecast errors were 0.37 percent and 0.50 percent, respectively. Lastly, for the 2006:Q1 and the 2006:Q3 to 2007:Q2 inversions, the 2-quarter-ahead and 4-quarter-ahead absolute average forecast errors were 0.24 percent and 0.32 percent, respectively. I discuss these improving forecasts further in the next section. These findings show that professional forecasters have not forecasted a yield curve inversion unless an inversion has already taken place. This result is similar to how recessions are only identified with a lag.7 It also shows that professional forecasters have not made any false alarms about a yield curve inversion. What Do Professional Forecasters Get Wrong? To see why professional forecasters missed the onset of all three yield curve inversions, I examined their forecasts of 1-year and 10-year Treasury interest rates separately. Figure 4 shows the 1-year and 10-year Treasury interest rates during each yield curve inversion episode along with the corresponding professional forecasts. Shaded bars indicate yield curve inversions. The left panels of figure 4 show the 1-year interest rates and forecasts. In all three yield curve inversions, professional forecasters failed to forecast the magnitude of the rise in 1-year Treasury rates.8 Hence, an unpredictably rapid rise in the short end of the yield curve is a common cause of failures to predict yield curve inversions. While professional forecasters failed to predict these increases in 1-year Treasury rates, the magnitude of their forecast errors decreased with each successive inversion. At a 4-quarter-ahead horizon, professional forecasters missed the 1989:Q1 and 1989:Q2 1-year rates by an average of 1.7 percent. At this same horizon, they missed the 2000:Q2, 2000:Q3, and 2000:Q4 1-year rates by an average of 0.9 percent.
Finally, they missed the 2006:Q1 and 2006:Q3 to 2007:Q2 1-year rates by an average of 0.3 percent. These improvements in forecasting the short end of the yield curve are consistent with the findings of Swanson (2006), who argues that increases in Federal Reserve transparency have made forecasters better able to predict the federal funds rate and less surprised by Federal Reserve announcements. The right panels of figure 4 show the 10-year interest rates and forecasts. For the 1989:Q1 to 1989:Q2 inversions, forecasters generally did a good job predicting the 10-year rate for 1989:Q1; however, they failed to forecast the drop in 10-year rates that occurred in 1989:Q2. Similarly, for the 2000:Q2 to 2000:Q4 inversion, forecasters modestly underpredicted 10-year rates for 2000:Q2, generally did a good job predicting 10-year rates for 2000:Q3, and then overpredicted 10-year rates for 2000:Q4. In contrast to the two earlier inversions, professional forecasters generally predicted upward paths of 10-year Treasury rates in 2005 and 2006. This caused them to systematically overpredict 10-year rates for 2006:Q1 and for 2006:Q3 to 2007:Q2. Relative to 1-year Treasury rates, the forecast improvements for 10-year Treasury rates across the three inversions were modest. At a 4-quarter-ahead horizon, professional forecasters missed the 1989:Q1 and 1989:Q2 10-year rates by an average of 0.5 percent. At this same horizon, they also missed the 2000:Q2, 2000:Q3, and 2000:Q4 10-year rates by an average of 0.5 percent. Finally, they missed the 2006:Q1 and 2006:Q3 to 2007:Q2 10-year rates by an average of 0.3 percent. These results suggest that the decreasing term spread forecast errors during yield curve inversions are largely driven by improvements in forecasts of the short end of the yield curve. Summary With the recent flattening of the yield curve, economists and policymakers are currently discussing the likelihood of a yield curve inversion.
In this Commentary, I study the historical forecastability of yield curve inversions. I find that professional forecasters failed to predict the beginning of the yield curve inversions prior to the 1990–1991, 2001, and 2008–2009 recessions. These failures were largely driven by failures to predict the magnitude of the rise in short-term interest rates. In addition, forecasters overpredicted long-term interest rates prior to the 2008–2009 recession. While Federal Reserve transparency has likely helped reduce professional forecasters’ errors for the short end of the yield curve, forecast errors for the long end of the yield curve have had little reduction.
In the press conference following the December 12–13, 2017, Federal Open Market Committee meeting, former Chair Janet Yellen noted that a low term premium is one factor that could cause this change.
Wheelock and Wohar (2009) survey the academic literature and find that the difference between long-term and short-term interest rates is also useful for forecasting recessions across many countries and with several different statistical models.
Documenting this failure to predict yield curve inversions is not intended to question the abilities of professional forecasters. Indeed, Diebold and Li’s (2006) model, a standard yield curve forecasting model, also fails to predict the beginning of yield curve inversions and only forecasts inversions after an inversion has happened. See the online appendix for details.
Note that this failure to predict the beginning of yield curve inversions is a failure on the part of forecast averages. However, it is also rarely the case that individual forecasters predict inversions. No individual forecaster predicted a negative term spread at a 4-quarter horizon for 1989:Q1, 2000:Q2, or 2006:Q1. At a 2-quarter horizon, no individual forecaster predicted a negative term spread for 1989:Q1 or 2000:Q2.
At a 2-quarter horizon, 3 of 46 forecasters predicted a negative term spread for 2006:Q1.
See Hamilton (2011) for a discussion about identifying recessions in real time.
The Survey of Professional Forecasters has similar forecast errors. See figure A.10 in the supplemental appendix of Clark, McCracken, and Mertens (2018). The Diebold and Li (2006) model also fails to predict the rise in 1-year rates. See the online appendix for details.
2023-10-30T01:26:29.851802
https://example.com/article/8213
/* Copyright (C) 2006 Charlie C * * This software is provided 'as-is', without any express or implied * warranty. In no event will the authors be held liable for any damages * arising from the use of this software. * * Permission is granted to anyone to use this software for any purpose, * including commercial applications, and to alter it and redistribute it * freely, subject to the following restrictions: * * 1. The origin of this software must not be misrepresented; you must not * claim that you wrote the original software. If you use this software * in a product, an acknowledgment in the product documentation would be * appreciated but is not required. * 2. Altered source versions must be plainly marked as such, and must not be * misrepresented as being the original software. * 3. This notice may not be removed or altered from any source distribution. */ // Auto generated from makesdna dna.c #ifndef __BLENDER_MDEFORMWEIGHT__H__ #define __BLENDER_MDEFORMWEIGHT__H__ // -------------------------------------------------- // #include "blender_Common.h" namespace Blender { // ---------------------------------------------- // class MDeformWeight { public: int def_nr; float weight; }; } #endif//__BLENDER_MDEFORMWEIGHT__H__
2023-12-27T01:26:29.851802
https://example.com/article/4313
PORTLAND, OR—Admitting it was difficult to watch his once-vibrant home fall into complete disarray, Portland Trail Blazers center Enes Kanter confirmed Monday that he was grateful to have escaped the oppressive, failing dictatorship in New York. “It’s disastrous—the leaders are full-on autocrats and there is so little hope left, it’s difficult to see how many people are suffering,” said Kanter, noting his former oppressors at Madison Square Garden were fanatics with no sense of duty to the people they serve, who waste millions on vanity projects to boost their egos while everything crumbles around them. “They just want to hold on to all of the money and power that they can. Unless there is some sort of regime change, things will only continue descending into chaos and despair. I was extremely lucky to get out of that situation—and I know many others who want to get out as well. My heart aches for them, and I pray for the day the people responsible for this suffering will be brought to justice.” At press time, Kanter urged talented young players worldwide to immediately seek asylum lest they be forcibly conscripted by the tyrannical organization in the June draft.
2023-12-03T01:26:29.851802
https://example.com/article/6324
Q: How can I have both a legend and data labels, with different labels, in Highcharts? I need to create a pie chart with labels indicating what each area means, but also other info overimposed on each area. If I use both data labels and a legend, they'll show the same text. How can I have both with different texts, or emulate that effect? A mock of what I'd like to get: A: Using the format or formatter config properties of the dataLabels, you can make them say whatever you want.

pie: {
    dataLabels: {
        enabled: true,
        formatter: function () {
            return 'Y Value: ' + this.y; // y value
        }
    },
    showInLegend: true
}

Quick example.
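A fuller sketch of the same idea: the legend takes its text from each point's `name`, while the data-label `formatter` returns something different per slice. The `custom` field and the label text below are assumptions about your data shape, not part of any required Highcharts schema:

```javascript
// Sketch: legend shows point names; data labels show a separate
// per-slice annotation. The `custom` field is an illustrative
// per-point property -- adapt it to your own data.
const seriesConfig = {
  type: 'pie',
  showInLegend: true, // legend entries come from point.name
  dataLabels: {
    enabled: true,
    formatter: function () {
      // Highcharts calls this with a point context (this.point, this.percentage, this.y)
      return this.point.custom + ' (' + this.percentage.toFixed(1) + '%)';
    }
  },
  data: [
    { name: 'Chrome', y: 61.4, custom: 'desktop leader' },
    { name: 'Firefox', y: 10.9, custom: 'open source' }
  ]
};

// The formatter logic can be exercised outside Highcharts with a mock context:
const label = seriesConfig.dataLabels.formatter.call({
  point: { custom: 'desktop leader' },
  percentage: 61.4
});
```

Passed to the chart as `Highcharts.chart(container, { series: [seriesConfig] })`, the legend would read "Chrome"/"Firefox" while each slice carries its own annotation.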
2023-11-26T01:26:29.851802
https://example.com/article/9158
Stigma and abortion complications in the United States. Abortion is highly stigmatized in the United States and elsewhere. As a result, many women who seek or undergo abortion keep their decision a secret. In many regions of the world, stigma is a recognized contributor to maternal morbidity and mortality from unsafe abortion, even when abortion is legal. Women may self-induce abortion in ways that are dangerous, or seek unsafe clandestine abortion from inadequately trained health care providers out of fear that their sexual activity, pregnancy, or abortion will be exposed if they present to a safe, licensed facility. However, unsafe abortion rarely occurs in the United States, and accordingly, stigma as a cause of unsafe abortion in the United States context has not been described. I consider the relationship of stigma to two serious abortion complications experienced by U.S. patients. Both patients wished to keep their abortion decision a secret from family and friends, and in both cases, their inability to disclose their abortion contributed to life-threatening complications. The experiences of these patients suggest that availability of legal abortion services in the United States may not be enough to keep all women safe. The cases also challenge the rhetoric that "abortion hurts women," suggesting instead that abortion stigma hurts women.
2023-11-27T01:26:29.851802
https://example.com/article/7059
Medical News Today: Is it safe to mix ibuprofen and alcohol? Many people are aware that taking ibuprofen at the same time as alcohol is not always safe, but what are the risks, and when is it dangerous? Ibuprofen is an over-the-counter medication that people use to reduce pain, inflammation, and fever. It is available under various brand names, such as Advil and Motrin, and in some combination medications for colds and the flu. Alcohol and ibuprofen can both irritate the lining of the stomach and intestines. Mixing the two can cause side effects that vary in severity from mild to serious depending on the dose and how much alcohol a person ingests. In this article, we discuss the safety and risks of taking ibuprofen and alcohol together. We also cover other side effects of ibuprofen. Is it safe to drink alcohol and take ibuprofen? A person may experience side effects when mixing alcohol and ibuprofen. Ibuprofen is usually safe if a person follows a doctor’s instructions and the recommended dosage on the packaging. According to the National Health Service (NHS) in the United Kingdom, it is usually safe to use pain relievers, including ibuprofen, when drinking a small amount of alcohol. However, people can experience mild-to-serious side effects if they take ibuprofen regularly and drink more than a moderate amount of alcohol, which is one drink for women and two drinks for men per day. The likelihood of experiencing side effects is particularly high with long-term use of ibuprofen, or regular, heavy alcohol use. The following sections discuss the health risks relating to taking ibuprofen and alcohol at the same time. Stomach ulcers and bleeding Ibuprofen can irritate the digestive tract, which is why doctors tell people to take this medication with food. When a person takes ibuprofen for an extended period or in high doses, it can increase their risk of gastric ulcers or bleeding in the digestive tract. Alcohol can also irritate the stomach and digestive tract. 
Mixing the two further increases the risk of ulcers and bleeding. The National Institutes of Health (NIH) state that ibuprofen can interact with alcohol, which can worsen the usual side effects of ibuprofen. These side effects can include bleeding, ulcers, and a rapid heartbeat. Research shows that both drinking alcohol and taking nonsteroidal anti-inflammatory drugs (NSAIDs), which is the class of drug that includes ibuprofen, are risk factors for stomach ulcer bleeding. The risk of stomach ulcer bleeding increases the longer a person takes ibuprofen. A person who takes ibuprofen every day for several months has a higher risk of this symptom than someone who takes ibuprofen once a week. Kidney problems The kidneys filter harmful substances from the body, including alcohol. The more alcohol that a person drinks, the harder the kidneys have to work. Ibuprofen and other NSAIDs affect kidney function because they stop the production of an enzyme in the kidneys called cyclooxygenase (COX). By limiting the production of COX, ibuprofen lowers inflammation and pain. However, this also changes how well the kidneys can do their job as filters, at least temporarily. Although the risk of kidney problems is low in healthy people who only occasionally take ibuprofen, the drug can be dangerous for people who already have reduced kidney function. People who have a history of kidney problems should ask a doctor before taking ibuprofen with alcohol. Increased drowsiness Individually, both alcohol and ibuprofen can cause drowsiness. Combining the two may make this drowsiness worse, which can lead to excessive sleepiness or an inability to function normally. The Centers for Disease Control and Prevention (CDC) state that it is never safe to drink alcohol and drive. The reason for this is that alcohol slows down reaction times and impairs coordination. 
Risks in older adults The National Institute on Alcohol Abuse and Alcoholism report that older adults have a greater risk of complications relating to mixing medication and alcohol. The risk is higher because a person’s body becomes less able to break down alcohol with age. People are also often likely to take more medications that could interact with alcohol as they get older. The authors of a study on drug-alcohol interactions state that most older adults in the U.S. use prescription or nonprescription medications, and more than 50 percent drink alcohol regularly. Drinking alcohol while taking medication puts older adults at higher risk of falls, other accidents, and adverse drug interactions. How to take ibuprofen safely Ibuprofen is not suitable for long-term pain relief. People should take ibuprofen for the shortest possible time at the lowest manageable dosage. A doctor can provide advice on safe long-term methods of pain management. Some combination medications, such as cold medicines, headache medicines, and prescription pain relievers, contain ibuprofen. Therefore, it is important to read the labels on all medications before taking them to avoid exceeding the safe amount of ibuprofen. People should also be wary about taking ibuprofen to ease a hangover, as they may still have alcohol remaining in their system. The stomach may also be more sensitive than usual at this time. Drinking alcohol only in moderation can prevent unwanted side effects. According to the CDC, moderate drinking means a maximum of one drink for women and two drinks for men per day. They state that each of the following counts as one alcoholic drink: a 12-ounce (oz) beer that contains 5 percent alcohol 8 oz of malt liquor that contains 7 percent alcohol 5 oz of wine that contains 12 percent alcohol 1.5 oz or a “shot” of 80-proof distilled spirits or liquors, such as gin, rum, vodka, or whiskey, that contain 40 percent alcohol The amount of alcohol in the drink matters. 
For instance, some types of beer and wine have higher alcohol content than others. Some types of liquor are also stronger than others. Beer and wine are no safer to drink than liquor, including when it comes to taking ibuprofen. Keeping alcohol intake within the recommended limits will reduce the risk of unwanted side effects, such as stomach bleeding and ulcers. When to see a doctor People who take ibuprofen regularly should watch for symptoms of stomach bleeding and ulcers. People who drink large amounts of alcohol every day or feel that they are unable to stop drinking can talk to a doctor about ways to reduce their alcohol intake. Alternative pain relief Gentle exercise may help relieve pain naturally. It is generally safe to take ibuprofen when following the instructions on the packaging and a doctor’s orders. People can also use different types of pain reliever or alternative pain relief methods. However, other pain medications, such as acetaminophen (Tylenol), naproxen (Aleve), and aspirin, can also interact with alcohol to cause adverse side effects. Acetaminophen affects the liver and can cause life-threatening liver damage in people who drink alcohol regularly. Aspirin and naproxen are NSAIDs, which means that they belong to the same class of medication as ibuprofen and carry many of the same risks. Natural remedies are not necessarily any safer to take with alcohol. Some herbal medicines and natural supplements can also interact with alcohol and cause side effects. When someone has already had more than a moderate amount of alcohol, the safest approach to pain relief is to wait until the alcohol is out of the body before taking ibuprofen or other pain medicines. Summary While people can typically have a small amount of alcohol with ibuprofen, the safest option is to avoid mixing the two. People who have health conditions should talk with a doctor about their medications and alcohol consumption to determine what is safe for them. 
Q: What should I do if I have taken alcohol and ibuprofen together? A: If you have consumed a small-to-moderate amount of alcohol along with ibuprofen, do not drink any more alcohol. You can reduce the risk of stomach upset by eating a snack or small meal and switching to drinking water. In the future, you should avoid taking any pain reliever with alcohol. Alan Carter, PharmDAnswers represent the opinions of our medical experts. All content is strictly informational and should not be considered medical advice.
2024-02-15T01:26:29.851802
https://example.com/article/5365
It's the Spruce Goose of homes -- American in its super-sized scope and confusion of styles. And the American Versailles differs from its historic namesake in a few key ways: It's still under construction, it's located in Florida, and it includes amenities like a bowling alley. The American Versailles, if completed, will be, at 90,000 square feet, bigger than a 747 airplane hangar and will hold the distinction of being the third-largest house in the United States. Other features include nine kitchens, 30 bathrooms and two movie theaters. The home's mahogany doors and windows alone cost $4 million. The owners, vacation time-share mogul David Siegel, 77, and former beauty queen Jackie Siegel, 46, said they had originally planned for the home to be smaller. "We didn't start out having a 90,000 square foot house. It was more like a normal 60,000 square foot house," David Siegel said with a chuckle. "...But then I said, 'I want a bowling alley,'" Jackie Siegel continued. "And then he said, 'Well, I want a health spa.' You know, so we just kept going back and forth and adding on things." The Siegels offered filmmaker Lauren Greenfield full access to their home, their prickly marriage and some belt-tightening. When the recession hit, construction stopped for four years. "This is almost like a riches to rags story," David Siegel said in Greenfield's documentary, "The Queen of Versailles." In an interview with ABC News, Siegel, the billionaire founder of Westgate Resorts, clarified that comment. "It never became rags ... They took that quote out of context and said, you know, 'riches to rags,' to appease the 99 percenters who don't like the 1 percenters," he said. The Siegels are now suing Greenfield and others tied to the film for defamation. "I just wanted the truth to come out. I didn't want people to ... see the movie, and think this is the truth. It wasn't," David Siegel told ABC News. "The scenes are totally manipulated, staged. The suit was not to gain monetarily. 
The suit was so that people would know that it's not the truth." In an interview with ABC News, one example he cited was a scene in the film in which Jackie Siegel rents a stretch limousine for a trip to McDonald's. David Siegel said that Greenfield actually encouraged his wife to rent the limo. In a complaint filed in court, Siegel has argued that the film is "defamatory, derogatory and damaging" for "falsely depicting" that his company didn't pay its bills and for portraying it as "essentially broke and out of business, on the verge of bankruptcy." A lawyer for Greenfield issued the following statement: "Lauren Greenfield is a world-renowned documentary filmmaker/photographer, who made this film with the full cooperation and support of the Siegel family. David Siegel is now engaged in a meritless legal and p.r. campaign, which purely serves his business interests. It is also in direct violation with agreements that Mr. Siegel has signed with the filmmaker." Now that business is good again, the Siegels intend to build again. David Siegel said his Versailles doesn't feel big. Asked if he wanted to make it bigger, Siegel replied, "I don't. I want to make it done."
2023-08-03T01:26:29.851802
https://example.com/article/7528
Novel potentiometric application for the determination of pantoprazole sodium and itopride hydrochloride in their pure and combined dosage form. Three sensitive and selective polyvinyl chloride (PVC) matrix membrane electrodes were developed and investigated. Sensor I was developed using tetraheptylammonium bromide (THB) as an anion exchanger with 2-nitrophenyl octyl ether (2-NPOE) as a plasticizer for the determination of the anionic drug pantoprazole sodium sesquihydrate (PAN). To determine the cationic drug itopride hydrochloride (ITH), two electrodes (sensors II and III) were developed using potassium tetrakis(4-chlorophenyl) borate (KTCPB) as a cation exchanger with dioctyl phthalate (DOP) as a plasticizer. Selective molecular recognition components, 2-hydroxypropyl-β-cyclodextrin (2-HP βCD) and 4-tert-butylcalix[8]arene (tBC8), were used as ionophores to improve the selectivity of sensors II and III, respectively. The proposed sensors had a linear dynamic range of 1×10(-5) to 1×10(-2) mol L(-1) with Nernstian slopes of -54.83±0.451, 56.90±0.300, and 51.03±1.909 mV/decade for sensors I, II and III, respectively. The Nernstian slopes were also estimated over the pH ranges of 11-13, 3.5-8 and 4-7 for the three sensors, respectively. The proposed sensors displayed useful analytical characteristics for the determination of PAN and ITH in bulk powder, in laboratory prepared mixtures and in combined dosage forms with clear discrimination from several ions, sugars and some common drug excipients. The method was validated according to ICH guidelines. Statistical comparison between the results from the proposed method and the results from the reference methods showed no significant difference regarding accuracy and precision.
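The slopes above are described as Nernstian; the ideal potentiometric slope for a singly charged ion at 25 °C is 2.303·RT/(zF) ≈ 59.2 mV/decade, so the reported 51–57 mV/decade values are slightly sub-Nernstian but in the expected range. A minimal sketch of the calculation (Python; assigning z = +1 to protonated ITH and z = −1 to the PAN anion is our assumption, not stated in the abstract):

```python
# Theoretical Nernst slope, in mV per decade of activity, for an ion of charge z.
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst_slope_mv(z: int, temp_c: float = 25.0) -> float:
    """Ideal potentiometric slope 2.303*R*T/(z*F), converted to mV/decade."""
    t_kelvin = temp_c + 273.15
    return 1000.0 * math.log(10) * R * t_kelvin / (z * F)

print(round(nernst_slope_mv(+1), 2))   # assumed monovalent cation (e.g. ITH)  -> ~59.16
print(round(nernst_slope_mv(-1), 2))   # assumed monovalent anion (e.g. PAN)   -> ~-59.16
```

The sign convention matches the abstract: the anion-selective sensor I reports a negative slope, the cation-selective sensors II and III positive slopes.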
2023-12-14T01:26:29.851802
https://example.com/article/3321
I usually start with the line art then create a single layer underneath it and fill in real quick basic color/shadow/light. Once satisfied with the "rough" result, merge both layers, then color on it with more care for details/shading/etc. (but I do keep backup layers just in case) But then, it really depends what type of coloring you want to achieve. (I'm still experimenting a little) I see. Do you just use one shade of gray when shading? I am also experimenting. There are so many styles, but I must say that I prefer yours the most! I still have just over nine months to find my way, though, so it is lucky for me that you have made so many videos for me to reference :3 From the looks of it I'd say it's Yozakura Quartet rather than Chrono Crusade. On the left it's probably Mariabelle and on the right Yae Shinatsuhiko. Mariabelle is actually wearing Yae's outfit and Yae is wearing her usual stockings and something I don't remember from the show, but there was a nurse called Juri. Oh I love this one I love the good and evil contrast here, with the nurse and nun, which kind of plays with the imagination. The church is beautiful and simple, and your scrupulous attention to subtle detail never ceases to amaze me. I love the lighting as well, and the characters' poses and heavy contrast in appearance. It kind of reminds me vaguely of the show American Horror Story: Asylum. You just need to stop being absolutely wonderful
2024-02-23T01:26:29.851802
https://example.com/article/6869
How to Change the Oil on a Cub Cadet Zero-Turn Rider Length of Clip: 2:28 This video will show you how to change the oil on this Cub Cadet riding mower using the drain plug method or the Arnold Siphon Pump. Depending on your model, the instructions in this video may vary slightly. Always be sure to check your operator's manual for detailed instructions. Operator's Manual Disclaimer: The operator's manual posted is for general information and use. To ensure the download of the operator's manual specific to your unit, we require a model and serial number.
2024-06-29T01:26:29.851802
https://example.com/article/8871
Barcelona-based developer Nomada Studio announced on Twitter that their platformer Gris has sold over a million copies. Gris launched on PC and Switch at the end of 2018, and had already sold 300,000 copies a few months later (it eventually came to PS4 and mobile devices as well). Gris won the Games for Impact award at the Game Awards—you remember, it was the award announced by actual Muppets—and was well-received on Steam as well. Currently it's sitting on a rating of Overwhelmingly Positive, with 22,237 reviews. If you're quick you'll be able to grab it for half-price as part of the Devolver Digital weekend sale. If you'd like a more measured take first, our review pointed out that its beautiful art style sometimes hindered its exploration of grief, saying that, "It’s too self-conscious, and too wrapped up in being aesthetically pleasing. It’s too tied to the idea of a neat conclusion. It’s so caught up in the language of recurring motifs and visual continuity that it doesn’t seem to notice when the emotional arc loses clarity and continuity."
2023-12-07T01:26:29.851802
https://example.com/article/9855
Introduction {#s1} ============ Glutaric aciduria type 1 (GA1) is an autosomal recessive inherited neurodegenerative disease caused by a deficiency in the activity of glutaryl-CoA dehydrogenase (GCDH). The overall prevalence is approximately 1 in 100,000 newborns, but this varies among different countries [@pone.0063084-vanderWatt1], [@pone.0063084-Yang1]. Because GCDH activity is central to the catabolism of lysine and tryptophan, glutaric acid (GA) and related metabolites accumulate in the tissues and fluids of affected patients. Untreated patients are prone to develop severe striatal degeneration and irreversible movement disorders after the acute encephalopathic crises that occur early during development, between the ages of 3 and 36 months [@pone.0063084-Klker1], [@pone.0063084-Keyser1]. Previous investigations have shown that early diagnosis and treatment can improve the prognosis of patients with GA1 significantly, but the outcomes can still vary, even among patients who follow their therapeutic regimens closely [@pone.0063084-Klker1], [@pone.0063084-Klker2], [@pone.0063084-Kamate1]. Despite extensive experimental work, the mechanisms underlying the development of striatal lesions remain unclear. This limits the design of appropriate therapeutic approaches [@pone.0063084-Jafari1]--[@pone.0063084-GokmenOzel1]. In previous studies, several *in vitro* and *in vivo* model systems have been used to investigate the pathogenesis of neurodegeneration. *In vitro* studies have mainly focused on the neurotoxicity of GA and related metabolites, but have not considered the interactions among related metabolites [@pone.0063084-Gerstner1]--[@pone.0063084-Leipnitz1]. 
Animal models include *Rousettus aegyptiacus*, chemical animal models (created using intracerebroventricular, intrastriatal, and subcutaneous administration of GA in rats), knock-out (KO) mouse models, and diet-induced KO mouse models [@pone.0063084-Jafari1], [@pone.0063084-OliveraBravo1]--[@pone.0063084-Zinnanti1]. These models have provided insight into individual pathological mechanisms, but the results have not been consistent across different models and human patients. At present, innovative *in vitro* and *in vivo* models mimicking the metabolic impairment in GA1 patients are needed for a better understanding of the mechanisms involved in the neuropathogenesis of GA1 [@pone.0063084-Jafari1]. Short hairpin RNA (shRNA) and small interfering RNA (siRNA) are used to suppress the transcription of specific target genes [@pone.0063084-Sliva1]. However, shRNA and siRNA are difficult to transduce into neurons. Lentivirus-mediated shRNA can introduce genetic material into neurons and integrate into the host genome readily, producing stable and persistent suppression of the target gene both *in vitro* and *in vivo* [@pone.0063084-Harper1]. Lentiviral vectors have become the most widely used vectors for biological research and functional genomics, and have shown great promise for clinical applications [@pone.0063084-DCosta1]--[@pone.0063084-Coutant1]. This technology has been used in the investigation of Huntington's disease, which is a hereditary neurodegenerative disorder similar to GA1 [@pone.0063084-Ruiz1]--[@pone.0063084-Martin1]. The lentivirus system thus provides a new perspective on the mechanisms involved in the neuropathogenesis of GA1. In this study, we used lentivirus-mediated shRNA to suppress the expression of the GCDH gene in rat striatal neurons. These neurons were cultured with a high concentration of lysine to imitate the hypermetabolic state of GA1 patients during acute encephalopathic crisis. 
We found that suppression of the GCDH gene and excessive intake of lysine induced apoptosis in rat striatal neurons. Our results suggest that lentivirus-mediated targeted suppression of GCDH gene might be a more useful means of determining the mechanism underlying GA1-induced striatal degeneration and whether the observed cell death is partially caspase-dependent. Materials and Methods {#s2} ===================== Ethics {#s2a} ------ This study was carried out in strict accordance with the Guide for the Care and Use of Laboratory Animals issued by the National Institutes of Health. The protocol was approved by the Committee on the Ethics of Animal Experiments of Tongji Medical College (Permit Number: 2011-S248). Every effort was made to minimize the animals' suffering. Culture and Identification of Primary Striatal Neurons {#s2b} ------------------------------------------------------ Neonatal rats (Sprague-Dawley) were killed by decapitation on postnatal day 1. All animals were purchased from the Experimental Animal Center of Tongji Medical College. Primary striatal neurons were cultured using a slightly modified version of a procedure described in a previous study [@pone.0063084-Lamp1]. Briefly, striatum tissues were cut into 1 mm^3^ fragments and incubated with 0.125% trypsin (Sigma) for 15 min at 37°C. Neurons were plated at a density of 5×10^5^ cells/well onto 6-well plates coated with 0.1 mg/ml poly-L-lysine (Sigma). After 4 h, plating medium (80% Dulbecco's modified Eagle's medium-high glucose medium (Hyclone), 10% fetal bovine serum (Gibco), and 2 mmol/L glutamine (Sigma)) was replaced with maintenance medium (98% neurobasal A medium (Gibco), 2% B27 (Gibco, containing serum-free supplements for growth and long-term viability of neurons [@pone.0063084-Brewer1]), 0.5 mmol/L glutamine). Cultures were washed with PBS (0.01 mol/L phosphate buffer solution) before and after fixation with 4% paraformaldehyde for 30 min. 
The fixed cultures were then permeabilized with 0.3% TritonX-100 in PBS for 30 min, washed in PBS, and incubated overnight with primary antibody at 4°C. The primary antibody, a polyclonal rabbit anti-rat microtubule-associated protein antibody (MAP2, Proteintech Group, Inc. China), was diluted to 1∶100 in 2% goat serum-PBS containing 3% bovine serum albumin and 0.3% Triton X-100. Cells were then incubated with Texas Red-conjugated secondary antibody (Jackson, Inc. China, 1∶250) for 2 h. For nuclear staining, these cells were washed and incubated with Hoechst33342 (5 µg/ml, Sigma) for 5 min. Stained cells were visualized under a fluorescence microscope (Olympus BX51 AX-70, Japan). Images were analyzed using Image-Pro Plus 6.0 (Media Cybernetics, Bethesda, MD, U.S.). Cultures were harvested and washed in PBS (mixed with 1% bovine serum albumin), and then cell density was maintained at 10^6^ cells/ml. The cells were incubated with primary antibody (MAP2, diluted to 1∶100) for 20 min. Then the cells were washed in PBS again and stained with secondary antibody (diluted to 1∶250) in the dark for 30 min. After staining, cells were washed again and analyzed using a flow cytometry analyzer (BD Biosciences). Lentiviral shRNA Vector Construction and Transfection {#s2c} ----------------------------------------------------- Three target shRNAs against rat GCDH gene (Gene Bank accession NM_001108896.1) were designed as follows: shRNA\#1∶5′-GGAGCAGCGACAGAAGTAT-3′, shRNA\#2∶5′-GGACAAGGCTACTCCAGAA-3′, and shRNA\#3∶5′-GGGACATTGTATATGAGAT-3′. Oligonucleotides encoding shRNA sequences and one negative control sequence (5′-TTCTCCGAACGTGTCACGT-3′, which showed no significant homology to any mouse or human gene [@pone.0063084-Lin1]) were synthesized and annealed into double strands. 
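Annealing, as described, pairs each synthesized oligo with its reverse complement. A toy sketch of just the base-pairing step (plain Python; the sequence is shRNA#1 from the text, the helper name is ours — real shRNA cloning oligos also carry loop and overhang sequences not shown here):

```python
# Reverse complement of a DNA oligo -- the second strand that anneals to it.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the strand that base-pairs with `seq`, read 5'->3'."""
    return seq.translate(COMPLEMENT)[::-1]

shrna1 = "GGAGCAGCGACAGAAGTAT"     # shRNA#1 target sequence from the text
print(reverse_complement(shrna1))  # -> ATACTTCTGTCGCTGCTCC, the annealing partner
```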
Double-stranded DNAs were inserted into the Hpa1/Xho1 restriction sites of the lentiviral frame plasmid ([Figure S1](#pone.0063084.s001){ref-type="supplementary-material"}, pFU-GW-RNAi, encoding green fluorescent protein (GFP); the lentiviral frame plasmid was supplied by Genechem Co., Shanghai, China). They were then transformed into E. coli, and positive recombinant clones were selected by PCR using the primers 5′-GCCCCGGTTAATTTGCATAT-3′ and 5′-GAGGCCAGATCTTGGGTG-3′. The PCR conditions were denaturation at 94°C for 30 sec, followed by 35 cycles of 94°C for 30 sec, 55°C for 30 sec, and 72°C for 30 sec, with a final extension at 72°C for 6 min. The products were then electrophoresed on a 1.5% agarose gel containing ethidium bromide. The length of positive clones containing shRNA was 343 bp, and the length of blank clones was 299 bp. Recombinant non-integrative lentiviral vectors were produced by co-transfecting 293T cells with the lentivirus expression plasmid and packaging plasmids (pHelper 1.0, including gag/pol, and pHelper 2.0, including VSVG) using Lipofectamine 2000 (Invitrogen) [@pone.0063084-DiNunzio1]. Forty-eight hours later, the supernatants were collected and concentrated. After transfection, the viral titer was determined by counting GFP-positive cells. The virus was then diluted to a titer of 10^8^ TU/ml. DNA sequencing confirmed that the RNA interference sequence targeting the GCDH gene had been successfully inserted into the lentiviral vector. Transfection efficiency was determined using the negative control (NC) lentivirus. After 10 days of culture, cells were infected at various multiplicities of infection (MOI: 1, 10, 20). Then, 72 h after infection, the transduction efficiency was observed under a fluorescent microscope ([Figure S2](#pone.0063084.s002){ref-type="supplementary-material"}). The best MOI was found to be 10. Then digestion was performed and a single-cell suspension was prepared (2×10^5^ cells in 200 µl PBS). 
The GFP intensity was determined by flow cytometry. When the MOI was 10 ([Figure S2](#pone.0063084.s002){ref-type="supplementary-material"}), the transfection efficiency was 96.5±2.3% based on flow cytometry results. Cells were divided into three groups: a control group (uninfected), a NC group (transfected with negative control virus), and a lentivirus-shRNA group (transfected with target shRNA lentiviral vectors). The lentivirus-shRNA group was divided into three subgroups based on shRNA sequence: lentivirus-shRNA\#1, lentivirus-shRNA\#2, and lentivirus-shRNA\#3. Interference efficiency was detected using RT-PCR and Western blotting. MTT Assay {#s2d} --------- Primary striatal neurons were seeded into 96-well plates at a density of 5×10^4^ cells/well. Neurons were incubated with 0 mmol/L, 5 mmol/L, 10 mmol/L, 15 mmol/L, or 20 mmol/L lysine (Sigma) for 24 h. Then 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT, Sigma) was added to the wells at a concentration of 500 mg/L and incubated for another 4 h. The medium was then removed, and 150 µL dimethyl sulfoxide was added and shaken for 10 min. Another three wells containing no cells were filled with 100 µL medium. These served as blank controls. Optical density (OD) was measured at 570 nm using a spectrophotometer. Cell viability (%) = (OD of cells with different treatments -- OD of blank control)/(OD of cells with no treatment -- OD of blank control) ×100. We did not detect any observable differences in survival between cells exposed to 0 and 10 mmol/L lysine ([Table S1](#pone.0063084.s003){ref-type="supplementary-material"}). A lysine concentration of more than 15 mmol/L was found to be toxic to the neurons. Three groups of cells (control, NC, lentivirus-shRNA) were incubated with 0 mmol/L, 5 mmol/L, or 10 mmol/L lysine for 24 h. Cell viability was assessed using MTT assay. 
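The viability formula above is a standard background-corrected MTT readout; a minimal sketch (Python; the OD numbers are made-up illustrations, not data from the study):

```python
# Background-corrected cell viability from MTT optical-density (OD) readings,
# following the formula in the Methods: (OD_treated - OD_blank) / (OD_untreated - OD_blank) * 100.
def viability_percent(od_treated: float, od_untreated: float, od_blank: float) -> float:
    """Percentage viability of treated cells relative to untreated controls."""
    return (od_treated - od_blank) / (od_untreated - od_blank) * 100.0

# Hypothetical readings: blank wells 0.05, untreated control 1.05, lysine-treated 0.80
print(round(viability_percent(0.80, 1.05, 0.05), 1))  # -> 75.0
```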
Hoechst Staining Assay {#s2e} ---------------------- Cultures were stained with Hoechst 33342 (10 µg/ml, Sigma) for 10 min. Changes in nuclear morphology were observed using fluorescent microscopy (350 nm stimulation and 460 nm emission). The relative number of Hoechst-positive nuclei per visual field (minimum of 10 fields) was determined. Annexin V-PE/7-AAD Staining {#s2f} --------------------------- Cells were trypsinized and washed with serum-containing medium. The samples (5×10^5^ cells) were centrifuged for 5 minutes at 400×g and the supernatant was discarded. The cells were then stained using an Annexin V-PE/7-AAD apoptosis kit (MultiSciences Biotech Co, Ltd) in accordance with the manufacturer's instructions. The number of apoptotic cells was detected and analyzed using flow cytometry. Determination of Mitochondrial Membrane Potential (MMP) {#s2g} ------------------------------------------------------- Cells in 3.5 cm culture dishes (5×10^4^ cells/dish) were washed three times with Tyrode's buffer and then incubated with tetramethylrhodamine methyl ester (TMRM, 20 nmol/L, Sigma) in the dark at room temperature. After 45 min, the cultures were washed 4 times with Tyrode's buffer and mounted on the stage of a confocal laser scanning microscope (LSM 510, Carl Zeiss Inc.). All procedures were performed as described previously [@pone.0063084-Joshi1]. We used a region of interest (ROI) tool from the LSM program to select the areas and measure TMRM fluorescence intensities. We calculated the average fluorescence intensities of all ROIs and the background fluorescence intensities of the regions next to the cells. After subtracting background intensity, we normalized the TMRM fluorescence intensities using the following formula (△F =  (F~0~--F)/F~0~; where F~0~ =  fluorescence intensity in the NC group, F =  fluorescence intensity in other groups). Cells were harvested as described in the Annexin V-PE/7-AAD staining section. 
The cells were resuspended with PBS and incubated with TMRM (20 nmol/L) in the dark at room temperature. After 30 min, the cells were rewashed and suspended in 200 µl PBS. Then the TMRM signal was analyzed in the FL2 channel of a flow cytometry analyzer [@pone.0063084-Floryk1]. Real-time Reverse Transcription Polymerase Chain Reaction (RT-PCR) {#s2h} ------------------------------------------------------------------ Primary striatal neurons were seeded in 6-well plates at a density of 5×10^5^ cells/well. Total RNA was extracted using Trizol (Invitrogen). Complementary DNA was synthesized in accordance with the manufacturer's protocol (Toyobo, Japan). Real-time PCR amplification was performed on an ABI PRISM 7500 cycler with SYBR reagent (Toyobo, Japan). The thermal cycling conditions were set as given in the instructions included with the cycler, and the annealing temperature was 60°C. The sense primer 5′- GAAAGCCCTGGACATCG -3′ and the antisense primer 5′- CAACCGTGAATGCCTGA -3′ were used for amplification of GCDH (designed using Primer 5.0, synthesized by Invitrogen, China). Quantitative normalization of cDNA in each sample was performed using the rat housekeeping gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH; sense primer, 5′- TTCAACGGCACAGTCAAGG -3′; antisense primer, 5′- CTCAGCACCAGCATCACC -3′) as an internal control to determine the uniformity of the template RNA for all specimens. For each sample, GCDH expression was derived from the ratio of its own expression to GAPDH expression using the following formula: relative expression  = 2^−\ (△Ct\ sample−△Ct\ control)^, △Ct = Ct~GCDH~−Ct~GAPDH~. Western Blotting {#s2i} ---------------- Cells were plated in 6-well plates at a density of 5×10^5^ cells/well and lysed using cell lysis buffer (Beyotime, China) and phenylmethylsulfonyl fluoride (PMSF, Sigma). Protein extracts were quantified using a BCA protein assay kit (Beyotime, China). 
Denatured protein samples (40 µg/lane) were separated on 10% sodium dodecyl sulfate polyacrylamide gels and transferred onto polyvinylidene difluoride membranes. The membranes were blocked with skim milk powder and incubated with primary antibodies for 2 h at room temperature. Primary antibodies for GCDH (Proteintech Group, Inc. China) were diluted to 1∶300 in 5% skim milk powder with 0.2% PBS-Tween 20. Primary antibodies against β-actin were diluted to 1∶1000, and antibodies against caspase 3 (Santa Cruz Biotechnology, Inc.), caspase 8, and caspase 9 (Cell Signaling Technology, Inc.) were diluted to 1∶500. The membranes were washed three times (5 min/wash) using Tris-buffered saline with Tween-20 (pH 8.0). These were then incubated with horseradish-peroxidase-conjugated secondary antibodies (Jackson ImmunoResearch, PA, U.S., 1∶3000) for 2 h at room temperature. The blots were washed three times with PBS-Tween 20 and developed with enhanced chemiluminescence substrate (Amersham Pharmacia Biotech, Piscataway, NJ, U.S.). Protein bands were imaged using a gel image processing system (UVP Labworks, Upland, CA, U.S.) and quantified by densitometry (Quantity One). β-actin was used as a protein loading control. Statistical Analysis {#s2j} -------------------- All experiments were performed in triplicate. Data are presented as mean ± standard deviation. Statistical analysis was performed using SPSS 17.0. Differences between two groups were compared using the Student's t test, and comparisons among more than two groups were performed via analysis of variance (ANOVA) and the Student-Newman-Keuls test. *P*\<0.05 was considered statistically significant. Results {#s3} ======= Assessment of the Neuron Purity {#s3a} ------------------------------- MAP2, which is mainly distributed in neuronal bodies and dendrites, is widely used in the identification of nerve cells. 
In this study, all nuclei were stained blue with Hoechst 33342, and all neuronal bodies and dendrites were stained red with Texas Red. In cultured isolated neurons, 92.4±1.6% of living cells were found to be MAP2-positive using immunofluorescence, and 94.3±2.5% of cells were found to be MAP2-positive using flow cytometry ([Fig. 1](#pone-0063084-g001){ref-type="fig"}). ![Assessment of neuronal purity.\ Immunofluorescence staining revealed the proportion of neurons in living cells to be 92.4±1.6%. Flow cytometry showed the proportion of neurons in living cells to be 94.3±2.5%. A: All nuclei were stained blue by Hoechst 33342. B: All neuronal bodies and dendrites were labeled red by Texas Red. C: A merged image showing Hoechst 33342 staining and Texas Red labeling. Scale bars: 20 µm. D: Cells without staining were analyzed by flow cytometry. E: Stained cells were analyzed by flow cytometry.](pone.0063084.g001){#pone-0063084-g001} Assessment of Interference Efficiency {#s3b} ------------------------------------- The mRNA levels of GCDH as measured by RT-PCR in the lentivirus-shRNA\#1, 2, and 3 subgroups were reduced by 63.4%, 54.2%, and 61.0%, respectively ([Table 1](#pone-0063084-t001){ref-type="table"}). Given that many patients have residual GCDH activity, which can reach 40% of normal levels, and that no association has been found between residual activity and clinical phenotype, suppressing the expression of the GCDH gene by as much as 60% is sufficient for investigations of the mechanism of GA1 [@pone.0063084-Christensen1]--[@pone.0063084-Harting1]. We assessed the efficiency of lentivirus-shRNA\#1 interference using Western blotting. The level of protein expression was reduced by 80.78% ([Fig. 2](#pone-0063084-g002){ref-type="fig"}). These results suggest that lentivirus-shRNA\#1 is appropriate for the following experiments.
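As an illustrative sketch (not part of the original study), the relative-expression values reported above can be reproduced from the mean ΔCt values in Table 1 with the 2^−ΔΔCt^ formula given in the Methods, taking the NC (negative control) group as the calibrator:

```python
# Sketch of the 2^-ddCt relative-expression calculation from the Methods.
# The dCt values (Ct_GCDH - Ct_GAPDH) are the group means reported in Table 1;
# the NC group serves as the calibrator.

delta_ct = {
    "control": 7.719,
    "NC": 7.893,
    "shRNA#1": 9.343,
    "shRNA#2": 9.020,
    "shRNA#3": 9.253,
}

def relative_expression(dct_sample: float, dct_calibrator: float) -> float:
    """Relative expression = 2^-(dCt_sample - dCt_calibrator)."""
    return 2.0 ** -(dct_sample - dct_calibrator)

for group, dct in delta_ct.items():
    rel = relative_expression(dct, delta_ct["NC"])
    print(f"{group}: relative expression {rel:.3f}, knockdown {(1 - rel) * 100:.1f}%")
```

Running this reproduces the Relative expression column of Table 1 (NC = 1, shRNA\#1 ≈ 0.366), from which the quoted knockdowns follow (e.g. 1 − 0.366 = 63.4% for shRNA\#1).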
![Efficiency of lentivirus-shRNA\#1 interference as detected by Western blotting.\ GCDH expression in rat striatal neurons 72 h after infection with lentivirus. Lentivirus-shRNA\#1 reduced the level of GCDH protein expression by as much as 80.78% relative to the negative control lentivirus. \**P\<*0.05.](pone.0063084.g002){#pone-0063084-g002} 10.1371/journal.pone.0063084.t001 ###### Relative expression levels of GCDH in different groups as detected by RT-PCR. ![](pone.0063084.t001){#pone-0063084-t001-1} ΔCt (GCDH−GAPDH) Relative expression ------------------------- ----------------------------------------------- --------------------- **control** 7.719±0.1233 1.129 **NC** 7.893±0.2401 1 **Lentivirus-shRNA\#1** 9.343±0.0306[\*](#nt101){ref-type="table-fn"} 0.366 **Lentivirus-shRNA\#2** 9.020±0.0100[\*](#nt101){ref-type="table-fn"} 0.458 **Lentivirus-shRNA\#3** 9.253±0.0153[\*](#nt101){ref-type="table-fn"} 0.390 \**P\<*0.05 *vs.* NC group. There were no significant differences between the control and NC groups with respect to the level of GCDH mRNA. The mRNA levels of GCDH in the lentivirus-shRNA\#1, 2, and 3 subgroups were reduced by 63.4%, 54.2%, and 61.0%, respectively. Neuronal Viability after Treatment with Lentivirus-shRNA\#1 and Lysine {#s3c} ---------------------------------------------------------------------- Incubating the cells with a concentration gradient (0--20 mmol/L) of lysine revealed that cell survival was not affected by lysine at concentrations below 10 mmol/L ([Table S1](#pone.0063084.s003){ref-type="supplementary-material"}). When the concentration of lysine was no greater than 5 mmol/L, there was no significant difference in viability between the NC and control groups ([Table 2](#pone-0063084-t002){ref-type="table"}), suggesting that the defective virus and low doses of lysine (≤5 mmol/L) were nontoxic to cells.
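The viability rates reported below in Table 2 follow the normalization given under the table, viability (%) = (OD~m~−OD~blank~)/(OD~0~−OD~blank~)×100. A minimal sketch (mean OD values taken from Table 2; small rounding differences against the published percentages are expected):

```python
# Sketch of the MTT viability-rate formula from Table 2.
# OD values are the reported group means; OD_blank = 0.169, and OD_0 is
# the lysine-free (0 mmol/L) group used as the 100% reference.

OD_BLANK = 0.169
OD_0 = 0.510  # neurons with 0 mmol/L lysine

def viability_pct(od_sample: float) -> float:
    """Viability (%) = (OD_m - OD_blank) / (OD_0 - OD_blank) * 100."""
    return (od_sample - OD_BLANK) / (OD_0 - OD_BLANK) * 100.0

samples = {
    "NC": 0.485,
    "NC + 5 mM lysine": 0.476,
    "Lentivirus-shRNA": 0.403,
    "Lentivirus-shRNA + 5 mM lysine": 0.268,
}

for name, od in samples.items():
    print(f"{name}: {viability_pct(od):.2f}%")
```

For example, the NC group gives (0.485−0.169)/(0.510−0.169)×100 ≈ 92.67%, matching the table.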
When lysine levels were higher than 10 mmol/L, the viability of neurons infected with NC lentivirus and lentivirus-shRNA\#1 was reduced to varying degrees. When cells were treated with 5 mmol/L lysine, lentivirus-shRNA\#1 reduced neuronal survival by 60.94% relative to cells transduced with the NC lentivirus. Lentivirus-shRNA\#1 alone reduced neuronal survival by 24.05% relative to NC lentivirus. In GA1 patients, neurons degenerate gradually and progressively. Hypermetabolic states can develop, exacerbating this degeneration [@pone.0063084-Klker1]. In our study, GCDH-deficient neurons partially degenerated, and 5 mmol/L lysine exacerbated this degeneration. Given that high-lysine diets do not induce neurodegeneration in normal children, 5 mmol/L lysine was used in the following experiments. 10.1371/journal.pone.0063084.t002 ###### OD in the detection of neuron viability by MTT assay. ![](pone.0063084.t002){#pone-0063084-t002-2} OD Viability rate (%) -------------------------------------- -------------- ----------------------------------------- **0 mmol/L lysine** 0.510±0.0189 100% **NC** 0.485±0.0085 92.67% **NC+5 mM lysine** 0.476±0.0100 90.21% **NC+10 mM lysine** 0.461±0.0097 85.83%[\*](#nt103){ref-type="table-fn"} **Lentivirus-shRNA** 0.403±0.0067 68.62%[\*](#nt103){ref-type="table-fn"} **Lentivirus-shRNA+5 mM lysine** 0.268±0.0070 29.27%[\*](#nt103){ref-type="table-fn"} **Lentivirus-shRNA+10 mM lysine** 0.245±0.0172 22.48%[\*](#nt103){ref-type="table-fn"} Viability rate (%) = (OD~m~−OD~blank~)/(OD~0~−OD~blank~)×100%; OD~m~: the OD of each sample; OD~0~: the OD of the neurons in the 0 mmol/L lysine group; OD~blank~: the OD of the blank control (0.169±0.0252). \**P\<*0.05 *vs.* the 0 mmol/L lysine group. As shown in [Figure 3](#pone-0063084-g003){ref-type="fig"}, nuclei were lightly stained blue in the NC and control groups, and there was no significant apoptosis in either group.
Lentivirus-shRNA\#1 increased the rate of neuronal apoptosis by 36.22% relative to NC lentivirus. When cells were treated with 5 mmol/L lysine, lentivirus-shRNA\#1 increased the level of neuronal apoptosis by as much as 76.21% relative to NC lentivirus. These results were consistent with the MTT assay results. ![Hoechst 33342 staining of apoptotic neurons.\ The effects of GCDH knockdown and excess lysine on nuclear morphological changes in rat neurons. Nuclei in uninfected neurons and neurons infected with negative control lentivirus were lightly stained blue. Apoptotic nuclei were deeply stained blue and appeared dense and fragmented (marked with arrows). Scale bars: 20 µm. The histogram represents the percentage of apoptotic cells. \**P\<*0.05.](pone.0063084.g003){#pone-0063084-g003} In order to confirm the effects of lentivirus-shRNA\#1 and increased lysine levels on neurons, we quantified the number of apoptotic cells using Annexin V-PE/7-AAD staining and flow cytometry ([Figure 4](#pone-0063084-g004){ref-type="fig"}). Because there was no significant difference in viability between the NC and control groups, with or without an additional 5 mmol/L lysine, we quantified apoptosis relative to the NC lentivirus group. Increased lysine did not change the apoptotic cell fraction in neurons infected with NC lentivirus. The apoptotic cell fraction was significantly higher in the lentivirus-shRNA\#1 group: 44.13% in cells not exposed to lysine and 83.35% in cells exposed to 5 mmol/L lysine. These results suggest that GCDH downregulation through lentivirus-shRNA\#1 induced neuronal apoptosis and that increased lysine enhanced this apoptosis. ![Detection of apoptosis using flow cytometry.\ Cells were assayed for apoptosis using Annexin V-PE/7-AAD staining with flow cytometry. Cells were grouped and treated as shown to quantify the apoptosis induced by GCDH knockdown and increased lysine.
Lentivirus-shRNA\#1 induced apoptosis, and 5 mmol/L lysine further increased the rate of apoptosis significantly. Z-VAD-FMK, a pan-caspase inhibitor, largely blocked the apoptosis induced by lentivirus-shRNA and increased lysine. \**P\<*0.05.](pone.0063084.g004){#pone-0063084-g004} Assessment of MPP {#s3d} ----------------- The collapse of MPP is the critical first step in apoptosis [@pone.0063084-Federico1]. Here, we report the differences in MPP status between the experimental and NC groups. TMRM fluorescence intensity was proportional to the level of MPP, as shown in [Figure 5](#pone-0063084-g005){ref-type="fig"}. Lentivirus-shRNA\#1 markedly decreased MPP even in the absence of lysine, and 5 mmol/L lysine enhanced this decrease. Quantification performed using flow cytometry was consistent with the LSM results. ![Assessment of MPP in rat striatal neurons.\ A: Fluorescence images of rat striatal neurons incubated with TMRM. Lentivirus-shRNA\#1 leads to mitochondrial depolarization and loss of fluorescence intensity. The loss of TMRM fluorescence from the mitochondrial regions indicates the collapse of MPP upon lentivirus-shRNA\#1 and lysine treatment. Scale bars: 20 µm. The histogram shows the quantitative representation of changes in the fluorescence intensity of TMRM upon different treatments. ΔF = (F~0~−F)/F~0~; F~0~: TMRM fluorescence intensity in the lysine-free NC group; F: TMRM fluorescence intensity in the other groups. \**P\<*0.05. B: MPP was assessed using flow cytometry. The abscissa represents SSC height (side scatter height); the ordinate, fluorescence intensity. The histogram shows the changes in mean fluorescence intensity of all the cells.
\**P\<*0.05.](pone.0063084.g005){#pone-0063084-g005} Expression of Apoptosis-related Proteins {#s3e} ---------------------------------------- Because GA-related metabolites can induce apoptosis in neurons, we evaluated the expression of apoptosis-related proteins using Western blotting ([Figure 6](#pone-0063084-g006){ref-type="fig"}). The protein levels of caspases 3 and 9 were significantly upregulated by lentivirus-shRNA\#1. The combination of lysine and lentivirus-shRNA\#1 intensified the upregulation of caspases 3 and 9. Neither lentivirus-shRNA\#1 nor 5 mmol/L lysine alone changed the level of caspase 8 expression, but exposure to both increased the protein level of caspase 8. ![Protein expression of caspases 3, 8, and 9.\ (A) NC; (B) lentivirus-shRNA\#1; (C) NC +5 mmol/L lysine; and (D) lentivirus-shRNA\#1+5 mmol/L lysine. \**P\<*0.05. The protein levels of caspases 3 and 9 were significantly upregulated by lentivirus-shRNA\#1, and this upregulation was intensified by 5 mmol/L lysine. Neither lentivirus-shRNA\#1 nor 5 mmol/L lysine alone changed the expression of caspase 8. Exposure to both conditions increased the protein level of caspase 8.](pone.0063084.g006){#pone-0063084-g006} Effects of Caspase Inhibitor on Apoptosis Induced by GA-related Metabolites {#s3f} --------------------------------------------------------------------------- To confirm the importance of caspase-dependent processes in apoptosis induced by GA-related metabolites, we included the pan-caspase inhibitor benzyloxy-carbonyl-Val-Ala-Asp(OMe)-fluoromethylketone (Z-VAD-FMK, MPBio, U.S.) in our experiments. This compound did not affect the survival of rat neurons when used at 100 µmol/L [@pone.0063084-Cao1]. Z-VAD-FMK was added to the medium 1 h prior to lentiviral infection. This blocked the suppressive effect of the metabolites on the viability of rat neurons to a significant extent, as indicated by flow cytometry ([Figure 4](#pone-0063084-g004){ref-type="fig"}).
With Z-VAD-FMK pretreatment, the apoptotic cell fraction in cells infected with lentivirus-shRNA\#1 decreased to 21.87% in cells not exposed to lysine and 41.66% in cells exposed to 5 mmol/L lysine. This confirmed that lysine-related metabolites induced apoptosis in a partially caspase-dependent manner. Discussion {#s4} ========== The lentiviral vector we constructed displayed high infection efficiency in primary striatal neurons and remarkably suppressed the expression of the GCDH gene. GCDH is located in the mitochondrial matrix. Lysine is transported into the mitochondria and degraded into glutaryl-CoA. When GCDH levels are low, glutaryl-CoA cannot be converted to crotonyl-CoA, and the generation of GA, 3-hydroxyglutaric acid (3-OHGA), and glutarylcarnitine is increased [@pone.0063084-Federico1], [@pone.0063084-Goodman1]. About 10--20% of GA1 patients are regarded as insidious-onset or late-onset. These patients do not experience any documented encephalopathic crises [@pone.0063084-Klker1], [@pone.0063084-Zafeiriou1], [@pone.0063084-Bhr1]. This means that GA1 patients suffer from neural degeneration even in the absence of the observable hypermetabolic events that can exacerbate it. In this study, the GCDH-deficient striatal neurons produced by lentivirus-shRNA\#1 were found to be partly apoptotic. Acute encephalopathic crises are often precipitated by events such as surgical intervention, febrile illness, and vaccination. Under hypermetabolic conditions, hypoglycemia stimulates the conversion of energy substrates in the brain to ketogenic amino acids and ketone bodies. The increased utilization of lysine in the brains of GA1 patients can enhance glutarate accumulation and inhibit the Krebs cycle [@pone.0063084-Frizzo1]. This in turn inhibits gluconeogenesis, resulting in hypoglycemia. This series of events constitutes a vicious cycle. Low-lysine and high-arginine diets have been widely used in GA1 therapy.
In most proteins, lysine is more abundant than tryptophan. Lysine breakdown increases substantially during catabolic crisis [@pone.0063084-Jafari1]. Approximately 90% of untreated GA1 patients develop neurodegenerative disease during brain development after an acute encephalopathic crisis. In our study, excessive lysine intake (higher levels of lysine-related metabolites) promoted the apoptosis induced by lentivirus-shRNA. We speculate that 5 mmol/L lysine may simulate catabolic crisis in this GA1 model. Previous *in vitro* models have focused mainly on organotypic slices or on neuronal cells incubated with GA, 3-OHGA, or other related metabolites. They have facilitated the development of a considerable number of hypotheses regarding neuropathogenesis, but many of these hypotheses are controversial. Some studies have shown GA and 3-OHGA to act as direct or indirect neurotoxins, while others have indicated no neurotoxicity. It has been suggested that astrocytes may protect neurons from the excitotoxic damage caused by 3-OHGA [@pone.0063084-Frizzo1]. Neuronal cultures have been shown to be more vulnerable to 3-OHGA than mixed-cell cultures [@pone.0063084-Wajner1]. However, experiments have also provided evidence that reactive glial cells may at least partially underlie the neuropathology of GA1 [@pone.0063084-QuincozesSantos1]. Other experiments have shown that GA does not induce neuronal death in the absence of astrocytes and that neonatal astrocyte damage is sufficient to trigger progressive striatal degeneration. In this case, neuronal death appeared several days after GA treatment and increased progressively [@pone.0063084-OliveraBravo1]. However, in GA1 patients, neuronal loss occurs shortly after the encephalopathic crisis and does not progress [@pone.0063084-Funk1]. Because existing *in vitro* models have produced profoundly conflicting results, further research should be performed and a new, more complex model should be developed.
Many factors limited these previous studies. Firstly, in GA1 patients, GA and other metabolites are generated within the cell and its mitochondria, and intracellular GA accumulation may cause direct mitochondrial toxicity within neurons. Furthermore, GA-related metabolites have never been examined for their impact on cell-membrane receptors. This is a limitation of the described *in vitro* models, which were conducted in organotypic slices or in neuronal cells incubated with GA-related metabolites. Secondly, the intracellular levels of GA and 3-OHGA are unknown, and they could be present in cells at concentrations an order of magnitude higher than those used in previous *in vitro* models. Thirdly, the interactions among related metabolites were not considered in previous *in vitro* models. Since experiments have demonstrated that the expression of GCDH is restricted to neurons in normal mouse brains [@pone.0063084-Zinnanti2], we focused on GCDH-deficient striatal neurons. In this novel GA1 model established using lentivirus-mediated shRNA, GCDH-deficient striatal neurons were found to undergo apoptosis. All GA-related metabolites were generated in the mitochondria, and they acted either intracellularly or extracellularly. All metabolites, even those related to carnitine deficiency, were found to interact with each other and collectively influence the viability of striatal neurons. Evidence has demonstrated that intracerebral de novo synthesis of GA and other metabolites, together with their subsequently limited transport across the blood-brain barrier, may be involved in the neuronal damage observed in GA1. This observation has inspired the design of KO mouse models [@pone.0063084-Koeller2]--[@pone.0063084-Sauer2]. The biochemical phenotypes of these mice are similar to those of GA1 patients, but these mice do not develop striatal injury spontaneously [@pone.0063084-Koeller1].
KO mice fed a high-lysine diet develop severe neuropathology, similar to that of GA1 patients, but the findings regarding the pathologic role of dicarboxylic acids in their brains have not been consistent [@pone.0063084-Zinnanti1], [@pone.0063084-Klker4]. These differences may be due to intrinsic differences between the striata of mice and of humans. Because the genome of the mouse is similar to that of humans and because mice are easy to handle, mice are widely used in gene knockout experiments. The rat, however, is the traditional animal of choice for investigating the human central nervous system, since its central nervous system is more similar to that of humans and it offers considerable practical advantages over the mouse [@pone.0063084-Clancy1]--[@pone.0063084-Hirst1]. Lentivirus-shRNA can integrate into the genomes of neurons to produce stable, long-term silencing [@pone.0063084-Dreyer1], [@pone.0063084-Molles1]. Therefore, intrastriatal administration of lentivirus-shRNA in neonatal rats may be suitable for the establishment of a novel *in vivo* model. Moreover, this model may be less expensive and easier to handle than the KO mouse model. Increasing evidence shows that mitochondrial dysfunction is involved in the pathology of various organic acidemias and in neurodegeneration [@pone.0063084-Wajner2], [@pone.0063084-Morn1]. In the present study, both LSM and flow cytometry results revealed that lentivirus-shRNA\#1 markedly decreased MPP levels and that 5 mmol/L lysine enhanced this decrease. These results indicate that mitochondrial dysfunction is involved in striatal neurodegeneration in GA1. Several lines of evidence have suggested that mitochondrial disruption is involved in the brain injuries sustained by GA1 patients [@pone.0063084-Zinnanti2].
Other experiments have shown that bioenergetic impairment is involved in the neurodegenerative changes associated with GA1 and have demonstrated that mitochondrial disruption plays an important role in striatal neurodegeneration in GA1 [@pone.0063084-Ferreira1]--[@pone.0063084-Latini2]. The collapse of MPP is the critical first step in apoptosis. Caspase 8 is an important initiator of the extrinsic pathway, caspase 9 is an important initiator of the intrinsic pathway, and caspase 3 is the major executioner in cell apoptosis. A great deal of evidence has shown that caspases contribute to neurodegeneration in Alzheimer's disease [@pone.0063084-Rohn1]. However, investigation into the correlation between caspase activity and neurodegeneration in GA1 has been limited. In this study, the protein levels of caspases 3, 8, and 9 were measured and used to identify the apoptotic pathways most likely to be involved in GA1. The levels of caspases 3 and 9 (both precursors and cleaved fragments) were higher in cells infected with lentivirus-shRNA\#1 than in the NC group. In these cells, co-treatment with 5 mmol/L lysine increased the level of caspase 8. Pretreatment with Z-VAD-FMK decreased the number of lentivirus-shRNA\#1-infected cells that were apoptotic, which suggests that the apoptosis induced by lysine-related metabolites might be partially caspase-dependent. In conclusion, we successfully established a novel cell model of GA1 using lentivirus-mediated shRNA against GCDH and excessive intake of lysine. Intrastriatal administration of lentivirus-shRNA in rats may offer another appropriate *in vivo* model for the study of GA1. This study provides evidence that GA1-triggered apoptosis in neurons is partially caspase-dependent. The specific details of the mechanisms and molecular players involved in this apoptosis merit further research.
Indeed, many novel mitochondrial targets for neuroprotection have been identified, providing more alternatives in addressing GA1 [@pone.0063084-PerezPinzon1]. Supporting Information {#s5} ====================== ###### **The diagram of the pFU-GW-siRNA vector.** CMV/LTR: 913--2415, U6 promoter: 2600--2915, Polylinker: 2916--2987, Ubiquitin Promoter: 2955--4140, EGFP: 4234--4953, LTR: 5721--6293, Polylinker: Hpa I, Xho I. Polylinker: **GTTAAC** GCGCGGTGACC **CTCGAG**. (TIF) ###### Click here for additional data file. ###### **Neurons infected with lentivirus.** Neurons were infected with negative control lentivirus at various MOI (1, 10, 20). Fluorescence images showed the best MOI to be 10. A: At MOI = 1, there was no fluorescence. B: At MOI = 10, more than 90% of cells were green and showed normal morphology. C: At MOI = 20, nearly all the cells were infected, but some cells exhibited swollen bodies and sparse neurites. Scale bars: 20 µm. Flow cytometry results revealed the transfection efficiency to be 96.5±2.3% at an MOI of 10. A: Uninfected neurons were analyzed by flow cytometry. D: At MOI = 10, cells were analyzed by flow cytometry. (TIF) ###### Click here for additional data file. ###### **OD in the detection of neuron viability by MTT assay.** Viability rate (%) = (OD~m~−OD~blank~)/(OD~0~−OD~blank~)×100%; OD~m~: the OD of each sample; OD~0~: the OD of the neurons in the 0 mmol/L lysine group; OD~blank~: the OD of the blank control (0.172±0.0297). \**P*\<0.05 *vs*. neurons in the 0 mmol/L lysine group. (DOC) ###### Click here for additional data file. [^1]: **Competing Interests:** The authors have declared that no competing interests exist. [^2]: Conceived and designed the experiments: XL QN. Performed the experiments: JG CZ XF FT. Analyzed the data: JG FT. Contributed reagents/materials/analysis tools: JG CZ XF QY. Wrote the paper: JG.
Thursday, February 26, 2009 I didn't write this book, but I sure like the name. I also like the content. I haven't actually read it, but I spoke to the author, Chip Haynes. He assured me that I would approve. I'm not even bitter about the name business. ...with the increasing uncertainties of our times, people will seek authenticity (rather than pretence and ambiguity) as a refuge for sanity and safety. Showing off, he said, is dead. Like the predilection to imbibe exotically labeled bottled water that comes out of the tap anyway. And like the predilection to wear your fantasies to youth and virility by driving a Lamborghini or souped-up V8 muscle car. Cracked leather is good. Like the wrinkles on a face. Plastic faces are out. and later waxed poetic (a practice I support entirely): We are the pedlars of authenticity in an age of swelling demand. So, to weave my own futurist vision, can the raw authenticity of cyclists become a societal template to replace that to which bankers and CEO’s once laid siege? In my vision the bicycle becomes an instrument of authentic expression; an instrument of societal progress and integrity. Of course, to we cyclists, this is already the case. But wider recognition would surely catalyse some interesting reconfiguration of a civilisation that is still over entranced with the devilry of manufactured image and misplaced values. It’s time for the age of the peloton of authenticity. We cyclists are, naturally, the ideal lead-out men for a ride such as that. Aside: There is an essay by Charles Taylor entitled The Ethics of Authenticity. I haven't read it recently, and undoubtedly it's not exactly what I remember it to be. Nevertheless, my fondness for the title has remained.
This is important to mention because the title of this post is a reference to Taylor's essay, and because I want to generally acknowledge that the value of authenticity is not a new discovery. As I was writing and thinking about critical mass yesterday I felt uneasy--my thinking wasn't clear. Now I've found at least one of the missing pieces: the artifice of anarchy. Perhaps that phrasing is too sing-song for serious consideration, but I think the idea is there. It is the antithesis of authenticity. So, critical mass: Unplanned? Hardly. Transportation? Hardly. Leaderless? Hmm, a suspect claim. A big F-U to auto-domination? You-betcha. A celebration of bicycle culture? OK. Presenting bicycling as outsider or liminal culture? Jah. Let's bring it on home. The ethos of authenticity is, ahem, critical. Normalize. I'm looking for The Solution, and I believe, as much as I believe anything, that it will be a Bicycle Solution. No, The Bicycle Solution. The Bicycle Solution is where the god of What Is meets the god of What Must Be. I'm less clear about how it will come to pass, but I suspect that compassion and softness will be important. I'm sure that if anyone bothers to read this I'll get shot down as a critical mass nay-sayer, but let me assure you, I love bicycle culture, I love nonsense, costume dress, and irreverent behavior. I like liminal spaces. I like outside-inside tension, sort of. But we can't afford to leave bicycles in the liminal space. Again: we can't afford to leave bicycles outside mainstream culture. It's the great paradox of liberty, diversity, and tolerance: humans can aspire to be more than human. Go figure. I'll keep working on it. For now, my only advice regarding critical mass is this: don't let yourself become part of a mob. Mobs are what happens when we let go of our aspirations. Wednesday, February 18, 2009 The Mavic EZ Ride system is a "clipless" pedal/shoe combination meant for ease of use on the bike, and ease of walking off the bike.
Unbeknownst to many in the American bicycle market, Mavic makes far more than just the wheels marketed in the States - overseas, Mavic offers entire component ranges and was a pioneer in electronic shifting. While lacking any true retention system, the EZ Ride system uses an x-shaped interface to key the shoe into the pedal, with a magnetic tab to help keep things in place. You can't pull up on the pedals, and you don't need to twist out of them, but the interface is sure to feel more secure than a platform pedal for most riders. For commuting and relatively short trips this system may make sense, but when it comes to longer distances I could see the lack of adjustment of foot position being a problem. There also does not appear to be any "float" or free twisting of the foot/pedal interface built into the system, which could be a problem for those with touchy knees. For more casual riders who nonetheless want a more secure pedal feel but just don't like clips and straps, or clipless pedals that you have to twist out of, the Mavic EZ Ride could be the answer. Visit the EZ Ride site for more information and animations showing how the system fits together. Joe Biel has produced a documentary, soon to be released, entitled A Post-Critical Mass Portland: Living in a Post-Revolutionary Bicycle Age. Here's a preview and a few conversation-starter questions that he posted online: What does it mean that Portland, one of the best North American cities for cycling, has virtually no Critical Mass? Is it no longer relevant in the evolution of cyclists, or has the police crackdown just been so successful? What are the new goals of cyclists? In a nutshell, critical mass is a leaderless group of bicyclists riding together in a city or urban environment, typically starting at a specified time and place, with the general intention of promoting the use of bicycles as transportation.
The leaderless element makes critical mass hard to pin down, though most participants seem to be comfortable with the idea that a critical mass is an explicitly political event. The wiki format seems to work well for leaderless groups, and there is a critical mass wiki: http://criticalmass.wikia.com/. It provides the following definition: Critical Mass bike rides take place monthly in cities around the world. They are free mass participatory events, with no leaders or fixed agendas. However, the broad aim is to celebrate cycling and sustainable transport, and to give cyclists safety in numbers. Based on this definition, the celebration of bicycle culture and community is inherent to any critical mass: Fun is essential, or at least encouraged. Costumed participants, unusual or unusually decorated bicycles, and musical accompaniment are not out of the ordinary. On the other hand, the events frequently come into conflict with users of public roadways. Critical mass events have been criticised for intentionally creating conflicts with motorists, and some participants and promoters do not deny these motives. Thus, critical mass is a combination of cultural celebration, recreation, and political protest. The experiences of critical mass participants and witnesses vary widely. Many neighborhoods welcome the "traffic calming" effect of critical mass, and the bicycle advocacy agenda is closely tied to the interests of pedestrians and public transit users. On the other hand, motorists are often inconvenienced by critical mass rides that move like long, slow trains through dense urban streets. The Wikipedia entry on critical mass supplies an overview of reactions and responses to critical mass events, but the intermingled issues are nuanced and complex.
Here, for example, is an experience reported by Gordon Inkeles on the Social Biking Blog: Attempting to drive home in my Chevy pickup during the last critical mass ride here in Arcata [California] I was locked in place on a narrow road for at least a half hour while leering cyclists flipped me the bird and screamed insults at my pickup truck. "Hey," I wanted to add, "I'm one of you. I'm hauling a yard of compost for my organic garden." If I had been trying to get to a hospital, I'd have been out of luck. The people behind me had little kids in the car and looked pretty upset. Whether or not you consider Arcata, CA, a dense urban environment, there are several issues prominent in this account: first, motorists inconvenienced by the event; second, blockage of the public roadway; and finally, the less than genteel behavior of the bicyclists. The last item, the behavior of the bicyclists, is particularly troubling because it occurred during an event designed to promote tolerance and diversity. This irony didn't occur by accident--irony never does--rather, it suggests deeper complexity. While the cyclists' behavior toward Mr. Inkeles was inexcusable, critical mass in general needs to be understood in a larger context. For example, it is critical to recognize that motorists are not the only targets during critical mass events. An incident during a critical mass in NYC in July 2008 led to the indictment of NYPD Officer Patrick Pogan for misdemeanor assault and multiple felonies related to giving false statements. This incident was the latest in "a pattern of excessive force and harassment against cyclists from even the highest ranks of the NYPD," according to Time's Up (a New York City-based not-for-profit environmental group). Not surprisingly, ill-treatment of cyclists, and citizens in general, appears to occur most frequently during critical mass events and other political gatherings that celebrate, in theory, the values of liberty, diversity, and tolerance.
Michael Bluejay's website http://critical-mass.info/ apparently served as a hub for critical mass activity and communication. It was operated on a volunteer basis from 1998 until late 2008. As it is no longer operating, it has the potential to provide an interesting snapshot of critical mass at a specific point in time. This would be a great point of entry for a serious academic or journalistic research effort. Is anyone taking a comprehensive academic look at critical mass in the US? What is the current state of critical mass in cities across the country? How does it fit into the landscape of bicycle advocacy? What has it accomplished? I suspect Joe's documentary will discuss some of these questions as they relate to Portland, OR, but what about the bigger picture? I don't think I have the time for this project at the moment, but maybe you're a doctoral student in political science, sociology, law enforcement, transportation planning, public administration, or something else, and looking for a research topic. Jump on it! On the other hand, if you happen to have US$100,000 ready to grant to the project I could be enticed to pursue it further. Please get in touch right away. I'm sure Mr. Biel would be interested too. You can find him here, and at Microcosm Publishing. Despite all the drugs and scandal, I can still get caught up in the excitement of big-league bike racing. There's a race going on in California right now that's about as big as bike racing gets in the United States of Stadium Sports. Still, the doping business is a bummer, especially when athletes lie about it. Doping stinks, but lying about it is worse. Kudos to A-Rod, and anyone else ready to come out of the closet. That's what I said. Nevertheless, here it is: a front disc hub with 135mm spacing (O.L.D. to some, or B.T.E.: "between the ends"). Why? It provides a monstrously strong dishless ISO-disc-brake-compatible front wheel. The downside: you need a fork with 135mm spacing.
The fix: Jeff Jones has had a bunch made by Vicious Cycles. The silver lining: you can run a double-wide rim and a 3.7" Endomorph tire, and wave at the ridiculously rough stuff as you float over it riding with one hand on the bars. Paul Comp makes the hub, but Jeff Jones is the source if you want to buy one: www.jonesbikes.com. Hehe. Big'n fat. Wide load. Bid'ness class. Yeah, whatever. If you haven't seen Jeff Jones' bikes, go check them out. Even if you have seen them, go check them out again. He's added a bunch of stuff since I'd been there last. Thursday, February 12, 2009 No doubt by this time you've received many letters or calls about the white bicycle in Macy's "My Funny Valentine" display in the New York flagship store. The issue at hand is the resemblance of the display to what are commonly known as "ghost bikes," white bicycles placed at the sites of traffic crashes that resulted in bicyclist fatalities. Despite the lack of formal organization, ghost bikes are internationally recognized within the bicycle community (see http://www.ghostbikes.org/), and within that community, Macy's display appears to make light of these fatal traffic incidents. The matter has been featured in several news pieces (for example: http://gothamist.com/2009/02/09/macys_white_bike_valentines_display.php). Now, "bicycle people" in America tend to get a little defensive. It's understandable - they are confronted with an automobile-dominated transportation culture on a daily basis. I'm pretty sure that the designers of this marketing campaign did not intend to reference the ghost bike phenomenon, but I think no one can deny the de facto similarities between the two. I suspect Macy's directors, executives and staff feel as most of us do: that traffic fatalities are tragic events, and that they should be prevented whenever possible.
In order for Macy's to make itself understood clearly on this point, I suggest the following: Provide a poster or fliers at the site of the My Funny Valentine display informing shoppers of this issue, the meaning of ghost bikes, Macy's unintentional use of the symbolism, and the corporation's view on traffic fatalities and public safety. Donate 1% of profits from sales related to the My Funny Valentine marketing campaign to Transportation Alternatives, a not-for-profit organization dedicated to promoting the interests of pedestrians, bicyclists, and transit users in New York City. Look, I know this is small potatoes for a company like Macy's, but it means a lot to the bicycle community. Do the right thing, please. On Valentine's Day, wouldn't it be nice to see a company with a heart? Sorry, I couldn't resist. Wednesday, February 11, 2009 Here are some great tips for cleaning your bike using non-toxic around-the-house products. Props to The Old Bike Blog and Riding Pretty, because I've found yet another thing I seem unable to let pass... I am generally in favor of non-toxic cleaning and lubrication, for all sorts of things, including bikes. That said, old habits die hard, and I've not made the switch. I'm about to list all the non-toxic, green, biodegradable, etc. bicycle cleaning and lubricating products I can find. If you use or have used any of these, please comment on your experience. If you know of others, please let me know. Here we go. Lanolin is rumored to work well in place of grease on nuts and bolts. I myself have used beeswax for this application. It appears to work. The whole "olive oil on the chain" thing makes me a little queasy, so let's skip it for now. I'd be pleased to find out that my reaction is needlessly discriminatory, but I'm just not up to trying it myself right now.
--- abstract: 'We report the results of Faraday rotation measurements of 23 background radio sources whose lines of sight pass through or close to the Rosette Nebula. We made linear polarization measurements with the Karl G. Jansky Very Large Array (VLA) at frequencies of 4.4 GHz, 4.9 GHz, and 7.6 GHz. We find the background Galactic contribution to the rotation measure in this part of the sky to be $+$147 rad m$^{-2}$. Sources whose lines of sight pass through the nebula have an excess rotation measure of 50-750 rad m$^{-2}$, which we attribute to the plasma shell of the Rosette Nebula. We consider two simple plasma shell models and how they reproduce the magnitude and sign of the rotation measure, and its dependence on distance from the center of the nebula. These two models represent different modes of interaction of the Rosette Nebula star cluster with the surrounding interstellar medium. Both can reproduce the magnitude and spatial extent of the rotation measure enhancement, given plausible free parameters. We contend that the model based on a stellar bubble more closely reproduces the observed dependence of rotation measure on distance from the center of the nebula.' author: - 'Allison H. Savage, Steven R. Spangler, and Patrick D. Fischer' title: Probing the Rosette Nebula Stellar Bubble with Faraday Rotation --- Introduction ============ Luminous young stars interact with and alter the Interstellar Medium (ISM) from which they form. They interact by photoionizing gas in their vicinity, leading to a propagating ionization front [@spi1968], and by the powerful stellar winds formed by hot, luminous stars. Over the course of a stellar lifetime, stellar winds modify the ISM by inflating a bubble of hot gas surrounding a star cluster. 
The @wea1977 solution for the bubble due to a single star consists of an inner termination shock, a surrounding bubble of hot, low density stellar gas, a contact discontinuity, interstellar medium gas that is photoionized, and finally, an outer shock through which the interstellar gas has passed. A diagram illustrating this structure is given in Figure 1 of @fre2003. Within this picture, the visible HII region corresponds to the annular shell of shocked, photoionized gas. Whether a bubble structure exists, or instead a less dynamic structure corresponding to an ionization front, depends on the mechanical luminosity of the wind or winds in the star cluster. The long term goal of our research program is to better understand how stars in OB associations modify the ISM. In this paper, we present results on Faraday rotation measurements (a diagnostic of plasma properties) on lines of sight through the ionized “bubble” produced by one OB association, and interpret the measurements in the context of models of young clusters. HII regions are plasmas, and principles of plasma physics determine how these structures evolve and impact the surrounding interstellar medium. One of the most important properties of an astrophysical plasma is the magnetic field. The magnetic field in an HII region or stellar bubble can strongly impact the evolution of the HII region or bubble. At the same time, modification of the magnetic field in the vicinity of an HII region could have consequences for subsequent star formation, properties of interstellar turbulence, and heat flow, among other processes. Measurement of magnetic fields in the interstellar medium is notoriously difficult. 
One of the best available techniques, and the one utilized in this paper, is Faraday rotation of linearly polarized radio waves from extragalactic radio sources (described in Section 1.1 below; see @min1996 [@hav2004; @hav2006; @bro2003; @bro2007; @val1993; @val2004], among others, for prior uses of this technique). An attractive aspect of Faraday rotation is that it can also be measured for lines of sight that pass through the solar corona, and thus provide information on the coronal magnetic field [@man2000; @ing2007]. The fact that the same diagnostic technique can be used in these two media may facilitate comparison between plasma processes in the corona and solar wind, and those in the interstellar medium. The specific object for study in this paper is the Rosette Nebula, which is a prominent HII region featuring an obvious shell structure and a central cavity (see Figure \[figrossource\]). It is located on the edge of a molecular cloud in the constellation Monoceros. We adopt as its center that of the NGC 2244 star cluster (which is responsible for the Rosette), which is given by @ber2002 as RA(J2000)= 06$^h$ 31$^m$ 55$^s$, Dec(J2000)=04$^o$ 56’ 34" ($l$=206.5, $b$=-2.1). The distance to the Rosette is 1600 parsecs and its age is estimated to be 3 $\pm$ 1 Myr old [@Rom2008]. @men1962 concluded that the Rosette Nebula is an ionization-bounded Strömgren sphere on the basis of radio continuum observations and that its structure is that of an annular shell. This structure is consistent with that of a wind-blown bubble, as mentioned above. Within the central cavity of the Rosette is the OB stellar association NGC 2244. Photometry and spectroscopy studies put the age of NGC 2244 at less than 4 Myr [@per1989]. Evolutionary models place the main-sequence turn off age at 1.9 Myr [@Rom2008]. Despite this age discrepancy, both theoretical models and observations indicate that NGC 2244 is still forming stars [@Rom2008]. 
There are 21 confirmed pre-main sequence stars, and 113 confirmed stars belonging to NGC 2244, of which at least 7 are O type stars and 24 are B type stars [@Rom2008; @park2002; @ogu1981; @wan2008]. The two brightest stars are HD 46223, an O4V star, and HD 46150, an O5V star [@Rom2008; @wan2008]. Faraday Rotation as a Diagnostic Technique for Stellar Bubbles -------------------------------------------------------------- Faraday rotation is an excellent diagnostic tool for estimating properties of astrophysical plasmas such as the density of the general interstellar medium and the large scale structure of the Galactic magnetic field. Faraday rotation is the rotation in the plane of polarization of a radio wave as it propagates through a plasma that has a magnetic field. The polarization position angle $\chi$ of a source, or part of a source, whose radiation has propagated through the ISM is given by $$\chi=\chi_{0}+\left[\left(\frac{e^{3}}{2\pi m_{e}^{2}c^{4}}\right)\int_0^{L} {n_e\vec B\cdot \vec{ds}}\right]\lambda^{2} \label{RM1}$$ where $\chi$ is the polarization position angle, $\chi_{0}$ is the intrinsic polarization position angle (i.e. that which would be measured in the absence of a medium), $\emph{e}$ is the fundamental electric charge, $\emph{$m_{e}$}$ is the mass of the electron, $\emph{c}$ is the speed of light, $\emph{n$_e$}$ is the electron density, $\emph{$\vec {B}$}$ is the magnetic field, $\emph{$d\vec{s}$}$ is the incremental pathlength interval along the line of sight, and $\emph{$\lambda$}$ is the wavelength. The integral in Equation (1) is taken from the source at $s=0$ to the observer at $s=L$. The variable $L$ represents the effective thickness of the plasma. With this convention, a positive value for the integral corresponds to the average magnetic field pointing from the source to observer, while a negative value represents a mean magnetic field pointing from the observer to the source. 
The quantity in square brackets is defined as the rotation measure (RM). The fundamental definition of Faraday rotation given in Equation (\[RM1\]) is in cgs units. Values of RM are conventionally given in SI units. This conversion can be accomplished by multiplying the cgs value of the RM by a factor of $10^4$ to obtain the SI value. Alternatively, an expression which gives an SI value for the RM given mixed but convenient interstellar units is [@min1996] $$RM=0.81\int^L_0 n_{e} (cm^{-3}) \vec{B}(\mu G)\cdot \vec{ds} \mbox{ (pc) rad m$^{-2}$ } \label{RMSI}$$ Equation (\[RM1\]) shows that if measurements of $\chi(\lambda)$ are available at two or more wavelengths (preferably three or more), the RM can be measured as the slope of a line through the data on a plot of $\chi$ vs. $\lambda^{2}$, $$RM=\frac{\Delta\chi}{\Delta(\lambda^{2})} \label{RM}$$ The wavelengths of observation must be spaced closely enough that there is no possibility of a “wrap” of $\pi$ radians between two adjacent frequencies of observation. This is referred to as the “n-$\pi$ ambiguity”. A discussion of the constraints on spacing between observing frequencies, as well as an illustration of the difficulties if they are spaced too far apart, is given in [@laz1990] (see Figures 3 and 4 of that paper). Further details of how we extract RM values from our data are given in Section 3.2. Among the many studies to have used Faraday rotation in the investigation of interstellar magnetic fields are [@rand1989; @min1996; @bro2003; @bro2007; @har2011; @van2011]. To extract information on the magnetic field, it is necessary to have information on the electron density, since the integrand in Equation (\[RM1\]) is the product of $\emph{n$_{e}$}$ and B$_{||}$, the parallel component of the interstellar magnetic field. The data sources we use for estimates of $n_{e}$ are described in detail in Section 4.1 below. 
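As a minimal sketch (ours, not code from the paper), the slope fit of Equation (3), the mixed-units estimate of Equation (2), and the n-$\pi$ ambiguity limit for a pair of observing frequencies can be written in a few lines of Python; all function names and example values are our own:

```python
# Illustrative sketch only: RM as the slope of chi vs. lambda^2 (Eq. 3),
# the convenient-units RM of Eq. (2), and the n-pi ambiguity limit.
# Function names and numerical inputs are ours, not the paper's.
import numpy as np

C = 2.998e8  # speed of light, m/s

def fit_rm(freqs_hz, chi_rad, sigma_rad):
    """Weighted linear fit chi = chi_0 + RM * lambda^2.

    Returns (RM in rad/m^2, chi_0 in rad); weights are 1/sigma.
    """
    lam2 = (C / np.asarray(freqs_hz)) ** 2
    rm, chi0 = np.polyfit(lam2, chi_rad, deg=1, w=1.0 / np.asarray(sigma_rad))
    return rm, chi0

def rm_uniform(ne_cm3, b_par_uG, length_pc):
    """Eq. (2) for a uniform medium: RM = 0.81 * n_e * B_parallel * L."""
    return 0.81 * ne_cm3 * b_par_uG * length_pc

def npi_ambiguity_limit(f1_hz, f2_hz):
    """Largest |RM| before chi can wrap by pi between two frequencies."""
    lam2_pair = (C / np.array([f1_hz, f2_hz])) ** 2
    return np.pi / abs(lam2_pair[0] - lam2_pair[1])

# The closely spaced 4.4/4.9 GHz pair tolerates much larger RMs
# before a pi wrap than the widely spaced 4.4/7.6 GHz pair:
print(npi_ambiguity_limit(4.436e9, 4.936e9))  # roughly 3.6e3 rad/m^2
print(npi_ambiguity_limit(4.436e9, 7.636e9))  # roughly 1.0e3 rad/m^2
```

With only two frequencies the fit reduces exactly to the finite-difference form of Equation (3); with three or more it gives a least-squares slope and an intercept estimate of $\chi_0$.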
The Rosette Nebula as a Candidate for Faraday Rotation Measurements ------------------------------------------------------------------- The Rosette Nebula is an excellent object for studies of stellar bubbles via the technique of Faraday rotation. Besides being a prominent HII region with a shell and cavity, the Rosette has other properties which make it an excellent choice for studies of the impact of a young stellar association on the surrounding ISM. The Rosette is in the rough direction of the Galactic anticenter ($\emph{l}$=206.5$^o$). This gives it a number of advantages relative to HII regions and young star clusters in the inner two quadrants of the Galactic plane. Since star formation regions are relatively rare beyond the solar circle, there is no confusion in the Rosette field with other star formation regions at different distances along the line of sight. By contrast, studies in the Cygnus Region (e.g. @whi2009) are complicated by numerous star formation regions at various distances. Extinction also is less heavy for most anticenter lines of sight. The star cluster responsible for the Rosette Nebula (NGC 2244) is clearly seen, and the spectral types of the stars have been determined. Another advantage of the Rosette Nebula is its structural simplicity. It resembles the theoretical ideal of a photoionized interstellar bubble as described by the theory of @wea1977. Furthermore, the parameters of the bubble structure have been determined by the radio continuum observations of @men1962, and later confirmed by @cel1983 [@cel1985]. @cel1985 determined that the Rosette Nebula is a spherical shell of ionized matter around NGC 2244 on the basis of radio continuum observations at 1.4 GHz and 4.7 GHz with the 100 m telescope at Effelsberg. Celnik also reported values for the inner and outer radius of the shell of gas and the density within the HII region [@cel1985][^1]. 
We adopt Celnik’s parameters for the shell density and the structure in our analysis in Section 4. Previous results of Faraday Rotation Diagnostics of HII Regions --------------------------------------------------------------- @whi2009 presented a study of the Galactic plane region near the Cygnus OB1 association. The main purpose of @whi2009 was to confirm the existence of a “Faraday Rotation Anomaly” in this part of the sky, i.e., a large change in RM over a small distance on the sky. @whi2009 argued that this anomaly was due to the plasma bubble associated with the Cygnus OB1 association. @whi2009 also developed a simple shell model that reproduced the observed magnitude and the change in RM in Cygnus. In Section 4, we will use this shell model to interpret our data on the Rosette Nebula. @har2011 used Faraday rotation and H$\alpha$ measurements with the WHAM spectrograph [@haf2003] to measure the electron density and line of sight magnetic fields in several HII regions. Faraday rotation was measured for extragalactic radio sources viewed through the HII regions. They probed 93 lines of sight in 5 HII regions, and found that each HII region displays a coherent magnetic field, with a range of 2 to 6 $\mu$G for the parallel component [@har2011]. @har2011 briefly compared their RM values with the model presented by @whi2009 and concluded that there is no evidence for a shell with an amplified magnetic field in any of the HII regions. @whi2009 and @har2011 thus come to different conclusions about the nature of the plasma shell that comprises an HII region. It should be noted that @whi2009 claimed that the Faraday rotation anomaly was consistent with a wind-blown bubble, but did not claim that it was inconsistent with a shell without magnetic field amplification. Additional observations of the sort presented in [@whi2009] will help resolve this issue. 
Measurements of $RM$ on a large number of lines of sight through an HII region (in the case of the present paper, the Rosette Nebula) will diagnose the plasma structure of the HII region, and determine if the HII region produces significant modification of the interstellar magnetic field. In time, we plan to carry out such observations on a set of HII regions associated with star clusters of different age, stellar luminosity, and wind power. Observations ============ ![A mosaic of the Rosette Nebula compiled from the Palomar Sky Survey II. The interior sources whose lines of sight pass through, or close to, the visible nebula are labeled with the prefix of “I”. The exterior sources whose lines of sight are well outside the visible nebula are labeled with the prefix of “O”. Sources with negative RMs are labeled with open circles and those with positive RMs have solid circles. Depolarized sources are marked with an “X”. The source symbols are scaled with the magnitude of the log $\mid$RM$\mid$. []{data-label="figrossource"}](f01.eps) All observations were made with the Karl G. Jansky Very Large Array (VLA) radio telescope of the National Radio Astronomy Observatory during the first several months of commissioning of the upgraded VLA.[^2] Details of the observations and resultant data are given in Table \[tbl2\]. The VLA was in D array for all of the observations. We observed 23 extragalactic radio sources whose lines of sight pass through or close to the Rosette Nebula. The sources were chosen from the NRAO VLA Sky Survey (NVSS), which covers the entire sky north of declination -40$^{\circ}$ at 1.4 GHz [@con1998]. We also observed four calibrators, 3C286, J0632+1022, J0643+0857, and 3C138. The calibrator 3C286 is commonly used for absolute calibration of the visibility amplitudes because it has a well known flux density. It is also used to calibrate the origin of the polarization position angle. 
The source 3C138 was used for independent observations that could also set the flux density scale and determine the origin of the polarization position angle. Specifically, we used our observations of 3C138 to independently confirm the value of the R-L phase difference (used to calibrate the polarization position angle) obtained from 3C286. The source J0632+1022 was the primary calibrator for the project, functioning as the gain calibrator, i.e., determining the complex gain of each antenna as a function of time. This source (J0632+1022) was also used to measure the instrumental polarization, described by the “D factors”, D$_R$ and D$_L$ [@big1982; @sak1994]. We also observed a second source, J0643+0857, to obtain a completely independent set of D factors which confirmed our instrumental polarization calibration. In addition to the calibrators, we observed 23 program sources. We had 12 sources whose lines of sight passed through the Rosette Nebula. The remaining 11 sources have lines of sight that pass near the Rosette Nebula but outside the obvious H$\alpha$-emitting shell. We observed these latter sources so we could establish a background RM value due to the Galactic plane. Figure \[figrossource\] shows an image of the Rosette Nebula with the positions of our sources superposed. 
[ll]{}
Dates of Observation & March 20, 2010; July 4, 2010; August 22, 2010\
Duration of Observing Sessions (h) & 5.95; 5.89; 5.94\
Frequencies of Observations (MHz) & 4136; 4436; 4936; 7636\
VLA array & D\
Restoring Beam (diameter) & 128; 196\
Number of Scans per Source Per Session & 5\
RMS Noise Level in Q and U Maps (mJy/Beam) & 0.042; 0.048; 0.037\

[cccccccc]{}
Source & $\alpha$(J2000) & $\delta$(J2000) & $\emph{l}$ & $\emph{b}$ & $\xi$ & S(4.9GHz) & Number of\
Name & h m s & $^o$ ' " & ($^o$) & ($^o$) & (arcmin) & \[Jy\] & Freqs. Observed\
I1 & 06 28 39.50 & 04 47 08.0 & 206.1 & -2.9 & 49.6 & 0.017 & 2\
I2 & 06 29 56.26 & 04 26 33.0 & 206.5 & -2.7 & 42.2 & 0.260 & 3\
I3 & 06 29 57.30 & 04 47 45.5 & 206.2 & -2.6 & 30.6 & 0.038 & 3\
I6 & 06 30 50.04 & 05 29 26.6 & 205.7 & -2.1 & 36.6 & 0.013 & 2\
I7 & 06 31 24.28 & 05 02 50.8 & 206.2 & -2.1 & 9.9 & 0.043 & 2\
I8 & 06 31 34.31 & 04 22 34.4 & 206.8 & -2.4 & 34.4 & 0.025 & 3\
I10 & 06 32 31.12 & 05 30 32.7 & 205.9 & -1.7 & 35.2 & 0.024 & 2\
I12 & 06 33 03.14 & 04 44 56.0 & 206.6 & -1.9 & 20.6 & 0.047 & 3\
I14 & 06 33 46.34 & 05 36 54.0 & 205.9 & -1.4 & 48.9 & 0.070 & 3\
I15 & 06 34 00.01 & 05 10 42.8 & 206.3 & -1.5 & 34.2 & 0.021 & 3\
I16 & 06 34 11.48 & 05 25 32.0 & 206.1 & -1.3 & 44.7 & 0.020 & 2\
I18 & 06 35 25.96 & 05 14 15.3 & 206.4 & -1.2 & 55.4 & 0.028 & 2\
O1 & 06 24 18.84 & 04 57 01.9 & 205.4 & -3.7 & 113.6 & 0.150 & 2\
O2 & 06 25 51.89 & 04 35 40.2 & 205.9 & -3.6 & 92.8 & 0.340 & 3\
O4 & 06 27 21.09 & 05 45 37.8 & 205.1 & -2.7 & 84.0 & 0.090 & 3\
O5 & 06 27 36.73 & 06 32 52.1 & 204.4 & -2.3 & 115.7 & 0.066 & 3\
O7 & 06 27 38.32 & 03 24 59.6 & 207.2 & -3.7 & 111.7 & 0.220 & 2\
O9 & 06 30 52.53 & 06 24 50.5 & 204.9 & -1.6 & 89.6 & 0.050 & 3\
O11 & 06 33 32.77 & 04 00 06.0 & 207.3 & -2.1 & 61.5 & 0.110 & 2\
O14 & 06 35 51.95 & 03 42 18.0 & 207.9 & -1.8 & 94.9 & 0.029 & 2\
O15 & 06 36 05.69 & 04 32 40.5 & 207.1 & -1.3 & 66.9 & 0.410 & 3\
O16 & 06 37 23.05 & 04 05 44.1 & 207.7 & -1.3 & 96.3 & 0.029 & 3\
O17 & 06 37 36.18 & 05 55 32.5 & 206.1 & -0.4 & 103.4 & 0.038 & 2\

We observed 128 MHz wide spectral windows centered on three frequencies: 4.436 GHz, 4.936 GHz, and 7.636 GHz. We had three sessions on the VLA (“scheduling blocks”) on March 20, July 4, and August 22, 2010. We also made observations at 4136 MHz for the March and July sources, which would have provided polarization measurements at 4 frequencies. However, we ultimately flagged all 4.1 GHz data due to overwhelming RFI. Table \[tbl2\] presents a summary of the observations, which includes the date of observation, the duration of the sessions, the frequencies observed, the VLA array, the restoring beam used for each session, the number of scans per source per session, and the characteristic RMS noise level in the Q and U maps. The sources for the March and July sessions were the same, and we observed those sources at all three frequencies. The August session observed additional sources. This new set of sources was observed at 4.4GHz and 4.9GHz only. The intent was to observe these sources at 7.6 GHz as well, but the D array observing season ended before a 4$^{th}$ scheduling block was carried out. Table \[sources\] lists all the sources with a project name in column 1, the RA and Dec (J2000) in columns 2 and 3, respectively, the galactic longitude and latitude in columns 4 and 5, and the angular distance between the line of sight and a line of sight passing through the center of the Rosette, $\xi$, in column 6. The total Clean Flux at 4.9GHz is given in column 7, and in column 8, the number of frequencies observed for each source, where the number 3 corresponds to the set of frequencies of \[4.4 GHz, 4.9 GHz, $\&$ 7.6 GHz\] and the number 2 corresponds to the set \[4.4GHz and 4.9GHz\]. The range in frequency between 4.4 GHz and 7.6 GHz allows us to obtain RM values that are as low as a few tens of rad m$^{-2}$, given the errors in the polarization measurements (see Section 3.2 below).
The shorter range between 4.4GHz and 4.9GHz allows for measurements of large RM values without being affected by the “n$\pi$ ambiguity”. Data Reduction ============== All data reduction was performed with the Common Astronomy Software Applications (CASA) data reduction package. The calibration procedure is similar to that used in our prior Faraday rotation projects with the VLA, such as @whi2009 and @min1996. The procedure for reducing and calibrating the data was as follows. 1. We flagged out measurements corrupted by radio frequency interference (RFI). For all sessions, some antennas were completely flagged because of corrupted or missing data. We also implemented position corrections for a number of antennas. As well as usual systematic flagging procedures (e.g. “Quack”), we visually inspected the data in order to manually remove RFI and other problems. 2. Calibration of the array, consisting of determination of the complex gains and instrumental polarization parameters (“D factors”), as well as the right-left phase difference for the entire array, was carried out following the online $\emph{EVLA Continuum Tutorial}$ and supplemented by the handbook for the CASA program.[^3] 3. Polarized images of the sources were made from the calibrated visibility data with the CASA task CLEAN. CLEAN is a task that Fourier transforms the data to form the “dirty map” and “dirty beam”, carries out the CLEAN deconvolution algorithm, and restores the image by convolving the CLEAN components with the restoring beam. We produced CLEANed maps of the Stokes parameters I, Q, U, and V. Different weighting schemes in the (u, v) plane were used in the different sessions. The weighting was set to uniform for the March and July sources, but natural weighting was used for the August sources in order to obtain a better signal to noise ratio for the weaker sources observed in that session. The restoring beam for the March and July sources, across all frequency bands, was 128. 
For the August sources, the restoring beam was 196. The larger restoring beam in the August 2010 session is due to the use of natural rather than uniform weighting in the (u,v) plane. All maps presented utilized external calibration only. A single iteration of phase-only self calibration did not produce an improved signal-to-noise ratio for our maps. Imaging the Sources ------------------- Having obtained the maps of the Stokes parameters I, Q, U, and V for each source at each frequency, we generated maps of the linear polarized intensity, L, and the polarization position angle, $\chi$ $$L=\sqrt{Q^2+U^2}$$ $$\chi=\frac{1}{2} \tan^{-1}({\frac{U}{Q}})$$ For each source and frequency, we worked with images of I, L, and $\chi$. Examples of the images of two of our sources are shown in Figures \[figmaps15\] and \[figmaps14\]. Figure \[figmaps15\] shows the I, L, and $\chi$ maps of a point source (to the D array), I15, that was found to have a large RM (633 $\pm$ 14 rad m$^{-2}$). Figure \[figmaps14\] shows a source, O2, which is resolved to the D array and possesses structure. Determination of Rotation Measures ---------------------------------- In this section, we describe how we obtained RMs from data of the sort shown in Figures \[figmaps15\] and \[figmaps14\]. We first identified a local maximum in the polarized intensity in the 4.4GHz map. We then measured the polarization position angle $\chi$ at this location for the 2 or 3 frequencies available for this source. Since the sources from the August scheduling block have only two data points, the RM was calculated from Equation (\[RM\]). There are also larger errors associated with the sources from the August scheduling block due to having only two data points that are only slightly separated in frequency. 
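The relations for $L$ and $\chi$ above translate directly into array operations on the Stokes $Q$ and $U$ maps. The following is our own minimal sketch (the small arrays stand in for pixel maps; real maps would come from the CLEANed images); `arctan2` handles the quadrant so that $\chi$ falls in $(-\pi/2, \pi/2]$:

```python
# Sketch (ours): polarized intensity L and position angle chi from
# Stokes Q and U pixel maps. The 2x2 arrays are placeholders for maps.
import numpy as np

def polarization_maps(Q, U):
    """Return (L, chi) from Stokes Q and U.

    Using arctan2 rather than arctan keeps the half-angle in the
    correct half-plane, so chi lies in (-pi/2, pi/2].
    """
    L = np.hypot(Q, U)            # L = sqrt(Q^2 + U^2)
    chi = 0.5 * np.arctan2(U, Q)  # chi = (1/2) atan2(U, Q)
    return L, chi

Q = np.array([[1.0, 0.0], [-1.0, 0.5]])
U = np.array([[0.0, 1.0], [0.0, 0.5]])
L, chi = polarization_maps(Q, U)
# A pixel with Q=1, U=0 is polarized along chi=0; Q=0, U=1 gives chi=pi/4.
```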
There are three data points for the sources from the March and July scheduling blocks, and the RM was calculated by plotting the polarization position angle, $\chi$, against $\lambda^{2}$ and fitting a line to this relationship. An example of this is illustrated in Figure \[fig15\], for the source I15, and Figure \[figI14\] for the source O2. All of our RM values were positive except for I18 and a component of O14. Two of the sources, I3 and O11, were depolarized, so we did not obtain an RM for them. ![A plot of the polarization position angle $\chi$ in radians against the square of the wavelength in \[m$^2$\] for the interior source I15 (Image shown in Figure \[figmaps15\]). The value of the fit RM is 633 $\pm$ 14 rad m$^{-2}$. Error bars are contained within the plotted symbols.[]{data-label="fig15"}](f04.eps) ![Polarization position angle data for source O2, in the same format as Figure \[fig15\]. The two sets of data points present measurements for the two components of the source seen in Figure \[figmaps14\]. The solid black points represent the data for the north component (component (a) in Table 3), and the solid gray points are for the south component (component (b) in Table 3). The fit RM values are 80 $\pm$ 8 rad m$^{-2}$ for component (a) and RM= 64 $\pm$ 6 rad m$^{-2}$ for component (b). Error bars are contained within the plotted symbols.[]{data-label="figI14"}](f05.eps) The source I2 requires additional comments. I2 is an interior source from the March and July scheduling blocks. Usually, this would mean that we had polarization data at three frequencies. However, the 4.4GHz data for I2 were excluded from the calculation of the RM measurement. The 4.4GHz data failed a test for data quality that we applied to our observations, as follows. For each source and frequency, the data from each scan were mapped. As discussed in Section 2 above, each source was typically observed for 5 scans during the 6 hour observing session.
These scan maps were made in all polarization parameters as well as the total intensity I. The purpose of this exercise was to make sure that no systematic changes occurred during the observing session, due to incorrect correction for instrumental polarization, or similar effects. Once it was determined that the polarization data were “stationary” during the observing session, and that no drastically flawed data were present, I, Q, and U maps as well as maps of the derived quantities L and $\chi$, were made with all available data. Unlike the other sources for which values of $\chi$ were within 1 $\sigma$ of the mean values, the $\chi$ time series for I2 showed scan-to-scan variations larger than noise. We examined I2 at 4.9GHz and 7.6 GHz, and determined that inconsistent polarization position angles were not present at the higher frequencies. Since we only used two frequencies in determining the RM value for I2, there is a larger error associated with this source. The degree of linear polarization for I2 was extremely low, 0.1$\%$ at 4.4GHz, and we attribute the variations of $\chi$ to residual instrumental polarization artifacts, which can appear with low values of the degree of linear polarization [@sak1994]. We retain I2 as one of the sources in our sample because we believe the data from the two higher frequencies are adequate to determine $RM$. The $RM$ for I2 is consistent with the values for adjacent sources that we determined from measurements at three frequencies, indicating that n-$\pi$ ambiguities are not a problem. The fits to the data shown in Figures \[fig15\] and \[figI14\] are sufficiently good to give us confidence that we have an accurate measure of the RM. Nonetheless, there can be a residual concern that the RM is larger than the value resulting from our fit, and that there is one or more rotations of $\pi$ radians between the frequencies observed. 
To exclude this possibility and demonstrate that the 3-frequency RM fits were accurate, we made a $\chi$($\lambda^{2}$) fit within the 4.4 GHz bandpass for those sources with RM $\geq$ 500 rad m$^{-2}$. As described above, the 4.4 GHz spectral window had 64 channels of 2 MHz bandwidth. The channels at both ends of the bandpass were discarded, and the remaining channels averaged to 5 sub-IF channels of 22 MHz each. A fit of $\chi$($\lambda^{2}$) = $\chi_0$ + RM$\lambda^2$ was then redone over the 4.4 GHz spectral window. In all cases, the RM from this procedure agreed, within the errors, with the values obtained by fitting to two or three frequencies of 4.4, 4.9, and 7.6 GHz. A final check of the data set was to examine the degree of linear polarization, $$m= \frac{L}{I},$$ where $I$ is the total intensity, for each source or source component at each of the frequencies of observation. If the degree of linear polarization is constant, this indicates that the Faraday rotation occurs in an external medium, such as the Galactic interstellar medium. A case where $m$ is a function of frequency, with a smaller $m$ at lower frequencies, indicates internal Faraday rotation and depolarization within the synchrotron emitting source. The dependence of $\chi$ on $\lambda$ is then not proportional to $\lambda^{2}$, and a fit of the type we have done could yield an inaccurate estimate of the RM. For each source or source component with measurements at 3 frequencies, the weighted mean degree of linear polarization $\bar{m}$ was calculated from the measurements of $m$ at each of the frequencies. The weighting was with the error on $m$, calculated from the noise level in the $Q$ and $U$ maps.
We then calculated the reduced $\chi_{\nu}^2$ statistic for the 3 measurements about this mean (with $\nu = 2$ degrees of freedom), and chose as a flag threshold a value of $\chi_{\nu}^2 = 3.9$, which corresponds to a 2 % probability of constancy of $m$ with frequency, for three measurements [@bev1969]. We considered the $\chi_{\nu}^2$ statistic as a screening operation rather than a definitive test, since the error in $m$ was calculated from the $Q$ and $U$ noise levels on blank portions of the image; such a procedure can underestimate the true error on a portion of the source where $L$ or $I$ is large. Of the 16 sources or source components (excluding I2) with observations at three frequencies, 9 passed this screening operation for $m$ being independent of frequency, and thus unaffected by depolarization. We then carefully examined the data for the remaining sources in more detail. We found that in nearly every case, depolarization could be excluded, and we concluded that the blank field noise measurements underestimate the true errors in $m$. For example, for 2 source components (O2a and O5b) the excessive $\chi_{\nu}^2$ was due to $m$ at 7.6 GHz being slightly lower than at 4.4 and 4.9 GHz. This is the opposite of the behavior for Faraday depolarization, and shows that our measurements are not affected by depolarization. In 4 of the 5 remaining cases, the decrease in $m$ from 7.6 to 4.9 GHz was very small (i.e. $\leq 13$ %), and we believe the high values of $\chi_{\nu}^2$ are due to the low estimate of measurement errors on $m$. In all of the aforementioned cases, we feel that a fit of $\chi(\lambda^2)$ gives a good estimate of the $RM$ due to the Galactic ISM, unaffected by the depolarization within the source. A point in support of this contention is the fact that 3 of these components were in double sources, and the $RM$s of the two components were in satisfactory agreement (see Table 3, described below). 
The only source for which depolarization might be present is I14b. It was flagged by the $\chi_{\nu}^2$ screening criterion, and the degrees of linear polarization at 4.4, 4.9, and 7.6 GHz are $0.018 \pm 0.001$, $0.023 \pm 0.001$, and $0.031 \pm 0.001$, respectively. These measurements seem to show a progression in $m$ with increasing frequency, as well as a reduced $\chi_{\nu}^2$ value formally inconsistent with constancy. These data may indicate depolarization, in which case a source-associated rotation of the position angle, independent of the Galactic ISM, might occur. Furthermore, in this case there is a difference in $RM$ between the two components of the source (see Table 3 below), although a linear fit to the $\chi$ versus $\lambda^2$ data was obtained. Although this difference in $RM$ between two source components with a small angular separation could indicate a problem with depolarization, it could also be an interesting probe of small scale variations in the nebula, as discussed in Section 4.2 below. In the remainder of this paper, we will use the data for component I14b, with the recognition that the inferred RM might contain a component due to the source itself rather than the Galactic ISM. A similar test was undertaken, with a corresponding reduction in the degrees of freedom, for the 12 sources or source components with observations at two frequencies. Only one source (O7) had a $\chi_{\nu}^2$ for 1 degree of freedom that exceeded the 2 % probability threshold and therefore merited closer examination. We concluded that the large $\chi_{\nu}^2$ was due to small inferred errors on the $m$ values at the 2 frequencies; the $m$ values at 4.4 and 4.9 GHz are in good agreement, with $m_{4.4} > m_{4.9}$. Internal Faraday depolarization or depolarization by a plasma screen in front of the source cannot be occurring in this case.
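The screening statistic described above can be illustrated with the I14b values just quoted. A minimal Python sketch, following the inverse-variance weighting described in the text:

```python
import numpy as np

# Degrees of linear polarization for I14b at 4.4, 4.9, and 7.6 GHz,
# with the errors quoted in the text.
m = np.array([0.018, 0.023, 0.031])
sigma = np.array([0.001, 0.001, 0.001])

w = 1.0 / sigma**2                                   # inverse-variance weights
m_bar = np.sum(w * m) / np.sum(w)                    # weighted mean, 0.024
chi2_nu = np.sum(((m - m_bar) / sigma) ** 2) / 2.0   # nu = 2 degrees of freedom

print(round(chi2_nu, 1))   # 43.0, far above the flag threshold of 3.9
```

The large value is consistent with I14b being flagged as possibly depolarized, with the caveat noted above that blank-field noise levels can underestimate the true errors on $m$.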
To conclude, with the probable exception of I2, and the possible but not certain case of I14b, all of the $RM$ values obtained from our sources and source components appear to be measures of the Galactic ISM. Our results on the polarization properties of our sources and the resultant RM values are shown in Table 3. The first column of Table \[RMvalues\] lists the source name. Duplication of sources in this column indicates that there were two components to the source for which we were able to obtain RM values. Each source has two or three associated rows in the table, and subsequent components of the same source also have two or three rows. These rows give data for the two or three frequencies of observation. Column 2 identifies the components of the duplicated source as either (a) or (b). There were 9 sources that had two components. Column 3 lists the frequency associated with the data for the source; column 4 the linearly polarized intensity, L (mJy/beam), and the associated error; and column 5 the degree of linear polarization, m. Column 6 is the polarization position angle $\chi$ and the associated error, and column 7 has the RM with associated errors. Since the RM is obtained by fitting a line to the $\chi$($\lambda^2$) data for the March and July sources, and by Equation (\[RM\]) for the August sources, column 7 has one value per source component. Comparison of RM Measurements with @tay2009 ------------------------------------------- ![A comparison of RM values from @tay2009 and the present study. The lighter solid line represents the case of perfect agreement, and the heavy solid line represents a weighted least-squares fit to the data.[]{data-label="figtaylor"}](f06.eps) @tay2009 re-analyzed data from the NRAO VLA Sky Survey (NVSS) in order to obtain RMs for 37,543 radio sources. That study provided RMs for the sky north of -40$^\circ$ in declination.
We can compare our RM values with those previously derived by @tay2009 for the seven sources common to both studies. There are two reasons for carrying out this comparison. First, it serves as a check on our data and method of data analysis. Second, there are inconsistent reports in the literature regarding the accuracy of the @tay2009 results. In a study of the magnetic field in the direction of the Galactic poles, @mao2010 found discrepancies between their RM measurements and those of @tay2009. A comparison of RM values from @tay2009 and independent measurements was also made by @van2011, with the VLA. @van2011 found generally satisfactory agreement, although there was a population of outliers as well as an apparent systematic bias at RM $\simeq$ 50 - 100 rad m$^{-2}$ (see Figure 4 of @van2011). A full discussion of the comparison between independent RM measurements and the RM values from @tay2009 is given in Section 4.2 of @har2011. We think it worthwhile to make additional comparisons between @tay2009 and independent measurements made specifically for the purpose of measuring Faraday rotation. Figure \[figtaylor\] illustrates the comparison of our RM measurements with those of @tay2009. The sources that are in common are all exterior sources, O1, O2, O4, O7, O9, O15, $\&$ O17. None of our interior sources were contained in the catalog of @tay2009. Our RM values compare favorably with those of @tay2009. The light solid line in Figure \[figtaylor\] shows the case of perfect agreement between the two sets of measurements, and this is clearly a satisfactory representation of the data. A weighted least squares linear fit to the data shown in Figure \[figtaylor\] gives a slope of m = 1.04 $\pm$ 0.04 and an intercept of b = -14.1 $\pm$ 8.1 rad m$^{-2}$ (heavy solid line). The good agreement between the two sets of measurements is consistent with the assessment of [@van2011], and gives confidence in our $RM$ measurements.
We note that this does not address the question of the systematic error in some of [@tay2009] $RM$s that was pointed out by [@van2011]. Observational Results and Modeling in Terms of the Interaction of an HII Region with the Interstellar Medium ============================================================================================================ The first question in the analysis is whether the RM data from Table \[RMvalues\] show evidence for an RM enhancement associated with the Rosette Nebula. Such an enhancement is illustrated and clearly seen in Figure \[figrmarc\]. In Figure \[figrmarc\], we plot the measured RM versus angular distance from the center of the Rosette Nebula, which we take to be the center of the NGC 2244 star cluster as given by @ber2002 (see Introduction). A very clear signature of a Faraday rotation enhancement is seen for the 6 lines of sight (9 sources and source components) with angular separation $\leq$ 40 arcminutes from the nebular center. The excess $RM$ due to the Rosette Nebula is also visible in Figure 1, in which the size of the plotted symbol for each source is dependent on $RM$. Those sources viewed through the Rosette have larger $RM$s. The mean RM for sources seen through the Rosette Nebula is 675 rad m$^{-2}$, with a range of 200 $\leq$ RM $\leq$ 900 rad m$^{-2}$. Lines of sight that are more than 40 arcminutes from the center of the nebula have a mean of 147 rad m$^{-2}$, with a standard deviation of 77 rad m$^{-2}$. In calculating the mean and standard deviation of the background, we have excluded the two sources in our sample with a negative RM (I18, RM=-270 $\pm$ 54 rad m$^{-2}$ $\&$ O14(b), RM=-38 $\pm$ 60 rad m$^{-2}$). It is unclear whether the negative RMs have Galactic or extragalactic origins. The RM in both cases was obtained from measurements at only 2 frequencies. As presented in Table \[RMvalues\], both the polarized intensity and degree of linear polarization for those 2 sources are low. 
Although we include these sources in Table \[RMvalues\] because they passed our selection criteria, we do not include them in our calculation of the Galactic mean background. We interpret this mean background as due to the Galactic Faraday rotation in this part of the sky, which is independent of the Rosette Nebula. The data in Figure \[figrmarc\] show a “RM anomaly” of 50-750 rad m$^{-2}$ associated with the Rosette Nebula. This is comparable to, and perhaps slightly smaller than that reported for the Cygnus OB1 association by @whi2009. However, the @whi2009 result is more ambiguous because of the angular proximity of other HII regions as well as other Galactic objects, which confuse measurements in that field. ![The RM (rad m$^{-2}$) for each source and source component in Table 3 versus angular distance (arcminutes) from the center of the Rosette Nebula. All the sources, and components, are represented on the graph along with the associated error bars.[]{data-label="figrmarc"}](f07.eps) [ccccccc]{} I1 & & 4.4 & 0.25 $\pm$ 0.03 & 2 & -24.5 $\pm$ 3.5 &\ & & 4.9 & 0.26 $\pm$ 0.03 & 3 & -27.0 $\pm$ 3.4 &\ I2 & & 4.4 & 0.15 $\pm$ 0.03 & 0.1 & 6.8 $\pm$ 5 &\ & & 4.9 & 0.37 $\pm$ 0.03 & 0.2 & -14.8 $\pm$ 1.9 &\ & & 7.6 & 1.70 $\pm$ 0.04 & 1.0 & -23.3 $\pm$ 0.7 &\ I6 & a & 4.4 & 0.90 $\pm$ 0.04 & 8 & 13.4 $\pm$ 1.1 &\ & a & 4.9 & 0.78 $\pm$ 0.04 & 7 & -23.7 $\pm$ 1.7 &\ I6 & b & 4.4 & 0.60 $\pm$ 0.03 & 9 & -4.7 $\pm$ 1.6 &\ & b & 4.9 & 0.62 $\pm$ 0.03 & 10 & -48.6 $\pm$ 2.1 &\ I7 & a & 4.4 & 0.93 $\pm$ 0.03 & 10 & -45.4 $\pm$ 1.0 &\ & a & 4.9 & 0.87 $\pm$ 0.03 & 10 & -83.1 $\pm$ 1.0 &\ I7 & b & 4.4 & 0.79 $\pm$ 0.03 & 5 & 0.02 $\pm$ 1.15 &\ & b & 4.9 & 0.69 $\pm$ 0.03 & 4 & -35.4 $\pm$ 1.3 &\ I8 & a & 4.4 & 1.11 $\pm$ 0.04 & 14 & 46.6 $\pm$ 1.0 &\ & a & 4.9 & 0.99 $\pm$ 0.03 & 13 & 22.9 $\pm$ 0.9 &\ & a & 7.6 & 0.64 $\pm$ 0.04 & 14 & -33.4 $\pm$ 1.5 &\ I8 & b & 4.4 & 0.41 $\pm$ 0.04 & 7 & 28.8 $\pm$ 2.6 &\ & b & 4.9 & 0.28 $\pm$ 0.03 & 6 & 6.1 $\pm$ 3.2 &\ & b & 7.6 & 
0.21 $\pm$ 0.03 & 7 & -13.5 $\pm$ 4.7 &\ I10 & & 4.4 & 0.23 $\pm$ 0.04 & 2 & 108.2 $\pm$ 5.1 &\ & & 4.9 & 0.29 $\pm$ 0.03 & 3 & 65.9 $\pm$ 3.3 &\ I12 & & 4.4 & 1.3 $\pm$ 0.04 & 5 & 61.5 $\pm$ 1 &\ & & 4.9 & 1.25 $\pm$ 0.04 & 5 & 21.9 $\pm$ 0.7 &\ & & 7.6 & 1.02 $\pm$ 0.05& 5 & -82.7 $\pm$ 1.3 &\ I14 & a & 4.4 & 3.30 $\pm$ 0.04 & 25 & 59.1 $\pm$ 0.3 &\ & a & 4.9 & 2.96 $\pm$ 0.03 & 26 & 51.4 $\pm$ 0.3 &\ & a & 7.6 & 1.78 $\pm$ 0.04 & 25 & 31.4 $\pm$ 0.6 &\ I14 & b & 4.4 & 0.75 $\pm$ 0.04 & 2 & 106.4 $\pm$ 1.3 &\ & b & 4.9 & 0.90 $\pm$ 0.03 & 2 & 87.4 $\pm$ 1.0 &\ & b & 7.6 & 0.86 $\pm$ 0.04 & 3 & 53.0 $\pm$ 1.3 &\ I15 & & 4.4 & 0.93 $\pm$ 0.04 & 5 & 6.6 $\pm$ 1.1 &\ & & 4.9 & 0.89 $\pm$ 0.04 & 5 & -23.3 $\pm$ 1.2 &\ & & 7.6 & 0.56 $\pm$ 0.04 & 4 & -102.9 $\pm$ 1.9 &\ I16 & & 4.4 & 0.94 $\pm$ 0.03 & 11 & 84.2 $\pm$ 1.0 &\ & & 4.9 & 0.81 $\pm$ 0.03 & 10 & 77.0 $\pm$ 1.1 &\ I18 & & 4.4 & 0.45 $\pm$ 0.03 & 4 & 28.2 $\pm$ 1.6 &\ & & 4.9& 0.38 $\pm$ 0.03 & 4 & 41.8 $\pm$ 2.3 &\ O1 & & 4.4 & 5.75 $\pm$ 0.05 & 6 & -55.5 $\pm$ 0.2 &\ & & 4.9 & 5.09 $\pm$ 0.06 & 6 & -58.2 $\pm$ 0.3 &\ O2 & a & 4.4 & 4.89 $\pm$ 0.05 & 5 & 88.6 $\pm$ 0.3 &\ & a & 4.9 & 4.42 $\pm$ 0.04 & 5 & 85.7 $\pm$ 0.3 &\ & a & 7.6 & 3.16 $\pm$ 0.06 & 4 & 74.9 $\pm$ 0.5 &\ O2 & b & 4.4 & 3.43 $\pm$ 0.05 & 4 & 62.7 $\pm$ 0.4 &\ & b & 4.9 & 3.20 $\pm$ 0.05 & 4 & 60.4 $\pm$ 0.5 &\ & b & 7.6 & 2.59 $\pm$ 0.06 & 4 & 51.7 $\pm$ 0.7 &\ O4 & a & 4.4 & 4.28 $\pm$ 0.04 & 23 & 49.7 $\pm$ 0.3 &\ & a & 4.9 & 3.92 $\pm$ 0.04 & 23 & 39.5 $\pm$ 0.3 &\ & a & 7.6 & 2.69 $\pm$ 0.04 & 23 & 15.0 $\pm$ 0.5 &\ O4 & b & 4.4 & 1.60 $\pm$ 0.05 & 6 & -9.4 $\pm$ 0.8 &\ & b & 4.9 & 1.45 $\pm$ 0.05 & 6 & -19.1 $\pm$ 0.8 &\ & b & 7.6 & 1.10 $\pm$ 0.04 & 6 & -47.0 $\pm$ 1.1 &\ O5 & a & 4.4 & 1.24 $\pm$ 0.05 & 11 & 85.8 $\pm$ 1.1 &\ & a & 4.9 & 1.05 $\pm$ 0.05 & 11 & 74.0 $\pm$ 1.3 &\ & b & 7.6 & 0.73 $\pm$ 0.04 & 11 & 49.9 $\pm$ 1.8 &\ O5 & b & 4.4 & 1.07 $\pm$ 0.05 & 4 & 79.0 $\pm$ 1.3 &\ & b & 4.9 & 0.96 $\pm$ 0.05 & 4 & 69.6 $\pm$ 1.4 
&\ & b & 7.6 & 0.54 $\pm$ 0.04 & 3 & 39.6 $\pm$ 2.5 &\ O7 & & 4.4 & 4.03 $\pm$ 0.05 & 2 & 19.3 $\pm$ 0.4 &\ & & 4.9 & 3.84 $\pm$ 0.06 & 2 & 16.5 $\pm$ 0.4 &\ O9 & a & 4.4 & 4.00 $\pm$ 0.04 & 14 & 93.5 $\pm$ 0.3 &\ & a & 4.9 & 3.69 $\pm$ 0.05 & 14 & 81.9 $\pm$ 0.4 &\ & a & 7.6 & 2.42 $\pm$ 0.05 & 14 & 54.2 $\pm$ 0.6 &\ O9 & b & 4.4 & 2.93 $\pm$ 0.04 & 9 & 83.0 $\pm$ 0.4 &\ & b & 4.9 & 2.76 $\pm$ 0.05 & 10 & 71.0 $\pm$ 0.5 &\ & b & 7.6 & 1.82 $\pm$ 0.05 & 10 & 42.5 $\pm$ 0.8 &\ O14 & a & 4.4 & 0.55 $\pm$ 0.04 & 9 & -81.3 $\pm$ 2.0 &\ & a & 4.9 & 0.51 $\pm$ 0.04 & 9 & -86.3 $\pm$ 2.0 &\ O14 & b & 4.4 & 0.49 $\pm$ 0.04 & 9 & 9.7 $\pm$ 2.15 &\ & b & 4.9 & 0.46 $\pm$ 0.04 & 9 & 11.6 $\pm$ 2.2 &\ O15 & & 4.4 & 22.67 $\pm$ 0.06 & 8 & 18.3 $\pm$ 0.1 &\ & & 4.9 & 20.80 $\pm$ 0.06 & 8 & 16.4 $\pm$ 0.1 &\ & & 7.6 & 14.87 $\pm$ 0.11 & 8 & 6.6 $\pm$ 0.2 &\ O16 & & 4.4 & 3.23 $\pm$ 0.05 & 13 & -70.8 $\pm$ 0.4 &\ & & 4.9 & 2.96 $\pm$ 0.04 & 13 & -81.6 $\pm$ 0.4 &\ & & 7.6 & 1.94 $\pm$ 0.05 & 13 & -105.3 $\pm$ 0.7 &\ O17 & & 4.4 & 2.52 $\pm$ 0.03 & 10 & -27.2 $\pm$ 0.4 &\ & & 4.9 & 2.28 $\pm$ 0.03 & 10 & -34.7 $\pm$ 0.4 &\ Comparison of Observations to HII Region Shell Models ----------------------------------------------------- In this section, we compare our observations with mathematically simple expressions which describe the dynamics of an HII region interaction with the surrounding ISM. The first is the model presented in @whi2009. The @whi2009 model contains a simple parameterization of a stellar bubble, as described by the theory of @wea1977. In that model, the HII region consists of an inner, low density cavity comprised of shocked stellar wind, and a contact discontinuity (assumed spherical) separating the shocked stellar wind from interstellar medium material. This interstellar medium material is shocked and photoionized interstellar gas which has passed through an outer shock. 
The last part of the bubble structure is the outer shock itself.[^4] The parameters of the model are R$_0$, the outer radius of the shell; R$_1$, the inner radius of the shell; $\emph{n$_{e}$}$, the plasma density within the shell (n$_{e}$=0 is assumed for r $<$ R$_1$); and $\vec{B_0}$, the interstellar magnetic field outside the shell. A distinction is made between the pristine magnetic field $\vec{B_0}$ upstream of the outer shock, and the downstream magnetic field inside the plasma shell, which has been modified by passage through the shock. @whi2009 obtain the following formula for the RM through such a shell. $$RM(\xi)=\frac{Cn_{e}L(\xi)}{2}\left[B_{ZI}+B_{ZE}\right] \label{rmxi}$$ where $n_e$ is the plasma density (electron density) in the shell, $L(\xi)$ is the length of the chord through the shell, and B$_{ZI}$ and B$_{ZE}$ are the downstream line of sight components of the magnetic field at the points where the line of sight enters (ingress) and leaves (egress) the shell respectively, given in Equations (7) - (9) of @whi2009. The variable $\xi$ is the transverse, linear distance between the line of sight and a line of sight passing through the center of the shell (i.e., $\xi$=0 is a line of sight through the center of the shell and $\xi$=R$_0$ is a line of sight which is tangent to the outer edge of the shell.). The constant C is the collection of fundamental physical constants in curved brackets in Equation (\[RM1\]). The constant C has the value 2.631$\times$10$^{-17}$ in cgs units, or 0.81 if “interstellar units” of cm$^{-3}$, $\mu$Gauss, and parsecs are chosen for n$_{e}$, $\vec{B_0}$, and L, respectively. L($\xi$) is given by $$L(\xi)=2R_{0}\sqrt{(1-(\xi/R_0)^2)}, \mbox{ if } \xi \geq R_1$$ $$L(\xi)=2R_{0}[\sqrt{(1-(\xi/R_0)^2)}-(R_1/R_0)\sqrt{(1-(\xi/R_1)^2)}], \mbox{ if } \xi \leq R_1$$ Exterior to the shell, we assume the magnetic field of the interstellar medium is uniform, but it will be modified in the shell. 
The theory of magnetohydrodynamic shock waves (e.g. @gur2005) shows that the magnetic field component in the shock plane is amplified by a factor X, and the component normal to the shock front is unchanged. The factor X, for the case of a strong shock, is equivalent to the density compression ratio. We redefine the B$_{ZI}$ and B$_{ZE}$ components in terms of B$_{0Z}$, the upstream line of sight component of the magnetic field. Employing these assumptions and definitions in Equation (\[rmxi\]), we have $$RM(\xi)=Cn_{e}L(\xi)B_{0Z} \left[ 1 + (X-1) \left( \frac{\xi}{R_{0}} \right)^2 \right] \label{rmxi1}$$ It should be pointed out that our shell model, expressed in Equations (6)-(8), assumes that the post-shock field strength at the ingress or egress point applies everywhere along the half-chord connecting the ingress or egress point to the midpoint of the chord (see Figure 6 of [@whi2009] for an illustration). No attempt is made here to confront the physically complex question of the shell magnetic field as a function of position throughout the shell. Our approximation is presumably accurate for a thin shell, in which the chord extends only a short distance from the shock front before entering the bubble interior (again, see Figure 6 of [@whi2009]). However, in the case of a thick shell, this approximation must break down, and Equation (8) must be inaccurate. Other than recognizing this fact, further investigation is beyond the scope of this paper. This recognition should motivate further theoretical work to obtain analytic expressions which incorporate the results of MHD calculations such as [@fer1991] and [@sti2009]. For our model of Equation (8), we adopt the shell parameters from @cel1985, where R$_0$= 16.9 parsecs, R$_1$= 6.2 parsecs, and $\emph{n$_{e}$}$= 10.8 - 15.5 cm$^{-3}$. These numbers refer to Celnik’s Model 1, which is the single shell model. 
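Equations (6)-(8) are straightforward to evaluate numerically. The Python sketch below uses Celnik's Model 1 radii, the mean of his density range (13.1 cm$^{-3}$), $B_{0}$ = 4 $\mu$G, and $\theta$ = 72$^{\circ}$ (the fit value obtained in the text); the strong-shock compression ratio X = 4 is our assumption for illustration, since the text specifies only that X equals the compression ratio.

```python
import numpy as np

def chord_length(xi, r0=16.9, r1=6.2):
    """L(xi) of Equations (6)-(7): path length (pc) through a spherical
    shell of outer radius r0 with an empty cavity of radius r1."""
    if xi >= r0:
        return 0.0
    outer = 2.0 * r0 * np.sqrt(1.0 - (xi / r0) ** 2)
    if xi >= r1:
        return outer
    return outer - 2.0 * r1 * np.sqrt(1.0 - (xi / r1) ** 2)

def rm_shell(xi, n_e=13.1, r0=16.9, r1=6.2, b0=4.0, theta_deg=72.0, x_ratio=4.0):
    """RM(xi) of Equation (8) in rad m^-2, with C = 0.81 for n_e in
    cm^-3, B in microgauss, and L in pc.  x_ratio is the shock
    compression ratio X (4.0 assumed here, a strong adiabatic shock)."""
    b0z = b0 * np.cos(np.radians(theta_deg))
    return (0.81 * n_e * chord_length(xi, r0, r1) * b0z
            * (1.0 + (x_ratio - 1.0) * (xi / r0) ** 2))
```

For these parameters the model gives RM $\approx$ 281 rad m$^{-2}$ at $\xi$ = 0, rises to a maximum of roughly 760 rad m$^{-2}$ near $\xi \approx$ 14 pc, and drops to zero at $\xi$ = R$_0$, the "RM limb brightening" behavior discussed in the text.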
For the calculations described below we utilize a density equal to the mean of Celnik’s values, $\emph{n$_{e}$}$=13.1 cm$^{-3}$. The variable B$_{0Z}$, the $z$ component of the upstream ISM magnetic field, is $$B_{0Z} = B_{0} \cos{\theta},$$ where $B_{0}$ is the magnitude of the general interstellar field. In the analysis of this paper, we assume $B_{0}$ to be a known constant, and $\theta$ to be a variable with a wide range of possible values at a given point in the Galaxy. The justification for this choice is the rather well established value for the magnitude of the magnetic field in the low density phases of the ISM [e.g. @fer2011; @cru2010]. We choose $B_{0}$=4 $\mu$G in the calculations below. The angle $\theta$ may have a well-defined expectation value for the location of the Rosette Nebula in the Galaxy, but the actual value at a specific location and time presumably departs significantly from this expectation value due to turbulent fluctuations in the ISM. A meaningful analogy would be the interplanetary magnetic field at 1 AU. Although the average direction conforms to the Parker spiral, a measurement at a given time shows the field pointing in a wide range of directions. It should be recognized that in reality, both $B_{0}$ and $\theta$ are random variables with mean values and probability density functions. As such, the true unknown variable is $B_{0Z}$, which is formed from them. Again, observations of the solar wind prove instructive. Examination of several days of interplanetary magnetic field measurements shows that not only do the angles defining the direction of the interplanetary field vary randomly, but the magnitude of the field does as well. The solar wind provides some support for our practice in the present case. Although the magnitude of the field does change with time, the fractional changes are usually relatively small in comparison with the large variations in the orientation of the interplanetary field.
This statement is supported by the well known observational result that the variance of the magnitude of the interplanetary field is much less than the variance in the components [@bru2005]. In comparing the model of Equation (8) with our data, we overlaid curves generated by Equation (\[rmxi1\]) on a plot of the RM vs the distance $\xi$ in parsecs from the center of the Rosette (Figure \[model1\]), and effectively used the free parameter $\theta$ as a “tuning knob” for the model. By doing so, we obtained a value of $\theta$= 72$^{\circ}$ such that the model reproduces the magnitude of the measured RMs, and their dependence on the distance from the center of the Rosette Nebula. The degree of agreement between the model and the data in Figure \[model1\] is actually quite good, particularly since we have adopted the shell model parameters R$_0$, R$_1$, and $\emph{n$_{e}$}$ directly from the data of @cel1985. We have not varied these parameters in an attempt to optimize the fit. Figure \[fignewR\] presents the shell model with altered radii in order to obtain a better fit for the model Equation (\[rmxi1\]) to the data. ![Plot of RM versus distance from center of the Rosette Nebula. This plot differs from Figure \[figrmarc\] in that the distance of the lines of sight from the nebular center have been converted from arcminutes to parsecs, and the model for Faraday rotation through a stellar bubble given by Equation (\[rmxi1\]), has been overplotted. This model utilizes the following shell parameters: R$_1$=6.2 pc, R$_0$=16.9 pc, and n$_{e}$=13.1 cm$^{-3}$. Achieving this fit requires that the interstellar magnetic field at the location of the Rosette Nebula has a magnitude of 4 $\mu$G and is inclined at an angle $\theta$=72$^o$ with respect to our line of sight.[]{data-label="model1"}](f08.eps) ![This plot is the same as Figure \[model1\] except the inner radius of the shell has been changed to optimize the fit to the data, R$_1$=4.2 pc. 
The value of $\theta$ is $\theta$ = 72$^o$.[]{data-label="fignewR"}](f09.eps) To fit the magnitudes of the RMs viewed through the nebula, our model requires that the interstellar magnetic field at the location of the Rosette Nebula (before modification by the bubble associated with the Rosette) be rather highly inclined to the line of sight. Interestingly, our value of $\theta$ is roughly consistent with that expected for the mean Galactic field at the location of the Rosette Nebula. We use the galactic longitude of $206^{\circ}.5$ for the Rosette, and assume a Galactocentric distance of the Sun of 8.5 kpc, and a distance to the Rosette of 1.6 kpc. In this case, the angle between the line of sight and an azimuthal magnetic field is $68^{\circ}$. This is obviously completely consistent (to a doubtlessly fortuitous degree) with our model value of $\theta = 72^{\circ}$. Studies of the functional form of the Galactic magnetic field, while not conclusive at discriminating between an azimuthal field and one which follows the spiral arms, indicate that the field in the approximate neighborhood of the Sun has a pitch angle of $-8^{\circ}$ [@beck2001; @fer2011]. Application of this pitch angle to an azimuthal field would then produce an expected angle of $60^{\circ}$ between the mean Galactic field at the location of the Rosette and the line of sight. This value is also in acceptable agreement with our inferred value, in that it indicates a magnetic field that is oriented at a large angle with respect to the line of sight. We now consider a quite different HII region model which has been discussed in the context of Faraday rotation “anomalies”, that of @har2011 discussed in Section 1.3 above. @har2011 concluded that the magnetic field was not amplified in the volume of the HII region. 
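As an aside, the geometric estimate of $\theta$ for an azimuthal field made above is easy to check numerically. A short Python sketch, using the quoted values $l$ = 206$^{\circ}$.5, a solar Galactocentric distance of 8.5 kpc, and a Rosette distance of 1.6 kpc (the planar coordinate convention is ours):

```python
import math

# Angle between the line of sight (Sun -> Rosette) and a purely
# azimuthal Galactic field at the Rosette's location.
l = math.radians(206.5)
r_sun, d = 8.5, 1.6                        # kpc

# Galactic plane coordinates: Galactic center at the origin, Sun at
# (r_sun, 0); u is the line-of-sight unit vector at longitude l.
u = (-math.cos(l), math.sin(l))
p = (r_sun + d * u[0], d * u[1])           # position of the Rosette
r = math.hypot(*p)                         # its Galactocentric radius, kpc

t = (-p[1] / r, p[0] / r)                  # azimuthal (tangential) unit vector
cos_theta = abs(u[0] * t[0] + u[1] * t[1])
theta = math.degrees(math.acos(cos_theta))
print(round(theta))                        # 68
```

The result, $\approx 68^{\circ}$, reproduces the value quoted in the text.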
We have adjusted our Equation (\[rmxi\]) to express the @har2011 assumption of no $\vec{B}$ field amplification, giving the formula $$RM(\xi)=Cn_{e}L(\xi)B_{0Z} \label{rmxi2}$$ where all parameters are defined following Equation (\[rmxi\]). The difference between these two expressions is that Equation (\[rmxi2\]) does not include amplification of the “upstream” interstellar magnetic field by the outer shock of the stellar bubble. As before, $\theta$ is the only free parameter and was varied to obtain a fit to the observed RMs. By visual inspection, we obtained $\theta$=54$^{\circ}$ for reasonable agreement between Equation (\[rmxi2\]) and the data. A comparison of the model given by Equation (\[rmxi2\]) with the data is shown in Figure \[figrmmodels\]. Although it reproduces the magnitude and angular scale of the RM anomaly, it arguably does not do as well in reproducing the observed dependence of RM on distance from the center of the nebula. The smaller inclination of the interstellar magnetic field ($\theta$=54$^{\circ}$) is easily understood since in this latter model, there is no amplification of the perpendicular component of the interstellar magnetic field at the outer shock front (see Equation 9 of @whi2009). We suggest that the model of @whi2009 provides a better fit to the observed dependence of RM on distance from the center of the nebula for the case of the Rosette Nebula. To distinguish between these two models will require more lines of sight which pass between the inner and outer radii of the bubble (6 and 17 parsecs in the case of the Rosette Nebula). For the shell models described by Equation (\[rmxi2\]), in which the pre-existing interstellar magnetic field is unaltered by the presence of the HII region, the RM should have a maximum near the inner radius, as shown in Figure \[figrmmodels\]. In the model of @whi2009, on the other hand, the magnetic field is amplified and “refracted” into the shock plane.
This has the potential of producing “RM limb brightening”, as might be present in the Rosette Nebula data, Figures \[model1\] and \[fignewR\]. This situation might be clarified by RM measurements of an additional 11 sources that were made with the VLA in February 2012, and are currently awaiting reduction and analysis. ![This plot is the same as Figures \[model1\] and \[fignewR\] except the model that has been overplotted is given by Equation (\[rmxi2\]) (interstellar magnetic field unmodified by HII region). This model curve requires that the interstellar magnetic field at the location of the Rosette Nebula is inclined at an angle $\theta$=54$^{\circ}$ with respect to our line of sight.[]{data-label="figrmmodels"}](f10.eps) The fit value for $\theta$ probably does not have much diagnostic ability, at least in the case of a single nebula. The values of $\theta$ expected for an azimuthal Galactic magnetic field (68$^{\circ}$) or for a spiral field with a pitch angle of $-8^{\circ}$ (60$^{\circ}$) are more or less equally compatible with our shell model and with the unmodified field model, Equation (\[rmxi2\]). However, all studies of the Galactic magnetic field show that the random component of the field is comparable to, if not larger than, the mean systematic component [e.g. @rand1989; @min1996; @hav2006]. Thus the interstellar magnetic field at the location of the Rosette is doubtless composed of a mean, large scale component which is inclined at a large angle to the line of sight, and a random, turbulent component which is isotropic. As a result, the “local” interstellar magnetic field at the Rosette Nebula could point in virtually any direction. There are some final and obvious remarks which should be made regarding a comparison between the results of the present study of the Rosette Nebula and those of @har2011 on 5 other HII regions. The model of @whi2009 assumes that the plasma shell around the HII region is a bubble as described by the theory of @wea1977.
The formation of a bubble on the scale of the Rosette Nebula requires stars with very large wind luminosities, which can only be furnished by very early main sequence stars or Wolf-Rayet stars. Such stars will only be found in very young stellar associations. At a later time, stellar wind luminosities will subside and the wind-blown bubble or superbubble will cease to exist. As was discussed in the Introduction, the Rosette Nebula is an excellent candidate for a stellar superbubble. The observations of @men1962 showed that it has the annular shell structure expected of such a bubble, and it has been used as a paradigmatic wind-blown structure in theoretical studies \[e.g. @dor1986 [@dor1987]\]. It therefore would be expected, $\emph{a priori}$, to show the plasma structure expected for a bubble. The HII regions studied by @har2011 could well be older clusters that are past the age when luminous stellar winds dominate their surroundings in the ISM. Resolution of this interesting question will require further observations of a sample of HII regions, with independent information on the ages of the star clusters and the wind luminosities of the constituent stars. Differences in Rotation Measure Between Closely-Spaced Lines of Sight --------------------------------------------------------------------- The data in Table 3 show several cases in which RM is measurable for two components within the same source. This raises the possibility of measuring RM differences between closely-spaced lines of sight. [@Spangler07] uses the term [*differential Faraday rotation*]{} to describe such differences. In the case of the present observations, with a synthesized beam width (FWHM) of 12.8 arcseconds and an assumed distance to the Rosette Nebula of 1600 pc, we can examine lines of sight separated by as little as 0.1 parsecs. Differential Faraday rotation observations have been discussed by many authors [e.g. @min1996; @Haverkorn08].
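As a concrete illustration of such an analysis, a structure function can be assembled from pairwise RM differences. The Python sketch below is schematic only; all RM values and sky positions are wholly hypothetical, and, as discussed below, the present sample is too small for this exercise.

```python
import numpy as np

# Schematic RM structure function from pairs of lines of sight.
rm = np.array([675.0, 540.0, 820.0, 700.0])          # RMs, rad m^-2 (assumed)
pos = np.array([[0.0, 0.0], [0.1, 0.0],
                [0.0, 0.2], [0.3, 0.3]])             # sky positions, deg (assumed)

pairs = [(i, j) for i in range(len(rm)) for j in range(i + 1, len(rm))]
sep = np.array([np.hypot(*(pos[i] - pos[j])) for i, j in pairs])
drm2 = np.array([(rm[i] - rm[j]) ** 2 for i, j in pairs])

# D_RM(delta theta): mean squared RM difference in angular-separation bins
bins = np.array([0.0, 0.2, 0.5])
which = np.digitize(sep, bins) - 1
d_rm = [drm2[which == k].mean() for k in range(len(bins) - 1)]
```

A turbulence analysis would then fit a power law to $D_{RM}(\delta\theta)$; with only a handful of well-measured pairs, any such fit remains qualitative.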
Measurements of the RM difference $\Delta RM$ on many pairs of lines of sight with a range of angular separations $\delta \theta$ can be used to construct the RM structure function $D_{RM}(\delta \theta)$. The RM structure function yields characteristics of interstellar plasma turbulence [@min1996; @Haverkorn08]. [@Spangler07] also pointed out that a measurement of differential Faraday rotation could indicate the presence of an electrical current flowing between the lines of sight, and used a measurement of differential coronal Faraday rotation to deduce a model-dependent value for the magnitude of coronal currents. The same ideas could be applied to measurements of interstellar Faraday rotation. In this subsection, we briefly discuss the status of differential Faraday rotation in our sample of sources. Nine of the sources in Table 3 have RM values for two source components. We restrict attention to those sources with $\chi$ measurements at three frequencies. Such observations provide more secure and precise RM values. These sources are I8, I14, O2, O4, O5, and O9. Obviously, with such a restricted set of data we cannot construct a RM structure function, and our comments here will remain qualitative. We first consider the four “exterior” sources O2, O4, O5, and O9. The RM values for these sources are presumably determined by the general interstellar medium, with no contribution from the Rosette Nebula. The $\Delta RM$ values for these sources range from $\sim 7 - 25$ rad m$^{-2}$, and in at least 2 cases (O5 and O9) seem consistent with zero, given the measurement errors. The other two exterior sources (O2 and O4) have $\Delta RM$ values which appear slightly larger than expected for noise fluctuations about a zero expectation value. The two “interior” sources I8 and I14 have $\Delta RM$ values in excess of 100 rad m$^{-2}$, and larger than expected from our error estimates. 
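To make the quantity under discussion concrete, a minimal estimator for the RM structure function from a set of (position, RM) measurements might look like the following sketch. It uses a flat-sky approximation for the angular separations (adequate for a field a few degrees across) and invented toy values, not the measurements of this paper:

```python
import itertools
import math

def rm_structure_function(positions_deg, rms, bin_edges_deg):
    """Estimate D_RM(dtheta) = <(RM_i - RM_j)^2> over all pairs of lines
    of sight, binned by angular separation dtheta (degrees)."""
    bins = [[] for _ in bin_edges_deg[:-1]]
    pairs = itertools.combinations(zip(positions_deg, rms), 2)
    for (p1, rm1), (p2, rm2) in pairs:
        sep = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
        for k in range(len(bins)):
            if bin_edges_deg[k] <= sep < bin_edges_deg[k + 1]:
                bins[k].append((rm1 - rm2) ** 2)
                break
    # NaN marks separation bins that contain no source pairs.
    return [sum(b) / len(b) if b else float("nan") for b in bins]

# Toy inputs: angular offsets in degrees, RMs in rad m^-2.
print(rm_structure_function([(0.0, 0.0), (0.0, 1.0), (0.0, 3.0)],
                            [0.0, 2.0, 6.0], [0.0, 2.0, 4.0]))
```

With only six sources having dual-component RMs, as here, the bins would be far too sparsely populated for a meaningful estimate, which is why the discussion in the text remains qualitative.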
This would seem to indicate enhanced differential Faraday rotation for lines of sight that pass through the interior of the Rosette Nebula, implying higher levels of plasma turbulence or electrical current systems flowing in the bubble associated with the Rosette. However, two caveats should be noted. First, as noted in Section 2, it is possible that component b of I14 is internally depolarized at the frequencies of observation, in which case neither the fit $RM$ for component b nor the measured $\Delta RM$ between the two components is a diagnostic of the ISM. Second, the small number of sources we are considering (2 interior and 4 exterior sources) precludes any firm conclusions about the statistics of differential Faraday rotation inside and outside of the Rosette Nebula. Given the data available in the present paper, a possible enhancement in $\Delta RM$ for the interior source I8 (and perhaps I14) relative to the exterior sources is speculative. The statistics of differential Faraday rotation for lines of sight passing through the Rosette Nebula, and the comparison with the statistics for lines of sight which do not pass through the nebula, need to be determined by measurements for a larger number of sources. As mentioned in Section 4.1, multifrequency polarization measurements of an additional 11 sources with lines of sight through the nebula have been made and are awaiting reduction and analysis. Those data should determine if an enhancement in differential Faraday rotation due to the Rosette Nebula exists, and if it does exist, establish its properties. Summary and Conclusions ======================= The conclusions of this paper are as follows. 1. [We observed 23 extragalactic radio sources whose lines of sight pass through or close to the Rosette Nebula and obtained Faraday rotation measurements for 21 of them. 
The interior sources, whose lines of sight pass through the Rosette, have an excess RM of 50-750 rad m$^{-2}$ with respect to a background due to this part of the galactic plane, which we determined to be +147 rad m$^{-2}$. We interpret this 50-750 rad m$^{-2}$ excess as the Faraday rotation measure of the plasma shell which comprises the Rosette Nebula.]{} 2. [We have compared our observations with a simplified analytic model for the plasma shell associated with a wind-driven, photoionized stellar bubble surrounding the NGC 2244 star cluster. This model was derived and presented in @whi2009. We find the measurements adhere well to the model if the angle between the line of sight and the Galactic magnetic field at the location of the Rosette Nebula is $\theta$=72$^{\circ}$ (see Figure \[model1\]). This angle is compatible with that expected for the mean Galactic field at the location of the Rosette Nebula ($60^{\circ} - 68^{\circ}$). Our observations support an interpretation in which the Rosette Nebula is a wind-blown bubble as described by the theory of @wea1977.]{} 3. [We have also compared our observations with a simpler model in which the NGC 2244 star cluster photoionizes the surrounding gas without modifying the magnetic field, as proposed by @har2011. This model, unlike the stellar bubble model, does not naturally account for the observed, annular shell structure of the Rosette Nebula. This model can also reproduce the magnitude of the RMs measured through the Rosette Nebula, with a smaller angle between the line of sight and the interstellar field at the location of the Rosette ($\theta$=54$^{\circ}$). This model does not seem to account as well for the observed dependence of RM on the projected distance from the center of the nebula.]{} 4. 
[A determination of which of these models, if either, is better for the plasma structure of HII regions will require similar studies of more HII regions (with large numbers of lines of sight per HII region), spanning a range in age of the embedded star clusters.]{} 5. We have compared our $RM$ values with those of [@tay2009] for the 7 sources (with 10 source components) in common. Good agreement between the two sets of measurements was found. This comparison was principally undertaken as a check of the $RM$s resulting from our observations, but it also contributes to the literature on the accuracy of the large [@tay2009] $RM$ data set. Our limited investigation supports the general accuracy of the [@tay2009] data, but does not contradict the finding of episodic inaccuracies or biases, as discussed in [@van2011].

References
==========

Beck, R. 2001, 99, 243
Berghöfer, T. W. & Christian, D. J. 2002, 384, 890
Bevington, P.R., Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill: New York
Bignell, C., Polarimetry, in Synthesis Mapping. Proceedings of the NRAO-VLA Workshop, edited by A. R. Thompson and L. R. D’Addario, 6, 1982, 6-1-6-29
Brown, J. C., Taylor, A. R., & Jackel, B. J. 2003, 145, 213
Brown, J. C., Haverkorn, M., Gaensler, B. M., et al. 2007, 663, 258
Bruno, R. & Carbone, V. 2005, Living Rev. Sol. Phys. 2, 4
Celnik, W. E. 1983, 53, 403
Celnik, W. E. 1985, 144, 171
Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 1998, 115, 1693
Crutcher, R.M., Wandelt, B., Heiles, C., Falgarone, E., & Troland, T.H. 2010, 725, 466
Dorland, H., Montmerle, T., & Doom, C. 1986, 160, 1
Dorland, H. & Montmerle, T. 1987, 177, 243
Ferrière, K., MacLow, M.M., & Zweibel, E.G. 1991, 375, 239
Ferrière, K. 2011, Mem. Soc. Astr. It. 82, 824
Freyer, T., Hensler, G., & Yorke, H. W. 2003, 594, 888
Gurnett, D. A. & Bhattacharjee, A. 2005, Introduction to Plasma Physics (Cambridge: Cambridge Univ. Press)
Haffner, L. M., Reynolds, R. J., Tufte, S. L., et al. 2003, 149, 405
Harvey-Smith, L., Madsen, G. J., & Gaensler, B. M. 2011, 736, 83
Haverkorn, M., Gaensler, B. M., McClure-Griffiths, N. M., Dickey, J. M., & Green, A. J. 2004, 609, 776
Haverkorn, M., Gaensler, B. M., Brown, J. C., et al. 2006, 637, L33
Haverkorn, M., Brown, J.C., Gaensler, B.M., & McClure-Griffiths, N.M. 2008, 680, 362
Ingleby, L. D., Spangler, S. R., & Whiting, C. A. 2007, 668, 520
Lazio, T. J., Spangler, S. R., & Cordes, J. M. 1990, 363, 515
Mancuso, S. & Spangler, S. R. 2000, 539, 480
Mao, S. A., Gaensler, B. M., Haverkorn, M., et al. 2010, 714, 1170
Menon, T. K. 1962, 135, 394
Minter, A. H. & Spangler, S. R. 1996, 458, 194
Ogura, K. & Ishida, K. 1981, 33, 149
Digitized Sky Survey, Association of Universities for Research in Astronomy, Inc. 1994, http://stdatu.stsci.edu/cgi-bin/dss_form
Park, B. & Sung, H. 2002, 123, 892
Pérez, M. R., Joner, M. D., Thé, P. S., & Westerlund, B. E. 1989, 101, 195
Rand, R.J. & Kulkarni, S.R. 1989, 343, 760
Román-Zúñiga, C. G. & Lada, E. A. 2008, in “Handbook of Star Forming Regions Vol. I: The Northern Sky”, Astron. Soc. Pac. Monograph Publications vol. 4, ed. Bo Reipurth
Sakurai, T. & Spangler, S. R. 1994, Radio Science 29, 635
Spangler, S.R. 2007, 670, 841
Spitzer, L. 1968, Diffuse Matter in Space, Wiley Interscience
Stil, J., Wityk, N., Ouyed, R., & Taylor, A. R. 2009, 701, 330
Taylor, A. R., Stil, J. M., & Sunstrum, C. 2009, 702, 1230
Vallee, J. P. 1993, 419, 670
Vallee, J. P. 2004, New A Rev., 48, 763
Van Eck, C. J., Brown, J. C., Stil, J. M., et al. 2011, 728, 97
Wang, J., Townsley, L. K., Feigelson, E. D., et al. 2008, 675, 464
Weaver, R., McCray, R., Castor, J., Shapiro, P., & Moore, R. 1977, 218, 377
Whiting, C. A., Spangler, S. R., Ingleby, L. D., & Haffner, M. L. 2009, 694, 1452

[^1]:

[^2]:

[^3]:

[^4]: This model is described in more detail in Section 5.1 of @whi2009, and illustrated in Figure 6 of that paper.
2023-10-17T01:26:29.851802
https://example.com/article/5123
House passes bill to drill in parks

Thursday May 26, 2011 at 12:01 AM | Jun 6, 2011 at 8:21 AM

With Ohio facing $500 million in backlogged capital projects at its state parks and gas prices still flirting with $4 a gallon, House Republicans say now is the time to allow oil and gas drilling in parks and other state-owned land. After a three-hour debate, the House voted 54-41 yesterday for a bill that would create an Oil and Gas Leasing Commission to oversee the leasing of state-owned land for oil and gas drilling. "It will not solve Ohio's problems or energy-price problems, but it is a component we cannot ignore," said Rep. John Adams, R-Sidney, the bill sponsor. Republicans said House Bill 133 would create jobs and help lower energy prices. Oil and gas drillers are particularly interested in southeastern Ohio and Salt Fork State Park. "If this gas boom takes off like we understand it is, there aren't going to be enough hotels. There aren't going to be enough houses. There aren't going to be enough restaurants to handle all of the people who are coming into this state," said Rep. Matt Huffman, R-Lima. Two Republicans joined all Democrats in voting against the bill. Franklin County lawmakers broke along party lines. Democrats argued that with 99.5 percent of Ohio already available for drilling, the bill is unnecessary. They also questioned the economic benefits, and argued it would cause significant damage to state parks, hurt tourism and harm the economy. "We're not against drilling. We're against drilling in parks," said Rep. Robert F. Hagan, D-Youngstown, who went on to question whether Republicans were on drugs. Democrats also suggested Republicans would face voter backlash over the bill in next year's elections.
In response, Speaker William G. Batchelder, R-Medina, pointed to gas prices. "I would say that causes people to have a different view than they might have at $2.50," he said. Batchelder said the ongoing revenue stream for capital projects at Ohio parks is vital. "When you look at them, you can see it," he said of the lack of upkeep. "I think six members used the phrase 'pristine parks.' I don't know where they're going. I have not seen those." State revenue estimates from oil and gas royalties range from a few hundred thousand dollars to about $9 million, depending on factors such as the level of oil and gas production and market prices, according to the nonpartisan Legislative Service Commission. The bill divides state land into four classes, which, Adams said, will deal with issues related to federal encumbrances or deed restrictions. Energy companies would have the greatest access to land in which the state clearly owns all the development rights. Ohio owns the mineral rights to 34,590 acres in state parks, less than one-third of the land. Republicans added an amendment yesterday that would ban drilling on state nature preserves, which Jack Shaner of the Ohio Environmental Council called a positive step. But he strongly opposes the bill. "Ohio has always promised that its parks would remain a natural park, not an industrial park," he said, adding that the new leasing commission would be "too industry-cozy." He said the director of the Ohio Department of Natural Resources should have the final say on whether drilling is allowed in state parks. Tracy Sabetta of the National Wildlife Federation of Ohio said she is concerned that the bill does not explicitly exempt Lake Erie from drilling. While a federal ban remains in place, there are efforts to repeal it, she said. Rep. Dave Hall, R-Killbuck, said the bill essentially bans Lake Erie from drilling because of the way it is classified. Gov.
John Kasich's proposed state budget also would open state parks to drilling, but it left the Ohio Department of Natural Resources in control of the leasing process. Both Hall and Batchelder said they prefer the House drilling language to what is in the budget. Laura Jones, spokeswoman for the Ohio Department of Natural Resources, said the office is "pleased with how our concerns (with the bill) have been addressed." "It's very positive that the landholding agency is the entity entering into the lease as opposed to the commission," she said. jsiegel@dispatch.com
2023-10-18T01:26:29.851802
https://example.com/article/9019
// Decompiled (ProGuard-obfuscated) WeChat app-brand canvas helper: resolves
// a bitmap for a canvas draw action, then notifies the callback once a
// usable (non-null, non-recycled) bitmap is available.
package com.tencent.mm.plugin.appbrand.luggage.a;

import android.graphics.Bitmap;
import android.graphics.Rect;

import com.tencent.matrix.trace.core.AppMethodBeat;
import com.tencent.mm.plugin.appbrand.canvas.d;
import com.tencent.mm.plugin.appbrand.canvas.e;
import com.tencent.mm.plugin.appbrand.canvas.e.a;
import com.tencent.mm.plugin.appbrand.d.b;

public final class c implements e {

    public final Bitmap a(d dVar, String str) {
        AppMethodBeat.i(132092);
        Bitmap a = a(dVar, str, null);
        AppMethodBeat.o(132092);
        return a;
    }

    public final Bitmap a(d dVar, String str, a aVar) {
        AppMethodBeat.i(132093);
        Bitmap a = a(dVar, str, null, aVar);
        AppMethodBeat.o(132093);
        return a;
    }

    public final Bitmap a(final d dVar, final String str, Rect rect, final a aVar) {
        AppMethodBeat.i(132094);
        com.tencent.mm.plugin.appbrand.jsapi.c cVar = dVar.hcM;
        // Resolve the path for `str`, load it, and invoke the async callback.
        Bitmap a = ((com.tencent.mm.plugin.appbrand.d.a) cVar.B(com.tencent.mm.plugin.appbrand.d.a.class))
                .a(((b) cVar.B(b.class)).b(cVar, str), rect, new com.tencent.mm.plugin.appbrand.d.a.c() {
                    public final void E(Bitmap bitmap) {
                        AppMethodBeat.i(132091);
                        if (aVar == null || bitmap == null || bitmap.isRecycled()) {
                            AppMethodBeat.o(132091);
                            return;
                        }
                        aVar.a(dVar);
                        AppMethodBeat.o(132091);
                    }
                });
        AppMethodBeat.o(132094);
        return a;
    }
}
2023-10-30T01:26:29.851802
https://example.com/article/4931
A number of endocrine factors have been found to influence the development of plasmacytoma in pristane-injected BALB/c mice. The discovery of two non-endocrine factors, bacterial lipopolysaccharide and pristane-induced peritoneal factors, has led to the development of a hypothesis integrating the profound effects of certain endocrine agents on the development of this tumor. Bacterial lipopolysaccharide administration increases the development of tumors in pristane-injected mice at some early step which produces nascent tumors, which cannot readily be transplanted to syngeneic mice. Intraperitoneal administration of pristane produces a factor in peritoneal fluid within three days which allows ready transplantation of primary tumors to appropriate hosts. This material contains no pristane. It is proposed to relate the endocrine factors that either accelerate or prevent tumorigenesis to the increase in peritoneal cells and bacterial lipopolysaccharide produced by intraperitoneal pristane, to a transformation step, to nascent tumor cells, or to a maturation step induced by our pristane-induced peritoneal factor.
2023-09-06T01:26:29.851802
https://example.com/article/3412
id: dsq-747509230 date: 2005-05-26T20:37:00.0000000-07:00 name: Todd avatar: https://disqus.com/api/users/avatars/Todd.jpg message: <p>Hi Larry, you are really dumb. You should slit your throat. For some reason, you think that investment principle will forever stay the same. Oil taxes continue to build the principle every year. You really have no idea what you're talking about. What a dork.<br><br><br><br>Now that you're bored, please go away. You are not a worthy opponent. You can't write English properly and you bore me.<br><br><br><br>I think you must be 14 years old and just having fun between surfing porn on your mom's computer. Any sensible adult would know when to shut up - you obviously don't.<br><br><br><br>BTW, are the "everglades" close to the "Everglades"?</p>
2024-06-09T01:26:29.851802
https://example.com/article/5919
SKorea: Libya releases South Korean pastor SEOUL, South Korea — Libya released a South Korean Christian pastor and a businessman on Sunday who had been detained on accusations of proselytizing in the predominantly Muslim country, South Korea's Foreign Ministry said. Efforts to seek their release had dragged on for several months amid diplomatic tensions over Libya's expulsion of a South Korean Embassy official in June for allegedly collecting information on its leader, Moammar Gadhafi, his family and other senior politicians. The dispute was settled on Friday when Lee Sang-deuk, a lawmaker who is the brother of South Korea's president, met with Gadhafi, Foreign Ministry spokesman Kim Young-sun said. Kim denied Libyan allegations that the embassy official was an intelligence agent. The pastor was arrested in June on charges of bringing Christian material into the North African country for missionary work. The other South Korean man was arrested a month later and accused of helping to finance the pastor's religious activities.
2024-01-09T01:26:29.851802
https://example.com/article/3668
I'll Name the Murderer I'll Name the Murderer is a 1936 American film directed by Bernard B. Ray. Plot Ralph Forbes is gossip columnist Tommy Tilton, who excels in slinging nonsense about who is being seen where and, it turns out, is not a timid bluffer when it comes to coaxing out a murderer. Yes, an entertainer is murdered in her dressing room. Tommy Tilton's friend looks pretty guilty, but there's a raft of suspects who also had crossed and been crossed by this particular singer. Tilton's game: He uses his society column to draw out the guilty person with taunts and hints, eventually claiming that he will name the murderer in his next column. Whether his boast is backed up is, of course, in great doubt. Cast Ralph Forbes as Tommy Tilton Marion Shilling as 'Smitty', newspaper photographer Malcolm McGregor as Ted Benson James Guilfoyle as Lou Baron, Private Investigator John Cowell as Police Captain 'Pop' Flynn William Bailey as William Hugo Van Ostrum, Vi's father Agnes Anderson as Nadia Renee, aka Marina Farina Claire Rochelle as Valerie Delroy, aka Maragert O'Brien Gayne Kinsey as Walton, Valerie's Dance Partner Harry Semels as Luigi, Club Owner Al Klein as Club Waiter References External links Category:1936 films Category:American films Category:1930s crime drama films Category:English-language films Category:American black-and-white films Category:American crime drama films Category:Films directed by Bernard B. Ray
2023-10-21T01:26:29.851802
https://example.com/article/5083
Saturday, September 21, 2013

Crusher Django Tutorial (1): hello world

Hello everyone, I am starting a Django tutorial. I am a big fan of Python and Django; you could even say I have a crush on them. I use a GNU/Linux operating system for learning Python and Django, because I think the Windows CMD is too weak for programming. :)

Let's start the tutorial with the terminal in Linux, and try hello world there. First we install Django (the install is easy, so I will skip that step). Then we check that the Django library is available on your system: well, well, well, Django is installed.

Next, change to a directory whose path contains only English characters (I am Chinese, so this matters for me; it may not be an issue for you). Here I use my user's home directory ~:

zoo@ubuntu:~$ django-admin startproject hello

No errors, and no output means everything is OK. Check the path: a new folder named hello has been added. All right, that is our base directory. In the folder there are four files:

The __init__.py marks the folder as a Python package by default. I do not know much more about it. LOL... The manage.py provides all the commands we can use. The settings.py holds all the settings of the project, like the database we use, the templates we set up, and so on; we will talk about it later. The urls.py is the URL configuration; we use it to map URLs to view functions. If you do not understand, just do it: the more you do, the more you will understand.

OK, let's start the "hello world" page! Just open the terminal and run the command: runserver starts the server. With Django you won't need a separate server like Apache or Nginx, just Django! By the way, binding to 0.0.0.0 means that if you have a public IP address, your friends outside your local network can see your page too. That is really cool!
Now you can open Chrome or Firefox and type the address. The line "from views import *" imports the view functions into the current file, urls.py. The line "(r'^$', hello)," binds the empty URL pattern (nothing after the host) to the hello function. That means when you load http://localhost:8000 you will see the web page we created. Now we have made our first Django page. In fact, we can make the page a little more complex: just add some HTML like this inside the quotes:
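To make the wiring concrete, here is a minimal sketch of the two files this post is building. This is my reconstruction, not the author's exact code, and it uses the old Django 1.x patterns()/url() style that matches the era of the post (patterns() was removed in Django 1.10; modern projects use a plain list of path() entries instead):

```python
# urls.py -- map the empty URL pattern (the site root) to the hello view.
# Old flat project layout: urls.py and views.py sit next to manage.py.
from django.conf.urls import patterns
from views import hello

urlpatterns = patterns('',
    (r'^$', hello),  # "nothing" after the host name -> hello()
)

# views.py -- the simplest possible view: return an HTML string.
from django.http import HttpResponse

def hello(request):
    return HttpResponse("<h1>hello world</h1>")
```

With these two files in place, python manage.py runserver 0.0.0.0:8000 serves the page at http://localhost:8000.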
2023-09-14T01:26:29.851802
https://example.com/article/7444
Month: February 2017 We recently returned from a trip home to the U.S. for the holidays and I have been thinking about what defines my life here. People kept asking what life is like in Belgium and is it very different from home? I found that I could not say it was all that different but it is certainly not the same. Sometimes I feel like I could be anywhere doing the same things- taking my son to school, making dinner, going for walks, etc. Then at other times I feel like I am in this totally alien place compared to what I am accustomed to. I guess what it comes down to is an accumulation of little things that are different that affect your daily life. So, I started collecting a list of things I have noticed thus far- Doors- I never paid much attention to it but in the U.S., doors in public buildings open outwards, and in Belgium you never quite know which way they open, but usually it is the opposite way of which I try! Back home, this is due to fire codes, here I am not sure if those exist…. Dining out- There are some great restaurants here in Gent, but I have found the “leisurely” meal here to be a bit too much (especially with a 4-year-old). Getting your food can take literally an hour or more and you must ask for the check if you want to get out of there. I kind of used to hate how wait staff would rush you to turn their table back home but I also like to get something to eat before midnight… Privacy- there are a lot of hedges in my neighborhood, and many of the front doors on the houses are on the side of the house. People like their privacy and kind of keep to themselves, though they do tend to have giant windows and glass doors on the backs of their houses (that look out into the hedge enclosed back yard of course) Weighing your fruit- in the market you must weigh your own fruits and vegetables and print out a sticker that shows the price. If you go to the check out without this, they will not weigh it for you, they will send you back to weigh it. 
They had a sort of optional version of this at Wegmans but that seemed more like a way to engage my son in grocery shopping than an actual necessity. Doctors- When I went to my first doctor’s appointment I was very surprised to find the doctor answering her cell phone while I was in the midst of explaining my medical history to her. Thought it was actually pretty rude. Turns out, there are no receptionists at dr. offices here, so the doctors do often answer the phone during appointments. I don’t know, I kind of like to feel like the dr. is focused on me when I go in to see them… 24-hour time- I realize that there are only a few countries in the world who don’t use this, but I am from one of them and I have not gotten used to saying 15:30 instead of 3:30. I find myself having to count each time someone gives me a 24-hour time. Makes sense though… Bathrooms- this is a big one for me- most stores and publics places do not have public bathrooms. There are some exceptions but you won’t easily find or access the bathroom in the market or a clothing shop. At home, there was always that security that there is a bathroom nearby if you need it. Plus, even if there is one, you often have to pay to use it! Every day I notice more of these little things, so maybe part 2 will be in the future…
2023-10-02T01:26:29.851802
https://example.com/article/8492
To restore the default SMB2 you simply need to delete the newly created configuration file (nsmb.conf) with the command: rm ~/Library/Preferences/nsmb.conf Both workarounds force OS X to use SMB1 as a network protocol instead of the default SMB2 used by OS X 10.9 (Mavericks). While the first is an ad hoc solution the second is a persistent but reversible configuration change (for this user account). SMB1 is slower than SMB2 but stable. [crarko adds: I don't have a way to test this at the moment, but I do recall reading that people have experienced some of these issues. If someone has a NAS device to test this with (especially if there have been problems) please let us know if either of these fixes helped.] Do I need to restart or log out and back in after the changes? I have problems COPYING files to my SMB share since Mavericks! It won't let me copy because Finder tells me the files are still in use. Yes, that's COPYING (i.e. READING from the source, not moving, i.e. writing). This is ridiculous and Windows-like stupidity. Makes me want to go back to Linux... If you have issues with "busy" files, try browsing in a view other than column view. I've found that the previews generated by the Finder in this view very often lead to "busy" files that cannot be easily cleared. It's probably the preview pane that would be causing it then. I use column view all the time, but I have the preview pane unchecked. I'm finding 10.9 to be faster and more stable than 10.8, sometimes on 10.8, finder would revert to the root instead of the previous folder when I was deleting stuff, it works great in 10.9. I use smb://(server) to connect to our old Buffalo NAS. Finally bit the bullet and tried this. The performance difference on my system is amazing! Mid-2010 Mac mini running Mavericks, using Graphic Converter to browse photos on an Iomega NAS (connected via Ethernet to the same switch the Mac is using). 
Original performance was so poor that I copied photos to the local drive for browsing, instead of using the NAS! Made the change listed here today, and using GC to browse the NAS feels just as fast as the local drive! Thanks for this information which I found when trying to solve a problem saving .DOCX files from Office for Mac 2011 to file shares on a Windows Server 2012. There was also a possible hint from http://word.mvps.org/Mac/CantSaveToServer.html which I'd also tried. In the end I believe the issue lies with an incompatibility between AVG Anti Virus for Mac (and its real time protection), Office 2011 for Mac, and Windows Server; turning this off seemed to solve the problem - more testing required.
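For reference, the persistent workaround described at the top of this hint comes down to a two-line configuration file saved as ~/Library/Preferences/nsmb.conf. The sketch below uses the smb_neg=smb1_only key, which is the commonly cited nsmb.conf option for forcing SMB1 on Mavericks; check man nsmb.conf on your own system before relying on it:

```
[default]
smb_neg=smb1_only
```

Deleting the file (rm ~/Library/Preferences/nsmb.conf, as above) restores the default SMB2 behavior. Since nsmb.conf is read when a share is mounted, remounting the share should suffice; a restart or logout should not be needed.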
2024-07-29T01:26:29.851802
https://example.com/article/9650
// // parameter.cs: Parameter definition. // // Author: Miguel de Icaza (miguel@gnu.org) // Marek Safar (marek.safar@seznam.cz) // // Dual licensed under the terms of the MIT X11 or GNU GPL // // Copyright 2001-2003 Ximian, Inc (http://www.ximian.com) // Copyright 2003-2008 Novell, Inc. // Copyright 2011 Xamarin Inc // // using System; using System.Text; #if STATIC using MetaType = IKVM.Reflection.Type; using IKVM.Reflection; using IKVM.Reflection.Emit; #else using MetaType = System.Type; using System.Reflection; using System.Reflection.Emit; #endif namespace Mono.CSharp { /// <summary> /// Abstract Base class for parameters of a method. /// </summary> public abstract class ParameterBase : Attributable { protected ParameterBuilder builder; public override void ApplyAttributeBuilder (Attribute a, MethodSpec ctor, byte[] cdata, PredefinedAttributes pa) { #if false if (a.Type == pa.MarshalAs) { UnmanagedMarshal marshal = a.GetMarshal (this); if (marshal != null) { builder.SetMarshal (marshal); } return; } #endif if (a.HasSecurityAttribute) { a.Error_InvalidSecurityParent (); return; } if (a.Type == pa.Dynamic) { a.Error_MisusedDynamicAttribute (); return; } builder.SetCustomAttribute ((ConstructorInfo) ctor.GetMetaInfo (), cdata); } public ParameterBuilder Builder { get { return builder; } } public override bool IsClsComplianceRequired() { return false; } } /// <summary> /// Class for applying custom attributes on the return type /// </summary> public class ReturnParameter : ParameterBase { MemberCore method; // TODO: merge method and mb public ReturnParameter (MemberCore method, MethodBuilder mb, Location location) { this.method = method; try { builder = mb.DefineParameter (0, ParameterAttributes.None, ""); } catch (ArgumentOutOfRangeException) { method.Compiler.Report.RuntimeMissingSupport (location, "custom attributes on the return type"); } } public override void ApplyAttributeBuilder (Attribute a, MethodSpec ctor, byte[] cdata, PredefinedAttributes pa) { if 
(a.Type == pa.CLSCompliant) { method.Compiler.Report.Warning (3023, 1, a.Location, "CLSCompliant attribute has no meaning when applied to return types. Try putting it on the method instead"); } // This occurs after Warning -28 if (builder == null) return; base.ApplyAttributeBuilder (a, ctor, cdata, pa); } public override AttributeTargets AttributeTargets { get { return AttributeTargets.ReturnValue; } } /// <summary> /// Is never called /// </summary> public override string[] ValidAttributeTargets { get { return null; } } } public class ImplicitLambdaParameter : Parameter { public ImplicitLambdaParameter (string name, Location loc) : base (null, name, Modifier.NONE, null, loc) { } public override TypeSpec Resolve (IMemberContext ec, int index) { if (parameter_type == null) throw new InternalErrorException ("A type of implicit lambda parameter `{0}' is not set", Name); base.idx = index; return parameter_type; } public void SetParameterType (TypeSpec type) { parameter_type = type; } } public class ParamsParameter : Parameter { public ParamsParameter (FullNamedExpression type, string name, Attributes attrs, Location loc): base (type, name, Parameter.Modifier.PARAMS, attrs, loc) { } public override TypeSpec Resolve (IMemberContext ec, int index) { if (base.Resolve (ec, index) == null) return null; var ac = parameter_type as ArrayContainer; if (ac == null || ac.Rank != 1) { ec.Module.Compiler.Report.Error (225, Location, "The params parameter must be a single dimensional array"); return null; } return parameter_type; } public override void ApplyAttributes (MethodBuilder mb, ConstructorBuilder cb, int index, PredefinedAttributes pa) { base.ApplyAttributes (mb, cb, index, pa); pa.ParamArray.EmitAttribute (builder); } } public class ArglistParameter : Parameter { // Doesn't have proper type because it's never chosen for better conversion public ArglistParameter (Location loc) : base (null, String.Empty, Parameter.Modifier.NONE, null, loc) { parameter_type = 
InternalType.Arglist; } public override void ApplyAttributes (MethodBuilder mb, ConstructorBuilder cb, int index, PredefinedAttributes pa) { // Nothing to do } public override bool CheckAccessibility (InterfaceMemberBase member) { return true; } public override TypeSpec Resolve (IMemberContext ec, int index) { return parameter_type; } } public interface IParameterData { Expression DefaultValue { get; } bool HasExtensionMethodModifier { get; } bool HasDefaultValue { get; } Parameter.Modifier ModFlags { get; } string Name { get; } } // // Parameter information created by parser // public class Parameter : ParameterBase, IParameterData, ILocalVariable // TODO: INamedBlockVariable { [Flags] public enum Modifier : byte { NONE = 0, PARAMS = 1 << 0, REF = 1 << 1, OUT = 1 << 2, This = 1 << 3, CallerMemberName = 1 << 4, CallerLineNumber = 1 << 5, CallerFilePath = 1 << 6, RefOutMask = REF | OUT, ModifierMask = PARAMS | REF | OUT | This, CallerMask = CallerMemberName | CallerLineNumber | CallerFilePath } static readonly string[] attribute_targets = new string[] { "param" }; FullNamedExpression texpr; Modifier modFlags; string name; Expression default_expr; protected TypeSpec parameter_type; readonly Location loc; protected int idx; public bool HasAddressTaken; TemporaryVariableReference expr_tree_variable; HoistedParameter hoisted_variant; public Parameter (FullNamedExpression type, string name, Modifier mod, Attributes attrs, Location loc) { this.name = name; modFlags = mod; this.loc = loc; texpr = type; // Only assign, attributes will be attached during resolve base.attributes = attrs; } #region Properties public Expression DefaultExpression { get { return default_expr; } } public DefaultParameterValueExpression DefaultValue { get { return default_expr as DefaultParameterValueExpression; } set { default_expr = value; } } Expression IParameterData.DefaultValue { get { var expr = default_expr as DefaultParameterValueExpression; return expr == null ? 
default_expr : expr.Child; } } bool HasOptionalExpression { get { return default_expr is DefaultParameterValueExpression; } } public Location Location { get { return loc; } } public Modifier ParameterModifier { get { return modFlags; } } public TypeSpec Type { get { return parameter_type; } set { parameter_type = value; } } public FullNamedExpression TypeExpression { get { return texpr; } } public override string[] ValidAttributeTargets { get { return attribute_targets; } } #endregion public override void ApplyAttributeBuilder (Attribute a, MethodSpec ctor, byte[] cdata, PredefinedAttributes pa) { if (a.Type == pa.In && ModFlags == Modifier.OUT) { a.Report.Error (36, a.Location, "An out parameter cannot have the `In' attribute"); return; } if (a.Type == pa.ParamArray) { a.Report.Error (674, a.Location, "Do not use `System.ParamArrayAttribute'. Use the `params' keyword instead"); return; } if (a.Type == pa.Out && (ModFlags & Modifier.REF) != 0 && !OptAttributes.Contains (pa.In)) { a.Report.Error (662, a.Location, "Cannot specify only `Out' attribute on a ref parameter. Use both `In' and `Out' attributes or neither"); return; } if (a.Type == pa.CLSCompliant) { a.Report.Warning (3022, 1, a.Location, "CLSCompliant attribute has no meaning when applied to parameters. 
Try putting it on the method instead"); } else if (a.Type == pa.DefaultParameterValue || a.Type == pa.OptionalParameter) { if (HasOptionalExpression) { a.Report.Error (1745, a.Location, "Cannot specify `{0}' attribute on optional parameter `{1}'", a.Type.GetSignatureForError ().Replace ("Attribute", ""), Name); } if (a.Type == pa.DefaultParameterValue) return; } else if (a.Type == pa.CallerMemberNameAttribute) { if ((modFlags & Modifier.CallerMemberName) == 0) { a.Report.Error (4022, a.Location, "The CallerMemberName attribute can only be applied to parameters with default value"); } } else if (a.Type == pa.CallerLineNumberAttribute) { if ((modFlags & Modifier.CallerLineNumber) == 0) { a.Report.Error (4020, a.Location, "The CallerLineNumber attribute can only be applied to parameters with default value"); } } else if (a.Type == pa.CallerFilePathAttribute) { if ((modFlags & Modifier.CallerFilePath) == 0) { a.Report.Error (4021, a.Location, "The CallerFilePath attribute can only be applied to parameters with default value"); } } base.ApplyAttributeBuilder (a, ctor, cdata, pa); } public virtual bool CheckAccessibility (InterfaceMemberBase member) { if (parameter_type == null) return true; return member.IsAccessibleAs (parameter_type); } // <summary> // Resolve is used in method definitions // </summary> public virtual TypeSpec Resolve (IMemberContext rc, int index) { if (parameter_type != null) return parameter_type; if (attributes != null) attributes.AttachTo (this, rc); parameter_type = texpr.ResolveAsType (rc); if (parameter_type == null) return null; this.idx = index; if ((modFlags & Parameter.Modifier.RefOutMask) != 0 && parameter_type.IsSpecialRuntimeType) { rc.Module.Compiler.Report.Error (1601, Location, "Method or delegate parameter cannot be of type `{0}'", GetSignatureForError ()); return null; } VarianceDecl.CheckTypeVariance (parameter_type, (modFlags & Parameter.Modifier.RefOutMask) != 0 ? 
Variance.None : Variance.Contravariant, rc); if (parameter_type.IsStatic) { rc.Module.Compiler.Report.Error (721, Location, "`{0}': static types cannot be used as parameters", texpr.GetSignatureForError ()); return parameter_type; } if ((modFlags & Modifier.This) != 0 && (parameter_type.IsPointer || parameter_type.BuiltinType == BuiltinTypeSpec.Type.Dynamic)) { rc.Module.Compiler.Report.Error (1103, Location, "The extension method cannot be of type `{0}'", parameter_type.GetSignatureForError ()); } return parameter_type; } void ResolveCallerAttributes (ResolveContext rc) { var pa = rc.Module.PredefinedAttributes; TypeSpec caller_type; foreach (var attr in attributes.Attrs) { var atype = attr.ResolveTypeForComparison (); if (atype == null) continue; if (atype == pa.CallerMemberNameAttribute) { caller_type = rc.BuiltinTypes.String; if (caller_type != parameter_type && !Convert.ImplicitReferenceConversionExists (caller_type, parameter_type)) { rc.Report.Error (4019, attr.Location, "The CallerMemberName attribute cannot be applied because there is no standard conversion from `{0}' to `{1}'", caller_type.GetSignatureForError (), parameter_type.GetSignatureForError ()); } modFlags |= Modifier.CallerMemberName; continue; } if (atype == pa.CallerLineNumberAttribute) { caller_type = rc.BuiltinTypes.Int; if (caller_type != parameter_type && !Convert.ImplicitNumericConversionExists (caller_type, parameter_type)) { rc.Report.Error (4017, attr.Location, "The CallerMemberName attribute cannot be applied because there is no standard conversion from `{0}' to `{1}'", caller_type.GetSignatureForError (), parameter_type.GetSignatureForError ()); } modFlags |= Modifier.CallerLineNumber; continue; } if (atype == pa.CallerFilePathAttribute) { caller_type = rc.BuiltinTypes.String; if (caller_type != parameter_type && !Convert.ImplicitReferenceConversionExists (caller_type, parameter_type)) { rc.Report.Error (4018, attr.Location, "The CallerFilePath attribute cannot be applied because 
there is no standard conversion from `{0}' to `{1}'", caller_type.GetSignatureForError (), parameter_type.GetSignatureForError ()); } modFlags |= Modifier.CallerFilePath; continue; } } } public void ResolveDefaultValue (ResolveContext rc) { // // Default value was specified using an expression // if (default_expr != null) { ((DefaultParameterValueExpression)default_expr).Resolve (rc, this); if (attributes != null) ResolveCallerAttributes (rc); return; } if (attributes == null) return; var pa = rc.Module.PredefinedAttributes; var def_attr = attributes.Search (pa.DefaultParameterValue); if (def_attr != null) { if (def_attr.Resolve () == null) return; var default_expr_attr = def_attr.GetParameterDefaultValue (); if (default_expr_attr == null) return; var dpa_rc = def_attr.CreateResolveContext (); default_expr = default_expr_attr.Resolve (dpa_rc); if (default_expr is BoxedCast) default_expr = ((BoxedCast) default_expr).Child; Constant c = default_expr as Constant; if (c == null) { if (parameter_type.BuiltinType == BuiltinTypeSpec.Type.Object) { rc.Report.Error (1910, default_expr.Location, "Argument of type `{0}' is not applicable for the DefaultParameterValue attribute", default_expr.Type.GetSignatureForError ()); } else { rc.Report.Error (1909, default_expr.Location, "The DefaultParameterValue attribute is not applicable on parameters of type `{0}'", default_expr.Type.GetSignatureForError ()); } default_expr = null; return; } if (TypeSpecComparer.IsEqual (default_expr.Type, parameter_type) || (default_expr is NullConstant && TypeSpec.IsReferenceType (parameter_type) && !parameter_type.IsGenericParameter) || parameter_type.BuiltinType == BuiltinTypeSpec.Type.Object) { return; } // // LAMESPEC: Some really weird csc behaviour which we have to mimic // User operators returning same type as parameter type are considered // valid for this attribute only // // struct S { public static implicit operator S (int i) {} } // // void M ([DefaultParameterValue (3)]S s) // var 
expr = Convert.ImplicitUserConversion (dpa_rc, default_expr, parameter_type, loc); if (expr != null && TypeSpecComparer.IsEqual (expr.Type, parameter_type)) { return; } rc.Report.Error (1908, default_expr.Location, "The type of the default value should match the type of the parameter"); return; } var opt_attr = attributes.Search (pa.OptionalParameter); if (opt_attr != null) { default_expr = EmptyExpression.MissingValue; } } public bool HasDefaultValue { get { return default_expr != null; } } public bool HasExtensionMethodModifier { get { return (modFlags & Modifier.This) != 0; } } // // Hoisted parameter variant // public HoistedParameter HoistedVariant { get { return hoisted_variant; } set { hoisted_variant = value; } } public Modifier ModFlags { get { return modFlags & ~Modifier.This; } } public string Name { get { return name; } set { name = value; } } public override AttributeTargets AttributeTargets { get { return AttributeTargets.Parameter; } } public void Error_DuplicateName (Report r) { r.Error (100, Location, "The parameter name `{0}' is a duplicate", Name); } public virtual string GetSignatureForError () { string type_name; if (parameter_type != null) type_name = parameter_type.GetSignatureForError (); else type_name = texpr.GetSignatureForError (); string mod = GetModifierSignature (modFlags); if (mod.Length > 0) return String.Concat (mod, " ", type_name); return type_name; } public static string GetModifierSignature (Modifier mod) { switch (mod) { case Modifier.OUT: return "out"; case Modifier.PARAMS: return "params"; case Modifier.REF: return "ref"; case Modifier.This: return "this"; default: return ""; } } public void IsClsCompliant (IMemberContext ctx) { if (parameter_type.IsCLSCompliant ()) return; ctx.Module.Compiler.Report.Warning (3001, 1, Location, "Argument type `{0}' is not CLS-compliant", parameter_type.GetSignatureForError ()); } public virtual void ApplyAttributes (MethodBuilder mb, ConstructorBuilder cb, int index, PredefinedAttributes pa) 
{ if (builder != null) throw new InternalErrorException ("builder already exists"); var pattrs = ParametersCompiled.GetParameterAttribute (modFlags); if (HasOptionalExpression) pattrs |= ParameterAttributes.Optional; if (mb == null) builder = cb.DefineParameter (index, pattrs, Name); else builder = mb.DefineParameter (index, pattrs, Name); if (OptAttributes != null) OptAttributes.Emit (); if (HasDefaultValue && default_expr.Type != null) { // // Emit constant values for true constants only, the other // constant-like expressions will rely on default value expression // var def_value = DefaultValue; Constant c = def_value != null ? def_value.Child as Constant : default_expr as Constant; if (c != null) { if (c.Type.BuiltinType == BuiltinTypeSpec.Type.Decimal) { pa.DecimalConstant.EmitAttribute (builder, (decimal) c.GetValue (), c.Location); } else { builder.SetConstant (c.GetValue ()); } } else if (default_expr.Type.IsStruct) { // // Handles special case where default expression is used with value-type // // void Foo (S s = default (S)) {} // builder.SetConstant (null); } } if (parameter_type != null) { if (parameter_type.BuiltinType == BuiltinTypeSpec.Type.Dynamic) { pa.Dynamic.EmitAttribute (builder); } else if (parameter_type.HasDynamicElement) { pa.Dynamic.EmitAttribute (builder, parameter_type, Location); } } } public Parameter Clone () { Parameter p = (Parameter) MemberwiseClone (); if (attributes != null) p.attributes = attributes.Clone (); return p; } public ExpressionStatement CreateExpressionTreeVariable (BlockContext ec) { if ((modFlags & Modifier.RefOutMask) != 0) ec.Report.Error (1951, Location, "An expression tree parameter cannot use `ref' or `out' modifier"); expr_tree_variable = TemporaryVariableReference.Create (ResolveParameterExpressionType (ec, Location).Type, ec.CurrentBlock.ParametersBlock, Location); expr_tree_variable = (TemporaryVariableReference) expr_tree_variable.Resolve (ec); Arguments arguments = new Arguments (2); arguments.Add (new 
Argument (new TypeOf (parameter_type, Location))); arguments.Add (new Argument (new StringConstant (ec.BuiltinTypes, Name, Location))); return new SimpleAssign (ExpressionTreeVariableReference (), Expression.CreateExpressionFactoryCall (ec, "Parameter", null, arguments, Location)); } public void Emit (EmitContext ec) { ec.EmitArgumentLoad (idx); } public void EmitAssign (EmitContext ec) { ec.EmitArgumentStore (idx); } public void EmitAddressOf (EmitContext ec) { if ((ModFlags & Modifier.RefOutMask) != 0) { ec.EmitArgumentLoad (idx); } else { ec.EmitArgumentAddress (idx); } } public TemporaryVariableReference ExpressionTreeVariableReference () { return expr_tree_variable; } // // System.Linq.Expressions.ParameterExpression type // public static TypeExpr ResolveParameterExpressionType (IMemberContext ec, Location location) { TypeSpec p_type = ec.Module.PredefinedTypes.ParameterExpression.Resolve (); return new TypeExpression (p_type, location); } public void Warning_UselessOptionalParameter (Report Report) { Report.Warning (1066, 1, Location, "The default value specified for optional parameter `{0}' will never be used", Name); } } // // Imported or resolved parameter information // public class ParameterData : IParameterData { readonly string name; readonly Parameter.Modifier modifiers; readonly Expression default_value; public ParameterData (string name, Parameter.Modifier modifiers) { this.name = name; this.modifiers = modifiers; } public ParameterData (string name, Parameter.Modifier modifiers, Expression defaultValue) : this (name, modifiers) { this.default_value = defaultValue; } #region IParameterData Members public Expression DefaultValue { get { return default_value; } } public bool HasExtensionMethodModifier { get { return (modifiers & Parameter.Modifier.This) != 0; } } public bool HasDefaultValue { get { return default_value != null; } } public Parameter.Modifier ModFlags { get { return modifiers; } } public string Name { get { return name; } } #endregion } 
public abstract class AParametersCollection { protected bool has_arglist; protected bool has_params; // Null object pattern protected IParameterData [] parameters; protected TypeSpec [] types; public CallingConventions CallingConvention { get { return has_arglist ? CallingConventions.VarArgs : CallingConventions.Standard; } } public int Count { get { return parameters.Length; } } public TypeSpec ExtensionMethodType { get { if (Count == 0) return null; return FixedParameters [0].HasExtensionMethodModifier ? types [0] : null; } } public IParameterData [] FixedParameters { get { return parameters; } } public static ParameterAttributes GetParameterAttribute (Parameter.Modifier modFlags) { return (modFlags & Parameter.Modifier.OUT) != 0 ? ParameterAttributes.Out : ParameterAttributes.None; } // Very expensive operation public MetaType[] GetMetaInfo () { MetaType[] types; if (has_arglist) { if (Count == 1) return MetaType.EmptyTypes; types = new MetaType[Count - 1]; } else { if (Count == 0) return MetaType.EmptyTypes; types = new MetaType[Count]; } for (int i = 0; i < types.Length; ++i) { types[i] = Types[i].GetMetaInfo (); if ((FixedParameters[i].ModFlags & Parameter.Modifier.RefOutMask) == 0) continue; // TODO MemberCache: Should go to MetaInfo getter types [i] = types [i].MakeByRefType (); } return types; } // // Returns the parameter information based on the name // public int GetParameterIndexByName (string name) { for (int idx = 0; idx < Count; ++idx) { if (parameters [idx].Name == name) return idx; } return -1; } public string GetSignatureForDocumentation () { if (IsEmpty) return string.Empty; StringBuilder sb = new StringBuilder ("("); for (int i = 0; i < Count; ++i) { if (i != 0) sb.Append (","); sb.Append (types [i].GetSignatureForDocumentation ()); if ((parameters[i].ModFlags & Parameter.Modifier.RefOutMask) != 0) sb.Append ("@"); } sb.Append (")"); return sb.ToString (); } public string GetSignatureForError () { return GetSignatureForError ("(", ")", Count); 
} public string GetSignatureForError (string start, string end, int count) { StringBuilder sb = new StringBuilder (start); for (int i = 0; i < count; ++i) { if (i != 0) sb.Append (", "); sb.Append (ParameterDesc (i)); } sb.Append (end); return sb.ToString (); } public bool HasArglist { get { return has_arglist; } } public bool HasExtensionMethodType { get { if (Count == 0) return false; return FixedParameters [0].HasExtensionMethodModifier; } } public bool HasParams { get { return has_params; } } public bool IsEmpty { get { return parameters.Length == 0; } } public AParametersCollection Inflate (TypeParameterInflator inflator) { TypeSpec[] inflated_types = null; bool default_value = false; for (int i = 0; i < Count; ++i) { var inflated_param = inflator.Inflate (types[i]); if (inflated_types == null) { if (inflated_param == types[i]) continue; default_value |= FixedParameters[i].HasDefaultValue; inflated_types = new TypeSpec[types.Length]; Array.Copy (types, inflated_types, types.Length); } else { if (inflated_param == types[i]) continue; default_value |= FixedParameters[i].HasDefaultValue; } inflated_types[i] = inflated_param; } if (inflated_types == null) return this; var clone = (AParametersCollection) MemberwiseClone (); clone.types = inflated_types; // // Default expression is original expression from the parameter // declaration context which can be of nested enum in generic class type. // In such case we end up with expression type of G<T>.E and e.g. parameter // type of G<int>.E and conversion would fail without inflate in this // context. 
// if (default_value) { clone.parameters = new IParameterData[Count]; for (int i = 0; i < Count; ++i) { var fp = FixedParameters[i]; clone.FixedParameters[i] = fp; if (!fp.HasDefaultValue) continue; var expr = fp.DefaultValue; if (inflated_types[i] == expr.Type) continue; var c = expr as Constant; if (c != null) { // // It may fail we are inflating before type validation is done // c = Constant.ExtractConstantFromValue (inflated_types[i], c.GetValue (), expr.Location); if (c == null) expr = new DefaultValueExpression (new TypeExpression (inflated_types[i], expr.Location), expr.Location); else expr = c; } else if (expr is DefaultValueExpression) expr = new DefaultValueExpression (new TypeExpression (inflated_types[i], expr.Location), expr.Location); clone.FixedParameters[i] = new ParameterData (fp.Name, fp.ModFlags, expr); } } return clone; } public string ParameterDesc (int pos) { if (types == null || types [pos] == null) return ((Parameter)FixedParameters [pos]).GetSignatureForError (); string type = types [pos].GetSignatureForError (); if (FixedParameters [pos].HasExtensionMethodModifier) return "this " + type; var mod = FixedParameters[pos].ModFlags & Parameter.Modifier.ModifierMask; if (mod == 0) return type; return Parameter.GetModifierSignature (mod) + " " + type; } public TypeSpec[] Types { get { return types; } set { types = value; } } } // // A collection of imported or resolved parameters // public class ParametersImported : AParametersCollection { public ParametersImported (IParameterData [] parameters, TypeSpec [] types, bool hasArglist, bool hasParams) { this.parameters = parameters; this.types = types; this.has_arglist = hasArglist; this.has_params = hasParams; } public ParametersImported (IParameterData[] param, TypeSpec[] types, bool hasParams) { this.parameters = param; this.types = types; this.has_params = hasParams; } } /// <summary> /// Represents the methods parameters /// </summary> public class ParametersCompiled : AParametersCollection { 
public static readonly ParametersCompiled EmptyReadOnlyParameters = new ParametersCompiled (); // Used by C# 2.0 delegates public static readonly ParametersCompiled Undefined = new ParametersCompiled (); private ParametersCompiled () { parameters = new Parameter [0]; types = TypeSpec.EmptyTypes; } private ParametersCompiled (IParameterData[] parameters, TypeSpec[] types) { this.parameters = parameters; this.types = types; } public ParametersCompiled (params Parameter[] parameters) { if (parameters == null || parameters.Length == 0) throw new ArgumentException ("Use EmptyReadOnlyParameters"); this.parameters = parameters; int count = parameters.Length; for (int i = 0; i < count; i++){ has_params |= (parameters [i].ModFlags & Parameter.Modifier.PARAMS) != 0; } } public ParametersCompiled (Parameter [] parameters, bool has_arglist) : this (parameters) { this.has_arglist = has_arglist; } public static ParametersCompiled CreateFullyResolved (Parameter p, TypeSpec type) { return new ParametersCompiled (new Parameter [] { p }, new TypeSpec [] { type }); } public static ParametersCompiled CreateFullyResolved (Parameter[] parameters, TypeSpec[] types) { return new ParametersCompiled (parameters, types); } // // TODO: This does not fit here, it should go to different version of AParametersCollection // as the underlying type is not Parameter and some methods will fail to cast // public static AParametersCollection CreateFullyResolved (params TypeSpec[] types) { var pd = new ParameterData [types.Length]; for (int i = 0; i < pd.Length; ++i) pd[i] = new ParameterData (null, Parameter.Modifier.NONE, null); return new ParametersCompiled (pd, types); } public static ParametersCompiled CreateImplicitParameter (FullNamedExpression texpr, Location loc) { return new ParametersCompiled ( new[] { new Parameter (texpr, "value", Parameter.Modifier.NONE, null, loc) }, null); } public void CheckConstraints (IMemberContext mc) { foreach (Parameter p in parameters) { // // It's null for 
compiler generated types or special types like __arglist // if (p.TypeExpression != null) ConstraintChecker.Check (mc, p.Type, p.TypeExpression.Location); } } // // Returns non-zero value for equal CLS parameter signatures // public static int IsSameClsSignature (AParametersCollection a, AParametersCollection b) { int res = 0; for (int i = 0; i < a.Count; ++i) { var a_type = a.Types[i]; var b_type = b.Types[i]; if (TypeSpecComparer.Override.IsEqual (a_type, b_type)) { if ((a.FixedParameters[i].ModFlags & Parameter.Modifier.RefOutMask) != (b.FixedParameters[i].ModFlags & Parameter.Modifier.RefOutMask)) res |= 1; continue; } var ac_a = a_type as ArrayContainer; if (ac_a == null) return 0; var ac_b = b_type as ArrayContainer; if (ac_b == null) return 0; if (ac_a.Element is ArrayContainer || ac_b.Element is ArrayContainer) { res |= 2; continue; } if (ac_a.Rank != ac_b.Rank && TypeSpecComparer.Override.IsEqual (ac_a.Element, ac_b.Element)) { res |= 1; continue; } return 0; } return res; } public static ParametersCompiled MergeGenerated (CompilerContext ctx, ParametersCompiled userParams, bool checkConflicts, Parameter compilerParams, TypeSpec compilerTypes) { return MergeGenerated (ctx, userParams, checkConflicts, new Parameter [] { compilerParams }, new TypeSpec [] { compilerTypes }); } // // Use this method when you merge compiler generated parameters with user parameters // public static ParametersCompiled MergeGenerated (CompilerContext ctx, ParametersCompiled userParams, bool checkConflicts, Parameter[] compilerParams, TypeSpec[] compilerTypes) { Parameter[] all_params = new Parameter [userParams.Count + compilerParams.Length]; userParams.FixedParameters.CopyTo(all_params, 0); TypeSpec [] all_types; if (userParams.types != null) { all_types = new TypeSpec [all_params.Length]; userParams.Types.CopyTo (all_types, 0); } else { all_types = null; } int last_filled = userParams.Count; int index = 0; foreach (Parameter p in compilerParams) { for (int i = 0; i < 
last_filled; ++i) { while (p.Name == all_params [i].Name) { if (checkConflicts && i < userParams.Count) { ctx.Report.Error (316, userParams[i].Location, "The parameter name `{0}' conflicts with a compiler generated name", p.Name); } p.Name = '_' + p.Name; } } all_params [last_filled] = p; if (all_types != null) all_types [last_filled] = compilerTypes [index++]; ++last_filled; } ParametersCompiled parameters = new ParametersCompiled (all_params, all_types); parameters.has_params = userParams.has_params; return parameters; } // // Parameters checks for members which don't have a block // public void CheckParameters (MemberCore member) { for (int i = 0; i < parameters.Length; ++i) { var name = parameters[i].Name; for (int ii = i + 1; ii < parameters.Length; ++ii) { if (parameters[ii].Name == name) this[ii].Error_DuplicateName (member.Compiler.Report); } } } public bool Resolve (IMemberContext ec) { if (types != null) return true; types = new TypeSpec [Count]; bool ok = true; Parameter p; for (int i = 0; i < FixedParameters.Length; ++i) { p = this [i]; TypeSpec t = p.Resolve (ec, i); if (t == null) { ok = false; continue; } types [i] = t; } return ok; } public void ResolveDefaultValues (MemberCore m) { ResolveContext rc = null; for (int i = 0; i < parameters.Length; ++i) { Parameter p = (Parameter) parameters [i]; // // Try not to enter default values resolution if there is no default value possible // if (p.HasDefaultValue || p.OptAttributes != null) { if (rc == null) rc = new ResolveContext (m); p.ResolveDefaultValue (rc); } } } // Define each type attribute (in/out/ref) and // the argument names. 
public void ApplyAttributes (IMemberContext mc, MethodBase builder) { if (Count == 0) return; MethodBuilder mb = builder as MethodBuilder; ConstructorBuilder cb = builder as ConstructorBuilder; var pa = mc.Module.PredefinedAttributes; for (int i = 0; i < Count; i++) { this [i].ApplyAttributes (mb, cb, i + 1, pa); } } public void VerifyClsCompliance (IMemberContext ctx) { foreach (Parameter p in FixedParameters) p.IsClsCompliant (ctx); } public Parameter this [int pos] { get { return (Parameter) parameters [pos]; } } public Expression CreateExpressionTree (BlockContext ec, Location loc) { var initializers = new ArrayInitializer (Count, loc); foreach (Parameter p in FixedParameters) { // // Each parameter expression is stored to local variable // to save some memory when referenced later. // StatementExpression se = new StatementExpression (p.CreateExpressionTreeVariable (ec), Location.Null); if (se.Resolve (ec)) { ec.CurrentBlock.AddScopeStatement (new TemporaryVariableReference.Declarator (p.ExpressionTreeVariableReference ())); ec.CurrentBlock.AddScopeStatement (se); } initializers.Add (p.ExpressionTreeVariableReference ()); } return new ArrayCreation ( Parameter.ResolveParameterExpressionType (ec, loc), initializers, loc); } public ParametersCompiled Clone () { ParametersCompiled p = (ParametersCompiled) MemberwiseClone (); p.parameters = new IParameterData [parameters.Length]; for (int i = 0; i < Count; ++i) p.parameters [i] = this [i].Clone (); return p; } } // // Default parameter value expression. We need this wrapper to handle // default parameter values of folded constants (e.g. indexer parameters). 
// The expression is resolved only once but applied to two methods which // both share reference to this expression and we ensure that resolving // this expression always returns same instance // public class DefaultParameterValueExpression : CompositeExpression { public DefaultParameterValueExpression (Expression expr) : base (expr) { } public void Resolve (ResolveContext rc, Parameter p) { var expr = Resolve (rc); if (expr == null) { this.expr = ErrorExpression.Instance; return; } expr = Child; if (!(expr is Constant || expr is DefaultValueExpression || (expr is New && ((New) expr).IsDefaultStruct))) { rc.Report.Error (1736, Location, "The expression being assigned to optional parameter `{0}' must be a constant or default value", p.Name); return; } var parameter_type = p.Type; if (type == parameter_type) return; var res = Convert.ImplicitConversionStandard (rc, expr, parameter_type, Location); if (res != null) { if (parameter_type.IsNullableType && res is Nullable.Wrap) { Nullable.Wrap wrap = (Nullable.Wrap) res; res = wrap.Child; if (!(res is Constant)) { rc.Report.Error (1770, Location, "The expression being assigned to nullable optional parameter `{0}' must be default value", p.Name); return; } } if (!expr.IsNull && TypeSpec.IsReferenceType (parameter_type) && parameter_type.BuiltinType != BuiltinTypeSpec.Type.String) { rc.Report.Error (1763, Location, "Optional parameter `{0}' of type `{1}' can only be initialized with `null'", p.Name, parameter_type.GetSignatureForError ()); return; } this.expr = res; return; } rc.Report.Error (1750, Location, "Optional parameter expression of type `{0}' cannot be converted to parameter type `{1}'", type.GetSignatureForError (), parameter_type.GetSignatureForError ()); this.expr = ErrorExpression.Instance; } public override object Accept (StructuralVisitor visitor) { return visitor.Visit (this); } } }
Oct 24, 2012 06:00 AM

Will you check their Halloween candy?

By M.B. Sanok

Once upon a time, a Halloween rumor snaked its way around the country, scaring each suburban parent in its wake. When rumors circulated that poisoned candy was being distributed to trick-or-treaters, the Halloween candy-checking rituals began. Further rumors claimed that candy spiked with pins and razor blades was also handed out. Although the rumors proved mostly untrue, they originated from a true story about a man who murdered his own child with poisoned Halloween candy in order to cash in on a life insurance policy. Either way, they contributed to the annual search-and-abolish mission parents undertake after each Halloween.

Ever since the poisoned-candy scares of the 1970s and 1980s, many parents vigilantly rummage through their kids’ treat bags. Now it’s a given to examine every Milky Way with the stealth of factory inspector number 56. Or is it? Does anyone still check their children’s Halloween candy anymore? Is it that important to examine every piece from an overflowing pillowcase of treats?

My own no-no list

Instead of rummaging through the candy just to steal a Twix bar, I feverishly paw through any candy the kids receive, from chalky, almost-chocolate Easter bunnies to gorgeously decorated yet bland lollipops to their hard-won Halloween candy. I use my own mental sort list to detect faulty candy and other sundry collected items like money and tracts warning about the evils of Halloween.

First, I check my son J’s bag for any candy with nuts because of his allergy. Luckily, he’s able to sit next to a Reese’s Peanut Butter Cup, but he just can’t and won’t eat them. When he was originally tested, not only did his eyes bulge and his face blow up, but he puckered up like he had tasted a lemon and refused to eat the rest of the required testing teaspoonful.

Since I grew up during the “Poison Halloween Candy” years, I seek out pins or razor blades in the candy. 
Although I realize it was mostly proven false, the stories will not leave my subconscious despite Snopes.com reports and busted urban-legend myths. Besides, never mind the kids — what if I sneak some candy and break a tooth while chomping down on a razor blade?!?

And I still sort through the candy to make sure they’re not chowing down on anything manufactured in China. The scary reports about harmful chemicals in the foreign-made candy and the unsanitary factories where it was produced squelched my trust and increased my paranoia. Although I try not to waste food or condemn all candy from Asian manufacturers, I just couldn’t bear to save two boxes of Indonesian gummy fruit snacks.

Policing or filching?

Friends I polled sometimes check and sometimes don’t. The checkers mentioned they wouldn’t allow the kids to eat the candy until they had checked it out, unless the kids brought it from home or received it from a trusted neighbor or friend. Some expressed concern about the bowls left out on stoops, which could have been tampered with by anyone. My friend T mentioned that “obviously anything open or glaringly odd” gets tossed. Mostly, though, the moms rummaged through the candy to get a shot at their favorites and the best candy! Who cared about any more danger than a few extra calories?!?

Maybe we should search for another sketchy item in their treat bags: marijuana-shaped candy, which recently caused an uproar among Buffalo, NY, parents. Produced by a novelty supply company, the lollipops and ring-pops are sour-apple flavored and shaped like marijuana leaves. The outer packaging depicts a “joint-smoking, peace-sign waving user” with the word “legalize” printed across the front. Most parents felt that the product promoted marijuana use. What a great backlash for the War on Drugs, huh?

Does anyone remember candy cigarettes from childhood? 
I recall visiting the local candy store after church on Sundays, and my conservative parents purchasing candy cigarettes for us. There were two different kinds: either wax-paper wrapped, colored gum cigarettes that sprayed powdered sugar when you blew into them; or white candy with a dab of reddish-orange to look like a lighted butt. We loved them! And neither of us smokes.

Just spooky stories

Amid the ghosts, ghouls and goblins, Halloween candy provides spooky stories of candy stuffed with pins and razor blades; laced with poison, chemicals and allergens; and filled with crazy amounts of sugar. While checking the candy can't hurt, please remember that many of the candy scare tales are just that — tales. Enjoy the holiday, and memorize these lines: "Trick or Treat, smell my feet, give me something good to eat…"

M.B. Sanok is a South Jersey mom and a blogger for JerseyMomsBlog, where this post originated.
Volumetric assessment of glioma removal by intraoperative high-field magnetic resonance imaging. To investigate the contribution of high-field intraoperative magnetic resonance imaging (iMRI) for further reduction of tumor volume in glioma surgery. From April 2002 to June 2003, 182 neurosurgical procedures were performed with a 1.5-T magnetic resonance system. Among patients who underwent these procedures, 47 patients with gliomas (14 with World Health Organization Grade I or II glioma, and 33 with World Health Organization Grade III or IV glioma) who underwent craniotomy were investigated retrospectively. Completeness of tumor resection and volumetric analysis were assessed with intraoperative imaging data. Surgical procedures were influenced by iMRI in 36.2% of operations, and surgery was continued to remove residual tumor. This additional resection significantly reduced the percentage of final tumor volume compared with the first iMRI scan (6.9% +/- 10.3% versus 21.4% +/- 13.8%; P < 0.001). Percentages of final tumor volume also were significantly reduced in both low-grade (10.3% +/- 11.5% versus 25.8% +/- 16.3%; P < 0.05) and high-grade gliomas (5.4% +/- 9.9% versus 19.5% +/- 13.0%; P < 0.001). Complete resection was ultimately achieved in 36.2% of all patients (low-grade, 57.1%; high-grade, 27.3%). Among the 17 patients in whom complete tumor resection was achieved, 7 complete resections (41.2%) were attributable to further tumor removal after iMRI. We did not encounter unexpected events attributable to high-field iMRI, and standard neurosurgical equipment could be used safely. Despite extended resections, the introduction of high-field iMRI in conjunction with functional navigation did not translate into an increased risk of postoperative deficits. The use of high-field iMRI increased radicality in glioma surgery without additional morbidity.
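The volumetric endpoint in this abstract is residual tumor volume expressed as a percentage of the initial volume. A minimal sketch of that calculation; the volumes below are made-up illustrative numbers, not study data:

```python
def residual_percent(initial_volume_ml: float, residual_volume_ml: float) -> float:
    """Residual tumor volume as a percentage of the initial volume."""
    if initial_volume_ml <= 0:
        raise ValueError("initial volume must be positive")
    return 100.0 * residual_volume_ml / initial_volume_ml

# Illustrative only: a 40 mL tumor reduced to 8.6 mL at the first iMRI scan,
# then to 2.8 mL after additional resection guided by that scan.
first_scan = residual_percent(40.0, 8.6)
final_scan = residual_percent(40.0, 2.8)
print(f"first iMRI: {first_scan:.1f}%, final: {final_scan:.1f}%")
```

The study's group means (21.4% at the first scan versus 6.9% finally) are averages of exactly this per-patient quantity.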
VALENCIA, Calif.--(BUSINESS WIRE)--Nov 12, 2012--SetPoint Medical, a biomedical technology company developing neuromodulation therapies for inflammatory diseases, presented positive results from a first-in-human study using a neuromodulation device to treat rheumatoid arthritis (RA) yesterday at the American College of Rheumatology annual meeting in Washington, D.C. “In contrast to immunosuppressive drugs, the neuromodulation therapy in this study used an implantable pulse generator to stimulate the vagus nerve, activating the body’s natural Inflammatory Reflex to produce a systemic anti-inflammatory effect,” said Dr. Kevin Tracey, President of Feinstein Institute for Medical Research, discoverer of the inflammatory reflex, and co-author of the ACR presentation. “The results of this study show promising clinical efficacy, and confirm a decade of foundational scientific research done by our laboratory and others.” “This study validates neuromodulation as a breakthrough approach to treating RA and other autoimmune and inflammatory diseases,” said Anthony Arnold, chief executive officer of SetPoint Medical. “We look forward to developing safe and effective alternative therapies for a range of inflammatory diseases such as Crohn’s disease, ulcerative colitis, psoriasis, and for the inflammation that worsens diabetes and heart disease. This approach could offer patients 10 years of treatment for the cost of about 18 months of biologic therapy and would be an exciting new alternative for physicians to treat their patients.” In this open label pilot study, eight patients with active rheumatoid arthritis despite treatment with the RA agent methotrexate, who would otherwise have been candidates for a TNF inhibitor, instead received the neuromodulation implant on the vagus nerve in their neck. The treatment was well tolerated, with one patient reporting hoarseness after implantation, a known side effect with this type of device. 
Improvement in RA was assessed at the six-week primary endpoint using standard measures of efficacy including the Disease Activity Score (DAS), and the American College of Rheumatology (ACR) 20 response rate. At the primary end point, two of the eight patients in the study achieved DAS remission, and six of the eight had a positive ACR 20 response, results similar to those typically achieved in larger studies with drugs currently used to treat RA. SetPoint is developing a novel platform to treat a variety of inflammation-mediated autoimmune diseases. The company is planning larger clinical studies in several diseases scheduled to begin in 2013.

About SetPoint Medical

SetPoint Medical is a privately held biomedical technology company dedicated to treating patients with debilitating inflammatory diseases using proprietary implantable neuromodulation devices. SetPoint is developing safe and effective neuromodulation therapies for patients with inflammatory autoimmune diseases, such as Crohn's Disease and rheumatoid arthritis. SetPoint's novel platform consists of an implantable miniature neuromodulation device, wireless charger and iPad prescription pad application. The system uses vagus nerve stimulation to activate the body's natural Inflammatory Reflex, which produces a potent systemic anti-inflammatory effect. The company has just completed a first-in-human open-label proof-of-concept trial in rheumatoid arthritis. The results confirm extensive preclinical studies, and show clinical efficacy comparable to leading immunosuppressive drugs. SetPoint's approach is intended to offer patients and providers a better alternative for the treatment of RA and other chronic inflammatory diseases with less risk and cost than drug therapy. SetPoint is headquartered in Valencia, Ca., and investors in the company include Morgenthaler Ventures, Foundation Medicine and Topspin Partners. For more information, visit www.setpointmedical.com.
James Alex Fields Jr. has first hearing; tensions still high in Charlottesville

Two days have passed since a white supremacist gathering here turned deadly, but tensions remained high Monday as the man charged with killing a woman during the rally made his first court appearance. Outside the courthouse, a couple of "Unite the Right" activists screamed at journalists, saying, "You are all to blame for this." They were soon confronted by a couple of counterprotesters, who yelled back. A shouting match ensued for several minutes until police broke up the confrontation. No one was injured. It was a continuation of the overt hostility that raged over the weekend between white nationalists and anti-fascist protesters. Meanwhile, inside the courthouse, a judge informed James Alex Fields Jr., 20, of the charges against him in the death of Heather Heyer, the 32-year-old paralegal who was killed on Saturday when a car rammed a group of people demonstrating against the "Unite the Right" rally. Fields is accused of being the driver. Nineteen others were injured. Nine patients remained hospitalized in good condition on Monday, hospital officials said. Fields is charged with second-degree murder, three counts of malicious wounding and failure to stop in an accident that resulted in death. Wearing a black and white jumpsuit, the suspect appeared via video link from jail in front of Judge Robert Downer. Fields, who lives in Maumee, Ohio, was recently making $650 every two weeks working for Securitas, a security company, and noted that he couldn't afford a lawyer. The company said Monday that it has terminated Fields' employment. The judge normally has someone from the public defender's office available if the defendant has no lawyer. But Downer said someone in that office was related to an individual injured over the weekend, and he would have to go outside the office to make a selection. The judge named a local defense attorney, Charles Weber, to represent Fields.
No bond was set, and Fields remained in custody. The clashes in Charlottesville sparked political fallout over the weekend, with critics blasting President Donald Trump for initially failing to single out white supremacists in his criticism of the violence. On Sunday, Vice President Mike Pence called out "dangerous fringe groups" and on Monday, Trump named the groups in question and denounced them. "Racism is evil," the President said. "And those who cause violence in its name are criminals and thugs, including the KKK, neo-Nazis, white supremacists and other hate groups that are repugnant to everything we hold dear as Americans." Speaking from the White House, Trump expressed his condolences to the families of Heyer and the two troopers, and said they "embody the goodness and decency of our nation." "To anyone who acted criminally in this weekend's racist violence, you will be held fully accountable. Justice will be delivered. As I said on Saturday, we condemn in the strongest possible terms this egregious display of hatred, bigotry and violence. It has no place in America." Heyer's mother, Susan Bro, thanked the President in a statement. "Thank you, President Trump, for those words of comfort and for denouncing those who promote violence and hatred," she said. On Sunday, people around the nation marched in support of the anti-racism protesters in Charlottesville, with more than 130 rallies from California to Maine.

Sign of remembrance

Confederate monuments on public property became controversial in Southern cities after a white supremacist massacred nine black churchgoers in Charleston, South Carolina, in 2015. The discord in Charlottesville stems from a City Council vote to rechristen two parks named for Confederate generals and to remove a bronze statue of one of those generals, Robert E. Lee, from what was known as Lee Park. A couple of months ago, the park was renamed Emancipation Park.
The Lee statue and the park were at the center of violent protests this past weekend, with white nationalists opposing the removal of the statue. Nancy Carpenter, a Charlottesville resident, said her neighbors and friends discussed naming the location for Heyer. "It was therapeutic to do something to help with the feelings of the weekend," she said. Carpenter said such a move could generate change and help residents move forward and tackle the challenge of dealing with the animosities and problems that very much remain in town -- even though most of the out-of-towners who descended on the community cleared out. Carpenter took thick poster board, attached it to a stake and hammered it into the ground. The sign says "Heyer Mem. Park."

What do we know about Fields?

Fields was a man who possessed "outlandish, very radical beliefs," and a "fondness" for Adolf Hitler, according to Derek Weimer, who teaches social studies at Randall K. Cooper High School in Union, Kentucky. "It was quite clear he had some really extreme views and maybe a little bit of anger behind them," Weimer told CNN. "Feeling, what's the word I'm looking for, oppressed or persecuted. He really bought into this white supremacist thing. He was very big into Nazism. He really had a fondness for Adolf Hitler." Principal Mike Wilson said he remembered Fields as a quiet and reserved student who graduated in 2015. In August of that year, Fields was inducted into the Army but he left active duty in December 2015. A spokeswoman for the Army said he failed to meet training standards. "As a result he was never awarded a military occupational skill nor was he assigned to a unit outside of basic training," Lt. Col. Jennifer Johnson said. Fields' mother, Samantha Bloom, told the Toledo Blade in Ohio, where he lives, that she didn't know her son was going to Virginia for a white nationalist rally. She thought it had something to do with Trump.
She told the Blade she didn't discuss politics with her son. She was surprised her son attended an event with white supremacists. "He had an African-American friend," she told the Blade. Before Trump's comments on Monday, the creator of one of the most prolific neo-Nazi websites praised Trump for not specifically blaming neo-Nazis and white supremacists, saying "he loves us." Andrew Anglin of the Daily Stormer wrote that Trump's comments were "good." "He didn't attack us. He just said the nation should come together. Nothing specific against us. He said that we need to study why people are so angry, and implied that there was hate on ... both sides!" Anglin wrote. "There was virtually no countersignaling of us at all. He loves us all." Anglin did not respond to CNN's request for comment.

Another alt-right rally planned for Charlottesville?

Three other men were arrested Saturday. One faces a charge of carrying a concealed handgun and another is charged with disorderly conduct. The third man was arrested on suspicion of assault and battery. The Justice Department and the Federal Bureau of Investigation have launched a civil rights investigation into the deadly crash, to be led by US Attorney Rick Mountcastle. Investigators will be looking into Fields' alleged motives, and whether there's enough evidence for a domestic terrorism case. On Monday, Charlottesville Police Chief Al Thomas Jr. said the "Unite the Right" protestors had agreed to enter Emancipation Park through the rear. But "they did not follow" the agreed-upon safety plan and entered the park at different locations, forcing police to alter their plans, Thomas said. Other groups started gathering in the park and along the street and the crowds became violent, Thomas said. "We did make attempts to keep the two sides separate. However, we can't control which side someone enters the park," Thomas said.
"We had agreements and worked out a security plan to bring the groups in in separate entrances. They decided to change the plan and entered the park in different directions." Police in Charlottesville may have their hands full again in the future. Richard Spencer, the white supremacist who helped found the so-called alt-right movement, announced on Monday he is planning to hold another rally in Charlottesville. Spencer was scheduled to speak at a September 11 "white lives matter" rally at Texas A&M University in College Station, Texas -- but the university on Monday canceled the rally.
Introduction
============

Atherosclerosis is triggered by lipid accumulation in subendothelial arterial cells.[@b1-vhrm-12-379] Intracellular lipid accumulation is caused by low-density lipoprotein (LDL) circulating in human blood. However, only modified lipoprotein, not native LDL, causes intracellular lipid accumulation.[@b2-vhrm-12-379] Although oxidation remains the most studied form of atherogenic modification, other modifications of LDL can also be detected in the bloodstream. A study of the total LDL from the blood of atherosclerosis patients revealed the presence of desialylated LDL particles with atherogenic properties.[@b2-vhrm-12-379] Glycosphingolipids in LDL usually have a terminal sialic acid residue. If this terminal sialic acid is removed, the modified glycosphingolipid is left with galactose as the terminal saccharide residue. We took advantage of this fact to isolate the subfraction of desialylated LDL from the total LDL preparation using *Ricinus communis* agglutinin (RCA120), which possesses a high affinity for terminal galactose.[@b2-vhrm-12-379] The total LDL preparation was applied to a column with RCA120 immobilized on CNBr-activated agarose. Normally sialylated LDL passed through the column without binding to the sorbent. Desialylated LDL bound to the lectin and was then eluted with 5--50 mM galactose. This method allowed us to isolate subfractions of both sialylated and desialylated LDL from the total LDL preparation isolated from the blood of patients. We termed the latter subfraction circulating modified LDL (cmLDL). In this study, we focus on chemical analysis of LDL particles, including the carbohydrate composition of apoB- and lipid-bound glycoconjugates of native and modified LDL obtained from healthy donors and atherosclerotic patients.

Materials and methods
=====================

Study subjects
--------------

This study was conducted in accordance with the Declaration of Helsinki as revised in 1983.
It was approved by the local ethics committees of the Institute of General Pathology and Pathophysiology (Moscow) and the Institute for Atherosclerosis Research (Skolkovo Innovation Center, Moscow, Russia). All participants provided their written informed consent prior to their inclusion in the study. Study subjects included men and women aged 30--60 years with angiographically proven coronary atherosclerosis, healthy subjects aged 25--55 years with no signs of ischemic heart disease according to the Rose questionnaire,[@b3-vhrm-12-379],[@b4-vhrm-12-379] and individuals with asymptomatic carotid atherosclerosis. The characteristics of patients are presented in [Table 1](#t1-vhrm-12-379){ref-type="table"}. The inclusion and exclusion criteria according to the study protocol were as follows. Men and women aged 22--55 years without clinical manifestations of cardiovascular atherosclerosis-related diseases, in whom high-resolution B-mode ultrasonography had revealed no signs of subclinical atherosclerosis (normal values of carotid intima-media thickness and the absence of atherosclerotic plaques in carotids), were eligible for inclusion in the study as "healthy subjects". Men and women aged 22--55 years without clinical manifestations of cardiovascular atherosclerosis-related diseases but with ultrasonographic signs of subclinical atherosclerosis (abnormally high values of carotid intima-media thickness and the presence of atherosclerotic plaques in carotids with \>10% stenosis) were eligible for inclusion in the study as "individuals with asymptomatic carotid atherosclerosis". Finally, patients with coronary heart disease aged 30--60 years with coronary atherosclerosis proven by coronary angiography were eligible for inclusion in the study as "patients with coronary atherosclerosis", regardless of the presence and extent of carotid atherosclerosis.
For all study participants, the presence of arterial hypertension (systolic blood pressure \>140 mmHg, diastolic blood pressure \>90 mmHg) or type 2 diabetes mellitus, or intake of lipid-lowering agents within 2 months prior to inclusion, was evaluated as exclusion criteria.

Preparation of blood plasma
---------------------------

Venous blood was collected from healthy subjects, patients with angiographically documented coronary atherosclerosis, and patients with subclinical carotid atherosclerosis into tubes containing EDTA (1 mg/mL) after overnight fasting. Plasma was separated by centrifugation (20 minutes at 900 *g*).

Total LDL preparation
---------------------

Total LDL fraction was isolated from blood plasma as described previously.[@b5-vhrm-12-379] For LDL isolation, plasma density was adjusted to 1.390 g/mL with solid NaBr, and 4 mL of plasma was transferred into polycarbonate centrifuge bottles (16×76 mm, Beckman Instruments, Inc., Palo Alto, CA, USA). Six milliliters of NaBr solution (*d*=1.019 g/mL) was layered over the plasma and centrifuged for 2 hours at 116763 *g* (42000 rpm) in a Type 50Ti rotor (Beckman Instruments). The lower layer of LDL (1.5 cm over the plasma level) was aspirated, and the density was adjusted to 1.470 g/mL with solid NaBr. The samples were recentrifuged under the same conditions and dialyzed in the dark at 4°C overnight against 2,000 volumes of phosphate-buffered saline, pH 7.4, containing 1 mM EDTA. LDL preparations were sterilized by filtration (pore size, 0.45 µm) and stored at 4°C for 1--5 days prior to carbohydrate measurement. The LDL preparations obtained by this technique were free from other plasma proteins and were identical in particle size and lipid composition to LDL isolated by the classical method.
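The spin conditions above quote both a relative centrifugal force and a rotor speed (116763 *g* at 42000 rpm). These are linked by the standard approximation RCF = 1.118×10⁻⁵ · r · N², with r the rotational radius in cm and N in rpm. A sketch of the conversion; the ~5.92 cm radius below is an assumed average rotational radius chosen to reproduce the quoted pairing, not a value from the text:

```python
def rcf_from_rpm(rpm: float, radius_cm: float) -> float:
    """Relative centrifugal force (x g) from rotor speed and radius.

    Standard approximation: RCF = 1.118e-5 * r[cm] * rpm**2.
    """
    return 1.118e-5 * radius_cm * rpm ** 2

# Assumed ~5.92 cm average radius reproduces 42000 rpm <-> ~116763 x g
# to within rounding.
print(round(rcf_from_rpm(42000, 5.92)))
```

The same formula, solved for rpm, is how a protocol written for one rotor is transferred to another with a different radius.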
Lectin chromatography of modified LDL
-------------------------------------

Apolipoprotein B (apoB), the only protein found in LDL, contains two types of oligosaccharide conjugates, oligomannoside and terminally sialylated.[@b6-vhrm-12-379],[@b7-vhrm-12-379] Glycosphingolipids in LDL also have a terminal sialic acid residue.[@b8-vhrm-12-379] Desialylation will result in the exposure of the next residue of the carbohydrate chain, which is galactose. Correspondingly, we hypothesized that desialylated LDL will interact with galactose-specific lectins, such as *Ricinus communis* agglutinin (RCA120).[@b9-vhrm-12-379] To prepare the affinity columns, RCA120 was immobilized on CNBr-activated agarose as described earlier.[@b10-vhrm-12-379] The columns were equilibrated with 10--15 mL of isotonic phosphate buffer, pH 7.2, and 0.5--5 mL of LDL sample (containing 0.2--10 mg of protein) was loaded on the column. The major part of unbound LDL was washed from the column in the first ten volumes of phosphate buffer. Desialylated LDL was eluted from the column with galactose solutions in phosphate buffer (5 mM, 10 mM, 20 mM, 50 mM, and 100 mM). Elution with 50 mM galactose yielded virtually complete recovery of LDL in the first five volumes, and this concentration was used most often.[@b11-vhrm-12-379] Bound and unbound LDL fractions were brought to a density of 1.070 g/mL with NaBr, concentrated by ultracentrifugation, and dialyzed against 6,000 volumes of isotonic phosphate buffer as described earlier.

Carbohydrate and sialic acid analysis
-------------------------------------

Fluorescently labeled N-glycans were separated by hydrophilic interaction liquid chromatography on a Waters Acquity ultraperformance liquid chromatography instrument (Waters, Milford, MA, USA) consisting of a quaternary solvent manager, sample manager, and an FLR fluorescence detector set with excitation and emission wavelengths of 250 nm and 428 nm, respectively.
The instrument was under the control of Empower 3 software, build 3471 (Waters). Labeled plasma N-glycans were separated on a Waters Ethylene Bridged Hybrid (BEH) Glycan chromatography column, 150×2.1 mm internal diameter, 1.7 µm BEH particles, with 100 mM ammonium formate, pH 4.4, as solvent A and acetonitrile as solvent B. The separation method used a linear gradient of 70%--53% acetonitrile (v/v) at a flow rate of 0.56 mL/min in a 23-minute analytical run. Samples were maintained at 10°C before injection, and the separation temperature was 25°C. Sialic acid content was measured according to Warren.[@b12-vhrm-12-379] One milliliter of 20% trichloroacetic acid (TCA) was added to 50 µL LDL samples containing 30--150 µg of protein, and the samples were incubated for 20 minutes at 4°C, followed by centrifugation for 15 minutes at 4,500 rpm. The supernatant was discarded, and 250 µL of 5% TCA was added, followed by hydrolysis for 7 minutes at 100°C. In order to eliminate the possible influence of oxidized lipids on the reaction, oxidized lipids were extracted with 1 mL of chloroform for 30 minutes at room temperature, after which the chloroform phase was discarded. Then, 250 µL of 0.2% resorcinol dissolved in 10 N HCl, containing 25 µM copper sulfate, was added to the residual water phase. The samples were incubated for 15 minutes at 100°C, followed by extraction of the colored product with 600 µL of a butyl acetate:isobutanol (85:15 v/v) mixture. The mixture was vigorously vortexed and, after phase separation, the organic phase was read in a Yanaco spectrophotometer (Houston Instruments, Houston, TX, USA) at 630 nm in a 1 cm quartz cuvette; 1 mg/mL *N*-acetylneuraminic acid was used as a standard.

Statistical analysis
--------------------

The results were analyzed using one-way analysis of variance. Statistical analysis was performed using IBM SPSS 21.0 software (IBM Corporation, Armonk, NY, USA). The data are presented as mean and standard deviation.
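The UPLC method described above is a single 23-minute linear gradient from 70% to 53% acetonitrile. A small helper to compute the programmed acetonitrile fraction at any time point, assuming (as the text describes) that the gradient spans the whole analytical run:

```python
def acetonitrile_percent(t_min: float, start: float = 70.0,
                         end: float = 53.0, run_min: float = 23.0) -> float:
    """Programmed acetonitrile fraction (%, v/v) at time t of a linear gradient."""
    if not 0.0 <= t_min <= run_min:
        raise ValueError("time outside the analytical run")
    return start + (end - start) * t_min / run_min

print(acetonitrile_percent(0))                # 70.0 at injection
print(acetonitrile_percent(23))               # 53.0 at the end of the run
print(round(acetonitrile_percent(11.5), 1))   # 61.5 at the midpoint
```

The same linear interpolation is what the pump controller executes between the two programmed composition points.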
The significance of differences was defined at the 0.05 level of confidence.

Results
=======

The comparison of baseline data between healthy subjects, individuals with asymptomatic carotid atherosclerosis, and patients with coronary atherosclerosis revealed no significant differences between study groups in body mass index values, systolic and diastolic blood pressure, smoking, family history of coronary heart disease/hypertension/type 2 diabetes, total cholesterol, triglycerides, and high-density lipoprotein and LDL cholesterol ([Table 1](#t1-vhrm-12-379){ref-type="table"}). The only exception was age, which was significantly higher in patients with coronary atherosclerosis as compared to either healthy subjects or individuals with asymptomatic carotid atherosclerosis. Protein-conjugated monosaccharide content was analyzed in LDL samples obtained from ten healthy subjects, ten patients with coronary atherosclerosis, and ten individuals with asymptomatic carotid atherosclerosis. In samples from healthy subjects, protein and apoB glycoconjugate composition included *N*-acetylglucosamine, galactose, mannose, and sialic acid in a molar ratio of 2:1:2.5:1 ([Tables 2](#t2-vhrm-12-379){ref-type="table"} and [3](#t3-vhrm-12-379){ref-type="table"}). Protein and apoB fractions of LDL from atherosclerotic patients contained similar amounts of glucosamine, galactose, and mannose, but a lower level of sialic acid as compared to healthy subjects. The carbohydrate composition of the lipid fraction of LDL was characterized by the absence of mannose and the presence of *N*-acetylgalactosamine and glucose, a lower amount of *N*-acetylglucosamine, and an increased amount of galactose in comparison to apoB glycoconjugates ([Table 4](#t4-vhrm-12-379){ref-type="table"}). The lipid fraction also contained less sialic acid. Some samples also contained traces of fucose (not presented).
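The 2:1:2.5:1 molar ratio quoted for healthy-subject apoB glycoconjugates can be checked directly from the mean carbohydrate contents reported in Table 2 by normalizing each monosaccharide to galactose:

```python
# Mean protein-bound carbohydrate contents, nmol/mg protein (healthy subjects,
# Table 2 of this study).
content = {
    "GlcNAc": 59.3,
    "galactose": 28.4,
    "mannose": 74.0,
    "sialic acid": 29.5,
}

# Normalize to galactose to recover the molar ratio quoted in the text.
ratios = {k: round(v / content["galactose"], 1) for k, v in content.items()}
print(ratios)  # GlcNAc ~2.1 : galactose 1.0 : mannose ~2.6 : sialic acid ~1.0
```

The computed 2.1:1:2.6:1.0 agrees with the rounded 2:1:2.5:1 stated in the Results.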
Lipid-bound glycoconjugates of total LDL from patients with coronary and carotid atherosclerosis contained less neutral monosaccharides than total LDL from healthy subjects. Patient-derived LDL also contained significantly less sialic acid ([Table 4](#t4-vhrm-12-379){ref-type="table"}). In samples from healthy subjects, the total level of lipid-bound monosaccharides in cmLDL was 1.5--2-fold lower than in native LDL. In samples from atherosclerotic patients, both the neutral carbohydrate and sialic acid contents of cmLDL were decreased 1.5--2-fold in comparison to native LDL, and the total level of carbohydrates was lower than that measured in cmLDL of healthy subjects ([Table 5](#t5-vhrm-12-379){ref-type="table"}). The sialic acid content of cmLDL from healthy donors was lower than that of native LDL ([Table 5](#t5-vhrm-12-379){ref-type="table"}).

Discussion
==========

In this work, we have compared the carbohydrate composition of LDL extracted from the blood of healthy subjects, patients with asymptomatic carotid atherosclerosis, and patients with coronary atherosclerosis. LDL from atherosclerotic patients contained less sialic acid, confirming our earlier observations.[@b2-vhrm-12-379] We have developed a method based on affinity chromatography to extract desialylated LDL from the total LDL fraction.[@b2-vhrm-12-379] This method was used in the current work to compare the carbohydrate composition of desialylated and normally sialylated LDL. A similar study has been performed earlier.[@b13-vhrm-12-379] However, in the present study, we used a different method to measure the carbohydrate content and included a group of subjects with asymptomatic carotid atherosclerosis. Moreover, the study groups were larger. Importantly, the obtained results were in accordance with those reported previously. The carbohydrate composition of protein glycoconjugates from total LDL preparations has been studied by a number of authors.
Total level of carbohydrates varied from 30 µg/mg to 130 µg/mg of protein.[@b14-vhrm-12-379]--[@b20-vhrm-12-379] In our study, total monosaccharide content was 35--45 µg/mg of protein. The ratio of *N*-acetylglucosamine, galactose, mannose, and sialic acid was 2:1:2.5:1, which was in accordance with previously published data.[@b14-vhrm-12-379]--[@b20-vhrm-12-379] There are two types of protein glycoconjugates in LDL: biantennary sialylated (acidic) chains and high-mannose chains.[@b6-vhrm-12-379],[@b17-vhrm-12-379]--[@b21-vhrm-12-379] As suggested by Taniguchi et al,[@b6-vhrm-12-379] the apolipoprotein B molecule contains five to six high-mannose and eight to ten biantennary sialylated conjugates. It can therefore be proposed that the carbohydrate composition of protein glycoconjugates should be approximately the following: *N*-acetylglucosamine, 42--52 mol/mol apoB; galactose, 14--18 mol/mol apoB; mannose, 60--73 mol/mol apoB; and sialic acid, 12--15 mol/mol apoB. According to our data, the content of galactose and sialic acid in apoB of healthy subjects was 14 mol/mol and 15 mol/mol apoB, respectively. This is in accordance with previously published observations.[@b22-vhrm-12-379] On the other hand, the contents of *N*-acetylglucosamine and mannose were 30 mol/mol and 38 mol/mol apoB, respectively, which was 1.5-fold lower than that reported by the cited authors. Other authors have also reported lower contents of *N*-acetylglucosamine and mannose in comparison with the expected levels.[@b16-vhrm-12-379],[@b17-vhrm-12-379] Taniguchi et al[@b22-vhrm-12-379] based their calculations on the expectation that 13--16 of the asparagine residues in apoB are glycosylated. The available data support the suggestion that not all of these asparagine residues are glycosylated. Moreover, LDL can contain a lower amount of high-mannose chains. This hypothesis, however, remains to be proven experimentally.
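The conversion from nmol of monosaccharide per mg of protein to mol per mol of apoB only requires the molar mass of apoB-100. Assuming ~512 kDa (a commonly cited value, not stated in this text), the Table 2 means reproduce the mol/mol figures quoted in this paragraph to within rounding:

```python
APOB_KDA = 512.0  # assumed molar mass of apoB-100, kDa (~512,000 g/mol)

def mol_per_mol_apob(nmol_per_mg_protein: float,
                     apob_kda: float = APOB_KDA) -> float:
    """Convert nmol monosaccharide / mg protein to mol / mol apoB.

    1 mg protein contains 1e-3 / (apob_kda * 1e3) mol apoB, so the ratio
    simplifies to (nmol/mg) * apob_kda / 1000.
    """
    return nmol_per_mg_protein * apob_kda / 1000.0

# Healthy-subject means from Table 2, nmol/mg protein.
for name, nmol in [("GlcNAc", 59.3), ("galactose", 28.4),
                   ("mannose", 74.0), ("sialic acid", 29.5)]:
    print(f"{name}: {mol_per_mol_apob(nmol):.1f} mol/mol apoB")
```

This yields roughly 30, 14.5, 38, and 15 mol/mol apoB, matching the 30/14/38/15 values discussed above under the assumed apoB molar mass.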
Lipid-bound glycoconjugates of human LDL include *N*-acetylgalactosamine and glucosamine, galactose, glucose, and sialic acid. The amounts of each individual monosaccharide in the lipid fraction of the total LDL preparation were decreased in samples from atherosclerosis patients in comparison with those obtained from healthy subjects. The level of lipid-conjugated neutral carbohydrates was 1.5--2-fold lower in cmLDL in comparison to native LDL. The difference in sialic acid content was even more pronounced. Earlier studies reported the levels of sialic acid and total neutral carbohydrates in LDL from subjects with different LDL profiles.[@b23-vhrm-12-379] It was demonstrated that small dense LDL contained less sialic acid and neutral carbohydrates. In our work, we separated total LDL into fractions with different sialic acid contents and demonstrated that desialylated LDL, which also had decreased amounts of other carbohydrates, was characterized by a smaller particle size and increased density in comparison to sialylated lipoproteins.[@b2-vhrm-12-379] Taken together, these observations demonstrate a link between lipoprotein particle size and density and carbohydrate content. Although the exact mechanism of the changes in the carbohydrate composition of LDL remains to be determined, it can be speculated that these changes affect lipoprotein metabolism. It has been demonstrated that varying the sialic acid content altered the uptake and degradation of LDL by the smooth muscle cells of the arterial wall.[@b24-vhrm-12-379] Taniguchi et al[@b22-vhrm-12-379] demonstrated that desialylation of LDL by neuraminidase treatment enhanced its metabolism by murine macrophages. In our experiments, treatment of LDL with neuraminidase increased its uptake and lipid accumulation in cultured human aortic wall cells.[@b2-vhrm-12-379] Similar effects were described for cmLDL obtained from patients with atherosclerosis.
Therefore, alterations of LDL carbohydrate composition, including desialylation, influence its ability to induce intracellular lipid accumulation. We have previously shown that intracellular lipid accumulation is a trigger of atherogenesis at the cellular level and is followed by all the major signs of atherosclerosis.[@b2-vhrm-12-379] It can be suggested that deglycosylation is a key atherogenic modification of LDL.

Conclusion
==========

We demonstrated altered composition of glycoconjugates in LDL isolated from the blood of atherosclerotic patients in comparison to healthy subjects. The most considerable change was the decrease in sialic acid content. Circulating modified LDL was characterized by a decreased content of lipid-bound neutral monosaccharides in healthy subjects and by a decreased content of both neutral monosaccharides and sialic acid in atherosclerotic patients. The observed changes should be further evaluated for their clinical relevance as another marker of atherosclerosis.

This study was supported by the Ministry of Education and Sciences, Russia (Project \# RFMEFI61614X0010).

**Disclosure**

The authors report no conflicts of interest in this work.
###### Characteristics of patients

| Variable | Healthy subjects | Patients with subclinical carotid atherosclerosis | Patients with coronary atherosclerosis |
| --- | --- | --- | --- |
| Age, years | 46.6±10.9 | 49.2±9.8 | 58.5±8.6^a^ |
| Body mass index, kg/m^2^ | 26.1±3.7 | 26.7±3.7 | 27.0±3.8 |
| Systolic blood pressure, mmHg | 140±15 | 143±18 | 143±18 |
| Diastolic blood pressure, mmHg | 85±10 | 86±11 | 80±12 |
| Smoking, % | 12 | 15 | 12 |
| Family history of CHD, % | 24 | 20 | 26 |
| Family history of hypertension, % | 36 | 36 | 43 |
| Family history of type 2 diabetes, % | 11 | 10 | 10 |
| Total cholesterol, mg/dL | 183±46 | 186±44 | 181±38 |
| Triglycerides, mg/dL | 130±79 | 135±75 | 126±62 |
| HDL cholesterol, mg/dL | 55±16 | 55±13 | 54±15 |
| LDL cholesterol, mg/dL | 104±40 | 109±39 | 107±39 |

**Notes:** Ten subjects were involved in each group. Data are presented as mean ± SD. ^a^Statistically significant difference from healthy subjects, *P*\<0.05.

**Abbreviations:** CHD, coronary heart disease; HDL, high-density lipoprotein; LDL, low-density lipoprotein; SD, standard deviation.

###### Carbohydrate content of protein conjugates of total LDL of healthy individuals and patients with atherosclerosis

Carbohydrate content, nmol/mg of protein.

| Group | *N*-acetylglucosamine | Galactose | Mannose | Sialic acid |
| --- | --- | --- | --- | --- |
| Healthy individuals, mean | 59.3±4.8 | 28.4±2.7 | 74.0±5.8 | 29.5±2.8 |
| Coronary atherosclerosis, mean | 56.9±3.3 | 27.5±4.1 | 73.3±9.1 | 17.4^a^±3.2 |
|  *P*-value | 0.20 | 0.58 | 0.85 | 4.1×10^−8^ |
| Carotid atherosclerosis, mean | 57.2±4.0 | 27.3±5.5 | 74.2±9.0 | 19.6^a^±5.8 |
|  *P*-value | 0.29 | 0.58 | 0.94 | 3×10^−4^ |

**Notes:** Mean of ten independent measurements ± SD is presented. ^a^Significant difference from the healthy individuals.

**Abbreviations:** LDL, low-density lipoprotein; SD, standard deviation.

###### Carbohydrate conjugates composition of apoB protein from native LDL and circulating modified LDL in healthy individuals and atherosclerotic patients

Carbohydrate content, nmol/mg of protein.

| Group | *N*-acetylglucosamine | Galactose | Mannose | Sialic acid |
| --- | --- | --- | --- | --- |
| Healthy individuals, mean | 57.6±5.8 | 27.4±4.3 | 73.7±5.8 | 27.7±4.5 |
| Coronary atherosclerosis, mean | 59.1±5.8 | 29.2±5.6 | 73.7±9.1 | 16.7^a^±3.8 |
|  *P*-value | 0.56 | 0.42 | 0.99 | 7.8×10^−6^ |
| Carotid atherosclerosis, mean | 59.7±5.7 | 28.3±5.4 | 73±9.7 | 19.8^a^±2.6 |
|  *P*-value | 0.43 | 0.68 | 0.86 | 13.4×10^−5^ |

**Notes:** Mean of ten independent measurements ± SD is presented. ^a^Significant difference from the healthy individuals.

**Abbreviations:** LDL, low-density lipoprotein; SD, standard deviation.

###### Carbohydrate content of glycolipids of total LDL extracted from the blood of healthy individuals and atherosclerosis patients

Carbohydrate content, nmol/mg of protein.

| Group | *N*-acetylgalactosamine | *N*-acetylglucosamine | Galactose | Glucose | Sialic acid |
| --- | --- | --- | --- | --- | --- |
| Healthy donors, mean | 6.5±1.4 | 8.9±1.8 | 44.4±8.4 | 48.7±8.7 | 7.9±1.1 |
| Coronary atherosclerosis, mean | 2.4^a^±0.5 | 4.1^a^±1.7 | 27.3^a^±5.2 | 28.8^a^±6.2 | 3.6^a^±1.5 |
|  *P*-value | 1.7×10^−6^ | 5.7×10^−6^ | 6.5×10^−5^ | 2×10^−5^ | 1.1×10^−6^ |
| Carotid atherosclerosis, mean | 3.5^a^±0.8 | 5.5^a^±1.4 | 36^b^±3.4 | 37.9^a^±5.2 | 4.8^a^±1 |
|  *P*-value | 2.6×10^−5^ | 1.3×10^−4^ | 0.0125 | 4.1×10^−3^ | 4.5×10^−6^ |

**Notes:** Mean of ten independent measurements ± SD is presented. Significant differences from healthy individuals are denoted by ^a^ (*P*\<0.001) or ^b^ (*P*\<0.05).

**Abbreviations:** LDL, low-density lipoprotein; SD, standard deviation.

###### Carbohydrate conjugates composition of glycolipids from native LDL and circulating modified LDL in healthy individuals and atherosclerotic patients

Carbohydrate content, nmol/mg of protein.

| Group | *N*-acetylgalactosamine | *N*-acetylglucosamine | Galactose | Glucose | Sialic acid |
| --- | --- | --- | --- | --- | --- |
| Healthy individuals, native LDL, mean | 6.2±0.6 | 8.8±0.8 | 41±5.9 | 46±6.3 | 7.1±0.7 |
| Healthy individuals, cmLDL, mean | 3.8^a^±0.7 | 6.2^a^±0.9 | 28.3^a^±3.8 | 29.4^a^±6 | 3.3^a^±0.4 |
|  *P*-value | 1.4×10^−7^ | 2.1×10^−6^ | 3.3×10^−5^ | 1.1×10^−5^ | 7.2×10^−10^ |
| Coronary atherosclerosis, native LDL, mean | 3.8±0.8 | 4.7±1.4 | 34.7±3.4 | 40.8±4.3 | 5.3±1.6 |
| Coronary atherosclerosis, cmLDL, mean | 1.4^a^±0.4 | 1.6^a^±0.6 | 17.3^a^±2.8 | 33.2±48.4 | 1.6^a^±0.8 |
|  *P*-value | 1.2×10^−6^ | 4.4×10^−5^ | 4.7×10^−10^ | 0.63 | 2.4×10^−5^ |
| Carotid atherosclerosis, native LDL, mean | 4.2±0.9 | 4.8±1 | 35.1±2.4 | 39.4±6.2 | 5.8±1.4 |
| Carotid atherosclerosis, cmLDL, mean | 2.4^a^±0.3 | 2.5^a^±0.7 | 22.1^a^±4.4 | 20.9^a^±4 | 2.6^a^±1 |
|  *P*-value | 1.1×10^−4^ | 1.6×10^−5^ | 1.1×10^−6^ | 8.5×10^−7^ | 1.9×10^−5^ |

**Notes:** Mean of ten independent measurements ± standard deviation is presented. ^a^Significant difference of cmLDL from native LDL.

**Abbreviations:** cmLDL, circulating modified LDL; LDL, low-density lipoprotein.
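The *P*-values in these tables are consistent with a two-sample Student's t-test computed from the summary statistics (mean, SD, n = 10 per group); the test used is not named in the text, so this is an assumption. A minimal sketch for the sialic-acid comparison in the total-LDL protein table (healthy 29.5 ± 2.8 vs coronary 17.4 ± 3.2):

```python
import math

def pooled_t_statistic(m1, s1, n1, m2, s2, n2):
    """Two-sample Student's t statistic with pooled variance,
    computed from group means, SDs, and sample sizes."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Sialic acid, protein conjugates of total LDL (nmol/mg of protein):
t = pooled_t_statistic(29.5, 2.8, 10, 17.4, 3.2, 10)
print(f"t = {t:.2f} with {10 + 10 - 2} degrees of freedom")
```

For t near 9 with 18 degrees of freedom, the two-tailed *P*-value is on the order of 10^−8^, in line with the 4.1×10^−8^ reported in the table; `scipy.stats.ttest_ind_from_stats` returns the exact value when SciPy is available.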
Fashion often accidentally imitates the seminal fashion film (see: David Gandy), but at the usually sedate Valentino show during Paris fashion week, a real-life walk-off took place between actors Ben Stiller and Owen Wilson. It’s thought the stunt was an official announcement of the sequel, Zoolander 2, whose release has been confirmed for February 2016, and whose plot will involve Zoolander’s overweight son, Derek Junior. Just before the finale of the womenswear show, Stiller, AKA Derek Zoolander, strode out from one end of the catwalk wearing a dark blue suit embroidered with butterflies. To the strains of the Human League’s Don’t You Want Me, Wilson, in character as Hansel, strode from the opposite side of the horseshoe-shaped catwalk wearing blue silk printed pyjamas and shoulder-robing a tailored coat. When the seasoned male models reached each other in front of the pit of photographers, Derek pulled out an incredible Blue Steel, which showed no signs of supermodel rust despite the 14 years that have passed since the really, really ridiculously good-looking character first finessed it. Hansel could barely compete. He then grabbed an iPhone from a member of the front row and performed a heroic selfie swagger down the catwalk. It seems Zoolander and Hansel now have a better grasp on technology than they did during the first film, when there was some confusion about how to get files out of the computer. In the film, the walk-off is judged by David Bowie, but at Tuesday’s show there appeared to be no judge, though Anna Wintour was surely a contender, particularly as she was seen enjoying a Zoolander photo sandwich with Derek and Hansel backstage. It was unclear whether Wintour or Derek would be the victor in a Blue Steel-off situation. The fashion industry immediately erupted with praise on social media. The Guardian’s own reporters on the ground described it as “the best thing ever” and “the dream”.
After all that excitement, let’s hope the sequel answers our remaining questions. Mainly, is the Derelicte look due a comeback? And where’s Mugatu?
Relevant to development and progress is sustainable development, so that future generations can have their needs met from the given scarce natural resources. But the government’s policy plans clearly miss out on the longevity and sustainability of the developmental projects being put into effect. A policy failure in one realm creates a domino effect, cascading from the macro to the micro level.

Unfortunately, in Pakistan little thought is put into the sustainability study of a project. Succeeding governments approve projects that are expected to generate substantial revenues or strengthen their political traction.

A recent study pointed to fatal levels of arsenic found in water resources. The government’s failure to curtail the intrusion of the chemical element into drinkable water will not only result in the population being affected by severe water-borne diseases; agricultural resources irrigated with the same water will be contaminated and non-consumable in the long run. This would be disastrous for a country where approximately 48% of the land is utilised for agricultural purposes.

Similarly, given threatening levels of pollution and carbon emissions, the government’s macro policy for the finance sector must cut down on the procedural ease with which a vehicle can be borrowed from a bank or financial institution in lieu of monthly installments. Such commuting trends might suggest improved living standards, but without an improved traffic management system they may cause infrastructural damage in the long run.

An indicator of improved living standards is GDP, which estimates the worth of the produce in an economy in a given time period and, by extension, measures the income of the individuals in that country. A far-sighted employment policy would not rely solely on current figures of economic produce.
As in the case of CPEC, economic indicators might predict higher figures; however, with the majority of employees being Chinese citizens and their earnings being sent back to their country, Pakistan’s economy may face even higher levels of unemployment.

Conclusively, the worsening economy and deteriorating living standards are an outcome of policies formulated to cater to a limited period of time and short-term gains.

Maryam Alvi

Published in The Express Tribune, August 30, 2017.

Like Opinion & Editorial on Facebook, follow @ETOpEd on Twitter to receive all updates on all our daily pieces.
Successful management of a widespread osteosarcoma. A case report. To report the case of a 23-year-old woman with widespread osteosarcoma including skeletal, pulmonary and pleural metastases, who had a remarkable response to combined chemo- and radiotherapy. A 23-year-old Indonesian woman presented in October 1999 with a swelling of the right thigh, severe generalized pain and progressive left hemiparesis. Radiological examination revealed osteolytic lesions in the cervical spine. CT scan of the chest showed multiple pulmonary metastases and a huge left pleural effusion. Bone scan with technetium-99m hydroxymethylene diphosphonate showed intense uptake of the radiopharmaceutical in the distal right femur, generalized deposits throughout the skeleton and in the right hemithorax corresponding to the lung findings. Bone marrow and kidney function tests as well as serum calcium level were normal. Alkaline phosphatase was markedly elevated, 8,000 IU/l (normal <250 IU/l). Histopathology from the femoral tumor showed osteosarcoma. Treatment was started with radiotherapy to the cervical spine followed immediately by a combination chemotherapy with ifosfamide, cisplatin, etoposide and mesna rescue. In addition, the patient received bisphosphonates regularly. Eleven cycles of chemotherapy were given with a remarkable response. The patient was successfully treated with a combination of radio- and chemotherapy. She recovered fully and is in almost complete remission. The disease remained stable 24 months after the discontinuation of the treatment.
Long Sleeve T-shirt Water Lily Men's / Unisex Sizes DESCRIPTION Tri-Blend Long Sleeve T-Shirts are made with 50% Polyester, 25% Cotton and 25% Rayon. Enjoy everything you love about the fit, feel and durability of a vintage T-shirt. ABOUT THE ART Water Lily A beautiful waterlily floats on a pond, surrounded by lily pads; this man-made pond is home to koi fish and a variety of bugs and frogs. A wonderfully stunning white and yellow flower, bright and enchanting. flower, lily...
In times of crisis, capital flirts with hyper-individualism, for which competition is the highest good. It falls to culture, and to religion, to make people accept the war of all against all. By Boaventura de Sousa Santos* (Outras Palavras)

The social is the set of dimensions of collective life that cannot be reduced to the particular existence and experience of the individuals who make up a given society. This definition is not neutral. It defines the social negatively, which allows it to be assigned an infinity of attributes that vary from epoch to epoch. It is, moreover, a Eurocentric definition, because it presupposes a categorical distinction between the social and the individual, a distinction that, far from being universal or immemorial, is specific to Western philosophy and culture, and even there became dominant only with the Renaissance rationalism, individualism and anthropocentrism of the fifteenth century, which would find in Descartes their most brilliant theorist. So much so that the highest expression of this philosophy, cogito ergo sum ("I think, therefore I am"), has no adequate translation in many non-Eurocentric languages and cultures. For many of these cultures, the existence of an individual being is not only problematic but absurd. This is the case of the philosophies of southern Africa and their fundamental concept of Ubuntu, which can be translated as "I am because you are"; that is, I exist only in my relation with others. Africans did not need to wait for Heidegger to conceive of being as being-with (Mitsein). Very schematically, we can distinguish, within the Eurocentric culture that served as the basis of modern capitalism, two extreme understandings of the social. On one side stands the reactionary understanding, which gives total primacy to the individual and conceives of the individual as a being threatened by the social. According to this logic, individuals, far from being equal, are naturally different, and these differences determine hierarchies that the social must respect and ratify.

Among these differences, two are fundamental: differences of race and differences of sex. At the other extreme stands the solidaristic understanding, which gives primacy to the social and conceives of it as the set of rules of sociability that neutralize inequalities between individuals. Between these two extremes there have been many intermediate understandings, notably the liberal understandings (in the plural), which saw in the social the guarantor of the equality of individuals as a point of departure, and the socialist understandings (also in the plural), which saw in the social the guarantor of the equality of individuals as a point of arrival. Between these two understandings, in turn, various combinations were possible. With the French and American revolutions, the latter two understandings became the only legitimate ones on the ideological plane. It was on their basis that the struggles against slavery and against the discrimination of women began. Yet, contrary to what is commonly supposed, the reactionary understanding of natural-social inequality between individuals has always persisted as an underground current. Until today. And it is intriguing that this should be so after two centuries of struggle against inequality and discrimination. Has there been progress? And if so, why do setbacks recur, apparently with such ease? Are we today in a phase of historical regression, in which the socialist understanding dissolves into thin air and the liberal one seems dangerously threatened by the reactionary understanding? The answers to these questions depend on several factors. I will confine myself to one of them, and I therefore concede from the outset that my answer is incomplete.

What liberal thought called modern democratic society, and Marxist thought called modern capitalist society, was in fact a society whose model of economic development required two types of exploitation of labour power: the exploitation of human beings theoretically equal to their exploiters, and the exploitation of human beings deemed inferior or sub-human. From this followed two types of devaluation of labour: a controlled devaluation, regulated by the principle of equality and therefore grounded in supposedly universal rights; and a more intense devaluation, "natural" in kind, exercised over ontologically degraded beings, racialized and sexualized beings: basically, Black people and women. Capitalism invented neither colonialism (racism, slavery, forced labour) nor patriarchy (sexual discrimination), but it resignified them as forms of super-devalued labour, or even of labour unpaid or systematically stolen. Without this super-devaluation of the labour of populations held to be inferior, the profitable exploitation of the wage labour on which both liberals and Marxists concentrated would not have been possible; that is, capitalism could not have maintained and expanded itself in a sustained way. But if that was so, was it not so only at the dawn of capitalism? In my view, no; and only the dominance of liberal and Marxist thought has prevented us from seeing that from at least the fifteenth century until today we have lived in capitalist, colonialist and patriarchal societies. Obviously, over the centuries there were struggles and social movements that eliminated some of the most savage forms of human devaluation, but only the dominance of those two forms of modern thought could create in us the illusion that the elimination of this devaluation would be progressive and would even end one day, even without capitalism ending. A grave mistake.

What happened was the substitution, real or merely juridical, of some instruments of devaluation by others, or the displacement of the exercise of devaluation from one social field to another, or from one region of the world to another. Failing to take this into account led us to confuse the end of historical colonialism (territorial occupation by a foreign country) with the total end of colonialism, when in fact colonialism continued under other forms: neocolonialism, internal colonialism, imperialism, racism, xenophobia, anti-immigrant and anti-refugee hatred and, to the astonishment of many, slavery itself, as the UN recognizes today. In the same way, discrimination against women ceased to manifest itself in electoral suffrage and social rights, but continued in the forms of unequal pay for equal work, sexual harassment and violence, from domestic violence to gang rape and femicide. This analytical blindness prevented us from giving due weight to the ethno-cultural composition of the labour force from the very beginning: for example, to the differences between English and Irish workers, or [in Spain] between workers from Castile and from Andalusia. Why is this argument more easily accepted today than it was twenty years ago? In my view, because the current phase of capitalism demands, today perhaps more than ever, the super-devaluation of labour power and the reduction of vast populations to the condition of disposable populations: populations whose labour can be stolen and who can be subjected to forced labour or to labour "analogous" to slave labour; populations eliminated by wars in which only innocent civilians die, abandoned to their "fate" in the event of extreme climate events, or incarcerated, as happens to a good part of the young Black population of the USA.

These facts stem from the conjunction of two epochal, and therefore long-lasting, factors: the electronic and digital revolutions, and the global dominance of finance capital, the most anti-social sector of capitalism because it creates artificial wealth with minimal recourse to labour power. The super-devaluation of labour power and the disposable character of vast populations are today being ideologically underwritten by the re-emergence of the reactionary thought of natural-social inequality between individuals, which has always persisted as an underground current of Western modernity. It re-emerges in forms so varied that they easily disguise themselves as conjunctural deviations or meaningless idiosyncrasies. It surfaces in the growth of the European and Brazilian far right and of white supremacism in the USA. It surfaces in the shocking classist, racist, sexist and homophobic virulence of Brazilian far-right organizations, some of them financed by North American public and private agencies. It surfaces in the generalized precariousness of wage labour and in the transformation of workers' rights into illegitimate privileges. It surfaces in judicial rulings that invoke the Bible to justify the inferiority of women. It surfaces in the increase of slave labour. And it surfaces, astonishingly, in the relegitimation of historical colonialism, a phenomenon which, given its apparent novelty, deserves special mention. I am not referring to politicians such as President Nicolas Sarkozy, who in 2007 held forth in Dakar on the advantages of colonialism for the African peoples, whose tragedy was supposedly never to have fully entered history. I am referring to the scientific justification of historical colonialism and to its invocation as a solution for the "failed states" of our time.

I am referring to the article by Bruce Gilley, a professor in the Department of Political Science at Portland State University, published in 2017 in Third World Quarterly, a respected journal devoted to postcolonial problems. The article, entitled "The Case for Colonialism", defends the historical role of colonialism and advocates resorting to it again to solve problems that the "failed states" of our time cannot solve. More specifically, it proposes three solutions: "recommending colonial modes of governance; recolonizing some areas; creating new colonies from scratch." The controversy the article provoked was so great that the author ended up withdrawing it (it was removed from the journal's electronic version, but can still be read in the print version). My suspicion, however, is that the article, far from being merely proof of the deficiencies of the "anonymous" peer-review system for scientific articles, is a symptom of the times, and the controversy it raised will not end here. What I call the dis-imagination of the social is the anti-social imagination of the social. According to it, in a society of natural-social inequality between individuals, there is no collective responsibility for the ills of society. What exists is the individual guilt of those who will not or cannot compete for what society never offers and only grants to those who deserve it. Those who fail, instead of leaning on society, should lean on the religions that preach the theology of prosperity, and consolation for those who do not prosper. Education, instead of creating the mirage of civic responsibility and social solidarity, should teach the young to be competitive and to know that they are in a war of all against all. If this is not what we want, we had better form a clear notion of the enemy we must fight, with all democratic forces and without complacency. (The photo is not from the nineteenth century: it shows children working in Bangladesh ten years ago.)

*Doctor in sociology of law from Yale University, full professor at the Faculty of Economics of the University of Coimbra, director of the Centre for Social Studies and of the 25 April Documentation Centre, and Scientific Coordinator of the Permanent Observatory of Portuguese Justice, all at the University of Coimbra.
Matt Jones is a designer who was formerly creative director for BBC News Online. He was with Nokia as director of UX design and he's now with Dopplr. His talk was informed by two overwhelming mega-trends in the world: the rising urbanisation of the planet and the rapid digitalisation of those cities. It is projected that 60% of the world population will be urban by 2030. Three things to look at in the talk:

- Optimistic visions of the future from the past
- How hackers and designers reconfigure this future past
- Proclamations for the future

Post WWII there were visions of space colonies and apocalyptic visions. Our cities are increasingly linked and learning, echoing radical theories that came out of Archigram in the 60s and 70s: a time when cybernetics would start (or stop) a war, when tech was big… and required a room to house it. Archigram thought of behaviour as the raw material they were building with, with personal technologies enabling life. Archigram considered the car the ultimate tool of technical freedom, whereas now the ultimate piece of technical freedom is a mobile phone. Botanicalls allows plant moisture levels to be tweeted, and @towerbridge tells the world what Tower Bridge in London is doing. Nuage Vert in Helsinki graphically represents the city's energy consumption. Always design a thing by considering its next largest context: chair to room to house to city… The demon-haunted world we've been anxious about throughout humanity is finally being built… through technology. XBee, which Jones described as the open-source version of ZigBee, will create the mesh, cascading structure of information in our environments. Jones is optimistic for the future: play with the city, and play with the stuff of which magic is formed… software.

Ben Kepes is a technology evangelist, an investor, a commentator and a business adviser. His business interests include a diverse range of industries from manufacturing to property to technology.
As a technology commentator he has a broad presence both in the traditional media and extensively online. Ben covers the convergence of technology, mobile, ubiquity and agility, all enabled by the Cloud. His areas of interest extend to enterprise software, software integration, financial/accounting software, platforms and infrastructure as well as articulating technology simply for everyday users.
```csharp
using System;
using System.Collections.Generic;

using Monodoc;
using Mono.Options;

namespace Mono.Documentation {

	class MDocTreeDumper : MDocCommand {

		public override void Run (IEnumerable<string> args)
		{
			var validFormats = RootTree.GetSupportedFormats ();
			string cur_format = "";
			var formats = new Dictionary<string, List<string>> ();
			var options = new OptionSet () {
				{ "f|format=",
					"The documentation {FORMAT} used in FILES. " +
					"Valid formats include:\n  " +
					string.Join ("\n  ", validFormats) + "\n" +
					"If not specified, no HelpSource is used. This may " +
					"impact the PublicUrls displayed for nodes.",
					v => {
						if (Array.IndexOf (validFormats, v) < 0)
							Error ("Invalid documentation format: {0}.", v);
						cur_format = v;
					} },
				// Any non-option argument is a .tree file for the current format.
				{ "<>", v => AddFormat (formats, cur_format, v) },
			};
			List<string> files = Parse (options, args, "dump-tree",
				"[OPTIONS]+ FILES",
				"Print out the nodes within the assembled .tree FILES,\n" +
				"as produced by 'mdoc assemble'.");
			if (files == null)
				return;
			foreach (string format in formats.Keys) {
				foreach (string file in formats [format]) {
					HelpSource hs = format == ""
						? null
						: RootTree.GetHelpSource (format, file.Replace (".tree", ""));
					Tree t = new Tree (hs, file);
					TreeDumper.PrintTree (t.RootNode);
				}
			}
		}

		private void AddFormat (Dictionary<string, List<string>> d, string format, string file)
		{
			if (format == null)
				format = "";
			List<string> l;
			if (!d.TryGetValue (format, out l)) {
				l = new List<string> ();
				d.Add (format, l);
			}
			l.Add (file);
		}
	}
}
```
This listing has expired. Bought this dress a couple months ago for my engagement photography. Wore it only once, for 3-4 hours. It's all handmade by an independent designer. Quality comparable to the $4,000 - $6,000 range of designer gowns. I have gone to Kleinfeld, Saks, BHLDN etc. when I was looking for my "the one", and confidently speaking, the quality and design are much better than BHLDN's; the material is about 85% as good as Reem Acra's, if this can be quantified. More photos on imgur (the first couple are of me wearing it: unfiltered, not photoshopped, raw photos; the last couple of pictures are from the designer, and I would say it is pretty true to picture). Note: it is adjustable!!! You save hundreds of bucks as there is no need for alterations!!! There are straps/laces on the back that can be adjusted without going to the tailors (think about shoe laces). I'm a size XS/0 in street clothing brands like J.Crew, Ann Taylor, Banana Republic and a size 2 in the majority of designer brands. 166cm/5'6 height. I would say this wedding gown is good for any girl in size 0-4 as the dress is adjustable.
Survey Finds Pent-Up Demand for Improvements

A new study from Pella Corp. suggests plenty of items in homes across the country are calling for urgent attention. According to a November survey conducted for Pella by Kelton Research, about two-thirds of homeowners in the U.S. have a major item in their home that needs some type of maintenance, and the average person reports five major items that need to be repaired or replaced. Whether it's new carpet or flooring, a refreshed landscape or updated kitchen counters or cabinets, change is in the air. A majority (61 percent) of American homeowners plan to make some type of improvement to increase the curb appeal of their home. Almost three in ten (27 percent) of those planning to make home improvements in 2010 intend to simultaneously enhance the inside and outside of the home by installing new windows, the manufacturer reports. The most common home improvement for 2010 may be installing new carpet or flooring (48 percent), followed closely by updating the exterior of the house with new paint or siding (43 percent). The survey also found that homeowners in the Midwest (44 percent) and South (38 percent) are more likely than those in the Northeast (25 percent) and West (30 percent) to roll up their sleeves and do the improvements themselves.

Design Trends

"Right now there is a trend, and it may just be related to the current economic situation, to downsize a little bit, but increase the amenities in the home," says Elaine Sagers, Pella's vice president, marketing and customer support. "So while you might have a smaller footprint, the overall cost of the home is the same and additional design elements are put into it. So, you know, it's the beautiful granite countertops and wonderful crown molding and wonderful walls and windows that flood the home with light. So those are the kinds of things that people are looking at doing to make the space more inviting as well as efficient."
According to Pella, color forecasters say the hot colors for 2010 include bright or warm yellows, lavenders (particularly for bedrooms), and slate or charcoal grays to replace tan and beige tones as popular neutrals. “While popular design trends like colors, patterns and fabrics may come and go, one thing remains constant in home decorating,” adds Sagers. “That is the desire to have a warm, inviting and comfortable home with plenty of natural light to create a better view. We are naturally drawn to sunlight and that’s an important element in any new home or home makeover.”
Mass spectrometry imaging (MSI) has increasingly been used to visualize the abundance of various molecular species of phospholipids in tissues[@b1][@b2], including neuronal tissues[@b3][@b4]. This technique holds great potential for visualizing and identifying complicated biological processes and cell movements in time and space during localized changes in a tissue, e.g. during inflammation initiation and resolution following focal cerebral ischaemia. In particular, we have hypothesized that the increased presence of the uncommon phospholipids bis(monoacylglycero)phosphate (BMP)[@b5] and *N*-acyl-phosphatidylethanolamine (NAPE)[@b6] in brain tissue could be used as biomarkers of phagocytizing macrophages/microglia cells and of dead/dying neurones, respectively. Furthermore, it should be possible to visualize the time course of generation of a number of lipid mediators promoting and resolving inflammation in mouse brains exposed to permanent middle cerebral artery occlusion (pMCAO). Focal cerebral ischaemia is characterized by an early ischaemic core surrounded by a penumbra through which the infarct grows within the first couple of hours after the ischaemic insult[@b7][@b8]. The first 24 h or more include an early inflammatory phase involving neutrophil infiltration and release of cytokines and eicosanoids, which after several days is followed by resolution involving phagocytosis of apoptotic cells and cell debris by macrophages/microglia cells and regeneration[@b9][@b10]. However, MSI of the progression of these processes, with a focus on lipid biomarkers and the abundance of signalling lipid mediators, has not been investigated before.
MSI has already illustrated that lysophosphatidylcholine (LysoPC) accumulates in the ischaemic area during the initial ischaemic phase[@b11][@b12][@b13][@b14], and that the cessation of Na^+^/K^+^-ATPase activity during cell death can be visualized as a decrease in the abundance of the potassium adduct of intracellular phosphatidylcholine (e.g. PC(34:1)) and an increase in the abundance of the sodium adduct of the same PC species[@b13][@b14]. Furthermore, ceramides (Cer), *N*-acyl-ethanolamines (NAE), free fatty acids, prostaglandins, and 2-arachidonoylglycerol have been seen by MSI to accumulate in the ischaemic area during the early phase of ischaemia, while at the same time sphingomyelin (SM) decreases in abundance[@b12][@b13][@b14][@b15][@b16]. We have now searched for the spatiotemporal abundance of other PC species, especially docosahexaenoic-containing PC (e.g. PC(18:0/22:6)) and arachidonic-containing PC (e.g. PC(18:0/20:4)), since they have been suggested to be biomarkers for neurones and invasive immune cells, respectively[@b12]. Since we observed a change in the Na^+^/K^+^-adduct abundance of several PC species, we also visualized the abundance of the Na^+^/K^+^-adducts of sphingomyelin (SM(d18:1/18:0)), because it is generally considered to be localized mainly on the outer leaflet of cells, thereby facing a high sodium concentration[@b17]. We have previously reported that NAPE accumulates in the injured area during brain ischaemia[@b13][@b18][@b19][@b20], and studies of primary cerebral cell cultures suggest that NAPE is generated particularly in dying/dead neurones and not in astrocytes[@b21][@b22]. Thus, we have used NAPE as a spatiotemporal biomarker for the abundance of dead neurones in the ischaemic brain. BMP is a low-abundance phospholipid in all cells, confined to endosomal/lysosomal vesicles[@b5][@b23], but it is especially abundant in macrophages/microglia cells[@b24], where it is primarily localized to phagosomes[@b25].
We have investigated whether BMP can be used as a biomarker phospholipid for macrophages/microglia cells performing phagocytosis during the repair phase of focal cerebral ischaemia. In this context, we have also searched for the visualization of lysophosphatidylserines (LysoPSs), since they appear to be signalling lipids that regulate immunological and neurological processes via membrane receptors, including stimulation of phagocytosis by macrophages[@b26][@b27]. Furthermore, we have also searched for a number of other signalling lipids, including sphingolipid metabolites, monoacylglycerols (MAGs), and dihydroxy-derivatives of docosahexaenoic acid (DHA) and docosapentaenoic acid (DPA). Of the sphingolipid derivatives, both sphingosine-1-phosphate (S1P), as an intercellular messenger[@b28], and ceramide-1-phosphate (CerP), whether extracellular or intracellular[@b29], have been implicated in the regulation of neuronal damage[@b30][@b31]. The endocannabinoid 2-arachidonoylglycerol (2-AG) is an important neuromodulator in the brain, both in normal physiology[@b32] and during brain injury[@b33], but it also serves as a precursor for eicosanoids during neuroinflammation[@b34]. The tissue level of 2-AG has been reported to be increased at 4 h or at 24 h after permanent damage to the brain (trauma or ischaemia) in mice[@b13][@b18][@b35], but the levels of 2-AG in the late resolving phase are not known. Within recent years, di- and tri-hydroxylated derivatives of DHA and DPA, some of which are called protectins, resolvins, and maresins, have been reported to have several pro-resolving functions in the late stages of inflammation[@b36] and possibly also during focal cerebral ischaemia[@b37]. However, their endogenous time course of formation after pMCAO is not well characterized.
We have used Desorption Electrospray Ionization (DESI) and Matrix Assisted Laser Desorption Ionization (MALDI) imaging in time and space to analyse the involvement of selected lipids in the progression and resolution of the ischaemic insult caused by pMCAO in mice. A list of abbreviations is provided in the [supplementary information](#S1){ref-type="supplementary-material"}.

Results
=======

With DESI imaging of selected phospholipid species, we investigated the progression of ischaemia at different post-surgical survival times. We used permanent MCAO for the induction of ischaemia, and our results may thus not be applicable to ischaemia with transient MCAO, which is a model for the minority of stroke patients receiving rapid reperfusion treatment at a hospital. For all time points investigated by DESI imaging, we used a spatial resolution of 100 × 100 μm^2^ and measured sections with 2 h, 24 h, 5d and 20d post-surgical survival. In some of the DESI images, a thin line of increased signal intensity is observed along the edge of the tissue. This "edge effect" is occasionally observed in DESI imaging and may be ascribed to differences in surface charging between the tissue and the bare glass slide rather than actually higher abundances at the edge of the tissue.

Cessation of the Na^+^/K^+^-ATPase activity and activation of phospholipases
----------------------------------------------------------------------------

In positive ion mode, we mainly observed ionized phosphatidylcholine (PC) and, to a lesser extent, sphingomyelin (SM). PC is found both in the outer leaflet of the plasma membrane and in the inner membranes (both leaflets) of the cell, while SM is found primarily in the outer leaflet of the plasma membrane. Thus, PC species that are predominantly in intracellular membranes mainly experience a high potassium concentration, while SM, predominantly in the outer leaflet of the plasma membrane, is in contact with the high extracellular sodium concentration.
In the ischaemic area, lack of oxygen causes a fall in the intracellular ATP concentration and an influx of sodium and calcium across the plasma membrane, followed by activation of one or more of the different subtypes of phospholipase A~2~ and sphingomyelinase, which generate lysophosphatidylcholines (LysoPC) and ceramides (Cer) from PC and SM, respectively. In [Fig. 1](#f1){ref-type="fig"}, the distributions of the most abundant PC and SM species in the brain, PC(16:0/18:1) and SM(d18:1/18:0), are shown using DESI imaging (for molecular structures and MS/MS spectra of the two lipids, see [Supplementary Fig. S1](#S1){ref-type="supplementary-material"}). The sodium adduct of PC(16:0/18:1) accumulated in the ischaemic area at 2 h, 24 h and 5d, while it was not observable at 20d in the small remaining injured area. On the other hand, the potassium adduct disappeared within the same time frame. This can be explained by the cessation of the Na^+^/K^+^-ATPase activity, as previously reported[@b13][@b14]. When the pump breaks down, sodium flows into the cell, causing accumulation of the sodium adduct. Furthermore, the activated phospholipase A~2~ hydrolyses PC(16:0/18:1) into LysoPC(16:0), which accumulates here as the sodium adduct. Contrary to this, both the sodium and potassium adducts of SM(d18:1/18:0) disappeared from the ischaemic area. This may be due to an activated sphingomyelinase, which removes the phosphocholine head group from SM(d18:1/18:0) and generates Cer(d18:1/18:0), here seen as the sodium adduct. This accumulation of Cer(d18:1/18:0) can be observed at 24 h and 5d. The potassium adducts of LysoPC(16:0) and Cer(d18:1/18:0) showed the same trend as the sodium adducts, see [supplementary Fig. S2](#S1){ref-type="supplementary-material"} (for molecular structures of LysoPC(16:0) and Cer(d18:1/18:0) and MS/MS spectra for LysoPC(16:0), see [supplementary Fig. S3](#S1){ref-type="supplementary-material"}).
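As an illustration of how the sodium and potassium adduct signals discussed above map to concrete *m/z* values, the sketch below (not part of the original study; element masses are standard monoisotopic values) computes the expected [M+Na]^+^ and [M+K]^+^ ions for PC(16:0/18:1), elemental formula C~42~H~82~NO~8~P:

```python
# Sketch: expected m/z of sodium and potassium adducts of PC(16:0/18:1).
# Monoisotopic element masses in Da (standard reference values).
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "P": 30.97376151}
ELECTRON = 0.00054858  # electron mass in Da

def monoisotopic_mass(formula):
    """Sum of monoisotopic element masses weighted by atom counts."""
    return sum(MASS[el] * n for el, n in formula.items())

def adduct_mz(neutral_mass, cation_mass):
    """m/z of [M+X]+ : add the cation mass, subtract one electron mass."""
    return neutral_mass + cation_mass - ELECTRON

pc_34_1 = {"C": 42, "H": 82, "N": 1, "O": 8, "P": 1}  # PC(16:0/18:1)
m = monoisotopic_mass(pc_34_1)       # ~759.578 Da neutral mass
mz_na = adduct_mz(m, 22.98976928)    # [M+Na]+
mz_k = adduct_mz(m, 38.96370649)     # [M+K]+
print(f"M = {m:.4f}, [M+Na]+ = {mz_na:.4f}, [M+K]+ = {mz_k:.4f}")
```

The ~16 Da spacing between the two adducts (the Na/K mass difference) is what lets DESI imaging distinguish the intracellular potassium adduct from the sodium adduct that accumulates once the Na^+^/K^+^-ATPase stops.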
The changes in abundance of the sodium and potassium adducts of PC(16:0/18:1) and SM(d18:1/18:0) may also give information about the localizations of these lipids. PC(16:0/18:1) was affected by the cessation of the Na^+^/K^+^-ATPase activity, suggesting that it is mainly localized in the inner leaflet of the plasma membrane as well as in the inner membranes of the cell. On the other hand, the sodium adduct of SM(d18:1/18:0) did not accumulate, in accordance with its main localization in the outer leaflet of the plasma membrane and perhaps also because of breakdown by sphingomyelinase. To make sure that the accumulation of the sodium adduct and the disappearance of the potassium adduct of PC were not caused by ion suppression, we sprayed a 24 h section with PC(10:0/10:0) (not naturally present in the brain) before measuring by DESI imaging. In [supplementary Fig. S4](#S1){ref-type="supplementary-material"} we tested the effect of ion suppression in the ischaemic area compared to healthy brain tissue and concluded that, even though the ion suppression of the lipids in the ischaemic area seemed smaller than in the healthy part, this smaller ion suppression cannot explain the observed accumulation of e.g. LysoPC(16:0) and Cer(d18:1/18:0).

Arachidonic- and docosahexaenoic-rich PCs
-----------------------------------------

[Supplementary Fig. S5](#S1){ref-type="supplementary-material"} shows DESI imaging of the behaviour of the sodium and potassium adducts of the PCs containing arachidonic (AA, 20:4(n-6)) or docosahexaenoic acid (DHA, 22:6(n-3)). As with PC(16:0/18:1), PC(18:0/22:6) is affected by the cessation of the Na^+^/K^+^-ATPase activity. We observed accumulation of the sodium adduct and disappearance of the potassium adduct in the ischaemic area. Accumulation of the sodium and potassium adducts of LysoPC(18:0) could also be seen, following the same pattern as the sodium and potassium adducts of LysoPC(16:0), as shown in [Supplementary Fig.
S7](#S1){ref-type="supplementary-material"}. Accumulation of PC(18:0/20:4) followed the same pattern, although with a weaker accumulation of the sodium adduct. However, unlike Hanada *et al*.[@b12] in ischaemic spinal cord, we did not observe accumulation of the potassium adduct at 24 h and 5d.

Accumulation of monoacylglycerols (MAGs) at the resolution stages of ischaemia
------------------------------------------------------------------------------

[Figure 2a](#f2){ref-type="fig"} shows the tentatively identified distribution of the endocannabinoid MAG(20:4) at the four time points along with MAG(22:6); the suggested molecular structures can be seen in [Fig. 2b,c](#f2){ref-type="fig"}, respectively. At 2 h no accumulation of the two MAGs was observed, and at 24 h a weak accumulation could be seen. At 5d and 20d, accumulation was observed in, and especially around the edges of, the injured area, where tissue repair is taking place.

Accumulation of *N*-acyl-phosphatidylethanolamine (NAPE) and bis(monoacylglycero)phosphate (BMP) species seen by DESI imaging
-----------------------------------------------------------------------------------------------------------------------------

In positive ion mode, the observed lipids were mainly seen as sodium and potassium adducts, but in negative ion mode they were mainly observed as the deprotonated ion. [Figure 3a](#f3){ref-type="fig"} shows the accumulation of NAPE species and BMP(22:6/22:6) during the progression of ischaemia. NAPE(56:6) and pNAPE(56:6) accumulated in the ischaemic area at 24 h and 5d, but no accumulation was observed at 2 h. At 20d, NAPE and pNAPE had again disappeared. There was no BMP(22:6/22:6) present at 2 h and 24 h, but at 5d accumulation of BMP(22:6/22:6) could be seen, especially at the edge of the ischaemic area. At 20d, BMP(22:6/22:6) had spread out to accumulate over the entire remaining injured area.
Comparing BMP(22:6/22:6) to NAPE, we saw that the NAPE species were localized all over the ischaemic area at 24 h, with no BMP(22:6/22:6) present. At 5d, BMP(22:6/22:6) had begun to accumulate and the NAPE species were no longer present evenly over the ischaemic area, but had lower abundance in the regions where BMP(22:6/22:6) was located. By 20d, the NAPE species had disappeared while BMP(22:6/22:6) had spread over the total remaining injured area. Molecular structures of NAPE(56:6), pNAPE(56:6), and BMP(22:6/22:6) are shown in [Fig. 3b--d](#f3){ref-type="fig"}, and MS/MS spectra and DESI imaging of MS/MS fragments can be seen in [Supplementary Figs S8](#S1){ref-type="supplementary-material"}, [S9 and S10](#S1){ref-type="supplementary-material"} for NAPE(56:6), pNAPE(56:6), and BMP(22:6/22:6), respectively.

Accumulation of cholesteryl esters (CE) in the resolution phases of inflammation
--------------------------------------------------------------------------------

Cholesteryl esters (CEs) were seen to accumulate in the ischaemic area during the resolution phases of inflammation (at day 5 and 20). [Supplementary Fig. S11a](#S1){ref-type="supplementary-material"} shows the distribution of CE(18:1), CE(20:4), and CE(22:6) (the molecular structures can be seen in [Supplementary Fig. S11b, c and d](#S1){ref-type="supplementary-material"}, respectively). At 2 h, no accumulation of CE could be observed; however, at 24 h a weak accumulation of CE(18:1) and CE(20:4) in the ischaemic area was seen. At 5d and 20d a strong accumulation of all three CE species was observed. The localization of CE seemed to follow that of BMP ([Fig. 3](#f3){ref-type="fig"}). With our MALDI imaging setup, we were able to investigate brain sections with higher spatial resolution (smaller pixel size) and thus in greater spatial detail. We measured sections with 5d, 7d and 20d post-surgical survival using MALDI imaging.
Accumulation of NAPE and BMP in the ischaemic area
--------------------------------------------------

With DESI imaging, we were only able to find one BMP species, BMP(22:6/22:6). However, measuring the samples using MALDI imaging revealed the presence of other BMP species in the ischaemic area. [Figure 4a](#f4){ref-type="fig"} shows that, along with BMP(22:6/22:6), we were able to find species tentatively identified as BMP(40:7) and BMP(42:10) in brains with 5d post-surgical survival (see their molecular structures in [Supplementary Fig. S12a and b](#S1){ref-type="supplementary-material"}). Comparing the localization of BMP(22:6/22:6) with NAPE(56:6) ([Fig. 4](#f4){ref-type="fig"}), we could see that the two lipids were mainly localized in two different regions of the ischaemic area, suggesting that where macrophages/microglia cells (i.e. BMP as a biomarker for phagocytosis) had phagocytized the dead neurones, NAPE was no longer present (i.e. NAPE as a biomarker for dead/dying neurones). NAPE was still present in the area where BMP had not yet accumulated.

BMP as a lipid biomarker for macrophages/microglia cells
--------------------------------------------------------

LysoPS has been identified as a pro-resolving signalling lipid implicated in macrophage activation and clearance of apoptotic cells[@b27]. [Figure 4b](#f4){ref-type="fig"} shows the distribution of BMP(22:6/22:6) and LysoPS(18:0) in the ischaemic area in a sample with 5d post-surgical survival. While LysoPS(18:0) is distributed faintly throughout the section, accumulation of LysoPS could also clearly be observed at the edges of the ischaemic area, coinciding with the distribution of BMP(22:6/22:6) (see the molecular structure of LysoPS(18:0) in [Supplementary Fig. S12c](#S1){ref-type="supplementary-material"}).
To compare the localization of macrophages/microglia cells with BMP, we performed immunohistochemistry on the sections by staining for CD11b, a biomarker for macrophages/microglia cells[@b38]. In [Fig. 4c](#f4){ref-type="fig"}, we visualized a brain section of 5d post-surgical survival using MALDI imaging, followed by immunohistochemical staining of the section for CD11b. The distributions of CD11b and BMP(22:6/22:6) coincide, supporting our claim that BMP can be used as a biomarker for phagocytizing macrophages/microglia cells. Higher-spatial-resolution pictures of CD11b, clearly showing the staining of individual microglia/macrophages, are shown in [Supplementary Fig. S13](#S1){ref-type="supplementary-material"}. Furthermore, in [Fig. 4d](#f4){ref-type="fig"} we compared the spatial distribution of BMP(22:6/22:6) with the potassium adduct of CE(18:1) on a mouse brain with 5d post-surgical survival by measuring the tissue in both negative and positive ion mode. Here we found that BMP and CE co-localized. Since BMP is especially abundant in alveolar macrophages[@b24], we analysed BMP in TiO~2~-nanoparticle-exposed mouse lungs and in normal control lungs. These MSI images clearly show that BMP(22:6/22:6) and LysoPS(18:0) were increased in the TiO~2~-nanoparticle-exposed lungs compared to the control lungs (see [Supplementary Fig. S14](#S1){ref-type="supplementary-material"}).

Sphingosine-1-phosphates accumulate in the resolution phase of inflammation
---------------------------------------------------------------------------

Sphingosine-1-phosphate has emerged as an important mediator in inflammation, regulating immune cell trafficking[@b39]. [Figure 5a](#f5){ref-type="fig"} shows the accumulation of sphingosine-1-phosphate and CerP(d18:1/16:0) along with the disappearance of C24:1 sulfatide in the ischaemic area in a mouse brain with 7d post-surgical survival. The molecular structures of S1P, CerP(d18:1/16:0), and C24:1 sulfatide are shown in [Fig.
5b to d](#f5){ref-type="fig"}.

Localization of *N*-acyl-taurines (NAT) in the resolution phases of inflammation
--------------------------------------------------------------------------------

Unexpectedly, we found that several species of *N*-acyl-taurine (NAT) accumulated in the ischaemic area in the resolution phase, while they were not visible at the earlier time points. [Figure 6a,b](#f6){ref-type="fig"} shows the distribution of NAT(18:0) and NAT(18:1) at 7d and 20d, respectively. Their molecular structures are shown in [Fig. 6c,d](#f6){ref-type="fig"}.

Fatty acids and their derivatives in the resolution phases of inflammation
--------------------------------------------------------------------------

The importance of oxygenated derivatives of liberated fatty acids in the resolution of ischaemia has been realized in recent years. Important fatty acid precursors include DPA (22:5(n-3)) and DHA (22:6(n-3))[@b36]. In [Fig. 7a](#f7){ref-type="fig"}, we show the accumulation of DHA, hydroxy-DHA, dihydroxy-DHA, DPA, and dihydroxy-DPA in the ischaemic area after 7d post-surgical survival. While these derivatives were observed at 7 days, we were not able to find them in any of our sections with other survival times (2 h, 24 h, 5d and 20d). Due to their low abundance, we could not perform the stereochemical identification that would reveal whether they were resolvins, maresins, protectins or other derivatives. The molecular structures of DHA and DPA are shown in [Fig. 7b,c](#f7){ref-type="fig"}.

Discussion
==========

The application of MSI to the study of spatiotemporal changes of the inflammatory lipidome during focal cerebral ischaemia has yielded a number of significant biological results[@b11][@b13]. Initially, we studied the lipidome using the DESI imaging setup; later we also had access to the MALDI imaging setup, which provides accurate mass measurements facilitating molecular identification.
First, we provide evidence that BMP can be used as a biomarker for phagocytizing macrophages/microglia cells in the late resolving state of inflammation, as BMP co-localized with the macrophage/microglia cell biomarker CD11b. Furthermore, our [supplementary studies](#S1){ref-type="supplementary-material"} of the abundance of BMP in alveolar macrophages support this conclusion. Several different species of BMP could be visualized, with BMP(22:6/22:6) being the most abundant in the brain, while BMP(36:2) was the most abundant in the lung. Furthermore, in the same brain section a high abundance of LysoPS(18:0) was also observed. LysoPS is a bioactive lipid mediator which, via activation of G-protein-coupled receptors, seems to be involved in stimulation of phagocytosis by macrophages during resolution of inflammation[@b27]. It was only at day 5, and not at an earlier time point, that we were able to observe LysoPS in the infarcted area. In contrast, LysoPC(16:0) was very abundant at 2 h and 24 h, and it was still present at 5d and 20d. This seems to be in agreement with the generation of LysoPC from injured astrocytes and neurones, whereupon LysoPC may stimulate the activation of microglia cells[@b40]. We were not able to find lipid biomarkers that could specifically visualize the existence of a penumbra within the first 2 hours. NAPE species slowly accumulate during cell death[@b41][@b42][@b43], especially in neurones as opposed to astrocytes[@b22][@b41][@b43], and we argue that the increased abundance of NAPE in the ischaemic area is caused primarily by death of the neurones, which eventually disappear due to phagocytosis by invading macrophages/microglia cells. Cholesteryl esters accumulated in the late resolving phase, as also seen by Roux *et al*.[@b44].
We found that CE(18:1) to some extent co-localized with BMP(22:6/22:6), suggesting that cholesteryl esters accumulate in the macrophages/microglia cells due to phagocytosis of cholesterol-containing dead cells/cell debris. Using the higher spatial resolution of MALDI imaging, we could see that NAPE did not co-localize with BMP(22:6/22:6), suggesting that where NAPE is localized, dead neurones are still present, and where BMP is located, dead neurones have been degraded by macrophage phagocytosis. As previously reported[@b13][@b14], a clear change in the ratio of the potassium and sodium adducts of several PC species (e.g. PC(16:0/18:1) and PC(18:0/22:6)) was seen as an indication of their intracellular localization and a cessation of the Na^+^/K^+^-ATPase activity due to lack of ATP. However, the sodium adduct of PC(18:0/20:4) did not clearly increase, while the potassium adduct did decrease. Whether this is due to this lipid serving as a precursor for the generation of LysoPC and arachidonic acid (AA, 20:4(n-6)) metabolites is not clear. M. Hanada *et al*.[@b12] observed that AA-rich PC was transiently elevated one week after spinal cord injury and interpreted this as caused by invasive immune cells. We did not observe such a transient increase in AA-rich PC species. The abundance of the sodium and potassium adducts of SM(d18:1/18:0) in the ischaemic area decreased in both the early and the late phase. This may be due to the localization of sphingomyelin mainly in the extracellular leaflet of the plasma membrane as well as to the concomitant generation of ceramide (seen as both the sodium and potassium adducts of Cer(d18:1/18:0)) from sphingomyelin. This generation of ceramide during ischaemic cell death is well established[@b45], also in MSI studies[@b14][@b15][@b46].
However, we also observed an increased abundance of both ceramide-1-phosphate (CerP(d18:1/16:0)) and sphingosine-1-phosphate (S1P) in the ischaemic area at day 7, with CerP(d18:1/16:0) having high abundance in the whole ischaemic area whereas S1P was faintly seen in the periphery of the ischaemic area. Both of these lipids have signalling functions[@b28][@b47], and their formation in the later phase of inflammation may suggest that they have some important functions related to the resolution of inflammation. S1P has been found to have an inhibitory function on vascular inflammation[@b48]. The endocannabinoid 2-arachidonoylglycerol (2-AG) has been implicated as a neuroprotective factor generated during the early phase of ischaemia[@b35][@b49][@b50]. In the present study, we saw increased abundance of MAG(20:4) especially in the later resolution phase of ischaemic inflammation (5d and 20d), while at the same time MAG(22:6) also increased in abundance. Since 2-AG is much more abundant in the brain than 1-AG[@b51], we assume that our MAG(20:4) is mainly 2-AG and that MAG(22:6) is mainly 2-docosahexaenoylglycerol (2-DHA-G). It is not clear whether 2-AG served as an agonist for cannabinoid receptor-1 and cannabinoid receptor-2[@b33] or whether it primarily served as a precursor molecule for the formation of various eicosanoids[@b34][@b52]. Lipoxin A4, generated from arachidonic acid, is known as a pro-resolving lipid mediator[@b53], but a number of dihydroxy-derivatives of both docosahexaenoic acid (DHA) and docosapentaenoic acid (DPA) also have various pro-resolving activities, with names such as resolvins, maresins and protectins[@b36]. We observed dihydroxy-DHA and dihydroxy-DPA, as well as their precursors DHA and DPA, identified by their exact masses, within the ischaemic area at day 7.
It was not possible to identify the exact chemical structures of these dihydroxy fatty acids and thereby suggest their possible biological functions more precisely. Generally, it is well known that lack of oxygen leads to post-mortem changes in the levels of fatty acids and other signalling lipids[@b54] as well as of several small water-soluble metabolites[@b55][@b56]. In [Fig. 7](#f7){ref-type="fig"}, DHA is seen in both the ischaemic area and the non-ischaemic area. Furthermore, the decapitation procedure can cause artificial alterations in the metabolic profiles of some metabolic pathways in the apparently healthy contralateral hemisphere, e.g. an increase of adenosine monophosphate[@b57]. We believe that the DHA seen in the non-ischaemic area is due to post-mortem accumulation during sampling of the brains[@b54]. However, the DHA intensity is clearly higher in the infarct area, reflecting a specific release of DHA in this area. By studying the lipids in a defined area with tissue infarct, the non-infarct tissue can serve as a sort of control for infarct-specific changes in the levels of the lipids, i.e. changes not caused by post-mortem lack of oxygen. We also found that many species of *N*-acyl-taurines (NATs) accumulated in the injured area in the resolving phase (7d and 20d). Not much is known about the biological functions of NATs, but it has been reported that they can activate TRP receptors[@b58] and inhibit proliferation of prostate cancer cells[@b59]. Our finding of a high abundance of NATs during resolution of inflammation raises the question of whether they have anti-inflammatory functions.
In conclusion, our MSI study has shown that (A) BMP can be used as a biomarker of phagocytizing macrophages/microglia cells in histological studies, (B) NAPE may be a marker for dying/dead neurones, (C) the ratio of Na^+^/K^+^-adducts of selected choline-containing phospholipids in dying cells can suggest whether the lipids are localized intracellularly or on the outer leaflet of the plasma membrane, and (D) a number of both pro-inflammatory and pro-resolving lipid mediators change in abundance between the early pro-inflammatory and the late pro-resolving phases of neuroinflammation. Furthermore, this lipidome technique may reveal new lipid species involved in the inflammatory process. With the present MSI lipidome techniques combined with immunohistochemistry, it will in the future be possible to dissect in greater detail the spatiotemporal changes of cells and of lipid and peptide mediators during an inflammatory process, and to suggest more precise biological roles for the various cell types and mediator compounds involved.

Methods
=======

Induction of brain ischaemia
----------------------------

Focal cerebral ischaemia was induced in anaesthetized 7- to 8-week-old C57BL/6 male mice by permanent middle cerebral artery occlusion (pMCAO) of the distal part of the left middle cerebral artery, as previously described[@b38]. Mice were obtained from The Jackson Laboratory (Maine, USA) and were cared for in accordance with the protocols and guidelines approved by the Danish Animal Inspectorate (J number 2013-15-2934-00924). All efforts were made to minimize pain and distress.

Tissue preparation
------------------

Mice were decapitated after cervical dislocation at a range of post-surgical survival times. Survival times selected for DESI imaging were 2 h, 24 h, 5d, and 20d, and for MALDI imaging 5d, 7d, and 20d, with three mice at each survival time.
The brains were quickly removed from the skulls, frozen in gaseous CO~2~, and subsequently cut into 30 μm thick coronal cryostat sections. Sections were placed on microscope slides and stored in sealed boxes at −80 °C. Before MSI, a section was removed from the freezer and placed in a vacuum desiccator for approximately 10 min to remove water and thus prevent enzymatic reactions in the brain tissue during the measurement.

Desorption electrospray ionization (DESI) imaging
-------------------------------------------------

DESI imaging was performed on an LTQ XL linear ion trap mass spectrometer (Thermo Scientific, California, USA) equipped with a custom-built DESI imaging ion source, as previously described[@b60]. The electrospray was constructed of coaxial fused silica capillaries connected in a 1/16-inch Swagelok tee (Swagelok Co., USA): an inner capillary (50 μm ID, 150 μm OD, SGE, USA) carrying the spray solvent, and an outer capillary (250 μm ID, 350 μm OD, SGE, USA) carrying the nebulizer gas. The electrospray was directed toward the surface of the sample in order to desorb and ionize compounds on the surface, followed by analysis in the mass spectrometer. The spray solvent consisted of methanol and water (95:5) dispensed at a flow of 5 μl/min, and the nitrogen nebulizer gas was set to a pressure of 9 bar. Each mass spectrum was measured with an injection time of 100 ms and an average of 5 microscans in positive ion mode, and with an injection time of 200 ms and an average of 3 microscans in negative ion mode. The spray-to-inlet and spray-to-sample distances were optimized to approximately 4.5 mm and 1.5 mm, respectively, with a spray angle of approximately 55°. The spray potential was 5 kV in positive ion mode and −5 kV in negative ion mode, and the mass-to-charge (*m/z*) scan range was set to 250 to 1100 for all measurements. Placed on a moving stage, the section was moved 100 μm under the electrospray during the measurement of each mass spectrum.
The whole section was recorded line-by-line with a distance of 100 μm between lines, giving a spatial resolution of 100 × 100 μm^2^. In both positive and negative ion mode, measurements were performed on brain sections from three different mouse brains, n = 3, and a typical representative was used for the images in the figures.

Matrix assisted laser desorption ionization (MALDI) imaging
-----------------------------------------------------------

A solution of 150 μl 4-nitroaniline (10 mg/mL in acetone/water (50:50, v/v)) was sprayed on the sections with a pneumatic sprayer, as described by Bouschen, W. *et al*.[@b61], with a flow of 10 μl/min and a pressure of 1 to 2 bar nitrogen gas. While being sprayed, the section was rotated at approximately 300 to 500 rpm. After matrix application, the sample was placed in an atmospheric-pressure scanning-microprobe matrix-assisted laser desorption/ionization imaging source (AP-SMALDI10, TransMIT GmbH, Giessen, Germany) coupled to a Fourier transform orbital trapping mass spectrometer (QExactive, Thermo Fisher Scientific GmbH, Bremen, Germany). For analyte ionization, a nitrogen laser with a wavelength of 337 nm and a frequency of 60 Hz, with 30 pulses per shot, was used. The laser beam was focused on the sections to a spot size matching the resolution, between 7 × 7 and 50 × 50 μm^2^, for a given measurement. Sections were measured in negative ion mode with different *m/z* ranges between 250 and 1200, or in positive ion mode with an *m/z* range of 250 to 1000. The mass resolution was 70,000 or 140,000 with the automatic gain control turned off and with a fixed injection time of 500 ms (microscans = 1) to match the acquisition time of one mass spectrum to the duration of one shot of 30 laser pulses (30 pulses at 60 Hz = 500 ms). The measurements in negative ion mode were performed on brain sections from three different mouse brains, n = 3, and a typical representative was used for the images in the figures. The only exception to this was the images shown in [Fig.
4e](#f4){ref-type="fig"}, where tissue sections were measured first in negative and then in positive ion mode to compare BMP with CE. These were only measured on tissue sections from 2 different mouse brains.

Microscope images/Toluidine Blue stains
---------------------------------------

Sections measured in Copenhagen, Denmark were stained after the measurement with 0.5% Toluidine Blue (Fluka Analytical, Sigma-Aldrich, Missouri, USA) in water for approximately 8 min., then dehydrated in a graded series of alcohol (70--99%), cleared in xylene, and finally cover-slipped with Eukitt quick-hardening mounting medium (Fluka Analytical, Sigma-Aldrich, Missouri, USA). The Toluidine Blue (TB) stains were then captured on a Stemi DV4 Stereoscope (Carl Zeiss AG, Oberkochen, Germany) equipped with an LCMOS digital streaming camera (Brunel Microscopes Ltd, Chippenham, UK). An Olympus BX-41 (Olympus Europa, Hamburg, Germany) microscope was used to make optical images of samples measured in Giessen, Germany prior to MALDI imaging. Images composed of more than one image were stitched with the Image Composite Editor (Microsoft Corporation, Washington, USA).

Data analysis of DESI and MALDI imaging
---------------------------------------

The raw files were converted to imzML by an imzML converter[@b62] and loaded into the open-source MS imaging software MSiReader[@b63]. Images were generated for the *m/z* values of interest with a bin width of ±0.1 Da for DESI imaging and ±5 ppm (±0.002 Da to ±0.004 Da) for MALDI imaging. The images shown in the figures are representatives of images from the three different mouse brains measured at each survival time for both DESI and MALDI imaging. To give the best presentation of the images, both on screen and in print, the MATLAB (MathWorks, Massachusetts, USA) colormap 'Hot' was chosen. In some of the figures, semi-quantitative data are given for changes in the abundance of the lipids.
This has been done by dividing the mean intensity in the injured area by the mean intensity in a comparable area on the contralateral side of the brain section. This was, however, not possible in [Fig. 4a](#f4){ref-type="fig"} due to zero intensity on the contralateral tissue. The data are presented as intensity ratio, mean ± SEM, n = 3 animals. Statistical analysis was performed using one-way ANOVA or, if the intensity ratios did not have a normal distribution, ANOVA on ranks, and statistical significance (\*p \< 0.05) was determined using the Student-Newman-Keuls method. Note that the ischaemic area varied greatly across the different survival times, with 24 h having the largest area and 20d the smallest.

Immunohistochemistry for CD11b
------------------------------

Immunohistochemical staining for CD11b (macrophages/microglia cells) was done with a horseradish peroxidase technique as previously described[@b38]. The 4-nitroaniline matrix was removed from the sections by flushing ethanol on the section until the matrix was washed off (around 15 s), and the section was then left to dry before the staining procedure began.

Designing figures
-----------------

The figures were composed by loading the generated images from MSiReader together with the corresponding microscope image or TB stain into Adobe Illustrator (Adobe Systems, California, USA). Arial was used as the font for all figures. DESI images were cropped in Adobe Photoshop (Adobe Systems, California, USA) to minimize the border surrounding the tissue before being loaded into Adobe Illustrator.

Additional Information
======================

**How to cite this article**: Nielsen, M. M. B. *et al*. Mass spectrometry imaging of biomarker lipids for phagocytosis and signalling during focal cerebral ischaemia. *Sci. Rep.* **6**, 39571; doi: 10.1038/srep39571 (2016).

**Publisher\'s note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
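The semi-quantitative intensity-ratio analysis described under "Data analysis of DESI and MALDI imaging" (mean intensity in the injured area divided by the mean intensity in a comparable contralateral area, reported as mean ± SEM over n = 3 animals) amounts to a simple computation. The following is a minimal sketch, not the published pipeline; the function names are illustrative, and the use of the sample standard deviation for the SEM is an assumption, since the paper does not specify it.

```javascript
// Sketch of the semi-quantitative intensity-ratio analysis (illustrative only).
// Each animal contributes one ratio: mean pixel intensity in the injured ROI
// divided by mean intensity in a comparable contralateral ROI.

function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function intensityRatio(injuredPixels, contralateralPixels) {
  return mean(injuredPixels) / mean(contralateralPixels);
}

// Mean ± SEM across animals; SEM = sd / sqrt(n), using the sample sd
// (an assumption, since the paper does not state which sd was used).
function meanSem(ratios) {
  const m = mean(ratios);
  const variance =
    ratios.reduce((a, r) => a + (r - m) ** 2, 0) / (ratios.length - 1);
  return { mean: m, sem: Math.sqrt(variance) / Math.sqrt(ratios.length) };
}
```

With three hypothetical per-animal ratios such as `[2.1, 2.4, 1.8]`, `meanSem` yields the mean and SEM that would be plotted as a bar with an error bar against the red ratio = one line.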
Supplementary Material {#S1}
======================

###### Supplementary Information

Financial support by the Deutsche Forschungsgemeinschaft, DFG under grant Sp314/13-1 (BS, DB), Augustinusfonden (HSH), the Danish Centre for Nanosafety II (STL), the Novo Nordisk Foundation, the Lundbeck Foundation (KLL, BHC), the Danish MRC (KLL), the Danish Council for Independent Research \| Medical Sciences (grant no. DFF -- 4002-00391) (CJ), and Carlsbergfondet (CJ) is gratefully acknowledged.

**Author Contributions** M.M.B.N., K.L.L., S.T.L., C.J. and H.S.H. designed the research. M.M.B.N., K.L.L., B.H.C., M.M., D.B., S.T.L. and S.S.P. performed experiments, and M.M.B.N. and H.S.H. together with K.L.L. and C.J. analysed the results. B.S., C.J. and H.S.H. supervised the research. M.M.B.N. and H.S.H. wrote the manuscript, which was commented on and approved by all authors.

![Cessation of Na^+^/K^+^-ATPase activity and activation of lipases.\ Mouse brains with 2 h, 24 h, 5d and 20d post-surgical survival after the pMCAO procedure were analysed. The ion images have individual intensity bars between 0--100%, and therefore the intensity colours cannot be compared between two images. For each lipid, the ratio between the intensity of the ischaemic area and a comparable-size area on the contralateral side is shown on the right, where bars are mean ± SEM (n = 3), \*p \< 0.05. The red line indicates ratio = one. The sodium adduct of PC(16:0/18:1) accumulated in the ischaemic area while the potassium adduct disappeared. This was caused by the cessation of Na^+^/K^+^-ATPase activity, which indicated that this PC species was mainly found intracellularly. In addition, LysoPC(16:0) accumulated because of the activation of one or more subtypes of phospholipase A~2~. Both the sodium and potassium adducts of SM(d18:1/18:0) disappeared from the area, probably caused by sphingomyelinase degrading SM to Cer.
Cer(d18:1/18:0) accumulated in the ischaemic area at 24 h and 5d. Since the sodium adduct of SM(d18:1/18:0), unlike the sodium adduct of PC(16:0/18:1), did not increase, we concluded that it was mainly found in the outer leaflet of the plasma membrane. All images were measured in positive ion mode by DESI imaging with a spatial resolution of 100 × 100 μm^2^ and the images are a typical representative of 3 mice. Molecular structures of the lipids can be found in [Supplementary Figures S1 and S3](#S1){ref-type="supplementary-material"}.](srep39571-f1){#f1}

![Accumulation of monoacylglycerols.\ **(a)** MAG(20:4) and MAG(22:6) were measured at 2 h, 24 h, 5d, and 20d. The ion images have individual intensity bars between 0--100%, and therefore the intensity colours cannot be compared between two images. For each lipid, the ratio between the intensity of the ischaemic area and a comparable-size area on the contralateral side is shown on the right, where bars are mean ± SEM (n = 3), \*p \< 0.05. The red line indicates ratio = one. At 2 h, no accumulation of the lipids could be seen; at 24 h, a weak accumulation was seen; and at 5d and 20d, a clearer accumulation could be seen in the ischaemic area. All images were measured in positive ion mode using DESI imaging with a spatial resolution of 100 × 100 μm^2^ and the images are a typical representative of 3 mice. **(b)** Molecular structure of MAG(20:4) (here shown as the 2-arachidonoylglycerol sodium adduct ion). **(c)** Molecular structure of MAG(22:6) (here shown as the 2-docosahexaenoylglycerol sodium adduct ion).](srep39571-f2){#f2}

![Accumulation of BMP and NAPE over time during ischaemia.\ **(a)** The distribution of NAPE(56:6), pNAPE(56:6), and BMP(22:6/22:6) was investigated. The ion images have individual intensity bars between 0--100%, and therefore the intensity colours cannot be compared between two images.
For each lipid, the ratio between the intensity of the ischaemic area and a comparable-size area on the contralateral side is shown on the right, where bars are mean ± SEM (n = 3), \*p \< 0.05. The red line indicates ratio = one. The NAPE and pNAPE species both accumulated evenly over the ischaemic area at 24 h. At 5 days, NAPE and pNAPE also accumulated; however, the abundance of the two lipids was not evenly distributed throughout the area. At 20d, NAPE and pNAPE were not detected. BMP did not accumulate until 5d, where it seemed to be most abundant at the edge, but it was still present in the ischaemic area at 20d, where it was evenly spread throughout the ischaemic area. All images were measured in negative ion mode using DESI imaging with a spatial resolution of 100 × 100 μm^2^ and the images are a typical representative of 3 mice. Molecular structure of **(b)** NAPE(56:6) (here shown as NAPE(18:0/22:6/16:0)), **(c)** pNAPE(56:6) (here shown as pNAPE(18:0/22:6/16:0)), and **(d)** BMP(22:6/22:6).](srep39571-f3){#f3}

![BMP as a biomarker for phagocytosis.\ MALDI imaging showed the presence of more species of BMP. **(a)** BMP(40:7), BMP(42:10), and BMP(22:6/22:6) all accumulated at the edge of the ischaemic area at day 5. The ion images have individual intensity bars between 0--100%, and therefore the intensity colours cannot be compared between two images. BMP(22:6/22:6) seemed, however, to be the most abundant of the three BMP species. The images were measured with a spatial resolution of 35 × 35 μm^2^. **(b)** Looking at BMP(22:6/22:6) with a higher spatial resolution clearly showed that it accumulated at the edge of the ischaemic area. Abundance of LysoPS(18:0) was seen faintly distributed throughout the section, as well as a clear accumulation at the edges of the ischaemic area coinciding with BMP(22:6/22:6). The images were measured with a spatial resolution of 7 × 7 μm^2^.
**(c)** After measuring the distribution of BMP(22:6/22:6), the tissue was stained for CD11b to compare with the distribution of macrophages/microglia cells. Co-localization of BMP(22:6/22:6) and CD11b is shown by overlay. The image was measured with a spatial resolution of 15 × 15 μm^2^. **(d)** The section was first measured in negative ion mode to visualize the distribution of BMP(22:6/22:6) and then measured in positive ion mode to visualize the distribution of the potassium adduct of CE(18:1). The images were measured with a spatial resolution of 35 × 35 μm^2^. All images were measured by MALDI imaging on sections with 5 day post-surgical survival and the images are a typical representative of 3 mice, except **(d)**, which is a typical representative of 2 mice. Molecular structures of BMP(40:7), BMP(42:10), and LysoPS(18:0) are shown in [Supplementary Fig. S12](#S1){ref-type="supplementary-material"}.](srep39571-f4){#f4}

![Accumulation of sphingosine-1-phosphate and CerP species.\ **(a)** S1P and CerP(d18:1/16:0) were found to accumulate in the ischaemic area at 7d post-surgical survival. In contrast, C24:1 sulfatide disappeared from the area. All images were measured in negative ion mode using MALDI imaging with a resolution of 15 × 15 μm^2^ and the images are a typical representative of 3 mice. Molecular structures of **(b)** S1P, **(c)** CerP(d18:1/16:0), and **(d)** C24:1 sulfatide.](srep39571-f5){#f5}

![*N*-acyl-taurines accumulate in the resolution phase of inflammation.\ NAT(18:0) and NAT(18:1) were both found to accumulate in the ischaemic area at **(a)** 7 days and **(b)** 20 days. The images were measured at 7d and 20d in negative ion mode using MALDI imaging with resolutions of 25 × 25 μm^2^ and 15 × 15 μm^2^, respectively. The images are a typical representative of 3 mice.
Molecular structures of **(c)** NAT(18:0) and **(d)** NAT(18:1).](srep39571-f6){#f6}

![DHA and DPA are precursors for lipid mediators, which play a role in the resolution phase of inflammation.\ **(a)** DHA and DPA accumulated in the ischaemic area at 7d post-surgical survival. Likewise, hydroxy-derivatives of these fatty acids were found to accumulate: hydroxy-DHA, dihydroxy-DHA, and dihydroxy-DPA all accumulated in the ischaemic area. These images were measured at 7d in negative ion mode using MALDI imaging with a resolution of 15 × 15 μm^2^ and the images are a typical representative of 3 mice. Molecular structures of **(b)** DHA and **(c)** DPA.](srep39571-f7){#f7}
Factory patterns:<br /> 1.1. Simple factory:<br /> Use a separate class (usually a singleton) to generate instances; example program.<br /> 1.2. Factory method pattern:<br /> Use subclasses to decide which concrete class a member variable should be an instance of; example program.<br /> 1.3. When the factory pattern applies:<br /> 1.3.1. Dynamic implementations:<br /> The XHR example, using a simple factory; note the memoizing technique used in the factory method.<br /> Specialized connection objects: create two new handler classes:<br /> QueuedHandler: makes sure all requests have been handled successfully before issuing new ones.<br /> OfflineHandler: caches requests while the user is offline.<br /> Example program.<br /> Choosing a connection object at run time: example program.<br /> 1.3.2. Saving setup costs:<br /> Putting setup code in a class's constructor is inefficient, because the code runs every time a new instance is created, even after the setup work is already done, and it scatters setup code across different classes. The factory method suits this situation well: it can perform the setup once, before instantiating all the required objects.<br /> 1.3.3. Composing a large object out of many small objects:<br /> The RSS reader: composed of ListDisplay, XhrHandler, and conf objects, with fetchFeed, parseFeed, showError, stopUpdates, and startUpdates methods. This is an excellent example of "composing a large object out of many small objects". It uses the factory pattern to first create all the objects it needs, then build and return the large container object of type FeedReader: example program.<br /> 1.4. Advantages:<br /> The pattern weakens coupling between objects and prevents code duplication. Performing instantiation inside a single method eliminates duplicated code, replacing calls to concrete implementations with calls against an interface. Both help create modular code. With the factory pattern you can create an abstract parent class and define factory methods in its subclasses, deferring the instantiation of member objects to more specialized subclasses.<br /> By using factory methods instead of the new keyword and concrete classes, you centralize all instantiation code in one place, which greatly simplifies replacing the class used, or selecting it dynamically at run time.<br /> 1.5. Disadvantages:<br /> If you do not need to select among a set of interchangeable classes at run time, do not use a factory method. Most classes are better instantiated openly with the new keyword and a constructor: the code is simpler and more readable, and you can see at a glance which constructor is called instead of having to look up a factory method.<br /> When in doubt, skip the factory pattern; there will still be a chance to introduce it later when refactoring.
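The simple factory with memoizing described in 1.3.1 can be sketched as follows. The handler classes are reduced to stubs, and the `env.offline` / `env.highLatency` checks are illustrative placeholders for real run-time feature tests, not an existing API:

```javascript
// Sketch of a simple factory with memoizing (see 1.3.1).
// Handler classes are stubs; a real implementation would wrap XHR.

class SimpleHandler {
  request(url) { return `SimpleHandler: ${url}`; }
}

class QueuedHandler {
  // Would ensure all outstanding requests succeeded before sending new ones.
  request(url) { return `QueuedHandler: ${url}`; }
}

class OfflineHandler {
  // Would cache requests while the user is offline.
  request(url) { return `OfflineHandler: ${url}`; }
}

const XhrManager = {
  _HandlerClass: null,

  // Factory method: decides once which concrete class to use and
  // memoizes the choice, so the feature tests run only on the first call.
  createXhrHandler(env) {
    if (this._HandlerClass === null) {
      if (env.offline) {
        this._HandlerClass = OfflineHandler;
      } else if (env.highLatency) {
        this._HandlerClass = QueuedHandler;
      } else {
        this._HandlerClass = SimpleHandler;
      }
    }
    return new this._HandlerClass();
  },
};
```

Calling code depends only on the `request` interface, so swapping the concrete handler class requires no changes outside the factory method.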
(e) -2/147 (f) 5 d What is the second smallest value in -2/5, 0, 5, -1.2098, -7, -3, -1/2? -3 Which is the second biggest value? (a) 9 (b) 0.5 (c) 2/580837 b Which is the third smallest value? (a) -57.5 (b) 6 (c) -9111 (d) 2/9 (e) 1/7 e Which is the second biggest value? (a) -2 (b) 4 (c) 107 (d) 19282 c What is the second biggest value in -2, -0.1, 2.3516773? -0.1 Which is the second smallest value? (a) -1 (b) 1 (c) -2/3 (d) 1/16 (e) 1513 (f) 2 c Which is the second smallest value? (a) 1/6 (b) 2.5 (c) -2/3 (d) -46 (e) -2 (f) -2.7 f What is the second biggest value in 1, 0.07, -73/2, 1/10396, -19? 0.07 Which is the third biggest value? (a) -0.3 (b) 389 (c) 0 (d) -27 (e) -7 (f) 0.2 c Which is the second smallest value? (a) 3/4 (b) 0.1552 (c) 4.7 (d) 9 a What is the third smallest value in 2/243, -31/11, 2, -8, -0.4? -0.4 Which is the fifth smallest value? (a) -4 (b) -4/3 (c) 6/2767 (d) -0.3 (e) -11 (f) 1/9 c Which is the smallest value? (a) 2/5 (b) -14 (c) -4199 (d) -34 c What is the biggest value in 3, 2/15, -26, 22.0592? 22.0592 What is the third smallest value in -22211, 2/2235, -0.1, -0.3? -0.1 Which is the third smallest value? (a) 496 (b) -19.8 (c) 77 a What is the smallest value in 2/37, -51.8, 0.43? -51.8 What is the second biggest value in 0, 98.1765, -4? 0 What is the fifth biggest value in 6, -3/2, -1, 8.47, 0.2, 0.5? -1 What is the third smallest value in -0.227, -0.01, 12.6, -0.1, -3, 2/9? -0.1 Which is the third biggest value? (a) 14/15 (b) -175/4 (c) 0.5 (d) 2 (e) -3 (f) -2/3 c What is the second smallest value in -2/9, -0.06, 27830? -0.06 What is the second smallest value in 1223042, 3, 0.3, 5, 0.5, -3/2? 0.3 What is the third biggest value in -1, -5/3, 5.3, -1/45, 1, -3.403? -1/45 Which is the third biggest value? (a) 0.1 (b) 0.5 (c) -1.038 (d) -2 (e) -7 (f) 0.0517 f Which is the third biggest value? (a) -6 (b) 0.4 (c) 0.264 (d) -1.42 d What is the smallest value in 0.03, 1/6172, 0.02, -2/3? 
-2/3 What is the seventh biggest value in 3/46, -11, -1, -3, 0, 93, 0.5? -11 What is the sixth biggest value in 1, 2, 9, -6285, 31/2, 1/3? -6285 What is the second biggest value in -0.5, -927/5, 2/7, -5/10176? -5/10176 What is the third biggest value in -0.232, 14, 5, -8/469? -8/469 What is the fourth smallest value in -1/8, 7.1, 0.4, 0.1, -14? 0.4 What is the seventh biggest value in 13, 2, -6, -15/14, 0.05, -2, 1? -6 Which is the third smallest value? (a) 3 (b) 274345 (c) -3/224 b What is the second smallest value in -1/3, -6, 214/21813? -1/3 Which is the fourth smallest value? (a) -2/9 (b) 1 (c) 2.8 (d) 894 d What is the second smallest value in -0.6, -2/11, -4/3, -9/8, 3, 3.382? -9/8 Which is the fifth biggest value? (a) -1352 (b) -3 (c) -2/19 (d) -86 (e) 2/11 (f) -1 d Which is the third biggest value? (a) -3.4 (b) -172746 (c) -0.4 b Which is the second smallest value? (a) -1/3 (b) -2 (c) -1.2 (d) -9 (e) 12 (f) 4/9 b Which is the smallest value? (a) -19 (b) 0.01 (c) 203 (d) -0.1 (e) 9/5 a Which is the third smallest value? (a) -2 (b) -133 (c) -0.59224 c What is the second smallest value in 30, -168, -489? -168 Which is the third smallest value? (a) 2 (b) 7/37056 (c) 1/8 a Which is the smallest value? (a) -3162 (b) -3 (c) -2/15 (d) 93 a Which is the second smallest value? (a) -55 (b) 5 (c) 244.62 (d) 0 d What is the third smallest value in -119, -0.2, -0.63621? -0.2 Which is the fourth smallest value? (a) -3/8 (b) 0.1 (c) 2/375 (d) -4559/6 (e) 0.3 b What is the fourth smallest value in -50, -0.244, 16, 10? 16 What is the second biggest value in -8, 0, 1/34, 4/5, -428, -1? 1/34 Which is the biggest value? (a) -3 (b) -1/5 (c) -2 (d) 5 (e) 12 (f) -7.3 (g) -33 e What is the third biggest value in 1/6, 5518310, 21? 1/6 Which is the second smallest value? (a) 42.4 (b) 2.4 (c) 0.0766 (d) -2/5 (e) -0.5 d What is the biggest value in 55, 0.1, 8/27, 33, -2/5? 55 What is the third smallest value in 1/4, 5, 6, 0, 3, -24/17? 1/4 Which is the biggest value? 
(a) 17/883 (b) -23 (c) 0.29 c Which is the smallest value? (a) -2 (b) -3 (c) -116.503 (d) -0.1 (e) 0.8 (f) -4 c Which is the second smallest value? (a) -2/7 (b) 4 (c) 10829574 b What is the biggest value in -1/10, -0.03, 2/8339, -1/33? 2/8339 What is the smallest value in -0.07, -1, 1/5, -5, 5298? -5 Which is the third biggest value? (a) -0.4 (b) -28 (c) -4/15 (d) -3/5 (e) 80.01 a Which is the second biggest value? (a) -0.051 (b) -3 (c) 8024 (d) 0.3 d What is the sixth smallest value in -4, 3/2, -78, -0.9, -1/8, 213, 1? 3/2 What is the fourth biggest value in -11, 10124, -2, -0.02927? -11 What is the fifth biggest value in -0.0175, 0, -2, -5, 2, -4, 6? -2 What is the biggest value in 3/29, 5, -5, 1.233, 14/3? 5 What is the third biggest value in 2/3, 3, 2.2, -1/6, -2/13, 131/20? 2.2 Which is the second smallest value? (a) 61.53 (b) 17 (c) 6.6 b What is the third biggest value in 5, 69.34868, -4? -4 Which is the smallest value? (a) 3 (b) -2984/9 (c) -43 (d) -203 b Which is the fourth biggest value? (a) 4 (b) 2 (c) -0.4 (d) -2 (e) 2/7 (f) 2524 (g) 376 b What is the second biggest value in 2.2, 0, -8, 3, -141, 0.3? 2.2 What is the third smallest value in 61, -0.05, -2/100465? 61 What is the fifth biggest value in -1, 2/15, -72, -2/7, 23? -72 Which is the fifth smallest value? (a) 0.3 (b) 1.8 (c) -0.08 (d) -5 (e) 0 (f) 4/9 f What is the fourth smallest value in 1/718, 0.2, 416, 8, -0.4? 8 Which is the fourth biggest value? (a) -16.5 (b) 0.5 (c) 4 (d) -20 (e) 671 (f) -4 f What is the second smallest value in -0.5, -0.1, -1/53, 2096, 2, 0.3? -0.1 Which is the third biggest value? (a) 0.4 (b) -1/3 (c) -1.83984 (d) 0 b What is the third biggest value in -2/3, -1/2, -2/75, 84, -2/11, -4.6? -2/11 What is the sixth biggest value in 92.2, 5, -0.5, -1, -2/9, -26? -26 Which is the third biggest value? (a) 14/9 (b) -3 (c) -10093474 c Which is the second smallest value? (a) 0.5 (b) 614524 (c) 0.2 a Which is the biggest value? 
(a) -116098 (b) 2/15 (c) 0 b Which is the third biggest value? (a) 0.06 (b) -3/7 (c) 330/647 (d) -22/3 b What is the second smallest value in -5, 1/6, -2/5, 0, -2, -4506, -12? -12 What is the third biggest value in -1/7, 1/105, 6, 12, -0.4, 2/7? 2/7 What is the smallest value in -0.5, -0.15, -274941? -274941 What is the third smallest value in 0.04, -86, -6, -0.04, -9/2, 0? -9/2 Which is the fifth biggest value? (a) 1 (b) 15.421 (c) -1/9 (d) -0.2 (e) -1 e Which is the fourth biggest value? (a) 2 (b) -50/7 (c) -0.6 (d) 1/3 (e) 2/9 (f) 76 e Which is the third biggest value? (a) -1/2 (b) -0.47018 (c) 2/13 (d) 74 b Which is the smallest value? (a) 0 (b) -1.31 (c) -97 c Which is the smallest value? (a) 2/7 (b) 2/53 (c) 600 (d) 0 (e) 3 d What is the fifth smallest value in -0.3, -1, 4, -8/9, -524/13? 4 What is the biggest value in 263/3, 1/2, -4, 1, 0.01, -0.4, 4? 263/3 Which is the fourth smallest value? (a) 1/6 (b) 0.5 (c) -5 (d) 0.31055 (e) -2/17 d What is the second biggest value in -2261, 15.9, 11? 11 What is the seventh biggest value in 3/2, -61, -4, 0.2, -1, -0.06, -2/5? -61 Which is the second biggest value? (a) -4/5 (b) 2/165 (c) -2136 (d) -0.3 (e) -1/8 (f) 2 b Which is the second biggest value? (a) -910/9 (b) 2 (c) -214/3 (d) -0.18 d What is the second biggest value in -5, 4/5, 1, 1.1, -18, 2? 1.1 Which is the fourth smallest value? (a) -3 (b) 0.02 (c) 56/3 (d) 53 (e) -2/9 (f) 0.6 f Which is the second biggest value? (a) 0.5 (b) 2 (c) 0.05 (d) 0 (e) 53/5 (f) 0.04 (g) -0.4 b What is the third smallest value in 18, 1, -1/7, -1/3, -27, -4? -1/3 Which is the seventh biggest value? (a) -3/5 (b) -2 (c) 4 (d) 0.16 (e) 0.1626 (f) -2/9 (g) -0.046 b What is the smallest value in -10, -3/49, -8, 43, -3? -10 What is the fourth smallest value in 4, -60.9, 1, 92? 92 Which is the second smallest value? (a) 2 (b) -1 (c) 0.3 (d) 0.2 (e) 3 (f) 224/5 (g) 5 d Which is the second smallest value? 
(a) 0.4 (b) 9 (c) -2551 (d) -9 (e) -0.2 d What is the third smallest value in -2/11, 0.5, -45, 0, 2/8923, 0.1? 0 Which is the second biggest value? (a) -20/9
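Answers to exercises of this form can be checked programmatically. A minimal sketch, assuming each value is written either as a decimal or as a simple fraction a/b (the function names are illustrative):

```javascript
// Checker for "k-th smallest/biggest value" exercises: parse values that may
// be written as fractions ("-2/5") or decimals ("-1.2098"), then pick the
// k-th element of the sorted list.

function parseValue(s) {
  const parts = s.split('/');
  return parts.length === 2 ? Number(parts[0]) / Number(parts[1]) : Number(s);
}

function kthSmallest(values, k) {
  // Numeric ascending sort; k is 1-based ("second smallest" -> k = 2).
  const sorted = values.map(parseValue).sort((a, b) => a - b);
  return sorted[k - 1];
}

function kthBiggest(values, k) {
  const sorted = values.map(parseValue).sort((a, b) => b - a);
  return sorted[k - 1];
}
```

For example, the second smallest value in -2/5, 0, 5, -1.2098, -7, -3, -1/2 is found by sorting to -7, -3, -1.2098, -1/2, -2/5, 0, 5 and taking the second element, -3, matching the answer given above.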
Devastating Disease Could Make Bananas Go Extinct Bananas could go extinct, say health experts. It’s all because of a deadly tropical disease that keeps spreading to crops all over the world. The disease is called the Panama disease, and it has spread to Africa, Asia, Australia, the Middle East, and Central America. If the disease reaches South America, the Cavendish banana, which is the banana consumed worldwide, could become extinct. Unfortunately, the fungus that attacks the roots of the banana has proven resistant to chemical treatments, and it can only be stopped by quarantining the affected land. The disease originated in the 1950s. It is called the Panama disease because it started in Panama and then spread to Central America. The Madagascan Tree Could Be a Solution Cavendish bananas are similar to other bananas, so the disease can easily spread to harvest fields. Researchers believe that there could be a way to save bananas after all, by using the Madagascan tree. The Madagascan tree bears wild species of bananas, and they are immune to the disease. Researchers hope to hybridize the species to create one resistant to the infection. Scientists know of only five Madagascan trees. According to Richard Allen, the senior conservation assessor at the Royal Botanic Gardens, the species is rare and has some characteristics that make the tree more durable than cultivated bananas. Part of why they’re durable is the climate on the island. The Madagascar banana has a bad taste, and it grows seeds. However, scientists could combine the strains from the Madagascar banana and the Cavendish banana to get a hybrid that tastes good and resists disease.
Steve Porter, the head gardener at Chatsworth, hopes that “the work being done by scientists around the world to find a cure for the disease threatening the Cavendish banana will be successful.” He added that they grow a Madagascar plant in the greenhouse to help ensure the future of the Cavendish banana. Andre Blair is the lead editor for Advocator.ca. He holds a B.A. in Psychology from the University of Toronto, and a Master of Science in Public Health (M.S.P.H.) from the School of Public Health, Department of Health Administration, at the University of North Carolina at Chapel Hill. Andre specializes in environmental health, but writes on a variety of issues.