id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
7,947,339 | https://en.wikipedia.org/wiki/Boletus%20reticulatus | Boletus reticulatus (also known as Boletus aestivalis (Paulet) Fr.), commonly referred to as the summer cep, is a basidiomycete fungus of the genus Boletus. It occurs in deciduous forests of Europe, where it forms a symbiotic mycorrhizal relationship with species of oak (Quercus). The fungus produces fruiting bodies in the summer months which are edible and popularly collected. The summer cep was formally described by Jacob Christian Schäffer as Boletus reticulatus in 1774, which took precedence over B. aestivalis as described by Jean-Jacques Paulet in 1793.
Taxonomy
German naturalist Jacob Christian Schäffer described the summer cep as Boletus reticulatus in 1774, in his series on fungi of Bavaria and the Palatinate, Fungorum qui in Bavaria et Palatinatu circa Ratisbonam nascuntur icones. French mycologist Jean-Jacques Paulet described it as Le grand Mousseux (Tubiporus aestivalis) in 1793, adding that it was delicious with chicken fricassee and could be found in the Bois de Boulogne in summer. The species name is derived from the Latin aestas, "summer". Swedish mycologist Elias Magnus Fries followed Paulet, using Boletus aestivalis in 1838.
The two names have been used in literature for many years.
Boletus reticulatus is classified in Boletus section Boletus, alongside close relatives such as B. aereus, B. edulis, and B. pinophilus. A genetic study of the four European species found that B. reticulatus was sister to B. aereus. More extensive testing of worldwide taxa revealed that B. reticulatus was most closely related to two lineages that had been classified as B. edulis from southern China and Korea/northern China respectively. The common ancestor of these three species was related to a lineage consisting of B. aereus and the genetically close B. mamorensis. Molecular analysis suggests that the B. aereus/mamorensis and B. reticulatus/Chinese B. "edulis" lineages diverged around 6 to 7 million years ago.
The British Mycological Society approved the name "summer bolete" for Boletus reticulatus.
Description
The summer cep's fruiting body is a mushroom with a swollen bulbous stem, and large convex cap. The cap is more or less round and usually up to in diameter. It bears a velvety brown, rust to chocolate cuticle which when dry often cracks to reveal the white flesh underneath, giving the appearance of a net.
The darker, more uniform shade and the velvety feel of the cap are key features distinguishing this species, as is the faintness or total absence of the white edge to the cap margin seen in Boletus edulis. The tubes and pores of the hymenium are initially white, darkening with age to pale yellow and finally brown. The stipe is central, up to about tall, and has a strongly marked reticulated pattern with a variable white to brown colour.
The flesh is white and thick, remaining firm, if slightly yellowish, as the mushroom ages, and is often attacked by insect larvae. Its odour is pleasant.
Distribution and habitat
The summer cep is found in woods throughout Europe, after hot and humid weather, from the start of summer until the end of autumn. It is particularly common in the south and west of France, as well as in the Tosco-Emiliano Apennines in Italy. It is less host-specific than other porcini mushrooms. It occurs in Ukraine and Crimea, and in the Republic of Karelia, Karachay-Cherkessia, Krasnodar Krai, Tula Oblast, Moscow Oblast, and as far east as Primorsky Krai in Russia. Boletus reticulatus has been recovered from southern Africa, where it was likely introduced, growing under the Mexican species Pinus patula.
Edibility
The summer cep, like most ceps, is edible and useful in cooking. However, its flesh is somewhat less firm than that of other ceps. Based on analysis of fruit bodies collected in Portugal, there are 334 kilocalories per 100 grams of bolete (as dry weight). The macronutrient composition of 100 grams of dried bolete includes 22.6 grams of protein, 55.1 grams of carbohydrates, and 2.6 grams of fat. By weight, fresh fruit bodies are about 91% water. B. reticulatus contains predominantly unsaturated fatty acids, mainly cis-linoleic acid, followed by cis-oleic, palmitic, and stearic acids. The carbohydrate component contains the monosaccharides glucose, mannitol and α,α-trehalose, the polysaccharide glycogen, and the water-insoluble structural polysaccharide chitin, which accounts for up to 80–90% of dry matter in mushroom cell walls. Chitin, hemicellulose, and pectin-like carbohydrates—all indigestible by humans—contribute to the high proportion of insoluble fibre in B. reticulatus. It also contains more tocopherol than other species of mushroom.
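As a quick consistency check, the reported energy value can be reproduced from the macronutrient figures using the general-purpose Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat); the factors are a standard nutrition convention assumed here, not something stated in the source analysis.

```python
# Check the reported nutrition data for dried B. reticulatus against
# the general-purpose Atwater factors (4 kcal/g protein,
# 4 kcal/g carbohydrate, 9 kcal/g fat).
protein_g, carbs_g, fat_g = 22.6, 55.1, 2.6  # per 100 g dry weight

energy_kcal = 4 * protein_g + 4 * carbs_g + 9 * fat_g
print(round(energy_kcal))  # prints 334, matching the reported value
```

The agreement suggests the published figure was derived with the same general factors rather than mushroom-specific ones.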
See also
List of Boletus species
List of North American boletes
References
Notes
This article contains translations from the French Wikipedia article.
Edible fungi
reticulatus
Fungi of Europe
Fungi described in 1774
Fungus species
Taxa named by Jacob Christian Schäffer | Boletus reticulatus | Biology | 1,193 |
2,195,142 | https://en.wikipedia.org/wiki/Wallflower%20%28people%29 | A wallflower is someone with an introverted or shy personality type (or, in more extreme cases, social anxiety) who will attend parties and social gatherings, but will usually distance themselves from the crowd and actively avoid being in the limelight. They tend to be sociable around friends but not strangers, though the company of friends can make interacting with strangers easier. The name derives from the eponymous plant, which grows staked against walls or in cracks and gaps in stone walls. "Wallflowers" might literally stand against a wall and simply observe others at a social gathering, rather than mingle.
Connection to sociology
Structural function theory
Structural functionalism is a sociological theory that sees society as a number of complex parts that form a stable and functional whole. This leads to a strong and coherent family unit made of smaller parts, with the functioning family unit then going on to form the smaller parts of a wider community, society and so on.
Social conflict theory
Social conflict theory in sociology claims that society is in a state of perpetual conflict due to competition for limited resources. It holds that social order is maintained by domination and power, rather than consensus and conformity. According to conflict theory, those with wealth and power try to hold on to it by any means possible, chiefly by suppressing the poor and powerless.
Symbolic interaction theory
The most relevant sociological theory that the 'wallflower' relates to, symbolic interaction, describes specific gestures or social norms that are symbolic in meaning. The theory consists of three core principles: meaning, language and thought. These core principles lead to conclusions about the creation of a person’s self and socialization into a larger community.
Because the 'wallflower' will usually exhibit a lack of interaction with others, that lack of interaction becomes symbolic of their thoughts and feelings towards others. The clearest example is body language: shy people often make little or no eye contact with others, whether out in public or in private. For some, this avoidance becomes consistent over time and turns into a habitual behavior.
In social gatherings or parties, a 'wallflower' typically remains on the periphery of the group, avoiding the center of activity. Shy individuals may prefer to stay near familiar people or keep their distance from those they do not know well. Even in the presence of friends, they often avoid situations that might draw attention to themselves or place them at the center of focus.
Social anxiety
Social anxiety is the extreme fear of being scrutinized and judged by others in social or performance situations. Social anxiety disorder can wreak havoc on the lives of those who suffer from it. Symptoms may be so extreme that they disrupt daily life. People with this disorder, also called social phobia, may have few or no social or romantic relationships, making them feel powerless, alone, or even ashamed.
Although they recognize that the fear is excessive and unreasonable, people with social anxiety disorder feel powerless against their anxiety. They are terrified they will humiliate or embarrass themselves. The anxiety can interfere significantly with daily routines, occupational performance, or social life, making it difficult to complete school, interview and get a job, and have friendships and romantic relationships.
Being a wallflower can be considered a less-intense form of social anxiety. A person with social anxiety may feel a sense of hesitation in large crowds, and may even have a sense of panic if forced to become the center of attention. This fear may cause them to do something as minor as stand away from the center of a party, but it may also cause a major or minor anxiety attack.
People with social anxiety disorder do not believe that their anxiety is related to a mental or physical illness. This type of anxiety occurs in most social situations, especially when the person feels on display or is the center of attention. Once a person avoids almost all social and public interactions, it can be said that the person has an extreme case of social anxiety disorder, more commonly called Avoidant Personality Disorder. People with social anxiety disorder have an elevated rate of relationship difficulties and substance abuse.
Panic and anxiety attacks
Anxiety attacks are a combination of physical and mental symptoms that are intense and overwhelming. The anxiety is, however, more than just regular nervousness. Symptoms of anxiety attacks and panic attacks mimic serious medical issues, such as:
Heart attacks and heart failure.
Brain tumors.
Multiple sclerosis.
Despite their intensity, anxiety attacks are generally not life-threatening.
In popular culture
In the novel The Perks of Being a Wallflower by Stephen Chbosky, as well as in the film adaptation of the same title, the main character Charlie often finds himself alone in school or at parties. He also suffers from anxiety and depression.
In the song "Here" by Alessia Cara, the artist describes wanting to enjoy herself at home and not attend any parties with her friends.
Bob Dylan sings about a wallflower in the song "Wallflower" from 1971.
Jakob Dylan, son of Bob Dylan, founded the popular band the Wallflowers in 1989.
In the song "Wallflower" by In Flames, from the album Battles, they describe life from the perspective of a wallflower.
In the My Little Pony spin-off Equestria Girls, the 2018 special "Forgotten Friendship" features the character Wallflower Blush who is an extreme introvert, says her only friends are the plants in her garden and who sings the song "Invisible" about nobody ever taking notice of her existence.
In the song "Wallflower" from the album Wallflowers by Ukrainian metal band Jinjer, the vocalist describes herself being a wallflower.
In the song "Stall Me (Bonus Track)" from the deluxe version of the album Vices & Virtues by rock band Panic! at the Disco, the lyrics mention a 'wallflower garden', meaning a group of people who are collectively distancing themselves from the rest of the party.
In the song "WALLFLOWER" by K-Pop girl group TWICE, the lyrics are about seducing a wallflower.
See also
Hermit
Loner
Recluse
References
Human communication
Behaviorism
Personality typologies | Wallflower (people) | Biology | 1,260 |
42,125,403 | https://en.wikipedia.org/wiki/Roku | Roku ( ) is a brand of consumer electronics that includes streaming players, smart TVs (and their operating systems), as well as a free TV streaming service. The brand is owned by Roku, Inc., an American company. As of 2024, Roku is the leading streaming TV distributor in the U.S., reaching nearly 120 million people.
History
Roku was founded by Anthony Wood in 2002; he had previously founded ReplayTV, a DVR company that competed with TiVo. After ReplayTV's failure, Wood worked for a while at Netflix. In 2007, Wood's company began working with Netflix on Project:Griffin, a set-top box to allow Netflix users to stream Netflix content to their TVs. Only a few weeks before the project's launch, Netflix's founder Reed Hastings decided it would hamper license arrangements with third parties, potentially keeping Netflix off other similar platforms, and killed the project. Fast Company magazine cited the decision to kill the project as "one of Netflix's riskiest moves".
Netflix then decided instead to spin off the company, and Roku released their first set-top box in 2008. In 2010 they began offering models with various capabilities, which eventually became their standard business model. In 2014, Roku partnered with smart TV manufacturers to produce TVs with built-in Roku functionality. In 2015, Roku won the inaugural Emmy for Television Enhancement Devices.
In January 2018, CNET reported that Roku was debuting a new licensing program for smart audio devices such as smart speakers, sound bars and whole-home audio, while noting the "ease of use" and "superb streaming options" offered by Roku TVs.
According to CNBC in 2021, Roku was the U.S. market leader in streaming video distribution. Later in 2023, Variety called Roku "the top connected TV platform" in the U.S. In December 2023, a Popular Mechanics review cited Roku TVs to be affordable and easy to use, while also noting that the Roku-integrated TVs lacked "the premium brand badging of big players like Sony or Samsung".
In April 2024, TheWrap reported that the Roku streaming platform was reaching an estimated 120 million people in the U.S. In July 2024, an article in The Verge stated that a Roku OS update in June 2024 had "ruined" the Roku TV experience. This update was said to add "motion smoothing", and was cited to be irreversible. This followed another identical issue reported in 2020 for Roku TVs made by TCL. In August 2024, a Wired review cited the ease of use as "one of the main reasons" to buy any Roku product.
Roku streaming players
First generation
The first Roku model, the Roku DVP N1000, was unveiled on May 20, 2008. It was developed in partnership with Netflix to serve as a standalone set-top box for its recently introduced "Watch Instantly" service. The goal was to produce a device with a small footprint that could be sold at low cost compared to larger digital video recorders and video game consoles. It features an NXP PNX8935 video decoder supporting both standard and high definition formats up to 720p; HDMI output; and automatic software updates, including the addition of new channels for other video services.
Roku launched two new models in October 2009: the Roku SD (a simplified version of the DVP, with only analog AV outputs); and the Roku HD-XR, an updated version with 802.11n Wi-Fi and a USB port for future functionality. The Roku DVP was retroactively renamed the Roku HD. By then, Roku had added support for other services. The next month, they introduced the Channel Store, where users could download third-party apps for other content services (including the possibility of private services for specific uses).
Netflix support was initially dependent on a PC, requiring users to add content to their "Instant Queue" from the service's web interface before it could be accessed via Roku. In May 2010, the channel was updated to allow users to search the Netflix library directly from the device.
In August 2010, Roku announced plans to add 1080p video support to the HD-XR. The next month, they released an updated lineup with thinner form factors: a new HD; the XD, with 1080p support; and the XDS, with optical audio, dual-band Wi-Fi, and a USB port. The XD and XDS also included an updated remote.
Support for the first-generation Roku models ended in September 2015.
Second generation
In July 2011, Roku unveiled its second generation of players, branded as Roku 2 HD, XD, and XS. All three models include 802.11n, and also add microSD slots and Bluetooth. The XD and XS support 1080p, and only the XS model includes an Ethernet connector and USB port. They also support the "Roku Game Remote"—a Bluetooth remote with motion controller support for games, which was bundled with the XS and sold separately for other models. The Roku LT was unveiled in October, as an entry-level model with no Bluetooth or microSD support.
In January 2012, Roku unveiled the Streaming Stick, a new model condensed into a dongle form factor using Mobile High-Definition Link (MHL). Later in October, Roku introduced a new search feature to the second-generation models, aggregating content from services usable on the device.
Third generation
Roku unveiled its third-generation models in March 2013, the Roku 3 and Roku 2. The Roku 3 contains an upgraded CPU over the 2 XS, and a Wi-Fi Direct remote with an integrated headphone jack. The Roku 2 features only the faster CPU. A software update in October 2014 added support for peer-to-peer Miracast wireless.
Fourth generation
In October 2015, Roku introduced the Roku 4; the device contains upgraded hardware with support for 4K resolution video, as well as 802.11ac wireless.
Fifth generation
In September 2016, Roku revamped their entire streaming player line-up with five new models (low end Roku Express, Roku Express+, high end Roku Premiere, Roku Premiere+, and top-of-the-line Roku Ultra), while the Streaming Stick (3600) was held over from the previous generation (having been released the previous April) as a sixth option. The Roku Premiere+ and Roku Ultra support HDR video using HDR10.
Sixth generation
In October 2017, Roku introduced its sixth generation of products. The Premiere and Premiere+ models were discontinued, the Streaming Stick+ (with an enhanced Wi-Fi antenna device) was introduced, as well as new processors for the Roku Streaming Stick, Roku Express, and Roku Express+.
Seventh generation
In September 2018, Roku introduced the seventh generation of products. Carrying over from the 2017 sixth generation without any changes were the Express (3900), Express+ (3910), Streaming Stick (3800), and Streaming Stick+ (3810). The Ultra is the same hardware device from 2017, but it comes with JBL premium headphones and is repackaged with the new model number 4661. Roku resurrected the Premiere and Premiere+ names, but these two new models bear little resemblance to the 2016 fifth-generation Premiere (4620) and Premiere+ (4630) models. The new Premiere (3920) and Premiere+ (3921) are essentially based on the Express (3900) model with 4K support added. This generation also includes the Roku Streaming Stick+ Headphone Edition (3811), which improves Wi-Fi signal strength and supports private listening.
Eighth generation
In September 2019, Roku introduced the eighth generation of products.
The same year, Netflix announced that it would stop supporting older generations of Roku, including the Roku HD, HD-XR, SD, XD, and XDS, as well as the NetGear-branded XD and XDS beginning on December 1, 2019. Roku had warned in 2015 that it would stop updating players made in May 2011 or earlier, and these vintage boxes were among them.
Ninth generation
On September 28, 2020, Roku introduced the ninth generation of products. An updated Roku Ultra was released along with the addition of the Roku Streambar, a 2-in-1 Roku and Soundbar device. The microSD slot was removed from the new Ultra 4800, making it the first top-tier Roku device since the first generation to lack this feature. On April 14, 2021, Roku announced the Roku Express 4K+, replacing the 8th generation Roku Express devices, the Voice Remote Pro as an optional upgrade for existing Roku players, and Roku OS 10 for all modern Roku devices.
Tenth generation
On September 20, 2021, Roku introduced the tenth generation of products. The Roku Streaming Stick 4K was announced along with the Roku Streaming Stick 4K+ which includes an upgraded rechargeable Roku Voice Remote Pro with lost remote finder. Roku announced an updated Roku Ultra LT with a faster processor, stronger Wi-Fi and Dolby Vision as well as Bluetooth audio streaming and built-in Ethernet support. Roku also announced Roku OS 10.5 with several new and improved features.
On November 15, 2021, Roku announced a budget model Roku LE (3930S3) to be sold at Walmart, while supplies last. It lacks 4K and HDR10 support, making its features similar to those of the 2019 Roku Express (3930). It has the same form factor as the 2019 Roku Express, except the plastic shell is white rather than black.
Feature comparison
Roku TV
Roku announced its first branded smart TV, which was released in late 2014. These TVs are manufactured by companies like TCL, LG, Westinghouse and Hisense, and use the Roku user interface as the "brain" of the TV. Roku TVs are updated just like the streaming devices. More recent models also integrate a set of features for use with over-the-air TV signals, including a program guide that provides information for shows and movies available on local antenna broadcast TV, as well as where that content is available to stream, and the ability to pause live TV (although the feature requires a USB hard drive with at least 16GB storage).
On November 14, 2019, Walmart and Roku announced that they would be selling Roku TVs under the Onn brand exclusively at Walmart stores, starting November 29.
In January 2020, Roku created a badge to certify devices as working with a Roku TV model. The first certified brands were TCL North America, Sound United, Polk Audio, Marantz, Definitive Technology, and Classé.
In January 2021, a Roku executive said one out of three smart TVs sold in the United States and Canada came with Roku's operating system built-in.
In May 2020, Roku announced a 55-inch outdoor Element Roku TV. The television offers minimal reflection, an anti-glare display, 4K streaming, and can be used in bright outdoor environments.
In March 2023, Roku announced a partnership with Best Buy in which the retailer will exclusively sell the Roku Select and Plus Series TVs manufactured by Roku.
Roku OS
Content and programming
Roku provides video services from a number of Internet-based video on demand providers.
Roku channels
Content on Roku devices is provided by Roku partners and is identified using the term channel. Users can add or remove different channels using the Roku Channel Store or the search feature. Roku's website does not specify which channels are free to its users.
Service creation for Roku Player
The Roku is an open-platform device with a freely available software development kit that enables anyone to create new channels. The channels are written in a Roku-specific language called BrightScript, a scripting language the company describes as 'unique', but "similar to Visual Basic" and "similar to JavaScript".
Developers who wish to test their channels before a general release, or who wish to limit viewership, can create "private" channels that require a code be entered by the user in the account page of the Roku website. These private channels, which are not part of the official Roku Channel Store, are not reviewed or certified by Roku.
There is an NDK (Native Developer Kit) available, though it has added restrictions.
The Roku Channel
Roku launched its own streaming channel on its devices in October 2017. It is ad-supported, but free. Its licensed content includes movies and TV shows from studios such as Lionsgate, MGM, Paramount, Sony Pictures Entertainment, Warner Bros., Disney, and Universal as well as Roku channel content publishers American Classics, FilmRise, Nosey, OVGuide, Popcornflix, Vidmark, and YuYu. It is implementing an ad revenue sharing model with content providers. On August 8, 2018, the Roku Channel became available on the web as well. Roku also added the "Featured Free" section as the top section of its main menu from which users can get access to direct streaming of shows and movies from its partners.
In January 2019, premium subscription options from select content providers were added to the Roku Channel. Originally only available in the U.S., it launched in the UK on April 7, 2020, with a different selection of movies and TV shows, and without premium subscription add-ons.
On January 8, 2021, Roku announced that it had acquired the original content library of the defunct mobile video service Quibi for an undisclosed amount, reported to be around $100 million. The content is being rebranded as Roku Originals.
Controversies
Non-certified channels
The Daily Beast alleged that non-certified channels on Roku eased access to materials promoting conspiracy theories and terrorism content.
In June 2017, a Mexico City court banned the sale of Roku products in Mexico, following claims by Televisa (via its Izzi cable subsidiary) that the devices were being used for subscription-based streaming services that illegally stream television content without permission from copyright holders. The devices used Roku's private channels feature to install the services, which were all against the terms of service Roku applies for official channels available in its store. Roku defended itself against the allegations as such, stating that these channels were not officially certified and that the company takes active measures to stop illegal streaming services. The 11th Collegiate Court in Mexico City overturned the decision in October 2018, with Roku returning to the Mexican market soon after; Televisa's streaming service Blim TV (now Vix) would also launch on the platform.
In August 2017 Roku began to display a prominent disclaimer when non-certified channels are added, warning that channels enabling piracy may be removed "without prior notice". In mid-May 2018, a software glitch caused some users to see copyright takedown notices on legitimate services such as Netflix and YouTube. Roku acknowledged and patched the glitch.
In March 2022, the private channel system was deprecated due to abuse and replaced with a more limited and strict beta channels platform which only allows twenty users to test a channel for up to four months.
Carriage disputes
Pay television-styled carriage disputes emerged on the Roku platform in 2020, as the company requires providers to agree to revenue sharing for subscription services that are billed through the platform, and to hold 30% of advertising inventory. On September 18 of that same year, Roku announced that NBCUniversal TV Everywhere services would be removed from its devices "as early as this weekend", due to its refusal to carry the company's streaming service Peacock (which had been unavailable on Roku since its launch in July 2020) under terms it deemed "unreasonable". It reached an agreement with NBCUniversal later that day, which allowed Peacock to become available on Roku. HBO Max, which launched in May 2020, was unavailable on Roku until December 2020 due to similar disputes over revenue sharing, particularly in regards to an upcoming ad-supported tier. On December 17, 2020, HBO Max began streaming on Roku, after WarnerMedia and Roku reached a deal the previous day (and also after media speculation that WarnerMedia moving Wonder Woman 1984 and Warner Bros' 2021 theatrical slate to a hybrid theatrical/HBO Max release model were an attempt to get Roku to agree to their terms).
Another dispute, starting mid-December 2020, caused Spectrum customers to be unable to download the Spectrum TV streaming app to their Roku devices; existing customers could retain the app, but would lose it upon deletion, even to fix software bugs. This dispute was resolved on August 17, 2021.
On April 30, 2021, Roku removed the over-the-top television service YouTube TV from its Channels Store, preventing it from being downloaded. The company accused operator Google LLC of making demands regarding its YouTube app that it considered "predatory, anti-competitive and discriminatory", including enhanced access to customer data, giving YouTube greater prominence in Roku's search interface, and requiring that Roku implement specific hardware standards that could increase the cost of its devices. Roku accused Google of "leveraging its YouTube monopoly to force an independent company into an agreement that is both bad for consumers and bad for fair competition."
Google claimed that Roku had "terminated our deal in bad faith amidst our negotiation", stating that it wanted to renew the "existing reasonable terms" under which Roku offered YouTube TV. Google denied Roku's claims regarding customer data and prominence of the YouTube app, and stated that its carriage of a YouTube app was under a separate agreement, and unnecessarily brought into negotiations. As a partial workaround, YouTube began to deploy an update to its main app on Roku and other platforms, which integrates the YouTube TV service. On December 8, 2021 (a day before the agreement for the main YouTube app expired), Roku and Google announced that they had settled their dispute and reached a multi-year agreement to keep the YouTube app on Roku and to restore the YouTube TV app on Roku.
See also
Comparison of digital media players
SoundBridge, another Roku product
Smart TV
Roku City
Notes
References
External links
Telecommunications-related introductions in 2008
Digital media players
Internet radio
Streaming television
Linux-based devices
Online advertising services and affiliate networks
Technological comparisons | Roku | Technology | 3,858 |
30,472,700 | https://en.wikipedia.org/wiki/Epoxy%20moisture%20control%20system | Epoxy moisture control systems are chemical barriers that are used to prevent moisture damage to flooring. Excessive moisture vapor emissions in concrete slabs can mean significant, expensive damage to a flooring installation. Hundreds of millions of dollars are spent annually just in the United States to correct moisture-related problems in flooring. These problems include failure of the flooring adhesive; damage to the floor covering itself, such as blistering; the formation of efflorescence salts; and the growth of mold and mildew.
In 2013 the ASTM F3010-13 "Standard Practice for Two-Component Resin Based Membrane-Forming Moisture Mitigation Systems for Use Under Resilient Floor Coverings" was adopted to establish performance criteria required for two component membranes employed as concrete moisture control systems.
Excess moisture in concrete is defined by that amount of moisture emitting from the concrete subfloor that exceeds the amount allowed by the flooring manufacturer. This condition occurs when the flooring is installed before the water in the concrete mix that is not needed for hydration (strengthening) has had adequate time to evaporate. Causes of this condition include a construction schedule that does not allow at least 28 days for the slab to dry; using too much water in the concrete mix; installing the slab without a puncture- and tear-resistant, low-permeability vapor barrier beneath it; rewetting of the slab due to precipitation; inadequate drying conditions, which can include air temperatures that are lower than 50 °F, high humidity in the surrounding air and poor airflow; and liquid water infiltration due to external sources, such as broken pipes, irrigation, improper sloping of the landscape, condensation, cleaning and maintenance, and moisture from flooring adhesives.
There are two industry standards for measuring moisture vapor emissions in concrete: calcium chloride testing (ASTM F1869) and relative humidity testing (ASTM F2170). Epoxy moisture control systems can be used when these tests determine that the moisture vapor emissions need to be remediated in order to install the selected floor covering within the timeframe allotted by the construction schedule.
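For a sense of what a calcium chloride test yields, the raw measurement (mass of moisture absorbed by a desiccant dish over a timed exposure) is converted by unit arithmetic into a moisture vapor emission rate (MVER) in pounds per 1,000 square feet per 24 hours. The sketch below shows only that dimensional conversion; the dish area and exposure time are illustrative assumptions, and ASTM F1869 specifies the actual kit geometry and exposure window.

```python
# Illustrative conversion of a calcium chloride test measurement into a
# moisture vapor emission rate (MVER) in lb / 1000 sq ft / 24 h.
# The dish area and exposure time below are assumed example values, not
# the figures prescribed by ASTM F1869.
GRAMS_PER_POUND = 453.592

def mver(mass_gain_g, area_ft2, exposure_hours):
    """Scale grams gained over a dish area and exposure time to
    pounds per 1000 square feet per 24 hours."""
    return (mass_gain_g / GRAMS_PER_POUND) * (1000.0 / area_ft2) * (24.0 / exposure_hours)

# Example: 9 g gained over a 0.5 sq ft dish in 72 hours.
print(round(mver(9.0, 0.5, 72.0), 2))  # prints 13.23
```

A result like this would then be compared against the maximum emission rate allowed by the flooring manufacturer to decide whether remediation is needed.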
Epoxy moisture control systems are roller-applied and are available in one-coat and two-coat varieties. One-coat systems allow for a faster installation time, while two-coat systems address the potential for pinholes or voids in the system, which can cause future failures. Epoxy moisture control systems can be applied over concrete with relative humidity levels up to 100%, and there are systems available on the market today that can be applied over concrete that is physically damp. In some cases, with the use of an epoxy moisture control system, floor coverings can be installed just 7 days after the slab is poured.
When applied correctly, epoxy moisture control systems are designed to bring moisture emission rates to acceptable levels for the flooring being installed, which combats flooring failures, microbiological activity (mold and mildew) and other problems associated with excess moisture in the slab.
References
Notes
http://www.astm.org/Standards/F3010.htm
Concrete | Epoxy moisture control system | Engineering | 679 |
7,869,912 | https://en.wikipedia.org/wiki/Margatoxin | Margatoxin (MgTX) is a peptide that selectively inhibits Kv1.3 voltage-dependent potassium channels. It is found in the venom of Centruroides margaritatus, also known as the Central American Bark Scorpion. Margatoxin was first discovered in 1993. It was purified from scorpion venom and its amino acid sequence was determined.
Structure
Margatoxin is a peptide of 39 amino acids with a molecular weight of 4185 Dalton. The primary amino acid sequence of margatoxin is as follows:
Thr-Ile-Ile-Asn-Val-Lys-Cys-Thr-Ser-Pro-Lys-Gln-Cys-Leu-Pro-Pro-Cys-Lys-Ala-Gln-Phe-Gly-Gln-Ser-Ala-Gly-Ala-Lys-Cys-Met-Asn-Gly-Lys-Cys-Lys-Cys-Tyr-Pro-His
Or, when translated to one-letter sequence,
TIINVKCTSPKQCLPPCKAQFGQSAGAKCMNGKCKCYPH.
There are disulfide bridges between Cys7-Cys29, Cys13-Cys34 and Cys17-Cys36.
Margatoxin is classified as a "scorpion short toxin" by Pfam, showing sequence homology with other potassium channel blockers, such as charybdotoxin (44%), kaliotoxin (54%), iberiotoxin (41%) and noxiustoxin (79%), which are also derived from scorpion venom.
Synthesis
Margatoxin is a peptide originally purified from the venom of the scorpion Centruroides margaritatus (Central American Bark Scorpion). Scorpion toxins are specific and have a high affinity for their targets, which makes them good tools to characterize receptor proteins involved in ion channel functioning. Because only low amounts of natural toxins can be isolated from scorpion venoms, a chemical synthesis approach has been utilised to produce sufficient protein for research. This approach not only produces enough material to study the effects on potassium channels but also ensures purity, as toxin isolated from scorpion venom risks contamination by other active compounds.
Margatoxin can be chemically synthesized using the solid phase synthesis technique. The compound gained by this technique was compared with the natural, purified margatoxin. Both compounds had the same physical and biological properties. The chemically synthesized margatoxin is now used to study the role of Kv1.3 channels.
Mechanism of action
Margatoxin blocks the potassium channels Kv1.1, Kv1.2, and Kv1.3. The Kv1.2 channel regulates neurotransmitter release associated with heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, immunological response and cell volume. Kv1.3 channels are expressed in T and B lymphocytes. Margatoxin irreversibly inhibits the proliferation of human T-cells at a concentration of 20 μM. At lower concentrations, this inhibition is reversible.
Influence on cardiovascular function
Margatoxin significantly reduces outward currents of Kv1.3 channels and depolarizes the resting membrane potential. It increases the time necessary to conduct action potentials in the cell in response to a stimulus. Acetylcholine (ACh) plays a key role in activation of nicotinic and muscarinic ACh receptors. Margatoxin influences nicotinic ACh-receptor agonist-induced norepinephrine release. Upon activation of muscarinic ACh receptors with bethanechol, the margatoxin-sensitive current was suppressed. It was therefore concluded that Kv1.3 affects the function of postganglionic sympathetic neurons, suggesting that Kv1.3 influences sympathetic control of cardiovascular function.
Immune system suppression
Kv1.3-channels can be found in various cells, including T-lymphocytes and macrophages. To activate an immune response a T-lymphocyte has to come into contact with a macrophage. The macrophage can then produce cytokines, such as IL-1, IL-6, and TNF-α. Cytokines are cell signaling molecules that can enhance the immune response. Kv1.3-channels are important for the activation of T-lymphocytes, and thus for the activation of macrophages. The disturbance of the function of Kv1.3-channels, for example due to inhibition of these channels, will lower the cytokines production and lymphocyte proliferation in vitro. This would lead to immune response suppression in vivo.
Kv channels are regulated during the proliferation and activation of macrophages, and their activity is important during cell responses. In contrast to leukocytes, which have homomeric Kv1.3 channels, macrophages have heterotetrameric Kv1.3/Kv1.5 channels. These heterotetramers play a role in regulating the membrane potential of macrophages at different stages of macrophage activation by lymphocytes. Potassium channels are involved in leukocyte activation by calcium. The possible different conformations of these Kv1.3/Kv1.5 complexes can affect the immune response. Margatoxin inhibits Kv1.3 channels, so no heterotetramers can be formed. The effect of margatoxin is similar to that of dexamethasone (DEX). DEX diminishes the number of Kv1.3 channels by binding to the glucocorticoid receptor, which downregulates the expression of Kv1.3 channels. Both margatoxin and DEX lead to immune suppression.
Effects on ion channels in lymphocytes
Ion channels play a key role in lymphocyte signal transduction. Potassium channels are required for the activation of T-cells, and their pharmacological inhibition can be useful in the treatment of immune diseases. The membrane potential exerts powerful effects on lymphocyte activation. The resting potential results primarily from a potassium-diffusion potential contributed by potassium channels, and margatoxin depolarizes resting human T cells. Pharmacological studies suggest that functional potassium channels are required for the activation of T- and B-cells. Kv channel blockers inhibit activation, gene expression, killing by cytotoxic T cells and NK cells, lymphokine secretion and proliferation. Margatoxin blocks mitogen-induced proliferation, the mixed lymphocyte response and the secretion of interleukin-2 and interferon-gamma (IFN-γ). This provides the strongest available evidence for a role of Kv channels in mitogenesis.
Toxicity
Margatoxin can have several different effects on the body:
May cause skin irritation
May be harmful if absorbed through the skin
May cause eye irritation
May be harmful if inhaled
Material may be irritating to mucous membranes and upper respiratory tract
May be harmful if swallowed
Prolonged or repeated exposure may cause allergic reactions in certain sensitive individuals
May be fatal if it enters the bloodstream
The chronic effects target the heart, nerves, lungs, skeleton and muscles.
The median lethal dose (LD50) of margatoxin is 59.9 mg/kg, so Centruroides margaritatus stings are not dangerous to humans except as a result of possible anaphylactic responses. They do cause pain, local swelling, and tingling for 3–4 hours, but no intervention beyond symptomatic relief should be necessary.
Effects on animals
Margatoxin leads to the depolarization of human and pig cells in vitro. By blocking 99% of the Kv1.3 channels, margatoxin inhibits the proliferation response of T-cells in mini-swine. Furthermore, it suppresses a B-cell response to allogenic immunization and inhibits the delayed-type hypersensitivity reaction to tuberculin. In pigs, the protein's half-life is two hours. When the peptide is continuously infused, it leads to diarrhea and hypersalivation; however, no major toxic effects are observed in animals. When the plasma concentration of margatoxin rises above 10 nM, transient hyperactivity occurs in pigs, which might be an effect on Kv1.1 and Kv1.2 channels in the brain.
Efficacy and side effects
Kv1.3 has already been linked with the proliferation of lymphocytes, vascular smooth muscle cells, oligodendrocytes and cancer cells. Recent studies have shown therapeutic potential for Kv1.3 blockers such as margatoxin.
A treatment study with margatoxin has been conducted in minipigs. An eight-day treatment led to prolonged immune suppression that lasted three to four weeks after termination of dosing. Thymic atrophy (a reduced thymus) was observed; the cells in the cortical region in particular had decreased in number.
Medicinal significance
Neointimal Hyperplasia is the movement and proliferation of smooth muscle cells into the luminal area of a blood vessel. This generates a new inner structure that can block blood flow. This is commonly seen to cause failure of interventional clinical procedures that include placement of stents and bypass grafts.
Due to changes in potassium channel type, vascular smooth muscle cells switch from the contractile to the proliferating phenotype. Kv1.3 is suggested to be important in proliferating vascular smooth muscle cells; inhibitors of such channels suppress vascular smooth muscle proliferation, stenosis following injury, and neointimal hyperplasia. Studies show that margatoxin is a high-potency inhibitor of vascular cell migration, with an IC50 (half maximal inhibitory concentration) of 85 pM. A negative effect was also found in this study: vasoconstrictor effects have been observed in some arteries, although elevated blood pressure has not appeared as a significant concern.
References
Further reading
Neurotoxins
Ion channel toxins
Scorpion toxins | Margatoxin | Chemistry | 2,111 |
6,168,750 | https://en.wikipedia.org/wiki/Bimorph | A bimorph is a cantilever used for actuation or sensing which consists of two active layers. It can also have a passive layer between the two active layers. In contrast, a piezoelectric unimorph has only one active (i.e. piezoelectric) layer and one passive (i.e. non-piezoelectric) layer.
Piezoelectric bimorph
The term bimorph is most commonly used with piezoelectric bimorphs. In actuator applications, one active layer contracts and the other expands if voltage is applied, thus the bimorph bends. In sensing applications, bending the bimorph produces voltage which can for example be used to measure displacement or acceleration. This mode can also be used for energy harvesting.
Bimetal bimorph
A bimetal could be regarded as a thermally activated bimorph. The first theory about the bending of thermally activated bimorphs was given by Stoney. Newer developments have also enabled electrostatically activated bimorphs for use in microelectromechanical systems.
See also
Shape-memory alloy
References
Piezoelectric materials | Bimorph | Physics | 252 |
39,594,285 | https://en.wikipedia.org/wiki/Spherical%20surface%20acoustic%20wave%20%28SAW%29%20sensor | Spherical surface acoustic wave sensors use a type of surface acoustic wave (SAW) that travels along the surface of an elastic medium with exponentially decaying amplitude along depth. MEMS-IDT technology allows the use of SAW devices to sense various gases; a sensitivity of up to 10 ppm of hydrogen has been obtained using a spherical ball SAW device.
Principles
Conventional planar SAW sensors are based on the principle that parameters of the surface acoustic wave, such as its amplitude, speed and phase, change on adsorption of gas molecules. A limitation of planar SAW-based sensors is that the change in these parameters is very small, owing to the limited path offered to the surface acoustic wave by the planar geometry. In a spherical sensor, the surface acoustic wave makes several round trips along the equator of a ball; this much longer path means that even a small change in the parameters is amplified over the multiple turns, which increases the sensitivity of the sensor considerably.
References
Microtechnology
Sensors | Spherical surface acoustic wave (SAW) sensor | Materials_science,Technology,Engineering | 193 |
3,310,078 | https://en.wikipedia.org/wiki/List%20of%20pioneers%20in%20computer%20science | This is a list of people who made transformative breakthroughs in the creation, development and imagining of what computers could do.
Pioneers
~ Items marked with a tilde are circa dates.
See also
Computer Pioneer Award
IEEE John von Neumann Medal
Grace Murray Hopper Award
History of computing
History of computing hardware
History of computing hardware (1960s–present)
History of software
List of computer science awards
List of computer scientists
List of Internet pioneers
List of people considered father or mother of a field § Computing
The Man Who Invented the Computer (2010 book)
List of Russian IT developers
List of Women in Technology International Hall of Fame inductees
Timeline of computing
Turing Award
Women in computing
References
Sources
External links
Internet pioneers
Pioneers
Computer, List | List of pioneers in computer science | Technology | 142 |
7,112,300 | https://en.wikipedia.org/wiki/Barberpole%20illusion | The barberpole illusion is a visual illusion that reveals biases in the processing of visual motion in the human brain. When a diagonally striped pole is rotated around its vertical axis (horizontally), it appears as though the stripes are moving in the direction of its vertical axis (downwards, in the case of the animation to the right) rather than around it.
History
In 1929, psychologist J.P. Guilford informally noted a paradox in the perceived motion of stripes on a rotating barber pole. The barber pole turns in place on its vertical axis, but the stripes appear to move upwards rather than turning with the pole. Guilford tentatively attributed the phenomenon to eye movements, but acknowledged the absence of data on the question.
In 1935, Hans Wallach published a comprehensive series of experiments related to this topic, but since the article was in German it was not immediately known to English-speaking researchers. An English summary of the research was published in 1976 and a complete English translation of the 1935 paper was published by Sophie Wuerger, Robert Shapley, and Nava Rubin in 1996. Wallach's analysis focused on the interaction between the terminal points of the diagonal lines and the implicit aperture created by the edges of the pole.
Explanation
This illusion occurs because a bar or contour within a frame of reference provides ambiguous information about its "real" direction of movement. The actual motion of the line has many possibilities. The shape of the aperture thus tends to determine the perceived direction of motion for an otherwise identically moving contour. A vertically elongated aperture makes vertical motion dominant whereas a horizontally elongated aperture makes horizontal motion dominant. In the case of a circular or square aperture, the perceived direction of movement is usually orthogonal to the orientation of the stripes (diagonal, in this case). The perceived direction of movement relates to the termination of the line's end points within the inside border of the occluder. The vertical aperture, for instance, has longer edges at the vertical orientation, creating a larger number of terminators unambiguously moving vertically. This stronger motion signal forces us to perceive vertical motion. Functionally, this mechanism has evolved to ensure that we perceive a moving pattern as a rigid surface moving in one direction.
Individual motion-sensitive neurons in the visual system have only limited information, as they see only a small portion of the visual field (a situation referred to as the "aperture problem"). In the absence of additional information the visual system prefers the slowest possible motion: i.e., motion orthogonal to the moving line. The neurons which may correspond to perceiving barber-pole-like patterns have been identified in the visual cortex of ferrets.
Auditory analogue
A similar effect occurs in the Shepard's tone, which is an auditory illusion.
See also
Screw (simple machine) – screws convert rotational motion to linear motion and exhibit the same mechanic
Motion perception
Auditory illusion
References
Notes
External links
Barpole effect animation and explanation.
Optical illusions | Barberpole illusion | Physics | 604 |
35,630,702 | https://en.wikipedia.org/wiki/Hexapradol | Hexapradol (INN) is a psychostimulant drug which was never marketed.
It also had cytoprotective/antiulcer properties.
Synthesis
Synthesis methods are described.
See also
β-Phenylmethamphetamine
3,3-Diphenylcyclobutanamine
Phenylpropanolamine
Pipradrol
References
Abandoned drugs
Amines
Beta-Hydroxyamphetamines
Phenylethanolamines
Stimulants
Tertiary alcohols | Hexapradol | Chemistry | 105 |
63,514,033 | https://en.wikipedia.org/wiki/NGC%20768 | NGC 768 is a barred spiral galaxy located in the constellation Cetus about 314 million light years from the Milky Way. It was discovered by the American astronomer Lewis Swift in 1885.
See also
List of NGC objects (1–1000)
References
External links
Barred spiral galaxies
0768
Cetus
007465 | NGC 768 | Astronomy | 64 |
532,542 | https://en.wikipedia.org/wiki/Companion%20matrix | In linear algebra, the Frobenius companion matrix of the monic polynomial
$$p(x) = c_0 + c_1 x + \cdots + c_{n-1} x^{n-1} + x^n$$
is the square matrix defined as
$$C(p) = \begin{pmatrix} 0 & 0 & \dots & 0 & -c_0 \\ 1 & 0 & \dots & 0 & -c_1 \\ 0 & 1 & \dots & 0 & -c_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & 1 & -c_{n-1} \end{pmatrix}.$$
Some authors use the transpose of this matrix, $C(p)^T$, which is more convenient for some purposes such as linear recurrence relations (see below).
$C(p)$ is defined from the coefficients of $p(x)$, while the characteristic polynomial as well as the minimal polynomial of $C(p)$ are equal to $p(x)$. In this sense, the matrix $C(p)$ and the polynomial $p(x)$ are "companions".
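As a concrete sketch (assuming NumPy; the helper `companion` is illustrative, not a standard library function), the companion matrix can be assembled directly from the coefficients, and its eigenvalues are the roots of the polynomial:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    p(x) = c_0 + c_1 x + ... + c_{n-1} x^{n-1} + x^n,
    given coeffs = [c_0, c_1, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
    C[:, -1] = [-c for c in coeffs]  # last column holds -c_i
    return C

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2), so coeffs are c_0 = 2, c_1 = -3
C = companion([2.0, -3.0])
eigs = sorted(np.linalg.eigvals(C).real)
print(np.allclose(eigs, [1.0, 2.0]))  # True: the roots of p are the eigenvalues
```

Substituting C into its own polynomial also gives the zero matrix, illustrating that the characteristic polynomial of C equals p.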
Similarity to companion matrix
Any matrix $A$ with entries in a field $F$ has characteristic polynomial $p(x) = \det(xI - A)$, which in turn has companion matrix $C(p)$. These matrices are related as follows.
The following statements are equivalent:
A is similar over F to $C(p)$, i.e. A can be conjugated to its companion matrix by matrices in GLn(F);
the characteristic polynomial $p(x)$ coincides with the minimal polynomial of A, i.e. the minimal polynomial has degree n;
the linear mapping $v \mapsto Av$ makes $F^n$ a cyclic $F[x]$-module, having a basis of the form $\{v, Av, \dots, A^{n-1}v\}$; or equivalently $F^n \cong F[x]/(p(x))$ as $F[x]$-modules.
If the above hold, one says that A is non-derogatory.
Not every square matrix is similar to a companion matrix, but every square matrix is similar to a block diagonal matrix made of companion matrices. If we also demand that the polynomial of each diagonal block divides the next one, they are uniquely determined by A, and this gives the rational canonical form of A.
Diagonalizability
The roots of the characteristic polynomial $p(x)$ are the eigenvalues of $C(p)$. If there are n distinct eigenvalues $\lambda_1, \dots, \lambda_n$, then $C(p)$ is diagonalizable as $C(p) = V^{-1} D V$, where D is the diagonal matrix $\operatorname{diag}(\lambda_1, \dots, \lambda_n)$ and V is the Vandermonde matrix corresponding to the $\lambda_i$'s:
$$V = \begin{pmatrix} 1 & \lambda_1 & \lambda_1^2 & \dots & \lambda_1^{n-1} \\ 1 & \lambda_2 & \lambda_2^2 & \dots & \lambda_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \lambda_n & \lambda_n^2 & \dots & \lambda_n^{n-1} \end{pmatrix}.$$
Indeed, a reasonably hard computation shows that the transpose $C(p)^T$ has eigenvectors $v_i = (1, \lambda_i, \lambda_i^2, \dots, \lambda_i^{n-1})^T$ with $C(p)^T v_i = \lambda_i v_i$, which follows from $p(\lambda_i) = 0$. Thus, its diagonalizing change of basis matrix is $V^T$, meaning $C(p)^T = V^T D\,(V^T)^{-1}$, and taking the transpose of both sides gives $C(p) = V^{-1} D V$.
We can read the eigenvectors of $C(p)$ with $C(p)\,w_i = \lambda_i w_i$ from the equation $C(p) = V^{-1} D V$: they are the column vectors of the inverse Vandermonde matrix $V^{-1}$. This matrix is known explicitly, giving the eigenvectors $w_i$ with coordinates equal to the coefficients of the Lagrange polynomials
$$L_i(x) = \frac{p(x)}{p'(\lambda_i)\,(x - \lambda_i)} = \prod_{j \neq i} \frac{x - \lambda_j}{\lambda_i - \lambda_j}.$$
Alternatively, the scaled eigenvectors $\tilde w_i = p'(\lambda_i)\, w_i$ have simpler coefficients, those of $p(x)/(x - \lambda_i)$.
If $p(x)$ has multiple roots, then $C(p)$ is not diagonalizable. Rather, the Jordan canonical form of $C(p)$ contains one Jordan block for each distinct root; if the multiplicity of the root $\lambda$ is m, then the block is an m × m matrix with $\lambda$ on the diagonal and 1 in the entries just above the diagonal. In this case, V becomes a confluent Vandermonde matrix.
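The diagonalization can be checked numerically (a sketch assuming NumPy; the variable names are illustrative), using $p(x) = (x-1)(x-2)(x-3)$:

```python
import numpy as np

# Companion matrix of p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6,
# i.e. c_0 = -6, c_1 = 11, c_2 = -6 (the last column holds -c_i):
C = np.array([[0.0, 0.0,   6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0,   6.0]])

lam = np.array([1.0, 2.0, 3.0])      # the distinct roots of p
V = np.vander(lam, increasing=True)  # Vandermonde matrix, V[i, j] = lam[i]**j
D = np.diag(lam)

# With distinct eigenvalues, C(p) = V^{-1} D V:
print(np.allclose(C, np.linalg.inv(V) @ D @ V))  # True
```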
Linear recursive sequences
A linear recursive sequence defined by $a_{k+n} = -c_0\,a_k - c_1\,a_{k+1} - \cdots - c_{n-1}\,a_{k+n-1}$ for $k \geq 0$ has the characteristic polynomial $p(x) = x^n + c_{n-1}x^{n-1} + \cdots + c_1 x + c_0$, whose transpose companion matrix $C(p)^T$ generates the sequence:
$$\begin{pmatrix} a_{k+1} \\ a_{k+2} \\ \vdots \\ a_{k+n} \end{pmatrix} = C(p)^T \begin{pmatrix} a_k \\ a_{k+1} \\ \vdots \\ a_{k+n-1} \end{pmatrix}.$$
The vector $v = (1, \lambda, \lambda^2, \dots, \lambda^{n-1})^T$ is an eigenvector of this matrix, where the eigenvalue $\lambda$ is a root of $p(x)$. Setting the initial values of the sequence equal to this vector produces a geometric sequence which satisfies the recurrence. In the case of n distinct eigenvalues, an arbitrary solution can be written as a linear combination of such geometric solutions, and the eigenvalues of largest complex norm give an asymptotic approximation.
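For instance (a sketch assuming NumPy; plain Python lists would work equally well), the Fibonacci recurrence $a_{k+2} = a_{k+1} + a_k$ has characteristic polynomial $x^2 - x - 1$, and repeated multiplication by the transpose companion matrix advances the state vector:

```python
import numpy as np

# Transpose companion matrix of x^2 - x - 1 (the Fibonacci recurrence)
CT = np.array([[0, 1],
               [1, 1]])

state = np.array([0, 1])  # initial values (a_0, a_1)
seq = []
for _ in range(10):
    seq.append(int(state[0]))
    state = CT @ state    # advances (a_k, a_{k+1}) to (a_{k+1}, a_{k+2})
print(seq)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```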
From linear ODE to first-order linear ODE system
Similarly to the above case of linear recursions, consider a homogeneous linear ODE of order n for the scalar function $y = y(t)$:
$$y^{(n)} + c_{n-1}\,y^{(n-1)} + \cdots + c_1\,y' + c_0\,y = 0.$$
This can be equivalently described as a coupled system of homogeneous linear ODE of order 1 for the vector function $z(t) = (y(t), y'(t), \dots, y^{(n-1)}(t))^T$:
$$z' = C(p)^T z,$$
where $C(p)^T$ is the transpose companion matrix for the characteristic polynomial
$$p(x) = x^n + c_{n-1} x^{n-1} + \cdots + c_1 x + c_0.$$
Here the coefficients $c_i$ may also be functions, not just constants.
If $C(p)^T$ is diagonalizable, then a diagonalizing change of basis will transform this into a decoupled system equivalent to one scalar homogeneous first-order linear ODE in each coordinate.
An inhomogeneous equation
$$y^{(n)} + c_{n-1}\,y^{(n-1)} + \cdots + c_1\,y' + c_0\,y = f(t)$$
is equivalent to the system:
$$z' = C(p)^T z + F(t)$$
with the inhomogeneity term $F(t) = (0, \dots, 0, f(t))^T$.
Again, a diagonalizing change of basis will transform this into a decoupled system of scalar inhomogeneous first-order linear ODEs.
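As a minimal numeric illustration (assuming NumPy; forward Euler is used only for simplicity and is not part of the theory above), the equation $y'' - 3y' + 2y = 0$ becomes a first-order system with the transpose companion matrix of $x^2 - 3x + 2$:

```python
import numpy as np

# y'' - 3y' + 2y = 0 has characteristic polynomial x^2 - 3x + 2.
# With state z = (y, y'), the equation reads z' = A z, where A is the
# transpose companion matrix of x^2 - 3x + 2:
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])

# Crude forward-Euler integration, checked against the exact solution
# y(t) = e^t (1 is a root of x^2 - 3x + 2, so e^t solves the ODE):
dt = 1e-4
z = np.array([1.0, 1.0])        # y(0) = 1, y'(0) = 1
for _ in range(int(1.0 / dt)):
    z = z + dt * (A @ z)
print(abs(z[0] - np.e) < 1e-2)  # True: y(1) ≈ e
```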
Cyclic shift matrix
In the case of $p(x) = x^n - 1$, when the eigenvalues are the n complex roots of unity, the companion matrix and its transpose both reduce to Sylvester's cyclic shift matrix, a circulant matrix.
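For example (a sketch assuming NumPy), the companion matrix of $x^4 - 1$ is the 4 × 4 cyclic shift matrix, and its eigenvalues are the fourth roots of unity:

```python
import numpy as np

# Companion matrix of p(x) = x^4 - 1 (c_0 = -1, all other c_i = 0):
# this is the 4 x 4 cyclic shift matrix
n = 4
S = np.zeros((n, n))
S[1:, :-1] = np.eye(n - 1)  # subdiagonal of ones
S[0, -1] = 1.0              # -c_0 = 1 in the top-right corner

eigs = np.linalg.eigvals(S)
# The eigenvalues are the fourth roots of unity 1, i, -1, -i, so eigs**4 == 1,
# and S itself has order 4 as a permutation matrix:
print(np.allclose(eigs**4, np.ones(n)))  # True
```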
Multiplication map on a simple field extension
Consider a polynomial $p(x) = x^n + c_{n-1}x^{n-1} + \cdots + c_1 x + c_0$ with coefficients in a field $F$, and suppose $p(x)$ is irreducible in the polynomial ring $F[x]$. Then adjoining a root $\alpha$ of $p(x)$ produces a field extension $K = F(\alpha) \cong F[x]/(p(x))$, which is also a vector space over $F$ with standard basis $\{1, \alpha, \alpha^2, \dots, \alpha^{n-1}\}$. Then the $F$-linear multiplication mapping
$$m_\alpha : K \to K, \qquad m_\alpha(\beta) = \alpha\beta$$
has an n × n matrix $[m_\alpha]$ with respect to the standard basis. Since $m_\alpha(\alpha^i) = \alpha^{i+1}$ for $i < n-1$ and $m_\alpha(\alpha^{n-1}) = \alpha^n = -c_0 - c_1\alpha - \cdots - c_{n-1}\alpha^{n-1}$, this is the companion matrix of $p(x)$:
$$[m_\alpha] = C(p).$$
Assuming this extension is separable (for example, if $F$ has characteristic zero or is a finite field), $p(x)$ has distinct roots $\lambda_1, \dots, \lambda_n$ with $\lambda_1 = \alpha$, so that
$$p(x) = (x - \lambda_1)(x - \lambda_2)\cdots(x - \lambda_n),$$
and it has splitting field $L = F(\lambda_1, \dots, \lambda_n)$. Now $m_\alpha$ is not diagonalizable over $F$; rather, we must extend it to an $L$-linear map on $L \otimes_F K$, a vector space over $L$ with standard basis $\{1 \otimes 1,\; 1 \otimes \alpha,\; \dots,\; 1 \otimes \alpha^{n-1}\}$. The extended mapping is defined by $m_\alpha(\ell \otimes \beta) = \ell \otimes \alpha\beta$.
The matrix $[m_\alpha] = C(p)$ is unchanged, but as above, it can be diagonalized by matrices with entries in $L$:
$$C(p) = V^{-1} D V,$$
for the diagonal matrix $D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ and the Vandermonde matrix V corresponding to $\lambda_1, \dots, \lambda_n \in L$. The explicit formula for the eigenvectors (the scaled column vectors of the inverse Vandermonde matrix $V^{-1}$) can be written as:
$$w_i = \frac{1}{p'(\lambda_i)}\,(\beta_0, \beta_1, \dots, \beta_{n-1})^T,$$
where $\beta_0, \dots, \beta_{n-1}$ are the coefficients of the scaled Lagrange polynomial
$$\frac{p(x)}{x - \lambda_i} = \prod_{j \neq i} (x - \lambda_j) = \beta_0 + \beta_1 x + \cdots + \beta_{n-1} x^{n-1}.$$
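As a small worked example (a sketch assuming NumPy; the construction itself is exact integer arithmetic mod 2), multiplication by a root $\alpha$ of the irreducible polynomial $x^3 + x + 1$ over GF(2) is given by its companion matrix, and $\alpha$ generates the multiplicative group of GF(8), which has order 7:

```python
import numpy as np

# GF(8) = GF(2)[x]/(x^3 + x + 1); multiplication by a root α, in the basis
# {1, α, α²}, is the companion matrix of x^3 + x + 1 taken mod 2:
M = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=int)

# α^7 = 1, since the multiplicative group of GF(8) has order 7:
P = np.linalg.matrix_power(M, 7) % 2
print(np.array_equal(P, np.eye(3, dtype=int)))  # True

# α^3 = α + 1, matching the defining relation x^3 = x + 1 over GF(2):
Q = np.linalg.matrix_power(M, 3) % 2
print(np.array_equal(Q, (M + np.eye(3, dtype=int)) % 2))  # True
```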
See also
Frobenius endomorphism
Cayley–Hamilton theorem
Krylov subspace
Notes
Matrices
Matrix theory
Matrix normal forms | Companion matrix | Mathematics | 1,103 |
37,170,575 | https://en.wikipedia.org/wiki/Thaxterogaster%20cinereoroseolus | Thaxterogaster cinereoroseolus is a species of truffle-like fungus in the family Cortinariaceae. Found in New South Wales, Australia, the species was described as new to science in 2010.
Taxonomy
The species was first described scientifically by Melissa Danks, Teresa Lebel, and Karl Vernes in a 2010 issue of the journal Persoonia. The type collection was made in Mount Kaputar, New South Wales (Australia) in July 2007. Molecular analysis of internal transcribed spacer DNA sequences indicates that Cortinarius cinereoroseolus groups together in a subclade with two undescribed sequestrate Cortinarius species, and that this subclade is sister to a clade containing the agaric species C. australis, C. chalybaeus, C. porphyropus, C. purpurascens and C. purpurascens var. largusoides; all of these species belong to the section Purpurascens of the genus Cortinarius. The specific epithet cinereoroseolus is derived from the Latin words cinereo (greyish) and roseolus (light pink) and refers to the colour of the fruit bodies.
In 2022 the species was transferred from Cortinarius and reclassified as Thaxterogaster cinereoroseolus based on genomic data.
Description
Thaxterogaster cinereoroseolus has a sequestrate fruit body, meaning that its spores are not forcibly discharged from the basidia, and it remains enclosed during development, including at maturity. The shape of the caps ranges from irregularly spherical to like an inverted cone, and they measure long by in diameter. A white to silvery-grey partial veil connects the cap to the stipe. The colour of the outer skin of the cap (the pellis) is cream mixed with pale pink, lilac, and grey, and it is smooth with a finely hairy texture. Remnants of the greyish silky universal veil are readily rubbed off with handling. The flesh is white to cream and thick. The internal spore-bearing tissue of the cap (the hymenophore) is pale brown at first, but darkens as the spores mature. A white stipe extends into the fruit body through its entire length; it measures long by thick, with a bulbous base that extends below the cap. Fruit bodies have no distinctive taste, but smell somewhat of flowers or like chlorine. The spores are broadly egg-shaped and measure 7–8.9 by 5.1–6.4 μm. They are covered irregularly with nodules up to 1.5 μm high. The thin-walled basidia (spore-bearing cells) are hyaline (translucent), club-shaped to cylindrical, four-spored, and have dimensions of 28–40 by 7–9 μm. There are clamp connections present in the hyphae of both the cap and the hymenium.
Habitat and distribution
The fruit bodies of Thaxterogaster cinereoroseolus grow in the ground under litterfall in subalpine areas of the Kaputar Plateau. Plants typically associated with the fungus include Eucalyptus dalrympleana, E. pauciflora and Poa sieberiana with scattered Acacia melanoxylon, Acacia sp., Hibbertia obtusifolia, Lomatia arborescens, Monotoca scaparia, Olearia rosemanifolia and Pultanea satulosa. The fungus has also been collected in wet sclerophyll forests where Acacia melanoxylon, Blechnum cartilagineum, Coprosma quadrifida, Cyathea australis, Lomandra multiflora, Lomatia arborescens and Poa sieberiana are the predominant plants.
See also
List of Cortinarius species
References
External links
cinereoroseolus
Fungi described in 2010
Fungi of Australia
Taxa named by Teresa Lebel
Fungus species | Thaxterogaster cinereoroseolus | Biology | 843 |
9,780,961 | https://en.wikipedia.org/wiki/Clinical%20case%20definition | In epidemiology, a clinical case definition, a clinical definition, or simply a case definition lists the clinical criteria by which public health professionals determine whether a person's illness is included as a case in an outbreak investigation—that is, whether a person is considered directly affected by an outbreak. Absent an outbreak, case definitions are used in the surveillance of public health in order to categorize those conditions present in a population (e.g., incidence and prevalence).
How are they used
A case definition defines a case by placing limits on time, person, place, and shared definition with data collection of the phenomenon being studied. Time criteria may include all cases of a disease identified from, for example, January 1, 2008 to March 1, 2008. Person criteria may include age, gender, ethnicity, and clinical characteristics such as symptoms (e.g. cough and fever) and the results of clinical tests (e.g. pneumonia on chest X-ray). Place criteria will usually specify a geographical entity such as a town, state, or country, but may be as small as an institution, a school class, or a restaurant meal session. Shared definition of the phenomenon impacts the study methods and ensures terminology is used in a consistent manner.
Case definitions are often used to label individuals as suspect, probable, or confirmed cases. For example, in the investigation of an outbreak of pneumococcal pneumonia in a nursing home the case definition may be specified as:
Suspect Case: All residents of Nursing Home A with onset of cough and fever between January 1, 2008 and February 1, 2008.
Probable Case: Meet the suspect case definition plus have pneumonia on chest X-ray.
Confirmed Case: Meet the probable case definition plus have pneumococcal infection confirmed by blood culture or other isolation of pneumococci from normally sterile site.
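The nested definitions above can be sketched as a simple decision procedure (the function and argument names are illustrative only, not standard epidemiological software):

```python
def classify_case(resident_of_home_a, onset_in_window, cough_and_fever,
                  pneumonia_on_xray, pneumococci_from_sterile_site):
    """Apply the nested suspect/probable/confirmed definitions in order."""
    if not (resident_of_home_a and onset_in_window and cough_and_fever):
        return "not a case"
    if not pneumonia_on_xray:
        return "suspect"
    if not pneumococci_from_sterile_site:
        return "probable"
    return "confirmed"

# A resident with cough, fever, and pneumonia on X-ray but no culture result:
print(classify_case(True, True, True, True, False))  # probable
```

Each stricter category requires all criteria of the looser one, so the checks are applied in sequence.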
By creating a case definition, public health professionals are better equipped to study an outbreak and determine possible causes.
As investigations proceed, a case definition may be expanded or narrowed, a characteristic of the dynamic nature of outbreak investigations. At any given time, the case definition is supposed to be the gold standard to diagnose a given disease. A sensitive case definition, often applied early in an outbreak, will capture all cases, but will include many non-cases. A specific case definition, usually applied after the outbreak is considered more well understood, will exclude most non-cases, but will also exclude some actual cases.
Diagnostic criteria
The term diagnostic criteria designates a case definition with a specific combination of signs, symptoms, and test results that the clinician uses to attempt to determine the correct diagnosis.
Some examples of diagnostic criteria are:
Amsterdam criteria for hereditary nonpolyposis colorectal cancer
McDonald criteria for multiple sclerosis
ACR criteria for systemic lupus erythematosus
Clinical definitions
When diagnostic criteria are universally accepted they can be considered a "clinical definition" because they define the limits of the affected population, determining which patients are inside and outside of the set.
A clinical definition should be regarded as a statistical analysis tool, and not a substitute for a pathological definition when one is required. Posthumous diagnosis allows the sensitivity and specificity of clinical definitions to be established.
See also
Public Health
Epidemiology
Outbreak
Sensitivity and specificity
Diagnostic criteria
References
Epidemiology | Clinical case definition | Environmental_science | 676 |
22,683,827 | https://en.wikipedia.org/wiki/Helvella%20ephippium | Helvella ephippium is a species of fungus in the family Helvellaceae, Pezizales order. It appears in summer and autumn as an upright white stem up to tall supporting a greyish-brown saddle-shaped cap. It is found in woodland and is variously listed as inedible or "edible but uninspiring".
Distribution
This is a European species, also recorded in China.
References
ephippium
Fungi of China
Fungi of Europe
Fungi described in 1841
Taxa named by Joseph-Henri Léveillé
Fungus species | Helvella ephippium | Biology | 115 |
48,860,835 | https://en.wikipedia.org/wiki/Double%20summit | A double summit, double peak, twin summit, or twin peak is a mountain or hill that has two summits, separated by a col or saddle.
One well-known double summit is Austria's highest mountain, the Großglockner, where the main summit of the Großglockner is separated from that of the Kleinglockner by the Glocknerscharte col in the area of a geological fault. Other double summits have resulted from geological folding. For example, on Mont Withrow in British Columbia, resistant sandstones form the limbs of the double summit, whilst the softer rock in the core of the fold was eroded.
Triple peaks occur more rarely; one example is the Rosengartenspitze in the Dolomites. The Illimani in Bolivia is an example of a rare quadruple summit.
Well known double summits (selection)
Well known double summits are (roughly from east to west):
Europe
Limestone Alps
Schneeberg (Lower Austria)
Kaiserstein in the massif of the Wetterin, Styria
Lugauer in the Gesäuse, Styria
Krippenstein (north of the Dachstein Group)
Bischofsmütze in the Dachstein region (Gosaukamm)
Brietkogel and the Eiskogel in the Tennen Mountains, Salzburg state
Karlspitzen in the Kaiser Mountains
Roßstein and Buchstein, Upper Bavaria
Klammspitze in the Ammergau Alps
Guffert in the Rofan, Tyrol
Grauspitz, Liechtenstein
Furchetta in the Geisler Group (?)
Altmann in the Alpstein, East Switzerland
Central Alps
Großglockner
Seekarspitze (Schladming Tauern)
Gleichenberge (Styria)
Lasörling in the Großvenediger, High Tauern
Unterberghorn in eastern North Tyrol
Wilde Kreuzspitze in the Zillertal Alps
Rofelewand in the Ötztal Alps
Watzespitze in the Kaunergrat, Ötztal Alps
Wildspitze in the Weißkamm, Ötztal Alps
Schwarzhorn and Weißhorn in South Tyrol
Ortstock, Glarus Alps
Aiguille du Dru in the Mont Blanc massif
Aiguille Verte in the Mont Blanc region
Other mountain ranges of Europe
Smolikas (Bogdani and Kapetan Tsekouras) in Greece
Bubenik in Upper Lusatia
Strohmberg in Upper Lusatia
Špičák (Sattelberg) in the Ore Mountains
Burgstadtl in the Duppau Mountains
Schanzberge near Tischberg, South Bohemia
Schwarze Mauer and Kamenec on the Upper Austrian-Bohemian border
Großer Auerberg in the Harz
Ehrenbürg, a Zeugenberg in Franconian Switzerland
Hohenstoffeln (volcano in the Hegau)
Berguedà in the Pyrenees
Pen y Fan in the Brecon Beacons
Asia
Hasan Dağı in the region of Cappadocia, Turkey
Ushba in Georgia
Elbrus (twin-peaked volcano) in the Caucasus
Raja Gyepang in Central Lahaul, India
Machapucharé in the Annapurna massif in the Himalayas, Nepal
Chogolisa in the Karakorum, Pakistan
Broad Peak with pre- and main summit in the Karakorum, China/Pakistan
Gasherbrum IV, southern neighbour of Broad Peak in the Karakorum, Pakistan
Other mountain regions
Mont Ross on the Kerguelen Islands
Pico Duarte on Hispaniola (Dominican Republic)
Chaupi Orco in the Andes
Ancohuma in the Andes
The Brothers in the Olympic Mountains (USA/Washington)
Double Peak in the Cascade Mountains (USA/Washington)
Mount Sopris in the Rocky Mountains (USA/Colorado)
Pilot Peak and Index Peak in Wyoming
Kaufmann Peaks in Banff National Park Canada
References
Summits
Geodesy
Cartography
Physical geography
Slope landforms
Topography
Oronyms | Double summit | Mathematics | 841 |
2,262,585 | https://en.wikipedia.org/wiki/Software%20package%20metrics | Various software package metrics are used in modular programming. They were described by Robert Cecil Martin in his 2002 book Agile Software Development: Principles, Patterns, and Practices.
The term software package here refers to a group of related classes in object-oriented programming.
Number of classes and interfaces: The number of concrete and abstract classes (and interfaces) in the package is an indicator of the extensibility of the package.
Afferent couplings (Ca): The number of classes in other packages that depend upon classes within the package is an indicator of the package's responsibility. Afferent couplings signal inward.
Efferent couplings (Ce): The number of classes in other packages that the classes in a package depend upon is an indicator of the package's dependence on externalities. Efferent couplings signal outward.
Abstractness (A): The ratio of the number of abstract classes (and interfaces) in the analyzed package to the total number of classes in the analyzed package. The range for this metric is 0 to 1, with A=0 indicating a completely concrete package and A=1 indicating a completely abstract package.
Instability (I): The ratio of efferent coupling (Ce) to total coupling (Ce + Ca) such that I = Ce / (Ce + Ca). This metric is an indicator of the package's resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package.
Distance from the main sequence (D): The perpendicular distance of a package from the idealized line A + I = 1. D is calculated as D = | A + I - 1 |. This metric is an indicator of the package's balance between abstractness and stability. A package squarely on the main sequence is optimally balanced with respect to its abstractness and stability. Ideal packages are either completely abstract and stable (I=0, A=1) or completely concrete and unstable (I=1, A=0). The range for this metric is 0 to 1, with D=0 indicating a package that is coincident with the main sequence and D=1 indicating a package that is as far from the main sequence as possible.
Package dependency cycles: Package dependency cycles are reported along with the hierarchical paths of packages participating in package dependency cycles.
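The abstractness, instability, and distance metrics above can be sketched in a few lines; the package representation here (plain counts of classes and dependencies) is an illustrative simplification, not Martin's notation:

```python
def package_metrics(abstract, concrete, incoming_deps, outgoing_deps):
    """Compute Martin's package metrics.

    abstract/concrete: counts of abstract and concrete classes in the package
    incoming_deps (Ca): outside classes that depend on this package
    outgoing_deps (Ce): outside classes that this package depends on
    """
    total = abstract + concrete
    A = abstract / total if total else 0.0    # abstractness
    Ca, Ce = incoming_deps, outgoing_deps
    I = Ce / (Ce + Ca) if (Ce + Ca) else 0.0  # instability
    D = abs(A + I - 1)                        # distance from the main sequence
    return A, I, D

# A stable, mostly abstract package (Ca=8, Ce=2) sits near the main sequence:
A, I, D = package_metrics(abstract=3, concrete=1, incoming_deps=8, outgoing_deps=2)
print(A, I, round(D, 2))  # 0.75 0.2 0.05
```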
See also
Dependency inversion principle – a method to reduce coupling (Martin 2002:127).
References
External links
OO Metrics tutorial explains package metrics with examples, but gets the Instability index wrong; see page 262 of Martin's Agile Software Development: Principles, Patterns and Practices (Pearson Education).
Software metrics
Object-oriented programming | Software package metrics | Mathematics,Engineering | 558 |
57,756,777 | https://en.wikipedia.org/wiki/R13%20%28drug%29 | R13 is a small-molecule flavonoid and orally active, potent, and selective agonist of the tropomyosin receptor kinase B (TrkB) – the main signaling receptor for the neurotrophin brain-derived neurotrophic factor (BDNF) – which is under development for the potential treatment of Alzheimer's disease. It is a structural modification and prodrug of tropoflavin (7,8-DHF) with improved potency and pharmacokinetics, namely oral bioavailability and duration. The compound is a replacement for the earlier tropoflavin prodrug R7 and has similar properties to it. It was developed because while R7 displayed a good drug profile in animal studies, it showed almost no conversion into tropoflavin in human liver microsomes. In contrast to R7, R13 is readily hydrolyzed into tropoflavin in human liver microsomes.
See also
List of investigational antidepressants
Tropomyosin receptor kinase B § Agonists
References
External links
7,8-Dihydoxyflavone and 7,8-substituted flavone derivatives, compositions, and methods related thereto (US9975868B2)
Antidementia agents
Carbamates
Esters
Experimental drugs
Flavones
Neuroprotective agents
Nootropics
Prodrugs
TrkB agonists | R13 (drug) | Chemistry | 308 |
36,337,786 | https://en.wikipedia.org/wiki/Trash%20hook | The trash hook is a tool used by firefighters for a variety of functions. The tool's primary purpose is to sift through trash during the overhaul stage of a dumpster fire. Secondarily, it can be used for roof ventilation and prying operations.
References
Firefighter tools
Hand tools | Trash hook | Engineering | 61 |
873,892 | https://en.wikipedia.org/wiki/London%20Hydraulic%20Power%20Company | The London Hydraulic Power Company was established in 1883 to install a hydraulic power network in London. This expanded to cover most of central London at its peak, before being replaced by electricity, with the final pump house closing in 1977.
History
The company was set up by an Act of Parliament (the London Hydraulic Power Act 1884), sponsored by railway engineer Sir James Allport, to install a network of high-pressure cast iron water mains under London. It merged the Wharves and Warehouses Steam Power and Hydraulic Pressure Company, founded in 1871 by Edward B. Ellington, and the General Hydraulic Power Company, founded in 1882. The network gradually expanded to cover an area mostly north of the Thames from Hyde Park in the west to Docklands in the east.
The system was used as a cleaner and more compact alternative to steam engines, to power workshop machinery, lifts, cranes, theatre machinery (including revolving stages at the London Palladium and the London Coliseum, safety curtains at the Theatre Royal, Drury Lane, the lifting mechanism for the cinema organ at the Leicester Square theatre and the complete Palm Court orchestra platform), and the backup mechanism of Tower Bridge. It was also used to supply fire hydrants, mostly those inside buildings. The water, pumped straight from the Thames, was heated in winter to prevent freezing.
Pumping stations
The pressure was maintained at a nominal 55 bar by five hydraulic power stations, originally driven by coal-fired steam engines. These were at:
Falcon Wharf Pumping Station at Bankside, east of Blackfriars Bridge on the south bank of the River Thames (opened in 1883)
Kensington Court and Millbank (1887), later (1911) replaced by a station in Grosvenor Road
Wapping Hydraulic Pumping Station (est. 1890), using the defunct Tower Subway to carry pipes under the Thames (closed on 30 June 1977, the last to be used)
City Road Basin on the Regent's Canal in Islington (1893), later used as the Marico furniture factory
Renforth Pump House (Rotherhithe, Canada Water) (opened in 1904), now residential accommodation
Short-term storage was provided by hydraulic accumulators, which were large vertical pistons loaded with heavy weights.
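The weight needed on such an accumulator follows directly from pressure times piston area; the piston diameter below is a hypothetical figure chosen for illustration, not a documented dimension of the company's equipment:

```python
import math

def accumulator_load_kg(pressure_pa, piston_diameter_m):
    """Mass of the weight stack whose gravity alone holds the stated
    pressure on the piston (force = pressure * area)."""
    area = math.pi * (piston_diameter_m / 2) ** 2
    force_n = pressure_pa * area
    return force_n / 9.81  # convert weight force to mass

# Nominal network pressure of 55 bar on a (hypothetical) 0.5 m diameter piston:
mass_t = accumulator_load_kg(55e5, 0.5) / 1000
print(round(mass_t))  # 110 (tonnes)
```

This is why hydraulic accumulators of the period carried loads on the order of a hundred tons.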
Cross-River Thames mains
The mains crossed the River Thames via Vauxhall Bridge, Waterloo Bridge and Southwark Bridge and via the Rotherhithe Tunnel as well as the Tower Subway.
Decline
The system pumped 6.5 million gallons of water each week in 1893; this grew to 32 million gallons in 1933.
From about 1904, business began to decline as electric power became more popular. The company began to replace its steam engines with electric motors from 1923. At its peak, the network consisted of an extensive system of pipes beneath central London.
The system finally closed in June 1977. The company, as a UK statutory authority, had the legal right to dig up the public highways to install and maintain its pipe network. This made it attractive to Mercury Communications (a subsidiary of Cable & Wireless) who bought the company and used the pipes as telecommunications ducts. Wapping Hydraulic Power Station, the last of the five to close, later became an arts centre and restaurant.
See also
Liverpool Hydraulic Power Company
Manchester Hydraulic Power
References
Further reading
Electric power companies of the United Kingdom
Hydraulics
Infrastructure in London
Subterranea of the United Kingdom
Utilities of the United Kingdom
Energy companies established in 1883
Companies disestablished in 1977
1883 establishments in England | London Hydraulic Power Company | Physics,Chemistry | 700 |
37,812,104 | https://en.wikipedia.org/wiki/HolE | In E. coli and other bacteria, holE is a gene that encodes the theta subunit of DNA polymerase III.
References
Bacterial proteins
DNA replication | HolE | Chemistry,Biology | 31 |
335,094 | https://en.wikipedia.org/wiki/Reducing%20atmosphere | A reducing atmosphere is an atmosphere in which oxidation is prevented by the absence of oxygen and other oxidizing gases or vapours, and which may contain actively reducing gases such as hydrogen, carbon monoxide, methane and hydrogen sulfide that would be readily oxidized to remove any free oxygen. The early Earth had a reducing prebiotic atmosphere; starting at about 2.5 billion years ago, in the late Neoarchaean period, the Earth's atmosphere experienced a significant rise in oxygen and transitioned to an oxidizing atmosphere with a surplus of molecular oxygen (dioxygen, O2) as the primary oxidizing agent.
Foundry operations
The principal mission of an iron foundry is the conversion of iron oxides (purified iron ores) to iron metal. This reduction is usually effected using a reducing atmosphere consisting of some mixture of natural gas, hydrogen (H2), and carbon monoxide. The byproduct is carbon dioxide.
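In symbols, the carbon monoxide- and hydrogen-based reductions of hematite can be written as (standard smelting stoichiometry, stated here for illustration rather than taken from the text):

```latex
\begin{align*}
\mathrm{Fe_2O_3 + 3\,CO} &\longrightarrow \mathrm{2\,Fe + 3\,CO_2}\\
\mathrm{Fe_2O_3 + 3\,H_2} &\longrightarrow \mathrm{2\,Fe + 3\,H_2O}
\end{align*}
```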
Metal processing
In metal processing, a reducing atmosphere is used in annealing ovens for relaxation of metal stresses without corroding the metal. A non-oxidizing gas, usually nitrogen or argon, is typically used as a carrier gas so that diluted amounts of reducing gases may be used. Typically, this is achieved through using the combustion products of fuels and tailoring the ratio of CO:CO2. However, other common reducing atmospheres in the metal processing industries consist of dissociated ammonia, vacuum, and direct mixing of appropriately pure gases of N2, Ar, and H2.
A reducing atmosphere is also used to produce specific effects on ceramic wares being fired. A reduction atmosphere is produced in a fuel fired kiln by reducing the draft and depriving the kiln of oxygen. This diminished level of oxygen causes incomplete combustion of the fuel and raises the level of carbon inside the kiln. At high temperatures the carbon will bond with and remove the oxygen in the metal oxides used as colorants in the glazes. This loss of oxygen results in a change in the color of the glazes because it allows the metals in the glaze to be seen in an unoxidized form. A reduction atmosphere can also affect the color of the clay body. If iron is present in the clay body, as it is in most stoneware, then it will be affected by the reduction atmosphere as well.
In most commercial incinerators, exactly the same conditions are created to encourage the release of carbon-bearing fumes. These fumes are then oxidized in reburn tunnels where oxygen is injected progressively. The exothermic oxidation reaction maintains the temperature of the reburn tunnels. This system allows lower temperatures to be employed in the incinerator section, where the solids are volumetrically reduced.
Origin of life
The atmosphere of Early Earth is widely speculated to have been reducing. The Miller–Urey experiment, related to some hypotheses for the origin of life, entailed reactions in a reducing atmosphere composed of a mixed atmosphere of methane, ammonia and hydrogen sulfide. Some hypotheses for the origin of life invoke a reducing atmosphere consisting of hydrogen cyanide (HCN). Experiments show that HCN can polymerize in the presence of ammonia to give a variety of products including amino acids. The same principle applies to Mars, Venus and Titan.
Cyanobacteria are suspected to be the first photoautotrophs to evolve oxygenic photosynthesis, which over the latter half of the Archaean eon eventually depleted all reductants in the Earth's oceans, terrestrial surface and atmosphere, gradually increasing the oxygen concentration in the atmosphere and changing it to what is known as an oxidizing atmosphere. This rising oxygen initially led to a 300 million-year-long ice age that devastated the then mostly anaerobe-dominated biosphere, forcing the surviving anaerobic colonies to evolve into symbiotic microbial mats with the newly evolved aerobes. Some aerobic bacteria eventually became endosymbionts within other anaerobes (likely archaea), and the resultant symbiogenesis led to the evolution of a completely new lineage of life: the eukaryotes, which took advantage of mitochondrial aerobic respiration to power their cellular activities, allowing life to thrive and evolve into ever more complex forms. The increased oxygen in the atmosphere also eventually created the ozone layer, which shielded out harmful ionizing ultraviolet radiation that would otherwise have photodissociated surface water and rendered life impossible on land and at the ocean surface.
In contrast to the hypothesized early reducing atmosphere, evidence exists that Hadean atmospheric oxygen levels were similar to those of today. These results suggest that prebiotic building blocks were delivered from elsewhere in the galaxy. The results, however, do not run contrary to existing theories on life's journey from anaerobic to aerobic organisms. The results quantify the nature of gas molecules containing carbon, hydrogen, and sulphur in the earliest atmosphere, but they shed no light on the much later rise of free oxygen in the air.
See also
Notes
Metallurgy
Planetary science
Pottery
Redox | Reducing atmosphere | Chemistry,Materials_science,Astronomy,Engineering | 1,067 |
13,937,760 | https://en.wikipedia.org/wiki/Collinder%20catalogue | The Collinder catalogue is a catalogue of 471 open clusters compiled by Swedish astronomer Per Collinder. It was published in 1931 as an appendix to Collinder's paper On structural properties of open galactic clusters and their spatial distribution.
The catalogue contains 452 open clusters, 11 globular clusters, 6 asterisms, 1 stellar moving group, and 1 stellar association. Catalogue objects are denoted by Collinder plus the catalogue number, e.g. "Collinder 399". Abbreviated prefixes include Col + catalogue number and Cr + catalogue number, e.g. "Cr 399".
Collinder objects
Notes
Errors
There are some errors in Collinder's list or references to it. For example:
Cr 21, 27, 57, 396, 399, and 426 are asterisms.
Cr 32, 33, and 34 all refer to parts of the much larger IC 1848.
There is some doubt as to whether or not Cr 84, 182, 221, 254, 265, 269, 283, 294, 336, 387, 404, 425, 456, and 458 are open clusters.
The positions of Cr 109 and 185 are inaccurate.
Cr 202 is actually the central condensation of the much larger Cr 199.
Cr 220 was believed by Collinder to be NGC 3247 when in reality he had discovered a new open cluster.
Cr 234 was applied to the southern section of the much larger Cr 233.
Cr 240 is actually the central condensation of the much larger Cr 239.
Cr 267, 328, 330, 346, 364, 366, 368, 381, 395, 409, and 414 are globular clusters.
Cr 334 and 335 are duplicate listings of the same object.
The original alias given for Cr 339 is the galaxy NGC 6393. The correct alias for Cr 339 is the open cluster NGC 6396.
The original alias given for Cr 371 is of the nebula which surrounds an open cluster he discovered. He apparently did not know he was first to make the distinction.
Cr 374 is embedded within the much larger Messier 24.
Collinder erroneously believed that Messier 11 was a globular cluster.
Collinder's description of Messier 73 is actually of Messier 72, a globular cluster, and not of the object he intended, Cr 426.
See also
List of astronomical catalogues
Melotte catalogue - a similar catalogue of star clusters published by Philibert Jacques Melotte in 1915.
Trumpler catalogue - a similar catalogue of open star clusters published by Robert Julius Trumpler in 1930, one year before Per Collinder.
References
External links
An annotated version of the Collinder catalogue by Thomas Watson
Astronomical catalogues
Open clusters | Collinder catalogue | Astronomy | 558 |
23,847,917 | https://en.wikipedia.org/wiki/4D-RCS%20Reference%20Model%20Architecture | The 4D/RCS Reference Model Architecture is a reference model for military unmanned vehicles on how their software components should be identified and organized.
The 4D/RCS has been developed by the Intelligent Systems Division (ISD) of the National Institute of Standards and Technology (NIST) since the 1980s.
This reference model is based on the general Real-time Control System (RCS) Reference Model Architecture, and has been applied to many kinds of robot control, including autonomous vehicle control.
Overview
4D/RCS is a reference model architecture that provides a theoretical foundation for designing, engineering, and integrating intelligent systems software for unmanned ground vehicles.
According to Balakirsky (2003) 4D/RCS is an example of deliberative agent architecture. These architectures "include all systems that plan to meet future goal or deadline. In general, these systems plan on a model of the world rather than planning directly on processed sensor output. This may be accomplished by real-time sensors, a priori information, or a combination of the two in order to create a picture or snapshot of the world that is used to update a world model". The course of action of a deliberative agent architecture is based on the world model and the commanded mission goal, see image. This goal "may be a given system state or physical location. To meet the goal systems of this kind attempts to compute a path through a multi-dimensional space contained in the real world".
The 4D/RCS is a hierarchical deliberative architecture, that "plans up to the subsystem level to compute plans for an autonomous vehicle driving over rough terrain. In this system, the world model contains a pre-computed dictionary of possible vehicle trajectories known as an ego-graph as well as information from the real-time sensor processing. The trajectories are computed based on a discrete set of possible vehicle velocities and starting steering angles. All of the trajectories are guaranteed to be dynamically correct for the given velocity and steering angle. The systems runs under a fixed planning cycle, with the sensed information being updated into the world model at the beginning of the cycle. These update information include information on what area is currently under observation by the sensors, where detected obstacles exist, and vehicle status".
History
The National Institute of Standards and Technology's (NIST) Intelligent Systems Division (ISD) has been developing the RCS reference model architecture for over 30 years. 4D/RCS is the most recent version of RCS, developed for the Army Research Lab Experimental Unmanned Ground Vehicle program. The 4D in 4D/RCS signifies adding time as a fourth dimension to each level of the three-dimensional (sensory processing, world modeling, behavior generation) hierarchical control structure. ISD has studied the use of 4D/RCS in defense mobility, transportation, robot cranes, manufacturing, and several other applications.
4D/RCS integrates the NIST Real-time Control System (RCS) architecture with the German (Bundeswehr University of Munich) VaMoRs 4-D approach to dynamic machine vision. It incorporates many concepts developed under the U.S. Department of Defense Demo I, Demo II, and Demo III programs, which demonstrated increasing levels of robotic vehicle autonomy. The theory embodied in 4D/RCS borrows heavily from cognitive psychology, semiotics, neuroscience, and artificial intelligence.
Three US Government funded military efforts, known as Demo I (US Army), Demo II (DARPA), and Demo III (US Army), have been conducted. Demo III (2001) demonstrated the ability of unmanned ground vehicles to navigate miles of difficult off-road terrain, avoiding obstacles such as rocks and trees. James Albus at NIST provided the Real-time Control System, a hierarchical control system. Not only were individual vehicles controlled (e.g. throttle, steering, and brake), but groups of vehicles had their movements automatically coordinated in response to high-level goals.
In 2002, the DARPA Grand Challenge competitions were announced. The 2004 and 2005 DARPA competitions allowed international teams to compete in fully autonomous vehicle races over rough unpaved terrain and in a non-populated suburban setting. The 2007 DARPA challenge, the DARPA urban challenge, involved autonomous cars driving in an urban setting.
4D/RCS Building blocks
The 4D/RCS architecture is characterized by a generic control node at all the hierarchical control levels. The 4D/RCS hierarchical levels are scalable to facilitate systems of any degree of complexity. Each node within the hierarchy functions as a goal-driven, model-based, closed-loop controller. Each node is capable of accepting and decomposing task commands with goals into actions that accomplish task goals despite unexpected conditions and dynamic perturbations in the world.
4D/RCS Hierarchy
4D/RCS prescribes a hierarchical control principle that decomposes high-level commands into actions that employ physical actuators and sensors. The figure, for example, shows a high-level block diagram of a 4D/RCS reference model architecture for a notional Future Combat System (FCS) battalion. Commands flow down the hierarchy, and status feedback and sensory information flow up. Large amounts of communication may occur between nodes at the same level, particularly within the same subtree of the command tree:
At the Servo level : Commands to actuator groups are decomposed into control signals to individual actuators.
At the Primitive level : Multiple actuator groups are coordinated and dynamical interactions between actuator groups are taken into account.
At the Subsystem level : All the components within an entire subsystem are coordinated, and planning takes into consideration issues such as obstacle avoidance and gaze control.
At the Vehicle level : All the subsystems within an entire vehicle are coordinated to generate tactical behaviors.
At the Section level : Multiple vehicles are coordinated to generate joint tactical behaviors.
At the Platoon level : Multiple sections containing a total of 10 or more vehicles of different types are coordinated to generate platoon tactics.
At the Company level : Multiple platoons containing a total of 40 or more vehicles of different types are coordinated to generate company tactics.
At the Battalion level : Multiple companies containing a total of 160 or more vehicles of different types are coordinated to generate battalion tactics.
At all levels, task commands are decomposed into jobs for lower level units and coordinated schedules for subordinates are generated. At all levels, communication between peers enables coordinated actions. At all levels, feedback from lower levels is used to cycle subtasks and to compensate for deviations from the planned situations.
4D/RCS control loop
At the heart of the control loop through each node is the world model, which provides the node with an internal model of the external world. The world model provides a site for data fusion, acts as a buffer between perception and behavior, and supports both sensory processing and behavior generation.
A high level diagram of the internal structure of the world model and value judgment system is shown in the figure. Within the knowledge database, iconic information (images and maps) is linked to each other and to symbolic information (entities and events). Situations and relationships between entities, events, images, and maps are represented by pointers. Pointers that link symbolic data structures to each other form syntactic, semantic, causal, and situational networks. Pointers that link symbolic data structures to regions in images and maps provide symbol grounding and enable the world model to project its understanding of reality onto the physical world.
Sensory processing performs the functions of windowing, grouping, computation, estimation, and classification on input from sensors. World modeling maintains knowledge in the form of images, maps, entities, and events with states, attributes, and values. Relationships between images, maps, entities, and events are defined by pointers. These relationships include class membership, ontologies, situations, and inheritance. Value judgment provides criteria for decision making. Behavior generation is responsible for planning and execution of behaviors.
Computational nodes
The 4D/RCS nodes have internal structure such as shown in the figure. Within each node there typically are four functional elements or processes:
behavior generation,
world modeling,
sensory processing, and
value judgment.
There is also a knowledge database that represents the node's best estimate of the state of the world at the range and resolution that are appropriate for the behavioral decisions that are the responsibility of that node.
These are supported by a knowledge database, and a communication system that interconnects the functional processes and the knowledge database. Each functional element in the node may have an operator interface. The connections to the Operator Interface enable a human operator to input commands, to override or modify system behavior, to perform various types of teleoperation, to switch control modes (e.g., automatic, teleoperation, single step, pause), and to observe the values of state variables, images, maps, and entity attributes. The Operator Interface can also be used for programming, debugging, and maintenance.
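A minimal sketch of one generic node's control cycle, with the four functional elements reduced to plain Python methods; the class name, data structures, and toy cost function are illustrative assumptions, not NIST's reference code:

```python
class RcsNode:
    """One 4D/RCS control node: sensory processing updates a world model,
    behavior generation plans against that model, value judgment scores plans."""

    def __init__(self, name):
        self.name = name
        self.world_model = {}  # node's best estimate of the world state

    def sensory_processing(self, observations):
        # Windowing/grouping/estimation collapsed to a dict update here.
        self.world_model.update(observations)

    def value_judgment(self, plan):
        # Toy decision criterion: prefer shorter plans.
        return -len(plan)

    def behavior_generation(self, goal):
        # Decompose the commanded goal into candidate subtask sequences
        # and select the one with the best value judgment.
        candidates = [[goal], [goal + "/approach", goal + "/execute"]]
        return max(candidates, key=self.value_judgment)

node = RcsNode("vehicle")
node.sensory_processing({"obstacle_ahead": False})
plan = node.behavior_generation("reach_waypoint")
print(plan)  # ['reach_waypoint']
```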
Five levels of the architecture
The figure is a computational hierarchy view of the first five levels in the chain of command containing the Autonomous Mobility Subsystem in the 4D/RCS architecture developed for Demo III. On the right of the figure, Behavior Generation (consisting of Planner and Executor) decomposes high-level mission commands into low-level actions. The text inside the Planner at each level indicates the planning horizon at that level.
In the center of the figure, each map has a range and resolution that is appropriate for path planning at its level. At each level, there are symbolic data structures and segmented images with labeled regions that describe entities, events, and situations that are relevant to decisions that must be made at that level. On the left is a sensory processing hierarchy that extracts information from the sensory data stream that is needed to keep the world model knowledge database current and accurate.
The bottom (Servo) level has no map representation. The Servo level deals with actuator dynamics and reacts to sensory feedback from actuator sensors. The Primitive level map has range of 5 m with resolution of 4 cm. This enables the vehicle to make small path corrections to avoid bumps and ruts during the 500 ms planning horizon of the Primitive level. The Primitive level also uses accelerometer data to control vehicle dynamics and prevent rollover during high speed driving.
At all levels, 4D/RCS planners are designed to generate new plans well before current plans become obsolete. Thus, action always takes place in the context of a recent plan, and feedback through the executors closes reactive control loops using recently selected control parameters. To meet the demands of dynamic battlefield environments, the 4D/RCS architecture specifies that replanning should occur within about one-tenth of the planning horizon at each level.
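The one-tenth rule translates directly into a replanning interval per level; the text gives only the Primitive level's 500 ms horizon, so the other horizon values below are hypothetical placeholders:

```python
# Replanning interval = planning_horizon / 10 (the 4D/RCS rule of thumb).
horizons_s = {"Primitive": 0.5, "Subsystem": 5.0, "Vehicle": 50.0}  # only Primitive is from the text
replan_s = {level: h / 10 for level, h in horizons_s.items()}
print(replan_s["Primitive"])  # 0.05 (i.e., replan every 50 ms)
```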
Inter-Node Interactions within a Hierarchy
Sensory processing and behavior generation are both hierarchical processes, and both are embedded in the nodes that form the 4D/RCS organizational hierarchy. However, the SP and BG hierarchies are quite different in nature and are not directly coupled. Behavior generation is a hierarchy based on the decomposition of tasks and the assignment of tasks to operational units. Sensory processing is a hierarchy based on the grouping of signals and pixels into entities and events. In 4D/RCS, the hierarchies of sensory processing and behavior generation are separated by a hierarchy of world modeling processes. The WM hierarchy provides a buffer between the SP and BG hierarchies with interfaces to both.
Criticisms
There have been major criticisms of this architectural form. According to Balakirsky (2003), these stem from the fact that "the planning is performed on a model of the world rather than on the actual world, and the complexity of the computing large plans... Since the world is not static, and may change during this time delay that occurs between sensing, plan conception, and final execution, the validation of the computed plans have been called into question".
References
Further reading
Albus, J.S. (1988). System Description and Design Architecture for Multiple Autonomous Undersea Vehicles. NIST TN 1251, National Institute of Standards and Technology, Gaithersburg, MD, September 1988.
James S. Albus (2002). "4D/RCS A Reference Model Architecture for Intelligent Unmanned Ground Vehicles". In: Proceedings of the SPIE 16th Annual International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, April 1–5, 2002.
James Albus et al. (2002). 4D/RCS: A Reference Model Architecture For Unmanned Vehicle Systems Version 2.2. NIST August 2002
External links
RCS The Real-time Control Systems Architecture NIST Homepage
Control theory
Industrial computing
Uncrewed vehicles | 4D-RCS Reference Model Architecture | Mathematics,Technology,Engineering | 2,611 |
78,742,238 | https://en.wikipedia.org/wiki/Nivolumab/hyaluronidase | Nivolumab/hyaluronidase, sold under the brand name Opdivo Qvantig, is a fixed-dose combination anti-cancer medication used for the treatment of various forms of cancer. Nivolumab/hyaluronidase contains nivolumab, a programmed death receptor-1 (PD-1)-blocking monoclonal antibody; and hyaluronidase, an endoglycosidase. It is given by subcutaneous injection.
Nivolumab/hyaluronidase was approved for medical use in the United States in December 2024.
Medical uses
In December 2024, the US Food and Drug Administration (FDA) approved the combination of nivolumab and hyaluronidase across approved adult, solid tumor nivolumab indications as monotherapy, monotherapy maintenance following completion of hyaluronidase plus ipilimumab combination therapy, or in combination with chemotherapy or cabozantinib. The approval includes indications for renal cell carcinoma, melanoma, non-small cell lung cancer, head and neck squamous cell carcinoma, urothelial carcinoma, colorectal cancer, hepatocellular carcinoma, esophageal carcinoma, gastric cancer, gastroesophageal junction cancer, and esophageal adenocarcinoma.
History
The subcutaneous injection of nivolumab and hyaluronidase was evaluated in CHECKMATE-67T (NCT04810078), a multicenter, randomized, open-label trial in participants with advanced or metastatic clear cell renal cell carcinoma who received no more than two prior systemic treatment regimens. A total of 495 participants were randomized to receive either subcutaneous nivolumab and hyaluronidase or intravenous nivolumab.
Society and culture
Legal status
Nivolumab/hyaluronidase was approved for medical use in the United States in December 2024.
References
External links
Antineoplastic drugs
Combination drugs
Drugs developed by Bristol Myers Squibb
Monoclonal antibodies for tumors | Nivolumab/hyaluronidase | Chemistry | 450 |
19,866,616 | https://en.wikipedia.org/wiki/Benzyltrimethylammonium%20fluoride | Benzyltrimethylammonium fluoride is a quaternary ammonium salt. It is commercially available as the hydrate. The compound is a source of organic-soluble fluoride used for the removal of silyl ether protecting groups. As is the case for tetra-n-butylammonium fluoride and most other quaternary ammonium fluorides, the compound cannot be obtained in anhydrous form.
References
Quaternary ammonium compounds
Fluorides
Reagents for organic chemistry
Benzyl compounds | Benzyltrimethylammonium fluoride | Chemistry | 112 |
5,774,572 | https://en.wikipedia.org/wiki/Ship%20resistance%20and%20propulsion | A ship must be designed to move efficiently through the water with a minimum of external force. For thousands of years ship designers and builders of sailing vessels used rules of thumb based on the midship-section area to size the sails for a given vessel. The hull form and sail plan for the clipper ships, for example, evolved from experience, not from theory. It was not until the advent of steam power and the construction of large iron ships in the mid-19th century that it became clear to ship owners and builders that a more rigorous approach was needed.
Definition
Ship resistance is defined as the force required to tow the ship in calm water at a constant velocity.
Components of resistance
A body in water which is stationary with respect to water, experiences only hydrostatic pressure. Hydrostatic pressure always acts to oppose the weight of the body. The total (upward) force due to this buoyancy is equal to the (downward) weight of the displaced water. If the body is in motion, then there are also hydrodynamic pressures that act on the body. For a displacement vessel, that is the usual type of ship, three main types of resistance are considered: that due to wave-making, that due to the pressure of the moving water on the form, often not calculated or measured separately, and that due to friction of moving water on the wetted surface of the hull. These can be split up into more components:
Froude's experiments
Froude's method for extrapolating the results of model tests to ships was adopted in the 1870s. Another method, created by Hughes, was introduced in the 1950s and later adopted by the International Towing Tank Conference (ITTC). Froude's method tends to overestimate the power for very large ships.
Froude had observed that when a ship or model was at its so-called Hull speed the wave pattern of the transverse waves (the waves along the hull) have a wavelength equal to the length of the waterline. This means that the ship's bow was riding on one wave crest and so was its stern. This is often called the hull speed and is a function of the length of the ship
V = k√L

where the constant (k) should be taken as: 2.43 for velocity (V) in kn and length (L) in metres (m) or, 1.34 for velocity (V) in kn and length (L) in feet (ft).
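The hull-speed relation above is a one-line computation. The sketch below evaluates it for both unit systems; the 100 m waterline length is an illustrative value.

```python
import math

def hull_speed_knots(waterline_length, k):
    """Hull speed V = k * sqrt(L), in knots.
    k = 2.43 for L in metres, k = 1.34 for L in feet."""
    return k * math.sqrt(waterline_length)

print(round(hull_speed_knots(100.0, 2.43), 1))  # 100 m waterline -> ~24.3 kn
print(round(hull_speed_knots(328.1, 1.34), 1))  # same ship measured in feet
```

The two constants are consistent: 1.34 × √3.2808 ≈ 2.43, so both calls return the same speed to within rounding.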
Observing this, Froude realized that the ship resistance problem had to be broken into two different parts: residuary resistance (mainly wave making resistance) and frictional resistance. To get the proper residuary resistance, it was necessary to recreate the wave train created by the ship in the model tests. He found for any ship and geometrically similar model towed at the suitable speed that:
There is a frictional drag that is given by the shear due to the viscosity. This can result in 50% of the total resistance in fast ship designs and 80% of the total resistance in slower ship designs.
To account for the frictional resistance Froude decided to tow a series of flat plates and measure the resistance of these plates, which were of the same wetted surface area and length as the model ship, and subtract this frictional resistance from the total resistance and get the remainder as the residuary resistance.
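Froude's procedure can be sketched numerically: tow the model at the corresponding speed (equal Froude number, V_model = V_ship/√scale), subtract the measured plate friction from the model's total resistance, and scale the residuary resistance by the cube of the scale ratio (same fluid, geometrically similar hulls). All numbers below are illustrative, not measured data.

```python
import math

def corresponding_speed(ship_speed, scale):
    """Model speed giving the same Froude number: V_m = V_s / sqrt(scale)."""
    return ship_speed / math.sqrt(scale)

def ship_residuary_resistance(model_total_N, model_friction_N, scale):
    """Froude's method (sketch): model residuary resistance scaled
    by the cube of the linear scale ratio."""
    residuary_model = model_total_N - model_friction_N
    return residuary_model * scale ** 3

scale = 25.0  # ship is 25x the model length
print(corresponding_speed(10.0, scale))              # 2.0 m/s model speed for a 10 m/s ship
print(ship_residuary_resistance(40.0, 25.0, scale))  # (40 - 25) N * 25^3 = 234375 N
```

The ship's frictional resistance is then estimated separately (from plate data or a correlation line) and added back to the scaled residuary resistance.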
Friction
(Main article: Skin friction drag) In a viscous fluid, a boundary layer is formed. This causes a net drag due to friction. The boundary layer undergoes shear at different rates extending from the hull surface until it reaches the field flow of the water.
Wave-making resistance
(Main article: Wave-making resistance) A ship moving over the surface of undisturbed water sets up waves emanating mainly from the bow and stern of the ship. The waves created by the ship consist of divergent and transverse waves. The divergent waves are observed as the wake of a ship with a series of diagonal or oblique crests moving outwardly from the point of disturbance. These waves were first studied by William Thomson, 1st Baron Kelvin, who found that regardless of the speed of the ship, they were always contained within the 39° wedge shape (19.5° on each side) following the ship. The divergent waves do not cause much resistance against the ship's forward motion. However, the transverse waves appear as troughs and crests along the length of a ship and constitute the major part of the wave-making resistance of a ship. The energy associated with the transverse wave system travels at the group velocity of the waves, which is one half of their phase velocity. The prime mover of the vessel must put additional energy into the system in order to overcome this expense of energy. The relationship between the velocity of ships and that of the transverse waves can be found by equating the wave celerity and the ship's velocity.
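Equating the ship's speed with the deep-water wave celerity c = √(gλ/2π) gives the transverse wavelength λ = 2πV²/g; when this wavelength equals the waterline length, the ship is at its hull speed. A minimal sketch (the 12.5 m/s speed is illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def transverse_wavelength(ship_speed):
    """Deep-water wavelength whose celerity equals the ship's speed:
    lambda = 2 * pi * V^2 / g."""
    return 2 * math.pi * ship_speed ** 2 / G

V = 12.5  # m/s, illustrative
lam = transverse_wavelength(V)
print(round(lam, 1))  # ~100.1 m: the waterline length for which 12.5 m/s is hull speed
```

Inverting the same relation, V = √(gL/2π) ≈ 1.25 √L in m/s, which reproduces the 2.43 kn·m^(−1/2) hull-speed constant quoted earlier.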
Propulsion
(Main article: Marine propulsion) Ships can be propelled by numerous sources of power: human, animal, or wind power (sails, kites, rotors and turbines), water currents, chemical or atomic fuels and stored electricity, pressure, heat or solar power supplying engines and motors. Most of these can propel a ship directly (e.g. by towing or chain), via hydrodynamic drag devices (e.g. oars and paddle wheels) and via hydrodynamic lift devices (e.g. propellers or jets). A few exotic means also exist, such as "fish-tail propulsion", rockets or magnetohydrodynamic propulsion.
See also
William Froude
References
E. V. Lewis, ed., Principles of Naval Architecture, vol. 2 (1988)
Naval architecture | Ship resistance and propulsion | Engineering | 1,167 |
1,906,495 | https://en.wikipedia.org/wiki/Software%20rot | Software rot (bit rot, code rot, software erosion, software decay, or software entropy) is the degradation, deterioration, or loss of the use or performance of software over time.
From a software user-experience perspective, software rot reflects the evolution of the operating environment, including the hardware. The most basic cause of an objective loss of the practical use of software is the loss of the host system as a practical operating environment.
The Jargon File, a compendium of hacker lore, defines "bit rot" as a jocular explanation for the degradation of a software program over time even if "nothing has changed"; the idea behind this is almost as if the bits that make up the program were subject to radioactive decay.
Causes
Several factors are responsible for software rot, including changes to the environment in which the software operates, degradation of compatibility between parts of the software itself, and the emergence of bugs in unused or rarely used code.
Environment change
When changes occur in the program's environment, particularly changes which the designer of the program did not anticipate, the software may no longer operate as originally intended. For example, many early computer game designers used the CPU clock speed as a timer in their games. However, newer CPU clocks were faster, so the gameplay speed increased accordingly, making the games less usable over time.
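The CPU-clock timing problem described above is avoided by advancing the simulation by elapsed wall-clock time instead of by iteration count. A minimal sketch (function and variable names are illustrative):

```python
import time

def update_position(pos, speed, dt):
    """Advance by elapsed time, not by loop iteration, so the game
    runs at the same rate on fast and slow CPUs."""
    return pos + speed * dt

pos, speed = 0.0, 5.0          # units per second
last = time.monotonic()
for _ in range(3):             # main loop runs as fast as the CPU allows...
    now = time.monotonic()
    pos = update_position(pos, speed, now - last)  # ...but motion depends only on dt
    last = now
```

A loop written instead as `pos += 5.0` per iteration would, like the early games described above, speed up on every faster CPU.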
Onceability
There are changes in the environment not related to the program's designer, but its users. Initially, a user could bring the system into working order, and have it working flawlessly for a certain amount of time. But, when the system stops working correctly, or the users want to access the configuration controls, they cannot repeat that initial step because of the different context and the unavailable information (password lost, missing instructions, or simply a hard-to-manage user interface that was first configured by trial and error). Information architect Jonas Söderström has named this concept onceability, and defines it as "the quality in a technical system that prevents a user from restoring the system, once it has failed".
Unused code
Infrequently used portions of code, such as document filters or interfaces designed to be used by other programs, may contain bugs that go unnoticed. With changes in user requirements and other external factors, this code may be executed later, thereby exposing the bugs and making the software appear less functional.
Rarely updated code
Normal maintenance of software and systems may also cause software rot. In particular, when a program contains multiple parts which function at arm's length from one another, failing to consider how changes to one part affect the others may introduce bugs.
In some cases, this may take the form of libraries that the software uses being changed in a way which adversely affects the software. If the old version of a library that previously worked with the software can no longer be used due to conflicts with other software or security flaws that were found in the old version, there may no longer be a viable version of a needed library for the program to use.
Online connectivity
Modern commercial software often connects to an online server for license verification and accessing information. If the online service powering the software is shut down, it may stop working.
Since the late 2010s most websites use secure HTTPS connections. However, this requires encryption keys called root certificates which have expiration dates. After the certificates expire, the device loses connectivity to most websites unless the keys are continuously updated.
Another issue is that the old encryption standards TLS 1.0 and TLS 1.1 were deprecated in March 2021. This means that operating systems, browsers and other online software that do not support at least TLS 1.2 cannot connect to most websites, even to download patches or update the browser, if these are available. This is occasionally called the "TLS apocalypse".
Products that cannot connect to most websites include PowerMacs, old Unix boxes and Microsoft Windows versions older than Server 2008/Windows 7 (at least without the use of a third-party browser).
The Internet Explorer 8 browser in Server 2008/Windows 7 does support TLS 1.2 but it is disabled by default.
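On the server (or modern client) side, the TLS floor described above is typically enforced explicitly. A minimal sketch using Python's standard `ssl` module (available since Python 3.7); no connection is actually opened here:

```python
import ssl

# Refuse TLS 1.0/1.1 peers: set the minimum negotiable protocol version.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```

Software stuck on an SSL library that cannot negotiate TLS 1.2 or later is rejected during the handshake by contexts configured this way, which is the mechanism behind the connectivity loss described above.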
Classification
Software rot is usually classified as being either 'dormant rot' or 'active rot'.
Dormant rot
Software that is not currently being used gradually becomes unusable as the remainder of the application changes. Changes in user requirements and the software environment also contribute to the deterioration.
Active rot
Software that is being continuously modified may lose its integrity over time if proper mitigating processes are not consistently applied. However, much software requires continuous changes to meet new requirements and correct bugs, and re-engineering software each time a change is made is rarely practical. This creates what is essentially an evolution process for the program, causing it to depart from the original engineered design. As a consequence of this and a changing environment, assumptions made by the original designers may be invalidated, thereby introducing bugs.
In practice, adding new features may be prioritized over updating documentation; without documentation, however, it is possible for specific knowledge pertaining to parts of the program to be lost. To some extent, this can be mitigated by following best current practices for coding conventions.
Active software rot slows once an application is near the end of its commercial life and further development ceases. Users often learn to work around any remaining software bugs, and the behaviour of the software becomes consistent as nothing is changing.
Examples
AI program example
Many seminal programs from the early days of AI research have suffered from irreparable software rot. For example, the original SHRDLU program (an early natural language understanding program) cannot be run on any modern-day computer or computer simulator, as it was developed during the days when LISP and PLANNER were still in development stage and thus uses non-standard macros and software libraries which do not exist anymore.
Forked online forum example
Suppose an administrator creates a forum using open source forum software, and then heavily modifies it by adding new features and options. This process requires extensive modifications to existing code and deviation from the original functionality of that software.
From here, there are several ways software rot can affect the system:
The administrator can accidentally make changes which conflict with each other or the original software, causing the forum to behave unexpectedly or break down altogether. This leaves them in a very bad position: as they have deviated so greatly from the original code, technical support and assistance in reviving the forum will be difficult to obtain.
A security hole may be discovered in the original forum source code, requiring a security patch. However, because the administrator has modified the code so extensively, the patch may not be directly applicable to their code, requiring the administrator to effectively rewrite the update.
The administrator who made the modifications could vacate their position, leaving the new administrator with a convoluted and heavily modified forum that lacks full documentation. Without fully understanding the modifications, it is difficult for the new administrator to make changes without introducing conflicts and bugs. Furthermore, documentation of the original system may no longer be available, or worse yet, misleading due to subtle differences in functional requirements.
Wiki example
Suppose a webmaster installs the latest version of MediaWiki, the software that powers wikis such as Wikipedia, then never applies any updates. Over time, the web host is likely to update their versions of the programming language (such as PHP) and the database (such as MariaDB) without consulting the webmaster. After a long enough time, this will eventually break complex websites that have not been updated, because the latest versions of PHP and MariaDB will have breaking changes as they hard deprecate certain built-in functions, breaking backwards compatibility and causing fatal errors. Other problems that can arise with un-updated website software include security vulnerabilities and spam.
Refactoring
Refactoring is a means of addressing the problem of software rot. It is described as the process of rewriting existing code to improve its structure without affecting its external behaviour. This includes removing dead code and rewriting sections that have been modified extensively and no longer work efficiently. Care must be taken not to change the software's external behaviour, as this could introduce incompatibilities and thereby itself contribute to software rot. Some design principles to consider when refactoring include maintaining the hierarchical structure of the code and implementing abstraction to simplify and generalize code structures.
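A before/after sketch of behaviour-preserving refactoring (the pricing example and all names are invented for illustration): a magic number is extracted into a named constant and duplicated logic into a helper, while the external behaviour stays identical.

```python
# Before: inlined logic with a magic number (illustrative example).
def total_price_old(items):
    total = 0.0
    for price, qty in items:
        total += price * qty
    total += total * 0.2  # tax rate buried in the code
    return total

# After: same external behaviour, clearer structure.
TAX_RATE = 0.2

def subtotal(items):
    return sum(price * qty for price, qty in items)

def total_price(items):
    return subtotal(items) * (1 + TAX_RATE)

items = [(10.0, 2), (5.0, 1)]
# The key refactoring invariant: old and new versions agree.
assert abs(total_price(items) - total_price_old(items)) < 1e-9
```

A regression test like the final assertion is what guards against the incompatibilities mentioned above: if the refactor changes observable behaviour, the check fails.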
Software entropy
Software entropy describes a tendency for repairs and modifications to a software system to cause it to gradually lose structure or increase in complexity. Manny Lehman used the term entropy in 1974 to describe the complexity of a software system, and to draw an analogy to the second law of thermodynamics. Lehman's laws of software evolution state that a complex software system will require continuous modifications to maintain its relevance to the environment around it, and that such modifications will increase the system's entropy unless specific work is done to reduce it.
Ivar Jacobson et al. in 1992 described software entropy similarly, and argued that this increase in disorder as a system is modified would always eventually make a software system uneconomical to maintain, although the time until that happens is greatly dependent on its initial design, and can be extended by refactoring.
In 1999, Andrew Hunt and David Thomas used fixing broken windows as a metaphor for avoiding software entropy in software development.
See also
Code smell
Dependency hell
Generation loss
Software bloat
Software brittleness
References
Software quality
Software maintenance
Software engineering folklore | Software rot | Engineering | 1,910 |
44,933,789 | https://en.wikipedia.org/wiki/Leapfrog%20filter | A leapfrog filter is a type of active circuit electronic filter that simulates a passive electronic ladder filter. Other names for this type of filter are active-ladder or multiple feedback filter. The arrangement of feedback loops in the signal flow-graph of the simulated ladder filter inspired the name leapfrog filter, which was coined by Girling and Good. The leapfrog filter maintains the low component sensitivity of the passive ladder filter that it simulates.
Synthesis
The definition and synthesis of leapfrog filters is described by Temes & LaPatra, Sedra & Brackett, Chen and Wait, Huelsman & Korn.
Synthesis of leapfrog filters typically includes the following steps:
Determine a prototype passive ladder filter that has the desired frequency response. Usually a doubly terminated prototype is used.
Write the equations relating element current to voltage across the element in a form suitable for expression as a signal-flow graph.
Draw the signal-flow graph. The nodes of the signal-flow graph will include both voltages and currents. The branch gains will include impedances and admittances.
Convert all nodes of the signal-flow graph to voltages and all impedances to dimensionless transmittances. This is accomplished by dividing all impedance elements by R, an arbitrary resistance, and multiplying all admittance elements by R. This scaling does not change the frequency response.
Manipulate the signal-flow graph so that the gains feeding each summing node have the same signs. This is done as an implementation convenience. At the completion of this step, typically, all the feedback gains in the signal-flow graph will be +1 and the signs of the gain blocks in the forward path will alternate. As a result, some of the nodes, including the main output, may have a 180° phase inversion. This is usually of no consequence.
The gain blocks are implemented with active filters and interconnected as indicated by the signal-flow graph. Often, state variable filters are used for the gain blocks.
The final circuit usually has more components than the prototype passive filter. This means the final circuit has degrees of freedom which can be chosen to optimize the circuit for dynamic range and for practical component values.
Examples
Generic filter
The design starts out with a known ladder filter of one of the topologies shown in the previous figure. Usually, all the elements of the ladder filter are lossless except the first and the last which are lossy. Using a four element voltage input, voltage output ladder filter as an example, the equations that relate the element voltages and currents are as follows:
The signal-flow graph for these equations are shown in the second figure to the right. The arrangement of feedback loops in the signal flow-graph inspired the name leapfrog filter. The signal flow graph is manipulated to convert all current nodes into voltage nodes and all the impedances and admittances into dimensionless transmittances. This is equivalent to manipulating the equations either by multiplying both sides by R or by multiplying one side by R/R and distributing the R terms across the subtraction operation. This manipulation changes the equations as follows:
where H1 = RY1, H2 = GZ2, H3 = RY3, H4 = GZ4, G = 1/R, V1 = RI1, V3 = RI3
The signal flow graph is further manipulated so that the gains into each summing node is +1. The result of all the manipulation is shown as the bottom signal-flow graph in the figure. The equations represented by the resulting signal flow graph are as follows:
The awkward annotation of -V1 and -V2 as labels of nodes in the signal flow graph indicates that these nodes present a 180° phase inversion with respect to the signals in the prototype filter.
This manipulation can be accomplished by a simple procedure:
Make all the odd numbered or all the even numbered transmittances negative. The overall phase shift with respect to the prototype will be 0° if the total number of inversions is even.
Change all feedback gains to +1.
Determine the sign of each node label by counting the number of inversions to that node from the input. If the number of inversions is odd, then the node label is negative.
The signal-flow graph is suitable for implementation. State variable filters that are available in both inverting and non-inverting topologies are often used.
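The four-element leapfrog relations can be checked numerically. The sketch below assumes a doubly terminated LC lowpass ladder (series L1, shunt C2, series L3, shunt C4, with equal source and load resistance R absorbed into the first and last arms); the component values and the back-substitution are illustrative, not taken from the article's figures.

```python
def leapfrog_gain(s, R=1.0, L1=1.0, C2=1.0, L3=1.0, C4=1.0):
    """Vout/Vin for the four-element leapfrog equations
        I1 = Y1*(Vin - V2),  V2 = Z2*(I1 - I3),
        I3 = Y3*(V2 - V4),   V4 = Z4*I3,
    solved by back-substitution; s is the complex frequency j*omega."""
    Y1 = 1 / (R + s * L1)       # first series arm, source resistance absorbed
    Z2 = 1 / (s * C2)
    Y3 = 1 / (s * L3)
    Z4 = 1 / (s * C4 + 1 / R)   # last shunt arm, load resistance absorbed
    k3 = Y3 * Z4 / (1 + Y3 * Z4)          # V4 = k3 * V2
    Zeff = Z2 / (1 + Z2 * Y3 * (1 - k3))  # V2 = Zeff * I1
    I1 = Y1 / (1 + Y1 * Zeff)             # I1 per volt of input
    return k3 * Zeff * I1                 # Vout / Vin

print(abs(leapfrog_gain(1e-9j)))         # ~0.5: the doubly terminated passband gain
print(abs(leapfrog_gain(1000j)) < 1e-3)  # True: fourth-order lowpass rolloff
```

The low-frequency gain of 1/2 is the expected insertion loss of an equally terminated ladder, which gives a quick sanity check that the feedback signs in the equations are right.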
Band pass filter
Passive circuit
The circuit for a band pass, passive ladder filter is first determined.
The individual components in parallel or series can be combined into general impedances or admittances. For this schematic:
Signal-flow graph
The current and voltage variables can be put into cause and effect relationships as follows.
A signal flow graph for these equations is shown to the right.
Scaled signal-flow graph
For implementation reasons, the current variables may be multiplied by an arbitrary resistance R to convert them into voltage variables, which also converts all gains to dimensionless values. In this example all currents are multiplied by R. This is accomplished either by multiplying both sides of an equation by R or by multiplying one side by R/R and then distributing the R term over the currents.
Manipulated signal-flow graph
It is convenient for implementation if the gains feeding the summing nodes all have the same sign. In that case, summation can be achieved with a junction of two resistors.
Implementation
All the transmittances H1 - H4, in this example, are bandpass filters. They can be implemented with the modified Tow-Thomas active biquad filter. This biquad has both positive and negative bandpass outputs so that it can realize any of the transmittances. This biquad also has summing inputs so it can also implement the summing nodes.
Tuning
A leapfrog filter can be difficult to tune because of the complicated feedback. One strategy is to open the feedback loops so that the remaining filter structure is a simple cascade design. Each section can then be tuned independently. The inner sections, H2 and H3, have infinite Q and may be unstable when the feedback loops are opened. These stages may be designed with a large, but finite, Q so that they can be tuned while the feedback loops are open.
Notes
References
Linear filters
Signal processing filter | Leapfrog filter | Chemistry | 1,286 |
47,538,091 | https://en.wikipedia.org/wiki/Boletus%20roseolateritius | Boletus roseolateritius is a bolete fungus found in the southern United States and northeast Mexico. It was described as a new species in 2003 by Alan Bessette, Ernst Both, and Dail Dunaway. The type collection was made in Mississippi, where it was found growing on the ground under American beech (Fagus grandifolia), near hickory and oak. The bolete was reported from a Mexican beech (Fagus mexicana) forest in Hidalgo, Mexico in 2010.
The fruit body has a cap that changes color depending on its age: it is initially dark reddish to orangish, later reddish brown at maturity, fading to brownish orange or brownish pink with dull yellow tints, and finally turning dull dingy yellow in age. It has a pale yellow stipe. Its spores measure 8.5–12 by 3.5–4.5 μm.
See also
List of Boletus species
List of North American boletes
References
External links
roseolateritius
Fungi described in 2003
Fungi of Mexico
Fungi of the United States
Fungi without expected TNC conservation status
Fungus species | Boletus roseolateritius | Biology | 224 |
30,356,158 | https://en.wikipedia.org/wiki/SWORD%20%28protocol%29 | SWORD (Simple Web-service Offering Repository Deposit) is an interoperability standard that allows digital repositories to accept the deposit of content from multiple sources in different formats (such as XML documents) via a standardized protocol. In the same way that the HTTP protocol allows any web browser to talk to any web server, so SWORD allows clients to talk to repository servers. SWORD is a profile (specialism) of the Atom Publishing Protocol, but restricts itself solely to the scope of depositing resources into scholarly systems.
History
The first version of the SWORD protocol was created in 2007 by a consortium of UK institutional repository experts. The project to develop SWORD was funded by the JISC and managed by UKOLN. An overview of the initial development of SWORD is given in "SWORD: Simple Web-service Offering Repository Deposit." The standard grew out of a need for an interoperable method by which resources could be deposited into repositories. Interoperable standards existed to allow the harvesting of content (e.g. Open Archives Initiative Protocol for Metadata Harvesting) or for searching (e.g. OpenSearch) but not for deposit.
Between the original release in 2007 and 2009, two subsequent projects were undertaken to further refine the version 1.0 specification and perform advocacy work. The resulting release was numbered 1.3. A further description of the work is available in Lewis et al., "If SWORD is the answer, what is the question? Use of the Simple Web service Offering Repository Deposit protocol."
In 2011 a new project began to extend the "fire and forget" approach of the SWORD 1.x specification into a full CRUD (Create, Retrieve, Update, Delete) interface, and the result was a new version (designated 2.0). This was followed by extensive development work on client environments in several programming languages, and incorporation into the development of several Jisc-funded efforts.
Use cases
Many different use cases exist where it may be desirable to remotely deposit resources into scholarly systems. These include:
Deposit to multiple repositories at once.
Deposit from a desktop client (rather from within the repository system itself)
Deposit by third party systems (for example by automated laboratory equipment)
Repository to repository deposit
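In all of these use cases, a deposit is ultimately an HTTP POST of a package to a collection URI. The sketch below constructs (but does not send) a SWORD 2.0-style binary deposit request; the endpoint URL and user name are invented, and the header names, while following the SWORD 2.0 profile, should be checked against the target repository's documentation.

```python
# Hypothetical SWORD 2.0 binary deposit to a repository collection.
collection_uri = "https://repository.example.org/sword/collection/theses"  # made up

headers = {
    "Content-Type": "application/zip",
    "Content-Disposition": "attachment; filename=thesis.zip",
    "Packaging": "http://purl.org/net/sword/package/SimpleZip",
    "In-Progress": "false",      # "true" would leave the deposit open for more files
    "On-Behalf-Of": "jbloggs",   # mediated deposit on behalf of another user
}

# Sending the request is repository-specific; with an HTTP client it would be:
# with open("thesis.zip", "rb") as f:
#     response = client.post(collection_uri, headers=headers, data=f)
print(sorted(headers))
```

On success the server responds with an Atom entry describing the newly created resource, which a SWORD 2.0 client can then retrieve, update, or delete through the same CRUD interface mentioned above.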
Implementations
Three categories of implementation exist: repository implementations for existing repository servers, client implementations that can be used to perform SWORD deposits, and code libraries to assist in the creation of new SWORD clients or servers.
SWORD-compliant repositories
The following digital repositories are SWORD compliant:
arXiv
Dataverse
DSpace
EPrints
Fedora
HAL
Intralibrary (project deprecated)
Microsoft Zentity (project deprecated)
MyCoRe
SWORD clients
EasyDeposit
Open Journal Systems
Pressbooks client
SWORD code libraries
PHP SWORD client library
Ruby SWORD client library
Java SWORD client and server library
Python client library
Python server library and SWORD 2.0 reference implementation
Other resources
The SWORD Course
References
External links
swordapp.org
Computer standards
Jisc
XML-based standards | SWORD (protocol) | Technology | 610 |
22,446,531 | https://en.wikipedia.org/wiki/Hebeloma%20hiemale | Hebeloma hiemale is a species of mushroom in the family Hymenogastraceae.
hiemale
Fungi of Europe
Taxa named by Giacomo Bresadola
Fungus species | Hebeloma hiemale | Biology | 39 |
773,779 | https://en.wikipedia.org/wiki/Ulrika%20Babiakov%C3%A1 | Ulrika Babiaková (3 April 1976 – 3 November 2002) was a Slovak astronomer and discoverer of minor planets from Banská Štiavnica, Slovakia. She is credited by the Minor Planet Center with the discovery and co-discovery of 14 asteroids during 1998–2001.
Babiaková died at the age of 26 in an accident.
The main-belt asteroid 32531 Ulrikababiaková, discovered by her husband, astronomer Peter Kušnirák, in 2001, was named in her memory on 8 October 2014.
References
1976 births
2002 deaths
20th-century astronomers
21st-century astronomers
Discoverers of asteroids
People from Banská Štiavnica
Slovak astronomers
Women astronomers | Ulrika Babiaková | Astronomy | 139 |
18,611,260 | https://en.wikipedia.org/wiki/Hermaphrodite | A hermaphrodite () is a sexually reproducing organism that produces both male and female gametes. Animal species in which individuals are either male or female are gonochoric, which is the opposite of hermaphroditic.
The individuals of many taxonomic groups of animals, primarily invertebrates, are hermaphrodites, capable of producing viable gametes of both sexes. In the great majority of tunicates, mollusks, and earthworms, hermaphroditism is a normal condition, enabling a form of sexual reproduction in which either partner can act as the female or male. Hermaphroditism is also found in some fish species, but is rare in other vertebrate groups. Most hermaphroditic species exhibit some degree of self-fertilization. The distribution of self-fertilization rates among animals is similar to that of plants, suggesting that similar pressures are operating to direct the evolution of selfing in animals and plants.
A rough estimate of the number of hermaphroditic animal species is 65,000, about 5% of all animal species, or 33% excluding insects. Insects are almost exclusively gonochoric, and no definitive cases of hermaphroditism have been demonstrated in this group. There are no known hermaphroditic species among mammals or birds.
About 94% of flowering plant species are either hermaphroditic (all flowers produce both male and female gametes) or monoecious, where both male and female flowers occur on the same plant. There are also mixed breeding systems, in both plants and animals, where hermaphrodite individuals coexist with males (called androdioecy) or with females (called gynodioecy), or all three exist in the same species (called trioecy). Sometimes, both male and hermaphrodite flowers occur on the same plant (andromonoecy) or both female and hermaphrodite flowers occur on the same plant (gynomonoecy).
Hermaphroditism is not to be confused with ovotesticular syndrome in mammals, which is a separate and unrelated phenomenon. While people with the condition were previously called "true hermaphrodites" in medical literature, this usage is now considered to be outdated as of 2006 and misleading, as people with ovotesticular syndrome do not have functional sets of both male and female organs.
Etymology
The term hermaphrodite derives, via Latin, from the Greek name Hermaphroditus (Ἑρμαφρόδιτος), the son of Hermes and Aphrodite in Greek mythology. According to Ovid, he fused with the nymph Salmacis, resulting in one individual possessing physical traits of male and female sexes. According to the earlier Diodorus Siculus, he was born with a physical body combining male and female sexes. The word hermaphrodite entered the English lexicon as early as the late fourteenth century.
Animals
Sequential hermaphrodites
Sequential hermaphrodites (dichogamy) occur in species in which the individual first develops as one sex, but can later change into the opposite sex. (Definitions differ on whether sequential hermaphroditism encompasses serial hermaphroditism; for authors who exclude serial hermaphroditism, a sequential hermaphrodite is also stipulated to change sex only once.) This contrasts with simultaneous hermaphrodites, in which an individual possesses fully functional male and female genitalia. Sequential hermaphroditism is common in fish (particularly teleost fish) and many gastropods (such as the common slipper shell). Sequential hermaphroditism can best be understood in terms of behavioral ecology and evolutionary life history theory, as described in the size-advantage model first proposed by Michael T. Ghiselin, which states that if an individual of a certain sex could significantly increase its reproductive success after reaching a certain size, it would be to its advantage to switch to that sex.
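The size-advantage logic can be illustrated with a toy calculation. In the sketch below, the fitness curves are invented purely for illustration (they are not taken from Ghiselin or from any real species): female reproductive success grows roughly linearly with body size, while male success rises steeply only at large sizes, predicting protogyny with a switch at the size where the male curve overtakes the female one.

```python
# Toy illustration of the size-advantage model (hypothetical curves):
# an individual should adopt whichever sex yields higher expected
# reproductive success at its current body size.

def female_fitness(size):
    # Fecundity grows roughly linearly with body size (assumed).
    return 2 * size

def male_fitness(size):
    # Males gain little until large, then success rises steeply
    # (quadratic, purely for illustration).
    return size * size / 10

def best_sex(size):
    return "male" if male_fitness(size) > female_fitness(size) else "female"

# Smallest size at which being male pays off: the predicted switch point.
switch = next(s for s in range(1, 100) if best_sex(s) == "male")
print(switch)  # → 21: below this size, females out-reproduce males
```

With these assumed curves, selection favors developing first as a female and changing to male past size 21, which is the protogynous pattern described below for wrasses.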
Sequential hermaphrodites can be divided into three broad categories:
Protandry: Where an organism develops as a male, and then changes sex to a female.
Example: The clownfish (genus Amphiprion) are colorful reef fish found living in symbiosis with sea anemones. Generally one anemone contains a 'harem' consisting of a large female, a smaller reproductive male, and even smaller non-reproductive males. If the female is removed, the reproductive male changes sex and the largest of the non-reproductive males matures and becomes reproductive. It has been shown that fishing pressure can change when the switch from male to female occurs, since fishermen usually prefer to catch the larger fish; under this selection, populations generally change sex at a smaller size.
Protogyny: Where the organism develops as a female, and then changes sex to a male.
Example: Wrasses (family Labridae) are a group of reef fish in which protogyny is common. Wrasses also have an uncommon life history strategy termed diandry (literally, two males). In these species, two male morphs exist: an initial phase male and a terminal phase male. Initial phase males do not look like males and spawn in groups with females. They are not territorial and are, perhaps, female mimics (which is why they are found swimming in groups with females). Terminal phase males are territorial and have a distinctively bright coloration. Individuals are born as males or females, but if they are born male, they are not born as terminal phase males. Females and initial phase males can become terminal phase males. Usually, the most dominant female or initial phase male replaces any terminal phase male when those males die or abandon the group.
Bidirectional sex changers: Where an organism has female and male reproductive organs, but may act either as a female or as a male during different stages in life.
Example: Lythrypnus dalli (genus Lythrypnus, family Gobiidae) is a coral reef fish in which bidirectional sex change occurs. Once a social hierarchy is established, a fish changes sex according to its social status, regardless of its initial sex, based on a simple principle: if the fish expresses subordinate behavior, it changes its sex to female; if it expresses dominant or non-dominant superior behavior, it changes its sex to male.
Dichogamy can have both conservation-related implications for humans, as mentioned above, and economic implications. For instance, groupers are favoured food fish in many Asian countries and are often aquacultured. Since the adults take several years to change from female to male, the broodstock are extremely valuable individuals.
Simultaneous hermaphrodites
Simultaneous hermaphrodites (or homogamous hermaphrodites) are individuals in which both male and female sexual organs are present and functional at the same time. Self-fertilization often occurs.
Pulmonate land snails and land slugs are perhaps the best-known kinds of simultaneous hermaphrodites, and are the most widespread of terrestrial animals possessing this sexual polymorphism. Sexual material is exchanged between both animals via spermatophores, and is then stored in the spermatheca. After exchange of spermatozoa, both animals will lay fertilized eggs after a period of gestation. The eggs will proceed to hatch after a development period. Snails typically reproduce from early spring through late autumn.
Banana slugs are an example of a hermaphroditic gastropod. Mating with a partner is more desirable biologically than self-fertilization, as the genetic material of the resultant offspring is varied, but if mating with a partner is not possible, self-fertilization is practiced. The male sexual organ of an adult banana slug is quite large in proportion to its size, as well as compared to the female organ. It is possible for banana slugs, while mating, to become stuck together. If a substantial amount of wiggling fails to separate them, the male organ will be bitten off (using the slug's radula), see apophallation. If a banana slug has lost its male sexual organ, it can still mate as a female, making hermaphroditism a valuable adaptation.
The colourful sea slug Goniobranchus reticulatus is hermaphroditic, with both male and female organs active at the same time during copulation. After mating, the external portion of the penis detaches, but it is able to regrow within 24 hours.
Earthworms are another example of a simultaneous hermaphrodite. Although they possess ovaries and testes, they have a protective mechanism against self-fertilization. Sexual reproduction occurs when two worms meet and exchange gametes, copulating on damp nights during warm seasons.
The free-living hermaphroditic nematode Caenorhabditis elegans reproduces primarily by self-fertilization, but infrequent out-crossing events occur at a rate of approximately 1%.
Hamlets do not practice self-fertilization, but a pair will mate multiple times over several nights, taking turns between which one acts as the male and which acts as the female.
The mangrove killifish (Kryptolebias marmoratus) is a simultaneous hermaphrodite, producing both eggs and sperm and routinely reproducing by self-fertilization. Each individual normally fertilizes itself when an egg and sperm produced by an internal organ unite inside the fish's body. It is regarded as the only known vertebrate species that can reproduce by self-fertilization.
Pseudohermaphroditism
When spotted hyenas were first scientifically observed by explorers, they were thought to be hermaphrodites. Early observations of wild spotted hyenas led researchers to believe that all spotted hyenas, male or female, were born with what looked to be a penis. A female spotted hyena's apparent penis is in fact an enlarged clitoris, which contains an external birth canal. It can be difficult to determine the sex of spotted hyenas until sexual maturity, when they may become pregnant. When a female spotted hyena gives birth, she passes the cub through the cervix internally, but then passes it out through the elongated clitoris.
Plants
The term hermaphrodite is used in botany to describe, for example, a perfect flower that has both staminate (male, pollen-producing) and carpellate (female, ovule-producing) parts. The overwhelming majority of flowering plant species are hermaphroditic.
Monoecy
Flowering plant species with separate, imperfect, male and female flowers on the same individual are called monoecious. Monoecy only occurs in about 7% of flowering plant species. Monoecious plants are often referred to as hermaphroditic because they produce both male and female gametes. However, the individual flowers are not hermaphroditic if they only produce gametes of one sex. 65% of gymnosperm species are dioecious, but conifers are almost all monoecious. Some plants can change their sex throughout their lifetime, a phenomenon called sequential hermaphroditism.
Andromonoecy
In andromonoecious species, the plants produce perfect (hermaphrodite) flowers together with separate male flowers that are fertile as male but sterile as female. Andromonoecy occurs in about 4,000 species of flowering plants (2% of flowering plants).
Gynomonoecy
In gynomonoecious species, the plants produce hermaphrodite flowers and separate male-sterile pistillate flowers. One example is the meadow saxifrage, Saxifraga granulata. Charles Darwin gave several other examples in his 1877 book "The Different Forms of Flowers on Plants of the Same Species".
About 57% of moss species and 68% of liverworts are unisexual, meaning that their gametophytes produce either male or female gametes, but not both.
Sequential hermaphroditism is common in bryophytes and some vascular plants.
Use regarding humans
Historically, the term hermaphrodite was used in law to refer to people whose sex was in doubt. The 12th-century Decretum Gratiani states that "Whether an hermaphrodite may witness a testament, depends on which sex prevails" ("Hermafroditus an ad testamentum adhiberi possit, qualitas sexus incalescentis ostendit.").
Alexander ab Alexandro (1461–1523) stated, using the term hermaphrodite, that the people who bore the sexes of both man and woman were regarded by the Athenians and the Romans as monsters, and thrown into the sea at Athens and into the Tiber at Rome. Similarly, the 17th-century English jurist and judge Edward Coke (Lord Coke), wrote in his Institutes of the Lawes of England on laws of succession stating, "Every heire is either a male, a female, or an hermaphrodite, that is both male and female. And an hermaphrodite (which is also called Androgynus) shall be heire, either as male or female, according to that kind of sexe which doth prevaile."
During the Victorian era, medical authors attempted to ascertain whether or not humans could be hermaphrodites, adopting a precise biological definition of the term. From that period until the early 21st century, individuals with ovotesticular syndrome were termed true hermaphrodites if their gonadal tissue contained both testicular and ovarian tissue, and pseudohermaphrodites if their external appearance (phenotype) differed from the sex expected from their internal gonads. This language has fallen out of favor due to misconceptions and stigma associated with the terms, and also due to a shift to nomenclature based on genetics.
The term "intersex" describes a wide variety of combinations of ambiguous biological sex characteristics. Intersex biology may include, for example, ambiguous-looking external genitalia or karyotypes that include mixed XX and XY chromosome pairs (46,XX/46,XY, 46,XX/47,XXY or 45,X/46,XY mosaic). Clinically, medicine currently uses the terminology "disorders of sex development" (also known as variations in sex characteristics). This is particularly significant because of the relationship between medical terminology and medical intervention.
Intersex civil society organizations, and many human rights institutions, have criticized medical interventions designed to make bodies more typically male or female.
In some cases, variations in sex characteristics are caused by unusual levels of sex hormones, which may be the result of an atypical set of sex chromosomes. One common cause of variations in sex characteristics is the crossing over of the testis-determining factor (SRY) from the Y chromosome to the X chromosome during meiosis. The SRY is then activated in only certain areas, causing development of testes in some areas through a series of events starting with the upregulation of the transcription factor SOX9, while in areas where it is not active, ovarian tissue grows. Thus, testicular and ovarian tissues will both be present in the same individual. SRY is present in only about 8% of recorded cases of ovotesticular DSD; the remaining cases are attributed to other, less common causes, and the vast majority are currently unexplained.
Fetuses were previously thought to be phenotypically female before the sexual differentiation stage; however, this is now known to be incorrect, as humans are simply undifferentiated before this stage and possess a paramesonephric duct, a mesonephric duct, and a genital tubercle.
Evolution
The evolution of anisogamy may have contributed to the evolution of simultaneous hermaphroditism and of sequential hermaphroditism, but it remains unclear whether the evolution of anisogamy first led to hermaphroditism or to gonochorism.
A 2023 study argued that hermaphroditism can evolve directly from mating types under certain circumstances, such as if the fertilization is well organized and the average size of groups is small. Simultaneous hermaphroditism that exclusively reproduces through self-fertilization has evolved many times in plants and animals, but it might not last long evolutionarily.
In animals
Joan Roughgarden and Priya Iyer argued that the last common ancestor of animals was hermaphroditic and that transitions from hermaphroditism to gonochorism were more numerous than the reverse. Other scientists have criticized this argument, saying it is based on a paraphyletic Spiralia, on assignments of sexual modes at the phylum level rather than the species level, and on methods based exclusively on maximum parsimony.
Hermaphroditism is polyphyletic in invertebrates, where it evolved from gonochorism; gonochorism is also ancestral to hermaphroditic fishes. According to Nelson Çabej, simultaneous hermaphroditism in animals most likely evolved in response to a limited number of mating partners.
In plants
It is widely accepted that the first vascular plants were outcrossing hermaphrodites. In flowering plants, hermaphroditism is ancestral to dioecy.
Hermaphroditism in plants may promote self-fertilization in pioneer populations. However, plants have evolved multiple mechanisms to avoid self-fertilization in hermaphrodites, including sequential hermaphroditism, molecular recognition systems, and mechanical or morphological mechanisms such as heterostyly.
See also
Asexual reproduction
Trioecy
Androgyny
Futanari
Gonochorism
Gynandromorph
Self-pollination
Self-fertilization
References
Further reading
Discovery Health Channel (2007), "I Am My Own Twin"
External links
Britannica Online Encyclopedia: hermaphroditism (biology)
Current Biology – Gender trading in a hermaphrodite
The Evolution of Self-Fertile Hermaphroditism: The Fog Is Clearing
"Born True Hermaphrodite – Pictorial Profile", about Lynn Edward Harris
Sexual reproduction
Reproductive system
Intersex history
Supernumerary body parts | Hermaphrodite | Biology | 3,879 |
7,017,083 | https://en.wikipedia.org/wiki/Heterotrophic%20picoplankton | Heterotrophic picoplankton is the fraction of plankton composed by cells between 0.2 and 2 μm that do not perform photosynthesis. They form an important component of many biogeochemical cycles.
Cells can be either:
prokaryotes
Archaea form a major part of the picoplankton in the Antarctic and are abundant in other regions of the ocean. Archaea have also been found in freshwater picoplankton, but do not appear to be as abundant in these environments.
eukaryotes
Cell structure
Nucleic acid content in cells
Heterotrophic picoplankton can be divided into two broad categories: high nucleic acid (HNA) content cells and low nucleic acid (LNA) content cells. Nucleic acids are large biomolecules that store and express genomic information. HNA picoplankton dominate in waters that are eutrophic to mesotrophic, while LNA picoplankton dominate in stratified oligotrophic environments. The proportion of HNA picoplankton to LNA picoplankton is a defining characteristic of bacterioplankton communities. Addition of glyphosate, a common herbicide that causes increased levels of phosphorus when introduced to aquatic systems, causes an increase in the ratio of HNA to LNA bacteria. Nucleic acids are a costly compound for cells to synthesize, and the increased bioavailable phosphorus in the system likely allows HNA bacteria to rapidly synthesize more nucleic acids and divide. HNA bacterioplankton are larger and more active than LNA picoplankton. HNA cells also have higher specific metabolic and growth rates, likely allowing these types of bacterioplankton to better utilize and exploit sudden increases in nutrients within the water column. The relative abundance of HNA to LNA cells is related to overall system productivity, specifically chlorophyll concentration, though other factors likely also contribute to bacterioplankton distribution.
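The HNA:LNA ratio described above is typically derived by classifying individual cells by nucleic-acid-stain fluorescence (e.g. from flow cytometry). The sketch below shows only the arithmetic of that classification; the per-cell fluorescence values and the threshold are invented for illustration, not real instrument output.

```python
# Classify cells as HNA or LNA by a fluorescence threshold, then report
# the HNA:LNA ratio used to characterize a bacterioplankton community.
# The threshold and sample values are hypothetical.

def hna_lna_ratio(fluorescence, threshold=100.0):
    hna = sum(1 for f in fluorescence if f >= threshold)
    lna = len(fluorescence) - hna
    if lna == 0:
        raise ValueError("no LNA cells below threshold")
    return hna / lna

# A made-up sample of per-cell nucleic-acid-stain intensities:
sample = [40, 55, 150, 210, 80, 95, 130, 60, 300, 70]
print(hna_lna_ratio(sample))  # 4 HNA cells / 6 LNA cells ≈ 0.67
```

A higher ratio would be expected in the eutrophic or phosphorus-enriched conditions the text describes; in real analyses the threshold is set from the bimodal fluorescence distribution of the sample rather than fixed in advance.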
Biogeochemical cycling
Dissolved organic matter
Heterotrophic picoplankton play a critical role in nutrient and carbon recycling in ecological food webs by transforming and mineralizing organic matter. Aquatic dissolved organic matter is one of the largest organic pools on Earth and a major part of the carbon cycle. The majority of dissolved organic matter is either resistant to transformation or semi-labile, limiting the availability of these compounds to biodegradation. Water bodies accumulate dissolved organic matter via both allochthonous sources, mainly decaying terrestrial plants and soil organic matter, and autochthonous sources, mainly from phytoplankton and macrophytes. As major decomposers of organic matter, heterotrophic bacterioplankton act as an important link between detritus, dissolved organic matter, and higher trophic levels in aquatic systems. Bacterioplankton degrade particulate organic matter into smaller compounds and either assimilate and absorb them or expel them as inorganic carbon. Both of these processes promote transformation of matter within the aquatic system and promote energy flow and are important components of the overall quality of a water body. Heterotrophic bacteria community structure and functionality is used to assess the trophic status and quality of freshwater systems.
References
Biological oceanography
Planktology
Aquatic ecology | Heterotrophic picoplankton | Biology | 707 |
623,465 | https://en.wikipedia.org/wiki/Metallate | Metallate or metalate is the name given to any complex anion containing a metal ligated to several atoms or small groups.
Typically, the metal will be one of the transition elements and the ligand will be oxygen or another chalcogenide or a cyanide group (though others are known). The chalcogenide metallates are known as oxometallates, thiometallates, selenometallates and tellurometallates; the cyanide metallates are known as cyanometallates.
Oxometallates include permanganate ([MnO4]−), chromate ([CrO4]2−) and vanadate ([VO4]3− or [VO3]−).
Thiometallates include tetrathiovanadate ([VS4]3−), tetrathiomolybdate ([MoS4]2−), tetrathiotungstate ([WS4]2−) and similar ions.
Cyanometallates include ferricyanide and ferrocyanide.
Metallate is also used as a verb in bioinorganic chemistry to describe the act of adding metal atoms or ions to a site (a synthetic ligand or protein).
References
Anions | Metallate | Physics,Chemistry | 230 |
74,056,025 | https://en.wikipedia.org/wiki/Four%20Core%20Genotypes%20mouse%20model | Four Core Genotypes (FCG) mice are laboratory mice produced by genetic engineering that allow biomedical researchers to determine if a sex difference in phenotype is caused by effects of gonadal hormones or sex chromosome genes. The four genotypes include XX and XY mice with ovaries, and XX and XY mice with testes. The comparison of XX and XY mice with the same type of gonad reveals sex differences in phenotypes that are caused by sex chromosome genes. The comparison of mice with different gonads but the same sex chromosomes reveals sex differences in phenotypes that are caused by gonadal hormones.
Development
The FCG model was created by Paul Burgoyne and Robin Lovell-Badge at the National Institute for Medical Research, London (now Francis Crick Institute). The model involves deleting the testis-determining gene Sry from the Y chromosome, and inserting Sry onto chromosome 3. Therefore the sex chromosomes no longer determine the type of gonad, so that XX and XY mice can have the same type of gonad and gonadal hormones.
Significance
The FCG model has been used to discover that the XX and XY animals respond differently in models of human physiology and disease, including autoimmunity, metabolism, cardiovascular disease, cancer, Alzheimer’s disease, and neural and behavioral processes. These findings imply that some sex chromosome genes may protect from disease, rationalizing the search for therapies that enhance such protective factors.
References
Genetic engineering
Sex | Four Core Genotypes mouse model | Chemistry,Engineering,Biology | 314 |
2,374,688 | https://en.wikipedia.org/wiki/Evolutionary%20capacitance | Evolutionary capacitance is the storage and release of variation, just as electric capacitors store and release charge. Living systems are robust to mutations. This means that living systems accumulate genetic variation without the variation having a phenotypic effect. But when the system is disturbed (perhaps by stress), robustness breaks down, and the variation has phenotypic effects and is subject to the full force of natural selection. An evolutionary capacitor is a molecular switch mechanism that can "toggle" genetic variation between hidden and revealed states. If some subset of newly revealed variation is adaptive, it becomes fixed by genetic assimilation. After that, the rest of variation, most of which is presumably deleterious, can be switched off, leaving the population with a newly evolved advantageous trait, but no long-term handicap. For evolutionary capacitance to increase evolvability in this way, the switching rate should not be faster than the timescale of genetic assimilation.
This mechanism would allow for rapid adaptation to new environmental conditions. Switching rates may be a function of stress, making genetic variation more likely to affect the phenotype at times when it is most likely to be useful for adaptation. In addition, strongly deleterious variation may be purged while in a partially cryptic state, so cryptic variation that remains is more likely to be adaptive than random mutations are. Capacitance can help cross "valleys" in the fitness landscape, where a combination of two mutations would be beneficial, even though each is deleterious on its own.
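The capacitor dynamic described above can be sketched in a toy simulation. Everything below is invented for illustration (population size, mutation rate, number of loci, and effect sizes are arbitrary): while the capacitor buffers phenotypes, mutations accumulate neutrally; when buffering breaks down, each mutation acquires a phenotypic effect and selection enriches the adaptive subset while purging the deleterious majority.

```python
import random

# Toy evolutionary-capacitor simulation (all parameters hypothetical).
random.seed(1)
POP, N_LOCI = 200, 10
# When revealed, loci 0-1 are adaptive (+0.5); the rest are mildly
# deleterious (-0.1). While hidden, every genotype has fitness 1.
EFFECT = [0.5, 0.5] + [-0.1] * (N_LOCI - 2)

def fitness(genome, revealed):
    if not revealed:
        return 1.0  # robustness: variation has no phenotypic effect
    return max(1.0 + sum(e for g, e in zip(genome, EFFECT) if g), 0.0)

def generation(pop, revealed, mu=0.005):
    weights = [fitness(g, revealed) for g in pop]
    parents = random.choices(pop, weights=weights, k=POP)
    # Mutations only accumulate (no back mutation), like cryptic variation.
    return [[g or (random.random() < mu) for g in p] for p in parents]

pop = [[False] * N_LOCI for _ in range(POP)]
for _ in range(60):    # capacitor on: cryptic variation accumulates neutrally
    pop = generation(pop, revealed=False)
for _ in range(100):   # stress switches the capacitor off: selection acts
    pop = generation(pop, revealed=True)

freq = [sum(g[i] for g in pop) / POP for i in range(N_LOCI)]
adaptive = sum(freq[:2]) / 2
deleterious = sum(freq[2:]) / (N_LOCI - 2)
print(adaptive > deleterious)  # adaptive loci end up enriched after release
```

In this sketch the "switch" is an external flag; real capacitors such as Hsp90 (discussed below) toggle under stress, and the final step of the verbal model, switching buffering back on after genetic assimilation, is not simulated here.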
There is currently no consensus about the extent to which capacitance might contribute to evolution in natural populations. The possibility of evolutionary capacitance is considered to be part of the extended evolutionary synthesis.
Switches that turn robustness to phenotypic rather than genetic variation on and off do not fit the capacitance analogy, as their presence does not cause variation to accumulate over time. They have instead been called phenotypic stabilizers.
Enzyme promiscuity
In addition to their native reaction, many enzymes perform side reactions. Similarly, binding proteins may spend some proportion of their time bound to off-target proteins. These reactions or interactions may be of no consequence to current fitness but, under altered conditions, may provide the starting point for adaptive evolution. For example, several mutations in the antibiotic resistance gene β-lactamase introduce cefotaxime resistance but do not affect ampicillin resistance. In populations exposed only to ampicillin, such mutations may be present in a minority of members, since there is no fitness cost (i.e., they are within the neutral network). This represents cryptic genetic variation, since if the population is newly exposed to cefotaxime, the minority members will exhibit some resistance.
Chaperones
Chaperones assist in protein folding. The need to fold proteins correctly is a big restriction on the evolution of protein sequences. It has been proposed that the presence of chaperones may, by providing additional robustness to errors in folding, allow the exploration of a larger set of genotypes. When chaperones are overworked at times of environmental stress, this may "switch on" previously cryptic genetic variation.
Hsp90
The hypothesis that chaperones can act as evolutionary capacitors is closely associated with the heat shock protein Hsp90. When Hsp90 is downregulated in the fruit fly Drosophila melanogaster, a broad range of different phenotypes are seen, where the identity of the phenotype depends on the genetic background. Also, a study on the model insect Tribolium castaneum (the red flour beetle) showed that Hsp90 impairment revealed a reduced-eye phenotype that was stably inherited without further Hsp90 inhibition (https://doi.org/10.1101/690727). This was thought to prove that the new phenotypes depended on pre-existing cryptic genetic variation that had merely been revealed. More recent evidence suggests that these data might be explained by new mutations caused by the reactivation of formerly dormant transposable elements. However, this finding regarding transposable elements may be dependent on the strong nature of the Hsp90 knockdown used in that experiment.
GroEL
The overproduction of GroEL in Escherichia coli increases mutational robustness. This can increase evolvability.
Yeast prion [PSI+]
Sup35p is a yeast protein involved in recognising stop codons and causing translation to stop correctly at the ends of proteins. Sup35p comes in a normal form ([psi-]) and a prion form ([PSI+]). When [PSI+] is present, this depletes the amount of normal Sup35p available. As a result, the rate of errors in which translation continues beyond a stop codon increases from about 0.3% to about 1%.
This can lead to different growth rates, and sometimes different morphologies, in matched [PSI+] and [psi-] strains in a variety of stressful environments. Sometimes the [PSI+] strain grows faster, sometimes [psi-]: this depends on the genetic background of the strain, suggesting that [PSI+] taps into pre-existing cryptic genetic variation. Mathematical models suggest that [PSI+] may have evolved, as an evolutionary capacitor, to promote evolvability.
[PSI+] appears more frequently in response to environmental stress. In yeast, more stop codon disappearances are in-frame, mimicking the effects of [PSI+], than would be expected from mutation bias or than are observed in other taxa that do not form the [PSI+] prion. These observations are compatible with [PSI+] acting as an evolutionary capacitor in the wild.
Similar transient increases in error rates can evolve emergently in the absence of a "widget" like [PSI+]. The primary advantage of a [PSI+]-like widget is to facilitate the subsequent evolution of lower error rates once genetic assimilation has occurred.
Gene knockouts
Gene knockouts can be used to identify novel genes or genomic regions which function as evolutionary capacitors. When a gene is knocked out, and its removal reveals phenotypic variation that was not previously observable, that gene is functioning as a phenotypic capacitor. If any of the variation is adaptive, it is functioning as an evolutionary capacitor.
Fruit Flies
Deficiency in at least 15 different genes reveals cryptic variation in wing morphology in Drosophila melanogaster. While some of the variation revealed by these knockouts is deleterious, other variation has a relatively minor effect on aerodynamics, and could even improve the flight capability of an individual.
Yeast
In yeast, the knockout of certain chromatin regulating genes increases the differences in expression between yeast species. The majority of the variation in protein expression is attributable to trans effects, suggesting that trans-regulatory processes are strongly involved in canalization. Unlike the chromatin regulators, the removal of genes which code for metabolic enzymes does not have a consistent effect on the difference in expression between species, with different enzyme knockouts either increasing, decreasing, or not significantly affecting the expression difference.
Broader knockout samples in yeast have identified at least 300 genes which, when absent, increase morphological variation between yeast individuals. These capacitor genes predominantly occupy a few key domains in gene ontology, including chromosome organization and DNA integrity, RNA elongation, protein modification, cell cycle, and response to stimuli such as stress. More generally, capacitor genes are likely to express proteins which act as network hubs in the interactome of a cell, and in the network of synthetic-lethal interactions. The confidence that a specific gene acts as a phenotypic capacitor is correlated with the number of protein-protein interactions observed for its expressed protein. However, proteins with the highest amount of interactions have reduced phenotypic capacitance, possibly due to increased duplication of regions coding these proteins in the genome, reducing the effect of a single knockout.
Capacitor genes are less likely to have paralogs elsewhere in the genome; most capacitors identified in yeast are either singleton genes, or have historical paralogs from which they have diverged substantially in terms of expression. Singleton and duplicate capacitors largely exhibit disjoint behavior in the interactome. Singleton capacitors are most often part of highly interconnected complexes (such as the mediator complex), while duplicate capacitors are more highly connected and tend to interact with multiple large complexes. The gene ontologies of singleton and duplicate capacitors also differ notably. Singleton capacitors are concentrated in the categories of DNA maintenance and organization, response to stimuli, and RNA transcription and localization, whereas duplicate capacitors are concentrated in the categories of protein metabolism and endocytosis.
Redundancy
The mechanism of phenotypic capacitor genes in yeast appears to be closely related to the modalities of functional redundancy at various levels of the genome. Coding regions that are necessary for the synthesis of key proteins which do not have paralogs elsewhere in the genome are lethal when removed. Conversely, coding regions with many paralogs or strongly expressed paralogs have a minimal effect on overall expression (especially trans regulatory expression) when removed. Singleton and duplicate capacitors both largely represent instances of incomplete functional redundancy; differentially expressed paralogs of duplicate capacitors continue some functionality of the original gene, and the protein-protein interaction complexes within which singleton capacitors reside largely exhibit overlapping functionality. In general the phenotypic capacitors identified by knockouts in yeast are genes expressed in several key regulatory areas which, while non-lethal when removed, do not have enough redundancy to maintain complete functionality. The concept of functional redundancy may also be involved in the high number of synthetic-lethal interactions which capacitor genes participate in. When a gene has its functionality resumed by a paralog or functional analog, its removal is not inherently lethal, however when the gene and its redundancy are removed, the result is lethality.
Simulations
Computational simulations of knockouts in complex gene interaction networks have demonstrated that many, and possibly all expressed genes have the potential to reveal phenotypic variation of some kind when removed, and that previously identified capacitor genes are simply especially strong examples. Capacitance, then, is simply a feature of complex gene networks that arises in conjunction with canalization.
Facultative sex
Recessive mutations can be thought of as cryptic when they are present overwhelmingly in heterozygotes rather than homozygotes. Facultative sex that takes the form of selfing can act as an evolutionary capacitor in a primarily asexual population by creating homozygotes. Facultative sex that takes the form of outcrossing can act as an evolutionary capacitor by breaking up allele combinations with phenotypic effects that normally cancel out.
See also
Canalization (genetics)
Epigenetics
Preadaptation
Susan Lindquist
References
Evolutionary biology
Extended evolutionary synthesis
Selection | Evolutionary capacitance | Biology | 2,305 |
1,446,859 | https://en.wikipedia.org/wiki/Polysomnography | Polysomnography (PSG) is a multi-parameter type of sleep study and a diagnostic tool in sleep medicine. The test result is called a polysomnogram, also abbreviated PSG. The name is derived from Greek and Latin roots: the Greek πολύς (polus for "many, much", indicating many channels), the Latin somnus ("sleep"), and the Greek γράφειν (graphein, "to write").
Type I polysomnography is a sleep study performed overnight with the patient continuously monitored by a credentialed technologist. It records the physiological changes that occur during sleep, usually at night, though some labs can accommodate shift workers and people with circadian rhythm sleep disorders who sleep at other times. The PSG monitors many body functions, including brain activity (EEG), eye movements (EOG), muscle activity or skeletal muscle activation (EMG), and heart rhythm (ECG). After the identification of the sleep disorder sleep apnea in the 1970s, breathing functions, respiratory airflow, and respiratory effort indicators were added along with peripheral pulse oximetry. Polysomnography no longer includes nocturnal penile tumescence (NPT) monitoring for erectile dysfunction, as it is reported that all male patients will experience erections during phasic REM sleep, regardless of dream content.
Limited-channel polysomnography, or unattended home sleep testing, is classified as Type II–IV polysomnography. Polysomnography should only be performed by technicians and technologists who are specifically accredited in sleep medicine. At times, however, nurses and respiratory therapists perform polysomnography without specific knowledge and training in the field.
Polysomnography data yield measures such as sleep onset latency (SOL), REM-sleep onset latency, the number of awakenings during the sleep period, total sleep duration, the percentage and duration of each sleep stage, and the number of arousals. It can also record other information crucial for diagnosis that is not directly linked with sleep, such as movements, respiration, and cardiovascular parameters. Through polysomnographic evaluation, still other information (such as body temperature or esophageal pH) can be obtained according to the patient's or the study's needs.
Video-EEG polysomnography, which combines polysomnography with video recording, has been described as more effective than polysomnography alone for the evaluation of sleep troubles such as parasomnias, because it allows easier correlation of EEG and polysomnography with bodily motion.
Medical uses
Polysomnography is used to diagnose or rule out many types of sleep disorders, including narcolepsy, idiopathic hypersomnia, periodic limb movement disorder (PLMD), REM behavior disorder, parasomnias, and sleep apnea. Although it is not directly useful in diagnosing circadian rhythm sleep disorders, it may be used to rule out other sleep disorders.
The use of polysomnography as a screening test for persons with excessive daytime sleepiness as their sole presenting complaint is controversial.
Mechanism
A polysomnogram will typically record a minimum of 12 channels, requiring a minimum of 22 wire attachments to the patient. These channels vary in every lab and may be adapted to meet the doctor's requests. A minimum of three channels are used for the EEG, one or two measure airflow, one or two are for chin muscle tone, one or more for leg movements, two for eye movements (EOG), one or two for heart rate and rhythm, one for oxygen saturation, and one each for the belts, which measure chest wall movement and upper abdominal wall movement. The movement of the belts is typically measured with piezoelectric sensors or respiratory inductance plethysmography. This movement is equated to effort and produces a low-frequency sinusoidal waveform as the patient inhales and exhales.
Wires for each channel of recorded data lead from the patient and converge into a central box, which in turn is connected to a computer system for recording, storing and displaying the data. During sleep, the computer monitor can display multiple channels continuously. In addition, most labs have a small video camera in the room so the technician can observe the patient visually from an adjacent room.
The electroencephalogram (EEG) will generally use six "exploring" electrodes and two "reference" electrodes, unless a seizure disorder is suspected, in which case more electrodes will be applied to document the appearance of seizure activity. The exploring electrodes are usually attached to the scalp near the frontal, central (top) and occipital (back) portions of the brain via a paste that will conduct electrical signals originating from the neurons of the cortex. These electrodes will provide a readout of the brain activity that can be "scored" into different stages of sleep (N1, N2, and N3 – which combined are referred to as NREM sleep – and Stage R, which is rapid eye movement sleep, or REM, and wakefulness). The EEG electrodes are placed according to the International 10-20 system.
The electrooculogram (EOG) uses two electrodes, one that is placed 1 cm above the outer canthus of the right eye and one that is placed 1 cm below the outer canthus of the left eye. These electrodes pick up the activity of the eyes by virtue of the electropotential difference between the cornea and the retina (the cornea is positively charged relative to the retina). This helps to determine when REM sleep occurs, of which rapid eye movements are characteristic, and also essentially aids in determining when sleep occurs.
The electromyogram (EMG) typically uses four electrodes to measure muscle tension in the body as well as to monitor for an excessive amount of leg movements during sleep (which may be indicative of periodic limb movement disorder, PLMD). Two leads are placed on the chin, one above the jawline and one below. This, like the EOG, helps determine when sleep occurs as well as REM sleep. Sleep generally includes relaxation, and so a marked decrease in muscle tension occurs. A further decrease in skeletal muscle tension occurs in REM sleep. A person becomes partially paralyzed, which makes acting out dreams impossible, although people who lack this paralysis can develop REM behavior disorder. Finally, two more leads are placed on the anterior tibialis of each leg to measure leg movements.
Though a typical electrocardiogram (ECG or EKG) would use ten electrodes, only two or three are used for a polysomnogram. They can either be placed under the collarbone on each side of the chest or one under the collarbone and the other six inches above the waist on either side of the body. These electrodes measure the electrical activity of the heart as it contracts and expands, recording such features as the "P" wave, "QRS" complex, and "T" wave. These can be analyzed for any abnormalities that might be indicative of an underlying heart pathology.
Nasal and oral airflow can be measured using pressure transducers, and/or a thermocouple, fitted in or near the nostrils; the pressure transducer is considered the more sensitive. This allows the clinician/researcher to measure the rate of respiration and identify interruptions in breathing. Respiratory effort is also measured in concert with nasal/oral airflow by the use of belts. These belts expand and contract upon breathing effort. However, this method of measuring airflow may also produce false negatives. Some patients will open and close their mouth while obstructive apneas occur. This forces air in and out of the mouth while no air enters the airway and lungs. Thus, the pressure transducer and thermocouple will detect this diminished airflow and the respiratory event may be falsely identified as a hypopnea, or a period of reduced airflow, instead of an obstructive apnea.
Pulse oximetry determines changes in blood oxygen levels that often occur with sleep apnea and other respiratory problems. The pulse oximeter fits over a fingertip or an earlobe.
Snoring may be recorded with a sound probe over the neck, though more commonly the sleep technician will just note snoring as "mild", "moderate" or "loud" or give a numerical estimate on a scale of 1 to 10. Also, snoring indicates airflow and can be used during hypopneas to determine whether the hypopnea may be an obstructive apnea.
Procedure
For the standard test, the patient comes to a sleep lab in the early evening and over the next 1–2 hours is introduced to the setting and "wired up" so that multiple channels of data can be recorded when they fall asleep. The sleep lab may be in a hospital, a free-standing medical office, or a hotel. A sleep technician should always be in attendance and is responsible for attaching the electrodes to the patient and monitoring the patient during the study.
During the study, the technician observes sleep activity by looking at the video monitor and the computer screen that displays all the data second by second. In most labs, the test is completed and the patient is discharged home by 7 a.m. unless a Multiple Sleep Latency Test (MSLT) is to be done during the day to test for excessive daytime sleepiness.
Most recently, health care providers may prescribe home studies to enhance patient comfort and reduce expense. The patient is given instructions after a screening tool is used, uses the equipment at home and returns it the next day. Most screening tools consist of an airflow measuring device (thermistor) and a blood oxygen monitoring device (pulse oximeter). The patient would sleep with the screening device for one to several days, then return the device to the health care provider. The provider would retrieve data from the device and could make assumptions based on the information given. For example, series of drastic blood oxygen desaturations during night periods may indicate some form of respiratory event (apnea). The equipment monitors, at a minimum, oxygen saturation. More sophisticated home study devices have most of the monitoring capability of their counterparts run by sleep lab technicians, and can be complex and time-consuming to set up for self-monitoring.
Interpretation
After the test is completed, a "scorer" analyzes the data by reviewing the study in 30-second "epochs".
The score consists of the following information:
Onset of sleep from time the lights were turned off: this is called "sleep onset latency" and normally is less than 20 minutes. (Note that determining "sleep" and "waking" is based solely on the EEG. Patients sometimes feel they were awake when the EEG shows they were sleeping. This may be because of sleep state misperception, drug effects on brain waves, or individual differences in brain waves.)
Sleep efficiency: the number of minutes of sleep divided by the number of minutes in bed. Normal is approximately 85 to 90% or higher.
Sleep stages: these are based on 3 sources of data coming from 7 channels: EEG (usually 4 channels), EOG (2), and chin EMG (1). From this information, each 30-second epoch is scored as "awake" or one of 4 sleep stages: 1, 2, 3, and REM, or Rapid Eye Movement, sleep. Stages 1–3 are together called non-REM sleep. Non-REM sleep is distinguished from REM sleep, which is altogether different. Within non-REM sleep, stage 3 is called "slow wave" sleep because of the relatively wide brain waves compared to other stages; another name for stage 3 is "deep sleep". By contrast, stages 1 and 2 are "light sleep". The figures show stage 3 sleep and REM sleep; each figure is a 30-second epoch from an overnight PSG.
(The percentage of each sleep stage varies by age, with decreasing amounts of REM and deep sleep in older people. The majority of sleep at all ages except infancy is stage 2. REM normally occupies about 20-25% of sleep time. Many factors besides age can affect both the amount and percentage of each sleep stage, including drugs [particularly anti-depressants and pain medication], alcohol taken before bedtime, and sleep deprivation.)
Any breathing irregularities, mainly apneas and hypopneas. Apnea is a complete or near complete cessation of airflow for at least 10 seconds followed by an arousal and/or 3% oxygen desaturation; hypopnea is a 30% or greater decrease in airflow for at least 10 seconds followed by an arousal and/or 4% oxygen desaturation. (The national insurance program Medicare in the US requires a 4% desaturation in order to include the event in the report.)
"Arousals" are sudden shifts in brain wave activity. They may be caused by numerous factors, including breathing abnormalities, leg movements, environmental noises, etc. An abnormal number of arousals indicates "interrupted sleep" and may explain a person's daytime symptoms of fatigue and/or sleepiness.
Cardiac rhythm abnormalities.
Leg movements.
Body position during sleep.
Oxygen saturation during sleep.
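A couple of the scored quantities above are simple ratios; a minimal sketch with hypothetical numbers (not from any real study) of how sleep efficiency and the apnea–hypopnea index (AHI) fall out of a scored recording:

```python
# Hypothetical scored-study numbers; definitions follow the text above.
epochs_asleep = 840          # 30-second epochs scored as any sleep stage
epochs_in_bed = 960          # 30-second epochs from lights-off to lights-on

minutes_asleep = epochs_asleep * 30 / 60   # 420 minutes
minutes_in_bed = epochs_in_bed * 30 / 60   # 480 minutes

# Sleep efficiency: minutes asleep / minutes in bed (normal ~85-90% or higher)
sleep_efficiency = 100 * minutes_asleep / minutes_in_bed   # 87.5%

# AHI: (apneas + hypopneas) per hour of sleep
apneas, hypopneas = 14, 21
ahi = (apneas + hypopneas) / (minutes_asleep / 60)         # 5.0 events/hour

print(f"Sleep efficiency: {sleep_efficiency:.1f}%  AHI: {ahi:.1f}/h")
```

With these made-up counts, the study would show normal sleep efficiency and a borderline AHI.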
Once scored, the test recording and the scoring data are sent to the sleep medicine physician for interpretation. Ideally, interpretation is done in conjunction with the medical history, a complete list of drugs the patient is taking, and any other relevant information that might impact the study such as napping done before the test.
After interpreting the data, the sleep physician writes a report that is sent to the referring provider, usually with specific recommendations based on the test results.
Examples of summary reports
The below example report describes a patient's situation and the results of some tests, and mentions CPAP as a treatment for obstructive sleep apnea. CPAP is continuous positive airway pressure and is delivered via a mask to the patient's nose or the patient's nose and mouth. (Some masks cover one, some both.) CPAP is typically prescribed after the diagnosis of OSA is made from a sleep study (i.e., after a PSG test). To determine the correct amount of pressure and the right mask type and size, and also to make sure the patient can tolerate this therapy, a "CPAP titration study" is recommended. This is the same as a PSG but with the addition of the mask applied so the technician can increase the airway pressure inside the mask as needed until all, or most, of the patient's airway obstructions are eliminated.
This report recommends that Mr. J---- return for a CPAP titration study, which means a return to the lab for a second all-night PSG (this one with the mask applied). Often, however, when a patient manifests OSA in the first 2 or 3 hours of the initial PSG, the technician will interrupt the study and apply the mask right then and there; the patient is awakened and fitted for a mask. The rest of the sleep study is then a "CPAP titration." When both the diagnostic PSG and a CPAP titration are done the same night, the entire study is called "split night".
The split-night study has these advantages:
The patient only has to come to the lab once, so it is less disruptive than is coming two different nights;
It is "half as expensive" to whoever is paying for the study.
The split-night study has these disadvantages:
There is less time to make a diagnosis of OSA (Medicare in the US requires a minimum of 2 hours of diagnosis time before the mask can be applied); and
There is less time to assure an adequate CPAP titration. If the titration begins with only a few hours of sleep left, the remaining time may not assure a proper CPAP titration, and the patient may still have to return to the lab.
Because of costs, more and more studies for "sleep apnea" are attempted as split-night studies when there is early evidence for OSA. (Note that both types of study, with and without a CPAP mask, are still polysomnograms.) When the CPAP mask is worn, however, the flow-measurement lead in the patient's nose is removed. Instead, the CPAP machine relays all flow-measurement data to the computer. Below is an example of a report that might be produced from a split-night study.
See also
Polysomnographic technician
Respiratory monitoring
Sleep disorder
Sleep medicine
Sleep study
References
Further reading
Iber C, Ancoli-Israel S, Chesson A, and Quan SF for the American Academy of Sleep Medicine. The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications, 1st ed.: Westchester, Illinois: American Academy of Sleep Medicine, 2007.
External links
Practical guide to Polysomnography
What is Polysomnography
What is a Sleep Study for Sleep Apnea?
Polysomnography by Carmel Armon, on Medscape Reference
Diagnostic neurology
Diagnostic pulmonology
Sleep medicine | Polysomnography | Biology | 3,574 |
22,381,897 | https://en.wikipedia.org/wiki/ExxonMobil%20Electrofrac | ExxonMobil Electrofrac is an in situ shale oil extraction technology proposed by ExxonMobil for converting kerogen in oil shale to shale oil.
Technology
ExxonMobil Electrofrac uses a series of fractures created in the oil shale formation. Preferably, these fractures are longitudinal vertical fractures created from horizontal wells, conducting electricity from the heel to the toe of each heating well. For conductivity, an electrically conductive material such as calcined petroleum coke is injected through the wells into the fractures, forming a heating element. Heating wells are placed in a parallel row with a second horizontal well intersecting them at their toe. This allows opposing electrical charges to be applied at either end. Laboratory experiments have demonstrated that electrical continuity is unaffected by kerogen conversion and that hydrocarbons are expelled from heated oil shale even under in situ stress. Planar heaters are preferred because they require fewer wells than wellbore heaters and offer a reduced surface footprint. The shale oil is extracted by separate dedicated production wells.
See also
Shell in situ conversion process
Chevron CRUSH
References
Oil shale technology
ExxonMobil | ExxonMobil Electrofrac | Chemistry | 226 |
11,552,649 | https://en.wikipedia.org/wiki/Coniothyrium%20glycines | Coniothyrium glycines is a fungal plant pathogen infecting soybean.
History
This fungus species has undergone various name changes. It was originally described in 1957 from soyabean leaf lesions and classified as a new species in the genus Pyrenochaeta, published as Pyrenochaeta glycines on account of its pycnidial stage (the pycnidia being shaped like a bulging vase) (Stewart, 1957). In another study, the pycnidial state was not observed, but sclerotia (compact masses of hardened fungal mycelium containing food reserves) were seen within soyabean leaf lesions associated with red leaf blotch, so a new genus was erected and the fungus was published as Dactuliophora glycines on the basis of the sclerotial stage (Leakey, 1964). In 1986, Dactuliophora glycines was thought to be the sclerotial state of Pyrenochaeta glycines (Datnoff et al., 1986a). In 1988, the genus Dactuliochaeta was established to contain Pyrenochaeta glycines and its synanamorph, Dactuliophora glycines (Hartman and Sinclair, 1988). In 2002, the fungus was reclassified as a Phoma species on the basis of pycniospore production similar to that of other Phoma species, and it was renamed Phoma glycinicola (de Gruyter and Boerema, 2002). In 2013, the fungus was placed in the genus Coniothyrium as the new combination Coniothyrium glycines (de Gruyter et al., 2013), because its pycniospore production resembles that of other Coniothyrium species and differs from Phoma. In the original description it was noted that the conidia were greenish-yellow in mass (Stewart, 1957), resembling coniothyrium-like conidia (de Gruyter et al., 2012). The fungus is unique in producing well-defined, melanized sclerotia that can be infectious themselves, or that can bear pycnidia on their surface which in turn produce infectious conidia (Hartman and Sinclair, 1988).
See also
List of soybean diseases
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Soybean diseases
glycinicola
Fungi described in 2002
Fungus species | Coniothyrium glycines | Biology | 540 |
23,887,860 | https://en.wikipedia.org/wiki/C10H20 | {{DISPLAYTITLE:C10H20}}
The molecular formula C10H20 (molar mass: 140.26 g/mol, exact mass: 140.1565 u) may refer to:
Cyclodecane
Decene
p-Menthane
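The quoted molar mass is a direct sum of standard atomic weights; a quick arithmetic check (the weights below are assumed rounded IUPAC values):

```python
# Rounded standard atomic weights in g/mol (assumed IUPAC values)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008}

def molar_mass(counts):
    """Sum atomic weights over an {element: count} composition."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())

m = molar_mass({"C": 10, "H": 20})
print(f"C10H20: {m:.2f} g/mol")  # ~140.27 with these rounded weights
```

The result agrees with the ≈140.26 g/mol quoted above to within the rounding of the atomic weights used.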
Molecular formulas | C10H20 | Physics,Chemistry | 61 |
16,553,719 | https://en.wikipedia.org/wiki/Earth-centered%20inertial | Earth-centered inertial (ECI) coordinate frames have their origins at the center of mass of Earth and are fixed with respect to the stars. "I" in "ECI" stands for inertial (i.e. "not accelerating"), in contrast to the "Earth-centered – Earth-fixed" (ECEF) frames, which remains fixed with respect to Earth's surface in its rotation, and then rotates with respect to stars.
For objects in space, the equations of motion that describe orbital motion are simpler in a non-rotating frame such as ECI. The ECI frame is also useful for specifying the direction toward celestial objects.
To represent the positions and velocities of terrestrial objects, it is convenient to use ECEF coordinates or latitude, longitude, and altitude.
In a nutshell:
ECI: inertial, not rotating, with respect to the stars; useful to describe motion of celestial bodies and spacecraft.
ECEF: not inertial, accelerated, rotating with respect to the stars; useful to describe the motion of objects on the Earth's surface.
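Ignoring precession, nutation, and polar motion, the ECI-to-ECEF conversion reduces, to first order, to a single rotation about the shared z-axis by the Earth rotation angle. A minimal sketch (illustrative only, not a production-grade transform):

```python
import math

def eci_to_ecef(r_eci, theta):
    """Rotate an ECI position vector into ECEF by the Earth rotation
    angle theta (radians), i.e. a rotation about the common z-axis.
    Simplified: neglects precession, nutation, and polar motion."""
    x, y, z = r_eci
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * y, -s * x + c * y, z)

# A point on the ECI x-axis, seen after Earth has rotated 90 degrees,
# lies on the ECEF -y axis:
print(eci_to_ecef((7000.0, 0.0, 0.0), math.pi / 2))  # ~(0, -7000, 0)
```

A full conversion between the frames would also apply the precession, nutation, and polar-motion corrections mentioned below.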
The extent to which an ECI frame is actually inertial is limited by the non-uniformity of the surrounding gravitational field. For example, the Moon's gravitational influence on a high-Earth orbiting satellite is significantly different than its influence on Earth, so observers in an ECI frame would have to account for this acceleration difference in their laws of motion. The closer the observed object is to the ECI-origin, the less significant the effect of the gravitational disparity is.
Coordinate system definitions
It is convenient to define the orientation of an ECI frame using the Earth's orbit plane and the orientation of the Earth's rotational axis in space. The Earth's orbit plane is called the ecliptic, and it does not coincide with the Earth's equatorial plane. The angle between the Earth's equatorial plane and the ecliptic, ε, is called the obliquity of the ecliptic and ε ≈ 23.4°.
An equinox occurs when the Earth is at a position in its orbit such that a vector from the Earth toward the Sun points to where the ecliptic intersects the celestial equator. The equinox which occurs near the first day of spring in the Northern Hemisphere is called the vernal equinox. The vernal equinox can be used as a principal direction for ECI frames. The Sun lies in the direction of the vernal equinox around 21 March. The fundamental plane for ECI frames is usually either the equatorial plane or the ecliptic.
The location of an object in space can be defined in terms of right ascension and declination which are measured from the vernal equinox and the celestial equator. Right ascension and declination are spherical coordinates analogous to longitude and latitude, respectively. Locations of objects in space can also be represented using Cartesian coordinates in an ECI frame.
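The mapping from right ascension, declination, and range to Cartesian ECI coordinates follows the usual spherical-to-Cartesian relations; a minimal sketch (angles in radians; the function name is illustrative):

```python
import math

def radec_to_cartesian(r, ra, dec):
    """Convert range r, right ascension ra, and declination dec
    (radians) to Cartesian (x, y, z) in an ECI frame: x toward the
    vernal equinox, z toward the celestial north pole."""
    x = r * math.cos(dec) * math.cos(ra)
    y = r * math.cos(dec) * math.sin(ra)
    z = r * math.sin(dec)
    return (x, y, z)

# An object at ra = 90 degrees, dec = 0 lies on the +y axis:
print(radec_to_cartesian(1.0, math.pi / 2, 0.0))  # ~(0, 1, 0)
```

The inverse mapping recovers right ascension with `atan2(y, x)` and declination with `asin(z / r)`.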
The gravitational attraction of the Sun and Moon on the Earth's equatorial bulge cause the rotational axis of the Earth to precess in space similar to the action of a top. This is called precession. Nutation is the smaller amplitude shorter-period (< 18.6 years) wobble that is superposed on the precessional motion of the Celestial pole. It is due to shorter-period fluctuations in the strength of the torque exerted on Earth's equatorial bulge by the sun, moon, and planets. When the short-term periodic oscillations of this motion are averaged out, they are considered "mean" as opposed to "true" values. Thus, the vernal equinox, the equatorial plane of the Earth, and the ecliptic plane vary according to date and are specified for a particular epoch. Models representing the ever-changing orientation of the Earth in space are available from the International Earth Rotation and Reference Systems Service.
Examples include:
J2000: One commonly used ECI frame is defined with the Earth's Mean Equator and Mean Equinox (MEME) at 12:00 Terrestrial Time on 1 January 2000. It can be referred to as J2K, J2000 or EME2000. The x-axis is aligned with the mean vernal equinox. The z-axis is aligned with the Earth's rotation axis (or equivalently, the celestial North Pole) as it was at that time. The y-axis is rotated by 90° East about the celestial equator.
M50: This frame is similar to J2000, but is defined with the mean equator and equinox at the beginning of the Besselian year 1950, which is B1950.0 = JDE 2433282.423357 = 1950 January 0.9235 TT = 1949 December 31 22:09:50.4 TT.
GCRF: Geocentric Celestial Reference Frame is the Earth-centered counterpart of the International Celestial Reference Frame.
MOD: a Mean of Date (MOD) frame is defined using the mean equator and equinox on a particular date.
TEME: the ECI frame used for the NORAD two-line elements is sometimes called true equator, mean equinox (TEME) although it does not use the conventional mean equinox.
See also
Earth's axial tilt
Geocentric Celestial Reference System
Orbital state vectors
References
Astronomical coordinate systems | Earth-centered inertial | Astronomy,Mathematics | 1,120 |
96,628 | https://en.wikipedia.org/wiki/Offspring | In biology, offspring are the young creation of living organisms, produced either by sexual or asexual reproduction. Collective offspring may be known as a brood or progeny. This can refer to a set of simultaneous offspring, such as the chicks hatched from one clutch of eggs, or to all offspring produced over time, as with the honeybee. Offspring can occur after mating, artificial insemination, or as a result of cloning.
Human offspring (descendants) are referred to as children; male children are sons and female children are daughters (see Kinship).
Overview
The offspring of sexually reproducing organisms, also known as the F1 generation, carry genes from both the father and the mother, who together make up the parent generation. Each offspring carries numerous genes, which code for specific traits and functions. Males and females contribute equally to the genotypes of their offspring when their gametes fuse. An important unit in this inheritance is the chromosome, a structure of DNA that contains many genes.
One pattern of inheritance that shapes the F1 generation is sex linkage, in which a gene is located on a sex chromosome and its inheritance pattern differs between males and females. Offspring receive genes from both parents through crossing over, in which segments are exchanged between the maternal and paternal chromosomes during meiosis, after which the chromosomes are divided evenly between gametes. The sex chromosomes an offspring receives determine its sex. The female always contributes an X chromosome, whereas the male contributes either an X or a Y chromosome. A male offspring carries an X and a Y chromosome, while a female offspring carries two X chromosomes.
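The chromosome mechanism described above implies that the mother always contributes an X while the father contributes an X or a Y with roughly equal probability; a toy simulation (illustrative only, not from the source):

```python
import random

def offspring_sex_chromosomes(rng):
    """Mother always contributes X; father contributes X or Y at random."""
    return "X" + rng.choice("XY")

rng = random.Random(0)  # seeded for reproducibility
counts = {"XX": 0, "XY": 0}
for _ in range(10_000):
    counts[offspring_sex_chromosomes(rng)] += 1

# Roughly half female (XX) and half male (XY) offspring:
print(counts)
```

Over many simulated offspring the XX and XY counts converge toward a 1:1 ratio, matching the expected sex ratio from random segregation of the paternal sex chromosomes.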
Cloning is the production of an offspring which carries genes identical to those of its parent. Reproductive cloning begins with the removal of the nucleus from an egg, which holds the genetic material. In order to clone an organ, a stem cell is produced and then utilized to clone that specific organ. A common misconception about cloning is that it produces an exact copy of the parent being cloned. Cloning copies the DNA/genes of the parent and then creates a genetic duplicate. The clone will not be an exact copy in practice, as it will grow up in different surroundings from the parent and may encounter different opportunities and experiences that can result in epigenetic changes. Although mostly positive, cloning also faces some setbacks in terms of ethics and human health. Though cell division and DNA replication are vital parts of survival, there are many steps involved and mutations can occur with permanent change in an organism's and their offspring's DNA. Some mutations can be good, as they produce random variation that may benefit the species, but most mutations are bad, as they can change the genotypes of offspring in ways that harm the species.
See also
Breeding (disambiguation)
Family
Infanticide (zoology)
Litter
Parent–offspring conflict
Parental investment
Patrilineality
References
Families
Reproduction
Zoology | Offspring | Biology | 698 |
19,609,885 | https://en.wikipedia.org/wiki/Friedel%27s%20salt | Friedel's salt is an anion exchanger mineral belonging to the family of the layered double hydroxides (LDHs). It has affinity for anions as chloride and iodide and is capable of retaining them to a certain extent in its crystallographical structure.
Composition
Friedel's salt is a layered double hydroxide (LDH) of general formula:
or more explicitly for a positively-charged LDH mineral:
or by directly incorporating water molecules into the Ca,Al hydroxide layer:
where chloride and hydroxide anions occupy the interlayer to compensate for the excess positive charge.
In the cement chemist notation (CCN), considering that
and doubling all the stoichiometry, it could also be written in CCN as follows:
A simplified chemical composition, with only Cl– in the interlayer and without OH–, can also be written in cement chemist notation.
Friedel's salt is formed in cements initially rich in tri-calcium aluminate (C3A). Free chloride ions directly bind with the AFm hydrates (C4AH13 and its derivatives) to form Friedel's salt.
Importance of chloride binding in AFm phases
Friedel's salt plays a main role in the binding and retention of chloride anions in cement and concrete. However, Friedel's salt remains a poorly understood phase in the CaO–Al2O3–CaCl2–H2O system. A sufficient understanding of the Friedel's salt system is essential to correctly model the reactive transport of chloride ions in reinforced concrete structures affected by chloride attack and steel reinforcement corrosion. It is also important to assess the long-term stability of salt-saturated Portland cement-based grouts to be used in engineering structures exposed to seawater or concentrated brine as it is the case for radioactive waste disposal in deep salt formations.
Another reason to study AFm phases and the Friedel's salt system is their tendency to bind, trap and to immobilise toxic anions, such as , , and , or the long-lived radionuclide 129I−, in cementitious materials. Their characterization is important to conceive anion getters and to assess the retention capacity of cementitious buffer and concrete barriers used for radioactive waste disposal.
Chloride sorption and anion exchange in AFm phases
Friedel's salt could be first tentatively represented as an AFm phase in which two chloride ions would have simply replaced one sulfate ion. This conceptual representation based on the intuition of a simple stoichiometric exchange is very convenient to remind but such a simple mechanism likely does not directly occur and must be considered with caution:
Indeed, the reality appears to be more complex than such a simple stoichiometric exchange between chloride and sulfate ions in the AFm crystal structure. In fact, it seems that chloride ions are electrostatically sorbed onto the positively charged [Ca2Al(OH)6 · 2H2O]+ layer of the AFm hydrate, or could also exchange with the hydroxide ions (OH–) also present in the interlayer. So, the simple "apparent" exchange reaction presented above for simplicity does not correspond to reality and is an oversimplified representation.
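For bookkeeping only (my own reconstruction; as stressed above, this is a stoichiometric identity, not the actual mechanism), the "apparent" two-for-one exchange converting monosulfoaluminate (C3A·CaSO4·H12) into Friedel's salt balances as:

```latex
\mathrm{3CaO \cdot Al_2O_3 \cdot CaSO_4 \cdot 12H_2O} + 2\,\mathrm{Cl^-}
  \longrightarrow
\mathrm{3CaO \cdot Al_2O_3 \cdot CaCl_2 \cdot 10H_2O} + \mathrm{SO_4^{2-}} + 2\,\mathrm{H_2O}
```

Mass and charge both balance: 22 oxygen and 24 hydrogen atoms, and a net charge of −2, on each side.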
Similarly, Kuzel's salt could seem to be formed when only one Cl– ion exchanges with in AFm (half substitution of the sulfate ions):
Glasser et al. (1999) proposed naming this half-substituted salt in honor of its discoverer, Hans-Jürgen Kuzel.
However, Mesbah et al. (2011) identified two different types of interlayers in the crystallographic structure they determined, which precludes the common anion-exchange reaction presented above, as the authors themselves state in their conclusions:
Kuzel's salt is a two-stage layered compound with two distinct interlayers, which are alternatively filled by chloride anions only (for one kind of interlayer) and by sulfate anions and water molecules (for the other kind of interlayer). Kuzel's salt structure is composed of the perfect intercalation of the Friedel's salt structure and the monosulfoaluminate structure (the two end-members of the studied bi-anionic AFm compound). The structural properties of Kuzel's salt explain the absence of extended chloride to sulfate or sulfate to chloride substitution.
The staging feature of Kuzel's salt certainly explains the difficulty of substituting chloride and sulfate: a modification in one kind of interlayer involves a modification in the other kind of interlayer in order to preserve the electroneutrality of the compound. The two-stage feature of Kuzel's salt implies that each interlayer should be mono-anionic.
So, while the global chemical compositions of Friedel's salt and Kuzel's salt correspond to the stoichiometry of, respectively, a complete and a half substitution of sulfate ions by chloride ions in the crystal structure of AFm, this says nothing directly about the exact mechanism of anion substitution in this complicated system. Only detailed and well-controlled chloride sorption or anion-exchange experiments, with a complete analysis of all the dissolved species present in aqueous solution (also including OH–, Na+ and Ca2+ ions), can decipher the system.
Discovery
The discovery of Friedel's salt is relatively difficult to trace in the recent literature, simply because it is an old finding concerning a poorly known, non-natural product. It was synthesised and identified in 1897 by Georges Friedel, a mineralogist and crystallographer and the son of the French chemist Charles Friedel. Georges Friedel also synthesised calcium aluminate (1903) in the framework of his work on the theory of crystal twinning (macles). This point requires further verification.
Formation
Relation with Tricalcium aluminate.
Incorporation of chloride.
Solid solutions.
See also
AFm phases
Aluminium chlorohydrate
Cement
Sorel cement, a mixture of general formula: Mg4Cl2(OH)6
Stanislas Sorel, a French engineer who made a new form of cement from a combination of magnesium oxide and magnesium chloride
Concrete
Salt-concrete, also known as salzbeton
Chloride
Layered double hydroxides
Tricalcium aluminate
Friedel-Crafts reaction
Friedel family, a rich lineage of French scientists:
Charles Friedel (1832–1899), French chemist known for the Friedel-Crafts reaction
Georges Friedel (1865–1933), here above mentioned, French crystallographer and mineralogist; son of Charles
Edmond Friedel (1895–1972), French Polytechnician and mining engineer, founder of BRGM, the French geological survey; son of Georges
Jacques Friedel (1921–2014), French physicist; son of Edmond
References
Further reading
External links
Friedel's salt, Ca2Al(OH)6 (Cl, OH) · 2H2O: Its solid solutions and their role in chloride binding
Glossary of cement concrete and SEM terminology
Aluminates
Calcium compounds
Cement
Concrete
Crystallography
Hydrates
Hydroxides
Materials | Friedel's salt | Physics,Chemistry,Materials_science,Engineering | 1,499 |
74,854,900 | https://en.wikipedia.org/wiki/List%20of%20heritage%20registers%20in%20Bosnia%20and%20Herzegovina | National Monuments of Bosnia and Herzegovina are declared and maintained through the Commission to preserve national monuments of Bosnia and Herzegovina or KONS.
State level
Commission to preserve national monuments of Bosnia and Herzegovina
Central Register of Monuments
Also, a Bosnia and Herzegovina state commission for cooperation with UNESCO has been established:
State Commission of Bosnia and Herzegovina for UNESCO
Local level
The local level includes entity registers, the Brčko District register, and cantonal and regional registers:
Institute for the Protection of Monuments of the Federation of Bosnia and Herzegovina [Zavod za zaštitu spomenika Federacija Bosne i Hercegovine]
Republic Institute for Protection of Cultural and Natural Heritage of Republic of Srpska
Institute for the Protection of Monuments District Brčko [Zavod za zaštitu spomenika District Brčko] (Služba za turizam Vlade Brčko distrikta Bosne i Hercegovine)
Cantonal Institute for the Protection of Cultural–Historical and Natural Heritage Sarajevo [Kantonalni zavod za zaštitu kulturno–historijskog i prirodnog naslijeđa Sarajevo]
Public Institution Institute for the Protection and Use of Cultural–Historical and Natural Heritage of Tuzla Canton [JU Zavod za zaštitu i korištenje kulturno–historijskog i prirodnog naslijeđa Tuzlanskog kantona]
Cantonal Institute for Urbanism, Spatial Planning and Protection of the Cultural and Historical Heritage of the Central Bosnian Canton [Kantonalni zavod za urbanizam, prostorno planiranje i zaštitu kulturno–historijskog naslijeđa Srednjobosanskog Kantona]
Institute for the Protection of Cultural and Historical Heritage of Herzegovina–Neretva Canton [Zavod za zaštitu kulturno–historijske baštine Hercegovačko–Neretvanskog Kantona]
Public Institution Institute for the Protection of Cultural Heritage Bihać – Una-Sana Canton [JU Zavod za zaštitu kulturnog naslijeđa Bihać – Unsko–Sanski Kanton]
Institute for the Protection of Cultural Heritage of the Zenica–Doboj Canton [Zavod za zaštitu kulturne baštine Zeničko–dobojskog kantona]
Public Institution Agency for cultural–historical and natural heritage and development of the tourist potential of the city of Jajce [JU Agencija za kulturno–povijesnu i prirodnu baštinu i razvoj turističkih potencijala grada Jajca]
See also
Cultural heritage
National heritage site
World Heritage Site
List of heritage registers
List of National Monuments of Bosnia and Herzegovina
List of Intangible Cultural Heritage of Bosnia and Herzegovina
List of World Heritage Sites in Bosnia and Herzegovina
List of fortifications in Bosnia and Herzegovina
List of bridges in Bosnia and Herzegovina
List of World War II monuments and memorials in Bosnia and Herzegovina
List of People's Heroes of Yugoslavia monuments in Bosnia and Herzegovina
List of museums in Bosnia and Herzegovina
References
External links
Commission to preserve national monuments
Commission to preserve national monuments (old website in use as an archive)
Bosnia and Herzegovina
Heritage registers in Bosnia and Herzegovina
B | List of heritage registers in Bosnia and Herzegovina | Engineering | 709 |
76,628,961 | https://en.wikipedia.org/wiki/IC%204271 | IC 4271 is a spiral galaxy located some 800 million light-years away in the Canes Venatici constellation. It is 130,000 light-years in diameter. IC 4271 was first located on July 10, 1896, by Stephane Javelle, a French astronomer. It hosts a Seyfert type 2 nucleus, containing an acceleration disc around its supermassive black hole which releases large amounts of radiation, hence its bright appearance. IC 4271 appears to be interacting with its smaller neighboring galaxy, PGC 3096774.
Both galaxies form Arp 40. In the Atlas of Peculiar Galaxies created by Halton Arp, they are classified among spiral galaxies with low-surface-brightness companions.
References
Spiral galaxies
Interacting galaxies
Seyfert galaxies
Canes Venatici
4271
040
47334
+06-30-15 | IC 4271 | Astronomy | 181 |
43,253,786 | https://en.wikipedia.org/wiki/AK%20Pictoris | AK Pictoris is a star system in the constellation Pictor. Its combined apparent magnitude is 6.182. Based on the system's parallax, it is located 69 light-years (21.3 parsecs) away. AK Pictoris is a member of the AB Doradus moving group, a group of stars with similar motions that are thought to be associated.
AK Pictoris is a binary star. Its two stars orbit each other every 217.6 years, separated by 2.004 arcseconds. The primary star is a G-type star with properties similar to the Sun's. The secondary star is a K-type star. The primary star is a young BY Draconis variable, a class of variable stars that derive their variability from stellar rotation. It is also known to host a debris disk, inferred from its infrared excess.
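As a quick plausibility check (my own arithmetic, not from the article, and assuming the quoted separation of 2.004 is in arcseconds), the small-angle relation converts the angular separation and distance into a projected separation in AU, and Kepler's third law in solar units then gives an order-of-magnitude total system mass:

```python
# Plausibility check for the AK Pictoris orbit.
# Assumption (not stated above): the separation 2.004 is in arcseconds.

def projected_separation_au(sep_arcsec: float, dist_pc: float) -> float:
    """Small-angle relation: 1 arcsec at 1 pc subtends 1 AU."""
    return sep_arcsec * dist_pc

def kepler_total_mass_msun(a_au: float, period_yr: float) -> float:
    """Kepler's third law in solar units: M = a^3 / P^2."""
    return a_au**3 / period_yr**2

a = projected_separation_au(2.004, 21.3)   # ~42.7 AU
m = kepler_total_mass_msun(a, 217.6)       # treats a as the semi-major axis
print(f"separation ~ {a:.1f} AU, implied total mass ~ {m:.1f} Msun")
```

The result, roughly 1.6 solar masses, is consistent with a G-type primary plus K-type secondary, though the projected separation is only a proxy for the true semi-major axis.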
References
Pictor
Pictoris, AK
3400
048189
031711
2466
G-type main-sequence stars
K-type main-sequence stars
BY Draconis variables
Durchmusterung objects
Binary stars | AK Pictoris | Astronomy | 224 |
501,178 | https://en.wikipedia.org/wiki/MreB | MreB is a protein found in bacteria that has been identified as a homologue of actin, as indicated by similarities in tertiary structure and conservation of active site peptide sequence. The conservation of protein structure suggests the common ancestry of the cytoskeletal elements formed by actin, found in eukaryotes, and MreB, found in prokaryotes. Indeed, recent studies have found that MreB proteins polymerize to form filaments that are similar to actin microfilaments. It has been shown to form multilayer sheets comprising diagonally interwoven filaments in the presence of ATP or GTP.
MreB along with MreC and MreD are named after the mre operon (murein formation gene cluster E) to which they all belong.
Function
MreB controls the width of rod-shaped bacteria, such as Escherichia coli. A mutant E. coli that creates defective MreB proteins will be spherical instead of rod-like. Also, most bacteria that are naturally spherical do not have the gene encoding MreB. Members of the Chlamydiota are a notable exception, as these bacteria utilize the protein for localized septal peptidoglycan synthesis. Prokaryotes carrying the mreB gene can also be helical in shape. MreB was long thought to form a helical filament underneath the cytoplasmic membrane; however, this model has been brought into question by three recent publications showing that filaments cannot be seen by electron cryotomography and that GFP-MreB can be seen as patches moving around the cell circumference. MreB has been shown to interact with several proteins proven to be involved in length growth (for instance PBP2). Therefore, it probably directs the synthesis and insertion of new peptidoglycan building units into the existing peptidoglycan layer to allow length growth of the bacteria.
References
Further reading
- source of information added to this entry as of February 20, 2006
Cytoskeleton proteins
Prokaryotic cells | MreB | Biology | 430 |
1,525,765 | https://en.wikipedia.org/wiki/Cluster%20%28physics%29 | In physics, the term clusters denotes small, polyatomic particles. As a rule of thumb, any particle made of between 3×100 and 3×107 atoms is considered a cluster.
The term can also refer to the organization of protons and neutrons within an atomic nucleus, e.g. the alpha particle (also known as "α-cluster"), consisting of two protons and two neutrons (as in a helium nucleus).
Overview
Although the first reports of cluster species date back to the 1940s, cluster science emerged as a separate direction of research in the 1980s. One purpose of the research was to study the gradual development of the collective phenomena which characterize a bulk solid. Examples are the color of a body, its electrical conductivity, its ability to absorb or reflect light, and magnetic phenomena such as ferro-, ferri-, or antiferromagnetism. These are typical collective phenomena which only develop in an aggregate of a large number of atoms.
It was found that collective phenomena break down for very small cluster sizes. It turned out, for example, that small clusters of a ferromagnetic material are super-paramagnetic rather than ferromagnetic. Paramagnetism is not a collective phenomenon, which means that the ferromagnetism of the macrostate was not conserved by going into the nanostate. The question then was asked for example, "How many atoms do we need in order to obtain the collective metallic or magnetic properties of a solid?" Soon after the first cluster sources had been developed in 1980, an ever larger community of cluster scientists was involved in such studies.
This development led to the discovery of fullerenes in 1986 and carbon nanotubes a few years later.
In science, a lot is known about the properties of the gas phase; however, comparatively little is known about the condensed phases (the liquid and solid phases). The study of clusters attempts to bridge this gap of knowledge by clustering atoms together and studying their characteristics. If enough atoms were clustered together, eventually one would obtain a liquid or solid.
The study of atomic and molecular clusters also benefits the developing field of nanotechnology. If new materials are to be made out of nanoscale particles, such as nanocatalysts and quantum computers, the properties of the nanoscale particles (the clusters) must first be understood.
See also
Cluster chemistry
Nanoparticle
Nanocluster
References
External links
Scientific community portal for clusters, fullerenes, nanotubes, nanostructures, and similar small systems.
Nanomaterials | Cluster (physics) | Materials_science | 533 |
73,426,662 | https://en.wikipedia.org/wiki/Non-planetary%20abiogenesis | There are several hypotheses of the possibility of life originating in the universe in places other than planets, dated as early as 1774. Suggested locations are within stars, on the surface of stars, as well as in the interstellar space.
Life within the Sun
In 1965 the astronomer Ernst Julius Öpik wrote the article "Is the Sun Habitable?", in which he described how in 1774 Alexander Wilson of Glasgow, remarking that sunspots appear lower than the rest of the surface of the Sun, hypothesised that the interior of the Sun is colder than its surface and possibly suitable for life. Wilson suggested that the sunspots he observed were probably "immense excavations in the body of the Sun" (p. 16), considerably beneath the surface of the Sun, providing a glimpse of a surface below that does not emit much light. Prefacing with many words of caution, he further hypothesised that the Sun "is made up of two kinds of matter, very different in their qualities; that by far the greater part is solid and dark" (p. 20), with the dark globe thinly covered in a luminous substance. His hypothesis, acknowledged by William Herschel, did not contradict the knowledge of the time. In the 20th century an amateur astronomer, G. Buere of Osnabruck, offered a prize of DM 25,000 to anyone who could disprove the statement that the Sun has life. When objecting to a claimant of the prize, Buere essentially repeated the Wilson–Herschel hypothesis: "The sunspots are not spots but holes. They are dark which means that the interior of the Sun is cooler than its exterior. If this is so, there must be vegetation and the solar core is habitable."
Life within other stars
In order to discuss abiological life inside stars, Luis Anchordoqui and Eugene Chudnovsky suggest three postulates which must be satisfied by any reasonable definition of life:
The ability to encode information
The ability of information carriers to self-replicate faster than they disintegrate
The presence of free energy needed to constantly create order out of the disorder (i.e., to combat entropy) via self-replication
The authors proceed to argue that inside Sun-like stars objects that satisfy the above conditions can exist. They also suggest that an indication on the existence of such "nuclear life" could be observed deviations from predictions of models of stellar evolution, such as anomalies in luminosity. Authors themselves characterize the attributions of such anomalies to "life" as "a very long shot".
Life elsewhere
The concept of life forms living on the surface of neutron stars was proposed by the radio astronomer Frank Drake in 1973. Drake said that the atomic nuclei in neutron stars have a large variety which might combine into supernuclei, analogous to the molecules that serve as the base of life on Earth. Life of this type would be extremely fast, with several generations arising and dying within the span of a second. With tongue in cheek, Drake described the musings of a (hypothetical) scientist on a neutron star:
"Our theoreticians have predicted things called atoms ... almost empty space ... we never thought they could exist but they seem to exist out there. Could there be life? Suppose those things bond together to make a big molecule? Well it wouldn't be alive. After all, the temperature is too low and everything happens so slowly that nothing ever changes."
In the chapter "Stellar Graveyards, Nucleosynthesis, and Why We Exist" of The Stars of Heaven (2001), Clifford A. Pickover discusses various forms of abiological life. He poses the question whether, in the times of ultimate expansion of the Universe with extremely low density of matter, some structures could exist that can support the life of entities he calls the "Diffuse Ones". He also discussed the possibility of life without sunlight or starlight, e.g., on the surface of brown dwarfs. In the latter discussion he extrapolates from the existence of life with no sunlight in the depths of Earth's oceans, which draws energy from hydrogen sulphide. Life in the atmospheres of brown dwarfs was also discussed by Yates et al. in 2017, and in 2019 Manasvi Lingam and Abraham Loeb extended the discussion of Yates et al. Both articles extend the viability of Earth-like biological life beyond planets. Their ideas were criticized by experts on brown dwarfs.
In 2007 the Russian plasma physicist Vadim Tsytovich, together with German and Australian colleagues, published a paper in which they speculated about plasma-based inorganic living matter, extrapolating from computer simulations of self-organization reported in plasma. The simulated conditions can exist in nebulae. Tsytovich claims that the described structures are autonomous, reproducing and evolving, thus satisfying the conditions expected of life.
In fiction
Some works of science fiction involve life on or in neutron stars, whole sentient stars and even sentient black holes.
See also
Carbon chauvinism
Hypothetical types of biochemistry
Planetary chauvinism
Panspermia
Notes
References
Hypothetical life forms
Extraterrestrial life
Origin of life | Non-planetary abiogenesis | Astronomy,Biology | 1,051 |
30,290,243 | https://en.wikipedia.org/wiki/Elusimicrobiota | The phylum Elusimicrobiota, previously known as "Termite Group 1", has been shown to be widespread in different ecosystems like marine environment, sewage sludge, contaminated sites and soils, and toxic wastes. The high abundance of Elusimicrobiota representatives is only seen for the lineage of symbionts found in termites and ants.
The first organism to be cultured was Elusimicrobium minutum; however, two other species have been partially described and placed in a separate class, known as Endomicrobia.
Phylogeny
Taxonomy
The currently accepted taxonomy is based solely on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Phylum Elusimicrobiota Geissinger et al. 2021
Class "Elusimicrobiia" Oren, Parte & Garrity 2016 ex Cavalier-Smith 2020 [Elusimicrobia Geissinger et al. 2010]
Order "Obscuribacteriales" Uzun et al. 2023 [F11]
Family "Obscuribacteriaceae" Uzun et al. 2023
Genus "Ca. Obscuribacterium" Uzun et al. 2023
Order Elusimicrobiales Geissinger et al. 2010
Family "Lloretiaceae" Gago et al. 2024
Genus ?"Ca. Lloretia" Gago et al. 2024
Family Elusimicrobiaceae Geissinger et al. 2010
Genus "Ca. Avelusimicrobium" Gilroy et al. 2021
Genus Elusimicrobium Geissinger et al. 2010
Class Endomicrobiia corrig. Zheng et al. 2018
Order Endomicrobiales Zheng et al. 2018
Family "Liberimonadaceae" Uzun et al. 2023
Genus "Ca. Liberimonas" Uzun et al. 2023 [JAFGIL01]
Family Endomicrobiaceae Zheng et al. 2018
Genus "Ca. Ectomicrobium" Mies & Brune 2024 [JAISQF01]
Genus "Ca. Endomicrobiellum" Mies & Brune 2024 [Endomicrobium_A]
Genus Endomicrobium Zheng et al. 2018
Genus "Ca. Parendomicrobium" Mies & Brune 2024 [JAISKX01]
Genus "Ca. Praeruminimicrobium" Mies & Brune 2024 [JAHDQW01]
Genus "Ca. Proendomicrobium" Mies & Brune 2024 [WQVR01]
Genus "Ca. Proruminimicrobium" Mies & Brune 2024 [JAHDRN01]
Genus "Ca. Ruminimicrobiellum" Mies & Brune 2024 [RUG658]
Genus "Ca. Ruminimicrobium" Mies & Brune 2024 [RUG240]
See also
List of bacterial orders
List of bacteria genera
References
Bacteria phyla | Elusimicrobiota | Biology | 664 |
44,385,352 | https://en.wikipedia.org/wiki/ISO/IEC%2020248 | ISO/IEC 20248 Automatic Identification and Data Capture Techniques – Data Structures – Digital Signature Meta Structure is an international standard specification under development by ISO/IEC JTC 1/SC 31/WG 2. This development is an extension of SANS 1368, which is the current published specification. ISO/IEC 20248 and SANS 1368 are equivalent standard specifications. SANS 1368 is a South African national standard developed by the South African Bureau of Standards.
ISO/IEC 20248 [and SANS 1368] specifies a method whereby data stored within a barcode and/or RFID tag is structured and digitally signed. The purpose of the standard is to provide an open and interoperable method, between services and data carriers, to verify data originality and data integrity in an offline use case. The ISO/IEC 20248 data structure is also called a "DigSig", which refers to a digital signature that is small in bit count.
ISO/IEC 20248 also provides an effective and interoperable method to exchange data messages in the Internet of Things [IoT] and machine to machine [M2M] services allowing intelligent agents in such services to authenticate data messages and detect data tampering.
Description
ISO/IEC 20248 can be viewed as an X.509 application specification similar to S/MIME. Classic digital signatures are typically too big (more than 2k bits) to fit in barcodes and RFID tags while maintaining the desired read performance. ISO/IEC 20248 digital signatures, including the data, are typically smaller than 512 bits. X.509 digital certificates within a public key infrastructure (PKI) are used for key and data description distribution. This method ensures the open, verifiable decoding of data stored in a barcode and/or RFID tag into a tagged data structure; for example JSON and XML.
ISO/IEC 20248 addresses the need to verify the integrity of physical documents and objects. The standard counters verification costs of online services and device to server malware attacks by providing a method for multi-device and offline verification of the data structure. Example documents and objects are education and medical certificates, tax and share/stock certificates, licences, permits, contracts, tickets, cheques, border documents, birth/death/identity documents, vehicle registration plates, art, wine, gemstones and medicine.
A DigSig stored in a QR code or near field communications (NFC) RFID tag can easily be read and verified using a smartphone with an ISO/IEC 20248 compliant application. The application only need to go online once to obtain the appropriate DigSig certificate, where after it can offline verify all DigSigs generated with that DigSig certificate.
A DigSig stored in a barcode can be copied without influencing the data verification. For example; a birth or school certificate containing a DigSig barcode can be copied. The copied document can also be verified to contain the correct information and the issuer of the information. A DigSig barcode provides a method to detect tampering with the data.
A DigSig stored in an RFID/NFC tag provides for the detection of copied and tampered data, therefore it can be used to detect the original document or object. The unique identifier of the RFID tag is used for this purpose.
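The copy-detection mechanism can be illustrated with a toy sketch (hypothetical code, not the ISO/IEC 20248 wire format; a keyed HMAC stands in here for the real X.509-based signature). Because the tag's immutable unique identifier (TID) is covered by the signature but read from the tag itself, copying the envelope into another tag's memory changes the verified input and the check fails:

```python
import hashlib
import hmac

ISSUER_KEY = b"issuer-demo-key"  # stand-in for the issuer's signing key

def sign_envelope(payload: bytes, tid: bytes) -> bytes:
    # The TID is signed but NOT stored in the envelope: the verifier
    # must read it from the (non-changeable) tag memory itself.
    return hmac.new(ISSUER_KEY, payload + tid, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes, tid_read_from_tag: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, payload + tid_read_from_tag,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

payload = b'{"plate":"ABC123GP"}'
sig = sign_envelope(payload, tid=b"TID-ORIGINAL")

print(verify(payload, sig, b"TID-ORIGINAL"))  # True: genuine tag
print(verify(payload, sig, b"TID-CLONE"))     # False: data copied to another tag
```

In the real scheme no shared secret is needed: the verifier checks the signature against the issuer's public key obtained via the DigSig certificate.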
The DigSig Envelope
ISO/IEC 20248 calls the digital signature meta structure a DigSig envelope. The DigSig envelope structure contains the DigSig certificate identifier, the digital signature and the timestamp. Fields can be contained in a DigSig envelope in three ways. Consider the envelope DigSig{a, b, c}, which contains the field sets a, b and c.
a fields are signed and included in the DigSig envelope. All the information (the signed field value and the field value is stored on the AIDC) is available to verify when the data structure is read from the AIDC (barcode and/or RFID).
b fields are signed but NOT included in the DigSig envelope - only the signed field value is stored on the AIDC. Therefore the value of a b field must be collected by the verifier before verification can be performed. This is useful to link a physical object with a barcode and/or RFID tag as an anti-counterfeiting measure; for example the seal number of a bottle of wine may be a b field. The verifier needs to enter the seal number for a successful verification since it is not stored in the barcode on the bottle. When the seal is broken the seal number may also be destroyed and rendered unreadable; the verification can then not take place since it requires the seal number. A replacement seal must display the same seal number; using holograms and other techniques may make the generation of a new copied seal number unviable. Similarly the unique tag ID, also known as the TID in ISO/IEC 18000, can be used in this manner to prove that the data is stored on the correct tag. The interrogator will read the DigSig envelope from the changeable tag memory and then read the non-changeable unique TID to allow for the verification. If the data was copied from one tag to another, then the verification of the signed TID, as stored in the DigSig envelope, will reject the TID of the copied tag.
c fields are NOT signed but included in the DigSig envelope - only the field value is stored on the AIDC. A c field can therefore NOT be verified, only extracted from the AIDC. This field value may be changed without affecting the integrity of the signed fields.
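The three field kinds determine what the verifier feeds into the signature check. The sketch below is a hypothetical illustration (the helper name and the concatenation scheme are my own, not the standard's binary encoding): a values come from the carrier, b values must be collected externally, and c values are excluded from the signed bytes:

```python
def signed_bytes(a_fields: dict, b_values: dict, c_fields: dict) -> bytes:
    """Assemble the byte string the signature covers.

    a_fields: signed AND stored in the envelope (values read from the carrier)
    b_values: signed but NOT stored -- collected externally (PIN, seal no., TID)
    c_fields: stored but NOT signed -- excluded from the signature entirely
    """
    covered = {**a_fields, **b_values}            # c_fields deliberately omitted
    return "|".join(f"{k}={covered[k]}" for k in sorted(covered)).encode()

msg = signed_bytes(
    a_fields={"name": "A. Student", "degree": "BSc"},
    b_values={"PIN": "123456"},       # typed in by the verifier
    c_fields={"note": "reprint #2"},  # changeable without breaking the signature
)
print(msg)  # b'PIN=123456|degree=BSc|name=A. Student'
```

Changing a c value (the note) leaves msg unchanged, while a wrong PIN changes it and would make the signature check fail.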
The DigSig Data Path
Typically data stored in a DigSig originate as structured data; JSON or XML. The structured data field names map directly onto the DigSig Data Description [DDD]. This allows the DigSig Generator to digitally sign the data, store it in the DigSig envelope and compact the DigSig envelope to fit in the smallest bit size possible. The DigSig envelope is then programmed into an RFID tag or printed within a barcode symbology.
The DigSig Verifier reads the DigSig envelope from the barcode or RFID tag. It then identifies the relevant DigSig certificate, which it uses to extract the fields from the DigSig envelope and obtain the external fields. The Verifier then performs the verification and makes the fields available as structured data for example JSON or XML.
Examples
QR example
The following education certificate examples use the URI-RAW DigSig envelope format. The URI format allows a generic barcode reader to read the DigSig where after it can be verified online using the URI of the trusted issuer of the DigSig. Often the ISO/IEC 20248 compliant smartphone application (App) will be available on this website for down load, where after the DigSig can be verified offline. Note, a compliant App must be able to verify DigSigs from any trusted DigSig issuer.
The university certificate example illustrates the multi-language support of SANS 1368.
RFID and QR Example
In this example a vehicle registration plate is fitted with an ISO/IEC 18000-63 (Type 6C) RFID tag and printed with a QR barcode. The plate is offline verifiable both using a smartphone, when the vehicle is stopped, and using an RFID reader, when the vehicle drives past the reader.
Note the 3 DigSig Envelope formats; RAW, URI-RAW and URI-TEXT.
The DigSig stored in the RFID tag is typically in a RAW envelope format to reduce the size from the URI envelope format. Barcodes will typically use the URI-RAW format to allow generic barcode readers to perform an online verification. The RAW format is the most compact but it can only be verified with a SANS 1368 compliant application.
The DigSig stored in the RFID tag will also contain the TID (Unique Tag Identifier) within the signature part. A DigSig Verifier will therefore be able to detect data copied onto another tag.
QR with External data example
The following QR barcode is attached to a computer or smartphone to prove it belongs to a specific person. It uses a b type field, described above, to contain a secure personal identification number [PIN] remembered by the owner of the device. The DigSig Verifier will ask for the PIN to be entered, before the verification can take place. The verification will be negative if the PIN is incorrect. The PIN for the example is "123456".
The DigSig Data Description for the above DigSig is as follows:
{ "defManagementFields":
{ "mediasize":"50000",
"specificationversion":1,
"country":"ZAR",
"DAURI":"https://www.idoctrust.com/",
"verificationURI":"http://sbox.idoctrust.com/verify/",
"revocationURI":"https://sbox.idoctrust.com/chkrevocation/",
"optionalManagementFields":{}},
"defDigSigFields":
[{ "fieldid":"cid",
"type":"unsignedInt",
"benvelope":false},
{ "fieldid":"signature",
"type":"bstring",
"binaryformat":"{160}",
"bsign":false},
{ "fieldid":"timestamp",
"type":"date",
"binaryformat":"Tepoch"},
{ "fieldid":"name",
"fieldname":{"eng":"Name"},
"type":"string",
"range":"[a-zA-Z ]",
"nullable":false},
{ "fieldid":"idnumber",
"fieldname":{"eng":"Employee ID Number"},
"type":"string",
"range":"[0-9 ]"},
{ "fieldid":"sn",
"fieldname":{"eng":"Asset Serial Number"},
"type":"string",
"range":"[0-9a-zA-Z ]"},
{ "fieldid":"PIN",
"fieldname":{"eng":"6 number PIN"},
"type":"string",
"binaryformat":"{6}",
"range":"[0-9]",
"benvelope":false,
"pragma":"enterText"}]}
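Because the DDD is plain JSON, a verifier can mechanically discover which field values it must collect from outside the envelope. The sketch below (my own illustration, using a trimmed copy of the description above) lists the field ids whose benvelope flag is false, i.e. values not stored inside the envelope:

```python
import json

# Trimmed copy of the DigSig Data Description (DDD) shown above.
DDD = """
{"defDigSigFields": [
  {"fieldid": "cid",       "type": "unsignedInt", "benvelope": false},
  {"fieldid": "signature", "type": "bstring", "binaryformat": "{160}", "bsign": false},
  {"fieldid": "timestamp", "type": "date",   "binaryformat": "Tepoch"},
  {"fieldid": "name",      "type": "string", "range": "[a-zA-Z ]"},
  {"fieldid": "idnumber",  "type": "string", "range": "[0-9 ]"},
  {"fieldid": "sn",        "type": "string", "range": "[0-9a-zA-Z ]"},
  {"fieldid": "PIN",       "type": "string", "binaryformat": "{6}",
   "range": "[0-9]", "benvelope": false, "pragma": "enterText"}
]}
"""

fields = json.loads(DDD)["defDigSigFields"]
not_in_envelope = [f["fieldid"] for f in fields if f.get("benvelope") is False]
print(not_in_envelope)  # ['cid', 'PIN']
```

Note that the signature field's bsign flag (it is not itself signed) is distinct from benvelope, so it does not appear in the list.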
References
SANS 1368, Automatic identification and data capture techniques — Data structures — Digital Signature meta structure
FIPS PUB 186-4, Digital Signature Standard (DSS) – Computer security – Cryptography
IETF RFC 3076, Canonical XML Version 1.0
IETF RFC 4627, The application/JSON media type for JavaScript Object Notation (JSON)
IETF RFC 3275, (Extensible Markup Language) XML-Signature syntax and processing
IETF RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile
ISO 7498-2, Information processing systems – Open systems interconnection – Basic reference model – Part 2: Security architecture
ISO/IEC 9594-8 (ITU X.509), Information technology – Open Systems Interconnection – The Directory: Public-key and attribute certificate frameworks
ISO/IEC 10181-4, Information technology – Open Systems Interconnection – Security frameworks for open systems: Non-repudiation framework
ISO/IEC 11770-3, Information technology – Security techniques – Key management – Part 3: Mechanisms using asymmetric techniques
ISO/IEC 11889 (all parts), Information technology – Trusted Platform Module
ISO/IEC 15415, Information technology – Automatic identification and data capture techniques – Bar code print quality test specification – Two-dimensional symbols
ISO/IEC 15419, Information technology – Automatic identification and data capture techniques – Bar code digital imaging and printing performance testing
ISO/IEC 15423, Information technology – Automatic identification and data capture techniques – Bar code scanner and decoder performance testing
ISO/IEC 15424, Information technology – Automatic identification and data capture techniques – Data Carrier Identifiers (including Symbology Identifiers)
ISO/IEC 15963, Information technology – Radio frequency identification for item management – Unique identification for RF tags
ISO/IEC 16022, Information technology – Automatic identification and data capture techniques – Data Matrix bar code symbology specification
ISO/IEC 16023, Information technology – International symbology specification – MaxiCode
ISO/IEC 18000 (all parts), Information technology – Radio frequency identification for item management
ISO/IEC 18004, Information technology – Automatic identification and data capture techniques – QR Code 2005 bar code symbology specification
ISO/IEC TR 14516, Information technology – Security techniques – Guidelines for the use and management of Trusted Third Party services
ISO/IEC TR 19782, Information technology – Automatic identification and data capture techniques – Effects of gloss and low substrate opacity on reading of bar code symbols
ISO/IEC TR 19791, Information technology – Security techniques – Security assessment of operational systems
ISO/IEC TR 29162, Information technology – Guidelines for using data structures in AIDC media
ISO/IEC TR 29172, Information technology – Mobile item identification and management – Reference architecture for Mobile AIDC services
External links
http://csrc.nist.gov
http://www.ietf.org
https://web.archive.org/web/20141217133239/http://idoctrust.com/
http://www.iso.org
http://www.itu.int
http://www.sabs.co.za
Barcodes
Radio-frequency identification
Critical Foreign Dependencies Initiative

The Critical Foreign Dependencies Initiative (CFDI) is a strategy and list, maintained by the United States Department of Homeland Security, of foreign infrastructure which "if attacked or destroyed would critically impact the U.S." A copy of the 2008 list was redacted (removing details of names and locations) and leaked by WikiLeaks on 5 December 2010 as part of the website's leak of US diplomatic cables; no details on the exact locations of the assets were included in the list. In September 2011, WikiLeaks published the unredacted copy of the list. The list's release was met with strong criticism from the US and British governments, while media outlets and other countries reacted less strongly, noting that the entries are not secret and are easily identified.
Overview
According to the Department of Homeland Security (DHS), it "Developed and executed the Critical Foreign Dependencies Initiative (CFDI) which extends our protection strategy overseas to include important foreign infrastructure that if attacked or destroyed would critically impact the U.S. The prioritized National Critical Foreign Dependencies List (NCFDL) currently contains over 300 assets and systems in over 50 countries." According to the 2009 National Infrastructure Protection Plan, the CFDI was launched by the federal government "working in close coordination and cooperation with the private sector" in 2007 "to identify assets and systems located outside the United States, which, if disrupted or destroyed, would critically affect public health and safety, the economy, or national security. The resulting strategic compendium guides engagement with foreign countries in the CIKR [critical infrastructure and key resources] protection mission area". Using an initial inventory of infrastructure located outside the United States created by the federal government, DHS and the Department of State (DOS) developed the CFDI, "a process designed to ensure that the resulting classified list of critical foreign dependencies is representative and leveraged in a coordinated and inclusive manner."
Development of the CFDI was planned in three phases, on an annual and ongoing basis. The first phase was identification, beginning with "the first-ever National Critical Foreign Dependencies List in FY2008". This was done by the DHS working with "other Federal partners", in a process that "includes input from public and private sector CIKR community partners." Next comes prioritization, in which "DHS, in collaboration with other CIKR community partners and, in particular, DOS, prioritized the National Critical Foreign Dependencies List based on factors such as the overall criticality of the CIKR to the United States and the willingness and capability of foreign partners to engage in collaborative risk management activities." The third phase "involves leveraging the prioritized list to guide current and future U.S. bilateral and multilateral incident and risk management activities with foreign partners. DHS and DOS established mechanisms to ensure coordinated engagement and collaboration by public entities, in partnership with the private sector."
Disclosure
The "2008 Critical Foreign Dependencies Initiative (CFDI) list" was contained in a February 2009 diplomatic cable to the U.S. Secretary of State, Hillary Clinton, which was leaked, redacted and released in the United States diplomatic cables leak by WikiLeaks in 2010. The BBC described it as "one of the most sensitive" leaks as of 6 December 2010. In its redaction process, WikiLeaks removed only a minority of the details of names and locations, and left the rest uncensored; details of the exact location of the assets were not included in the list. In September 2011, WikiLeaks published the unredacted copy of the list. The list did not include any military facilities, but rather facilities important for the global supply chain, global communications, and economically important goods and services.
In the cable the State Department asked American diplomats to identify installations overseas "whose loss could critically impact the public health, economic security, and/or national and homeland security of the United States." The order was under the direction of the Department for Homeland Security in co-ordination with the Department of State.
In summary, the list consists of submarine communications cables, major port hubs, critical sea lanes, oil pipelines, mines, dams, and pharmaceutical facilities. A major emphasis on European pharmaceutical facilities was said by the BBC to suggest a fear of biological warfare or a global pandemic.
Responses to disclosure
The cable had been classified secret and not for review by non-U.S. personnel. The publication of the cable was followed by strong criticism from the US government and the British government, but a tepid response from news outlets and other foreign nations.
WikiLeaks spokesman Kristinn Hrafnsson said with reference to the cable: "This further undermines claims made by the US Government that its embassy officials do not play an intelligence-gathering role." Part of the cable read: "Posts are not/not being asked to consult with host governments with respect to this request." Hrafnsson later explained to The Times that the list itself "had been made available to 2.5 million people including military personnel and private contractors by the U.S. government". He went on to say: "in terms of security issues, while this cable details the strategic importance of assets across the world, it does not give any information as to their exact locations, security measures, vulnerabilities or any similar factors, though it does reveal the U.S. asked its diplomats to report back on these matters."
United States
US State Department spokesman P.J. Crowley denounced the disclosure, saying it "gives a group like al-Qaeda a targeting list." Anthony Cordesman, a national security analyst for the Center for Strategic and International Studies, said: "this has given a global map – a menu, if not a recipe book – to every extremist group in the world. To me it would be amazing to see how WikiLeaks could rationalize this." However, Alistair Millar, director of the Center on Global Counterterrorism Cooperation, said: "it's a little different...than with diplomatic cable leaks...in this case, this is largely information available to everyone if they really wanted to look."
Janet Napolitano, the Secretary of Homeland Security, said the list "could jeopardize our national security".
Nations other than the United States
A spokesman for British prime minister David Cameron said: "The leaks and their publication are damaging to national security in the United States, Britain and elsewhere. It is vital that governments are able to operate on the basis of confidentiality of information."
Vic Toews, the Public Safety Minister of Canada, seemed "unconcerned or unaware" of the release of the list. He said: "I don't follow gossip very much so I don't really know the impact of WikiLeaks, but I can assure you that the security agencies in Canada are following it very closely and to the extent that I need to be involved and address those issues, they will brief me on the issues."
Lin Yu-fang, a politician in Taiwan, stated, in regard to the revelation of the six undersea telecommunications cables in China, that there are "actually no secrets concerning the cables", but he said there "could be certain thorny political or military issues involving Taiwan, the U.S. or Japan if more sensitive secrets were exposed".
News outlets
A CBS article elaborating on the release stated that "although much of the information contained [in the list] was already in the public domain, officials in Washington and London have been quick to condemn WikiLeaks for publishing it, calling the act evidence of the organization's willingness to potentially aid terror groups in its mission to reveal U.S. secrets." The New York Times stated that the list "appears largely limited to sites that any would-be terrorist with Internet access and a bit of ingenuity might quickly have identified."
The Lancashire Evening Post pointed out in an article that the list "contains information on defence sites in Lancashire which is more than five years out of date." The article specifically pointed out that the "Royal Ordnance (RO) site at Chorley...has been developed as Buckshaw Village for the past five years" and the "BAE facility in Plymouth, Devon...[was] sold as part of a deal three years ago."
Companies
Mayne Pharma told the Herald-Sun that "its entry on a classified diplomatic cable is out-of-date and full of errors", since the drug listed on the cable as its resource, a snake anti-venom, hasn't been made by the company for "more than ten years".
Roger Aston, the chief executive of Mayne Pharma, said: "I can only go on what I can see now in the media (about WikiLeaks) but judging from what I've seen about what they've said about Mayne Pharma and Faulding, a lot of it (the information) is old, out of date stuff that's not relevant."
Dean Veverka of Southern Cross concurred, saying, "(Roger Aston's comments) that the information in the WikiLeaks document was ten years out of date could be accurate. To only list Southern Cross as the only internet cable network here might have been relevant 10 years ago (when only coaxial cables were available), but Australia now has seven cables going out of country. Australia has a very resilient network nowadays."
Bill Gorman, sales director of David Brown Ltd., said: "We make gearboxes for our platinum and gold mines. We have supplied equipment via the US for other countries, but have only once exported directly to the States, for a copper mine seven years ago. I have no idea why we're on the list."
A BAE Systems spokeswoman said: "The information in the list was incorrect. The site in Plymouth was sold in 2007, and in Chorley, there are no longer any weapons manufacturing, although there is still an office there. The information about Preston was correct. The safety and security of our people and facilities is of highest priority."
List of critical foreign dependencies
The 2008 CFDI list, as redacted by WikiLeaks, listed the following infrastructures:
Sea ports
A number of sea ports were listed, including several Chinese ports (Shanghai Port, Guangzhou Port, Hong Kong Port, Ningbo Port, Tianjin Port) as well as one Taiwanese port (Kaohsiung Port) and several European ports (Port of Antwerp, Port of Hamburg, Rotterdam Port).
Cable routes
Northern hemisphere
Bermuda - GlobeNet, formerly Bermuda US-1 (BUS-1) undersea cable landing Devonshire, Bermuda
Canada - Hibernia Atlantic undersea cable landing at Herring Cove, Nova Scotia, Canada
China - C2C Cable Network undersea cable landings at Chom Hom Kok, Tseung Kwan O, and Shanghai; China-US undersea cable landings at Chongming and Shantou; and FLAG/REACH North Asia Loop undersea cable landing at Tong Fuk
Denmark - TAT-14 undersea cable landing, Blaabjerg, Denmark
Fiji - Southern Cross undersea cable landing, Suva, Fiji
France - APOLLO undersea cable, Lannion, France; FA-1 undersea cable, Plerin, France; and TAT-14 undersea cable landing St. Valery, France
French Guiana - Americas-II undersea cable landing Cayenne, French Guiana
Germany - TAT-14 undersea cable landing, Norden, Germany; Atlantic Crossing-1 (AC-1) undersea cable landing Sylt
Ireland - Hibernia Atlantic undersea cable landing, Dublin Ireland
Japan - C2C Cable Network undersea cable landings in Chikura, Ajigaura, and Shima; China-US undersea cable in Okinawa; FLAG/REACH North Asia Loop undersea cable landing in Wada; Japan-US undersea cable landings at Maruyama and Kitaibaraki; KJCN undersea cable landings at Fukuoka and Kita-Kyushu; Pacific Crossing-1 (PC-1) undersea cable landing in Ajigaura and Shima; and Tyco Transpacific undersea cable landings in Toyohashi and Emi.
Martinique - Americas-II undersea cable landing Le Lamentin, Martinique
Mexico - FLAG/REACH North Asia Loop undersea cable landing, Tijuana and Pan-American Crossing (PAC) undersea cable landing, Mazatlan
Netherlands - Atlantic Crossing-1 (AC-1) undersea cable landing, Beverwijk; TAT-14 undersea cable landing, Katwijk
Panama - FLAG/REACH North Asia Loop undersea cable landing Fort Amador, Panama
Philippines - C2C Cable Network undersea cable landing, Batangas, Philippines; and EAC undersea cable landing Cavite, Philippines
Republic of Korea - C2C Cable Network undersea cable landing, Pusan, Republic of Korea; EAC undersea cable landing Shindu-Ri, Republic of Korea; FLAG/REACH North Asia Loop undersea cable landing Pusan, Republic of Korea; and KJCN undersea cable landing Pusan, Republic of Korea
Singapore - C2C Cable Network undersea cable landing, Changi, Singapore; and EAC undersea cable landing Changi North, Singapore
Taiwan - C2C Cable Network undersea cable landing, Fangshan, Taiwan; C2C Cable Network undersea cable landing, Tanshui, Taiwan; China-US undersea cable landing Fangshan, Taiwan; EAC undersea cable landing Pa Li, Taiwan; FLAG/REACH North Asia Loop undersea cable landing Toucheng, Taiwan
Trinidad and Tobago - Americas-II undersea cable landing Port of Spain
United Kingdom - APOLLO undersea cable landing Bude, Cornwall Station, United Kingdom; Atlantic Crossing-1 (AC-1) undersea cable landing Whitesands Bay; FA-1 undersea cable landing Skewjack, Cornwall Station; Hibernia Atlantic undersea cable landing, Southport, United Kingdom; TAT-14 undersea cable landing Bude, Cornwall Station, United Kingdom; Tyco Transatlantic undersea cable landing, Highbridge, United Kingdom; Tyco Transatlantic undersea cable landing, Pottington, United Kingdom; and Yellow/Atlantic Crossing-2 (AC-2) undersea cable landing Bude, United Kingdom
Venezuela - Four cable landing sites in Venezuela. GlobeNet undersea cable landings at Punta Gorda, Catia La Mar, and Manonga
Southern hemisphere
Australia - Southern Cross undersea cable landings at Brookvale and Sydney, Australia
Brazil - Americas-II undersea cable landing at Fortaleza; GlobeNet undersea cable landing at Fortaleza; and GlobeNet undersea cable landing Rio de Janeiro
Netherlands Antilles - Americas-II undersea cable landing, Willemstad
New Zealand - Southern Cross undersea cable landing, Whenuapai, New Zealand; and Southern Cross undersea cable landing, Takapuna, New Zealand
Mineral resources
Australia - Manganese - Battery grade, natural; battery grade, synthetic; chemical grade; ferro; metallurgical grade; Nickel Mines
China - Fluorite (Mine); Germanium Mine; Graphite Mine; Rare-earth minerals/elements; Tin Mine and Plant; and Tungsten - Mine and Plant
Democratic Republic of Congo - Cobalt (Mine and Plant)
Gabon - Manganese - Battery grade, natural; battery grade, synthetic; chemical grade; ferro; metallurgical grade
Guinea - Bauxite (Mine)
South Africa - Chromite mines around Rustenburg; Ferrochromium; Manganese - Battery grade, natural; battery grade, synthetic; chemical grade; ferro; metallurgical grade; Palladium Mine and Plant; Platinum Mines; and Rhodium
Indonesia - Tin Mine and Plant
Japan - Iodine Mine
Belgium - Germanium Mine
Norway - Cobalt Nickel Mine
Russia - Uranium Nickel Mine: Used in certain types of stainless steel and superalloys; Palladium Mine and Plant; and Rhodium
Ukraine - Manganese - Battery grade, natural; battery grade, synthetic; chemical grade; ferro; metallurgical grade
Kazakhstan - Ferrochromium Khromtau Complex, Kempersai, (Chromite Mine)
India - Orissa (chromite mines) and Karnataka (chromite mines)
Brazil - Iron Ore from Rio Tinto Mine; Manganese - Battery grade, natural; battery grade, synthetic; chemical grade; ferro; metallurgical grade; Niobium (Columbium), Araxa, Minas Gerais State (mine); and Ouvidor and Catalao I, Goias State: Niobium
Chile - Iodine Mine
Canada - Germanium Mine; Graphite Mine; Iron Ore Mine; Nickel Mine; Niobec Mine, Quebec, Canada: Niobium
Mexico - Graphite Mine
Peru - Tin Mine and Plant
Other sites
Africa
Morocco
Strait of Gibraltar
Maghreb-Europe (GME) gas pipeline, Morocco
South Africa
BAE Land System OMC, Benoni, South Africa
Brown David Gear Industries LTD, Benoni, South Africa
Tunisia
Trans-Med Gas Pipeline
East Asia and the Pacific
Australia
Maybe Faulding Mulgrave (F H Faulding) Victoria, Australia: Manufacturing facility for Midazolam injection.
Mayne Pharma (fill/finish), Melbourne, Australia: Sole suppliers of Crotalid Polyvalent Antivenin (CroFab)
China
Hydroelectric Dam Turbines and Generators
Polypropylene Filter Material for N-95 Masks
Indonesia
Straits of Malacca
Japan
Hitachi, Hydroelectric Dam Turbines and Generators
Ports at Chiba, Kobe, Nagoya, and Yokohama
Metal Fabrication Machines Titanium
Metal (Processed) Biken, Kanonji City, Japan
Hitachi Electrical Power Generators and Components Large AC Generators above 40 MVA
Republic of Korea
Hitachi Large Electric Power Transformers 230 - 500 kV
Busan Port
Malaysia
Straits of Malacca
Singapore
Straits of Malacca
Europe and Eurasia
Austria
Baxter AG, Vienna, Austria: Immune Globulin Intravenous (IGIV)
Octapharma Pharmazeutika, Vienna, Austria: Immune Globulin Intravenous (IGIV)
Azerbaijan
Sangachal Terminal
Baku-Tbilisi-Ceyhan Pipeline
Belarus
Druzhba Oil Pipeline
Belgium
Baxter SA, Lessines, Belgium: Immune Globulin Intravenous (IGIV)
Glaxo Smith Kline, Rixensart, Belgium: Acellular Pertussis Vaccine Component
GlaxoSmithKline Biologicals SA, Wavre, Belgium: Acellular Pertussis Vaccine Component
Denmark
Bavarian Nordic (BN), Hejreskovvej, Kvistgard, Denmark: Smallpox Vaccine
Novo Nordisk Pharmaceuticals, Inc. Bagsvaerd, Denmark: Numerous formulations of insulin
Novo Nordisk Insulin Manufacturer: Global insulin supplies
Statens Serum Institut, Copenhagen, Denmark: DTaP (including D and T components) pediatric version
France
Sanofi-Aventis Insulin Manufacturer: Global insulin supplies
Foot-and-mouth disease Vaccine finishing
Alstom, Hydroelectric Dam Turbines and Generators
Alstom Electrical Power Generators and Components
EMD Pharms Semoy, France: Cyanokit Injection
GlaxoSmithKline, Inc. Évreux, France: Influenza Neuraminidase inhibitor RELENZA (Zanamivir)
Diagast, Cedex, France: Olympus (assists with detecting blood group)
Genzyme Polyclonals SAS (bulk), Lyon, France: Thymoglobulin
Sanofi Pasteur SA, Lyon, France: Rabies virus vaccine
Georgia
Baku-Tbilisi-Ceyhan Pipeline
Germany
BASF Ludwigshafen: World's largest integrated chemical complex
Siemens Erlangen: Essentially irreplaceable production of key chemicals
Siemens, GE, Hydroelectric Dam Turbines and Generators
Draeger Safety AG & Co., Lübeck, Germany: Critical to gas detection capability
Junghans Microtec Dunningen-Seedorf, Germany: Critical to the production of mortars
TDW-Gesellschaft Wirksysteme, Schroebenhausen, Germany: Critical to the production of the Patriot Advanced Capability Lethality Enhancement Assembly
Siemens, Large Electric Power Transformers 230 - 500 kV
Siemens, GE Electrical Power Generators and Components
Druzhba Oil Pipeline
Sanofi Aventis Frankfurt am Main, Germany: Lantus Injection (insulin)
Heyl Chemisch-pharmazeutische Fabrik GmbH: Radiogardase (Prussian blue)
Hameln Pharmaceuticals, Hameln, Germany: Pentetate Calcium Trisodium (Ca DTPA) and Pentetate Zinc Trisodium (Zn DTPA) for contamination with plutonium, americium, and curium
IDT Biologika GmbH, Dessau-Rosslau, Germany: BN Smallpox Vaccine
Biotest AG, Dreieich, Germany: Supplier for TANGO (impacts automated blood typing ability)
CSL Behring GmbH, Marburg, Germany: Antihemophilic factor/von Willebrand factor
Novartis Vaccines and Diagnostics GmbH, Marburg, Germany: Rabies virus vaccine
Vetter Pharma Fertigung GmbH & Co KG, Ravensburg, Germany (filling): Rho(D) IGIV
Ireland
Genzyme Ireland Ltd. (filling), Waterford, Ireland: Thymoglobulin
Italy
Glaxo Smith Kline SpA (fill/finish), Parma, Italy: Digibind (used to treat digoxin overdose)
Trans-Med gas pipeline
Poland
Druzhba Oil Pipeline
Russia
Novorossiysk Export Terminal
Primorsk Export Terminal
Nadym Gas Pipeline Junction: The most critical gas facility in the world
Spain
Strait of Gibraltar
Instituto Grifols, SA, Barcelona, Spain: Immune Globulin Intravenous (IGIV)
Maghreb-Europe (GME) gas pipeline, Algeria
Sweden
Recip AB Sweden: Thyrosafe (potassium iodine)
Switzerland
Hoffman-LaRoche, Inc. Basel, Switzerland: Tamiflu (oseltamivir)
Berna Biotech, Berne, Switzerland: Typhoid vaccine
CSL Behring AG, Berne, Switzerland: Immune Globulin Intravenous (IGIV)
Turkey
Metal Fabrication Machines: Small number of Turkish companies (Durma, Baykal, Ermaksan)
Bosporus Strait
Baku-Tbilisi-Ceyhan Pipeline
United Kingdom
Goonhilly Teleport, Goonhilly Downs, United Kingdom
Madley Teleport, Stone Street, Madley, United Kingdom
Martlesham Teleport, Ipswich, United Kingdom
Foot and Mouth Disease Vaccine finishing
BAE Systems (Operations) Ltd., Presont [Preston], Lancashire, United Kingdom: Critical to the F-35 Joint Strike Fighter
BAE Systems Operations Ltd., Southway, Plymouth Devon, United Kingdom: Critical to Extended Range Guided Munitions
BAE Systems RO Defence, Chorley, United Kingdom: Critical to the Joint Standoff Weapon (JSOW) AGM-154C (Unitary Variant)
MacTaggart Scott, Loanhead, Edinburgh, Lothian, Scotland, United Kingdom: Critical to the Ship Submersible Nuclear (SSN)
Near/Middle East
Djibouti
Bab al-Mendeb: Shipping lane is a critical supply chain node
Egypt
'Ayn Sukhnah-SuMEd Receiving Import Terminal
Sidi Kurayr-SuMed Offloading Export Terminal
Suez Canal
Iran
Strait of Hormuz
Khark (Kharg) Island
Sea Island Export Terminal
Khark Island T-Jetty
Iraq
Al Basrah Oil Terminal
Israel
Rafael Ordnance Systems Division, Haifa, Israel: Critical to Sensor Fused Weapons (SFW), Wind Corrected Munitions Dispensers (WCMD), Tail Kits, and batteries
Kuwait
Mina' al Ahmadi Export Terminal
Oman
Strait of Hormuz
Qatar
Ras Laffan Industrial Center: By 2012 Qatar will be the largest source of imported LNG to U.S.
Saudi Arabia
Abqaiq Processing Center: Largest crude oil processing and stabilization plant in the world
Al Ju'aymah Export Terminal: Part of the Ras Tanura complex
As Saffaniyah Processing Center
Qatif Pipeline Junction
Ras at Tanaqib Processing Center
Ras Tanura Export Terminal
Shaybah Central Gas-oil Separation Plant
United Arab Emirates (UAE)
Das Island Export Terminal
Jabal Zannah Export Terminal
Strait of Hormuz
Yemen
Bab al-Mendeb: Shipping lane is a critical supply chain node
South and Central Asia
India
Generamedix Gujarat, India: Chemotherapy agents, including fluorouracil and methotrexate
Western Hemisphere
Argentina
Foot and Mouth Disease Vaccine finishing
Canada
James Bay Power Project, Quebec: monumental hydroelectric power development
Mica Dam, British Columbia: Failure would impact the Columbia River Basin
Hydro Quebec, Quebec: Critical irreplaceable source of power to portions of Northeast U.S.
Robert Moses-Robert H. Saunders Power Dam: Part of the St. Lawrence Power Project, between Barnhart Island, New York, and Cornwall, Ontario
Seven Mile Dam, British Columbia: Concrete gravity dam between two other hydroelectric power dams along the Pend d'Oreille River
Pickering Nuclear Power Plant, Ontario
Chalk River Nuclear Facility, Ontario: Largest supplier of medical radioisotopes in the world
Hydrofluoric Acid Production Facility, Allied Signal, Amherstburg, Ontario
Enbridge Pipeline
Alliance Pipeline: Natural gas transmission from Canada
Maritime and Northeast Pipeline: Natural gas transmission from Canada
TransCanada Gas: Natural gas transmission from Canada
Alexandria Bay Point of Entry (POE), Ontario: Northern border crossing
Ambassador Bridge Point of Entry, Ontario: Northern border crossing
Blaine POE, British Columbia: Northern border crossing
Blaine Washington Rail Crossing, British Columbia
Blue Water Bridge POE, Sarnia, Ontario: Northern border crossing
Champlain Bridge POE, Quebec: Northern border crossing
CPR Tunnel Rail Crossing, Ontario (Michigan Central Rail Crossing)
International Bridge Rail Crossing, Ontario
International Railway Bridge Rail Crossing
Lewiston-Queenston POE, Ontario: Northern border crossing
Peace Bridge POE, Ontario: Northern border crossing
Pembina, North Dakota POE, North Dakota/Manitoba border crossing.
North Portal Rail Crossing, Saskatchewan
St. Clair Tunnel Rail Crossing between Sarnia, Ontario and Port Huron, Michigan
Waneta Dam, British Columbia: Earthfill/concrete hydropower dam
Darlington Nuclear Power Plant, Ontario, Canada
E-ONE Moli Energy, Maple Ridge, British Columbia, Canada: Critical to production of various military application electronics
General Dynamics Land Systems - Canada, London Ontario, Canada: Critical to the production of the Stryker/USMC LAV Vehicle Integration
Raytheon Systems Canada Ltd. ELCAN Optical Technologies Division, Midland, Ontario: Critical to the production of the AGM-130 Missile
Thales Optronique Canada, Inc., Montreal, Quebec: Critical optical systems for ground combat vehicles
Cangene, Winnipeg, Manitoba: Plasma
Sanofi Pasteur Ltd., Toronto, Canada: makers of polio virus vaccine
GlaxoSmithKline Biologicals, North America, Quebec: Pre-pandemic influenza vaccines
Mexico
Amistad International Dam: On the Rio Grande near Del Rio, Texas and Ciudad Acuna, Coahuila, Mexico
Anzalduas Dam: Diversion dam south of Mission, Texas, operated jointly by the U.S. and Mexico for flood control
Falcon International Dam: Upstream of Roma, Texas and Miguel Aleman, Tamaulipas, Mexico
Retamal Dam: Diversion dam south of Weslaco, Texas, operated jointly by the U.S. and Mexico for flood control
GE Hydroelectric Dam Turbines and Generators: Main source for a large portion of larger components
Bridge of the Americas (El Paso – Ciudad Juárez): Southern border crossing
Brownsville POE: Southern border crossing
Calexico East POE: Southern border crossing
Colombia-Solidarity Bridge: Southern border crossing
Kansas City Southern de Mexico (KCSM) Rail Line, (Mexico)
Nogales POE: Southern border crossing
Laredo Rail Crossing
Eagle Pass Rail Crossing
Otay Mesa Crossing, World Trade Bridge, and Ysleta Zaragosa Bridge: Southern border crossings
Pharr International Bridge: Southern border crossing
Hydrofluoric Acid Production Facility
GE Electrical Power Generators and Components
General Electric, Large Electric Power Transformers 230 - 500 kV
Panama
Panama Canal
Trinidad and Tobago
Atlantic LNG: Provides 70% of U.S. natural gas import needs
United States diplomatic cables leak
United States Department of Homeland Security
Infrastructure
Classified documents
Analog observation

Analog observation is, in contrast to naturalistic observation, a research tool by which a subject is observed in an artificial setting. Settings in which analog observation is typically used include clinical offices and research laboratories, but, by definition, analog observations can be made in any artificial environment, even one the subject is likely to encounter naturally.
Applications
Analog observation is typically divided into two iterations of application: the first primarily studies the effect of manipulating variables in the subject's environment, including setting and events, on the subject's behavior; the second primarily seeks to observe the subject's behavior in quasi-experimental social situations.
See also
Psychological research
Psychological research methods
Naturalistic observation
Observational study
Behaviorism
Psychology experiments
Qualitative research
Naturalism (philosophy)
Clathrus ruber

Clathrus ruber is a species of fungus in the family Phallaceae, and the type species of the genus Clathrus. It is commonly known as the latticed stinkhorn, the basket stinkhorn, or the red cage, alluding to the striking fruit bodies that are shaped somewhat like a round or oval hollow sphere with interlaced or latticed branches. The species was illustrated in the scientific literature during the 16th century, but was not officially described until 1729.
The fruit body initially appears like a whitish "egg" attached to the ground at the base by cords called rhizomorphs. The egg has a delicate, leathery outer membrane enclosing the compressed lattice that surrounds a layer of olive-green spore-bearing slime called the gleba, which contains high levels of calcium that help protect the fruit body during development. As the egg ruptures and the fruit body expands, the gleba is carried upward on the inner surfaces of the spongy lattice, and the egg membrane remains as a volva around the base of the structure. The fruit body can reach heights of up to . The color of the fruit body, which can range from pink to orange to red, results primarily from the carotenoid pigments lycopene and beta-carotene. The gleba has a fetid odor, somewhat like rotting meat, which attracts flies and other insects to help disperse its spores.
The fungus is saprobic, feeding off decaying woody plant material, and is often found alone or in groups in leaf litter on garden soil, grassy places, or on woodchip garden mulches. Although considered primarily a European species, C. ruber has been introduced to other areas, and now has a wide distribution that includes all continents except Antarctica. Although the edibility of the fungus is not known with certainty, it has a deterrent odor. It was poorly regarded in southern European folklore, suggesting that those who handled the mushroom risked contracting various ailments.
Taxonomy
Clathrus ruber was illustrated in 1560 by the Swiss naturalist Conrad Gesner in his Nomenclator Aquatilium Animantium—Gesner mistook the mushroom for a marine organism. It appeared in a woodcut in John Gerard's 1597 Great Herball, shortly thereafter in Carolus Clusius's 1601 Fungorum in Pannoniis Observatorum Brevis Historia, and was one of the species featured in Cassiano dal Pozzo's museo cartaceo ("paper museum") that consisted of thousands of illustrations of the natural world.
The fungus was first described scientifically in 1729 by the Italian Pier Antonio Micheli, who gave it its current scientific name in his Nova plantarum genera iuxta Tournefortii methodum disposita. The species was once referred to by American authors as Clathrus cancellatus L., as they used a system of nomenclature based on the former American Code of Botanical Nomenclature, in which the starting point for naming species was Linnaeus's 1753 Species Plantarum. The International Code of Botanical Nomenclature now uses the same starting date, but names of Gasteromycetes used by Christian Hendrik Persoon in his Synopsis Methodica Fungorum (1801) are sanctioned and automatically replace earlier names. Since Persoon used the specific epithet ruber, the correct name for the species is Clathrus ruber. Several historical names of the fungus are now synonyms: Clathrus flavescens, named by Persoon in 1801; Clathrus cancellatus by Joseph Pitton de Tournefort and published by Elias Fries in 1823; Clathrus nicaeensis, published by Jean-Baptiste Barla in 1879; and Clathrus ruber var. flavescens, published by Livio Quadraccia and Dario Lunghini in 1990.
Clathrus ruber is the type species of the genus Clathrus, and is part of the group of Clathrus species known as the Laternoid series. Common features uniting this group include the vertical arms of the receptacle (fruit body) that are not joined together at the base, and the spongy structure of the receptacle. According to a molecular analysis published in 2006, out of the about 40 Phallales species used in the study, C. ruber is most closely related to Aseroe rubra, Clathrus archeri, Laternea triscapa, and Clathrus chrysomycelinus.
The generic name Clathrus is derived from Ancient Greek κλειθρον or "lattice", and the specific epithet is Latin ruber, meaning "red". The mushroom is commonly known as the "basket stinkhorn", the "lattice stinkhorn", or the "red cage". It was known to the locals of the Adriatic hinterland in the former Yugoslavia as veštičije srce or vještičino srce, meaning "witch's heart". This is still the case in parts of rural France, where it is known as cœur de sorcière.
Description
Before the volva opens, the fruiting body is egg-shaped to roughly spherical, up to in diameter, with a gelatinous interior up to thick. White to grayish in color, it is initially smooth, but develops a network of polygonal marks on the surface prior to opening as the internal structures expand and stretch the peridium taut. The fruit body, or receptacle, bursts the egg open as it expands (a process that can take as little as a few hours), and leaves the remains of the peridium as a cup or volva surrounding the base. The receptacle ranges in color from red to pale orange, and it is often lighter in color approaching the base. The color appears to be dependent upon the temperature and humidity of the environment. The receptacle consists of a spongy network of "arms" interlaced to make meshes of unequal size. At the top of the receptacle, the arms are up to thick, but they taper down to smaller widths near the base. A cross-section of the arm reveals it to be spongy, and made up of one wide inner tube and two indistinct rows of tubes towards the outside. The outer surface of the receptacle is ribbed or wrinkled. There are 7–20 angular windows and 80–120 mesh holes in the receptacle.
A considerable variation in height has been reported for the receptacle, ranging from tall. The bases of the fruit bodies are attached to the substrate by rhizomorphs (thickened cords of mycelia). The dark olive-green to olive-brown, foul-smelling sticky gleba covers the inner surface of the receptacle, except near the base. The odor—described as resembling rotting meat—attracts flies, other insects, and, in one report, a scarab beetle (Scarabaeus sacer) that help disperse the spores. The putrid odor—and people's reaction to it—have been well documented. In 1862, Mordecai Cubitt Cooke wrote "it is recorded of a botanist who gathered one for the purpose of drying it for his herbarium, that he was compelled by the stench to rise during the night and cast the offender out the window". American mycologist David Arora called the odor "the vilest of any stinkhorn". The receptacle collapses about 24 hours after its initial eruption from the egg.
The spores are elongated, smooth, and have dimensions of 4–6 by 1.5–2 μm. Scanning electron microscopy has revealed that C. ruber (in addition to several other Phallales species) has a hilar scar—a small indentation in the surface of the spore where it was previously connected to the basidium via the sterigma. The basidia (spore-bearing cells) are six-spored.
Biochemistry
Like other stinkhorn fungi, C. ruber bioaccumulates the element manganese. It has been postulated that this element plays a role in the enzymatic breakdown of the gleba with simultaneous formation of odorous compounds. Compounds like dimethyl sulfide, aldehydes, and amines—which contribute to the disagreeable odor of the gleba—are produced by the enzymatic decarboxylation of keto acids and amino acids, but the enzymes will only work in the presence of manganese. A chemical analysis of the elemental composition of the gelatinous outer layer, the embryonic receptacle and the gleba showed the gelatinous layer to be richest in potassium, calcium, manganese, and iron ions. Calcium ion stabilizes the polysaccharide gel, protecting the embryonic receptacle from drying out during the growth of the egg. Potassium is required for the gelatinous layer to retain its osmotic pressure and retain water; high concentrations of the element are needed to support the rapid growth of the receptacle. The high concentration of elements suggests that the gelatinous layer has a "placenta-like" function—serving as a reservoir from which the receptacle may draw upon as it rapidly expands.
Pigments responsible for the orange to red colors of the mature fruit bodies have been identified as carotenes, predominantly lycopene and beta-carotene—the same compounds responsible for the red and orange colors of tomatoes and carrots, respectively. Lycopene is also the main pigment in the closely related fungus Clathrus archeri, while beta-carotene is the predominant pigment in the Phallaceae species Mutinus caninus, M. ravenelii, and M. elegans.
Similar species
Clathrus ruber may be distinguished from the closely related tropical species C. crispus by the absence of the corrugated rims which surround each mesh of the C. crispus fruit body. The phylogenetically close species C. chrysomycelinus has a yellow receptacle with arms that are structurally simpler, and its gleba is concentrated on specialized "glebifers" located at the lattice intersections. It is known only from Venezuela to southern Brazil. Clathrus columnatus has a fruit body with two to five long vertical orange or red spongy columns, joined together at the apex.
Habitat and distribution
Like most of the species of the order Phallales, Clathrus ruber is saprobic—a decomposer of wood and plant matter—and is commonly found fruiting in mulch beds. The fungus grows alone or clustered together near woody debris, in lawns, gardens, and cultivated soil.
Clathrus ruber was originally described by Micheli from Italy. It is considered native to southern and central continental Europe, as well as Macaronesia (the Azores and the Canary Islands), western Turkey, North Africa (Algeria), and western Asia (Iran). The fungus is rare in central Europe, and is listed in the Red data book of Ukraine.
The fungus has probably been introduced elsewhere, often through the use of imported mulch in gardening and landscaping. It may have extended its range northwards into the British Isles or been introduced in the nineteenth century. It now has a mainly southerly distribution in England and has been recorded from Cornwall, Devon, Dorset, Somerset, the Isle of Wight, Hampshire, Berkshire, Sussex, Surrey, and Middlesex. In Scotland, it has been recorded from Argyll. It is also known from Wales, the Channel Islands, and Ireland. The fungus also occurs in the United States, in urban areas where it has likely been introduced (in California, Florida, Georgia, Hawaii, Alabama, Virginia, North Carolina, and New York), as well as in Canada, Mexico, and Australasia. The species was also reported from South America (Argentina). In China, it has been collected from Guangdong, Sichuan, Guizhou, and Tibet. Records from Japan are referable to Clathrus kusanoi; records from the Caribbean are probably of C. crispus.
In North America, the species can be found from October to March.
Toxicity
Although edibility for C. ruber has not been officially documented, its foul smell would dissuade most people from eating it. In general, stinkhorn mushrooms are considered edible when still in the egg stage, and are even considered delicacies in some parts of Europe and Asia, where they are pickled raw and sold in markets as "devil's eggs". An 1854 report provides a cautionary tale to those considering consuming the mature fruit body. Dr. F. Peyre Porcher, of Charleston, South Carolina, described an account of poisoning caused by the mushroom: "A young person having eaten a bit of it, after six hours suffered from a painful tension of the lower stomach, and violent convulsions. He lost the use of his speech, and fell into a state of stupor, which lasted for forty-eight hours. After taking an emetic he threw up a fragment of the mushroom, with two worms, and mucus, tinged with blood. Milk, oil, and emollient fomentations, were then employed with success."
C. ruber is generally listed as inedible or poisonous in many British mushroom publications from 1974 to 2008.
British mycologist Donald Dring, in his 1980 monograph on the family Clathraceae, wrote that C. ruber was not regarded highly in southern European folklore. He mentions a case of poisoning following its ingestion, reported by Barla in 1858, and notes that Ciro Pollini reported finding it growing on a human skull in a tomb in a deserted church. According to John Ramsbottom, Gascons consider the mushroom a cause of cancer; they will usually bury specimens they find. In other parts of France it has been reputed to produce skin rashes or cause convulsions.
In culture
Mycologist David Arora likened the unusual shape of the receptacle to a whiffleball. The German Mycological Society described it as "like an alien from a science fiction horror film" and named the species the 2011 "Mushroom of the Year".
References
External links
(in French with English text)
Bay Area Mycological Society Description and images
Phallales
Fungi described in 1801
Fungi of Africa
Fungi of Asia
Fungi of Australia
Fungi of Europe
Fungi of North America
Fungi of South America
Inedible fungi
Fungi of Macaronesia
Fungus species
6262 aluminium alloy is an alloy in the wrought aluminium-magnesium-silicon family (6000 or 6xxx series). It is related to 6162 aluminium alloy (Aluminum Association designations that only differ in the second digit are variations on the same alloy), but sees much more widespread use. It is notably distinct from 6162, and most other aluminium alloys, in that it contains lead in its alloy composition. It is typically formed by extrusion, forging, or rolling, but as a wrought alloy it is not used in casting. It can also be clad, but that is not common practice with this alloy. It cannot be work hardened, but is commonly heat treated to produce tempers with a higher strength but lower ductility.
Alternate names and designations for this alloy include AlMg1SiPb and A96262. The alloy and its various tempers are covered by the following standards:
ASTM B 210: Standard Specification for Aluminium and Aluminium-Alloy Drawn Seamless Tubes
ASTM B 211: Standard Specification for Aluminium and Aluminium-Alloy Bar, Rod, and Wire
ASTM B 221: Standard Specification for Aluminium and Aluminium-Alloy Extruded Bars, Rods, Wire, Profiles, and Tubes
ASTM B 483: Standard Specification for Aluminium and Aluminium-Alloy Drawn Tube and Pipe for General Purpose Applications
EN 573-3: Aluminium and aluminium alloys. Chemical composition and form of wrought products. Chemical composition and form of products
EN 754-2: Aluminium and aluminium alloys. Cold drawn rod/bar and tube. Mechanical properties
EN 755-2: Aluminium and aluminium alloys. Extruded rod/bar, tube and profiles. Mechanical properties
Chemical composition
The alloy composition of 6262 aluminium is:
Aluminium: 94.7 to 97.8%
Bismuth: 0.4 to 0.7%
Chromium: 0.04 to 0.14%
Copper: 0.15 to 0.40%
Iron: 0.7% max
Lead: 0.4 to 0.7%
Magnesium: 0.8 to 1.2%
Manganese: 0.15% max
Silicon: 0.4 to 0.8%
Titanium: 0.15% max
Zinc: 0.25% max
Residuals: 0.15% max
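As a quick consistency check (not part of any standard), the aluminium percentage range quoted above is simply what remains of 100% after the other constituents take their minimum or maximum values:

```python
# Consistency check on the AA 6262 composition limits listed above:
# the aluminium range should equal 100% minus the other elements' ranges.
limits = {  # (min %, max %) for the non-aluminium constituents
    "Bi": (0.4, 0.7), "Cr": (0.04, 0.14), "Cu": (0.15, 0.40),
    "Fe": (0.0, 0.7), "Pb": (0.4, 0.7), "Mg": (0.8, 1.2),
    "Mn": (0.0, 0.15), "Si": (0.4, 0.8), "Ti": (0.0, 0.15),
    "Zn": (0.0, 0.25), "Residuals": (0.0, 0.15),
}
al_min = 100 - sum(hi for _, hi in limits.values())  # others at maximum
al_max = 100 - sum(lo for lo, _ in limits.values())  # others at minimum
print(round(al_min, 2), round(al_max, 2))  # 94.66 97.81
```

The computed bounds round to the 94.7–97.8% aluminium range quoted above.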
Properties
Typical material properties for 6262 aluminium alloy include:
Density: 2.72 g/cm3, or 170 lb/ft3.
Electrical conductivity: 44% IACS.
Young's modulus: 69 GPa, or 10 Msi.
Ultimate tensile strength: 280 to 390 MPa, or 41 to 57 ksi.
Yield strength: 260 to 360 MPa, or 38 to 52 ksi.
Thermal expansion: 21.8 μm/m-K.
Solidus: 582 °C or 1080 °F.
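The dual units quoted above can be cross-checked with standard conversion factors; this snippet (an illustration, not sourced from the article) converts the metric values and compares them with the imperial figures:

```python
# Cross-checking the dual units quoted above using standard conversion
# factors; the property values themselves come from the article.
KG_PER_LB, M_PER_FT, PA_PER_PSI = 0.45359237, 0.3048, 6894.757

density_si = 2.72 * 1000                  # g/cm^3 -> kg/m^3
density_imp = density_si / (KG_PER_LB / M_PER_FT**3)
print(round(density_imp))                 # 170 (lb/ft^3)

uts_ksi = 280e6 / (PA_PER_PSI * 1000)     # 280 MPa in ksi
print(round(uts_ksi))                     # 41

modulus_msi = 69e9 / (PA_PER_PSI * 1e6)   # 69 GPa in Msi
print(round(modulus_msi))                 # 10
```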
References
Aluminium alloy table
Aluminium alloys
Aluminium–magnesium–silicon alloys
Magic circles were invented by the Song dynasty (960–1279) Chinese mathematician Yang Hui (c. 1238–1298). It is the arrangement of natural numbers on circles where the sum of the numbers on each circle and the sum of numbers on diameters are identical. One of his magic circles was constructed from the natural numbers from 1 to 33 arranged on four concentric circles, with 9 at the center.
Yang Hui magic circles
Yang Hui's magic circle series was published in his Xugu Zhaiqi Suanfa《續古摘奇算法》(Sequel to Excerpts of Mathematical Wonders) of 1275. The series includes: magic 5 circles in a square, 6 circles in a ring, magic 8 circles in a square, magic concentric circles, and magic 9 circles in a square.
Yang Hui magic concentric circle
Yang Hui's magic concentric circle has the following properties:
The sum of the numbers on four diameters = 147,
28 + 5 + 11 + 25 + 9 + 7 + 19 + 31 + 12 = 147
The sum of 8 numbers plus 9 at the center = 147;
28 + 27 + 20 + 33 + 12 + 4 + 6 + 8 + 9 = 147
The sum of the numbers on each of the eight radii (excluding the central 9) = magic number 69; for example, 27 + 15 + 3 + 24 = 69
The sum of all numbers on each circle (not including 9) = 2 × 69
There exist 8 semicircles where the sum of the numbers = magic number 69; in all there are 16 line segments (semicircles and radii) with magic number 69, more than the 12 magic sums of a 6th-order magic square.
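The sums quoted above can be checked directly. This snippet uses only the numbers the text actually lists; the full 33-number layout of the circle is not reproduced here:

```python
# Checking the sums quoted above, using only the numbers given in the text.
diameter = [28, 5, 11, 25, 9, 7, 19, 31, 12]         # one diameter, centre 9
ring_plus_centre = [28, 27, 20, 33, 12, 4, 6, 8, 9]  # eight numbers plus 9
radius = [27, 15, 3, 24]                             # one radius, without 9

print(sum(diameter), sum(ring_plus_centre), sum(radius))  # 147 147 69
# Each full circle without the centre 9 sums to 2 * 69 = 147 - 9 = 138.
```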
Yang Hui magic eight circles in a square
64 numbers (1–64) are arranged in eight circles of eight numbers each; each circle sums to 260, and the total of all the numbers is 2080 (= 8 × 260). The circles are arranged in a 3 × 3 square grid with the center area open, in such a way that the sums along the central rows and columns are also 260, and the numbers along the two diagonals total 520.
Yang Hui magic nine circles in a square
72 numbers from 1 to 72, arranged in nine circles of eight numbers each in a square, with neighbouring numbers forming four additional eight-number circles, for a total of 13 eight-number circles:
Extra circle x1 contains numbers from circles NW, N, C, and W; x2 contains numbers from N, NE, E, and C; x3 contains numbers from W, C, S, and SW; x4 contains numbers from C, E, SE, and S.
Total sum of 72 numbers = 2628;
sum of numbers in any eight number circle = 292;
sum of the three circles along each horizontal line = 876;
sum of the three circles along each vertical line = 876;
sum of the three circles along each diagonal = 876.
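These totals all follow from the magic constants alone, since the nine disjoint circles partition the numbers 1 to 72:

```python
# The totals quoted above follow from the magic constants alone.
total = 72 * 73 // 2        # sum of the numbers 1..72
per_circle = total // 9     # nine disjoint circles partition 1..72
print(total, per_circle, 3 * per_circle)  # 2628 292 876
```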
Ding Yidong magic circles
Ding Yidong was a mathematician contemporary with Yang Hui. In his magic circle with 6 rings, the unit numbers of the 5 outer rings, combined with the unit number of the center ring, form the following magic square:
{| class="wikitable"
| 4 || 9 || 2
|-
| 3 || 5 || 7
|-
| 8 || 1 || 6
|}
Method of construction:
Let radial group 1 = 1, 11, 21, 31, 41
Let radial group 2 = 2, 12, 22, 32, 42
Let radial group 3 = 3, 13, 23, 33, 43
Let radial group 4 = 4, 14, 24, 34, 44
Let radial group 6 = 6, 16, 26, 36, 46
Let radial group 7 = 7, 17, 27, 37, 47
Let radial group 8 = 8, 18, 28, 38, 48
Let radial group 9 = 9, 19, 29, 39, 49
Let center group = 5, 15, 25, 35, 45
Arrange groups 1, 2, 3, 4, 6, 7, 8, and 9 radially such that
each number occupies one position on circle
alternate the direction, so that one radial has its smallest number on the outside while the adjacent radial has its largest number on the outside.
Each group occupies the radial position corresponding to the number on the Luoshu magic square, i.e., group 1 at 1 position, group 2 at 2 position etc.
Finally arrange center group at the center circle, such that
number 5 on group 1 radial
number 10 on group 2 radial
number 15 on group 3 radial
...
number 45 on group 9 radial
Cheng Dawei magic circles
Cheng Dawei, a mathematician in the Ming dynasty, in his book Suanfa Tongzong listed several magic circles
Extension to higher dimensions
In 1917, W. S. Andrews published an arrangement of the numbers 1, 2, 3, ..., 62 in eleven circles of twelve numbers each on a sphere representing the parallels and meridians of the Earth, such that each circle has 12 numbers totalling 378.
Relationship with magic squares
A magic circle can be derived from one or more magic squares by putting a number at each intersection of a circle and a spoke. Additional spokes can be added by replicating the columns of the magic square.
In the example in the figure, the following 4 × 4 most-perfect magic square was copied into the upper part of the magic circle. Each number, with 16 added, was placed at the intersection symmetric about the centre of the circles. This results in a magic circle containing numbers 1 to 32, with each circle and diameter totalling 132.
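The construction above can be sketched in code. The article's figure is not reproduced here, so the particular 4 × 4 magic square below and the exact row-to-circle pairing are assumptions for illustration:

```python
# Sketch of the magic-square-to-magic-circle construction described above.
# The specific square and row-to-circle pairing are illustrative assumptions.
square = [
    [7, 12, 1, 14],
    [2, 13, 8, 11],
    [16, 3, 10, 5],
    [9, 6, 15, 4],
]
assert all(sum(row) == 34 for row in square)   # order-4 magic constant

# One circle per row: the row itself on the upper half, and the same
# numbers shifted by 16 at the symmetric lower positions.
circles = [row + [x + 16 for x in row] for row in square]
for c in circles:
    assert sum(c) == 132          # 34 + (34 + 4 * 16) = 132

assert sorted(n for c in circles for n in c) == list(range(1, 33))
print([sum(c) for c in circles])  # [132, 132, 132, 132]
```

Each circle's total is the row sum plus the shifted row sum, 34 + 98 = 132, matching the magic circle total quoted above.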
References
Lam Lay Yong: A Critical Study of Yang Hui Suan Fa 《杨辉算法》 Singapore University Press 1977
Wu Wenjun (editor in chief), Grand Series of History of Chinese Mathematics, Vol 6, Part 6 Yang Hui, section 2 Magic circle (吴文俊 主编 沈康身执笔 《中国数学史大系》 第六卷 第六篇 《杨辉》 第二节 《幻圆》)
Chinese mathematics
Song dynasty
Magic figures
The Saturn C-8 was the largest member of the Saturn series of rockets to be designed. It was a potential alternative to the Nova rocket, should NASA have chosen a direct ascent method of lunar exploration for the Apollo program. The first stage was an increased-diameter version of the S-IC. The second stage was an increased-diameter version of the S-II. Both of these stages had eight engines, as opposed to the standard five. The third stage was a stretched S-IVB stage, which retained its original diameter and engine.
NASA announced on September 7, 1961, that the government-owned Michoud Ordnance Plant near New Orleans, Louisiana, would be the site for fabrication and assembly of the Saturn first stages as well as larger vehicles in the Saturn program. Finalists were two government-owned plants in St. Louis and New Orleans. The height of the factory roof at Michoud meant that a launch vehicle with eight F-1 engines (Saturn C-8, Nova class) could not be built; four or five engines ( diameter) would have to be the maximum.
This decision ended consideration of a Nova-class launch vehicle for direct ascent to the Moon or as heavy-lift derivatives for Earth orbit rendezvous. Ultimately, the lunar orbit rendezvous ("LOR") concept approved in 1962 rendered the C-8 obsolete, and the smaller Saturn C-5 was developed instead under the designation "Saturn V", as the LOR spacecraft was within its payload capacity.
The Saturn C-8 configuration was never taken further than the design process, as it was too large and costly.
References
Bilstein, Roger E, Stages to Saturn, US Government Printing Office, 1980. . Excellent account of the evolution, design, and development of the Saturn launch vehicles.
Stuhlinger, Ernst, et al., Astronautical Engineering and Science: From Peenemuende to Planetary Space, McGraw-Hill, New York, 1964.
NASA, "Earth Orbital Rendezvous for an Early Manned Lunar Landing," pt. I, "Summary Report of Ad Hoc Task Group Study" [Heaton Report], August 1961.
David S. Akens, Saturn Illustrated Chronology: Saturn's First Eleven Years, April 1957 through April 1968, 5th ed., MHR-5 (Huntsville, AL : MSFC, 20 Jan. 1971).
Final Report, NASA-DOD Large Launch vehicle Planning Group, NASA-DOD LLVPG 105 [Golovin Committee], 3 vols., 1 Feb. 1962
External links
Diagram of C-8 with alternate 2-engine 3rd stage (not to the same proportions as the image above)
Cancelled space launch vehicles
Saturn C
C12orf66 is a protein that in humans is encoded by the C12orf66 gene. The C12orf66 protein is one of four proteins in the KICSTOR protein complex which negatively regulates mechanistic target of rapamycin complex 1 (mTORC1) signaling.
Gene
C12orf66 is located on the minus strand in the locus 12q14.2. C12orf66 variant 1 is approximately 36 kbp in length, spanning base pairs 64,186,312–64,222,296 on chromosome 12. There are 3 C12orf66 transcript variants in total. C12orf66 variant 1 is the longest, with 4 exons. C12orf66 variant 2 has a shortened exon 1 and is missing exon 4 compared to variant 1. C12orf66 variant 3 is missing exon 4.
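The genomic span follows directly from the coordinates quoted above:

```python
# Gene span implied by the chromosome 12 coordinates quoted above.
start, end = 64_186_312, 64_222_296
span_bp = end - start
print(span_bp)   # 35,984 bp, i.e. roughly 36 kbp
```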
Expression
In humans, C12orf66 has higher than average expression in a number of tissues such as endocrine glands as well as lymphoid tissues and cells. Additionally, C12orf66 expression is increased in a number of cancers including leukemia, breast cancer, cervical cancer, and a number of gastrointestinal related cancers. C12orf66 expression is higher earlier in development. A number of experiments using different human embryonic stem cell lines, oocytes, as well as erythroblasts found C12orf66 expression was increased in these cells earlier in development and expression decreased as these cells became more differentiated. Additionally, expression of C12orf66 in fetal organs is higher than C12orf66 expression in the same adult organs.
Protein
The human C12orf66 protein is 446 amino acids in length with a molecular weight of 50 kDa. C12orf66 contains the domain of unknown function 2003 (DUF2003) from amino acids 10–444. DUF2003 is characterized by a series of alpha helices and beta sheets.
Function
C12orf66 is part of a larger protein complex called KICSTOR. KICSTOR is a complex of four proteins coded by the genes KPTN, ITFG2, C12orf66, and SZT2. The KICSTOR complex plays a role in regulating mTORC1 signaling. mTORC1 activates protein translation when the cell has sufficient amounts of amino acids and energy. This ensures cell growth and proliferation occur in ideal cellular environments. KICSTOR recruits the protein complex GATOR1, a negative regulator of mTORC1, to the correct location on the lysosome where mTORC1 signaling occurs. In addition to the localization of GATOR1 to the lysosome, KICSTOR is also necessary for the regulation of mTORC1 signaling by amino acid or glucose deprivation. Normally, amino acid or glucose deprivation inhibits mTORC1 signaling. However, loss of any one protein in the four-protein KICSTOR complex resulted in a lack of inhibition of mTORC1 by amino acid or glucose deprivation and increased mTORC1 signaling. Thus, KICSTOR is a negative regulator of mTORC1 signaling that functions by localizing GATOR1 to the lysosomal surface as well as by inhibiting mTORC1 during periods of amino acid or glucose deprivation. How the KICSTOR complex directly inhibits mTORC1 and senses amino acid or glucose deprivation remains to be elucidated.
Clinical Significance
Loss of the genomic locus 12q14 which contains the human protein encoding gene C12orf66 is linked to a number of developmental delays and neurodevelopment disorders such as macrocephaly. Additionally, one study found the level of C12orf66 expression is down-regulated in colorectal cancer. In this study, the amount of C12orf66 down-regulation along with the expression of a number of other genes were used as an accurate indicator of clinical outcome in patients with colorectal cancer. Thus, the level of C12orf66 gene expression reflected the survivability of these patients.
Protein-Protein Interactions
C12orf66 interacts with the three proteins of the KICSTOR complex coded by the genes KPTN, ITFG2, and SZT2 as well as GATOR1. Additionally, C12orf66 is predicted to interact with KRAS, DEPDC5, and C7orf60. These interactions were detected by high throughput affinity capture chromatography.
Homologs
C12orf66 is a highly conserved protein with a large number of orthologs and no known paralogs. The list of C12orf66 orthologs includes mammals, birds, reptiles, amphibians, fish, marine worms, mollusks, insects, and fungi.
References
Uncharacterized proteins
MicMac is an open-source software for photogrammetry developed by the French National Geographic Institute.
See also
Comparison of photogrammetry software
References
External links
Official website
Citations of Rupnik et al. (2017).
Photogrammetry software
Free and open-source software
The Dragon is a two-stage French solid propellant sounding rocket used for high altitude research between 1962 and 1973. It thereby belonged to a family of solid-propellant rockets derived from the Bélier, which also includes the Centaure, the Dauphin and the Éridan.
The Dragon's first stage was a Stromboli engine (diameter 56 cm), which burned 675 kg of propellant in 16 seconds and so produced a maximum thrust of 88 kN. Versions of the Bélier engine were used as upper stages.
A payload of 30 to 120 kg could be carried on parabolic trajectories with apogees between 440 km (270 mi) (Dragon-2B) and 560 km (340 mi) (Dragon-3).
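A rough performance figure is implied by the first-stage numbers above. Treating the 88 kN maximum as an average over the whole burn (an order-of-magnitude assumption, not a published specification):

```python
# Back-of-envelope figures implied by the first-stage numbers above,
# assuming the 88 kN maximum thrust as an average over the whole burn.
propellant_kg, burn_s, thrust_n = 675.0, 16.0, 88_000.0
g0 = 9.80665                                # standard gravity, m/s^2

mdot = propellant_kg / burn_s               # mass flow, kg/s
isp = thrust_n / (mdot * g0)                # effective specific impulse, s
print(round(mdot, 1), round(isp))           # 42.2 213
```

The implied specific impulse of roughly 210 s is typical of solid propellants of that era.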
Versions
The Dragon was built in several versions, including the Dragon-2B and Dragon-3:
Launches
Dragons have been launched from Andøya, Biscarrosse, Dumont d'Urville, CELPA (El Chamical), CIEES, Kerguelen Islands, Kourou, Salto di Quirra, Sonmiani, Thumba, and Vík í Mýrdal between 1962 and 1973.
See also
Belier
Centaure
Dauphin
Éridan
References
Sud-Aviation Belier rocket family
Milü (; "close ratio"), also known as Zulü (Zu's ratio), is the name given to an approximation to (pi) found by Chinese mathematician and astronomer Zu Chongzhi in the 5th century. Using Liu Hui's algorithm (which is based on the areas of regular polygons approximating a circle), Zu famously computed to be between 3.1415926 and 3.1415927 and gave two rational approximations of , and , naming them respectively Yuelü (; "approximate ratio") and Milü.
is the best rational approximation of with a denominator of four digits or fewer, being accurate to six decimal places. It is within % of the value of , or in terms of common fractions overestimates by less than . The next rational number (ordered by size of denominator) that is a better rational approximation of is , though it is still only correct to six decimal places. To be accurate to seven decimal places, one needs to go as far as . For eight, is needed.
The accuracy of Milü to the true value of can be explained using the continued fraction expansion of , the first few terms of which are [3; 7, 15, 1, 292, ...]. A property of continued fractions is that truncating the expansion of a given number at any point will give the "best rational approximation" to the number. To obtain Milü, truncate the continued fraction expansion of immediately before the term 292; that is, is approximated by the finite continued fraction [3; 7, 15, 1], which is equivalent to Milü. Since 292 is an unusually large term in a continued fraction expansion (corresponding to the next truncation introducing only a very small term, 1/292, to the overall fraction), this convergent will be especially close to the true value of .
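The truncation described above can be sketched numerically: computing the first few continued-fraction convergents of π yields 22/7 (Yuelü), 333/106, and then Milü.

```python
# Computing the first continued-fraction convergents of pi; truncating
# before the large term 292 yields Milu, 355/113.
from fractions import Fraction
from math import pi

def convergents(x, n):
    """First n continued-fraction convergents of x."""
    h0, h1 = 1, int(x)      # numerators
    k0, k1 = 0, 1           # denominators
    out = [Fraction(h1, k1)]
    for _ in range(n - 1):
        x = 1.0 / (x - int(x))   # next partial quotient
        a = int(x)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append(Fraction(h1, k1))
    return out

cs = convergents(pi, 4)
print(cs[-1])   # 355/113
```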
Zu's contemporary calendarist and mathematician He Chengtian invented a fraction interpolation method called "harmonization of the divisor of the day" () to increase the accuracy of approximations of by iteratively adding the numerators and denominators of fractions. Zu Chongzhi's approximation ≈ can be obtained with He Chengtian's method.
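He Chengtian's idea of adding numerators and denominators can be sketched as a mediant iteration between an under- and an over-estimate of π; the exact bounds and stopping rule below are illustrative assumptions, but Milü does lie on this path:

```python
# Sketch of He Chengtian's "harmonization" idea: repeatedly form the
# mediant (numerators added, denominators added) of an under- and an
# over-estimate of pi, replacing whichever bound the mediant improves.
from math import pi

lo, hi = (3, 1), (4, 1)          # 3/1 < pi < 4/1
med = None
for _ in range(50):              # 355/113 is reached well within 50 steps
    med = (lo[0] + hi[0], lo[1] + hi[1])
    if med[0] / med[1] < pi:
        lo = med                 # mediant is still an underestimate
    else:
        hi = med                 # mediant is an overestimate
    if med == (355, 113):
        break
print(med)                       # (355, 113)
```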
An easy mnemonic helps memorize this fraction by writing down each of the first three odd numbers twice: 113355, then dividing the decimal number represented by the last 3 digits by the decimal number given by the first three digits: 355/113. (In Eastern Asia, fractions are read by stating the denominator first, followed by the numerator). Alternatively, .
See also
Continued fraction expansion of and its convergents
Approximations of π
Pi Approximation Day
Notes
References
External links
Fractional Approximations of Pi
Pi
History of mathematics
History of science and technology in China
Chinese mathematical discoveries
Chinese words and phrases
Approximations
Rational numbers
Zu Chongzhi
An antibody (Ab) or immunoglobulin (Ig) is a large, Y-shaped protein belonging to the immunoglobulin superfamily which is used by the immune system to identify and neutralize antigens such as bacteria and viruses, including those that cause disease. Antibodies can recognize antigens of virtually any size and of diverse chemical composition. Each antibody recognizes one or more specific antigens. Antigen literally means "antibody generator", as it is the presence of an antigen that drives the formation of an antigen-specific antibody. Each tip of the "Y" of an antibody contains a paratope that specifically binds to one particular epitope on an antigen, allowing the two molecules to bind together with precision. Using this mechanism, antibodies can effectively "tag" a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion).
More narrowly, an antibody (Ab) can refer to the free (secreted) form of these proteins, as opposed to the membrane-bound form found in a B cell receptor. The term immunoglobulin can then refer to both forms. Since they are, broadly speaking, the same protein, the terms are often treated as synonymous.
To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. The rest of the antibody structure is much less variable; in humans, antibodies occur in five classes, sometimes called isotypes: IgA, IgD, IgE, IgG, and IgM. Human IgG and IgA antibodies are also divided into discrete subclasses (IgG1, IgG2, IgG3, IgG4; IgA1 and IgA2). The class refers to the functions triggered by the antibody (also known as effector functions), in addition to some other structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. Between species, while classes and subclasses of antibodies may be shared (at least in name), their functions and distribution throughout the body may be different. For example, mouse IgG1 is closer to human IgG2 than human IgG1 in terms of its function.
The term humoral immunity is often treated as synonymous with the antibody response, describing the function of the immune system that exists in the body's humors (fluids) in the form of soluble proteins, as distinct from cell-mediated immunity, which generally describes the responses of T cells (especially cytotoxic T cells). In general, antibodies are considered part of the adaptive immune system, though this classification can become complicated. For example, natural IgM, which are made by B-1 lineage cells that have properties more similar to innate immune cells than adaptive, refers to IgM antibodies made independently of an immune response that demonstrate polyreactivity: they recognize multiple distinct (unrelated) antigens. These can work with the complement system in the earliest phases of an immune response to help facilitate clearance of the offending antigen and delivery of the resulting immune complexes to the lymph nodes or spleen for initiation of an immune response. Hence in this capacity, the function of antibodies is more akin to that of innate immunity than adaptive. Nonetheless, in general antibodies are regarded as part of the adaptive immune system because they demonstrate exceptional specificity (with some exception), are produced through genetic rearrangements (rather than being encoded directly in germline), and are a manifestation of immunological memory.
In the course of an immune response, B cells can progressively differentiate into antibody-secreting cells or into memory B cells. Antibody-secreting cells comprise plasmablasts and plasma cells, which differ mainly in the degree to which they secrete antibody, their lifespan, metabolic adaptations, and surface markers. Plasmablasts are rapidly proliferating, short-lived cells produced in the early phases of the immune response (classically described as arising extrafollicularly rather than from the germinal center) which have the potential to differentiate further into plasma cells. Plasmablasts are occasionally described as short-lived plasma cells, but formally this is incorrect. Plasma cells, in contrast, do not divide (they are terminally differentiated) and rely on survival niches comprising specific cell types and cytokines to persist. Plasma cells secrete huge quantities of antibody regardless of whether their cognate antigen is present, ensuring that antibody levels to the antigen in question do not fall to zero, provided the plasma cell stays alive. The rate of antibody secretion can, however, be regulated, for example by the presence of adjuvant molecules that stimulate the immune response, such as TLR ligands. Long-lived plasma cells can live for potentially the entire lifetime of the organism. Classically, the survival niches that house long-lived plasma cells reside in the bone marrow, though it cannot be assumed that any given plasma cell in the bone marrow will be long-lived. Other work indicates that survival niches can also readily be established within mucosal tissues, though the classes of antibodies involved show a different hierarchy from those in the bone marrow. B cells can also differentiate into memory B cells, which, like long-lived plasma cells, can persist for decades.
These cells can be rapidly recalled in a secondary immune response, undergoing class switching, affinity maturation, and differentiating into antibody-secreting cells.
Antibodies are central to the immune protection elicited by most vaccines and infections (although other components of the immune system certainly participate, and for some diseases, such as herpes zoster, are considerably more important than antibodies in generating an immune response). Durable protection from infection by a given microbe – infection meaning the microbe's entry into the body and the start of replication, not necessarily disease – depends on sustained production of large quantities of antibodies. Effective vaccines therefore ideally elicit persistent high levels of antibody, which relies on long-lived plasma cells. At the same time, many microbes of medical importance can mutate to escape antibodies elicited by prior infections, and long-lived plasma cells cannot undergo affinity maturation or class switching. This is compensated for by memory B cells: novel variants of a microbe that still retain structural features of previously encountered antigens can elicit memory B cell responses that adapt to those changes. It has been suggested that long-lived plasma cells secrete antibodies with higher affinity than the B cell receptors on the surfaces of memory B cells, but findings are not entirely consistent on this point.
Structure
Antibodies are heavy (~150 kDa) proteins of about 10 nm in size,
arranged in three globular regions that roughly form a Y shape.
In humans and most other mammals, an antibody unit consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds.
Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each.
These domains are usually represented in simplified schematics as rectangles.
Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ...
Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), containing one VL, VH, CL, and CH1 domain each, as well as the crystallisable fragment (Fc), forming the trunk of the Y shape.
In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily.
In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction.
Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ.
This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies.
Antigen-binding site
The variable domains can also be referred to as the FV region; this is the subregion of the Fab that binds to an antigen.
More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody.
When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody.
These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen.
Three CDRs from each of the heavy and light chains together form an antibody-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen.
Typically though, only a few residues contribute to most of the binding energy.
The existence of two identical antibody-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes.
The structures of CDRs have been clustered and classified by Chothia et al., and more recently by North et al. and Nikoloudis et al. However, describing an antibody's binding site using only a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction, and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes.
Fc region
The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is in modulating immune cell activity: it is where effector molecules bind to, triggering various effects after the antibody Fab region binds to an antigen.
Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot, therefore IgA does not activate the classical complement pathway.
Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport them across the placenta, from the mother to the fetus. In addition, binding to FcRn endows IgG with an exceptionally long half-life of 3–4 weeks relative to other plasma proteins. In most cases (depending on allotype), IgG3 carries mutations at the FcRn binding site that lower its affinity for FcRn; these are thought to have evolved to limit the highly inflammatory effects of this subclass.
Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues.
These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules.
Protein structure
The N-terminus of each chain is situated at the tip.
Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily:
it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif.
The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond.
Antibody complexes
Secreted antibodies can occur as a single Y-shaped unit, a monomer.
However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units). IgG can also form hexamers, though no J chain is required. IgA tetramers and pentamers have also been reported.
Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex.
Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc.
Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies.
An extreme example is the clumping, or agglutination, of red blood cells with antibodies in blood typing to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation.
B cell receptors
The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors.
These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences.
Classes
Antibodies come in different varieties known as isotypes or classes. In humans there are five antibody classes, known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1 and IgA2.
The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), μ (mu) give rise to IgA, IgG, IgD, IgE, IgM, respectively.
The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region.
The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table.
For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, often a principal contributor to asthma (though other pathways exist, as do conditions with symptoms very similar to, but not technically, asthma). The antibody's variable region binds to the allergic antigen, for example house dust mite particles, while its Fc region (in the ε heavy chains) binds to Fcε receptors on a mast cell, triggering its degranulation: the release of molecules stored in its granules.
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell surface bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system.
Light chain types
In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei).
In non-mammalian animals
In most placental mammals, the structure of antibodies is generally the same.
Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier.
Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains) which moreover feature longer chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies.
Antibody–antigen interactions
The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants.
Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities.
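Because binding is reversible, affinity can be described quantitatively with the law of mass action: at equilibrium, the fraction of antigen sites occupied depends on the antibody concentration and the dissociation constant (Kd). A minimal sketch of this relationship follows; the concentrations and Kd values used are purely illustrative, not taken from the text:

```python
def fraction_bound(ab_conc_nM: float, kd_nM: float) -> float:
    """Equilibrium fraction of antigen sites occupied, assuming the
    antibody is in large excess so its free concentration is roughly
    its total concentration (simple one-site binding model)."""
    return ab_conc_nM / (ab_conc_nM + kd_nM)

# Illustrative comparison: a high-affinity antibody (Kd = 1 nM) versus
# a low-affinity one (Kd = 100 nM), both at 10 nM antibody.
high = fraction_bound(10, 1)    # most antigen sites occupied
low = fraction_bound(10, 100)   # few antigen sites occupied
```

At a concentration equal to Kd, exactly half the sites are occupied, which is one way the dissociation constant is defined.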
Function
The main categories of antibody action include the following:
Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective
Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis
Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis
Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following:
Lysis of the foreign cell
Encouragement of inflammation by chemotactically attracting inflammatory cells
More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity.
Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures.
At the prenatal and neonatal stages of life, antibodies are provided by passive immunization from the mother. Early endogenous antibody production varies for the different kinds of antibodies and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: they prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens).
Activation of complement
Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis).
Activation of effector cells
To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region.
Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell; phagocytes will phagocytose, mast cells and neutrophils will degranulate, natural killer cells will release cytokines and cytotoxic molecules; that will ultimately result in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens.
Natural antibodies
Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway, leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Antibodies are produced exclusively by B cells: initially they are formed as membrane-bound receptors, but upon activation by antigens and helper T cells, B cells differentiate to produce soluble antibodies. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and are generated in response to production of this sugar by bacteria contained in the human gut. These antibodies undergo quality checks in the endoplasmic reticulum (ER), which contains proteins that assist in proper folding and assembly. Rejection of xenotransplanted organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue.
Immunoglobulin diversity
Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes.
Domain variability
The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combinatorial process is called V(D)J recombination and is discussed below.
V(D)J recombination
Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (i.e. V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences biology of B-cells.
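The combinatorial arithmetic can be sketched with commonly cited approximate human segment counts (roughly 65 V, 27 D and 6 J for the heavy chain; about 40 V and 5 J for κ; about 30 V and 4 J for λ). These figures are textbook approximations, not taken from this text, and the sketch deliberately ignores junctional diversity and somatic hypermutation, which multiply the total enormously:

```python
# Approximate functional gene segment counts (textbook figures; an assumption
# for illustration, not exact values).
HEAVY_V, HEAVY_D, HEAVY_J = 65, 27, 6
KAPPA_V, KAPPA_J = 40, 5
LAMBDA_V, LAMBDA_J = 30, 4

# One V, one D and one J segment are chosen for each heavy chain.
heavy_combinations = HEAVY_V * HEAVY_D * HEAVY_J
# A light chain uses either the kappa or the lambda locus (V and J only).
light_combinations = KAPPA_V * KAPPA_J + LAMBDA_V * LAMBDA_J
# Each antibody pairs one heavy chain with one light chain.
total_pairings = heavy_combinations * light_combinations

print(heavy_combinations)  # 10530
print(total_pairings)      # 3369600
```

Even before junctional imprecision and hypermutation are counted, segment choice alone yields millions of distinct heavy/light pairings from only a few hundred gene segments.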
RAG proteins play an important role in V(D)J recombination, cutting the DNA at particular regions. Without these proteins, V(D)J recombination would not occur.
After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion) thus each B cell can produce antibodies containing only one kind of variable chain.
Somatic hypermutation and affinity maturation
Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains.
This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells.
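The mutate-and-select cycle described above can be caricatured in a few lines of code: start with a clone of identical affinities, apply small random "mutations" each generation, keep only the higher-affinity half, and watch the mean affinity climb. This is purely a toy model of the selection logic, not a biological simulation; every parameter (population size, mutation size, number of generations) is invented for illustration.

```python
import random

def affinity_maturation(generations=10, pop_size=100, mut_sd=0.1, seed=0):
    """Toy model: Gaussian 'mutations' plus top-half selection each round.
    Returns the mean affinity of the final population."""
    rng = random.Random(seed)
    pop = [1.0] * pop_size  # a clone of identical starting affinities
    for _ in range(generations):
        # somatic hypermutation: a small random change per cell division
        pop = [a + rng.gauss(0, mut_sd) for a in pop]
        # selection: only the higher-affinity half survives, then re-expands
        pop.sort(reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors * 2
    return sum(pop) / len(pop)

final_mean = affinity_maturation()  # ends above the starting value of 1.0
```

The key point the sketch captures is that unbiased random mutation combined with survival-based selection is enough to ratchet the average affinity upward over successive rounds.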
Class switching
Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment.
Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype.
Specificity designations
An antibody can be called monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). In contrast, monoclonal antibodies are identical antibodies produced by a single B cell.
Asymmetrical antibodies
Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody dependent cell mediated cytotoxicity. Single-chain variable fragments (scFv) are connected to the variable domain of the heavy and light chain via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation.
To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms.
Heterodimeric antibodies have a greater range in shapes they can take, and the drugs attached to the arms do not have to be the same on each arm, allowing different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form would be expected to decrease functionality.
Interchromosomal DNA Transposition
Antibody diversification typically occurs through somatic hypermutation, class switching, and affinity maturation targeting the BCR gene loci, but on occasion more unconventional forms of diversification have been documented. For example, in the case of malaria caused by Plasmodium falciparum, some antibodies from those who had been infected demonstrated an insertion from chromosome 19 containing a 98-amino acid stretch from leukocyte-associated immunoglobulin-like receptor 1, LAIR1, in the elbow joint. This represents a form of interchromosomal transposition. LAIR1 normally binds collagen, but can recognize members of the repetitive interspersed families of polypeptides (RIFIN) family that are highly expressed on the surface of P. falciparum-infected red blood cells. In fact, these antibodies underwent affinity maturation that enhanced affinity for RIFIN but abolished affinity for collagen. These "LAIR1-containing" antibodies have been found in 5–10% of donors from Tanzania and Mali, though not in European donors. However, European donors did show insertions of 100–1000 nucleotides within the elbow region as well. This particular phenomenon may be specific to malaria, as infection is known to induce genomic instability.
History
The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody is formally analogous to the word antitoxin, and the concept is similar to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; an antitoxin is something directed against a toxin, while an antibody is a body directed against something.
The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. Their idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization.
In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies.
Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies.
Medical applications
Disease diagnosis
Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed.
In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis.
Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women.
In practice, several immunodiagnostic methods based on detection of antigen–antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over-the-counter pregnancy tests.
New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer.
Disease therapy
Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer.
Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual.
Prenatal therapy
Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn.
Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when a Rh-negative mother has a Rh-positive fetus. Treatment of a mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys fetal Rh antigen in the mother's system. This occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) immune globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.
Research applications
Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse for large quantities of antibody. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography.
In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques.
Antibodies used in research are some of the most powerful yet most problematic reagents, with a tremendous number of factors that must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent and the state of the tissue. Multiple attempts have been made to improve both the way that researchers validate antibodies and the ways in which they report on them. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested and qualified by other researchers). Fewer than half of the research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in four journals that implemented the RRID standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11).
Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several applications involve using bacterial plasmids, such as the pFUSE-Fc plasmid, to tag proteins with the Fc region of an antibody.
Regulations
Production and testing
There are several ways to obtain antibodies, including in vivo techniques like animal immunization and various in vitro approaches, such as the phage display method. Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include:
Demonstration that the process is able to produce antibody of consistent quality (the process should be validated)
The efficiency of the antibody purification (all impurities and viruses must be eliminated)
The characterization of purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, ...)
Virus clearance studies
Before clinical trials
Product safety testing: sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, and so on. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product.
Feasibility testing: These are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing).
Preclinical studies
Testing cross-reactivity of antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models).
Preclinical pharmacology and toxicity testing: preclinical safety testing of antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible.
Animal toxicity studies: Acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing
Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate potential clinical effects
Structure prediction and computational antibody design
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. RosettaAntibody is a novel antibody Fv region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on the structural bioinformatics studies of antibody CDRs.
There are a variety of methods used to sequence an antibody, including Edman degradation and cDNA sequencing; however, one of the most common modern approaches for peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in the attempt to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies have the ability to assemble protein sequences with high accuracy by integrating de novo sequencing peptides, intensity, and positional confidence scores from database and homology searches.
Antibody mimetic
Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents.
Binding antibody unit
BAU (binding antibody unit, often as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity.
See also
Affimer
Anti-mitochondrial antibodies
Anti-nuclear antibodies
Antibody mimetic
Aptamer
Colostrum
ELISA
Humoral immunity
Immunology
Immunosuppressive drug
Intravenous immunoglobulin (IVIg)
Magnetic immunoassay
Microantibody
Monoclonal antibody
Neutralizing antibody
Optimer Ligand
Secondary antibodies
Single-domain antibody
Slope spectroscopy
Surrobody
Synthetic antibody
Western blot normalization
References
External links
Mike's Immunoglobulin Structure/Function Page at University of Cambridge
Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank
A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford
How Lymphocytes Produce Antibody from Cells Alive!
Glycoproteins
Immunology
Reagents for biochemistry | Antibody | Chemistry,Biology | 11,074 |
37,166,559 | https://en.wikipedia.org/wiki/Sea%20chest%20%28nautical%29 | The term sea chest is used for a rectangular or cylindrical recess in the hull of a ship.
Ship's reservoir/filter
The sea chest provides an intake reservoir from which piping systems draw raw water. Most sea chests are protected by removable gratings, and contain baffle plates to dampen the effects of vessel speed or sea state. The intake size of sea chests varies from less than 10 cm² to several square metres.
Zebra mussel control in sea chests
When the ship is in freshwater, the hard steel surfaces of the sea chest, protective grates and baffles, combined with low water velocities created in this immediate area, provide a suitable environment for zebra mussel attachment. Zebra mussel infestations have been found to clog the individual intakes and gates of the various water piping systems, decreasing the availability of water for onboard operations, which could result in damage to engines and other components that require water for cooling. Sea chests are, therefore, considered to be the most susceptible component to serious infestation.
Control strategies include coating all surfaces with an antifoulant such as copper-based epoxy paint or hot-dip galvanizing. Periodic inspection and replacement of grates and screens also reduces the risk. Increasing the size of the sea chests by 20% to 30% may delay the onset of serious problems that could force an engine shutdown. Thermal treatment is a highly effective strategy for the control of zebra mussels (McMahon et al. 1995). Thermal treatment may include retrofitting a closed loop system to recirculate the heated water to the sea chest or the addition of a second sea chest system, allowing engine cooling water to be discharged through the idle sea chest. Recirculation of engine cooling water as a thermal control strategy has proved extremely effective in controlling zebra mussels (Palermo 1992, U.S. Coast Guard 1994).
References
This article uses public domain text taken directly from: http://el.erdc.usace.army.mil/zebra/zmis/zmishelp/sea_chest_floating_plant.html
Shipbuilding
Nautical terminology | Sea chest (nautical) | Engineering | 440 |
19,994,607 | https://en.wikipedia.org/wiki/Welded%20sculpture | Welded sculpture (related to visual art and works of art) is an art form in which sculpture is made using welding techniques.
History
Welded sculptures have a relatively short history, dating back only to the 20th century. Before the development of current welding technology, sculptures made from metal were either cast or forged, and welding was primarily used in the construction industry.
The first welded sculptures were credited to the Russian artist Vladimir Tatlin, who created his first piece of art in 1913. Tatlin was an important figure in the Russian Constructivist movement, which influenced the use of industrial materials in forms they had not yet been used in, mainly art.
In the 1920s and 1930s, more artists followed this path, and experimentation with artistic metalworking came to light. Among the earliest practitioners were Julio González and Alexander Calder. González was noted for welded sculptures that were not only abstractly expressive but functional as well. Calder's pieces are among the most famous examples of welded sculpture; often hung from ceilings or trees, they were mobile structures that responded to air currents, moving in mesmerizing ways.
The Catalan artist Julio González is credited as one of the best-known developers of welded sculpture. González came from a line of metalsmiths; his grandfather was a goldsmith from Galicia who settled in the Catalan capital in the early 19th century. González's father, Concordio González, owned a workshop, and as a young boy González learned from him the techniques of gold, silver, and iron metalwork. He is associated with the Spanish circle of artists of Montmartre, including Pablo Gargallo, Juan Gris and Max Jacob. In 1918, he developed an interest in the artistic possibilities of welding, after learning the technique while working in the Renault factory at Boulogne-Billancourt. This technique would subsequently become his principal contribution to sculpture, though during this period he also painted and – especially – created jewellery pieces. In 1920 he renewed his acquaintance with Pablo Picasso, for whom he later provided technical assistance in executing sculptures in iron, contributing to Picasso's research on analytic cubism. He also forged the infrastructures of Constantin Brâncuși's plasters. In the winter of 1927–28, he showed Picasso how to use oxy-fuel welding and cutting. When their friendship re-established itself, Picasso and González collaborated on a piece called Woman in the Garden between 1928 and 1930. From October 1928 until 1932, the two men worked together. Influenced by Picasso, the fifty-year-old González changed his style, exchanging bronze for iron and volumes for lines. González began to formalize a new visual language in sculpture that would change the course of his career.
During the mid-20th century, welded sculpture continued to evolve as artists gained access to materials, techniques, and technology that weren't available to the early founders. In the 1950s and 1960s, large-scale industrial materials such as steel beams and large plates were utilized to construct monumental sculptures much larger than those of the past. Artists such as David Smith, Anthony Caro, and Richard Serra were among the first to create these large-scale sculptures.
Today, welded sculptures are an established form of contemporary art, with artists continuously pushing the boundaries of what’s possible with modern materials and technology.
Welding was increasingly used in sculpture from the 1930s as new industrial processes such as arc welding were adapted to aesthetic purposes. Welding techniques, including digital cutting, can be used to cut and join metal. Welded sculptures are sometimes site-specific. Artist Richard Hunt said "The idea of exploiting welding methods and the tensile strength of metals opened up many possibilities to me. This idea was actually linked to the increasing recognition among artists that an art which was representative of our own time ought to use materials and techniques that were at hand, whether it was new experiments using plastics, new kinds of paints, new kinds of surfaces in painting, or using materials developed during the war effort."
Associated artists
Aleš Veselý
Alexander Calder
Andrew French
Anthony Caro
Antoine Pevsner
Beverly Pepper
Bruce Gray
Charles Ginnever
David Smith
James Rosati
John Raymond Henry
Julio González
Ken Macklin
Kevin Caron
Lyman Kipp
Nancy Graves
Paul Kuniholm
Pablo Gargallo
Pablo Picasso
Peter Hide
Peter Reginato
Revs
Richard Serra
Richard Hunt
Robert H. Hudson
Robert Willms
Royden Mills
Ryan McCourt
Tim Scott
TEJN
Todor Todorov
Vera Mukhina
External links
Richard Hunt: Freeing the Human Soul
Janet Goldner: Welded Steel Sculpture
Notes and references
Further reading
Creating Welded Sculpture, by Nathan Cabot Hale, Courier Dover Publications, 1994
Welded Sculpture of the Twentieth Century, Judy K.Van Wagner Collischan, Lund Humphries, 2000
Sculptures by medium
Welding | Welded sculpture | Engineering | 987 |
689,628 | https://en.wikipedia.org/wiki/Virilization | Virilization or masculinization is the biological development of adult male characteristics in young males or females. Most of the changes of virilization are produced by androgens.
Virilization is a medical term commonly used in three contexts in medicine and the biology of sex: prenatal biological sexual differentiation, the postnatal changes of typical chromosomal male (46, XY) puberty, and excessive androgen effects in typical chromosomal females (46, XX). It is also the intended result of androgen replacement therapy in males with delayed puberty and low testosterone.
Prenatal virilization
In the prenatal period, virilization refers to closure of the perineum, thinning and wrinkling (rugation) of the scrotum, growth of the penis, and closure of the urethral groove to the tip of the penis. In this context, masculinization is synonymous with virilization.
Prenatal virilization of XX fetuses and undervirilization of XY fetuses are common causes of ambiguous genitalia such as in conditions like Congenital adrenal hyperplasia and 5α-Reductase 2 deficiency.
For many years, it was widely believed that in mammals, the female is the "default" developmental pathway, and the SRY gene on the Y chromosome is responsible for suppressing the development of female characteristics and stimulating male characteristics. In this scenario, an embryo would passively develop female sexual characteristics without intervention by the SRY gene. However, in the early 2000s, other genes, such as WNT4 and RSPO1, were discovered that perform the opposite function – i.e., genes which suppress masculinization and stimulate feminization.
Two processes, defeminization and masculinization, are involved in producing male-typical morphology and behavior.
High
Prenatal virilization of a genetically female fetus can occur when an excessive amount of androgen is produced by the fetal adrenal glands or is present in maternal blood, resulting in virilization of the female genitalia such as an enlarged clitoris.
It can also be associated with progestin-induced virilisation.
Low
Undervirilization can occur if a genetic male cannot produce enough androgen or the body tissues cannot respond to it. Extreme undervirilization occurs when no significant androgen hormones can be produced or the body is completely insensitive to androgens, in which case a female phenotype will develop. Partial undervirilization produces ambiguous genitalia part-way between male and female. Examples of undervirilization in fetuses with a 46,XY karyotype are androgen insensitivity syndrome and 5 alpha reductase deficiency.
Normal virilization
In common as well as medical usage, virilization often refers to the process of normal male puberty. These effects include growth of the penis and the testes, accelerated growth, development of pubic hair, and other androgenic hair of face, torso, and limbs, deepening of the voice, increased musculature, thickening of the jaw, prominence of the neck cartilage, and broadening of the shoulders.
Abnormal childhood virilization
Virilization can occur in childhood in both males and females due to excessive amounts of androgens. Typical effects of virilization in children are pubic hair, accelerated growth and bone maturation, increased muscle strength, acne, and adult body odor. In males, virilization may signal precocious puberty, while congenital adrenal hyperplasia and androgen-producing tumors (usually of the gonads or adrenals) are occasional causes in both sexes.
In adolescent or adult females
Virilization in females can manifest as clitoral enlargement, increased muscle strength, acne, hirsutism, frontal hair thinning, deepening of the voice, menstrual disruption due to anovulation, and a strengthened libido. Some of the possible causes of virilization in females are:
Androgen-producing tumors of the
ovaries
adrenal glands (see adrenal tumor)
pituitary gland (see pituitary adenoma)
Hyperthecosis
Hypothyroidism
Anabolic steroid exposure
Congenital adrenal hyperplasia due to 21-hydroxylase deficiency (late-onset)
Conn's syndrome
Medically induced virilization in transgender people
Transgender people who were medically assigned female at birth sometimes elect to take hormone replacement therapy. This process causes virilization by inducing many of the effects of a typically male puberty. Many of these effects are permanent, but some effects can be reversed if the transgender individual stops or pauses their medical treatment.
Permanent virilization effects
Deepening of the voice
Growth of facial and body hair
Male-pattern baldness
Enlargement of the clitoris
Breast atrophy – possible shrinking and/or softening of breasts
Reversible virilization effects
Further muscle development (especially upper body)
Increased sweat and changes in body odor
Prominence of veins and coarser skin
Alterations in blood lipids (cholesterol and triglycerides)
Increased red blood cell count
Demasculinization
Demasculinization refers to the reversal of virilization. Some but not all aspects of virilization are reversible. Demasculinization occurs naturally with andropause, pathologically with hypogonadism, and artificially or medically with antiandrogens, estrogens, and orchiectomy. It is desired by many transgender women who have undergone the changes of pubertal masculinization. Some virilized traits remain though (such as body hair, a hard jawline and an enlarged larynx), due to the fashion in which virilization affects a body's physiology.
See also
Ambiguous genitalia
Androgen
Clitoromegaly
Defeminization
Feminization (biology)
Hirsutism
Secondary sex characteristics
Sexual differentiation
References
Further reading
Howell, W. M., Black, D. A., & Bortone, S. A. (1980). Abnormal expression of secondary sex characters in a population of mosquitofish, Gambusia affinis holbrooki: evidence for environmentally-induced masculinization. Copeia, 676–681.
External links
Sexual dimorphism
Metabolism
Physiology
Testosterone | Virilization | Physics,Chemistry,Biology | 1,336 |
1,448,579 | https://en.wikipedia.org/wiki/Black%20%26%20Lane%27s%20Ident%20Tones%20for%20Surround | Black & Lane's Ident Tones for Surround (BLITS) is a way of keeping track of channels in a mixed surround-sound, stereo, and mono world. It was developed by Martin Black and Keith Lane of Sky TV London in 2004. BLITS is used by Sky, the BBC and other European and US broadcasters to identify and lineup 5.1 broadcast circuits. It is also an EBU standard: EBU Tech 3304. It is designed to function as a 5.1 identification and phase-checking signal and to be meaningful in stereo when an automated downmix to stereo is employed.
BLITS is a set of tones designed for television 5.1 sound line-up.
It consists of three distinct sections.
The first section is made up of short tones at -18 dBFS to identify each channel individually:
L/R: Front LEFT and Front RIGHT – 880 Hz
C: CENTRE – 1320 Hz
LFE (Low Frequency Effects) – 82.5 Hz
Ls/Rs: Surround LEFT and Surround RIGHT – 660 Hz.
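As a worked illustration of these figures, the section-one ident tones can be synthesized directly from the listed frequencies and level. The sketch below is illustrative only — it is not Sky's or the EBU's reference implementation, and the 48 kHz sample rate and 0.5 s tone duration are assumptions, not part of the specification quoted here:

```python
import math

def dbfs_to_amplitude(dbfs):
    """Convert a peak level in dBFS to linear amplitude (full scale = 1.0)."""
    return 10 ** (dbfs / 20.0)

def ident_tone(freq_hz, level_dbfs=-18.0, duration_s=0.5, sample_rate=48000):
    """Generate one channel's ident tone as a list of float samples."""
    amp = dbfs_to_amplitude(level_dbfs)
    return [amp * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(int(duration_s * sample_rate))]

# Section-one ident frequencies in Hz, all at -18 dBFS
IDENT_FREQS = {
    "L": 880.0, "R": 880.0,    # front left / front right
    "C": 1320.0,               # centre
    "LFE": 82.5,               # low frequency effects
    "Ls": 660.0, "Rs": 660.0,  # surround left / surround right
}

channels = {name: ident_tone(freq) for name, freq in IDENT_FREQS.items()}
```

Note that -18 dBFS corresponds to a linear peak amplitude of about 0.126 of full scale.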
The second section identifies front left and right channels (L/R) only:
1 kHz tone at -18 dBFS is interrupted four times on the left channel and is continuous on the right. This pattern of interrupts has been chosen to prevent confusion with either the EBU stereo ident or BBC GLITS tone after stereo mix down.
The last section consists of 2 kHz tone at -24 dBFS on all six channels. This can be used to check phase between any of the 5.1 legs.
When the tone is summed to stereo using default down-mix values this section should produce tones of approximately -18 dBFS on each channel.
The BLITS sequence repeats approximately every 14 seconds.
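The first section's ident tones can be sketched in code. The frequencies and the -18 dBFS level come from the description above; the 48 kHz sample rate and the 0.5 s tone duration are assumptions for illustration, not values taken from EBU Tech 3304.

```python
import math

SAMPLE_RATE = 48000  # Hz; a typical broadcast rate (assumption)

def tone(freq_hz, level_dbfs, duration_s):
    """Sine tone with peak level given in dBFS (0 dBFS = full scale)."""
    amp = 10 ** (level_dbfs / 20)
    n = int(duration_s * SAMPLE_RATE)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# Section 1 ident frequencies at -18 dBFS, as listed above
IDENT_HZ = {"L": 880.0, "R": 880.0, "C": 1320.0,
            "LFE": 82.5, "Ls": 660.0, "Rs": 660.0}
idents = {ch: tone(f, -18.0, 0.5) for ch, f in IDENT_HZ.items()}
```

The -18 dBFS level corresponds to a peak amplitude of 10^(-18/20) ≈ 0.126 of full scale.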
See also
GLITS
References
EBU Tech.3304 – BLITS Ident
External links
A zipped .wav file (interleaved multichannel format) of the BLITS 5.1 ident sequence is available from Sky.
Broadcast engineering
Test items
Telecommunications-related introductions in 2004
2004 in British television
2004 establishments in the United Kingdom
British inventions | Black & Lane's Ident Tones for Surround | Engineering | 439 |
10,004,115 | https://en.wikipedia.org/wiki/Brushed%20DC%20electric%20motor | A brushed DC electric motor is an internally commutated electric motor designed to be run from a direct current power source and utilizing an electric brush for contact.
Brushed motors were the first commercially important application of electric power to driving mechanical energy, and DC distribution systems were used for more than 100 years to operate motors in commercial and industrial buildings. Brushed DC motors can be varied in speed by changing the operating voltage or the strength of the magnetic field. Depending on the connections of the field to the power supply, the speed and torque characteristics of a brushed motor can be altered to provide steady speed or speed inversely proportional to the mechanical load. Brushed motors continue to be used for electrical propulsion, cranes, paper machines and steel rolling mills. Since the brushes wear down and require replacement, brushless DC motors using power electronic devices have displaced brushed motors from many applications.
Simple two-pole DC motor
The following graphics illustrate a simple, two-pole, brushed, DC motor.
When a current passes through the coil wound around a soft iron core situated inside an external magnetic field, the side of the positive pole is acted upon by an upwards force, while the other side is acted upon by a downward force. According to Fleming's left hand rule, the forces cause a turning effect on the coil, making it rotate. To make the motor rotate in a constant direction, direct current commutators make the current reverse in direction every half a cycle (in a two-pole motor) thus causing the motor to continue to rotate in the same direction.
A problem with the motor shown above is that when the plane of the coil is parallel to the magnetic field—i.e. when the rotor poles are 90 degrees from the stator poles—the torque is zero. In the pictures above, this occurs when the core of the coil is horizontal—the position it is just about to reach in the next-to-last picture on the right. The motor would not be able to start in this position. However, once it was started, it would continue to rotate through this position by momentum.
There is a second problem with this simple pole design. At the zero-torque position, both commutator brushes are touching (bridging) both commutator plates, resulting in a short circuit. The power leads are shorted together through the commutator plates, and the coil is also short-circuited through both brushes (the coil is shorted twice, once through each brush independently). Note that this problem is independent of the non-starting problem above; even if there were a high current in the coil at this position, there would still be zero torque. The problem here is that this short uselessly consumes power without producing any motion (nor even any coil current.) In a low-current battery-powered demonstration this short-circuiting is generally not considered harmful. However, if a two-pole motor were designed to do actual work with several hundred watts of power output, this shorting could result in severe commutator overheating, brush damage, and potential welding of the brushes—if they were metallic—to the commutator. Carbon brushes, which are often used, would not weld. In any case, a short like this is very wasteful, drains batteries rapidly and, at a minimum, requires power supply components to be designed to much higher standards than would be needed just to run the motor without the shorting.
One simple solution is to put a gap between the commutator plates which is wider than the ends of the brushes. This increases the zero-torque range of angular positions but eliminates the shorting problem; if the motor is started spinning by an outside force it will continue spinning. With this modification, it can also be effectively turned off simply by stalling (stopping) it in a position in the zero-torque (i.e. commutator non-contacting) angle range. This design is sometimes seen in homebuilt hobby motors, e.g. for science fairs and such designs can be found in some published science project books. A clear downside of this simple solution is that the motor now coasts through a substantial arc of rotation twice per revolution and the torque is pulsed. This may work for electric fans or to keep a flywheel spinning but there are many applications, even where starting and stopping are not necessary, for which it is completely inadequate, such as driving the capstan of a tape transport, or any similar instance where to speed up and slow down often and quickly is a requirement. Another disadvantage is that, since the coils have a measure of self inductance, current flowing in them cannot suddenly stop. The current attempts to jump the opening gap between the commutator segment and the brush, causing arcing.
Even for fans and flywheels, the clear weaknesses remaining in this design—especially that it is not self-starting from all positions—make it impractical for working use, especially considering the better alternatives that exist. Unlike the demonstration motor above, DC motors are commonly designed with more than two poles, are able to start from any position, and do not have any position where current can flow without producing electromotive power by passing through some coil. Many common small brushed DC motors used in toys and small consumer appliances, the simplest mass-produced DC motors to be found, have three-pole armatures. The brushes can now bridge two adjacent commutator segments without causing a short circuit. These three-pole armatures also have the advantage that current from the brushes either flows through two coils in series or through just one coil. Starting with the current in an individual coil at half its nominal value (as a result of flowing through two coils in series), it rises to its nominal value and then falls to half this value. The sequence then continues with current in the reverse direction. This results in a closer step-wise approximation to the ideal sinusoidal coil current, producing a more even torque than the two-pole motor where the current in each coil is closer to a square wave. Since current changes are half those of a comparable two-pole motor, arcing at the brushes is consequently less.
If the shaft of a DC motor is turned by an external force, the motor will act like a generator and produce an electromotive force (EMF). During normal operation, the spinning of the motor produces a voltage, known as the counter-EMF (CEMF) or back EMF, because it opposes the applied voltage on the motor. The back EMF is the reason that the motor when free-running does not appear to have the same low electrical resistance as the wire contained in its winding. This is the same EMF that is produced when the motor is used as a generator (for example when an electrical load, such as a light bulb, is placed across the terminals of the motor and the motor shaft is driven with an external torque). Therefore, the total voltage drop across a motor consists of the CEMF voltage drop, and the parasitic voltage drop resulting from the internal resistance of the armature's windings. The current through a motor is given by the following equation:

I = (V_applied − V_cemf) / R_armature

The mechanical power produced by the motor is given by:

P = I · V_cemf
As an unloaded DC motor spins, it generates a backwards-flowing electromotive force that resists the current being applied to the motor. The current through the motor drops as the rotational speed increases, and a free-spinning motor has very little current. It is only when a load is applied to the motor that slows the rotor that the current draw through the motor increases.
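The behaviour described above follows from the current relation I = (V_applied − V_cemf) / R_armature. A minimal sketch, in which the 12 V supply, 1 Ω armature resistance and back-EMF figures are arbitrary assumed values:

```python
def armature_current(v_applied, v_cemf, r_armature):
    """Steady-state armature current: applied voltage minus the
    counter-EMF, divided by the armature resistance."""
    return (v_applied - v_cemf) / r_armature

# Rotor locked: no counter-EMF, so the current is at its maximum.
stalled = armature_current(12.0, 0.0, 1.0)
# Near no-load speed the counter-EMF approaches the supply voltage,
# leaving only a small current.
near_no_load = armature_current(12.0, 11.5, 1.0)
```

As the load slows the rotor, the counter-EMF falls and the current draw rises, matching the description above.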
The commutating plane
In a dynamo, a plane through the centers of the contact areas where a pair of brushes touch the commutator and parallel to the axis of rotation of the armature is referred to as the commutating plane. In this diagram the commutating plane is shown for just one of the brushes, assuming the other brush made contact on the other side of the commutator with radial symmetry, 180 degrees from the brush shown.
Compensation for stator field distortion
In a real dynamo, the field is never perfectly uniform. Instead, as the rotor spins it induces field effects which drag and distort the magnetic lines of the outer non-rotating stator.
The faster the rotor spins, the further the degree of field distortion. Because the dynamo operates most efficiently with the rotor field at right angles to the stator field, it is necessary to either retard or advance the brush position to put the rotor's field into the correct position to be at a right angle to the distorted field.
These field effects are reversed when the direction of spin is reversed. It is therefore difficult to build an efficient reversible commutated dynamo, since for highest field strength it is necessary to move the brushes to the opposite side of the normal neutral plane.
The effect can be considered to be somewhat similar to timing advance in an internal combustion engine. Generally a dynamo that has been designed to run at a certain fixed speed will have its brushes permanently fixed to align the field for highest efficiency at that speed.
DC machines with wound stators compensate the distortion with commutating field windings and compensation windings.
Motor design variations
DC motors
Brushed DC motors are constructed with wound rotors and either wound or permanent-magnet stators.
Wound stators
The field coils have conventionally existed in four basic formats: separately excited (sepex), series-wound, shunt-wound, and a combination of the latter two, compound-wound.
In a series-wound motor, the field coils are connected electrically in series with the armature coils (via the brushes). In a shunt-wound motor, the field coils are connected in parallel, or shunted to the armature coils. In a separately excited (sepex) motor, the field coils are supplied from an independent source, such as a motor–generator, and the field current is unaffected by changes in the armature current. The sepex system was sometimes used in DC traction motors to facilitate control of wheelslip.
Permanent-magnet motors
Permanent-magnet types have some performance advantages over direct-current-excited synchronous types, and have become predominant in fractional horsepower applications. They are smaller, lighter, more efficient and more reliable than other singly-fed electric machines.
Originally all large industrial DC motors used wound field or rotor magnets. Permanent magnets have conventionally only been useful in small motors because it was difficult to find a material capable of retaining a high-strength field. Only recently have advances in materials technology allowed the creation of high-intensity permanent magnets, such as neodymium magnets, allowing the development of compact, high-power motors without the extra volume of field coils and excitation means. But as these high-performance permanent magnets are applied more in electric motor and generator systems other problems are realized (see Permanent magnet synchronous generator).
Axial field motors
Traditionally, the field has been applied radially—in and away from the rotation axis of the motor. However some designs have the field flowing along the axis of the motor, with the rotor cutting the field lines as it rotates. This allows for much stronger magnetic fields, particularly if halbach arrays are employed. This, in turn, gives power to the motor at lower speeds. However, the focused flux density cannot rise above the limited residual flux density of the permanent magnet despite high coercivity and, like all electric machines, the flux density of magnetic core saturation is the design constraint.
Speed control
Generally, the rotational speed of a DC motor is proportional to the EMF in its coil (= the voltage applied to it minus voltage lost on its resistance), and the torque is proportional to the current. Speed control can be achieved by variable battery tappings, variable supply voltage, resistors or electronic controls. The direction of a wound field DC motor can be changed by reversing either the field or armature connections but not both. This is commonly done with a special set of contactors (direction contactors). The effective voltage can be varied by inserting a series resistor or by an electronically controlled switching device made of thyristors, transistors, or, formerly, mercury arc rectifiers.
Series–parallel
Series–parallel control was the standard method of controlling railway traction motors before the advent of power electronics. An electric locomotive or train would typically have four motors which could be grouped in three different ways:
All four in series (each motor receives one quarter of the line voltage), lowest speed
Two parallel groups of two in series (each motor receives half the line voltage)
All four in parallel (each motor receives the full line voltage), highest speed
This provided three running speeds with minimal resistance losses. For starting and acceleration, additional control was provided by resistances. This system has been superseded by electronic control systems.
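With identical motors, the voltage seen by each motor in the three groupings is simply the line voltage divided by the number of motors in series per group. A small sketch; the 600 V line voltage is an assumed, typical DC traction figure, not a value from the text:

```python
def per_motor_voltage(line_voltage, series_per_group):
    """Voltage across each motor when identical motors are arranged in
    parallel groups of `series_per_group` motors in series."""
    return line_voltage / series_per_group

LINE = 600.0  # V, assumed line voltage
v_all_series   = per_motor_voltage(LINE, 4)  # four in series: lowest speed
v_two_by_two   = per_motor_voltage(LINE, 2)  # two groups of two in series
v_all_parallel = per_motor_voltage(LINE, 1)  # four in parallel: highest speed
```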
Field weakening
The speed of a DC motor can be increased by field weakening. Reducing the field strength is done by inserting resistance in series with a shunt field, or inserting resistances around a series-connected field winding, to reduce current in the field winding. When the field is weakened, the back-emf reduces, so a larger current flows through the armature winding and this increases the speed. Field weakening is not used on its own but in combination with other methods, such as series–parallel control.
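The effect can be sketched with the standard speed relation n = (V − I·R) / (k·Φ): reducing the flux Φ raises the speed for the same terminal conditions. All numeric values below are assumptions for illustration only.

```python
def speed(v_m, i_a, r_m, k_b, flux):
    """Speed from n = (V_m - I_a*R_m) / (k_b * flux)."""
    return (v_m - i_a * r_m) / (k_b * flux)

# Assumed values: 100 V supply, 10 A armature current, 0.5 ohm winding.
full_field = speed(100.0, 10.0, 0.5, 0.01, 1.0)  # nominal flux
weak_field = speed(100.0, 10.0, 0.5, 0.01, 0.8)  # field weakened by 20%
```

Weakening the field by 20% raises the speed by a quarter in this idealised model, which is why field weakening is combined with other methods rather than used alone.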
Chopper
In a circuit known as a chopper, the average voltage applied to the motor is varied by switching the supply voltage very rapidly. As the on to off ratio is varied to alter the average applied voltage, the speed of the motor varies. The percentage on time multiplied by the supply voltage gives the average voltage applied to the motor. Therefore, with a 100 V supply and a 25% on time, the average voltage at the motor will be 25 V. During the off time, the armature's inductance causes the current to continue through a diode called a flyback diode, in parallel with the motor. At this point in the cycle, the supply current will be zero, and therefore the average motor current will always be higher than the supply current unless the percentage on time is 100%. At 100% on time, the supply and motor current are equal. The rapid switching wastes less energy than series resistors. This method is also called pulse-width modulation (PWM) and is often controlled by a microprocessor. An output filter is sometimes installed to smooth the average voltage applied to the motor and reduce motor noise.
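The averaging rule described above is straightforward to compute. This sketch reproduces the 100 V, 25% on-time example from the text:

```python
def chopper_average(supply_v, on_fraction):
    """Average output voltage of an ideal chopper: the percentage on
    time (duty cycle) multiplied by the supply voltage."""
    return supply_v * on_fraction

avg = chopper_average(100.0, 0.25)  # the 100 V supply, 25% on-time case
```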
Since the series-wound DC motor develops its highest torque at low speed, it is often used in traction applications such as electric locomotives, and trams. Another application is starter motors for petrol and small diesel engines. Series motors must never be used in applications where the drive can fail (such as belt drives). As the motor accelerates, the armature (and hence field) current reduces. The reduction in field causes the motor to speed up, and in extreme cases the motor can even destroy itself, although this is much less of a problem in fan-cooled motors (with self-driven fans). This can be a problem with railway motors in the event of a loss of adhesion since, unless quickly brought under control, the motors can reach speeds far higher than they would do under normal circumstances. This can not only cause problems for the motors themselves and the gears, but due to the differential speed between the rails and the wheels it can also cause serious damage to the rails and wheel treads as they heat and cool rapidly. Field weakening is used in some electronic controls to increase the top speed of an electric vehicle. The simplest form uses a contactor and field-weakening resistor; the electronic control monitors the motor current and switches the field weakening resistor into circuit when the motor current reduces below a preset value (this will be when the motor is at its full design speed). Once the resistor is in circuit, the motor will increase speed above its normal speed at its rated voltage. When motor current increases, the control will disconnect the resistor and low speed torque is made available.
Ward Leonard
A Ward Leonard control is usually used for controlling a shunt or compound wound DC motor, and developed as a method of providing a speed-controlled motor from an AC supply, though it is not without its advantages in DC schemes. The AC supply is used to drive an AC motor, usually an induction motor that drives a DC generator or dynamo. The DC output from the armature is directly connected to the armature of the DC motor (sometimes but not always of identical construction). The shunt field windings of both DC machines are independently excited through variable resistors. Extremely good speed control from standstill to full speed, and consistent torque, can be obtained by varying the generator and/or motor field current. This method of control was the de facto method from its development until it was superseded by solid state thyristor systems. It found service in almost any environment where good speed control was required, from passenger lifts through to large mine pit head winding gear and even industrial process machinery and electric cranes. Its principal disadvantage was that three machines were required to implement a scheme (five in very large installations, as the DC machines were often duplicated and controlled by a tandem variable resistor). In many applications, the motor-generator set was often left permanently running, to avoid the delays that would otherwise be caused by starting it up as required. Although electronic (thyristor) controllers have replaced most small to medium Ward-Leonard systems, some very large ones (thousands of horsepower) remain in service. The field currents are much lower than the armature currents, allowing a moderate sized thyristor unit to control a much larger motor than it could control directly. For example, in one installation, a 300 amp thyristor unit controls the field of the generator. 
The generator output current is in excess of 15,000 amperes, which would be prohibitively expensive (and inefficient) to control directly with thyristors.
Torque and speed of a DC motor
A DC motor's speed and torque characteristics vary according to three different magnetization sources, separately excited field, self-excited field or permanent-field, which are used selectively to control the motor over the mechanical load's range. Self-excited field motors can be series, shunt, or a compound wound connected to the armature.
Basic properties
Define
E_b, counter-electromotive force (V)
I_a, armature current (A)
k_b, counter EMF equation constant
k_n, speed equation constant
k_T, torque equation constant
n, armature frequency (rpm)
R_m, motor resistance (Ω)
T, motor torque (N·m)
V_m, motor input voltage (V)
Φ, machine's total flux (Wb)
Carter's coefficient (kC) is a parameter that is often used as a way to estimate the effective slot pitch in the armature of a motor with open (or semi-enclosed) slots.
Counter EMF equation
The DC motor's counter emf is proportional to the product of the machine's total flux strength and armature speed:

E_b = k_b Φ n
Voltage balance equation
The DC motor's input voltage must overcome the counter emf as well as the voltage drop created by the armature current across the motor resistance, that is, the combined resistance across the brushes, armature winding and series field winding, if any:

V_m = E_b + R_m I_a
Torque equation
The DC motor's torque is proportional to the product of the armature current and the machine's total flux strength:

T = k_T Φ I_a

where k_T is the torque equation constant.
Speed equation
Since

V_m = k_b Φ n + R_m I_a

we have

n = (V_m − R_m I_a) / (k_b Φ) = k_n (V_m − R_m I_a) / Φ

where k_n = 1/k_b.
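A short numeric check of these relations; the machine constants below are arbitrary assumed values, chosen only so the voltage balance can be verified at the computed operating point.

```python
def speed_rpm(v_m, i_a, r_m, k_b, flux):
    """Speed equation: n = (V_m - I_a*R_m) / (k_b * flux)."""
    return (v_m - i_a * r_m) / (k_b * flux)

def counter_emf(k_b, flux, n):
    """Counter EMF equation: E_b = k_b * flux * n."""
    return k_b * flux * n

# Assumed, illustrative machine constants and operating point
K_B, FLUX, R_M = 0.02, 1.0, 0.4   # EMF constant, flux (Wb), resistance (ohm)
V_M, I_A = 120.0, 15.0            # input voltage (V), armature current (A)

n = speed_rpm(V_M, I_A, R_M, K_B, FLUX)
e_b = counter_emf(K_B, FLUX, n)
# At this operating point the voltage balance V_m = E_b + R_m*I_a holds.
```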
Torque and speed characteristics
Shunt wound motor
With the shunt wound motor's high-resistance field winding connected in parallel with the armature, V_m, R_m and Φ are constant such that the no load to full load speed regulation is seldom more than 5%. Speed control is achieved in three ways:
Varying the field voltage
Field weakening
Variable resistance in the field circuit.
Series wound motor
The series motor responds to increased load by slowing down; the current increases and the torque rises in proportion to the square of the current since the same current flows in both the armature and the field windings. If the motor is stalled, the current is limited only by the total resistance of the windings and the torque can be very high, but there is a danger of the windings becoming overheated. Series wound motors were widely used as traction motors in rail transport of every kind, but are being phased out in favour of power inverter-fed AC induction motors. The counter EMF aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate, the counter EMF is zero and the only factor limiting the armature current is the armature resistance. As the prospective current through the armature is very large, the need arises for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter EMF. As the motor rotation builds up, the resistance is gradually cut out.
The series wound DC motor's most notable characteristic is that its speed is almost entirely dependent on the torque required to drive the load. This suits large inertial loads, as the motor accelerates from maximum torque, with torque reducing gradually as speed increases.
As the series motor's speed can be dangerously high, series motors are often geared or direct-connected to the load.
Permanent magnet motor
A permanent magnet DC motor is characterized by a linear relationship between stall torque (the maximum torque, produced with the shaft at standstill) and no-load speed (the maximum output speed, with no applied shaft torque). Power varies quadratically between these two points on the speed axis.
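The linear torque-speed line and its quadratic power curve can be sketched as follows; the stall torque and no-load speed values are assumptions for illustration. Peak power falls at half the no-load speed.

```python
def torque(n, stall_torque, no_load_speed):
    """Linear torque-speed line: torque falls from the stall torque at
    standstill to zero at the no-load speed."""
    return stall_torque * (1.0 - n / no_load_speed)

def power(n, stall_torque, no_load_speed):
    """Mechanical output power: torque times speed (quadratic in speed)."""
    return torque(n, stall_torque, no_load_speed) * n

T_STALL, N_MAX = 2.0, 400.0  # N*m and rad/s, assumed values
# Search the speed range for the power peak
peak_n = max(range(int(N_MAX) + 1), key=lambda n: power(n, T_STALL, N_MAX))
```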
Protection
To extend a DC motor's service life, protective devices and motor controllers are used to protect it from mechanical damage, excessive moisture, high dielectric stress and high temperature or thermal overloading. These protective devices sense motor fault conditions and either activate an alarm to notify the operator or automatically de-energize the motor when a faulty condition occurs. For overloaded conditions, motors are protected with thermal overload relays. Bi-metal thermal overload protectors are embedded in the motor's windings and made from two dissimilar metals. They are designed such that the bimetallic strips will bend in opposite directions when a temperature set point is reached to open the control circuit and de-energize the motor. Heaters are external thermal overload protectors connected in series with the motor's windings and mounted in the motor contactor. Solder pot heaters melt in an overload condition, which cause the motor control circuit to de-energize the motor. Bimetallic heaters function the same way as embedded bimetallic protectors. Fuses and circuit breakers are overcurrent or short circuit protectors. Ground fault relays also provide overcurrent protection. They monitor the electric current between the motor's windings and earth system ground. In motor-generators, reverse current relays prevent the battery from discharging and motorizing the generator. Since D.C. motor field loss can cause a hazardous runaway or overspeed condition, loss of field relays are connected in parallel with the motor's field to sense field current. When the field current decreases below a set point, the relay will deenergize the motor's armature. A locked rotor condition prevents a motor from accelerating after its starting sequence has been initiated. Distance relays protect motors from locked-rotor faults. Undervoltage motor protection is typically incorporated into motor controllers or starters. 
In addition, motors can be protected from overvoltages or surges with isolation transformers, power conditioning equipment, MOVs, arresters and harmonic filters. Environmental conditions, such as dust, explosive vapors, water, and high ambient temperatures, can adversely affect the operation of a DC motor. To protect a motor from these environmental conditions, the National Electrical Manufacturers Association (NEMA) and the International Electrotechnical Commission (IEC) have standardized motor enclosure designs based upon the environmental protection they provide from contaminants. Modern software can also be used in the design stage, such as Motor-CAD, to help increase the thermal efficiency of a motor.
DC motor starters
The counter-emf aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate. At that instant the counter-emf is zero and the only factor limiting the armature current is the armature resistance and inductance. Usually the armature resistance of a motor is less than 1 Ω; therefore the current through the armature would be very large when the power is applied. This current can make an excessive voltage drop affecting other equipment in the circuit and even trip overload protective devices.
Therefore, the need arises for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter-emf. As the motor rotation builds up, the resistance is gradually cut out.
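Sizing the starting resistance follows directly from Ohm's law at standstill, where the counter-EMF is zero; the supply voltage, armature resistance and current limit below are assumed figures, not values from the text.

```python
def starting_resistance(v_supply, r_armature, i_limit):
    """Extra series resistance needed so that the inrush current at
    standstill (zero counter-EMF) does not exceed i_limit."""
    return max(v_supply / i_limit - r_armature, 0.0)

# Assumed figures: 230 V supply, 0.5 ohm armature, 40 A starting limit
r_start = starting_resistance(230.0, 0.5, 40.0)
inrush_with_r = 230.0 / (r_start + 0.5)  # current held to the 40 A limit
```

As the motor spins up and the counter-EMF builds, this resistance is cut out in steps, as described above.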
Manual-starting rheostat
When electrical and DC motor technology was first developed, much of the equipment was constantly tended by an operator trained in the management of motor systems. The very first motor management systems were almost completely manual, with an attendant starting and stopping the motors, cleaning the equipment, repairing any mechanical failures, and so forth.
The first DC motor-starters were also completely manual, as shown in this image. Normally it took the operator about ten seconds to slowly advance the rheostat across the contacts to gradually increase input power up to operating speed. There were two different classes of these rheostats, one used for starting only, and one for starting and speed regulation. The starting rheostat was less expensive, but had smaller resistance elements that would burn out if required to run a motor at a constant reduced speed.
This starter includes a no-voltage magnetic holding feature, which causes the rheostat to spring to the off position if power is lost, so that the motor does not later attempt to restart in the full-voltage position. It also has overcurrent protection that trips the lever to the off position if excessive current over a set amount is detected.
Three-point starter
The incoming power wires are called L1 and L2. As the name implies there are only three connections to the starter, one to incoming power, one to the armature, and one to the field. The connections to the armature are called A1 and A2. The ends of the field (excitement) coil are called F1 and F2. In order to control the speed, a field rheostat is connected in series with the shunt field. One side of the line is connected to the arm of the starter. The arm is spring-loaded so it will return to the "Off" position when not held at any other position.
On the first step of the arm, full line voltage is applied across the shunt field. Since the field rheostat is normally set to minimum resistance, the speed of the motor will not be excessive; additionally, the motor will develop a large starting torque.
The starter also connects an electromagnet in series with the shunt field. It will hold the arm in position when the arm makes contact with the magnet.
Meanwhile, that voltage is applied to the shunt field, and the starting resistance limits the current to the armature.
As the motor picks up speed, counter-emf is built up; the arm is moved slowly across the contacts, progressively shorting out the starting resistance.
Four-point starter
The four-point starter eliminates the drawback of the three-point starter. In addition to the same three points that were in use with the three-point starter, the other side of the line, L1, is the fourth point brought to the starter when the arm is moved from the "Off" position. The coil of the holding magnet is connected across the line. The holding magnet and starting resistors function identically to those in the three-point starter.
The possibility of accidentally opening the field circuit is quite remote. The four-point starter provides the no-voltage protection to the motor. If the power fails, the motor is disconnected from the line.
Parameters and stats estimation
Several studies propose either non-intelligent estimators which depend on the model, such as the extended Kalman filter (EKF) and Luenberger's observer, or intelligent estimators such as the cascade-forward neural network (CFNN) and quasi-Newton BFGS backpropagation.
See also
Alternating current
Brushless DC electric motor
References
Bibliography
External links
How Electric Motors Work (retrieved from Web Archive on 2014/31/01)
DC motors
Electric motors | Brushed DC electric motor | Technology,Engineering | 5,940 |
39,675,386 | https://en.wikipedia.org/wiki/Stressed%20member%20engine | A stressed member engine is a vehicle engine used as an active structural element of the chassis to transmit forces and torques, rather than being passively contained by the chassis with anti-vibration mounts. Automotive engineers use the method for weight reduction and mass centralization in vehicles. Applications are found in several vehicles where mass reduction is critical for performance reasons, usually after several iterations of conventional frame/chassis designs have been employed.
Applications
Motorcycles
The stressed member engine was patented in 1900 by Joah ("John") Carver Phelon and his nephew Harry Rayner, and the technique was pioneered at least as early as the 1916 Harley-Davidson 8-valve racer and incorporated in the production Harley-Davidson Model W by 1919. The technique was developed in the 20th century by Vincent and others, and by the end of the century was a common feature of chassis built by Ducati, BMW and others. The 2019 KTM 790 Duke also uses its engine as a stressed member.
Automobiles
Many mid-engine sport cars have used stressed engine design.
Race cars
The 1967 Lotus 49 is credited with establishing a solution copied by "everyone" in Formula One. This requirement is cited as a reason the rules committee changed from an inline-four to a V-6 configuration for the 2014 Formula One season.
Production automobiles
The limited-production De Tomaso Vallelunga mid-engine car prototyped in 1963 used the engine as a stressed member.
In GM's Chevrolet Bolt and Tesla Motors Model S and Roadster electric cars, the battery pack is a stressed member to increase rigidity.
Tractors
The Fordson tractor Model F, designed during World War I, eliminated the frame to reduce cost of materials and assembly, and was probably influenced by the similar design of the 1913 Wallis Cub.
References
Automotive chassis types
Structural system
Motorcycle frames | Stressed member engine | Technology,Engineering | 364 |
63,770,785 | https://en.wikipedia.org/wiki/Construction%20robots | Construction robots are a subset of industrial robots used for building and infrastructure construction at site. Despite being traditionally slow to adopt new technologies, 55% of construction companies in the United States, Europe, and China now say they use robots on job sites. Most of the robots working on jobsites today are designed to remove strains on humans, e.g., excavating and lifting heavy objects. Robots that survey and layout markers, tie rebar, and install drywall are also now on the market.
Other robots are being developed to perform tasks such as finishing exteriors, placing steel, constructing masonry walls, reinforcing concrete, etc. The main challenge to using robots on site is the limited workspace.
Features
General features include:
It must be able to move.
It must be able to handle components of variable size and weight.
It must be able to adjust with changing environment.
It must be able to interact with its surroundings.
It must be able to perform multiple tasks.
Capabilities
Construction robots have been tested to carry out the following:
Building walls
Monitoring construction progress
Inspecting infrastructure, mainly at dangerous locations
Notable construction by robots
The 30-story Rail City Building in Yokohama, Japan, was constructed by an automated system.
A concrete floor-finishing robot was used by the Kajima and Tokimec companies in Japan.
Obayashi Corporation in Japan has developed and used a system to lay concrete layers in dam construction.
Social impact
Use of construction robots in the USA is rare, mainly due to opposition from labour unions. In Japan, however, these robots are viewed positively.
See also
Industrial robots
References
Robotics | Construction robots | Engineering | 330 |
54,078,197 | https://en.wikipedia.org/wiki/Fusarin | Fusarins are a class of mycotoxins produced mainly by fungi of the genus Fusarium, which can infect agriculturally important crops such as wheat, barley, oats, rye, and corn. Chemically, they are polyketides that are also derived from amino acids.
Some members of the class, particularly fusarin C, are mutagenic.
Examples:
References
External links
Mycotoxins
Mutagens
Polyketides
Polyenes | Fusarin | Chemistry | 98 |
10,443,191 | https://en.wikipedia.org/wiki/Pink%20Visual | Pink Visual is an independent reality and gonzo pornography film production company, based in Van Nuys, California, United States. It began as an Internet pornography provider before eventually moving into DVD production. Pink Visual also licenses adult content to cable, satellite, pay-per-view, hotel chain channels, and other Internet content licensees. Currently marketing their content with the tagline of "Raw. Raunchy. Real.", Pink Visual content is largely reality-based, taking inspiration from reality television. Pink Visual's porn productions typically utilize amateur performers and are shot in a 'Pro-am' style, utilizing digital video, including the high definition format.
History
Founded in June 2004, Pink Visual evolved out of the formerly established TopBucks webmaster affiliate program, which gave Pink Visual the content and marketing resources to launch into the DVD market.
On February 4, 2009, Pink Visual offered a $10 discount on selected sites in "compensation for (the) Super Bowl porn mishap" in which Tucson, AZ area Comcast customers had their service interrupted for 30 seconds by an uncensored Pink Visual video.
The company apparently closed its online site on August 15, 2023.
iPinkVisual
In 2008, Pink Visual launched iPinkVisual.com and iPinkVisualPass.com, the first major U.S.-based mobile porn websites designed especially for iPhones.
In June 2009, Pink Visual tweaked the mobile compatibility of their sites to include functionality with other WebKit based browsers, including the Palm Pre and mobile devices running on Google Android. Pink Visual mobile porn sports a limited functionality with certain BlackBerry devices. Pink Visual has released an app on the MiKandi app store for Android.
PinkVisualPad
In April 2010, Pink Visual launched PinkVisualPad.com, the first major porn website designed especially for the newly released iPad. This release was followed soon after by the release of MaleSpectrumPad.com, the first gay website designed for iPad compatibility.
Male Spectrum
In December 2008, Pink Visual premiered Male Spectrum, a new line of gay pornography home video titles focusing on premium, high-quality gay reality porn content. In addition to the DVD line, Male Spectrum has also launched two gay mobile sites compatible with the iPhone and other multimedia capable mobile devices, iMaleSpectrum.com and iMaleSpectrumPass.com. Recently, Male Spectrum made an initial donation of $2500 to the Human Rights Campaign to assist in the HRC's fight against discrimination.
PVLocker
Pink Visual launched PVLocker.com in March 2011 as a way to fulfill the evolving consumer demands for adult content that was affordable and accessible from multiple devices from mobile phones, to tablets, to PCs. PVLocker allows customers to purchase just the scenes that they want and access them forever from within their locker. Additionally, PVLocker has an upload feature where customers can store already purchased adult content from other sources and access the content from multiple devices. PVLocker.com allows consumers to hide or store their porn off their local computers and in the cloud.
PVLocker.com also aggregates adult content from various XXX studios including: Private Media, Holly Randall, Acid Rain, Grind House, Wasteland, Juicy Pink Box, and Sssh.
Pink Visual Apocalypse Bunker
In September 2011, Pink Visual announced that, in preparation for the 2012 apocalypse predicted by the Mayan calendar, it was building a massive underground bunker containing all of the obvious emergency supplies and facilities as well as a few amenities: multiple fully stocked bars, an enormous performing stage with a rotating hydraulic platform, and a sophisticated content production studio. The bunker was scheduled to be ready by September 2012, and preliminary blueprints were released.
Green initiative
In March 2009, Pink Visual and Male Spectrum made news by donating a portion of their proceeds to Trees for the Future and by releasing an environmentally friendly DVD line, Plant Your Wood. The company also worked to make its web sites carbon neutral.
Conan the Boobarian
On January 18, 2010, Conan O'Brien revealed that he had been offered a starring role in a Pink Visual porno entitled "Conan the Boobarian", among other job offers following his high-profile exit from The Tonight Show.
Anti-Piracy Efforts
Pink Visual’s Anti-Piracy strategy is directed by its General Counsel, Jessica Pena. Pena joined the company in 2008 and immediately recognized the widespread effect of online piracy both for Pink Visual and the adult entertainment industry as a whole. Pena began using litigation strategies to combat online copyright infringement focusing not only on the recovery of damages, but the use of technology to prevent future infringement. Pena’s approach has evolved to incorporate site operator litigation, legal pressure, end-user education, content removal services and the development of reasonable alternatives to piracy. “In some ways, the mainstream entertainment space is way ahead of the adult sector in terms of how it fights piracy.” Pena states, “Having said that, there are some interesting anti-piracy approaches that adult rights-holders are taking, so my goal is to encourage the minds from both sectors to come together and share ideas that will create even more effective strategies.”
In February 2010, Pink Visual’s holding company Ventura Content, Ltd. filed suit in the U.S. District Court for New York against Mansef, Inc. the owners of Brazzers, alleging that four company-owned tube sites infringed on 45 copyrighted movies. The suit was settled in October 2010, with terms that remain confidential, other than an agreement between the parties that the site operators would implement digital fingerprint filtering on their sites.
In December 2010, Pink Visual filed suit against the operators of SlutLoad.com, alleging infringement on 53 Pink Visual works. The suit was settled in March, 2011, and once again included an agreement that the defendant would implement digital fingerprint filtering.
In July 2011, Pink Visual filed suit against Motherless.com, alleging copyright infringement in connection with 19 Pink Visual works, as well as unfair competition for its failure to abide by the adult industries’ age verification and record keeping requirements. Motherless won the case, with the judge ruling they were entitled to the DMCA's safe-harbor provisions. Pink Visual's appeal was thrown out in 2018.
In September 2011, Pink Visual filed suit against Two Point Oh Ltd., which operates multiple popular adult sites, alleging infringement on 92 Pink Visual works. The suit was settled in December, 2011 under confidential terms. However, the parties entered into a consent judgment whereby Two Point Oh recognizes that digital fingerprint filtering is a reasonable technical measure to prevent online copyright infringement in the adult arena.
In addition to the litigation the company has undertaken to combat copyright infringement, Pink Visual has also organized and hosted two Content Protection Retreats, (CPR) in order to provide information to other adult studios on copyright law and engage in discussions regarding industry strategies, as a whole, to combat piracy. The first CPR took place in October 2010, in Tucson, Arizona. A second CPR was held in February, 2011 in Hollywood California. Dozens of adult entertainment studios participated in the events, hearing presentations from intellectual property attorneys, companies that provide content take-down services and other experts in copyright enforcement and anti-piracy strategy.
Pink Visual cites a clear anti-piracy policy to their consumers and looks to educate end-users about the dangers and risks of piracy. Pink Visual content is prohibited from being distributed on torrents and Cyberlocker sites. Pink Visual recommends that consumers purchase legally and provides numerous methods for consumers to access their content legally. Illegal downloads of PinkVisual content are prohibited and considered copyright infringement.
In 2012, Pink Visual established an anti-piracy service of its own that performs online copyright infringement location, trademark monitoring, copyright registration and DMCA takedown notice services for rights-holders.
Awards
2006: 7 AVN Award nominations
2006: Won AVN Award in 'Best Specialty Release – MILF' category for Milf Seeker
2007: 17 AVN Award nominations including nominations for Best Marketing Campaign – Overall, and Best Marketing Campaign – Online
2007: Won AVN Award in 'Best Specialty Series – MILF' category for Milf Seeker
2008: 15 AVN Award nominations
2008: Won AVN Award in 'Best Solo Release' category for Extreme Holly Goes Solo
Won AVN Awards for two consecutive years in Best Specialty Series – MILF category
2009: 20 AVN Award nominations
2010: 16 AVN Award nominations
2011: 18 AVN Award nominations including nominations for Best Membership Site - PinkVisualPass.com and Best Membership Site Network - PinkVisual.com
2011: Won Future Mobile Award for Mobile Adult from Juniper Research
2012: 7 AVN Award nominations including nominations for Best Affiliate program: TopBucks Mobile and Best Studio website: pinkvisual.com
2013: XBIZ Award Nominations - 'All-Sex Release of the Year' for It's Her Fantasy, 'Vignette Series of the Year' for Wife Switch, 'All-Girl Series of the Year' for Her First Lesbian Sex
References
External links
American pornographic film studios
Mass media companies established in 2004
Film production companies of the United States
Gonzo pornography
Mobile content
Pornography in Los Angeles | Pink Visual | Technology | 1,882 |
14,556,606 | https://en.wikipedia.org/wiki/Dehydrocholic%20acid | Dehydrocholic acid is a synthetic bile acid, manufactured by the oxidation of cholic acid. It acts as a hydrocholeretic, increasing bile output to clear increased bile acid load.
References
Bile acids
Cholanes
Ketones | Dehydrocholic acid | Chemistry | 50 |
1,553,317 | https://en.wikipedia.org/wiki/Optical%20medium | In optics, an optical medium is material through which light and other electromagnetic waves propagate. It is a form of transmission medium. The permittivity and permeability of the medium define how electromagnetic waves propagate in it.
Properties
The optical medium has an intrinsic impedance, given by

η = E / H

where E and H are the electric field and magnetic field, respectively.

In a region with no electrical conductivity, the expression simplifies to:

η = √(μ/ε)

For example, in free space the intrinsic impedance is called the characteristic impedance of vacuum, denoted Z0, and

Z0 = √(μ0/ε0) ≈ 376.73 Ω

Waves propagate through a medium with velocity v = fλ, where f is the frequency and λ is the wavelength of the electromagnetic waves. This equation also may be put in the form

v = ω/k

where ω is the angular frequency of the wave and k is the wavenumber of the wave. In electrical engineering, the symbol β, called the phase constant, is often used instead of k.

The propagation velocity of electromagnetic waves in free space, an idealized standard reference state (like absolute zero for temperature), is conventionally denoted by c0:

c0 = 1/√(ε0μ0) ≈ 2.998 × 10⁸ m/s

where ε0 is the electric constant and μ0 is the magnetic constant.
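The free-space relations c0 = 1/√(ε0μ0) and Z0 = √(μ0/ε0) can be checked numerically; the constants below are CODATA 2018 values hard-coded for the example.

```python
# Numerical check of the free-space relations between the electric and
# magnetic constants, the propagation velocity, and the characteristic
# impedance of vacuum.
import math

eps0 = 8.8541878128e-12    # electric constant, F/m (CODATA 2018)
mu0 = 1.25663706212e-6     # magnetic constant, H/m (CODATA 2018)

c0 = 1.0 / math.sqrt(eps0 * mu0)   # propagation velocity in free space
Z0 = math.sqrt(mu0 / eps0)         # characteristic impedance of vacuum

print(f"c0 = {c0:.0f} m/s,  Z0 = {Z0:.2f} ohm")
```

The printed values recover the familiar figures of about 2.998 × 10⁸ m/s and about 376.73 Ω.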
For a general introduction, see Serway. For a discussion of synthetic media, see Joannopoulos.
Types
Homogeneous medium vs. heterogeneous medium
Transparent medium vs. opaque body
Translucent medium
See also
Čerenkov radiation
Electromagnetic spectrum
Electromagnetic radiation
Optics
SI units
Free space
Metamaterial
Photonic crystal
Photonic crystal fiber
Notes and references
Optics
Electric and magnetic fields in matter | Optical medium | Physics,Chemistry,Materials_science,Engineering | 299 |
28,301,803 | https://en.wikipedia.org/wiki/Crypto%2B%2B | Crypto++ (also known as CryptoPP, libcrypto++, and libcryptopp) is a free and open-source C++ class library of cryptographic algorithms and schemes written by Wei Dai. Crypto++ has been widely used in academia, student projects, open-source, and non-commercial projects, as well as businesses. Released in 1995, the library fully supports 32-bit and 64-bit architectures for many major operating systems and platforms, including Android (using STLport), Apple (macOS and iOS), BSD, Cygwin, IBM AIX, Linux, MinGW, Solaris, Windows, Windows Phone and Windows RT. The project also supports compilation using C++03, C++11, C++14, and C++17 runtime libraries; and a variety of compilers and IDEs, including Borland Turbo C++, Borland C++ Builder, Clang, CodeWarrior Pro, GCC (including Apple's GCC), Intel C++ Compiler (ICC), Microsoft Visual C/C++, and Sun Studio.
Crypto++ 1.0 was released in June 1995, but the download is no longer available. The Crypto++ 1.0 release was withdrawn due to RSA Data Security, Inc asserting its patent over the RSA algorithm. All other versions of the library are available for download.
Algorithms
Crypto++ ordinarily provides complete cryptographic implementations and often includes less popular, less frequently-used schemes. For example, Camellia is an ISO/NESSIE/IETF-approved block cipher roughly equivalent to AES, and Whirlpool is an ISO/NESSIE/IETF-approved hash function roughly equivalent to SHA; both are included in the library.
Additionally, the Crypto++ library sometimes makes proposed and bleeding-edge algorithms and implementations available for study by the cryptographic community. For example, VMAC, a universal hash-based message authentication code, was added to the library during its submission to the Internet Engineering Task Force (CFRG Working Group); and Brainpool curves, proposed in March 2009 as an Internet Draft in RFC 5639, were added to Crypto++ 5.6.0 in the same month.
The library also makes available primitives for number-theoretic operations such as fast multi-precision integers; prime number generation and verification; finite field arithmetic, including GF(p) and GF(2n); elliptical curves; and polynomial operations.
Furthermore, the library retains a collection of insecure or obsolescent algorithms for backward compatibility and historical value: MD2, MD4, MD5, Panama Hash, DES, ARC4, SEAL 3.0, WAKE, WAKE-OFB, DESX (DES-XEX3), RC2, SAFER, 3-WAY, GOST, SHARK, CAST-128, and Square.
Performance
In a 2007 ECRYPT workshop paper focusing on public key implementations of eight libraries, Ashraf Abusharekh and Kris Gaj found that "Crypto++ 5.1 [sic] leads in terms of support for cryptographic primitives and schemes, but is the slowest of all investigated libraries."
In 2008 speed tests carried out by Timo Bingmann using seven open-source security libraries and 15 block ciphers, Crypto++ 5.5.2 was the top-performing library for two block ciphers and did not rank below the average library performance for the remaining block ciphers.
Crypto++ also includes an auto-benchmarking feature, available from the command line (cryptest.exe b), the results of which are available at Crypto++ 5.6.0 Benchmarks.
As with many other cryptographic libraries available for 32-bit and 64-bit x86 architectures, Crypto++ includes assembly routines for AES using AES-NI. With AES-NI, AES performance improves dramatically: 128-bit AES-GCM throughput increases from approximately 28.0 cycles per byte to 3.5 cycles per byte.
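For a rough sense of scale, cycles-per-byte figures convert to throughput as bytes per second = clock rate ÷ cycles per byte. The 3 GHz clock below is an assumed figure for illustration only, not a benchmark condition stated by the project:

```python
# Convert the quoted AES-GCM cycles-per-byte figures into throughput at
# an assumed 3 GHz clock (hypothetical clock speed, for illustration).
clock_hz = 3.0e9

for label, cpb in [("AES-GCM without AES-NI", 28.0),
                   ("AES-GCM with AES-NI", 3.5)]:
    mb_per_s = clock_hz / cpb / 1e6   # bytes/s -> MB/s
    print(f"{label}: {mb_per_s:.0f} MB/s")

speedup = 28.0 / 3.5
print(f"speedup: {speedup:.0f}x")
```

At that clock, 28.0 cycles per byte corresponds to roughly 107 MB/s and 3.5 cycles per byte to roughly 857 MB/s, an eight-fold improvement regardless of the clock chosen.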
Version releases
Crypto++ 1.0 was released in June 1995. Since its initial release, the library has seen nearly two dozen revisions, including an architectural change in version 5.0. There have been ten releases using the version 5.0 architecture since March 2009.
Lawrence Teo's compilation of previous Crypto++ releases dating back to 1995 can be found in the users group archives.
FIPS validations
Crypto++ has received three Federal Information Processing Standard (FIPS) 140-2 Level 1 module validations with no post-validation issues.
Crypto++ was moved to the CMVP's Historical Validation List in 2016. The move effectively means the library is no longer validated.
Licensing
As of version 5.6.1, Crypto++ consists of only public domain files, with a compilation copyright and a single open source license for the compilation copyright:
See also
Computer science
Symmetric cipher
Comparison of cryptography libraries
References
External links
Crypto++ GitHub project
List of projects that use Crypto++ (Includes nonprofit and for profit projects)
Crypto++ users group
Cryptographic software
C++ libraries
Cryptographic algorithms
Free computer libraries
Public-domain software with source code
1995 software | Crypto++ | Mathematics | 1,117 |
54,456,719 | https://en.wikipedia.org/wiki/IRAS%2008544%E2%88%924431 | IRAS 08544−4431 is a binary system surrounded by a dusty ring in the constellation of Vela. The system contains an RV Tauri variable star and a more massive but much less luminous companion.
Binary
In 2003, IRAS 08544−4431 was being studied as a likely RV Tauri variable and was identified as a binary star from periodic variations in its observed radial velocity. The primary is a luminous F3 star surrounded by a dusty disc, and the invisible secondary is a less massive star.
The two components of IRAS 08544−4431 orbit in 499 days in a mildly eccentric orbit. The projected semi-major axis is 0.32 AU, but the inclination of the orbit is not known, so the actual separation may be considerably larger; the inclination is thought to be fairly low because the type of brightness variation implies a face-on system.
Variability
IRAS 08544-4431 is classified as an RV Tauri star, a type of pulsating variable star which shows cycles with alternating shallow and deep minima. In addition, IRAS 08544-4431 shows slow variations in amplitude from cycle to cycle over approximately 1,600 days, a defining characteristic of a type b RV Tauri variable. The maximum amplitude is only 0.18 magnitudes. It was given the variable star designation of V390 Velorum in 2006.
The period, defined for an RV Tau star as the time between two deep minima, is 72 days. The slow variations in amplitude have been measured, represented by a period of 69 days producing beats. None of these variations correspond to the orbital motion.
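The roughly 1,600-day modulation quoted above is consistent with the beat between the 72-day and 69-day periods, which can be checked directly:

```python
# Beat period between the 72-day pulsation period and the 69-day
# secondary period: 1/P_beat = 1/P2 - 1/P1.
P1, P2 = 72.0, 69.0   # days
P_beat = 1.0 / (1.0 / P2 - 1.0 / P1)
print(f"beat period = {P_beat:.0f} days")
```

This gives 69 × 72 / (72 − 69) = 1,656 days, close to the approximately 1,600-day amplitude modulation cycle.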
Post-AGB
The primary star is thought to be a post-AGB star, a highly evolved star that has ceased fusion and is ejecting its outer layers on its way to becoming a white dwarf. Although many post-AGB stars become planetary nebulae once they become hot enough to ionise their ejected outer layers, it is thought that IRAS 08544−4431 is not massive enough to do this.
Dusty disc
The warm material surrounding IRAS 08544−4431 has been resolved using interferometry with the AMBER and MIDI instruments at the Very Large Telescope. It is a circumbinary disc surrounding both stars, is heated mainly by the primary post-AGB star, and has a total mass of . The disc starts 9 AU from the stars and is approximately 4 AU thick at its inner edge. The thick disc protects much of the dust from direct heating out to 70 AU from the stars. Beyond 70 AU, the disc is thick enough to receive direct radiation from the stars.
The disc is at a temperature of 1,150 K. Although the companion is far less luminous than the primary, it is brighter than expected, especially at infrared wavelengths. It is suspected to be a main sequence star with its own compact accretion disc. The best images of the disc and stars, taken using the PIONIER interferometer, show the primary star to be 0.5 mas across, the secondary to be an unresolved point source 0.91 mas away, and the circumbinary disc to be 14.15 mas in diameter. The disc is oriented at 19° to the plane of the sky aligned at an angle of about 6° away from N-S.
References
F-type stars
Velorum, V390
IRAS catalogue objects
J08561419−4443107
Vela (constellation)
RV Tauri variables
Binary stars
Post-asymptotic-giant-branch stars
Emission-line stars
Durchmusterung objects
TIC objects | IRAS 08544−4431 | Astronomy | 749 |
13,460,657 | https://en.wikipedia.org/wiki/New%20York%20v.%20United%20States | New York v. United States, 505 U.S. 144 (1992), was a decision of the United States Supreme Court. Justice Sandra Day O'Connor, writing for the majority, found that the federal government may not require states to “take title” to radioactive waste through the "Take Title" provision of the Low-Level Radioactive Waste Policy Amendments Act, which the Court found to exceed Congress's power under the Commerce Clause. The Court permitted the federal government to induce shifts in state waste policy through other means.
Background
The Low-Level Radioactive Waste Policy Amendments Act was an attempt to imbue a negotiated agreement of states with federal incentives for compliance. The problem of what to do with radioactive waste was a national issue complicated by the political reluctance of the states to deal with the problem individually. New York was a willing participant in the compromise. After the Act was passed, it announced locations in the counties of Allegany and Cortland, as potential places for waste storage. Public opposition in both counties was immediate and very determined and eventually helped motivate New York to challenge the law.
Decision
The Act provided three "incentives" for states to comply with the agreement.
The first two incentives were held constitutional. The first incentive allowed states to collect gradually increasing surcharges for waste that was received from other states. The Secretary of Energy would then collect a portion of the income and redistribute it to reward states that achieved a series of milestones in waste disposal. That was held within Congress's power under the Taxing and Spending Clause, an "unexceptionable" exercise of that power.
The second incentive, the "access" incentive, allowed states to reprimand other states that missed certain deadlines by raising surcharges or eventually denying access to disposal at their facilities completely. That was held to be a permitted exercise of Congress's power, under the Commerce Clause.
The third incentive, requiring states to "take title" and assume liability for waste generated within their borders if they failed to comply, was held to be impermissibly coercive and a threat to state sovereignty, thereby violating the Tenth Amendment.
After noting the constitutionality of the first two incentives, Justice O'Connor characterized the "take title" incentive as an attempt to "commandeer" the state governments by directly compelling them to participate in the federal regulatory program. The federal government "crossed the line distinguishing encouragement from coercion." The distinction was that with respect to the "take title" provision, states had to choose between conforming to federal regulations or taking title to the waste. Since Congress cannot directly force states to legislate according to their scheme, and since Congress likewise cannot force them to take title to radioactive waste, O'Connor reasoned that Congress cannot force States to choose between the two. Such coercion would be counter to the federalist structure of government in which a "core of state sovereignty" is enshrined in the Tenth Amendment.
The Court found the "take title" provision to be severable and, noting the seriousness of the "pressing national problem" being addressed, allowed the remainder of the Act to survive.
Dissenting opinion
Justice White wrote a dissenting opinion that was joined by Justices Blackmun and Stevens. White stressed that the Act was a product of "cooperative federalism," as the states "bargained among themselves to achieve compromises for Congress to sanction." Noting that Congress can directly regulate radioactive waste, as opposed to "compelling state legislatures" to regulate according to their scheme, he said that the "ultimate irony of the decision today is that in its formalistically rigid obeisance to 'federalism,' the Court gives Congress fewer incentives to defer to the wishes of state officials in achieving local solutions to local problems."
See also
List of United States Supreme Court cases, volume 505
List of United States Supreme Court cases
Lists of United States Supreme Court cases by volume
List of United States Supreme Court cases by the Rehnquist Court
References
External links
United States Constitution Article One case law
United States Tenth Amendment case law
United States Supreme Court cases
United States Supreme Court cases of the Rehnquist Court
United States Commerce Clause case law
1992 in the environment
1992 in United States case law
Radioactive waste
Energy in New York (state) | New York v. United States | Chemistry,Technology | 875 |
59,340,033 | https://en.wikipedia.org/wiki/Robert%20T.%20Clubb | Robert Thompson Clubb is an American scientist. He is a professor of chemistry, biochemistry, and molecular biology at the University of California, Los Angeles.
Early life and education
Robert Thompson Clubb was born to surgical nurse Vera Alice Thompson of Yakima, Washington, and Jerome M. Clubb, a professor of history. Clubb has a sister. He earned a bachelor of science at the University of Wisconsin and completed a doctor of philosophy in biological chemistry at the University of Michigan. His 1993 dissertation was titled Application and development of multi-dimensional NMR spectroscopic techniques to study protein structure in solution. Clubb's advisors and co-chairs of his thesis committee were Gerhard Wagner and Martha L. Ludwig. He received training in practical nuclear magnetic resonance spectroscopy from Venkataraman Thanabal. From 1993 to 1996, Clubb was a post-doctoral research fellow at the National Institutes of Health. His advisors were G. Marius Clore and Angela Gronenborn.
Career
Clubb is a professor of chemistry, biochemistry, and molecular biology at the University of California, Los Angeles. He is the director of the Clubb Lab and co-director and a staff researcher at the Nuclear Magnetic Resonance (NMR) Core Technology Center (DOE).
Personal life
Clubb is married to Joanna Hoffman Clubb. They reside in Culver City, California.
References
Living people
Year of birth missing (living people)
University of Wisconsin–Madison alumni
University of Michigan alumni
University of California, Los Angeles faculty
20th-century American chemists
21st-century American biochemists | Robert T. Clubb | Chemistry | 312 |
68,856,568 | https://en.wikipedia.org/wiki/Personal%20Information%20Protection%20Law%20of%20the%20People%27s%20Republic%20of%20China | The Personal Information Protection Law of the People's Republic of China (Chinese: 中华人民共和国个人信息保护法; pinyin: Zhōnghuá rénmín gònghéguó gèrén xìnxī bǎohù fǎ), referred to as the Personal Information Protection Law ("PIPL"), is a law intended to protect personal information rights and interests, standardize personal information handling activities, and promote the rational use of personal information. It also addresses the transfer of personal data outside of China.
The PIPL was adopted on August 20, 2021, and is effective November 1, 2021. It is related to, and builds on top of both China's Cybersecurity Law ("CSL") and China's Data Security Law ("DSL").
A reference English version was published on December 29, 2021.
History
On August 20, 2021, the Standing Committee of the 13th National People's Congress passed the Personal Information Protection Law ("PIPL"). The law, which took effect on November 1, 2021, applies to the activities of handling the personal information of natural persons within the borders of China.
In comparison to countries in the West, China has developed its privacy laws at a slower pace. In recent years, though, China has developed regulations more actively, as the nation is considered a "global cyberforce." China's policies differ from those of Western nations in that its perception of privacy differs for historical and cultural reasons.
During the drafting process, the European Union's General Data Protection Regulation ("GDPR") was used as a model and in some areas, PIPL closely tracks the GDPR.
Provisions
Scope
The PIPL generally covers all organizations operating in China processing personal information.
Long Arm Jurisdiction
Some provisions also include Long Arm Jurisdiction over data collection and processes of organizations outside of China. These apply when:
The purpose is to provide products or services to natural persons inside the borders;
Analyzing or assessing activities of natural persons inside the borders;
Other circumstances provided in laws or administrative regulations.
This presumably applies to offshore or multi-national companies with Chinese customers in China, for example Amazon who might be shipping goods to a Chinese buyer, or Apple who may have Chinese users in the American App Store.
All such entities are required to establish a dedicated entity or appoint a representative within China.
Exemptions
There are few exemptions, but one that was added during late drafting provides a non-consent legal basis for handling employee data, though employee consent is still needed for overseas transfer, such as to a global corporate parent.
Key Themes
Individual privacy, control and consent are consistent themes throughout the law, which lays down key principles including:
Personal Information - Defining personal information, including sensitive information;
Legal Basis - All data collection and processing must have a legal basis. There are several bases, but unlike in the GDPR, there is no legitimate interests basis;
Consent - A key legal basis is consent, which, unlike in the GDPR, must be obtained for each type of data processing activity, especially for transferring an individual's data overseas. Consent must also be "informed" with various types of notification and required content specified in the law;
Sensitive Data - Some types of personal information are sensitive, and the law provides an open-ended list of examples (unlike the GDPR's specific list of "special categories"), including biometrics, religion, specially-designated status, medical health, financial accounts, and location tracking;
Protecting Children - All personal information of minors under the age of 14 is sensitive, and specific consent is required from parents to process this information. This is much stricter than in the GDPR;
Individual Rights - The PIPL gives individuals several key rights over their information, such as the right to correct, delete, and view or transfer the data collected about them.
Responsibilities - Several articles lay out the responsibilities of the various parties collecting, transferring, and handling personal information;
Government Use of Personal Information - The PIPL includes when and how government agencies can collect and process data on individuals, including for national security, emergency, and other purposes;
Overseas Transfers - Specific restrictions on transfer of personal data outside of China;
Enforcement - Severe penalties for violations.
Definitions
The law defines the following:
Personal Information - Any type of information that identifies or can identify natural persons recorded electronically or by other means, but does not include anonymized information.
Sensitive Personal Information - Personal information that once leaked or illegally used can easily cause natural persons to suffer encroachments on their dignity or harms to their persons or property; including information such as biometrics (including facial recognition), religious faith, particular identities, medical care and health, financial status, and location tracking, as well as the personal information of minors under the age of 14.
Individuals - People whose data is being collected or processed (similar to the GDPR's Data Subject).
Personal Information Handlers - Organizations or individuals that independently make decisions about the purposes and methods of personal information handling in personal information handling activities.
Entrusted Persons - External entities that Information Handlers entrust to handle personal information; essentially third parties.
Large Processors - Companies that process large amounts of data, as defined in Article 40, including Critical Information Infrastructure Operators ("CIIO") under China's critical information infrastructure regulations.
Handling of Personal Information - Includes personal information collection, storage, use, processing, transmission, provision, disclosure, deletion, etc.
Automated Decision-Making - The use of computer programs to automatically analyze, evaluate, and make decisions on personal information concerning personal behavior habits, hobbies, or economic, health, or credit status, and so forth.
De-Identification - The process of handling personal information so that it is impossible to identify a specific natural person without the help of additional information.
Anonymization - The process in which personal information is handled so that it cannot be used to identify a specific natural person and cannot be restored after being so handled.
Legal Basis
All personal information collection and processing must have one of the following legal bases:
Individuals’ consent obtained;
Where necessary to conclude or fulfill a contract in which the individual is an interested party, or where necessary to conduct human resources management according to lawfully formulated labor rules and structures and lawfully concluded collective contracts;
Where necessary to fulfill statutory duties and responsibilities or statutory obligations;
Where necessary to respond to sudden public health incidents or protect natural persons’ lives and health, or the security of their property, under emergency conditions;
Handling personal information within a reasonable scope to implement news reporting, public opinion supervision, and other such activities for the public interest;
When handling personal information disclosed by persons themselves or otherwise already lawfully disclosed, within a reasonable scope in accordance with the provisions of this Law.
Other circumstances provided in laws and administrative regulations.
Unlike in the GDPR, there is no legitimate interests basis. Therefore, most consumers will likely be covered by giving their direct consent (such as for cookies, newsletters, etc.) or by contract fulfillment (such as shipping goods to them or providing services).
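This enumeration of legal bases translates naturally into data-structure form. The sketch below is purely illustrative and is not legal advice; all names are hypothetical. It records the PIPL's legal bases and checks that consent-based processing actually has consent behind it:

```python
from enum import Enum, auto

class LegalBasis(Enum):
    """Legal bases for processing enumerated in the PIPL.

    Note there is no 'legitimate interests' basis, unlike the GDPR."""
    CONSENT = auto()
    CONTRACT_OR_HR_MANAGEMENT = auto()
    STATUTORY_DUTY = auto()
    EMERGENCY_PROTECTION = auto()
    NEWS_PUBLIC_INTEREST = auto()
    LAWFULLY_DISCLOSED = auto()
    OTHER_LEGAL_PROVISION = auto()

def may_process(basis: LegalBasis, consent_given: bool = False) -> bool:
    """Every processing activity needs one legal basis; if that basis is
    consent, the consent must actually have been obtained."""
    if basis is LegalBasis.CONSENT:
        return consent_given
    return isinstance(basis, LegalBasis)

# A newsletter signup relies on consent; shipping purchased goods can
# rely on contract fulfillment and needs no separate consent.
assert may_process(LegalBasis.CONSENT, consent_given=True)
assert not may_process(LegalBasis.CONSENT, consent_given=False)
assert may_process(LegalBasis.CONTRACT_OR_HR_MANAGEMENT)
```

Because consent can be revoked at any time, a real system would re-run such a check on every processing event rather than caching the result.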
Consent
Consent is a major concern of the PIPL and a key legal basis on which handlers can process personal information.
If there is no other legal basis for processing data, handlers must get consent for data collection and processing, and this consent can be revoked by any individual at any time. Handlers are not allowed to refuse to provide products or services if an individual withholds or withdraws their consent for non-essential processing.
Separate consent is also specifically required in a number of situations:
Transfer of personal data by data controllers to third parties (Article 23);
Publication of personal data (Article 25);
Publication or provision of personal data collected by equipment installed in public places for security purposes, such as personal images (Article 26);
Processing of sensitive personal data (Article 29); and
Cross-border transfers of personal data (Article 39).
Consent for these situations cannot be "bundled" and thus must be obtained separately from the individual.
Where a change occurs in the purpose of personal information handling, the handling method, or the categories of handled personal information, the individual's consent shall be obtained again.
Individual Rights
Individuals have several specific rights under the PIPL - they can:
Know & Decide - Know about and decide on the handling of their data, including refusing or limiting it.
Access & Copy - View and copy their data.
Correct or Complete - Request to correct inaccurate data.
Erasure - Request their information be deleted and/or revoke consent.
Explanation - Request handlers explain their handling of an individual's personal information.
Portability - Request moving their data to another handler.
Automated Decision Making
There are specific rules for automated decision making in the PIPL, including the right of individuals to opt-out, such as disabling product recommendations.
The law specifically requires "transparency of the decision-making and the fairness and justice of the handling result shall be guaranteed, and they may not engage in unreasonable differential treatment of individuals in trading conditions such as trade price, etc."
Companies pushing information delivery or commercial sales to individuals through automated decision-making methods must simultaneously provide an option not targeting the individual's characteristics, or provide the individual with a convenient method to refuse.
When the use of automated decision-making produces decisions with a major influence on the rights and interests of an individual, that individual has the right to require personal information handlers to explain the matter, and the right to refuse decisions made solely through automated decision-making methods.
Automated decision-making is defined as "the activity of using computer programs to automatically analyze or assess personal behaviors, habits, interests, or hobbies, or financial, health, credit, or other status, and make decisions."
Facial Recognition
The PIPL specifically covers the use of facial recognition in public spaces, including that it can only be used for public security reasons unless each individual separately consents:
"The installation of image collection or personal identity recognition equipment in public venues shall occur as required to safeguard public security and observe relevant State regulations, and clear indicating signs shall be installed. Collected personal images and personal distinguishing identity characteristic information can only be used for the purpose of safeguarding public security; it may not be used for other purposes, except where individuals’ separate consent is obtained."
Handler Obligations
Personal information handlers have several specific obligations:
Formulating internal management structures and operating rules;
Implementing categorized management of personal information;
Adopting corresponding technical security measures such as encryption, de-identification, etc.;
Reasonably determining operational limits for personal information handling, and regularly conducting security education and training for employees;
Formulating and organizing the implementation of personal information security incident response plans;
Other measures provided in laws or administrative regulations.
All handlers must "regularly engage in audits of their personal information handling and compliance with laws and administrative regulations."
Personal Information Protection Officers
In addition, at a certain (not yet defined) data handling scale, handlers must appoint "personal information protection officers, to be responsible for supervising personal information handling activities as well as adopted protection measures, etc."
Impact Assessment
Under the following circumstances, handlers must perform a personal information protection impact assessment and report the results:
Handling sensitive personal information;
Using personal information to conduct automated decision-making;
Entrusting personal information handling, providing personal information to other personal information handlers, or disclosing personal information;
Providing personal information abroad;
Other personal information handling activities with a major influence on individuals.
Such assessments must include:
Whether or not the personal information handling purpose, handling method, etc., are lawful, legitimate, and necessary;
The influence on individuals' rights and interests, and the security risks;
Whether protective measures undertaken are legal, effective, and suitable to the degree of risk.
Data Localization
The PIPL has specific requirements on data localization, the storage and processing of personal information in China.
Data Security
To prevent unauthorized access as well as personal information leaks, distortion, or loss, information handlers must ensure that their personal information handling conforms to the provisions of laws and administrative regulations by adopting the security measures listed under Handler Obligations above.
Contractual Elements
Agreements are required when a handler entrusts personal data handling to another handler. Some law firms have suggested this will result in specific standard contractual clauses ("SCC"), similar to in the GDPR.
Breach Notification
All data leaks must be reported internally, and if "harm may have been created", handlers may be required to notify the individuals affected. Notification details must include:
The information categories, causes, and possible harm caused by the leak, distortion, or loss that occurred or might have occurred;
The remedial measures taken by the personal information handler and measures individuals can adopt to mitigate harm;
Contact method of the personal information handler.
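The required notification content maps naturally onto a simple record type. The following sketch uses hypothetical field names and example values; the law prescribes the content of a notification, not any particular format:

```python
from dataclasses import dataclass

@dataclass
class BreachNotification:
    """Illustrative record of the notification content required by the PIPL."""
    categories: list[str]               # information categories affected
    cause: str                          # cause of the leak, distortion, or loss
    possible_harm: str                  # harm that occurred or might occur
    remedial_measures: list[str]        # measures taken by the handler
    individual_mitigations: list[str]   # measures individuals can adopt
    handler_contact: str                # contact method of the handler

notice = BreachNotification(
    categories=["email address", "phone number"],
    cause="credential stuffing against an administrative account",
    possible_harm="targeted phishing of affected users",
    remedial_measures=["rotated credentials", "enforced MFA"],
    individual_mitigations=["treat unsolicited messages with caution"],
    handler_contact="privacy@example.com",
)
print(notice.handler_contact)
```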
Large Handlers
Large-scale handlers, such as those "providing important Internet platform services, that have a large number of users, and whose business models are complex", also have the following obligations:
Establish and complete personal information protection compliance systems and structures according to State regulations, and establish an independent body composed mainly of outside members to supervise personal information protection circumstances;
Abide by the principles of openness, fairness, and justice; formulate platform rules; and clarify the standards for intra-platform product or service providers' handling of personal information and their personal information protection duties;
Stop providing services to product or service providers on the platform that seriously violate laws or administrative regulations in handling personal information;
Regularly release personal information protection social responsibility reports, and accept society's supervision.
Overseas Transfers
Moving personal information outside of China is only allowed if one of these conditions is satisfied:
Passing a security assessment organized by the State cybersecurity and information department according to Article 40 of this Law;
Undergoing personal information protection certification conducted by a specialized body according to provisions by the State cybersecurity and information department;
Concluding a contract with the foreign receiving side in accordance with a standard contract formulated by the State cybersecurity and information department, agreeing upon the rights and responsibilities of both sides;
Other conditions provided in laws or administrative regulations or by the State cybersecurity and information department.
All such transfers require each individual's separate consent and notification about "the foreign receiving side’s name or personal name, contact method, handling purpose, handling methods, and personal information categories, as well as ways or procedures for individuals to exercise the rights provided in this Law with the foreign receiving side, and other such matters."
Sharing data with foreign governments
Information handlers are prohibited from sharing any personal information with foreign judicial or law enforcement agencies without the approval of the competent Chinese authorities.
This has raised concerns among law firms about how multi-national corporations would or could respond to judicial inquiries in other countries, such as a warrant for data held about a Chinese citizen in those countries.
Government Departments
The PIPL includes a legal basis for how the government ("State Organs") can collect and process data. Generally, the government must follow the same rules as non-government entities, including notifications. There are some exceptions, such as when notification "shall impede State organs' fulfillment of their statutory duties and responsibilities".
See also
Cybersecurity Law of the People's Republic of China (CSL)
Cybersecurity
Data Security Law of the People's Republic of China (DSL)
Data Governance
Information Privacy
General Data Protection Regulation (GDPR)
California Consumer Privacy Act (CCPA)
Children's Online Privacy Protection Act (COPPA)
EU–US Privacy Shield
Human rights
Data portability
Do Not Track legislation
Privacy Impact Assessment
Citations
Privacy legislation
Information privacy
Data protection
2021 in China
Law of the People's Republic of China | Personal Information Protection Law of the People's Republic of China | Engineering | 3,312 |
50,906,849 | https://en.wikipedia.org/wiki/Intel%208257 | The Intel 8257 is a four-channel direct memory access (DMA) controller, a part of the MCS 85 microprocessor family. The chip is supplied in a 40-pin DIP package.
See also
Intel 8237 - DMA Controller
External links
Intel: 8257/8257-5 Programmable DMA Controller (PDF; 2.2 MB).
NEC Electronics (Europe) GmbH, 1982 Catalog, p. 665–674 (μPD8257; μPD8257C-5).
NEC Electronics Inc.
Intel chipsets
Input/output integrated circuits | Intel 8257 | Technology | 121 |
20,623,257 | https://en.wikipedia.org/wiki/Form%20grabbing | Form grabbing is a form of malware that works by retrieving authorization and log-in credentials from a web data form before it is passed over the Internet to a secure server. This allows the malware to avoid HTTPS encryption. This method is more effective than keylogger software because it will acquire the user’s credentials even if they are input using virtual keyboard, auto-fill, or copy and paste. It can then sort the information based on its variable names, such as email, account name, and password. Additionally, the form grabber will log the URL and title of the website the data was gathered from.
History
The method was invented in 2003 by the developer of a variant of a trojan horse called Downloader.Barbew, which attempts to download Backdoor.Barbew from the Internet and bring it over to the local system for execution. However, it did not become a well-known type of malware attack until the emergence of the infamous banking trojan Zeus in 2007. Zeus was used to steal banking information by man-in-the-browser keystroke logging and form grabbing. Like Zeus, the Barbew trojan was initially spammed to large numbers of individuals through e-mails masquerading as messages from big-name banking companies. Form grabbing as a method first advanced through iterations of Zeus that allowed the module not only to detect the grabbed form data but also to determine how useful the information taken was. In later versions, the form grabber also recorded the website to which the data was actually submitted, leaving sensitive information more vulnerable than before.
Known occurrences
A trojan known as Tinba (Tiny Banker Trojan), first discovered in 2012, was built with form grabbing and is able to steal online banking credentials. Another program, Weyland-Yutani BOT, was the first such software designed to attack the macOS platform and can work on Firefox. The web-inject templates in Weyland-Yutani BOT differed from those of existing tools such as Zeus and SpyEye.
Another known occurrence is the British Airways breach in September 2018. In British Airways' case, the organization's servers appeared to have been compromised directly, with the attackers modifying one of the JavaScript files (the Modernizr JavaScript library, version 2.6.2) to include a PII/credit-card logging script that would grab the payment information and send it to a server controlled by the attacker, hosted on a "[.]com" domain with an SSL certificate issued by the "Comodo" certificate authority.
The British Airways mobile application also loads a webpage built with the same CSS and JavaScript components as the main website, including the malicious script installed by Magecart. Thus, the payments made using the British Airways mobile app were also affected.
Countermeasures
Due to the recent increase in keylogging and form grabbing, antivirus companies are adding additional protection to counter the efforts of keyloggers and prevent the collection of passwords. These efforts have taken different forms, varying by antivirus company, such as Safepay, password managers, and others. To further counter form grabbing, users' privileges can be limited, which would prevent them from installing Browser Helper Objects (BHOs) and other form-grabbing software. Administrators should add known malicious servers to blocklists in their firewalls.
New countermeasures, such as the use of out-of-band communication, to circumvent form grabbers and man-in-the-browser attacks are also emerging; examples include FormL3SS. Those that circumvent the threat use a different communication channel to send the sensitive data to the trusted server; thus, no information is entered on the compromised device. Other initiatives, such as Fidelius, use added hardware to protect the input/output of a compromised or believed-compromised device.
See also
Keystroke logging
Malware
Trojan horse
Web security exploits
Computer insecurity
Internet privacy
Tiny Banker Trojan
References
Hacking (computer security)
Types of malware
Web security exploits | Form grabbing | Technology | 816 |
70,217,936 | https://en.wikipedia.org/wiki/Guy%20de%20T%C3%A9ramond%20Peralta | Guy de Téramond Peralta is a Costa Rican-French theoretical physicist. His research has been focused on nuclear and high energy physics. Following the quest for a wave equation similar to the Schrödinger equation in atomic physics, he introduced with Stanley Brodsky a nonperturbative first approximation to quantum chromodynamics to describe hadronic structure, known as light front holography. This analytic approach to the strong interactions is based on light front quantization and the AdS/CFT correspondence. He is also known for his role in the pioneering interconnections in Costa Rica and the Central American region to the Internet. In 2023, de Téramond was inducted into the Internet Hall of Fame by the Internet Society.
Education and scientific career
De Téramond obtained his Doctorat de Troisième Cycle from the Pierre et Marie Curie University in 1973 and completed his Doctorat d'État in Theoretical Physics in 1977 from the University of Paris at Orsay, under the supervision of Mary K. Gaillard and Jean Trân Thanh Vân. He became Assistant Professor of Physics at the University of Costa Rica in 1977 and Full Professor in 1982. He was a visiting scientist at the Lyman Laboratory of Physics at Harvard University (1983-1984), SLAC National Accelerator Laboratory at Stanford University (1986-1988) and at the École Polytechnique in 2007.
Research
De Téramond's thesis led, in a joint experiment of the Universities of Lausanne, Munich and Zurich in 1979, to the confirmation of the charge symmetry breaking of the nuclear forces. In collaboration with Stanley Brodsky and Ivan Schmidt he studied in 1990 the properties of a possible form of nuclear matter catalyzed by heavy quarks, known as hadro-charmonium.
His research in collaboration with Brodsky and Hans Günter Dosch is centered on the extension and applications of holographic light front QCD (HLFQCD) to hadron structure and dynamics, based on the holographic embedding of light-front physics in a higher dimensional gravity theory (gauge/gravity duality). Using the new holographic approach he also explored with Brodsky and Alexandre Deur the strength of the strong force at large distances where QCD iteration methods fail.
More recently, also in collaboration with Brodsky and Dosch, it was found that color symmetry and confinement are manifest as an underlying superconformal algebraic structure in holographic QCD, which also leads to specific connections between mesons and baryons.
De Téramond is an active member of the HLFHS Collaboration for the applications of the new holographic theories to strong interactions; in particular, to the study of the quark and gluon distribution functions in hadrons, including the strange and charm quark sea distribution in the proton, which are evolved to higher scales for meaningful comparisons with existing or upcoming experimental results.
Networking projects
In January 1990, de Téramond was commissioned by the Vice-President for Research of the University of Costa Rica (UCR) to lead the project for the connection of the University to BITNET, the academic computer network at the City University of New York and Yale University.
The first BITNET connection was achieved in November 1990 with Florida Atlantic University using a digital satellite link from PanAmSat, followed by the connection of Panama in 1992 to the UCR node. Concurrently, de Téramond led the project which culminated with the interconnection of the University of Costa Rica to the Internet in January 1993 using a point of presence (POP) established by the National Science Foundation (NSF) in Homestead, Florida. He coordinated the initiative for the implementation of the National Research Network (CRNet) based on the TCP/IP protocols. The project (1993-2000) was driven by the University of Costa Rica and the Ministry of Science and Technology and became operational in April 1993.
Under Saul Hahn's Hemisphere Wide Inter-University Scientific and Technological Information Network project (RedHUCyT) of the Organization of American States, de Téramond and his team of engineers from the University of Costa Rica participated in the pioneering connections of the Central American and Caribbean region to the Internet: Nicaragua (1994), Panama (1994), Honduras (1995), Jamaica (1995), Guatemala (1995), El Salvador (1996) and Belize (1997). With the support of the Costa Rican government, RedHUCyT provided a satellite ground station for the academic network. The antenna was inaugurated at the UCR campus in April 1997, thus ending a long controversy with the telecommunications monopoly.
De Téramond was the Director of the Computer Center at the University of Costa Rica (1997–2000) and Minister of Science and Technology of Costa Rica (2000-2002), where he led, jointly with the Instituto Costarricense de Electricidad (ICE), the implementation of the Advanced Internet Network to bring broadband connectivity across the country. The project network architecture was based on IP over ICE's optical fiber and the MPLS routing protocol. The first phase of this project was successfully implemented in April 2001.
He has been a member of the board of directors of the Network Information Center (NIC CR) since its creation in the early 1990s. More recently, he participated in the establishment of the Internet Exchange Point (CRIX) to allow the direct exchange of data among all participant autonomous systems, lowering network delay and the cost of international links. CRIX was inaugurated in 2014. He also contributed to setting up the Internet Consulting Council in Costa Rica (CCI), which has become a reference point for Internet governance.
Awards
Fulbright Research Award (1983)
Guggenheim Fellow (1986)
Leonov Medallion (1997)
Wolfram Innovator Award (2020)
Internet Hall of Fame (2023)
References
External links
Guy de Téramond Scientific publications on INSPIRE-HEP
ORCID digital identifier
Holographic light front QCD on nLab
Living people
People from Biarritz
Costa Rican scientists
French physicists
Costa Rican people of French descent
Theoretical physicists
Paris-Sorbonne University alumni
Paris-Saclay University alumni
Academic staff of the University of Costa Rica
Particle physicists
Year of birth missing (living people) | Guy de Téramond Peralta | Physics | 1,276 |
77,262,226 | https://en.wikipedia.org/wiki/F%C5%ABrin | A fūrin (風鈴) is a small, bowl-shaped Japanese wind chime typically hung during the summer. A piece of paper called a tanzaku (短冊) is usually hung from each fūrin to cause it to ring even in just a slight breeze. The sound of the fūrin and the sight of the paper blowing in the wind are seen by many Japanese people as having a cooling effect during the hot Japanese summer.
History
The origins of fūrin are believed to lie in the Chinese Tang Dynasty, when metal wind chimes were hung in bamboo forests and used to tell fortunes. The word fūrin was first used in Japan during the Heian period, when such chimes were hung from eaves, particularly at Buddhist temples, as talismans to ward off evil spirits. They can still be found at many shrines and temples in Japan.
Glass fūrin were first made during the late Edo period. Glass is the most popular material used for fūrin in modern Japan and these glass fūrin are referred to as Edo Fūrin (江戸風鈴). It was also during the Edo period that fūrin were first seen to have cooling properties during the Japanese summer. It is this perceived effect that makes fūrin a summer fūbutsushi (風物詩), or an item characteristic of a certain Japanese season.
During the Edo period, these fūrin, which were made by free glassblowing, were very expensive and primarily used by feudal lords and wealthy merchants. Mass-produced glass fūrin in modern Japan have made them affordable and widespread at Japanese households, but the tradition of free-blowing glass to make fūrin is still practiced by some craftsmen in Japan. Fūrin made from metal and other materials can also still be found throughout Japan.
Fūrin events
During summer in Japan, various events are held throughout the country in which many, sometimes thousands, of fūrin are hung. These fūrin displays, often at temples or shrines, are popular seasonal attractions. Notable events include:
Mizusawa Station, Ōshū, Iwate Prefecture - During summer hundreds of fūrin are displayed at the platform of Mizusawa Station. The sound of these fūrin was chosen as one of the 100 Soundscapes of Japan.
Kawasaki Daishi Fūrin Market - A summer market at Kawasaki Daishi Temple in Kawasaki, Kanagawa Prefecture which sells thousands of fūrin from across Japan.
Kawagoe Hikawa Shrine - about 1,500 fūrin decorate Hikawa Shrine in Kawagoe, Saitama Prefecture during summer.
Gallery
References
Lucky symbols
Wind-activated musical instruments
Objects believed to protect from evil
Amulets
Talismans
Religious objects
Shinto religious objects
Superstitions of Japan
Culture of Japan | Fūrin | Physics | 543 |
2,098,913 | https://en.wikipedia.org/wiki/NGC%20147 | NGC 147 (also known as DDO3 or Caldwell 17) is a dwarf spheroidal galaxy about 2.58 Mly away in the constellation Cassiopeia. NGC 147 is a member of the Local Group of galaxies and a satellite galaxy of the Andromeda Galaxy (M31). It forms a physical pair with the nearby galaxy NGC 185,
another remote satellite of M31. It was discovered by John Herschel in September 1829. Visually it is both fainter and slightly larger than NGC 185 (and therefore has a considerably lower surface brightness). This means that NGC 147 is more difficult to see than NGC 185, which is visible in small telescopes. In the Webb Society Deep-Sky Observer's Handbook, the visual appearance of NGC 147 is described as follows:
Large, quite faint, irregularly round; it brightens in the middle to a stellar nucleus.
The membership of NGC 147 in the Local Group was confirmed by Walter Baade in 1944 when he was able to resolve the galaxy into individual stars with the telescope at Mount Wilson near Los Angeles.
Characteristics
A survey of the brightest asymptotic giant branch (AGB) stars in the area of radius 2 from the center of NGC 147 shows that the last significant star-forming activity in NGC 147 occurred around 3 Gyr ago.
NGC 147 contains a large population of older stars which show a spread in metallicity and age. The metallicity spread suggests that NGC 147 has undergone chemical enrichment. However, H I has not been observed, and the upper limit on the interstellar medium (ISM) mass is much lower than expected if the material emitted from evolving stars had been retained in the galaxy. This implies depletion of the ISM.
Distance measurements
At least two techniques have been used to measure distances to NGC 147. The surface brightness fluctuations technique estimates distances to galaxies based on the graininess of the appearance of their bulges. The distance measured to NGC 147 using this technique is 2.67 ± 0.18 Mly (870 ± 60 kpc). However, NGC 147 is close enough that the tip of the red giant branch (TRGB) method may be used to estimate its distance. The estimated distance to NGC 147 using this technique is 2.21 ± 0.09 Mly (680 ± 30 kpc). Averaged together, these distance measurements give a distance estimate of 780 ± 30 kpc (about 2.5 Mly).
See also
Andromeda's satellite galaxies
Notes
average(870 ± 60, 680 ± 30) = ((870 + 680) / 2) ± ((60² + 30²)^0.5 / 2) = 780 ± 30
References
External links
SEDS – NGC 147
Dwarf spheroidal galaxies
Local Group
Andromeda Subgroup
Cassiopeia (constellation)
0147
00326
002004
017b
18290908 | NGC 147 | Astronomy | 576 |
3,615,656 | https://en.wikipedia.org/wiki/Iota%20Horologii%20b | Iota Horologii b (ι Hor b / ι Horologii b), often catalogued HR 810 b, is an extrasolar planet approximately 56.5 light-years away in the constellation of Horologium (the Pendulum Clock). Iota Horologii b has a minimum mass 2.26 times that of Jupiter; astrometric measurements from Gaia suggest it has a true mass of .
Detection and discovery
The discovery of Iota Horologii b was the result of a long-term survey of forty Solar analog stars that was begun in November 1992. The planet represents the first discovery of an extrasolar planet with a European Southern Observatory instrument, with the data found at the La Silla Observatory in Chile.
The Keplerian signal found the planet to have an orbital period of 320.1 days, indicative of an orbiting planet with minimum mass of 2.26 Jupiter masses. Iota Horologii b was announced in the summer of 1999 as the first planet found by a team of planet hunters led by Martin Kürster.
The measurements of Iota Horologii show that the planet orbits the star approximately every 320 days. From this period, the known mass of the central star (1.25 solar masses) and the amplitude of the velocity changes, a mass of at least 2.26 times that of planet Jupiter is deduced for the planet.
It revolves around the host star in a somewhat elongated orbit. If it were located in the Solar System, this orbit would stretch from just outside the orbit of Venus (at 117 million km or 0.78 astronomical units [AU] from the Sun) to just outside the orbit of the Earth (at 162 million km or 1.08 AU). Because the planet is at least 700 times more massive than the Earth (2.26 Jupiter masses, with Jupiter itself about 318 Earth masses), it is predicted that Iota Horologii b is more similar to Jupiter than to a terrestrial planet.
Preliminary astrometric analysis of Iota Horologii b suggested that planet b may have as much as 24 times the mass of Jupiter with an inclination of 5.5 degrees from Earth's line of sight. With these calculations, Iota Horologii b may actually be an extremely dim brown dwarf and a substellar companion of Iota Horologii. However, these measurements were later proved useful only for upper limits of inclination. An astrometric measurement of the planet's inclination and true mass was published in 2022 as part of Gaia DR3, revealing it to have a planetary mass of .
See also
List of exoplanets discovered before 2000
References
External links
Horologium (constellation)
Giant planets
Exoplanets discovered in 1999
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | Iota Horologii b | Astronomy | 561 |
454,781 | https://en.wikipedia.org/wiki/Quadratic%20field | In algebraic number theory, a quadratic field is an algebraic number field of degree two over , the rational numbers.
Every such quadratic field is some where is a (uniquely defined) square-free integer different from and . If , the corresponding quadratic field is called a real quadratic field, and, if , it is called an imaginary quadratic field or a complex quadratic field, corresponding to whether or not it is a subfield of the field of the real numbers.
Quadratic fields have been studied in great depth, initially as part of the theory of binary quadratic forms. There remain some unsolved problems. The class number problem is particularly important.
Ring of integers
Discriminant
For a nonzero square-free integer d, the discriminant of the quadratic field K = Q(√d) is d if d is congruent to 1 modulo 4, and otherwise 4d. For example, if d is −1, then K is the field of Gaussian rationals and the discriminant is −4. The reason for such a distinction is that the ring of integers of K is generated by (1 + √d)/2 in the first case and by √d in the second case.
The set of discriminants of quadratic fields is exactly the set of fundamental discriminants (apart from 1, which is a fundamental discriminant but not the discriminant of a quadratic field).
Prime factorization into ideals
Any prime number p gives rise to an ideal pOK in the ring of integers OK of a quadratic field K. In line with general theory of splitting of prime ideals in Galois extensions, this may be
p is inert: (p) is a prime ideal.
The quotient ring is the finite field with p² elements: OK/pOK = Fp².
p splits: (p) is a product of two distinct prime ideals of OK.
The quotient ring is the product OK/pOK = Fp × Fp.
p is ramified: (p) is the square of a prime ideal of OK.
The quotient ring contains non-zero nilpotent elements.
The third case happens if and only if p divides the discriminant D. The first and second cases occur when the Kronecker symbol (D/p) equals −1 and +1, respectively. For example, if p is an odd prime not dividing D, then p splits if and only if D is congruent to a square modulo p. The first two cases are, in a certain sense, equally likely to occur as p runs through the primes—see Chebotarev density theorem.
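For odd primes the three cases can be read off computationally; the sketch below substitutes Euler's criterion for the Kronecker symbol (adequate only for odd primes, which is an assumption of this sketch), and the function names are illustrative:

```python
def discriminant(d):
    """Field discriminant of Q(sqrt(d)) for a square-free integer d (not 0 or 1)."""
    return d if d % 4 == 1 else 4 * d

def splitting(p, d):
    """Behaviour of an odd prime p in Q(sqrt(d)): ramified if p divides the
    discriminant D, split if D is a square mod p, inert otherwise."""
    D = discriminant(d)
    if D % p == 0:
        return "ramified"
    # Euler's criterion: D is a square mod p iff D^((p-1)/2) = 1 (mod p).
    return "split" if pow(D % p, (p - 1) // 2, p) == 1 else "inert"

# In the Gaussian rationals (d = -1, discriminant -4):
print(splitting(5, -1), splitting(3, -1), splitting(13, -1))  # split inert split
```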
The law of quadratic reciprocity implies that the splitting behaviour of a prime p in a quadratic field depends only on p modulo D, where D is the field discriminant.
Class group
Determining the class group of a quadratic field extension can be accomplished using Minkowski's bound and the Kronecker symbol because of the finiteness of the class group. A quadratic field K = Q(√d) has discriminant D as above (d if d ≡ 1 mod 4, otherwise 4d),
so the Minkowski bound is MK = √|D|/2 if d > 0, or MK = (2/π)√|D| if d < 0.
Then, the ideal class group is generated by the prime ideals whose norm is less than MK. This can be done by looking at the decomposition of the ideals (p) for prime p where p < MK. These decompositions can be found using the Dedekind–Kummer theorem.
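As a minimal sketch, assuming the standard Minkowski bound for a quadratic field (√|D|/2 in the real case and (2/π)√|D| in the imaginary case), the bound and the primes to check can be computed directly:

```python
import math

def discriminant(d):
    """Field discriminant of Q(sqrt(d)) for a square-free integer d (not 0 or 1)."""
    return d if d % 4 == 1 else 4 * d

def minkowski_bound(d):
    """sqrt(|D|)/2 for a real quadratic field, (2/pi)*sqrt(|D|) for an imaginary one."""
    D = abs(discriminant(d))
    return math.sqrt(D) / 2 if d > 0 else 2 * math.sqrt(D) / math.pi

def primes_to_check(d):
    """Primes p below the Minkowski bound; the ideals (p) generate the class group."""
    bound = minkowski_bound(d)
    return [p for p in range(2, int(bound) + 1)
            if all(p % q for q in range(2, p))]

# Q(sqrt(-5)): discriminant -20, bound ~ 2.85, so only (2) needs to be decomposed.
print(primes_to_check(-5))  # [2]
```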
Quadratic subfields of cyclotomic fields
The quadratic subfield of the prime cyclotomic field
A classical example of the construction of a quadratic field is to take the unique quadratic field inside the cyclotomic field generated by a primitive pth root of unity, with p an odd prime number. The uniqueness is a consequence of Galois theory, there being a unique subgroup of index 2 in the Galois group over Q. As explained at Gaussian period, the discriminant of the quadratic field is p for p ≡ 1 (mod 4) and −p for p ≡ 3 (mod 4). This can also be predicted from enough ramification theory. In fact, p is the only prime that ramifies in the cyclotomic field, so p is the only prime that can divide the quadratic field discriminant. That rules out the 'other' discriminants −4p and 4p in the respective cases.
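The quadratic subfield can be exhibited concretely through the quadratic Gauss sum g = Σ ζ^(n²): its square is p when p ≡ 1 (mod 4) and −p when p ≡ 3 (mod 4), so Q(g) is the quadratic subfield with the discriminant stated above. A quick numerical check (a sketch, not part of the source):

```python
import cmath

def gauss_sum_squared(p):
    """Square of the quadratic Gauss sum sum_{n=0}^{p-1} zeta^(n^2) for an
    odd prime p, zeta = exp(2*pi*i/p); equals p if p % 4 == 1, else -p."""
    zeta = cmath.exp(2j * cmath.pi / p)
    g = sum(zeta ** (n * n % p) for n in range(p))
    return round((g * g).real)  # the imaginary part is rounding noise

for p in (5, 7, 13, 19):
    print(p, gauss_sum_squared(p))  # 5 5, 7 -7, 13 13, 19 -19
```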
Other cyclotomic fields
If one takes the other cyclotomic fields, they have Galois groups with extra 2-torsion, so contain at least three quadratic fields. In general a quadratic field of field discriminant D can be obtained as a subfield of a cyclotomic field of |D|-th roots of unity. This expresses the fact that the conductor of a quadratic field is the absolute value of its discriminant, a special case of the conductor-discriminant formula.
Orders of quadratic number fields of small discriminant
The following table shows some orders of small discriminant of quadratic fields. The maximal order of an algebraic number field is its ring of integers, and the discriminant of the maximal order is the discriminant of the field. The discriminant of a non-maximal order is the product of the discriminant of the corresponding maximal order by the square of the determinant of the matrix that expresses a basis of the non-maximal order over a basis of the maximal order. All these discriminants may be defined by the formula of .
For real quadratic integer rings, the ideal class number, which measures the failure of unique factorization, is given in OEIS A003649; for the imaginary case, it is given in OEIS A000924.
Some of these examples are listed in Artin, Algebra (2nd ed.), §13.8.
See also
Eisenstein–Kronecker number
Genus character
Heegner number
Infrastructure (number theory)
Quadratic integer
Quadratic irrational
Stark–Heegner theorem
Dedekind zeta function
Quadratically closed field
Notes
References
Chapter 6.
Chapter 3.1.
External links
Algebraic number theory
Field (mathematics) | Quadratic field | Mathematics | 1,156 |
55,848,604 | https://en.wikipedia.org/wiki/Moment%20distance%20index | The moment distance index (MDI) is a shape-based metric or shape index that can be used to analyze spectral reflectance curves and waveform LiDAR, proposed by Dr. Eric Ariel L. Salas and Dr. Geoffrey M. Henebry (Salas and Henebry, 2014). In the case of spectral data, the shape of the reflectance curve should unmask fine points of the spectra usually not considered by existing band-specific indices. It has been used to identify spectral regions for chlorophyll and carotenoids, detect greenhouses using WorldView-2, Landsat, and Sentinel-2 satellite data, identify greenhouse crops, compute canopy heights, estimate green vegetation fraction, and optimize Fourier-transform infrared (FTIR) scans for soil spectroscopy.
Various approaches have been devised to analyze medium and fine spectral resolution data and maximize their use to extract specific information for vegetation biophysical and biochemical properties. Combinations of spectral bands, called indices, have been used to diminish the effects of soil background and/or atmospheric conditions while highlighting specific spectral features associated with plant or canopy properties. Vegetation indices (VIs) use the concept of band ratios and differences or weighted linear combinations to take advantage of the visible and NIR bands, two important spectral bands for vegetation studies, in measuring the photosynthetic activity of the plant and exploring vegetation dynamics. There is an extensive list of such indices, including the normalized difference vegetation index (NDVI), ratio-based indices such as the modified simple ratio, soil-distance-based indices such as the modified soil adjusted vegetation index (MSAVI), and many others. Whereas most indices incorporate two-band or three-band relations – slope-based, distance-based on the soil line, or optimized (slope-based and distance-based concepts combined) – no approach deals with the raw shape of the spectral curve. MDI, however, investigates the shape of the reflectance curve using multiple spectral bands not considered by other indices, which could carry additional spectral information useful for vegetation monitoring.
A full-waveform Light Detection And Ranging (LiDAR) system has the ability to record many returns per emitted pulse, as a function of time, to reveal the vertical structure of the illuminated object, showing the position of the individual targets, and finer details of the signature of intercepted surfaces or the proportion of the canopy complexity. Information associated with the illuminated object can be decoded from the generated backscattered waveform, as key features of the waveform such as the shape, area, and power are directly related to the geometry of the illuminated object. The richness of the LiDAR waveform holds promise to address the challenge of characterizing in detail the geometric and reflection characteristics of vegetation structure, e.g., the vertical canopy volume distribution. MDI utilizes the raw waveform and places importance on its shape and its return power. MDI departs from the usual Gaussian modeling in detecting peaks (canopy and ground), for example in canopy height estimation, and focuses more on the full geometry (raw shape) and radiometry (raw power) of the LiDAR waveform to retain the richness of the data.
The moment distance is a matrix of distances computed from two reference locations (pivots) to each spectral or waveform point within the specified range.
Assume that a curve (reflectance or absorption curve or backscattered waveform) is displayed in Cartesian coordinates with the abscissa displaying the wavelength λ or time lapse t and the ordinate displaying the reflectance ρ or the backscattered power p. Let the subscript LP denote the left pivot (located in a shorter wavelength for the spectral curve and earlier temporal reference point for the waveform) and subscript RP denote the right pivot (located in a longer wavelength for the spectral curve and later temporal reference point for the waveform). Let λLP and λRP be the wavelength locations observed at the left and right pivots for reflectance data, respectively, where left (right) indicates a shorter (longer) wavelength. Let tLP and tRP be the time values observed at the left and right pivots for waveform data, respectively, where left (right) indicates an earlier (later) time. The proposed MD approach can be described in a set of equations.
For spectral data, the index is given as:
where the moment distance from the left pivot is MDLP = Σ √(ρi² + (i − λLP)²) and the moment distance from the right pivot is MDRP = Σ √(ρi² + (λRP − i)²), with both sums running over the successive wavelength locations i from λLP to λRP; λLP and λRP are the wavelength locations at the left and right pivots, and ρi is the spectral reflectance at wavelength i.
For waveform LiDAR data, the index is given as:
where the moment distance from the left pivot (MDLP) is the sum of the hypotenuses constructed from the left pivot to the power at successively later times (index i from tLP to tRP): one base of each triangle is the difference from the left pivot (i − tLP) along the abscissa and the other base is simply the backscattered power at i. Similarly, the moment distance from the right pivot (MDRP) is the sum of the hypotenuses constructed from the right pivot to the power at successively earlier times (index i from tRP to tLP): one base of each triangle is the difference from the right pivot (tRP − i) along the abscissa and the other base is simply the backscattered power at i.
MDI is an unbounded metric. It increases or decreases as a nontrivial function of the number of spectral bands or bins considered and the shape of the spectrum or waveform that spans those contiguous bands or bins. The number of bands or bins is a function of the spectral resolution of the imaging spectrometer or the temporal resolution of the LiDAR (digitization rate) and the length of the reference range (i.e., full extent or subsets of the curve) being analyzed.
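The hypotenuse construction can be sketched directly from the description above. Placing the pivots at the two ends of a uniformly sampled range is an assumption of this sketch, and the published combination of the two moment distances into the final index value is omitted:

```python
import math

def moment_distances(power):
    """Moment distances of a sampled curve, with the left pivot at bin 0 and
    the right pivot at bin n-1: each term is the hypotenuse over a base (the
    distance to the pivot along the abscissa) and a height (the value at that bin)."""
    n = len(power)
    md_lp = sum(math.hypot(i, p) for i, p in enumerate(power))
    md_rp = sum(math.hypot(n - 1 - i, p) for i, p in enumerate(power))
    return md_lp, md_rp

# A curve rising toward the right pivot:
md_lp, md_rp = moment_distances([0.1, 0.2, 0.4, 0.8])
print(md_rp > md_lp)  # True
```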
References
Remote sensing
Biogeography | Moment distance index | Biology | 1,247 |
31,790,975 | https://en.wikipedia.org/wiki/KLF14 | Krüppel-like factor 14, also known as basic transcription element-binding protein 5 (BTEB5) is a protein that in humans is encoded by the KLF14 gene. The corresponding Klf14 mouse gene is known as Sp6.
Function
KLF14 is a member of the Krüppel-like factor family of transcription factors. It regulates the transcription of various genes, including TGFβRII (the type II receptor for TGFβ). KLF14 is expressed in many tissues, lacks introns, and is subject to parent-specific expression.
KLF14 appears to be a master regulator of gene expression in adipose tissue.
Protein structure
Like the other members of the KLF family, KLF14 has three zinc-finger domains near the C-terminus, all three of which are of the classical C2H2 type. In the human, they are at amino acids 195–219, 225–249, and 255–277.
Human KLF14 is 323 amino acids in length, with a molecular weight of 33,124; in the mouse its length is 325.
Clinical significance
There appears to be a connection between KLF14 and coronary artery disease, hypercholesterolemia and type 2 diabetes.
References
External links
Huffington Post: Scientists Find Genetic 'Switch' For Obesity
Transcription factors | KLF14 | Chemistry,Biology | 271 |
9,387,395 | https://en.wikipedia.org/wiki/Stacked%20Volumetric%20Optical%20Disc | The Stacked Volumetric Optical Disc (or SVOD) is an optical disc format developed by Hitachi Maxell, which uses an array of wafer-thin optical discs to allow data storage.
Each "layer" (a thin polycarbonate disc) holds around 9.4 GB of information, and the wafers are stacked in layers of 20, 25, 100, or more, giving a substantially larger overall data capacity; for example, 100× cartridges could hold 940 GB using the system as announced.
Hitachi Maxell announced the creation of the SVOD standard in 2006, intending to launch it the next year. Aimed primarily at commercial users, the target price was ¥40,000 for a cartridge of 100 thin discs, with the potential to expand into the home user market. When they announced the system, Hitachi Maxell publicly recognized the possibility that the system could be eventually modified for use with a blue-violet laser, similar to Blu-ray discs, which could have expanded the capacity of the system to 3-5 TB. It is possible that they in fact developed this "second generation" SVOD for use with standard Blu-ray lasers, with each thin disc having a storage capacity of 25 GB, or a 100-disc cartridge having a storage of 5 TB. Hitachi Maxell developed systems both for burning to the media using standard DVD optical heads, and pre-recording to the media using a special heat imprint technique they called "nanoimprinting." Though nanoimprinting initially required 6 minutes per disc for pressing, they had improved it to 8 seconds, and intended to achieve a comparable throughput to standard DVD pressing. The primary application of the SVOD system seemed to be business data archival, replacing digital tape archives.
In 2007, Japanese broadcaster NHK announced a similar system, based on Blu-ray discs, of stacked optical storage media specifically designed to rotate at high speeds, up to 15,000 RPM.
SVOD was anticipated to be a likely candidate, along with Holographic Versatile Discs (HVDs), for a next-generation optical disc standard. However, as of 2021, little has been done with the format.
References
External links
Hitachi Maxell develops wafer-thin storage disc details and interview from IDG News Service (dead link, archived) (4 October 2006)
Maxell details in Japanese language (dead link, archived) (19 April 2006)
Vaporware
Rotating disc computer storage media
Audio storage
Video storage
120 mm discs
DVD
Optical discs | Stacked Volumetric Optical Disc | Technology | 512 |
68,444,148 | https://en.wikipedia.org/wiki/Walther%20graph | In the mathematical field of graph theory, the Walther graph, also called the Tutte fragment, is a planar bipartite graph with 25 vertices and 31 edges named after Hansjoachim Walther. It has chromatic index 3, girth 3 and diameter 8.
If the single vertex of degree 1 whose neighbour has degree 3 is removed, the resulting graph has no Hamiltonian path. This property was used by Tutte when combining three Walther graphs to produce the Tutte graph, the first known counterexample to Tait's conjecture that every 3-regular polyhedron has a Hamiltonian cycle.
Algebraic properties
The Walther graph is an identity graph; its automorphism group is the trivial group.
The characteristic polynomial of the Walther graph is:
References
Individual graphs
Bipartite graphs
Planar graphs
Hamiltonian paths and cycles | Walther graph | Mathematics | 174 |
70,634,752 | https://en.wikipedia.org/wiki/Atomitat | Atomitat (1962) was an underground bunker-home in Plainview, Texas, designed by architect Jay Swayze. The name of the home came from the combination of the words "atomic" and "habitat". It was the first home in the U.S. to meet civil defense specifications for a nuclear shelter.
History
Architect Jay Swayze stated that the idea for the Atomitat was born when he attended a civil defense discussion on fallout shelters. The home was completed in 1962, during the Cold War, when Americans feared nuclear war. Swayze said that the Atomitat was designed to be an atomic habitat which met civil defense specifications. The cost of the furnished Atomitat with two vehicles was estimated to be $135,000. The Swayzes also stated that because the Atomitat home was secure against damaging weather, their home insurance rate was about 87.5% less than that of an above-ground home.
In 1967 the Atomitat was featured in a U.S. Information Agency propaganda film, part of a series showing scenes of American life that would be shown in Arab countries.
Design
Architect Jay Swayze compared his design to a "ship in a bottle". The home had a reinforced steel and concrete shell and lay underground beneath a layer of soil. The bunker had four bedrooms, three bathrooms, and windows throughout which were meant to mimic outdoor scenes and outdoor lighting. The home was outfitted with an emergency generator and sewage system. The above-ground structure was a garage with a door between two large garage doors. That door led to the shelter through two large steel doors lined with lead to protect against radiation.
The house was designed to make the occupant feel as if they were above ground. Lights could be made to mimic the different parts of the day, and there was an air space between the living space and the outer wall through which air flowed. This allowed an occupant to open a window and feel a breeze.
The house was occupied by the same family for 35 years. The couple who owned it decided to sell it in 2002 because it was too large now that their family had grown up.
References
External links
1962 introductions
Air raid shelters in the United States
Cold War sites
Survivalism
Radiation protection
Nuclear fallout | Atomitat | Chemistry,Technology | 473 |
52,325,183 | https://en.wikipedia.org/wiki/Polyporus%20gayanus | Polyporus gayanus is a species of fungus in the genus Polyporus. It was first documented in 1846 by French mycologist Joseph-Henri Léveillé.
References
External links
gayanus
Fungi described in 1846
Fungus species | Polyporus gayanus | Biology | 48 |
7,058,047 | https://en.wikipedia.org/wiki/History%20of%20Lorentz%20transformations | The history of Lorentz transformations comprises the development of linear transformations forming the Lorentz group or Poincaré group preserving the Lorentz interval and the Minkowski inner product .
In mathematics, transformations equivalent to what was later known as Lorentz transformations in various dimensions were discussed in the 19th century in relation to the theory of quadratic forms, hyperbolic geometry, Möbius geometry, and sphere geometry, which is connected to the fact that the group of motions in hyperbolic space, the Möbius group or projective special linear group, and the Laguerre group are isomorphic to the Lorentz group.
In physics, Lorentz transformations became known at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell's equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant between different inertial frames. They relate the spacetime coordinates of two arbitrary inertial frames of reference with constant relative speed v. In one frame, the position of an event is given by x,y,z and time t, while in the other frame the same event has coordinates x′,y′,z′ and t′.
Mathematical prehistory
Using the coefficients of a symmetric matrix A, the associated bilinear form, and a linear transformation in terms of a transformation matrix g, the Lorentz transformation is given if the following condition is satisfied: gᵀAg = A, with A = diag(−1, 1, ..., 1).
It forms an indefinite orthogonal group called the Lorentz group O(1,n), while the case det g=+1 forms the restricted Lorentz group SO(1,n). The quadratic form becomes the Lorentz interval in terms of an indefinite quadratic form of Minkowski space (being a special case of pseudo-Euclidean space), and the associated bilinear form becomes the Minkowski inner product. Long before the advent of special relativity it was used in topics such as the Cayley–Klein metric, hyperboloid model and other models of hyperbolic geometry, computations of elliptic functions and integrals, transformation of indefinite quadratic forms, squeeze mappings of the hyperbola, group theory, Möbius transformations, spherical wave transformation, transformation of the Sine-Gordon equation, Biquaternion algebra, split-complex numbers, Clifford algebra, and others.
Electrodynamics and special relativity
Overview
In special relativity, Lorentz transformations exhibit the symmetry of Minkowski spacetime by using a constant c as the speed of light, and a parameter v as the relative velocity between two inertial reference frames. Using the above conditions, the Lorentz transformation in 3+1 dimensions assumes the form:
x′ = γ(x − vt), y′ = y, z′ = z, t′ = γ(t − vx/c²), with γ = 1/√(1 − v²/c²).
In physics, analogous transformations have been introduced by Voigt (1887) related to an incompressible medium, and by Heaviside (1888), Thomson (1889), Searle (1896) and Lorentz (1892, 1895) who analyzed Maxwell's equations. They were completed by Larmor (1897, 1900) and Lorentz (1899, 1904), and brought into their modern form by Poincaré (1905) who gave the transformation the name of Lorentz. Eventually, Einstein (1905) showed in his development of special relativity that the transformations follow from the principle of relativity and constant light speed alone by modifying the traditional concepts of space and time, without requiring a mechanical aether in contradistinction to Lorentz and Poincaré. Minkowski (1907–1908) used them to argue that space and time are inseparably connected as spacetime.
Regarding special representations of the Lorentz transformations: Minkowski (1907–1908) and Sommerfeld (1909) used imaginary trigonometric functions, Frank (1909) and Varićak (1910) used hyperbolic functions, Bateman and Cunningham (1909–1910) used spherical wave transformations, Herglotz (1909–10) used Möbius transformations, Plummer (1910) and Gruner (1921) used trigonometric Lorentz boosts, Ignatowski (1910) derived the transformations without the light speed postulate, Noether (1910) and Klein (1910) as well as Conway (1911) and Silberstein (1911) used biquaternions, Ignatowski (1910/11), Herglotz (1911), and others used vector transformations valid in arbitrary directions, and Borel (1913–14) used Cayley–Hermite parameters.
Voigt (1887)
Woldemar Voigt (1887) developed a transformation in connection with the Doppler effect and an incompressible medium, being in modern notation:
x′ = x − vt, y′ = y/γ, z′ = z/γ, t′ = t − vx/c²
If the right-hand sides of his equations are multiplied by γ they are the modern Lorentz transformation. In Voigt's theory the speed of light is invariant, but his transformations mix up a relativistic boost together with a rescaling of space-time. Optical phenomena in free space are scale, conformal, and Lorentz invariant, so the combination is invariant too. For instance, Lorentz transformations can be extended by using a factor l:
x′ = γl(x − vt), y′ = ly, z′ = lz, t′ = γl(t − vx/c²)
l=1/γ gives the Voigt transformation, l=1 the Lorentz transformation. But scale transformations are not a symmetry of all the laws of nature, only of electromagnetism, so these transformations cannot be used to formulate a principle of relativity in general. It was demonstrated by Poincaré and Einstein that one has to set l=1 in order to make the above transformation symmetric and to form a group as required by the relativity principle, therefore the Lorentz transformation is the only viable choice.
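The role of the factor l can be checked numerically: l = 1 preserves the interval c²t² − x² exactly, while l = 1/γ (the Voigt choice) rescales it by 1/γ². A minimal sketch in units where c = 1:

```python
import math

def boost(x, t, v, c=1.0, l=1.0):
    """Lorentz boost along x, extended by the scale factor l
    (l = 1: Lorentz; l = 1/gamma: Voigt)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * l * (x - v * t), gamma * l * (t - v * x / c ** 2)

def interval(x, t, c=1.0):
    """The Lorentz interval c^2 t^2 - x^2."""
    return (c * t) ** 2 - x ** 2

x, t, v = 2.0, 3.0, 0.6
gamma = 1.25  # 1 / sqrt(1 - 0.6**2)

xp, tp = boost(x, t, v)                 # l = 1: interval preserved
print(math.isclose(interval(xp, tp), interval(x, t)))             # True

xv, tv = boost(x, t, v, l=1.0 / gamma)  # l = 1/gamma: scaled by 1/gamma**2
print(math.isclose(interval(xv, tv), interval(x, t) / gamma**2))  # True
```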
Voigt sent his 1887 paper to Lorentz in 1908, and Lorentz acknowledged it in 1909.
Also Hermann Minkowski said in 1908 that the transformations which play the main role in the principle of relativity were first examined by Voigt in 1887. Voigt responded in the same paper by saying that his theory was based on an elastic theory of light, not an electromagnetic one. However, he concluded that some results were actually the same.
Heaviside (1888), Thomson (1889), Searle (1896)
In 1888, Oliver Heaviside investigated the properties of charges in motion according to Maxwell's electrodynamics. He calculated, among other things, anisotropies in the electric field of moving bodies represented by this formula:
E = (qr/r³)(1 − v²/c²)(1 − v²sin²θ/c²)^(−3/2), where θ is the angle between r and the direction of motion.
Consequently, Joseph John Thomson (1889) found a way to substantially simplify calculations concerning moving charges by using the following mathematical transformation (like other authors such as Lorentz or Larmor, also Thomson implicitly used the Galilean transformation z-vt in his equation):
Thereby, inhomogeneous electromagnetic wave equations are transformed into a Poisson equation. Eventually, George Frederick Charles Searle noted in 1896 that Heaviside's expression leads to a deformation of electric fields which he called "Heaviside-Ellipsoid" of axial ratio √(1 − v²/c²) : 1 : 1.
Lorentz (1892, 1895)
In order to explain the aberration of light and the result of the Fizeau experiment in accordance with Maxwell's equations, Lorentz in 1892 developed a model ("Lorentz ether theory") in which the aether is completely motionless, and the speed of light in the aether is constant in all directions. In order to calculate the optics of moving bodies, Lorentz introduced the following quantities to transform from the aether system into a moving system (it's unknown whether he was influenced by Voigt, Heaviside, and Thomson):
x′ = γx*, y′ = y, z′ = z, t′ = t − γ²vx*/c²
where x* is the Galilean transformation x-vt. Except the additional γ in the time transformation, this is the complete Lorentz transformation. While t is the "true" time for observers resting in the aether, t′ is an auxiliary variable only for calculating processes for moving systems. It is also important that Lorentz and later also Larmor formulated this transformation in two steps. At first an implicit Galilean transformation, and later the expansion into the "fictitious" electromagnetic system with the aid of the Lorentz transformation. In order to explain the negative result of the Michelson–Morley experiment, he (1892b) introduced the additional hypothesis that also intermolecular forces are affected in a similar way and introduced length contraction in his theory (without proof as he admitted). The same hypothesis had been made previously by George FitzGerald in 1889 based on Heaviside's work. While length contraction was a real physical effect for Lorentz, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation.
In 1895, Lorentz further elaborated on his theory and introduced the "theorem of corresponding states". This theorem states that a moving observer (relative to the ether) in his "fictitious" field makes the same observations as a resting observer in his "real" field for velocities to first order in v/c. Lorentz showed that the dimensions of electrostatic systems in the ether and a moving frame are connected by this transformation:
x′ = γx*, y′ = y, z′ = z, t′ = t
For solving optical problems Lorentz used the following transformation, in which the modified time variable was called "local time" (Ortszeit) by him:
x′ = x − vt, y′ = y, z′ = z, t′ = t − vx/c²
With this concept Lorentz could explain the Doppler effect, the aberration of light, and the Fizeau experiment.
Larmor (1897, 1900)
In 1897, Larmor extended the work of Lorentz and derived the following transformation
Larmor noted that if it is assumed that the constitution of molecules is electrical then the FitzGerald–Lorentz contraction is a consequence of this transformation, explaining the Michelson–Morley experiment. It's notable that Larmor was the first who recognized that some sort of time dilation is a consequence of this transformation as well, because "individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio 1/γ". Larmor wrote his electrodynamical equations and transformations neglecting terms of higher order than (v/c)² – when his 1897 paper was reprinted in 1929, Larmor added the following comment in which he described how they can be made valid to all orders of v/c:
In line with that comment, in his book Aether and Matter published in 1900, Larmor used a modified local time t″ = t′ − εvx′/c² instead of the 1897 expression t′ = t − vx/c² by replacing v/c² with εv/c², so that t″ is now identical to the one given by Lorentz in 1892, which he combined with a Galilean transformation for the x′, y′, z′, t′ coordinates:
x′ = x − vt, y′ = y, z′ = z, t′ = t
Larmor knew that the Michelson–Morley experiment was accurate enough to detect an effect of motion depending on the factor (v/c)², and so he sought the transformations which were "accurate to second order" (as he put it). Thus he wrote the final transformations (where x′ = x − vt and t″ as given above) as:
x1 = γx′, y1 = y′, z1 = z′, t1 = t″/γ
by which he arrived at the complete Lorentz transformation. Larmor showed that Maxwell's equations were invariant under this two-step transformation, "to second order in v/c" – it was later shown by Lorentz (1904) and Poincaré (1905) that they are indeed invariant under this transformation to all orders in v/c.
Larmor gave credit to Lorentz in two papers published in 1904, in which he used the term "Lorentz transformation" for Lorentz's first order transformations of coordinates and field configurations:
Lorentz (1899, 1904)
Lorentz, too, extended his theorem of corresponding states in 1899. First he wrote a transformation equivalent to the one from 1892 (again, x* must be replaced by x-vt):
Then he introduced a factor ε, which he said he had no means of determining, and modified his transformation as follows (where the above value of t′ has to be inserted):
This is equivalent to the complete Lorentz transformation when solved for x″ and t″ and with ε=1. Like Larmor, Lorentz in 1899 also noticed some sort of time dilation effect in relation to the frequency of oscillating electrons, namely "that in S the time of vibrations be kε times as great as in S0", where S0 is the aether frame.
In 1904 he rewrote the equations in the following form by setting l=1/ε (again, x* must be replaced by x-vt):
Under the assumption that l=1 when v=0, he demonstrated that l=1 must be the case at all velocities, so that length contraction can only arise in the line of motion. By setting the factor l to unity, Lorentz's transformations now assumed the same form as Larmor's and were complete. Unlike Larmor, who restricted himself to showing the covariance of Maxwell's equations to second order, Lorentz tried to widen their covariance to all orders in v/c. He also derived the correct formulas for the velocity dependence of electromagnetic mass, and concluded that the transformation formulas must apply to all forces of nature, not only electrical ones. However, he did not achieve full covariance of the transformation equations for charge density and velocity. When the 1904 paper was reprinted in 1913, Lorentz therefore added the following remark:
Lorentz's 1904 transformation was cited and used by Alfred Bucherer in July 1904:
or by Wilhelm Wien in July 1904:
or by Emil Cohn in November 1904 (setting the speed of light to unity):
or by Richard Gans in February 1905:
Poincaré (1900, 1905)
Local time
Neither Lorentz nor Larmor gave a clear physical interpretation of the origin of local time. However, Henri Poincaré in 1900 commented on the origin of Lorentz's "wonderful invention" of local time. He remarked that it arose when clocks in a moving reference frame are synchronised by exchanging signals which are assumed to travel with the same speed in both directions, which led to what is nowadays called relativity of simultaneity, although Poincaré's calculation does not involve length contraction or time dilation. In order to synchronise the clocks here on Earth (the x*, t* frame) a light signal from one clock (at the origin) is sent to another (at x*), and is sent back. It is supposed that the Earth is moving with speed v in the x-direction (= x*-direction) in some rest system (x, t) (i.e. the luminiferous aether system for Lorentz and Larmor). The time of flight outwards is
and the time of flight back is
The elapsed time on the clock when the signal is returned is δta+δtb and the time t*=(δta+δtb)/2 is ascribed to the moment when the light signal reached the distant clock. In the rest frame the time t=δta is ascribed to that same instant. Some algebra gives the relation between the different time coordinates ascribed to the moment of reflection. Thus
identical to Lorentz (1892). By dropping the factor γ2 under the assumption that (v/c)2 is negligible, Poincaré gave the result t*=t-vx*/c2, which is the form used by Lorentz in 1895.
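The calculation sketched above can be written out as follows (a reconstruction in modern notation, with γ2 = 1/(1 − v2/c2)):

```latex
\begin{aligned}
\delta t_a &= \frac{x^*}{c-v}, \qquad \delta t_b = \frac{x^*}{c+v},\\[4pt]
t - t^* &= \delta t_a - \frac{\delta t_a + \delta t_b}{2}
         = \frac{x^*}{2}\left(\frac{1}{c-v}-\frac{1}{c+v}\right)
         = \frac{\gamma^2 v x^*}{c^2},
\end{aligned}
```

so that t* = t − γ2vx*/c2.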
Similar physical interpretations of local time were later given by Emil Cohn (1904) and Max Abraham (1905).
Lorentz transformation
On June 5, 1905 (published June 9) Poincaré formulated transformation equations which are algebraically equivalent to those of Larmor and Lorentz and gave them the modern form:
Apparently Poincaré was unaware of Larmor's contributions, because he only mentioned Lorentz and therefore used for the first time the name "Lorentz transformation". Poincaré set the speed of light to unity, pointed out the group characteristics of the transformation by setting l=1, and modified/corrected Lorentz's derivation of the equations of electrodynamics in some details in order to fully satisfy the principle of relativity, i.e. making them fully Lorentz covariant.
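Spelled out in modern textbook notation (the symbols here are modern; Poincaré's own paper sets c = 1 and uses opposite sign conventions), these equations read:

```latex
x' = \gamma\,(x - vt),\qquad y' = y,\qquad z' = z,\qquad
t' = \gamma\!\left(t - \frac{v x}{c^2}\right),\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
```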
In July 1905 (published in January 1906) Poincaré showed in detail how the transformations and electrodynamic equations are a consequence of the principle of least action; he demonstrated in more detail the group characteristics of the transformation, which he called Lorentz group, and he showed that the combination x2+y2+z2-t2 is invariant. He noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing as a fourth imaginary coordinate, and he used an early form of four-vectors. He also formulated the velocity addition formula, which he had already derived in unpublished letters to Lorentz from May 1905:
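In modern symbols (Poincaré's own notation differs), the velocity addition formula for collinear velocities u and v is:

```latex
w = \frac{u + v}{1 + \dfrac{u v}{c^2}}.
```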
Einstein (1905) – Special relativity
On June 30, 1905 (published September 1905) Einstein published what is now called special relativity and gave a new derivation of the transformation, which was based only on the principle of relativity and the principle of the constancy of the speed of light. While Lorentz considered "local time" to be an auxiliary mathematical device for explaining the Michelson–Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. For quantities of first order in v/c this was also done by Poincaré in 1900, while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré, who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations applied to the kinematics of moving frames.
The notation for this transformation is equivalent to Poincaré's of 1905, except that Einstein didn't set the speed of light to unity:
Einstein also defined the velocity addition formula:
and the light aberration formula:
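In modern symbols (Einstein's notation differs slightly), the two formulas — for collinear velocities u, v and for a light ray making angle φ with the direction of motion — are:

```latex
w = \frac{u+v}{1+\dfrac{uv}{c^2}}, \qquad
\cos\varphi' = \frac{\cos\varphi - \dfrac{v}{c}}{1 - \dfrac{v}{c}\cos\varphi}.
```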
Minkowski (1907–1908) – Spacetime
The work on the principle of relativity by Lorentz, Einstein, Planck, together with Poincaré's four-dimensional approach, were further elaborated and combined with the hyperboloid model by Hermann Minkowski in 1907 and 1908. Minkowski particularly reformulated electrodynamics in a four-dimensional way (Minkowski spacetime). For instance, he wrote x, y, z, it in the form x1, x2, x3, x4. By defining ψ as the angle of rotation around the z-axis, the Lorentz transformation assumes the form (with c=1):
Even though Minkowski used the imaginary number iψ, on one occasion he directly used the hyperbolic tangent in the equation for velocity
with .
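Evaluating Minkowski's rotation at the imaginary angle converts the circular functions into hyperbolic ones, giving the boost in what is now the standard rapidity form (a reconstruction, with c = 1):

```latex
x' = x\cosh\psi - t\sinh\psi,\qquad
t' = -x\sinh\psi + t\cosh\psi,\qquad
y' = y,\quad z' = z,\qquad
\tanh\psi = q.
```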
Minkowski's expression can also be written as ψ=atanh(q) and was later called rapidity. He also wrote the Lorentz transformation in matrix form:
As a graphical representation of the Lorentz transformation he introduced the Minkowski diagram, which became a standard tool in textbooks and research articles on relativity:
Sommerfeld (1909) – Spherical trigonometry
Using an imaginary rapidity, as Minkowski had, Arnold Sommerfeld (1909) formulated the Lorentz boost and the relativistic velocity addition in terms of trigonometric functions and the spherical law of cosines:
Frank (1909) – Hyperbolic functions
Hyperbolic functions were used by Philipp Frank (1909), who derived the Lorentz transformation using ψ as rapidity:
Bateman and Cunningham (1909–1910) – Spherical wave transformation
In line with Sophus Lie's (1871) research on the relation between sphere transformations with an imaginary radius coordinate and 4D conformal transformations, it was pointed out by Bateman and Cunningham (1909–1910) that by setting u=ict as the imaginary fourth coordinate one can produce spacetime conformal transformations. Not only the quadratic form , but also Maxwell's equations are covariant with respect to these transformations, irrespective of the choice of λ. These variants of conformal or Lie sphere transformations were called spherical wave transformations by Bateman. However, this covariance is restricted to certain areas such as electrodynamics, whereas the totality of natural laws in inertial frames is covariant under the Lorentz group. In particular, by setting λ=1 the Lorentz group can be seen as a 10-parameter subgroup of the 15-parameter spacetime conformal group .
Bateman (1910–12) also alluded to the identity between the Laguerre inversion and the Lorentz transformations. In general, the isomorphism between the Laguerre group and the Lorentz group was pointed out by Élie Cartan (1912, 1915–55), Henri Poincaré (1912–21) and others.
Herglotz (1909/10) – Möbius transformation
Following Felix Klein (1889–1897) and Fricke & Klein (1897) concerning the Cayley absolute, hyperbolic motion and its transformation, Gustav Herglotz (1909–10) classified the one-parameter Lorentz transformations as loxodromic, hyperbolic, parabolic and elliptic. The general case (on the left) and the hyperbolic case equivalent to Lorentz transformations or squeeze mappings are as follows:
Varićak (1910) – Hyperbolic functions
Following Sommerfeld (1909), hyperbolic functions were used by Vladimir Varićak in several papers starting from 1910, who represented the equations of special relativity on the basis of hyperbolic geometry in terms of Weierstrass coordinates. For instance, by setting l=ct and v/c=tanh(u) with u as rapidity he wrote the Lorentz transformation:
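With l = ct and v/c = tanh u, the transformation takes the hyperbolic form (a reconstruction in Varićak's variables):

```latex
l' = -x\sinh u + l\cosh u,\qquad
x' = x\cosh u - l\sinh u,\qquad
y' = y,\qquad z' = z,
```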
and showed the relation of rapidity to the Gudermannian function and the angle of parallelism:
He also related the velocity addition to the hyperbolic law of cosines:
Subsequently, other authors such as E. T. Whittaker (1910) or Alfred Robb (1911, who coined the name rapidity) used similar expressions, which are still used in modern textbooks.
Plummer (1910) – Trigonometric Lorentz boosts
Henry Crozier Keating Plummer (1910) defined the Lorentz boost in terms of trigonometric functions
Ignatowski (1910)
While earlier derivations and formulations of the Lorentz transformation relied from the outset on optics, electrodynamics, or the invariance of the speed of light, Vladimir Ignatowski (1910) showed that it is possible to use the principle of relativity (and related group theoretical principles) alone, in order to derive the following transformation between two inertial frames:
The variable n can be seen as a space-time constant whose value has to be determined by experiment or taken from a known physical law such as electrodynamics. For that purpose, Ignatowski used the above-mentioned Heaviside ellipsoid representing a contraction of electrostatic fields by x/γ in the direction of motion. It can be seen that this is only consistent with Ignatowski's transformation when n=1/c2, resulting in p=γ and the Lorentz transformation. With n=0, no length changes arise and the Galilean transformation follows. Ignatowski's method was further developed and improved by Philipp Frank and Hermann Rothe (1911, 1912), with various authors developing similar methods in subsequent years.
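A common modern rendering of Ignatowski's result (the symbols p and n follow the text above; the precise notation is a reconstruction) is:

```latex
x' = p\,(x - vt),\qquad t' = p\,(t - n v x),\qquad
p = \frac{1}{\sqrt{1 - n v^2}},
```

which reduces to the Lorentz transformation for n = 1/c2 (p = γ) and to the Galilean transformation for n = 0 (p = 1), as described in the text.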
Noether (1910), Klein (1910) – Quaternions
Felix Klein (1908) described Cayley's (1854) 4D quaternion multiplications as "Drehstreckungen" (orthogonal substitutions in terms of rotations leaving invariant a quadratic form up to a factor), and pointed out that the modern principle of relativity as provided by Minkowski is essentially only the consistent application of such Drehstreckungen, even though he did not provide details.
In an appendix to Klein's and Sommerfeld's "Theory of the top" (1910), Fritz Noether showed how to formulate hyperbolic rotations using biquaternions with , which he also related to the speed of light by setting ω2=-c2. He concluded that this is the principal ingredient for a rational representation of the group of Lorentz transformations:
Besides citing quaternion-related standard works by Arthur Cayley (1854), Noether referred to the entries in Klein's encyclopedia by Eduard Study (1899) and the French version by Élie Cartan (1908). Cartan's version contains a description of Study's dual numbers, Clifford's biquaternions (including the choice for hyperbolic geometry), and Clifford algebra, with references to Stephanos (1883), Buchheim (1884–85), Vahlen (1901–02) and others.
Citing Noether, Klein himself published in August 1910 the following quaternion substitutions forming the group of Lorentz transformations:
or in March 1911
Conway (1911), Silberstein (1911) – Quaternions
Arthur W. Conway in February 1911 explicitly formulated quaternionic Lorentz transformations of various electromagnetic quantities in terms of velocity λ:
Ludwik Silberstein, in November 1911 and again in 1914, formulated the Lorentz transformation in terms of velocity v:
Silberstein cites Cayley (1854, 1855) and Study's encyclopedia entry (in the extended French version of Cartan in 1908), as well as the appendix of Klein's and Sommerfeld's book.
Ignatowski (1910/11), Herglotz (1911), and others – Vector transformation
Vladimir Ignatowski (1910, published 1911) showed how to reformulate the Lorentz transformation in order to allow for arbitrary velocities and coordinates:
Gustav Herglotz (1911) also showed how to formulate the transformation in order to allow for arbitrary velocities and coordinates v=(vx, vy, vz) and r=(x, y, z):
This was simplified using vector notation by Ludwik Silberstein (1911 on the left, 1914 on the right):
Equivalent formulas were also given by Wolfgang Pauli (1921), with Erwin Madelung (1922) providing the matrix form
These formulas were called "general Lorentz transformation without rotation" by Christian Møller (1952), who in addition gave an even more general Lorentz transformation in which the Cartesian axes have different orientations, using a rotation operator . In this case, v′=(v′x, v′y, v′z) is not equal to -v=(-vx, -vy, -vz), but the relation holds instead, with the result
Borel (1913–14) – Cayley–Hermite parameter
Émile Borel (1913) started by demonstrating Euclidean motions using Euler–Rodrigues parameters in three dimensions, and Cayley's (1846) parameters in four dimensions. Then he demonstrated the connection to indefinite quadratic forms expressing hyperbolic motions and Lorentz transformations. In three dimensions:
In four dimensions:
Gruner (1921) – Trigonometric Lorentz boosts
In order to simplify the graphical representation of Minkowski space, Paul Gruner (1921) (with the aid of Josef Sauter) developed what are now called Loedel diagrams, using the following relations:
In another paper Gruner used the alternative relations:
See also
Derivations of the Lorentz transformations
History of special relativity
References
Historical mathematical sources
Historical relativity sources
For Minkowski's and Voigt's statements see p. 762.
See also: English translation.
English translation by David Delphenich: On the mechanics of deformable bodies from the standpoint of relativity theory.
(Reprint of Larmor (1897) with new annotations by Larmor.)
See also the English translation.
Written by Poincaré in 1912, printed in Acta Mathematica in 1914 though belatedly published in 1921.
Secondary sources
See also "Michelson, FitzGerald and Lorentz: the origins of relativity revisited", Online.
(Only pages 1–21 were published in 1915, the entire article including pp. 39–43 concerning the groups of Laguerre and Lorentz was posthumously published in 1955 in Cartan's collected papers, and was reprinted in the Encyclopédie in 1991.)
First edition 1911, second expanded edition 1913, third expanded edition 1919.
In English:
External links
Mathpages: 1.4 The Relativity of Light
Equations
History of physics
Hendrik Lorentz
Historical treatment of quaternions | History of Lorentz transformations | Mathematics | 5,939 |
71,632,647 | https://en.wikipedia.org/wiki/Solorinic%20acid | Solorinic acid is an anthraquinone pigment found in the leafy lichen Solorina crocea. It is responsible for the strong orange colour of the medulla and the underside of the thallus in that species. In its purified crystalline form, it exists as orange-red crystals with a melting point of .
The structure of solorinic acid, 2-n-hexanoyl-1,3,8-trihydroxy-6-methoxy-anthraquinone, was proposed by Koller and Russ in 1937, and verified by chemical synthesis in 1966.
Norsolorinic acid, (C20H18O7, 2-hexanoyl-1,3,6,8-tetrahydroxyanthraquinone), is a closely related compound also found in Solorina crocea.
Solorinic acid was used as the internal standard in the establishment of a standardized method for the identification of lichen products using high-performance liquid chromatography. This is because it is quite a hydrophobic compound, and consequently will elute more slowly than most lichen products, making possible the identification of lichen extracts containing chlorinated xanthones or long chain depsides.
Although usually associated with Solorina crocea, solorinic acid was reported as a lichen product from the crustose, rock-dwelling lichen Placolecis kunmingensis, described as a species new to science in 2019.
References
Lichen products
Polyketides
Anthraquinones | Solorinic acid | Chemistry | 322 |
6,710,186 | https://en.wikipedia.org/wiki/Capnomor | Capnomor (from Greek smoke + part) is a colorless oil with an aromatic odor which is extracted by distillation from beechwood tar. Its specific gravity is 0.9775 at 20 °C and its boiling point is 185 °C. It was discovered in the 1830s by the German chemist Baron Karl von Reichenbach.
References
Hydrocarbons | Capnomor | Chemistry | 72 |
2,145,009 | https://en.wikipedia.org/wiki/Heliocentric%20orbit | A heliocentric orbit (also called circumsolar orbit) is an orbit around the barycenter of the Solar System, which is usually located within or very near the surface of the Sun. All planets, comets, and asteroids in the Solar System, and the Sun itself are in such orbits, as are many artificial probes and pieces of debris. The moons of planets in the Solar System, by contrast, are not in heliocentric orbits, as they orbit their respective planet (although the Moon has a convex orbit around the Sun).
The barycenter of the Solar System, while always very near the Sun, moves through space as time passes, depending on where other large bodies in the Solar System, such as Jupiter and other large gas planets, are located at that time. A similar phenomenon allows the detection of exoplanets by way of the radial-velocity method.
The helio- prefix is derived from the Greek word "ἥλιος", meaning "Sun", and also Helios, the personification of the Sun in Greek mythology.
The first spacecraft to be put in a heliocentric orbit was Luna 1 in 1959. An incorrectly timed upper-stage burn caused it to miss its planned impact on the Moon.
Trans-Mars injection
A trans-Mars injection (TMI) is a heliocentric orbit in which a propulsive maneuver is used to set a spacecraft on a trajectory, also known as a Mars transfer orbit, which will carry it as far as the orbit of Mars.
Every two years, low-energy transfer windows open up, which allow movement between the two planets with the lowest possible energy requirements. Transfer injections can place spacecraft into either a Hohmann transfer orbit or bi-elliptic transfer orbit. Trans-Mars injections can be either a single maneuver burn, such as that used by the NASA MAVEN orbiter in 2013, or a series of perigee kicks, such as that used by the ISRO Mars Orbiter Mission in 2013.
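As an illustration of the energetics involved, the following sketch computes the two Hohmann-transfer burns and the roughly two-year launch-window spacing from mean orbital radii, assuming circular, coplanar orbits (a simplification; real mission design uses the planets' actual ephemerides):

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11         # mean Earth orbital radius, m (1 AU)
R_MARS = 2.279e11          # mean Mars orbital radius, m

def hohmann_dv(mu, r1, r2):
    """Delta-v at departure and arrival for a Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2."""
    a_t = (r1 + r2) / 2                           # transfer ellipse semi-major axis
    v1 = math.sqrt(mu / r1)                       # circular speed at r1
    v2 = math.sqrt(mu / r2)                       # circular speed at r2
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a_t))   # transfer speed at perihelion
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a_t))    # transfer speed at aphelion
    return v_peri - v1, v2 - v_apo

def synodic_period(mu, r1, r2):
    """Time between successive low-energy transfer windows."""
    t1 = 2 * math.pi * math.sqrt(r1**3 / mu)
    t2 = 2 * math.pi * math.sqrt(r2**3 / mu)
    return 1 / abs(1 / t1 - 1 / t2)

dv_out, dv_in = hohmann_dv(MU_SUN, R_EARTH, R_MARS)
years = synodic_period(MU_SUN, R_EARTH, R_MARS) / (86400 * 365.25)
print(f"departure burn ~{dv_out/1000:.2f} km/s, arrival burn ~{dv_in/1000:.2f} km/s")
print(f"window repeats every ~{years:.2f} years")
```

With these radii the departure burn comes out near 2.9 km/s, the arrival burn near 2.6 km/s, and the window spacing near 2.1 years, consistent with the two-year cadence mentioned above.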
See also
Astrodynamics
Earth's orbit
Geocentric orbit
Heliocentrism
List of artificial objects in heliocentric orbit
List of orbits
Low-energy transfer
References
Orbits
Astrodynamics
Spacecraft propulsion
Orbital maneuvers
Exploration of Mars | Heliocentric orbit | Engineering | 457 |
78,738,009 | https://en.wikipedia.org/wiki/China%20National%20Clearing%20Center | The China National Clearing Center (CNCC, , ) is a non-profit public institution administered by the People's Bank of China and created in May 1990. It runs several of China's key payment systems.
Specifically, the CNCC operates the China National Advanced Payment System (CNAPS, ), a payment system with two main applications: the High-Value Payment System (HVPS, ), a real-time gross settlement (RTGS) system comparable to Fedwire in the United States or T2 in the euro area; and the Bulk Electronic Payment System (BEPS, ), a retail payment system. The CNCC also operates China's Check Imaging System (CIS), the Internet Banking Payment System (IBPS, ), and the China Foreign Exchange Payment System (CFXPS, ), another RTGS system that specializes in domestic transactions in foreign currencies, previously known in English as China Domestic Foreign Currency Payment System (CDFCPS or FCPS).
The CNCC only operates infrastructures for domestic payments, whether denominated in renminbi (RMB) or in foreign currencies. It thus complements other payments infrastructures that handle foreign exchange and offshore RMB payments, including China UnionPay, CFETS and CIPS; and those that support China's securities and derivatives markets, including CSDC, CCDC and the Shanghai Clearing House.
High-Value Payment System
As China's domestic RTGS system launched in June 2005, the HVPS has been described as "the backbone of the national payments system in China". In 2009, the HVPS processed 247 million transactions amounting to RMB760 trillion; In 2023, these numbers had grown to 382 million transactions and turnover of RMB8,481 trillion.
By end-2010, the HVPS had 1,729 direct participants and 100,510 indirect participants. Bank of China (Hong Kong) and were direct participants of HVPS and are its clearing agents in Hong Kong and Macau respectively. By end-2016, the number of direct participants had shrunk to 305, but that of indirect participants had grown to 141,023.
Bulk Electronic Payment System
The BEPS, launched in June 2006, is a retail payment system which is embedded into the HVPS. It is based on real-time netting and settlement at regular times during the day. In 2009, it processed 226 million transactions amounting to RMB11 trillion. By 2023, these numbers had grown to 4.6 billion transactions and RMB186 trillion.
By end-2010, BEPS had 1,730 direct participants and 100,510 indirect participants, almost exactly overlapping with HVPS with which it shares the CNAPS platform.
Internet Banking Payment System
IBPS started operations in August 2010 and mainly handles instant payment transactions via the internet. It is a deferred net settlement (DNS) system. By end-July 2020, it has 218 direct participants and 175 proxy-access participants. In 2023, it processed 17 billion transactions amounting to RMB301 trillion.
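The multilateral netting behind a DNS system such as IBPS, as opposed to the payment-by-payment gross settlement of an RTGS system such as HVPS, can be shown with a toy calculation (the banks and amounts are invented for illustration):

```python
from collections import defaultdict

# Hypothetical payment instructions: (payer, payee, amount).
payments = [
    ("A", "B", 70),
    ("B", "A", 50),
    ("B", "C", 40),
    ("C", "A", 30),
]

# RTGS: every payment settles individually, so the liquidity moved
# equals the gross value of all payments.
gross = sum(amount for _, _, amount in payments)

# DNS: payments accumulate during the cycle and only each bank's
# multilateral net position settles at the end.
net = defaultdict(int)
for payer, payee, amount in payments:
    net[payer] -= amount
    net[payee] += amount

print("gross value settled under RTGS:", gross)   # 190
print("net positions under DNS:", dict(net))      # A: +10, B: -20, C: +10
```

The net positions always sum to zero, and their total magnitude is far smaller than the gross flow, which is why DNS economizes on liquidity at the cost of deferring settlement.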
China Domestic Foreign Currency Payment System / China Foreign Exchange Payment System
The CDFCPS, sometimes abbreviated as FCPS, was created in 2008 by the People's Bank of China as a dedicated RTGS system to handle domestic transactions that are entirely denominated in foreign currencies. As of the early 2010s, it handled payments in Australian dollar (AUD), Canadian dollar (CAD), Swiss franc (CHF), euro (EUR), Pound sterling (GBP), Hong Kong dollar (HKD), Japanese yen (JPY), and US dollar (USD). By then, four commercial banks were designated as the system's proxy settlement banks, namely Bank of China, China Construction Bank, Industrial and Commercial Bank of China, and Shanghai Pudong Development Bank. At that time, the overwhelming majority of CDFCPS-settled transactions were in US dollars. The CDFCPS had 31 participants at the end of 2010. Its Chinese name's transcription in English was changed from CDFCPS to CFXPS in the early 2020s.
Domestic foreign-exchange (FX) transactions involving the RMB, by contrast, are not settled on CFXPS but can use the China Foreign Exchange Trade System (CFETS) as clearing house, which in turn uses the HVPS to settle the transactions' RMB legs. Thus, by the early 2010s the CFETS handled most domestic FX transactions. In 2023, CFXPS processed 5 million transactions amounting to USD2.6 trillion (RMB19 trillion).
Check Imaging System
Launched in June 2007, the CNCC's Check Imaging System enables electronic exchange of check images and multilateral net settlement of the corresponding exchange instruments at the HVPS.
See also
Clearing House Automated Transfer System
References
Payment clearing systems
Real-time gross settlement | China National Clearing Center | Technology | 1,011 |
70,114,313 | https://en.wikipedia.org/wiki/Xiaomi%20Mi%2011%20Ultra | Xiaomi Mi 11 Ultra is an Android high-end smartphone developed by Xiaomi, released in April 2021. It serves as the successor to the Xiaomi Mi 10 Ultra. Unlike its China-only predecessor, the Mi 11 Ultra is available for retail in the global market.
The Mi 11 Ultra is heavily marketed around its camera capabilities. At the time of release, the Mi 11 Ultra featured the largest main camera sensor of any conventional smartphone, at 1/1.12 inch. Paired with the main camera are two auxiliary cameras, a 13mm equivalent ultra-wide angle camera and a 120mm equivalent periscope telephoto camera capable of 5x optical zoom. The Mi 11 Ultra features a 1.1-inch secondary display at the back of the phone, next to its camera module.
The Mi 11 Ultra employs a 6.81-inch WQHD+ curved OLED display with a 120 Hz refresh rate, capable of a touch sampling rate of 480 Hz and a peak brightness of 1700 nits. The Mi 11 Ultra is powered by a Snapdragon 888 chipset, the flagship Android processor at the time of release. The Mi 11 Ultra utilises a 5000 mAh battery, capable of 67W wired, 67W wireless, and 10W reverse charging. Upon release, the Mi 11 Ultra had a starting price of £1,199 in the UK, on par with the competition.
See also
List of longest smartphone telephoto lenses
References
External links
Android (operating system) devices
Phablets
Mobile phones with multiple rear cameras
Mobile phones with 8K video recording
Xiaomi smartphones
Mobile phones with infrared transmitter
Mobile phones introduced in 2021 | Xiaomi Mi 11 Ultra | Technology | 338 |
71,177,861 | https://en.wikipedia.org/wiki/Edmund%20Harriss | Edmund Orme Harriss (born 1976 in Worcester, UK) is a British mathematician, writer and artist. Since 2010 he has been at the Fulbright College of Arts & Sciences at The University of Arkansas in Fayetteville, Arkansas where he is an Assistant Professor of Arts & Sciences (ARSC) and Mathematical Sciences (MASC). He does research in the Geometry of Tilings and Patterns, a branch of Convex and Discrete Geometry. He is the discoverer of the spiral that bears his name.
Education and career
Harriss earned a Master of Mathematics at the University of Warwick (2000) and then obtained his PhD at Imperial College London (2003) with the dissertation "On Canonical Substitution Tilings" under Jeroen Lamb.
Harriss has been a speaker at FSCONS, a Nordic Free software conference.
Harriss is active on Numberphile where he has given talks on Heesch numbers, Tribonacci numbers, the Rauzy fractal and the plastic ratio.
In May and June 2020 Harriss was a visiting fellow at The Institute for Advanced Study of Aix-Marseille University (IMéRA) where he studied the possibilities of visual and spatial models and animations to illustrate a wide variety of mathematical ideas.
Mathematical art
The Gauss–Bonnet theorem gives the relationship between the curvature of a surface and the amount of turning as you traverse the surface’s boundary. Harriss used this theorem to invent shapes called Curvahedra which were then incorporated into sculpture. Scientists at MIT are investigating ways in which curvahedra may have applications in construction.
Art and mathematics are intertwined in Harriss's work. He uses public art to demonstrate deep mathematical ideas and his academic work frequently involves the visualization of mathematics. Mathematically themed sculptures by Harriss have been installed at Oklahoma State University, at the University of Arkansas, and at Imperial College London.
Combining his interest in art and mathematical tilings he is one of 24 mathematicians and artists who make up the Mathemalchemy Team.
Harriss Spiral
Harriss noticed that the golden ratio is just one example of a more general idea: in how many ways can a rectangle be divided into squares and rectangles? The golden ratio results when a rectangle is divided into one square and one similar rectangle. But by varying the number of squares and sub-rectangles, we arrive at what Harriss calls "proportion systems". The solutions in all cases are algebraic numbers, and the golden ratio is just one of them.
"The golden ratio is this incredibly well-explored corner of a whole city," he said. "I wanted to give signposts to other locations in that city."
Harriss investigated the next simplest case, dividing a rectangle into one square and two similar rectangles. The ratio that emerged in this case is the so-called plastic ratio. The golden spiral is closely related to the first case, dissection into one square and one similar rectangle. Harriss applied the same idea to this second case and discovered a new fractal spiral related to the plastic ratio and since named after him.
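The two dissections just described pin down the ratios as real roots of simple polynomials: x² = x + 1 for the square-plus-one-rectangle case (golden ratio) and x³ = x + 1 for the square-plus-two-rectangles case (plastic ratio). A short sketch (the helper function is illustrative) finds both roots by bisection:

```python
def real_root(f, lo, hi, tol=1e-12):
    """Bisection for a root of f in [lo, hi], assuming f changes sign there."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# One square + one similar rectangle: x^2 = x + 1  ->  golden ratio
golden = real_root(lambda x: x * x - x - 1, 1, 2)

# One square + two similar rectangles: x^3 = x + 1  ->  plastic ratio
plastic = real_root(lambda x: x**3 - x - 1, 1, 2)

print(round(golden, 6))   # 1.618034
print(round(plastic, 6))  # 1.324718
```

The second value, ≈1.324718, is the plastic ratio whose associated spiral is the one named after Harriss.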
Selected publications
Books
Harriss has published several books designed to spread joy in mathematics. Sales of his colouring books run well beyond 100,000 copies.
(2015) Snowflake Seashell Star: Colouring Adventures in Numberland with Alex Bellos
(2016) Patterns of the Universe: A Coloring Adventure in Math and Beauty, with Alex Bellos
(2016) Visions of the Universe: A Coloring Journey Through Math's Great Mysteries, with Alex Bellos
(2020) Hello Numbers! What Can You Do? An Adventure Beyond Counting, with Houston Hughes, illustrated by Brian Rea
Papers
(2011) "From oranges to modems" in "The unplanned impact of mathematics", Nature, vol 475, pp. 166–169
(2011) "Algebraic numbers, free group automorphisms and substitutions on the plane" with Pierre Arnoux, Maki Furukado and Shunji Ito, Transactions of the American Mathematical Society 363 (2011), pp. 4651-4699
(2015) "Strain and the optoelectronic properties of nonplanar phosphorene monolayers" with Mehrshad Mehboudi et al, Proceedings of the National Academy of Sciences of the United States of America
(2020) "Algebraic Number Starscapes" with Katherine E. Stange and Steve Trettel
References
External links
Edmund O. Harriss website
University of Arkansas faculty
Alumni of the University of Warwick
Alumni of Imperial College London
21st-century English mathematicians
Recreational mathematicians
Mathematics popularizers
1976 births
Living people
Mathematical artists
21st-century British sculptors
21st-century English male writers
Writers from Worcester, England
English male sculptors
21st-century English male artists | Edmund Harriss | Mathematics | 980 |