Dataset fields (type and observed range):
id: int64, 39 to 79M
url: string, length 31 to 227
text: string, length 6 to 334k
source: string, length 1 to 150
categories: list, length 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, length 0 to 30
13,262,231
https://en.wikipedia.org/wiki/Revati%20%28nakshatra%29
Revati is the Hindu name for Zeta Piscium, a star on the edge of the Pisces zodiac constellation. In Hindu sidereal astronomy this star is identified as the First Point of Aries, i.e. when the Sun crosses this star, a new solar year begins. Revati is the last star in the Pisces constellation, which is the last zodiac sign; Ashwini is the first star in the Aries constellation, which is the first zodiacal sign. Astrology Revati (Devanagari: रेवती) is the twenty-seventh nakshatra in Hindu astrology (or the 28th, if Abhijit is counted), corresponding to ζ Piscium. It is ruled by Puṣan, one of the 12 Ādityas. According to the beliefs of traditional electional astrology, Revati is a sweet or delicate nakshatra, meaning that while Revati's influence is strongest, it is considered best to begin work on things of physical beauty such as music and jewellery. Revati is symbolized by fish (often a pair of fish). It is also associated with the sea. Traditional Hindu given names are determined by which pada (quarter) of a nakshatra the Moon was in at the time of birth. In the case of Revati, the given name would begin with one of the following syllables: De (दे), Do (दो), Cha (च), Chi (ची). Revati is seen as a nakshatra that nurtures and fosters wealth, expansion, and vigour. Revati is considered favourable for beginning a new trip, marriage rituals, childbirth, or shopping for new clothing. Fish are used to represent the nakshatra and symbolize learning and movement. Revati Nakshatra Born Personality Traits Revati Nakshatra natives typically have good, balanced bodies and are tall. They are believed to have a charming personality and a fair skin tone. This nakshatra's natives have pure hearts and are able to respond appropriately in various circumstances. They have high educational aspirations and a constant desire to learn more. When given a task that takes an extended amount of time to complete, they become disinterested and lose focus; at that point, they want to take up something different. These people's physical issues cause them to feel persistently anxious. Individuals born under this nakshatra's influence do not readily trust others, but once they do, they trust them completely. They value harmony and thus honour society and its customs. They complete any assignment in a systematic way. They respond to events maturely and sensitively. Revati Nakshatra is regarded as lucky and auspicious. This sign's natives are exceedingly dependable. A person born under this nakshatra is capable of maintaining composure under pressure. They are naturally quite sensible and always willing to lend a helping hand. They are very sensitive and gentle by nature. People like them because of their polite demeanour. They may show a lot of self-confidence in their talents and abilities. They occasionally do not respond appropriately to circumstances. To succeed, they must put in a lot of effort in their lives. It has frequently been observed that those born under this sign waste their skills in the wrong fields. These people are advised to keep a close eye on their spending. Revati Nakshatra Born Health Guru (Jupiter) and Budh (Mercury) are the rulers of this nakshatra and are in charge of the fingers and toes. In addition to these, they also govern illnesses of the thighs, knees, and digestive system. Natives might suffer from diseases affecting these body parts as a result of their influence. 
People born under the influence of this nakshatra frequently complain of a cough, which makes them particularly vulnerable to changes in climate. Revati Nakshatra Born Family Revati Nakshatra people are very close to their families and are able to articulate their feelings. These people often reside far from where they were born. Their close friends and family are very helpful to them, and they are able to get enough help from their father. Disputes might arise in family life, and on minor issues their viewpoints might diverge from those of other family members. Even so, they get along well with their relatives, and they lead normal married lives. A loving and equal attitude is displayed by the life partner. Revati Nakshatra Born Business Those born under the Revati Nakshatra are fascinated by historical discoveries, scientific study, and primitive culture. If they choose to work in these fields, it will be advantageous for them. They also enjoy astrology-related work. Medical courses are another excellent choice for them. They can succeed through their diligence. In addition, they can achieve success in the humanities. These people are somewhat interested in work related to orphanages, religious communities, security, motor vehicle training, commuting, etc. They are also somewhat interested in acting, singing, dancing, linguistics, magic, railroads and roads, civil engineering, architecture, writing, work as an air hostess or gem dealer, and drawing and painting. Remedy Natives of Revati Nakshatra are advised to worship Lord Vishnu. It is good for them to chant and listen to all of the names of Vishnu. The mantra "Om La, Om Shang, Om Ang" should be chanted while Chandrama (the Moon) is moving through Revati Nakshatra. By doing this, they can improve their lives and achieve success. For some, dressing in bright blue, green, or blended colours can also be beneficial. Revati Nakshatra As Per Padas First Pada of Revati Nakshatra: Jupiter is the ruling planet of Sagittarius, the first pada of Revati Nakshatra. Natives here place emphasis on being friendly and helpful, and these people will be very positive. Second Pada of Revati Nakshatra: Saturn is the ruler of Capricorn in the second pada of Revati Nakshatra. Here, the native's emphasis is on being orderly, following the tried-and-true way, and avoiding taking any risks. Third Pada of Revati Nakshatra: Saturn is the ruler of the Aquarius Navamsa, which includes the third pada of Revati Nakshatra. Here, the emphasis is on showing empathy for others' suffering and making every effort to assist them. Revati Nakshatra Fourth Pada: The Jupiter-ruled Pisces Navamsa is where the fourth pada of the Revati Nakshatra falls. Natives are prone to daydreaming and building imaginary castles in the air, and they are also vulnerable to outside influences. References Nakshatra
Revati (nakshatra)
[ "Astronomy" ]
1,397
[ "Nakshatra", "Constellations" ]
13,262,417
https://en.wikipedia.org/wiki/Polyembryony
Polyembryony is the phenomenon of two or more embryos developing from a single fertilized egg. Because the embryos result from the same egg, they are genetically identical to one another, but genetically distinct from the parents. This combination of genetic difference from the parents and genetic identity among siblings distinguishes polyembryony both from budding and from typical sexual reproduction. Polyembryony can occur in humans, resulting in identical twins, though the process is random and occurs at a low frequency. Polyembryony occurs regularly in many species of vertebrates, invertebrates, and plants. Evolution of polyembryony The evolution of polyembryony and the potential evolutionary advantages it may entail have been studied. In parasitoid wasps, there are several hypotheses surrounding the evolutionary advantages of polyembryony, one of them being that it allows female wasps that are small in size to increase the number of potential offspring in comparison to wasps that are monoembryonic. There are limitations to monoembryony, but with polyembryonic development multiple embryos can be derived from each of the individual eggs that are laid. The potential advantages of polyembryony in competing invasive plant species have been studied as well. Vertebrates Armadillos are the most well-studied vertebrates that undergo polyembryony, with six species of armadillo in the genus Dasypus that are always polyembryonic. The nine-banded armadillo, for instance, always gives birth to four identical young. Two conditions are expected to promote the evolution of polyembryony: either the mother does not know the environmental conditions her offspring will face, as in the case of parasitoids, or there is a constraint on reproduction. It is thought that nine-banded armadillos evolved to be polyembryonic because of the latter. Invertebrates A more striking example of the use of polyembryony as a competitive reproductive tool is found in the parasitoid Hymenoptera, family Encyrtidae. The progeny of the splitting embryo develop into at least two forms: those that will develop into adults and those that become a type of soldier, called precocious larvae. These latter larvae patrol the host and kill any other parasitoids they find, with the exception of their siblings, usually sisters. Obligately polyembryonic insects fall into two groups: Hymenoptera (certain wasps) and Strepsiptera. From one egg, these insects can produce thousands of offspring. Polyembryonic wasps in the Hymenoptera can be further subdivided into four families: Braconidae (Macrocentrus), Platygastridae (Platygaster), Encyrtidae (Copidosoma), and Dryinidae. Polyembryony also occurs in Bryozoa. Genotype analysis and molecular data suggest that polyembryony occurs throughout the bryozoan order Cyclostomatida. Plants The term is also used in botany to describe the phenomenon of two or more seedlings emerging from a single seed. Around 20 genera of gymnosperms undergo polyembryony, termed "cleavage polyembryony", where the original zygote splits into many identical embryos. In some plant taxa, the many embryos of polyembryony eventually give rise to only a single offspring. The mechanism underlying this outcome of a single (or in some cases a few) resulting offspring has been described in Pinus sylvestris as programmed cell death (PCD), which removes all but one embryo. 
Initially, all embryos have an equal opportunity to develop into full seeds, but during the early stages of development one embryo becomes dominant through competition and develops into the seed, while the other embryos are destroyed through PCD. The genus Citrus has a number of species that undergo polyembryony, where multiple nucellar-cell-derived embryos exist alongside sexually derived embryos. Antonie van Leeuwenhoek first described polyembryony in 1719, when a Citrus seed was observed to have two germinating embryos. In Citrus, polyembryony is genetically controlled by a polyembryony locus shared among the species, determined by single-nucleotide polymorphisms in the genotypes sequenced. Variation among citrus species is based on the number of embryos that develop, the impact of the environment, and gene expression. As in other species, because the many embryos develop in close proximity, competition occurs, which can cause variation in seed success or vigor. See also Monoembryony References External links Plant reproduction Embryology Insect physiology
Polyembryony
[ "Biology" ]
982
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
13,263,312
https://en.wikipedia.org/wiki/IEEE%20Emanuel%20R.%20Piore%20Award
The IEEE Emanuel R. Piore Award was a Technical Field Award given each year by the IEEE to an individual or team of two people who have made outstanding contributions to information processing systems in relation to computer science. The award was discontinued in 2012. The award was established in 1976 and named in honor of Emanuel R. Piore. It could be presented to an individual or a team of two. Recipients of this award received a bronze medal, certificate, and honorarium. Recipients The following people received the IEEE Emanuel R. Piore Award: References External links IEEE Emanuel R. Piore Award page at IEEE List of recipients of the IEEE Emanuel R. Piore Award Emanuel R. Piore Award
IEEE Emanuel R. Piore Award
[ "Technology" ]
141
[ "Science and technology awards", "Science award stubs" ]
13,263,408
https://en.wikipedia.org/wiki/Content%20creation
Content creation or content creative is the act of producing and sharing information or media content for specific audiences, particularly in digital contexts. According to Dictionary.com, content refers to "something that is to be expressed through some medium, as speech, writing or any of various arts" for self-expression, distribution, marketing and/or publication. Content creation encompasses various activities including maintaining and updating web sites, blogging, article writing, photography, videography, online commentary, social media accounts, and editing and distribution of digital media. In a survey conducted by the Pew Research Center, content creation was defined as "the material people contribute to the online world". Content creators News organizations News organizations, especially those with a large and global reach like The New York Times, NPR, and CNN, consistently create some of the most shared content on the Web, especially in relation to current events. In the words of a 2011 report from the Oxford School for the Study of Journalism and the Reuters Institute for the Study of Journalism, "Mainstream media is the lifeblood of topical social media conversations in the UK." While the rise of digital media has disrupted traditional news outlets, many have adapted and have begun to produce content that is designed to function on the web and be shared on social media. The social media site Twitter is a major distributor and aggregator of breaking news from various sources, and the function and value of Twitter in the distribution of news is a frequent topic of discussion and research in journalism. User-generated content, social media blogging and citizen journalism have changed the nature of news content in recent years. The company Narrative Science is now using artificial intelligence to produce news articles and interpret data. Colleges, universities, and think tanks Academic institutions, such as colleges and universities, create content in the form of books, journal articles, white papers, and some forms of digital scholarship, such as blogs that are group edited by academics, class wikis, or video lectures that support a massive open online course (MOOC). Through an open data initiative, institutions may make raw data supporting their experiments or conclusions available on the Web. Academic content may be gathered and made accessible to other academics or the public through publications, databases, libraries, and digital libraries. Academic content may be closed source or open access (OA). Closed-source content is only available to authorized users or subscribers. For example, an important journal or a scholarly database may be a closed source, available only to students and faculty through the institution's library. Open-access articles are open to the public, with the publication and distribution costs shouldered by the institution publishing the content. Companies Corporate content includes advertising and public relations content, as well as other types of content produced for profit, including white papers and sponsored research. Advertising can also include auto-generated content, with blocks of content generated by programs or bots for search engine optimization. Companies also create annual reports which are part of their company's workings and a detailed review of their financial year. This gives the stakeholders of the company insight into the company's current and future prospects and direction. 
Artists and writers Cultural works, like music, movies, literature, and art, are also major forms of content. Examples include traditionally published books and e-books as well as self-published books, digital art, fanfiction, and fan art. Independent artists, including authors and musicians, have found commercial success by making their work available on the Internet. Government Through digitization, sunshine laws, open records laws and data collection, governments may make statistical, legal or regulatory information available on the Internet. National libraries and state archives turn historical documents, public records, and unique relics into online databases and exhibits. This has raised significant privacy issues. In 2012, The Journal News, a New York state paper, sparked an outcry when it published an interactive map of the state's gun owner locations using legally obtained public records. Governments also create online or digital propaganda or misinformation to support domestic and international goals. This can include astroturfing, or using media to create a false impression of mainstream belief or opinion. Governments can also use open content, such as public records and open data, in service of public health, educational and scientific goals, such as crowdsourcing solutions to complex policy problems. In 2013, the National Aeronautics and Space Administration (NASA) joined the asteroid mining company Planetary Resources to crowdsource the hunt for near-Earth objects. Describing NASA's crowdsourcing work in an interview, technology transfer executive David Locke spoke of the "untapped cognitive surplus that exists in the world" which could be used to help develop NASA technology. In addition to making governments more participatory, open records and open data have the potential to make governments more transparent and less corrupt. Users The introduction of Web 2.0 made it possible for content consumers to be more involved in the generation and sharing of content. With the advent of digital media, the amount of user generated content, as well as the age and class range of users, has increased. 8% of Internet users are very active in content creation and consumption. Worldwide, about one in four Internet users are significant content creators, and users in emerging markets lead the world in engagement. Research has also found that young adults of a higher socioeconomic background tend to create more content than those from lower socioeconomic backgrounds. 69% of American and European internet users are "spectators", who consume—but don't create—online and digital media. The ratio of content creators to the amount of content they generate is sometimes referred to as the 1% rule, a rule of thumb that suggests that only 1% of a forum's users create nearly all of its content. Motivations for creating new content may include the desire to gain new knowledge, the possibility of publicity, or simple altruism. Users may also create new content in order to bring about social reforms. However, researchers caution that in order to be effective, context must be considered, a diverse array of people must be included, and all users must participate throughout the process. According to a 2011 study, minorities create content in order to connect with their communities online. African-American users have been found to create content as a means of self-expression that was not previously available. 
Media portrayals of minorities are sometimes inaccurate and stereotypical, which affects the general perception of these minorities. African-Americans respond to their portrayals digitally through the use of social media such as Twitter and Tumblr. The creation of Black Twitter has allowed a community to share their problems and ideas. Teens Younger users now have greater access to content, content-creating applications, and the ability to publish to different types of media, such as Facebook, Blogger, Instagram, DeviantArt, or Tumblr. As of 2005, around 21 million teens used the internet, and 57%, or 12 million teens, considered themselves content creators. This proportion of media creation and sharing is higher than that of adults. With the advent of the Internet, teens have had more access to tools for sharing and creating content. Increased access to technology, especially due to lower prices, has also made content creation tools more accessible to teens. Some teens use this to become content creators through online platforms like YouTube, while others use it to connect to friends through social networking sites. Issues The rise of anonymous and user-generated content presents both opportunities and challenges to Web users. Blogging, self-publishing and other forms of content creation give more people access to larger audiences. However, this can also perpetuate rumors and lead to misinformation, and it can make it more difficult to find content that meets users' information needs. User-generated content and the personalized recommendation algorithms of digital media also give rise to confirmation bias: users may tend to seek out information that confirms their existing beliefs and ignore information that contradicts them. This can lead to one-sided, unbalanced content that does not present a complete picture of an issue. The quality of digital content also differs from that of traditional academic or published writing. Digital media writing is often more engaging and accessible to a broader audience than academic writing, which is usually intended for a specialized audience. Digital media writers often use a conversational tone, personal anecdotes, and multimedia elements like images and videos to enhance the reader's experience. For example, the tweets of the veteran populist anti-EU campaigner Farage in 2017–2018 used many colloquial expressions and catchphrases to resonate with audiences' sense of "common sense". At the same time, digital media is also necessary for professional (academic) communicators to reach an audience, as well as to connect with scholars in their areas of expertise. The quality of digital content is also influenced by capitalism and market-driven consumerism. Writers may have commercial interests that influence the content they produce. For example, a writer who is paid to promote a particular product or service may write articles that are biased in favor of that product or service, even if it is not the best option for the reader. Metadata Digital content is difficult to organize and categorize. Websites, forums, and publishers all have different standards for metadata, or information about the content, such as its author and date of creation. The perpetuation of different standards of metadata can create problems of accessibility and discoverability. Ethics Digital writing and content creation has evolved significantly. This has led to various ethical issues, including privacy, individual rights, and representation. 
A focus on cultural identity has helped increase accessibility, empowerment, and social justice in digital media, but it might also prevent users from communicating and expressing themselves freely. Intellectual property The ownership, origin, and right to share digital content can be difficult to establish. User-generated content presents challenges to traditional content creators (professional writers, artists, filmmakers, musicians, choreographers, etc.) with regard to the expansion of unlicensed and unauthorized derivative works, piracy and plagiarism. Also, the enforcement of copyright laws, such as the Digital Millennium Copyright Act in the U.S., makes it less likely that works will fall into the public domain. Social movements 2011 Egyptian revolution Content creation serves as a useful form of protest on social media platforms. The 2011 Egyptian revolution was one example of content creation being used to network protestors globally for the common cause of protesting the "authoritarian regimes in the Middle East and North Africa throughout 2011". The protests took place in multiple cities in Egypt and quickly evolved from peaceful protest into open conflict. Social media outlets allowed protestors from different regions to network with each other and raise awareness of the widespread corruption in Egypt's government, as well as helping coordinate their response. Youth activists promoting the rebellion were able to form a Facebook group, "Progressive Youth of Tunisia". Other examples of recent social media protest through online content include the global widespread use of the hashtags #MeToo, used to raise awareness against sexual abuse, and #BlackLivesMatter, which focused on police brutality against black people. See also References Digital media Advertising Social media
Content creation
[ "Technology" ]
2,264
[ "Multimedia", "Digital media", "Computing and society", "Social media" ]
13,263,551
https://en.wikipedia.org/wiki/Ridership
In public transportation, ridership refers to the number of people using a transit service. It is often summed or otherwise aggregated over some period of time for a given service or set of services and used as a benchmark of success or usefulness. Common statistics include the number of people served by an entire transit system in a year and the number of people served each day by a single transit line. The concept should not be confused with the maximum capacity of a particular vehicle or transit line. See also References Transportation planning Public transport
Ridership
[ "Physics" ]
108
[ "Physical systems", "Transport", "Transport stubs" ]
13,263,961
https://en.wikipedia.org/wiki/Silvestri%20camera
Silvestri is an Italian manufacturer of professional photographic cameras and large format cameras. The history - SLV and T30 Production of Silvestri cameras started in Florence, Italy, at the beginning of the eighties, through the work of Vincenzo Silvestri, who designed and developed the original project. The intent was to provide photographers of architecture, indoor and outdoor, with a wide-angle camera that was compact and lightweight compared to the large view cameras produced in that period, and that offered the essential movements for perspective correction. The first camera, the SLV, appeared in the 6x7 / 6x9 format, with a rotating back with click stops every 90 degrees; the lens, a Schneider Super Angulon 5.6/47 mm in a helical focusing mount, was not interchangeable. The shift mechanism permitted a total rise or fall of 25 mm; it consisted of a control knob and two opposed right/left screws and allowed precise setting and locking of the shift. The roll film back attachment was Graflex compatible, which opened the system to various backs such as Mamiya, Horseman, Wista, etc. Image viewing and focusing were done on the ground glass using a magnifying lens in a leather bellows. The whole camera structure was made of anodized aluminum machined on CNC machinery, ensuring constructional precision and reliability. Conceptually, the SLV can shift in any direction simply by placing and leveling the back horizontally or vertically and by orienting the camera body to the right or left, upwards, or upside down. Some samples of this first model were made in an almost handcrafted way, but the good interest it met among specialized photographers pushed Silvestri to develop a new and improved SLV. This second model had a bayonet for attaching the lenses and an interchangeable system for the backs. This gave the SLV much greater range and flexibility, and the lens line grew by three Schneider lenses: the Super Angulon 5.6/65 mm, Super Angulon 5.6/75 mm and Symmar 5.6/100 mm, beside the Super Angulon 5.6/47 mm; all lenses had a bayonet attachment and a helical focusing mount. The interchangeable backs allowed the insertion of extension rings to compensate for the difference in focal distance among the various lenses. The four-point 8° attachment, quick and precise to use, also accepts backs of different formats such as 6x12 cm and 4x5 inches. These modifications opened new fields of application for the SLV and attracted more photographers from Italy and abroad. The SLV was replaced by the T30 camera in 1997. The T30, having 30 mm of shift movement, was better suited to the new lenses with larger image circles that were introduced on the market in that period. The T30 is still in production. A further step towards flexibility of use was the design and production of a shiftable viewfinder with interchangeable frames. The viewfinder is extremely useful for quick work and in difficult working situations, compared with viewing the image on the ground glass. Mod. H A new-concept camera that renews the characteristics of the SLV and uses most of its accessories, it has the shiftable viewfinder built in, which, coupled with the shift movement, gives ease of operation and simplicity of use. The lens-format frames are interchangeable, covering the Schneider series and the Rodenstock one as well. The camera is now out of production. 
S4 The S4 camera was designed later to answer the need for full coverage of the 4x5 inch format, with which the SLV camera could not offer enough versatility. It has a standard 4x5 inch back with format adapters for 6x9 and 6x12, interchangeable backs with the short-rotation 8° attachment and bayonet attachment, or interchangeable lens boards for the lenses. Large in size, it was later provided with a front bellows (Flexibellow) that performs lens focusing, tilting, and swinging. With this accessory the lenses can be used without a focusing mount, and focus can be extended along the two orthogonal axes by using lens tilt and swing. The S4 camera is still in production. Bicam With the arrival of digital photography, and considering that it would replace film in a short time, Silvestri began to study a camera able to meet the double need of working with film while at the same time being convertible to digital applications. There were two possible solutions, scan backs and matrix backs. Silvestri chose matrix backs, creating a compact, easy-to-carry camera proportioned to the small size of high-resolution sensors. Its lens range consists of two series of Rodenstock and Schneider lenses, from 23 mm to the long focal lengths, specifically designed for high-resolution digital photography. The Bicam, introduced in the late nineties, gained new accessories and components over time in order to follow the continuous evolution of sensor technology. Its main characteristics are the ability to work with lenses mounted in a helical focusing mount with bayonet, or with a bellows system that adds to the camera all the correction movements typical of view cameras: side shift, rise and fall, tilt and swing; all movements are extremely precise and micrometric. The reversible and interchangeable backs have a large range of accessories, from sliding adapters with viewing screens to drop-in plates that interface with the most popular digital backs: Hasselblad V, Hasselblad H, Mamiya 645, Contax 645, Rollei AFI. S5 Micron A classical view camera. Designed for studio photography, it has full micrometric movements. All the parts related to lenses and backs are interchangeable. The lenses mount on boards or via bayonet, the lens boards are flat or recessed, and the lenses have no need for a helical focusing mount. The backs and their accessories are in common with the Bicam system. The bellows are interchangeable. The peculiarity of the S5 Micron is that it is built on two separate shifting blocks that do not interfere with each other, allowing the two standards to come into contact. This characteristic makes it possible to use extremely wide-angle lenses and to perform adjustment movements otherwise impossible. The S5 camera was included in the ADI Design Index 2005 for the Compasso d'Oro industrial design award. Flexicam Winner of the award for best project at the Premio Vespucci 2008. This camera was conceived for on-location work: lightweight (less than 1 kg) and absolutely precise for the use of high-resolution digital backs. It offers the flexibility of a mini view camera, with the same essential correction movements: rise and fall, rail extension with micrometric focus movement, tilt, and swing. Rodenstock and Schneider lenses on Silvestri bayonet, from 23 mm to 120 mm, a back adapter for high-resolution digital backs, and a T attachment for SLR cameras. Camera models and the year of their introduction Silvestri SLV – 1982 Silvestri SG612 – 1990 Silvestri Mod. 
H – 1992 Silvestri S4 – 1995 Silvestri T30 – 1997 Silvestri Bicam – 1998 Silvestri S5 micron – 2005 Silvestri Flexicam – 2006 References General references ADI Design Index 2005, Editrice Compositori, 2005 Bologna. L'Ottica in Toscana, Nardini Editore, 2005 Firenze. Alla Photokina e ritorno, Photographialibri, 2008 Milano. Shutterbug, n.3 vol.33 January 2004, article "Medium format update" by George Schaub, pages 100-114. FV Foto-Video Actualidad, n.50 1992, article "Una càmara poco corriente" by Valentìn Sama, pages 60–66. PHOTO Technique International, n.1 1996, article "Primarily for architecture" by Hans Bluth, pages 10–11. PHOTO Technik International, n.1 1994, article " Kamerakonzept fur die Architekturfotografie" by W.D. Georg, pages 48–51. External links Cameras Italian brands Photography equipment manufacturers of Italy
Silvestri camera
[ "Technology" ]
1,730
[ "Recording devices", "Cameras" ]
13,264,388
https://en.wikipedia.org/wiki/POS%20Solutions
POS Solutions is an Australian company that provides software and services to small and medium businesses. The company has enjoyed steady growth and is particularly well known in Australian newsagencies, where its software has taken hold as the market leader. Its products and services range from entry-level software to multi-user enterprise software for retail point of sale (retail POS). See also Comparison of accounting software Point of Sale Malware References External links https://www.possolutions.com.au/ Point of sale companies Retail point of sale systems Business software Software companies of Australia Accounting software Companies established in 1983 Companies based in Melbourne
POS Solutions
[ "Technology" ]
128
[ "Retail point of sale systems", "Information systems" ]
13,265,459
https://en.wikipedia.org/wiki/Hydraulic%20Launch%20Assist
Hydraulic Launch Assist (HLA) is the name of a hydraulic hybrid regenerative braking system for land vehicles produced by the Eaton Corporation. Background The HLA system recycles energy by converting kinetic energy into potential energy during deceleration via hydraulics, storing the energy at high pressure in an accumulator filled with nitrogen gas. The energy is then returned to the vehicle during subsequent acceleration, thereby reducing the amount of work done by the internal combustion engine. The system provides a considerable increase in vehicle productivity while reducing fuel consumption in stop-and-go use profiles such as refuse vehicles and other heavy-duty vehicles. Parallel vs. series hybrids The HLA system is called a parallel hydraulic hybrid. In parallel systems the original vehicle drive-line remains, allowing the vehicle to operate normally when the HLA system is disengaged. When the HLA is engaged, energy is captured during deceleration and released during acceleration, in contrast to series hydraulic hybrid systems, which replace the entire traditional drive-line to provide power transmission in addition to regenerative braking. Hydraulic vs. electric hybrids Hydraulic hybrids are said to be power dense, while electric hybrids are energy dense. This means that electric hybrids, while able to deliver large amounts of energy over long periods of time, are limited by the rate at which the chemical energy in the batteries can be converted to mechanical energy. That rate is largely governed by reaction rates in the battery and the current ratings of associated components. Hydraulic hybrids, on the other hand, are capable of transferring energy at a much higher rate, but are limited by the amount of energy that can be stored. For this reason, hydraulic hybrids lend themselves well to stop-and-go applications and heavy vehicles. Applications Concept vehicles Ford Motor Company included the HLA system in their 2002 F-350 Tonka truck concept vehicle, reported to have lower emissions and better fuel economy than any V-8 diesel truck engine of the time, with HLA designed to eventually improve fuel economy by 25%-35% in heavy truck city driving. Shuttle bus Eaton, Ford, the US Army, and IMPACT Engineering, Inc. (of Kent, Washington), built an E-450 shuttle bus as part of the Army's HAMMER (Hydraulic Hybrid Advanced Materials Multifuel Engine Research) project. Refuse Eaton has been awarded the Texas government's New Technology Research and Development grant to build 12 refuse vehicles with HLA systems. Peterbilt Motors has designed a Model 320 chassis that incorporates the HLA system, which was featured on the cover of the December 13, 2007, issue of Machine Design. References Green vehicles Hybrid vehicles Hybrid powertrain Hybrid trucks Hydraulics
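To make the energy-recycling idea above concrete, here is a rough worked example of how much braking energy a single stop could make available for reuse. The vehicle mass, speed, and round-trip efficiency are illustrative assumptions, not figures from Eaton.

\[
E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2}\,(20{,}000\ \mathrm{kg})\,(11.1\ \mathrm{m/s})^2 \approx 1.2\ \mathrm{MJ}
\]
\[
E_{\mathrm{returned}} \approx \eta\, E_k = 0.7 \times 1.2\ \mathrm{MJ} \approx 0.86\ \mathrm{MJ}
\]

Here 11.1 m/s corresponds to about 40 km/h for a loaded refuse truck, and η is an assumed round-trip efficiency of the pump/motor and accumulator; the captured energy is held as compression work done on the nitrogen charge in the accumulator until the next launch.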
Hydraulic Launch Assist
[ "Physics", "Chemistry" ]
529
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
13,265,673
https://en.wikipedia.org/wiki/Lanix
Lanix Internacional, S.A. de C.V. is a multinational computer and mobile phone manufacturer based in Hermosillo, Mexico. Lanix primarily markets and sells its products in Mexico and the Latin American export market. History Lanix was founded in Hermosillo, Sonora, Mexico in 1990, and released its first computer, the PC 286, the same year. Throughout the 1990s Lanix expanded into the development and production of more sophisticated electronics components such as optical drives, servers, memory drives and flash memory. In 2002 Lanix opened its first factory outside of Mexico, in Santiago, Chile, to cater to the South American market. By 2006 Lanix had gained a 5% share of Mexico's electronics market and began diversifying its product line to include LCD televisions and monitors, and in 2007 it began manufacturing mobile phones. Currently Lanix offers products in the consumer, professional and government markets throughout Latin America. In 2010 Lanix announced an ambitious plan to gain market share in the Latin American computer market and expanded operations to include every country in Latin America. Lanix has production facilities at its original headquarters in Hermosillo, Sonora, Mexico and international facilities in Santiago, Chile and Bogota, Colombia. At the 2009 Intel Solutions Summit hosted by Intel, Lanix won an award in the "mobile solution" category. In March 2011, Lanix began offering a system where buyers can custom build their own computer, choosing different types of chipsets, memory, and other components. In 2012 Lanix expanded its product portfolio with its first smartphone, the Ilium S100, and positioned itself as one of the best-selling brands in the Mexican market. In 2015 the company announced its first smartphone running Windows Phone. In June 2017 Lanix renewed its image by updating its logo, launching new high-end smartphones, and updating its webpage. Products Lanix manufactures desktops, laptops, tablets, servers, netbooks, monitors, optical disc drives, smartphones, flash memory and random-access memory. As of 2010, it made one of the most powerful production Windows desktops in the world, the Lanix Titan Magnum Extreme. Smartphones and tablet computers In 2007, Lanix announced a mobile division specializing in developing smartphones and tablets. In 2010, it showed a smartphone named the Ilium running the Android operating system. Lanix smartphones are offered by Telcel, a subsidiary of América Móvil. In 2010, Lanix unveiled a tablet computer named the W10 running Windows 7. An Android version was to be made available through Telcel. In 2017, Lanix announced a new portfolio of smartphones with features competitive in the current market. Mexican government contracts Lanix has won several major contracts to provide electronics to government entities in Mexico, which has been a key part of the company's success, including a contract from the Mexican Secretariat of Public Education to supply 16,000 classrooms across Mexico with computers. See also References Mobile phone manufacturers Electronics companies established in 1990 Consumer electronics brands Computer companies of Mexico Computer hardware companies Computer systems companies Hermosillo Mexican brands Mexican companies established in 1990 Electronics companies of Mexico
Lanix
[ "Technology" ]
629
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
13,265,758
https://en.wikipedia.org/wiki/Cobb%E2%80%93Eickelberg%20Seamount%20chain
The Cobb-Eickelberg seamount chain is a range of undersea mountains formed by volcanic activity of the Cobb hotspot, located in the Pacific Ocean. The seamount chain extends to the southeast on the Pacific Plate, beginning at the Aleutian Trench and terminating at Axial Seamount, located on the Juan de Fuca Ridge. The seamount chain is spread over a vast length of approximately 1,800 km. The Cobb hotspot that gives rise to these seamounts is located at 46° N, 130° W. The Pacific Plate is moving to the northwest over the hotspot, causing the seamounts in the chain to decrease in age to the southeast. Axial is the youngest seamount and is located approximately 480 km west of Cannon Beach, Oregon. The most studied seamounts in this chain are Axial, Brown Bear, Cobb, and Patton; many other seamounts in the chain have not been explored. Formation Seamounts are created at hotspots. These are isolated areas within tectonic plates where plumes of magma rise through the crust and erupt at the surface, creating a chain of submarine volcanoes and seamounts. The Cobb hotspot is located at the Juan de Fuca Ridge in the Pacific Ocean. The Pacific Plate is moving in a north-westward direction at a speed of ~5.5 cm per year. Periodic volcanic events have led to magma erupting onto the seafloor, forming seamounts. Given the length of the chain, this hotspot must have been active over a period of at least 30 million years (probably longer, since older seamounts would have subsided); a rough rate-and-distance check is given after this article's text. The last known volcanic activity was at Axial Seamount, which currently lies directly over the hotspot. The total magmatic flux from the Cobb hotspot is about 0.3 cubic metres per year. Although the Cobb hotspot is currently located beneath the Juan de Fuca Ridge, this has not always been the case: as the Pacific Plate moved northwest, the plate boundary eventually came to lie directly over the hotspot. Currently Axial Seamount is the only active seamount; its most recent eruption took place in April–May 2015. Seamounts 1. Axial Seamount (46° 03′ 36″ N, 130° 00′ 0″ W) Axial Seamount is the youngest seamount in the Cobb-Eickelberg seamount chain. Since it is the most active of all the Cobb-Eickelberg seamounts, it is the most studied, helping in the understanding of seamount dynamics, volcanic activity, earthquakes, biodiversity, geology and chemistry. The Axial volcano rises about 700 meters above the Juan de Fuca Ridge and about 1,000 meters above the floor of the flanking basins on either side. The Axial volcano plateaus on top and has a relatively smooth relief, with a rectangular shape of about 3 km x 8 km. 42% of the lava surrounding the volcano ranges from ropy, whorly, or lineated pāhoehoe to jumbled chaotic forms; the remaining area is mostly pillow basalt. Colonial protozoans, bacterial mats, pogonophorans, metazoans, polychaetes, bivalves, tubeworms, copepods and many other organisms are found where hydrothermal vents are present in the caldera. This helps with the study of varying biodiversity at great depths. Axial Seamount is the only active seamount in the chain because it sits on top of the Cobb hotspot; all the other seamounts in the chain are inactive because their source of magma, the Cobb hotspot, has moved out from underneath them. 2. Brown Bear Volcano (46° 02′ 24″ N, 130° 27′ 36″ W) With an age of 0.5-1.5 million years, Brown Bear Seamount is the second youngest seamount in the Cobb-Eickelberg chain. 
It is northwest of Axial Seamount and connected by a small ridge. Due to its separation from the Juan de Fuca ridge, spreading has very little effect on Brown Bear, so it is not as geologically complex and is not studied in detail. It has a volcanic cone of width 5 km and rises approximately 1,000 meters from the ocean floor. The Brown Bear Seamount summit is at a depth of 1,400 meters. Geographically, the Brown Bear Seamount is separated into two areas, Northwestern and Southeastern, with distinct morphological features. This is thought to be caused by the influence of the mid ocean ridge extensional stress regime. The morphology of the western portion also suggests that it was formed before the hotspot interacted with the Juan de Fuca ridge. The northwestern section of Brown Bear is dominated by a large (5 km diameter) rounded volcanic cone structure. The southern portion extends south of 46.1 degrees North Latitude, and consists of relatively small (1–2 km diameter) volcanic cones. 3. Cobb Seamount (46° 44′ 0″ N, 130° 47′ 0″ W) The Cobb Seamount rises from a 2,750 m basin to within 37 meters of the ocean surface. It is located just 100 km to the northwest of the hotspot. This seamount is at least 3.3 million years old. The Cobb seamount has been extensively studied for its geological features. Cobb seamount was once an island that was 914 meters above sea level which due to erosion became a seamount. Samples collected from this location were used in studies that determined the age and geological composition of the rocks. The Cobb Seamount is all basalt and contains phenocrysts of plagioclase and clinopyroxene; the intergranular/interstitial matrix was found to have iron and titanium oxides. Video and photographs collected in 2012 from Cobb Seamount have shown a wide variety of biodiversity at the location. 17 benthic taxa were observed through pictures collected from the ROV dives. Most common species included sea cucumbers, squat lobsters, thornyheads, and corals. 4. Patton Seamount (54° 34′ 48″ N, 150° 26′ 24″ W) Patton Seamount is about 33 million years old. Although there is not much information about its geology, the biology at the Patton Seamount is very well studied. The seamount's summit is 183 meters below the ocean surface, and its height from the seafloor is 3,048 meters. In July 1999, DSV Alvin was used to explore the biodiversity at the Patton Seamount. The shallow water community mostly consisted of rockfish, flatfish, sea stars and attached suspension feeders. The community at mid-depths consisted of attached suspension feeding organisms like corals, sponges, crinoids, sea anemones and sea cucumbers. The common fish species were the sablefish and the giant grenadier. The deep water community consisted of fewer attached suspension feeders and more highly mobile species like Pacific grenadier, popeye grenadier, Pacific flatnose and large mobile crabs. Volcanic activity in the past and eruptions Currently, the only active seamount is Axial Seamount, located directly overtop the hotspot at Juan de Fuca Ridge. The most recent eruption was in April–May 2015, with a prior eruption in 2011. Another eruption was detected seismically in January 1998. Lava erupted from a 9 km long fissure, and the caldera subsided by 3 meters during the eruption. In 1983, hydrothermal venting was discovered. Foraminiferan fossil studies have suggested that Cobb Seamount was a pre-late Eocene volcano. 
Thus, it was likely volcanically active approximately 40 million years ago, and remained volcanically active until about 3.3 million years ago when the Cobb seamount was formed. Ar40-Ar39 dating of deep basalt from the Patton seamount shows it to be 33 million years old, which coincides with the time when the seamount was above the Cobb hotspot. However, there are samples collected from shallower depths of basalt which are younger, suggesting that even after the hotspot volcanism ceases, non-hotspot volcanism can sometimes take place. Other seamounts of the Cobb Eickelberg seamount chain The following list is according to Desonie et al. 1990. Thompson Son of Brown Bear Corn Pipe Warwick Eickelberg Forster Miller Murray References Oceanography Volcanic belts Seamounts of the Pacific Ocean Seamount chains Volcanoes of Oregon Volcanoes of Washington (state)
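As a rough consistency check of the hotspot lifetime quoted in the Formation section, the chain length and plate speed given in the article can be combined directly; this is only an order-of-magnitude sketch using the article's own round numbers.

\[
t \approx \frac{L}{v} = \frac{1{,}800\ \mathrm{km}}{5.5\ \mathrm{cm/yr}} = \frac{1.8\times 10^{8}\ \mathrm{cm}}{5.5\ \mathrm{cm/yr}} \approx 3.3\times 10^{7}\ \mathrm{yr} \approx 33\ \mathrm{Myr}
\]

This agrees with the stated minimum hotspot lifetime of about 30 million years and with the roughly 33-million-year Ar-Ar age reported for Patton Seamount at the old end of the chain.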
Cobb–Eickelberg Seamount chain
[ "Physics", "Environmental_science" ]
1,779
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
13,266,445
https://en.wikipedia.org/wiki/Cyanate%20ester
Cyanate esters are chemical compounds in which the hydrogen atom of cyanic acid is replaced by an organyl group (for example an aryl group). The resulting compound is termed a cyanate ester, with the formula R−O−C≡N, where R is an organyl group. Cyanate esters contain the monovalent cyanate group −O−C≡N. Use in resins Cyanate esters can be cured and postcured by heating, either alone at elevated temperatures or at lower temperatures in the presence of a suitable catalyst. The most common catalysts are transition metal complexes of cobalt, copper, manganese and zinc. The result is a thermoset material with a very high glass-transition temperature (Tg) of up to 400 °C and a very low dielectric constant, providing excellent long-term thermal stability at elevated end-use temperatures, very good fire, smoke and toxicity performance, and specific suitability for printed circuit boards installed in critical electrical devices. This is also due to its low moisture uptake. This property, together with a higher toughness compared to epoxies, also makes it a valuable material in aerospace applications. For example, the Lynx Mark II spaceplane is primarily made of carbon/cyanate ester. The chemistry of the cure reaction is a trimerization of three cyanate groups to a triazine ring. When the monomer contains two cyanate groups, the resulting structure is a 3D polymer network. Thermoset polymer matrix properties can be fine-tuned by the choice of substituents on the bisphenolic compound. Bisphenol A and novolac based cyanate esters are the major products; bisphenol F and bisphenol E are also used. The aromatic ring of the bisphenol can be substituted with an allylic group for improved toughness of the material. Cyanate esters can also be mixed with bismaleimides to form BT-resins or with epoxy resins to optimize the end-use properties. References Cyanates Functional groups Esters
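A minimal reaction sketch of the cure chemistry described above, written out for a generic monofunctional cyanate ester; the ring product shown is the cyanurate (triazine) structure the article refers to, and the generic R is simply a placeholder.

\[
3\,\mathrm{R{-}O{-}C{\equiv}N} \;\longrightarrow\; \text{2,4,6-tris(OR)-1,3,5-triazine (a cyanurate ring)}
\]

With a difunctional monomer such as a bisphenol dicyanate, each molecule carries two −OCN groups and can therefore be incorporated into two different triazine rings, which is how the crosslinked 3D network mentioned in the article builds up.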
Cyanate ester
[ "Chemistry" ]
426
[ "Organic compounds", "Esters", "Functional groups", "Cyanates" ]
13,266,701
https://en.wikipedia.org/wiki/BT-Epoxy
BT-Epoxy (BT is short for bismaleimide-triazine resin) is one of a number of thermoset resins used in printed circuit boards (PCBs). It is a mixture of epoxy resin, a common raw material for PCBs, and BT resin, which is in turn a mixture of bismaleimide, also used as a raw material for PCBs, and cyanate ester. Three cyano groups of the cyanate ester are trimerized to a triazine ring structure, hence the T in the name. In the presence of a bismaleimide, the double bond of the maleimide group can copolymerize with the cyano groups to form heterocyclic six-membered aromatic ring structures with two nitrogen atoms (pyrimidines). The cure reaction occurs at elevated temperatures and is catalyzed by strongly basic molecules like Dabco (diazabicyclooctane) and 4-DMAP (4-dimethylaminopyridine). Products with very high glass transition temperatures (Tg) and very low dielectric constants can be obtained. These properties make these materials very attractive for use in PCBs, which are often subjected to such conditions. See also Printed circuit board Synthetic resin Epoxy References Synthetic resins
BT-Epoxy
[ "Chemistry" ]
286
[ "Polymer stubs", "Synthetic materials", "Organic chemistry stubs", "Synthetic resins" ]
13,266,723
https://en.wikipedia.org/wiki/Flash%20of%20unstyled%20content
A flash of unstyled content (FOUC, or flash of unstyled text) is an instance where a web page appears briefly with the browser's default styles prior to loading an external CSS stylesheet, due to the web browser engine rendering the page before all information is retrieved. The page corrects itself as soon as the style rules are loaded and applied; however, the shift may be distracting. Related problems include flash of invisible text and flash of faux text. Technical information The issue was documented in an article named "Flash of Unstyled Content". At first, FOUC appeared to be a browser problem unique to Internet Explorer but later became apparent in other browsers, and has since been described as "a Safari epidemic". A flash of unstyled content is indifferent to changes in CSS or HTML versions. This problem, which leaves the core content unaffected, originates from a set of priorities programmed into the browser. As the browser collects HTML and all the ancillary files referenced in the markup, the browser builds the Document Object Model on-the-fly. The browser may choose to first display the text so that it can parse the quickest. Flashes of unstyled content are more prevalent now that HTML pages are more apt to reference multiple style sheets. Web pages often include style references to media other than the browser screen, such as CSS rules for printers and mobile devices. Web pages may import layers of style files, and reference alternative style sheets. Online advertisements and other inserted offsite content, like videos and search engines, often dictate their own style rules within their code block. The cascading nature of CSS rules encourages some browsers to wait until all the style datasets have been collected before applying them. With the advent of JavaScript libraries such as jQuery which can be employed to further define and apply the styling of a web page, flashes of unstyled content have also become more prominent. In an attempt to avoid unstyled content, front-end developers may choose to hide all content until it is fully loaded, at which point a load event handler is triggered and the content appears, though an interruption of loading might leave behind a blank page, to which unstyled content would be preferable. To emulate a flash of unstyled content, developers can use browser add-ons that are capable of disabling a web page's CSS on the fly. Firebug and Async CSS are such add-ons. Other techniques include manually stopping a page from completing the loading of CSS components. Another option entails using script-blocking tools. While, by 2016, several different techniques had been developed to avoid undesired display behaviours, a change in rendering behaviour in Google Chrome version 50, whereby stylesheets injected by JavaScript are prevented from blocking page loading, as required by the HTML5 specification, brought the situation to website creators' attentions again, particularly affecting users of Typekit, a web typography product from Adobe Systems. Within 2 months, Adobe had changed the way in which their fonts were included into third-party websites in order to avoid the undesired rendering behaviour. See also Progressive enhancement References Web design
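The mitigation described above, hiding all content until loading completes and then revealing it from a load event handler, can be sketched in a few lines. This is only an illustrative sketch, not code from any cited source; the class name, timeout value, and the corresponding CSS rule are assumptions, and the timeout exists precisely to avoid the blank-page failure mode the article warns about.

```typescript
// Sketch of the "hide until loaded" FOUC mitigation discussed above.
// Assumes a CSS rule such as:  .no-fouc { visibility: hidden; }

const HIDE_CLASS = "no-fouc";   // hypothetical class name
const FALLBACK_MS = 2000;       // reveal anyway if loading stalls

// Hide content as early as possible, e.g. from an inline script in <head>.
document.documentElement.classList.add(HIDE_CLASS);

function reveal(): void {
  document.documentElement.classList.remove(HIDE_CLASS);
}

// Reveal once stylesheets and other subresources have finished loading...
window.addEventListener("load", () => reveal());

// ...but never leave the user on a blank page if loading is interrupted.
window.setTimeout(reveal, FALLBACK_MS);
```

Because the hiding class is added from script rather than in the markup, users with JavaScript disabled simply see the normal (possibly briefly unstyled) page rather than a permanently hidden one.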
Flash of unstyled content
[ "Engineering" ]
668
[ "Design", "Web design" ]
13,266,770
https://en.wikipedia.org/wiki/Sialon
SiAlON ceramics are a specialist class of high-temperature refractory materials, with high strength at ambient and high temperatures, good thermal shock resistance and exceptional resistance to wetting or corrosion by molten non-ferrous metals, compared to other refractory materials such as alumina. A typical use is in handling molten aluminium. They are also exceptionally corrosion resistant and hence are used in the chemical industry as well. SiAlONs also have high wear resistance, low thermal expansion and good oxidation resistance up to temperatures above ~1000 °C. They were first reported around 1971. Forms (In the general composition formulas for SiAlONs, m and n denote the numbers of Al–N and Al–O bonds substituting for Si–N bonds; see the note after this article's text.) SiAlONs are ceramics based on the elements silicon (Si), aluminium (Al), oxygen (O) and nitrogen (N). They are solid solutions of silicon nitride (Si3N4), where Si–N bonds are partly replaced with Al–N and Al–O bonds. The degree of substitution can be estimated from the lattice parameters. The charge discrepancy caused by the substitution can be compensated by adding metal cations such as Li+, Mg2+, Ca2+, Y3+ and Ln3+, where Ln stands for lanthanide. SiAlONs exist in three basic forms, which are iso-structural with one of the two common forms of silicon nitride, alpha and beta, and with orthorhombic silicon oxynitride; they are hence named α-, β- and O'-SiAlONs. Production SiAlONs are produced by first combining a mixture of raw materials including silicon nitride, alumina, aluminium nitride, silica and the oxide of a rare-earth element such as yttrium. The powder mix is fabricated into a "green" compact by isostatic powder compaction or slipcasting, for example. The shaped form is then densified, typically by pressureless sintering or hot isostatic pressing. Abnormal grain growth has been extensively reported for SiAlON ceramics, and results in a bimodal grain size distribution of the sintered material. The sintered part may then need to be machined by diamond grinding (abrasive cutting). Alternatively, SiAlONs can be forged into various shapes at a temperature of ca. 1200 °C. Applications SiAlON ceramics have found extensive use in non-ferrous molten metal handling, particularly of aluminium and its alloys, including metal feed tubes for aluminium die casting, burner and immersion heater tubes, injector and degassing components for non-ferrous metals, thermocouple protection tubes, crucibles and ladles. In metal forming, SiAlON is used as a cutting tool for machining chill cast iron and in brazing and welding fixtures and pins, particularly for resistance welding. Other applications are found in the chemical and process industries and the oil and gas industries, owing to SiAlONs' excellent chemical stability, corrosion resistance and wear resistance. Some rare-earth-activated SiAlONs are photoluminescent and can serve as phosphors. Europium(II)-doped β-SiAlON absorbs in the ultraviolet and visible light spectrum and emits intense broadband visible emission. Its luminance and color do not change significantly with temperature, due to the temperature-stable crystal structure. It has great potential as a green down-conversion phosphor for white LEDs; a yellow variant also exists. For white LEDs, a blue LED is used with a yellow phosphor, or with a green and yellow SiAlON phosphor and a red CaAlSiN3-based (CASN) phosphor. References Ceramic materials Nitrides Superhard materials Phosphors and scintillators
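As a note on the m, n notation mentioned in the Forms section, the general composition formulas most commonly quoted in the SiAlON literature are reproduced below; they are given here as an illustrative summary rather than taken from the article itself, and the exact solubility ranges depend on the system.

\[
\beta\text{-SiAlON: } \mathrm{Si_{6-z}Al_{z}O_{z}N_{8-z}} \quad (0 < z \lesssim 4.2)
\]
\[
\alpha\text{-SiAlON: } \mathrm{M_{x}Si_{12-(m+n)}Al_{m+n}O_{n}N_{16-n}}
\]

where M is a stabilizing cation such as Li, Mg, Ca, Y or a lanthanide, and x is approximately m divided by the valence of M.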
Sialon
[ "Physics", "Chemistry", "Engineering" ]
790
[ "Luminescence", "Materials", "Superhard materials", "Phosphors and scintillators", "Ceramic materials", "Ceramic engineering", "Matter" ]
13,267,336
https://en.wikipedia.org/wiki/Bionade
Bionade [ˌbi.(j)oˈnaːdə] is a German range of non-alcoholic, organic fermented and carbonated beverages. It is manufactured in the Bavarian town of Ostheim vor der Rhön by the Peter beer brewery. Sales started in 1995 and Bionade is now available in most European countries. Until 2018 Bionade GmbH was a subsidiary of Radeberger, a group of breweries which is a division of Dr. Oetker. Now Bionade is part of the Hassia Group. History Dieter Leipold was the master brewer at Privatbrauerei Peter in Ostheim, a small town in northern Bavaria, and a relation by marriage of the Kowalsky family, owners of the brewery. Worried about the future of the company, which was facing bankruptcy, he had the idea of producing a nonalcoholic drink by fermentation, on the same principles and under the same purity laws as German beer: the drink would consist only of the natural ingredients malt, water, sugar, and fruit essences, and would not contain corn syrup or other artificial additives. He experimented for eight years in a bathroom laboratory, spending €1.5 million of the brewery owner Peter Kowalsky's money. He isolated a strain of bacteria capable of converting the sugar that normally becomes alcohol into nonalcoholic gluconic acid, which he used to ferment the new drink. Bionade went on sale in 1995, at first in health resorts and fitness centres. It was picked up by Hamburg's largest beverage distributor, Göttsche, in 1998, but was not reaching a broad public. In 1999 Kowalsky hired the marketing expert Wolfgang Blum. Blum devised a new marketing strategy for Bionade. A retro blue, white, and red logo was designed. The bottle was made out of clear glass (instead of brown) but its form was based on a classical vitreous longneck beer format. This eased the logistics of distributing the drink and also helped to sell Bionade in bars and nightclubs, as a non-alcoholic drink that looked like a beer. The product was branded as a new trendy drink. As the company could not afford advertising on television or in the print media, they first placed it in bars and restaurants in Hamburg which were frequented by marketers and advertisers. Further viral marketing attempts included sponsoring sporting, cultural and children's events throughout Germany. By 2002-03 two million bottles of the drink were sold. A wave of health awareness was sweeping Germany: for example, 75% of all Germans approved a ban on smoking in bars. Sales reached seven million in 2004, twenty million in 2005, seventy million in 2006, when 73 million bottles were sold, and by 2007 total sales had reached 200 million. In 2004, Coca-Cola offered to buy the rights to the drink and the Bionade brand, but the producer rejected the offer, citing its plans to expand internationally on its own. By 2006 it was available in Switzerland, Austria and the Benelux countries, and it has since reached Scandinavia, Italy, Spain, Portugal and Ireland. In 2007 the company planned expansion into the US. In 2007, Bionade started its first advertising campaign, using the slogan "Bionade. Das offizielle Getränk einer besseren Welt" ("Bionade. The official beverage of a better world"). This appealed to some of the protesters against the 33rd G8 summit, which was taking place at the time in Heiligendamm, Mecklenburg-Western Pomerania, and led to the stereotype that anti-globalisation activists always drink the beverage. The campaign encompassed billboards in fifteen German cities and radio commercials. 
In 2007 plastic bottles were introduced and Bionade appeared in McDonald's cafés. However, sales declined in 2008 and further declined after an increase in prices, to 60 million bottles in 2011. The Kowalsky family sold part ownership to Schindel-Holding and then to Radeberger, who became sole owners in 2012 until 2018. Since 2018 Bionade is part of the Hassia Group (Hassia Gruppe). Metaphorical use In 2007 German journalist Henning Sußebach coined Bionade-Biedermeier a German neologism combining Bionade, and the word Biedermeier, an era in Central Europe between 1815 and 1848. He used it to describe the lifestyle in Berlin’s Prenzlauer Berg district in an article published by Die Zeit. The term is a German equivalent of e.g. LOHAS and Bobo (bourgeois bohémiens) and gained some media attention and expanded use since. The underlying allusion is being used as well in related wordings like Bionade-Bourgeoisie, Biohème and Generation Biedermeier. Henning Sußebach described the Prenzlauer Berg (Prenzelberg) as an experimental field of "New Germany" and Biotop of the rich and creative and young urban professionals. As e.g. in Social Consciousness in the Bionade-Biedermeier, the term has been used to describe current films and social tendencies in Germany. Product Leipold refuses to divulge the exact chemical process he used. According to him, the gluconic acid strengthens the taste of the sugar, so that less is needed. After fermentation, natural flavours—elderberry, lychee, orange-ginger, quince or herbs—are added along with carbonation. All flavours of Bionade contain water, sugar, malt from barley (2%), carbon acid, calcium carbonate and magnesium carbonate. The herbal and lychee flavoured versions also contain natural aromas. The elderberry flavoured version additionally contains concentrated elderberry juice and natural aromas, and the orange-ginger flavoured bottles contain extract of ginger and natural aroma. The sugar, barley, and elderberries are organic. The manufacturer emphasises that Bionade tastes like a soft drink, but is healthier than conventional high-sugar soft drinks. They support the claim of healthiness by referring to the relatively low level of sugar, sodium and flavor-enhancing additives, the absence of phosphorus or a stabilising agent, while both calcium and magnesium are present. The website makes more health claims and suggests that calcium is needed for bones and teeth, for nerves and muscles, while magnesium is said to work against listlessness and fatigue. References External links www.bionade.com - international site (in English) www.bionade.de - German site (in German) Lucrative Lemonade, Reuters video Goods manufactured in Germany Fermented drinks Carbonated drinks Soft drinks Dr. Oetker
Bionade
[ "Biology" ]
1,403
[ "Fermented drinks", "Biotechnology products" ]
8,628,942
https://en.wikipedia.org/wiki/NACEVI
NACEVI (NAtional CEnter for VIdeo) is a Czech content delivery network operated by Visual Unity. It is partially financed by the Czech government through the Czech Broadband Forum. NACEVI serves the Windows Media format for audio and video. It is transparent to the use of digital rights management. Total streaming capacity is about 10 Gbit/s. Traffic management is optimised for the Czech Republic. Distribution over IPv6 is also available. External links NACEVI Internet technology companies of the Czech Republic
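To put the quoted 10 Gbit/s streaming capacity in perspective, a rough back-of-the-envelope estimate of concurrent streams can be made; the per-stream bitrates below are illustrative assumptions, not figures from the article.

    # Rough capacity estimate for a 10 Gbit/s streaming CDN.
    # The per-stream bitrates are assumed for illustration only.
    capacity_bps = 10e9
    for label, bitrate_bps in [("audio, ~128 kbit/s", 128e3),
                               ("SD video, ~1.5 Mbit/s", 1.5e6),
                               ("HD video, ~5 Mbit/s", 5e6)]:
        print(f"{label}: about {int(capacity_bps // bitrate_bps):,} concurrent streams")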
NACEVI
[ "Technology" ]
101
[ "Computing stubs", "Computer network stubs" ]
8,629,097
https://en.wikipedia.org/wiki/Common%20species
Common species and uncommon species are designations used in ecology to describe the population status of a species. Commonness is closely related to abundance. Abundance refers to the frequency with which a species is found in controlled samples; in contrast, species are defined as common or uncommon based on their overall presence in the environment. A species may be locally abundant without being common. However, "common" and "uncommon" are also sometimes used to describe levels of abundance, with a common species being less abundant than an abundant species, while an uncommon species is more abundant than a rare species. Common species are frequently regarded as being at low risk of extinction simply because they exist in large numbers, and hence their conservation status is often overlooked. While this is broadly logical, there are several cases of once-common species being driven to extinction, such as the passenger pigeon and the Rocky Mountain locust, which numbered in the billions and trillions respectively before their demise. Moreover, a small proportional decline in a common species results in the loss of a large number of individuals, and of the contribution to ecosystem function that those individuals represented. A recent paper argued that because common species shape ecosystems, contribute disproportionately to ecosystem functioning, and can show rapid population declines, conservation should look more closely at the trade-off between preventing species extinctions and preventing the depletion of populations. See also Rare species Abundance (ecology) Notes Environmental conservation Environmental terminology Ecology terminology
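The point about proportional declines can be made concrete with a toy calculation; the population size and decline rate below are invented purely for illustration.

    # Toy illustration: a small proportional decline in a very common species
    # still removes a huge number of individuals. Numbers are invented.
    population = 3_000_000_000        # hypothetical abundant species
    decline_fraction = 0.05           # a 5% decline
    lost = population * decline_fraction
    print(f"A {decline_fraction:.0%} decline removes {lost:,.0f} individuals")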
Common species
[ "Biology" ]
287
[ "Ecology terminology" ]
8,630,194
https://en.wikipedia.org/wiki/MetaLib
MetaLib is a federated search system developed by Ex Libris Group. MetaLib conducts simultaneous searches in multiple, and often heterogeneous, information resources such as library catalogs, journal articles, newspapers and selected quality Internet resources. The resources are often subscription based, and MetaLib provides access for legitimate users. MetaLib is often used in conjunction with the SFX OpenURL resolver. References External links MetaLib Library automation Scholarly search services Defunct internet search engines
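The description above captures the core idea of federated search: send one query to many heterogeneous sources in parallel and merge the hits. The sketch below illustrates that pattern only; the connector functions are hypothetical stand-ins and are not MetaLib's actual API.

    # A minimal sketch of federated search: fan a query out to several sources
    # in parallel and merge the results. The connectors are hypothetical.
    from concurrent.futures import ThreadPoolExecutor

    def search_catalog(query):        # hypothetical library-catalogue connector
        return [{"source": "catalog", "title": f"Catalogue record for {query}"}]

    def search_journals(query):       # hypothetical journal-article connector
        return [{"source": "journals", "title": f"Article about {query}"}]

    def federated_search(query, connectors):
        # run every connector concurrently, then flatten the result lists
        with ThreadPoolExecutor() as pool:
            result_lists = pool.map(lambda fn: fn(query), connectors)
        return [hit for hits in result_lists for hit in hits]

    print(federated_search("metadata harvesting", [search_catalog, search_journals]))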
MetaLib
[ "Engineering" ]
98
[ "Library automation", "Automation" ]
8,630,521
https://en.wikipedia.org/wiki/Dilithium%20%28Star%20Trek%29
In the Star Trek fictional universe, dilithium is a fictional material that serves as a controlling agent in the matter-antimatter reactors. In the original series, dilithium crystals were rare and could not be replicated, making the search for them a recurring plot element. According to a periodic table shown during a Next Generation episode, it has the atomic number 87 (which in reality belongs to francium), and the chemical symbol Dt. In reality, dilithium (Li) is a molecule composed of two covalently bonded lithium atoms which exists naturally in gaseous lithium. Dilithium is depicted as a valuable, extremely hard crystalline mineral that occurs naturally on some planets. Use The fictional properties of the material in the authors' guide Star Trek: The Next Generation Technical Manual (1991) explain it as uniquely suited to contain and regulate the annihilation reaction of matter and antimatter in a starship's warp core: In a high-frequency electromagnetic field, eddy currents are induced in the dilithium crystal structure, which keep charged particles away from the crystal lattice. This prevents it from coming in contact with antimatter when so energized, hence never annihilating, because the antimatter particles never actually touch it. In the original series, dilithium crystals were rare, and crystals made by replicator were unsatisfactory for use in warp drives. Hence story lines based on the need for natural dilithium crystals for interstellar travel – much like real-world equivalents such as oil – made deposits of this material a highly contested resource between fictional factions in the stories, and as such, dilithium crystals have been used by writers to introduce interstellar conflict more than all other reasons combined. As depicted on the show, the streams of matter (deuterium gas) and antimatter (anti-deuterium) directed into crystallized dilithium are unbalanced – there is usually much more matter in the stream than antimatter. The annihilation reaction heats the excess deuterium gas, which produces plasma for the nacelles and allows faster than light travel. In addition, film sets representing the crawl-spaces for the inner workings of starships tend to be depicted as adjacent to the "EPS" conduits that channel plasma to critical ships' internal systems. Fictional properties Dilithium is a member of a so-called "hypersonic" series of elements, according to a fictional periodic table graphic presented in episodes of Star Trek: The Next Generation and Star Trek: Deep Space Nine (1993–1999). The material is suspected by its fictional users of existing in more dimensions than the conventional three + one dimensions of spacetime, and that this somehow is related to its unconventional or paradoxical properties. The dilithium mineral structure is 2(5)6 dilithium 2(:)l diallosilicate 1:9:1 heptoferranide, according to the authors' guide Star Trek: The Next Generation Technical Manual (1991). In respect of the fictional replication technology used as background for the Star Trek universe, although low-quality artificial crystals can be grown or replicated, the synthetic dilithium crystals can only regulate a limited amount of power without fragmenting, and are largely unsuitable for use in warp drives. See also List of Star Trek materials Footnotes References External links "The Mineralogy of Star Trek" Fictional materials Star Trek terminology
Dilithium (Star Trek)
[ "Physics" ]
707
[ "Materials", "Fictional materials", "Matter" ]
8,631,119
https://en.wikipedia.org/wiki/Flap%20endonuclease
Flap endonucleases (FENs, also known as 5' nucleases in older references) are a class of nucleolytic enzymes that act as both 5'-3' exonucleases and structure-specific endonucleases on specialised DNA structures that occur during the biological processes of DNA replication, DNA repair, and DNA recombination. Flap endonucleases have been identified in eukaryotes, prokaryotes, archaea, and some viruses. Organisms can have more than one FEN homologue; this redundancy may give an indication of the importance of these enzymes. In prokaryotes, the FEN enzyme is found as an N-terminal domain of DNA polymerase I, but some prokaryotes appear to encode a second homologue. The endonuclease activity of FENs was initially identified as acting on a DNA duplex which has a single-stranded 5' overhang on one of the strands (termed a "5' flap", hence the name flap endonuclease). FENs catalyse hydrolytic cleavage of the phosphodiester bond at the junction of single- and double-stranded DNA. Some FENs can also act as 5'-3' exonucleases on the 5' terminus of the flap strand and on 'nicked' DNA substrates. Protein structure models based on X-ray crystallography data suggest that FENs have a flexible arch created by two α-helices through which the single 5' strand of the 5' flap structure can thread. Flap endonucleases have been used in biotechnology, for example in the TaqMan PCR assay and the Invader assay for mutation and single nucleotide polymorphism (SNP) detection. See also Endonucleases References External links Flap endonucleases, 5'-3' exonucleases & 5' nucleases Biotechnology EC 3.1
Flap endonuclease
[ "Biology" ]
416
[ "Biotechnology", "nan" ]
8,631,522
https://en.wikipedia.org/wiki/Shallow%20water%20equations
The shallow-water equations (SWE) are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called (de) Saint-Venant equations, after Adhémar Jean Claude Barré de Saint-Venant (see the related section below). The equations are derived from depth-integrating the Navier–Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived. While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction because, for example, the vertical velocity cannot be zero when the floor changes depth, and thus if it were zero only flat floors would be usable with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free surface displacement) has been found, the vertical velocity can be recovered via the continuity equation. Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow. Shallow-water equation models have only one vertical level, so they cannot directly encompass any factor that varies with height. However, in cases where the mean state is sufficiently simple, the vertical variations can be separated from the horizontal and several sets of shallow-water equations can describe the state. Equations Conservative form The shallow-water equations are derived from equations of conservation of mass and conservation of linear momentum (the Navier–Stokes equations), which hold even when the assumptions of shallow-water break down, such as across a hydraulic jump. In the case of a horizontal bed, with negligible Coriolis forces, frictional and viscous forces, the shallow-water equations are: Here η is the total fluid column height (instantaneous fluid depth as a function of x, y and t), and the 2D vector (u,v) is the fluid's horizontal flow velocity, averaged across the vertical column. Further g is acceleration due to gravity and ρ is the fluid density. The first equation is derived from mass conservation, the second two from momentum conservation. Non-conservative form Expanding the derivatives in the above using the product rule, the non-conservative form of the shallow-water equations is obtained. Since velocities are not subject to a fundamental conservation equation, the non-conservative forms do not hold across a shock or hydraulic jump. 
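For reference, a minimal statement of the conservative form consistent with the variable definitions given here (total column height η, depth-averaged velocity (u, v), gravity g), under the stated assumptions of a horizontal bed and negligible Coriolis, frictional and viscous forces, is the following; for constant density the factor ρ cancels and is omitted:

\[
\begin{aligned}
&\frac{\partial \eta}{\partial t} + \frac{\partial (\eta u)}{\partial x} + \frac{\partial (\eta v)}{\partial y} = 0,\\
&\frac{\partial (\eta u)}{\partial t} + \frac{\partial}{\partial x}\!\left(\eta u^{2} + \tfrac{1}{2} g \eta^{2}\right) + \frac{\partial (\eta u v)}{\partial y} = 0,\\
&\frac{\partial (\eta v)}{\partial t} + \frac{\partial (\eta u v)}{\partial x} + \frac{\partial}{\partial y}\!\left(\eta v^{2} + \tfrac{1}{2} g \eta^{2}\right) = 0.
\end{aligned}
\]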
Also included are the appropriate terms for Coriolis, frictional and viscous forces, to obtain (for constant fluid density): where It is often the case that the terms quadratic in u and v, which represent the effect of bulk advection, are small compared to the other terms. This is called geostrophic balance, and is equivalent to saying that the Rossby number is small. Assuming also that the wave height is very small compared to the mean height (), we have (without lateral viscous forces): One-dimensional Saint-Venant equations The one-dimensional (1-D) Saint-Venant equations were derived by Adhémar Jean Claude Barré de Saint-Venant, and are commonly used to model transient open-channel flow and surface runoff. They can be viewed as a contraction of the two-dimensional (2-D) shallow-water equations, which are also known as the two-dimensional Saint-Venant equations. The 1-D Saint-Venant equations contain to a certain extent the main characteristics of the channel cross-sectional shape. The 1-D equations are used extensively in computer models such as TUFLOW, Mascaret (EDF), SIC (Irstea), HEC-RAS, SWMM5, InfoWorks, Flood Modeller, SOBEK 1DFlow, MIKE 11, and MIKE SHE because they are significantly easier to solve than the full shallow-water equations. Common applications of the 1-D Saint-Venant equations include flood routing along rivers (including evaluation of measures to reduce the risks of flooding), dam break analysis, storm pulses in an open channel, as well as storm runoff in overland flow. Equations The system of partial differential equations which describe the 1-D incompressible flow in an open channel of arbitrary cross section – as derived and posed by Saint-Venant in his 1871 paper (equations 19 & 20) – is: and where x is the space coordinate along the channel axis, t denotes time, A(x,t) is the cross-sectional area of the flow at location x, u(x,t) is the flow velocity, ζ(x,t) is the free surface elevation and τ(x,t) is the wall shear stress along the wetted perimeter P(x,t) of the cross section at x. Further ρ is the (constant) fluid density and g is the gravitational acceleration. Closure of the hyperbolic system of equations ()–() is obtained from the geometry of cross sections – by providing a functional relationship between the cross-sectional area A and the surface elevation ζ at each position x. For example, for a rectangular cross section, with constant channel width B and channel bed elevation zb, the cross sectional area is: . The instantaneous water depth is , with zb(x) the bed level (i.e. elevation of the lowest point in the bed above datum, see the cross-section figure). For non-moving channel walls the cross-sectional area A in equation () can be written as: with b(x,h) the effective width of the channel cross section at location x when the fluid depth is h – so for rectangular channels. The wall shear stress τ is dependent on the flow velocity u, they can be related by using e.g. the Darcy–Weisbach equation, Manning formula or Chézy formula. Further, equation () is the continuity equation, expressing conservation of water volume for this incompressible homogeneous fluid. Equation () is the momentum equation, giving the balance between forces and momentum change rates. 
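A common compact way of writing this pair of 1-D Saint-Venant equations in the variables defined here (cross-sectional area A, velocity u, free-surface elevation ζ, wall shear stress τ, wetted perimeter P, density ρ) is the sketch below; it follows the standard modern presentation rather than reproducing Saint-Venant's original 1871 notation:

\[
\frac{\partial A}{\partial t} + \frac{\partial (A u)}{\partial x} = 0,
\qquad
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + g\,\frac{\partial \zeta}{\partial x} + \frac{\tau\,P}{\rho\,A} = 0.
\]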
The bed slope S(x), friction slope Sf(x, t) and hydraulic radius R(x, t) are defined as: and Consequently, the momentum equation () can be written as: Conservation of momentum The momentum equation () can also be cast in the so-called conservation form, through some algebraic manipulations on the Saint-Venant equations, () and (). In terms of the discharge : where A, I1 and I2 are functions of the channel geometry, described in the terms of the channel width B(σ,x). Here σ is the height above the lowest point in the cross section at location x, see the cross-section figure. So σ is the height above the bed level zb(x) (of the lowest point in the cross section): Above – in the momentum equation () in conservation form – A, I1 and I2 are evaluated at . The term describes the hydrostatic force in a certain cross section. And, for a non-prismatic channel, gives the effects of geometry variations along the channel axis x. In applications, depending on the problem at hand, there often is a preference for using either the momentum equation in non-conservation form, () or (), or the conservation form (). For instance in case of the description of hydraulic jumps, the conservation form is preferred since the momentum flux is continuous across the jump. Characteristics The Saint-Venant equations ()–() can be analysed using the method of characteristics. The two celerities dx/dt on the characteristic curves are: with The Froude number determines whether the flow is subcritical () or supercritical (). For a rectangular and prismatic channel of constant width B, i.e. with and , the Riemann invariants are: and so the equations in characteristic form are: The Riemann invariants and method of characteristics for a prismatic channel of arbitrary cross-section are described by Didenkulova & Pelinovsky (2011). The characteristics and Riemann invariants provide important information on the behavior of the flow, as well as that they may be used in the process of obtaining (analytical or numerical) solutions. Hamiltonian structure for frictionless flow In case there is no friction and the channel has a rectangular prismatic cross section, the Saint-Venant equations have a Hamiltonian structure. The Hamiltonian is equal to the energy of the free-surface flow: with constant the channel width and the constant fluid density. Hamilton's equations then are: since . Derived modelling Dynamic wave The dynamic wave is the full one-dimensional Saint-Venant equation. It is numerically challenging to solve, but is valid for all channel flow scenarios. The dynamic wave is used for modeling transient storms in modeling programs including Mascaret (EDF), SIC (Irstea), HEC-RAS, InfoWorks_ICM , MIKE 11, Wash 123d and SWMM5. In the order of increasing simplifications, by removing some terms of the full 1D Saint-Venant equations (aka Dynamic wave equation), we get the also classical Diffusive wave equation and Kinematic wave equation. Diffusive wave For the diffusive wave it is assumed that the inertial terms are less than the gravity, friction, and pressure terms. The diffusive wave can therefore be more accurately described as a non-inertia wave, and is written as: The diffusive wave is valid when the inertial acceleration is much smaller than all other forms of acceleration, or in other words when there is primarily subcritical flow, with low Froude values. Models that use the diffusive wave assumption include MIKE SHE and LISFLOOD-FP. 
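For the rectangular prismatic channel of constant width discussed here, the standard expressions for the celerities, Froude number and Riemann invariants are as follows (a sketch using the usual notation c = √(gh) for the wave celerity):

\[
\frac{dx}{dt} = u \pm c, \qquad c = \sqrt{g h}, \qquad \mathrm{Fr} = \frac{u}{c},
\]
\[
r_{\pm} = u \pm 2\sqrt{g h}, \qquad \frac{d r_{\pm}}{dt} = g\,\bigl(S - S_f\bigr) \ \text{along} \ \frac{dx}{dt} = u \pm c,
\]

so that for frictionless flow over a horizontal bed the invariants r± are constant along their characteristics.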
In the SIC (Irstea) software this options is also available, since the 2 inertia terms (or any of them) can be removed in option from the interface. Kinematic wave For the kinematic wave it is assumed that the flow is uniform, and that the friction slope is approximately equal to the slope of the channel. This simplifies the full Saint-Venant equation to the kinematic wave: The kinematic wave is valid when the change in wave height over distance and velocity over distance and time is negligible relative to the bed slope, e.g. for shallow flows over steep slopes. The kinematic wave is used in HEC-HMS. Derivation from Navier–Stokes equations The 1-D Saint-Venant momentum equation can be derived from the Navier–Stokes equations that describe fluid motion. The x-component of the Navier–Stokes equations – when expressed in Cartesian coordinates in the x-direction – can be written as: where u is the velocity in the x-direction, v is the velocity in the y-direction, w is the velocity in the z-direction, t is time, p is the pressure, ρ is the density of water, ν is the kinematic viscosity, and fx is the body force in the x-direction. If it is assumed that friction is taken into account as a body force, then can be assumed as zero so: Assuming one-dimensional flow in the x-direction it follows that: Assuming also that the pressure distribution is approximately hydrostatic it follows that: or in differential form: And when these assumptions are applied to the x-component of the Navier–Stokes equations: There are 2 body forces acting on the channel fluid, namely, gravity and friction: where fx,g is the body force due to gravity and fx,f is the body force due to friction. fx,g can be calculated using basic physics and trigonometry: where Fg is the force of gravity in the x-direction, θ is the angle, and M is the mass. The expression for sin θ can be simplified using trigonometry as: For small θ (reasonable for almost all streams) it can be assumed that: and given that fx represents a force per unit mass, the expression becomes: Assuming the energy grade line is not the same as the channel slope, and for a reach of consistent slope there is a consistent friction loss, it follows that: All of these assumptions combined arrives at the 1-dimensional Saint-Venant equation in the x-direction: where (a) is the local acceleration term, (b) is the convective acceleration term, (c) is the pressure gradient term, (d) is the friction term, and (e) is the gravity term. Terms The local acceleration (a) can also be thought of as the "unsteady term" as this describes some change in velocity over time. The convective acceleration (b) is an acceleration caused by some change in velocity over position, for example the speeding up or slowing down of a fluid entering a constriction or an opening, respectively. Both these terms make up the inertia terms of the 1-dimensional Saint-Venant equation. The pressure gradient term (c) describes how pressure changes with position, and since the pressure is assumed hydrostatic, this is the change in head over position. The friction term (d) accounts for losses in energy due to friction, while the gravity term (e) is the acceleration due to bed slope. Wave modelling by shallow-water equations Shallow-water equations can be used to model Rossby and Kelvin waves in the atmosphere, rivers, lakes and oceans as well as gravity waves in a smaller domain (e.g. surface waves in a bath). 
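To illustrate why the full dynamic-wave (Saint-Venant) system is usually solved numerically, the sketch below integrates the 1-D conservative shallow-water equations for an idealised dam break with a first-order Lax–Friedrichs scheme. It is a minimal textbook-style example, not a representation of how HEC-RAS, MIKE 11 or the other packages named above work internally.

    # Toy finite-volume solver for the 1-D conservative shallow-water equations
    # (h, hu) using the Lax-Friedrichs flux, applied to a dam-break initial state.
    import numpy as np

    g = 9.81
    nx, dx = 200, 1.0
    h = np.where(np.arange(nx) * dx < 100.0, 2.0, 1.0)   # deeper water on the left
    hu = np.zeros(nx)                                     # fluid initially at rest

    def flux(h, hu):
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h**2])    # mass and momentum fluxes

    t, t_end = 0.0, 10.0
    while t < t_end:
        u = hu / h
        dt = 0.4 * dx / np.max(np.abs(u) + np.sqrt(g * h))   # CFL-limited step
        fl = flux(h, hu)
        # Lax-Friedrichs numerical flux at the cell interfaces i+1/2
        f_half = 0.5 * (fl[:, :-1] + fl[:, 1:]) - 0.5 * dx / dt * (
            np.array([h[1:] - h[:-1], hu[1:] - hu[:-1]]))
        h[1:-1]  -= dt / dx * (f_half[0, 1:] - f_half[0, :-1])
        hu[1:-1] -= dt / dx * (f_half[1, 1:] - f_half[1, :-1])
        t += dt

    print("max depth:", h.max(), "min depth:", h.min())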
In order for shallow-water equations to be valid, the wavelength of the phenomenon they are supposed to model has to be much larger than the depth of the basin where the phenomenon takes place. Somewhat smaller wavelengths can be handled by extending the shallow-water equations using the Boussinesq approximation to incorporate dispersion effects. Shallow-water equations are especially suitable for modelling tides, which have very large length scales (over hundreds of kilometers). For tidal motion, even a very deep ocean may be considered as shallow, as its depth will always be much smaller than the tidal wavelength. Turbulence modelling using non-linear shallow-water equations The shallow-water equations, in their non-linear form, are an obvious candidate for modelling turbulence in the atmosphere and oceans, i.e. geophysical turbulence. An advantage of this over the quasi-geostrophic equations is that it allows solutions like gravity waves, while also conserving energy and potential vorticity. However, there are also some disadvantages as far as geophysical applications are concerned: the equations have a non-quadratic expression for total energy and a tendency for waves to become shock waves. Some alternative models have been proposed which prevent shock formation. One alternative is to modify the "pressure term" in the momentum equation, but it results in a complicated expression for kinetic energy. Another option is to modify the non-linear terms in all equations, which gives a quadratic expression for kinetic energy and avoids shock formation, but conserves only linearized potential vorticity. See also Waves and shallow water Notes Further reading External links Derivation of the shallow-water equations from first principles (instead of simplifying the Navier–Stokes equations, some analytical solutions) Equations of fluid dynamics Partial differential equations Physical oceanography Water waves
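The tidal case mentioned above can be checked with a short calculation; the ocean depth and tidal period used below are typical textbook values, not figures taken from this article.

    # Worked check of the shallow-water validity criterion for tides: even the
    # deep ocean is "shallow" because the tidal wavelength vastly exceeds depth.
    import math

    depth = 4000.0                 # assumed mean deep-ocean depth in metres
    period = 12.42 * 3600.0        # M2 semi-diurnal tidal period in seconds
    c = math.sqrt(9.81 * depth)    # long-wave phase speed, ~198 m/s
    wavelength = c * period        # ~8,900 km

    print(f"phase speed ~ {c:.0f} m/s, wavelength ~ {wavelength/1e3:.0f} km")
    print(f"wavelength / depth ~ {wavelength/depth:.0f}  (>> 1, so shallow-water theory applies)")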
Shallow water equations
[ "Physics", "Chemistry" ]
3,325
[ "Equations of fluid dynamics", "Physical phenomena", "Applied and interdisciplinary physics", "Equations of physics", "Water waves", "Waves", "Physical oceanography", "Fluid dynamics" ]
8,631,841
https://en.wikipedia.org/wiki/Glycosome
The glycosome is a membrane-enclosed organelle that contains the glycolytic enzymes. The term was first used by Scott and Still in 1968 after they realized that the glycogen in the cell was not static but rather a dynamic molecule. It is found in a few species of protozoa including the Kinetoplastida which include the suborders Trypanosomatida and Bodonina, most notably in the human pathogenic trypanosomes, which can cause sleeping sickness, Chagas's disease, and leishmaniasis. The organelle is bounded by a single membrane and contains a dense proteinaceous matrix. It is believed to have evolved from the peroxisome. This has been verified by work done on Leishmania genetics. The glycosome is currently being researched as a possible target for drug therapies. Glycosomes are unique to kinetoplastids and their sister diplonemids. The term glycosome is also used for glycogen-containing structures found in hepatocytes responsible for storing sugar, but these are not membrane bound organelles. Structure Glycosomes are composed of glycogen and proteins. The proteins are the enzymes that are associated with the metabolism of glycogen. These proteins and glycogen form a complex to make a distinct and separate organelle. The proteins for glycosomes are imported from free cytosolic ribosomes. The proteins imported into the organelle have a specific sequence, a PTS1 ending sequence to make sure they go to the right place. They are similar to alpha-granules in the cytosol of a cell that are filled with glycogen. Glycosomes are typically round-to-oval shape with size varying in each cell. Although glycogen is found in the cytoplasm, that in the glycosome is separate, surrounded by membrane. The membrane is a lipid bilayer. The glycogen that is found within the glycosome is identical to glycogen found freely in the cytosol. Glycosomes can be associated or attached to many different types of organelles. They have been found to be attached to the sarcoplasmic reticulum and its intermediate filaments. Other glycosomes have been found to be attached to myofibrils and mitochondria, rough endoplasmic reticulum, sarcolemma, polyribosomes, or the Golgi apparatus. Glycosome attachment may bestow a functional distinction between them; the glycosomes attached to the myofibrils seem to serve the myosin by providing energy substrates for generation of ATP through glycolysis. The glycosomes in the rough and smooth endoplasmic reticulum make use of its glycogen synthase and phosphorylase phosphatases. Function Glycosomes function in many processes in the cell. These processes include glycolysis, purine salvage, beta oxidation of fatty acids, and ether lipid synthesis. Glycolysis The main function that the glycosome serves is of the glycolytic pathway that is done inside its membrane. By compartmentalizing glycolysis inside of the glycosome, the cell can be more successful. In the cell, action in the cytosol, the mitochondria, and the glycosome are all completing the function of energy metabolism. This energy metabolism generates ATP through the process of glycolysis. The glycosome is a host of the main glycolytic enzymes in the pathway for glycolysis. This pathway is used to break down fatty acids for their carbon and energy. The entire process of glycolysis does not take place in the glycosome however. Rather, only the Embden-Meyerhof segment where the glucose enters into the glycosome. Importantly, the process in the organelle has no net ATP synthesis. This ATP comes later from processes outside of the glycosome. 
The interior of the glycosome needs NAD+ for its functioning, and the NAD+ must also be regenerated there. Fructose 1,6-bisphosphate is used in the glycosome as a way to help obtain oxidizing agents to help start glycolysis. The glycosome converts the sugar into 3-phosphoglycerate. Purine salvage Another function of glycosomes is purine salvage. The parasites which have glycosomes present in their cells cannot make purines de novo. The purine that is made in the glycosome is then exported out of the glycosome to be used in the cell in nucleic acids. In other cells the enzymes responsible for this are present in the cytosol. The enzymes found in the glycosome to help with this salvage are the guanine, adenine, hypoxanthine and xanthine phosphoribosyl transferases. All of these enzymes contain a PTS1 sequence at their carboxyl terminus so that they are sent to the glycosome. Evidence Microscopic evidence Microscopic techniques have revealed a lot about the glycosome in the cell and have indeed shown that there is a membrane-bound organelle in the cell for glycogen and its processes. Paul Ehrlich's findings as early as 1883 noted that, under the microscope, glycogen in the cell was always found with what he called a carrier, later known to be protein. The glycogen itself was also always seen in the cell towards the lower pole in one group, fixed. When scientists tried to stain what were assumed to be simple glycogen molecules, the staining had different outcomes. This is because they were not free glycogen molecules but really glycosomes. The glycosome was studied under the microscope by staining it with uranyl acetate. The U/Pb staining that was seen marked the protein that was part of the glycosome. The glycogen in the glycosome is normally associated with protein that is two to four times the weight of the glycogen. The glycogen itself, however, once purified, is found with very little protein, normally less than three percent, showing that the glycosome functions by keeping together the proteins and enzymes needed for the glycogen it contains. The uranyl stain, being acidic, causes dissociation of the protein from the glycogen. The glycogen without the protein forms large aggregates, and what takes up the stain is the protein. This gives the illusion of glycogen disappearing as it is not stained, but it has merely dissociated from the protein that it is normally associated with in the glycosome. Biochemical evidence A variety of biochemical evidence indicates that glycosomes are present in cells. In the organelle that is assumed to be a glycosome, numerous proteins are found. These include glycogen synthase, phosphorylase, and branching and debranching enzymes for glycogen. All of these are regulatory enzymes that are needed in glycogen synthesis. The initiation of glycogen synthesis requires glycogenin, a protein primer found in glycosomes. Glycogen synthase, as mentioned, helps in glycogen elongation, and the removal of glucose from glycogen is aided by debranching enzymes and phosphorylase. All of these enzymes are found in the glycosome, showing that this organelle, complete with glycogen, is responsible for storing glycogen and is separate from the cytosol. Types There are two types of glycosomes that are found in cells exhibiting these specialized organelles. These two groups are lyoglycosomes and desmoglycosomes.
They differ in their association with other organelles in the cell, along with their relative abundance. Studies have shown that healthy cells have more lyoglycosomes while starved cells have more desmoglycosomes. Lyoglycosomes Lyoglycosomes are glycosomes that are free in the cytosol of the cell. These types of glycosomes are affected by acid. They tend to be less electron dense than the other type of glycosome. Lyoglycosomes also are usually found in chains in the cytosol. Because the lyoglycosomes are not bound to tissue, it is possible to extract these glycosomes with water that is boiling. Desmoglycosomes Desmoglycosomes are not free in the cytosol but rather are with other organelles or structures in the cell. These structures relate to the other organelles mentioned such as the myofibrils, mitochondria, and endoplasmic reticulum. This accounts for why desmoglycosomes are found in muscle cells. These glycosomes are not affected by acid. These glycosomes are not found to form groups but rather stay separate as single organelles. Because of the high amount of protein that the glycosome associates with, a high electron density is usually observed. Desmoglycosomes are not extractable from boiling water as they are bound to tissue through their connection to protein. Peroxisome origin The glycosomes are the most divergent of the different types of organelles stemming from peroxisomes, especially as seen in the trypanosomes. Peroxisomes of higher eukaryotes are very similar to the glycosomes and the glyoxysomes that are found in some plants and fungi. The glycosome shares the same basic level structure of a single membrane and a very dense protein matrix. Some studies have shown that some of the enzymes and pathways that are found in the peroxisome are also seen in glycosomes of some species of the trypanosomes. Also, the targeting sequences on the proteins that are sent to the glycosome for the protein matrix are similar in sequence to those sequences on proteins being imported into the peroxisome. The same is seen in the actual sequences for the proteins going into the matrices for these two organelles, not just the targeting sequences. It has been speculated that the since it has been found that glycosomes possess plastid like proteins, a lateral gene transfer happened long ago from an organism capable of photosynthesis whose genes were transferred to have the resultant peroxisomes and glycosomes. The glycosome itself, along with the peroxisome, lacks a genome. Potential drug target Unlike peroxisomes, for most of the trypanosomes their glycosomes are needed for them to be able to survive. Because of this need for the glycosome, it has been suggested as a possible drug target to find a drug to halt its function. When the glycosome is not functioning correctly there is a severe lack of enzymes in the cell. These enzymes are those associated with ether-lipid synthesis or the beta oxidation of certain fatty acids. Cells without glycosomes are deficient in these enzymes as without the compartmentalization of the glycosome the enzymes are degraded in the cell in the cytosol. The organelle keeps metabolism of the enzymes from occurring. For parasites, ether-lipid synthesis is vital to be able to complete its life cycle, making the enzymes protected by the glycosome also vital. In their life cycle, glycolysis partly through the glycosome is very high in the blood stream form comparatively to the pro-cyclic form. 
The glycosomal glycolysis pathway is necessary for the pathogen in stress situations, as glycolysis can be started as soon as the substrates for the pathway are available, even when ATP is not yet available. Because this organelle is essential for the trypanosome, a drug that could target it might be a successful therapy: studies have shown that without the glycosome the parasite dies. References Glycolysis Cell biology Organelles
Glycosome
[ "Chemistry", "Biology" ]
2,641
[ "Cell biology", "Carbohydrate metabolism", "Glycolysis" ]
8,632,154
https://en.wikipedia.org/wiki/Comparison%20of%20content-control%20software%20and%20providers
This is a list of content-control software and services. The software is designed to control what content may or may not be viewed by a reader, especially when used to restrict material delivered over the Internet via the Web, e-mail, or other means. Restrictions can be applied at various levels: a government can apply them nationwide, an ISP can apply them to its clients, an employer to its personnel, a school to its teachers or students, a library to its patrons or staff, a parent to a child's computer or computer account or an individual to his or her own computer. Programs and services Providers Amesys Awareness Technologies Barracuda Networks Blue Coat Systems CronLab Cyberoam Detica Dope.security Fortinet GoGuardian Huawei Isheriff Lightspeed Systems Retina-X Studios SafeDNS Securly SmoothWall SonicWall Sophos SurfControl Webroot Websense MICT, 456.ir See also Accountability software Ad filtering Computer surveillance Deep packet inspection Deep content inspection Internet censorship Internet safety Parental controls Wordfilter References Content-control software and providers Computing-related lists
Comparison of content-control software and providers
[ "Technology" ]
232
[ "Computing-related lists", "Software comparisons", "Computing comparisons" ]
8,632,359
https://en.wikipedia.org/wiki/Software%20metering
Software metering is the monitoring and controlling of software for analytics and enforcing of agreements. It can be either passive data collection, or active restriction. Types Software metering can take different forms: Tracking and maintaining software licenses. Making sure that the number of concurrent users of the software do not exceed the terms of the license. This can include monitoring of concurrent usage of software for real-time enforcement of license limits. Real-time monitoring of all (or selected) applications running on the computers within the organization in order to detect unregistered or unlicensed software and prevent their execution, or limit their execution to within certain hours. The system administrator can configure the software metering agent on each computer in the organization. Fixed planning to allocate software usage to computers according to the policies a company specifies and to maintain a record of usage and attempted usage. A company can check out and check in licenses for mobile users, and can also keep a record of all licenses in use. This is often used when limited license counts are available to avoid violating strict license controls. A method of software licensing where the licensed software automatically records how many times, or for how long one or more functions in the software are used, and the user pays fees based on this actual usage (also known as 'pay-per-use') References See also License manager Product activation Software license Systems management System administration Key server (software licensing) System administration Computer systems
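As an illustration of the concurrent-use form of metering described above, the sketch below enforces a fixed seat count with check-out/check-in calls and records usage for later reporting. The class and method names are hypothetical and do not correspond to any particular metering product.

    # Minimal sketch of concurrent-use license metering: licenses are checked out
    # and checked in against a fixed seat count, and every event is logged.
    import threading, time

    class LicensePool:
        def __init__(self, product, seats):
            self.product, self.seats = product, seats
            self.in_use = set()
            self.usage_log = []               # (timestamp, user, event)
            self.lock = threading.Lock()

        def check_out(self, user):
            with self.lock:
                if len(self.in_use) >= self.seats:
                    self.usage_log.append((time.time(), user, "denied"))
                    return False              # enforce the concurrent-user limit
                self.in_use.add(user)
                self.usage_log.append((time.time(), user, "check_out"))
                return True

        def check_in(self, user):
            with self.lock:
                self.in_use.discard(user)
                self.usage_log.append((time.time(), user, "check_in"))

    pool = LicensePool("cad-suite", seats=2)
    print(pool.check_out("alice"), pool.check_out("bob"), pool.check_out("carol"))  # True True False
    pool.check_in("alice")
    print(pool.check_out("carol"))  # True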
Software metering
[ "Technology", "Engineering" ]
293
[ "Computer engineering", "Computer systems", "Computer science stubs", "System administration", "Information systems", "Computer science", "Computing stubs", "Computers" ]
8,632,578
https://en.wikipedia.org/wiki/Gravitation%20%28book%29
Gravitation is a widely adopted textbook on Albert Einstein's general theory of relativity, written by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler. It was originally published by W. H. Freeman and Company in 1973 and reprinted by Princeton University Press in 2017. It is frequently abbreviated MTW (for its authors' last names). The cover illustration, drawn by Kenneth Gwin, is a line drawing of an apple with cuts in the skin to show the geodesics on its surface. The book contains 10 parts and 44 chapters, each beginning with a quotation. The bibliography has a long list of original sources and other notable books in the field. While this may not be considered the best introductory text, because its coverage may overwhelm a newcomer, and even though parts of it are now out of date, it has remained a highly valued reference for advanced graduate students and researchers as of 1998. Content Subject matter After a brief review of special relativity and flat spacetime, physics in curved spacetime is introduced and many aspects of general relativity are covered, particularly the Einstein field equations and their implications, experimental confirmations, and alternatives to general relativity. Segments of history are included to summarize the ideas leading up to Einstein's theory. The book concludes by questioning the nature of spacetime and suggesting possible frontiers of research. Although the exposition on linearized gravity is detailed, one topic which is not covered is gravitoelectromagnetism. Some quantum mechanics is mentioned, but quantum field theory in curved spacetime and quantum gravity are not included. The topics covered are broadly divided into two "tracks": the first contains the core topics, while the second has more advanced content. The first track can be read independently of the second track. The main text is supplemented by boxes containing extra information, which can be omitted without loss of continuity. Margin notes are also inserted to annotate the main text. The mathematics, primarily tensor calculus and differential forms in curved spacetime, is developed as required. An introductory chapter on spinors near the end is also given. There are numerous illustrations of advanced mathematical ideas such as alternating multilinear forms, parallel transport, and the orientation of the hypercube in spacetime. Mathematical exercises and physical problems are included for the reader to practice. The prose in the book is conversational; the authors use plain language and analogies to everyday objects. For example, Lorentz-transformed coordinates are described as a "squashed egg-crate" with an illustration. Tensors are described as "machines with slots" to insert vectors or one-forms, and containing "gears and wheels that guarantee the output" of other tensors. Sign and unit conventions MTW uses the −+++ metric sign convention, and discourages the use of a metric with an imaginary time coordinate (ict). In the front endpapers, the sign conventions for the Einstein field equations are established and the conventions used by many other authors are listed. The book also uses geometrized units, in which the gravitational constant and speed of light are each set to 1. The back endpapers contain a table of unit conversions. Editions and translations The book has been reprinted in English 24 times. Hardback and softcover editions have been published.
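As a small worked example of the geometrized units mentioned above (G = c = 1, so masses can be quoted as lengths), the conversion factor G M / c² can be evaluated for the Sun; the constants below are standard physical values rather than numbers quoted from the book.

    # Geometrized units (G = c = 1): converting a mass into an equivalent length
    # via G*M/c**2. Constants are standard values, not taken from the book.
    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M_sun = 1.989e30     # kg

    length_equivalent = G * M_sun / c**2
    print(f"1 solar mass ~ {length_equivalent/1e3:.2f} km in geometrized units")  # ~1.48 km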
It has also been translated into other languages, including Russian (in three volumes), Chinese, and Japanese. A recent reprinting by Princeton University Press includes a new foreword and preface. Reviews The book is still considered influential in the physics community, with generally positive reviews, but with some criticism of the book's length and presentation style. Reviewers and authors who have commented on the book include Ed Ehrlich, James Hartle, Sean M. Carroll, Pankaj Sharan and Ray D'Inverno. Many texts on general relativity refer to it in their bibliographies or footnotes. In addition to the four given, other modern references include George Efstathiou et al., Bernard F. Schutz, James Foster et al., Robert Wald, and Stephen Hawking et al. Other prominent physics books also cite it; for example, Herbert Goldstein comments on it in Classical Mechanics (second edition). The third edition of Goldstein's text still lists Gravitation as an "excellent" resource on field theory in its selected bibliography. A 2019 review of another work by Gerard F. Gilmore opened: "Every teacher of General Relativity depends heavily on two texts: one, the massive ‘Gravitation’ by Misner, Thorne and Wheeler, the second the diminutive ‘The Meaning of Relativity’ by Einstein." See also General Relativity (textbook) The Large Scale Structure of Space-Time (textbook) Einstein Gravity in a Nutshell (textbook) List of books on general relativity References Further reading General relativity Physics textbooks 1973 non-fiction books
Gravitation (book)
[ "Physics" ]
993
[ "General relativity", "Theory of relativity" ]
8,632,768
https://en.wikipedia.org/wiki/Cottonera%20Lines
The Cottonera Lines (), also known as the Valperga Lines (), are a line of fortifications in Bormla and Birgu, Malta. They were built in the 17th and 18th centuries on higher ground and further outwards than the earlier line of fortifications, known as the Santa Margherita or Firenzuola lines, which also surround Bormla. History In 1638, the construction of the Santa Margherita fortifications began around Bormla, but work stopped soon after due to a lack of funds, and they remained in an unfinished state. In 1669, fears of an Ottoman attack rose after the fall of Candia, and a new city, the Civitas Cotonera, named after the reigning Grand Master, Nicolas Cotoner, was designed by the Italian engineer Antonio Maurizio Valperga, who also modified the Floriana Lines and some other fortifications of the Grand Harbour. In times of siege, the Civitas Cotonera was meant to offer shelter to the island's 40,000 inhabitants and their animals. The Civitas Cotonera was called the "most ambitious work of fortification ever undertaken by the Knights of St. John in Malta". Construction of the Civitas Cotonera, and conversion of the earlier fortifications into the Santa Margherita castle, commenced in 1670, but following an outbreak of the plague, which only put more pressure on the Order's already depleted funds, work was discontinued. In 1680 Grand Master Nicolas Cotoner died and his project was shelved. By this time, the bastioned enceinte was mostly complete and parts of the ditch had been excavated, but other crucial parts such as cavaliers, ravelins, the glacis and the covertway had not yet been built. In the early 18th century, some efforts were made to complete the Cotonera fortifications. Contrary to Grand Master Cotoner's plan for a castle at the centre of the new city, the Santa Margherita works were continued as a line of fortifications. Gunpowder magazines were built on St. James and St. Clement Bastions, while Fort San Salvatore was built on St. Salvatore Bastion. The lines were eventually completed in the 1760s, but the ditch was left unfinished while the outworks and cavaliers were never built. During the French blockade of 1798–1800, the Cottonera Lines were held by the French. The Maltese insurgents who had rebelled against them built an entrenchment around the Cottonera and the other fortifications in the harbour area. A number of batteries and lookout posts, such as Tal-Borg Battery and Windmill Redoubt, were also built in the vicinity. Meanwhile, the French bombarded the Maltese in Żabbar. The British modified the incomplete Civitas Cotonera in the 19th century with the construction of St. Clement's Retrenchment, which connected the Cotonera with the Santa Margherita fortifications. As part of this project the British also built Fort Verdala on the site where Grand Master Nicolas Cotoner had intended to build his castle. In the 1870s, the Valperga Bastion, St. Paul's Curtain, St. Paul's Gate and a church dedicated to St. Francis de Paule were demolished to make way for the new road and Ghajn Dwieli tunnel, which formed part of an extension of the Malta Dockyard. The fortifications were included in the Antiquities List of 1925. Cottonera was originally conceived as a town between the Cottonera Lines and the Santa Margherita fortifications. When the Knights came to Malta and began planning these projects, the area now occupied by Cottonera and the Three Cities was known as Birmula; it was considered large enough to be divided into the three cities and a new town, the Civitas Cotonera, which originally formed part of neither Cospicua nor Birgu.
Layout The Cottonera Lines consist of the following bastions and curtain walls (going clockwise from Kalkara Creek to French Creek): St. Laurence Demi-Bastion – a two-tiered demi-bastion linking the Cottonera Lines to the Birgu Land Front. Its lower part was damaged in World War II, and its upper part now houses a school. San Salvatore Curtain – curtain wall between St. Laurence Demi-Bastion and San Salvatore Bastion. It contains San Salvatore Gate and two modern breaches. San Salvatore Bastion – a pentagonal bastion retrenched with Fort San Salvatore, which was built in 1724. St. Louis Curtain – curtain wall between San Salvatore and St. Louis Bastions. It contains the blocked-up St. Louis Gate. St. Louis Bastion – a pentagonal bastion containing a World War II-era machine gun post, a 19th-century cemetery and a private orchard. St. James Curtain – curtain wall between St. Louis and St. James Bastions. It contains the blocked-up St. James Gate. St. James Bastion – a pentagonal bastion, containing a gunpowder magazine which was later converted into a chapel. It now forms part of the grounds of St. Edward's College. Notre Dame Curtain – curtain wall between St. James and Notre Dame Bastions. It contains Notre Dame Gate (the lines' main gate) and two modern breaches. It was originally protected by a ditch and tenaille, but these no longer exist. Notre Dame Bastion – a pentagonal bastion, containing a 19th-century redoubt. an unnamed curtain wall between Notre Dame and St. Clement's Bastions. It was heavily altered in the 19th century when it was incorporated into St. Clement's Retrenchment, which links the Cottonera Lines to the Santa Margherita Lines. It is protected by a tenaille. St. Clement's Bastion – a pentagonal bastion which was heavily altered in the 19th century when it was incorporated into St. Clement's Retrenchment. It contains a demi-bastioned retrenchment, a gunpowder magazine and a World War II-era anti-aircraft battery with a control station and four concrete emplacements. St. Clement's Curtain – curtain wall between St. Clement's and St. Nicholas Bastions. It contains the walled-up St. Clement Gate. St. Nicholas Bastion – a pentagonal bastion containing a casemated battery and a barrack block. St. Nicholas Curtain, also known as Polverista Curtain – curtain wall between St. Nicholas and St. John Bastions. It contains a modern arched opening. St. John Bastion – a pentagonal bastion containing a casemated battery and a World War II-era machine gun post. Housing estates were built in its piazza in the 1960s. St. John Curtain – curtain wall between St. John and St. Paul Bastions. It contains the walled-up St. John Gate. St. Paul Bastion – a pentagonal bastion containing casemates which were eventually converted into barracks. In the 19th century, it was linked to the Corradino Lines. A tunnel allowing vehicular access to the Three Cities now cuts into bastion's base. St. Paul Curtain – curtain wall between St. Paul and Valperga Bastions. It contained St. Paul Gate, also known as Porta Haynduieli. The curtain and gate were demolished in the 1870s to make way for the extension of the dockyard. Valperga Bastion – a large demi-bastion which was demolished in the 1870s to make way for the extension of the dockyard. Today, St. Laurence Demi-Bastion to Notre Dame Curtain fall within the limits of Birgu, while Notre Dame to St. Paul Bastions fall within the limits of Cospicua. 
References External links National Inventory of the Cultural Property of the Maltese Islands Cospicua Buildings and structures in Birgu City walls in Malta Hospitaller fortifications in Malta Fortification lines Unfinished buildings and structures Limestone buildings in Malta National Inventory of the Cultural Property of the Maltese Islands 17th-century fortifications 18th-century fortifications 18th Century military history of Malta
Cottonera Lines
[ "Engineering" ]
1,634
[ "Fortification lines" ]
8,632,838
https://en.wikipedia.org/wiki/Xplore%20M98
The Xplore M98 is a Palm OS-powered clamshell smartphone created by Group Sense PDA. As of July 2008, it was still available for sale in Hong Kong from retailers. Specifications See also Brighthand Photo Tour - GSPDA Xplore M28, M68 and M98 GSL Xplore M98 Product Info Page Mobile Phone Accessories Review By The Inquirer Discontinued smartphones Group Sense PDA mobile phones
Xplore M98
[ "Technology" ]
93
[ "Mobile computer stubs", "Mobile technology stubs", "Mobile phone stubs" ]
8,632,918
https://en.wikipedia.org/wiki/Floriana%20Lines
The Floriana Lines are a line of fortifications in Floriana, Malta, which surround the fortifications of Valletta and form the capital city's outer defences. Construction of the lines began in 1636 and they were named after the military engineer who designed them, Pietro Paolo Floriani. The Floriana Lines were modified throughout the course of the 17th and 18th centuries, and they saw use during the French blockade of 1798–1800. Today, the fortifications are still largely intact but rather dilapidated and in need of restoration. The Floriana Lines are considered to be among the most complicated and elaborate of the Hospitaller fortifications of Malta. Since 1998, they have been on the tentative list of UNESCO World Heritage Sites, as part of the Knights' Fortifications around the Harbours of Malta. History Background, controversy and construction The city of Valletta was founded on 28 March 1566 by Jean de Valette, the Grand Master of the Order of St. John. The city occupied about half the Sciberras Peninsula, a large promontory separating the Grand Harbour from Marsamxett Harbour, and was protected by trace italienne fortifications, including a land front with four bastions, two cavaliers and a deep ditch. Although these fortifications were well designed, by the early 17th century they were not strong enough to resist a large attack, due to new technological developments which had increased the range of artillery. In 1634, there were fears that the Ottomans would attack Malta. Grand Master Antoine de Paule asked Pope Urban VIII for help in improving the island's fortifications. The Pope sent Pietro Paolo Floriani to examine the defences, and in 1635 Floriani proposed building a second line of fortifications around the Valletta Land Front. Some members of the Order and a number of military engineers strongly opposed these plans, since the large garrison needed to man the lines was deemed too expensive. Eventually De Paule decided to construct the lines, since it would have been improper to disagree with the Pope's military engineer. The Bailiff Gattinara resigned from his post in the Commission of Fortifications in protest. Work on the lines began in 1636, but no ceremony was carried out to commemorate the laying of the foundation stone, due to the controversy surrounding the construction. Since fortification was expensive, the new Grand Master Giovanni Paolo Lascaris imposed a new tax on immovable property. This tax created a dispute between the Order and the clergy, who protested to the Pope. Some priests also influenced the population to take part in a national protest, but the plans leaked out to the authorities and the leaders were arrested. The fortifications were named the Floriana Lines after their architect. By June 1640, the lines were considered partially defensible, although still incomplete. Improvements and modifications Fears of an Ottoman attack rose again after the fall of Candia in 1669, and the following year Grand Master Nicolas Cotoner invited the military engineer Antonio Maurizio Valperga to improve the fortifications. At the time the Floriana Lines were still under construction, and a number of weak points had been identified in their original design, especially since the demi-bastions forming the two extremities of the land front were too acute and could not be well defended. 
Valperga attempted to correct these flaws by making a number of alterations to San Salvatore Bastion on the western end of the lines, and by constructing a faussebraye around the entire land front and a crowned hornwork near the eastern end. In the 1680s some minor modifications were made by the Flemish engineer Carlos de Grunenbergh. Work on Valperga's modifications to the lines progressed slowly, and by the beginning of the 18th century the outworks, glacis and enceinte facing Marsamxett were still unfinished. Works continued under a number of other engineers, including Charles François de Mondion, and the lines were largely complete when Porte des Bombes was constructed in 1721. Further alterations were made over the following decades, such as the construction of the Northern Entrenchment in the 1730s. In 1724, the suburb of Floriana was founded in the area between the Floriana Lines and the Valletta Land Front. The suburb was named Borgo Vilhena after Grand Master António Manoel de Vilhena, but it was commonly known as Floriana. It is now a town in its own right. French occupation and British rule French forces invaded Malta in June 1798, and the Order capitulated after a couple of days. The French occupied the island by September, when the Maltese rebelled and blockaded the French forces in the harbour area with foreign help. The Floriana Lines remained under French control throughout the blockade, and the Maltese built Tas-Samra Battery and a battery on Corradino in order to bombard them. After the British took over Malta in 1800, the lines remained a functional military establishment. A number of minor alterations were made, including the enlargement of Porte des Bombes, the demolition of a lunette and some other gates, and the addition of gunpowder magazines and traverses. Recent history The fortifications were included on the Antiquities List of 1925, and they are now also listed on the National Inventory of the Cultural Property of the Maltese Islands. In the 1970s, parts of the covertway and glacis were destroyed to make way for large storage tanks. Today, the lines are still more or less intact, but some parts are in a rather dilapidated state and in need of restoration. Layout Land front The Floriana Land Front is the large bastioned enceinte enclosing the landward approach to Floriana. It consists of the following: Bastion of Provence, also known as San Salvatore Bastion or Sa Maison Bastion – a retrenched demi-bastion which was heavily altered over the course of the 17th and 18th centuries. Notre Dame Curtain – curtain wall linking San Salvatore and St. Philip Bastions. It contained the Notre Dame Gate, which was partially demolished in the 1920s to accommodate traffic requirements. St. Philip Bastion – a large obtuse-angled bastion at the centre of the land front. It is retrenched with the following bastions: St. James Bastion St. Luke Bastion St. Anne Curtain – curtain wall linking St. Philip and St. Francis Bastions. It contained St. Anne's Gate, which was replaced by a larger gate in 1859. The larger gate was also demolished in 1897 to facilitate the flow of traffic. St. Francis Bastion – a large demi-bastion linked to the Polverista Bastion of the Grand Harbour enceinte. It is retrenched with St. Mark Bastion. The land front is surrounded by a ditch, which contains the following outworks: San Salvatore Counterguard – a counterguard near San Salvatore Bastion. 
Pietà Lunette – a pentagonal lunette between San Salvatore Bastion and Notre Dame Ravelin, facing Pietà Creek. It was damaged by aerial bombardment in World War II. Notre Dame Ravelin, also known as the Lower Ravelin – a pentagonal ravelin near Notre Dame Curtain, between San Salvatore and St. Philip Bastions. A number of modern government buildings are located in the open area within the ravelin. a pentagonal lunette between Notre Dame Ravelin and St. Philip Bastion. It was damaged by aerial bombardment in World War II, but the damage was repaired. Porte des Bombes Lunette – a lunette between St. Philip Bastion and St. Francis Ravelin. It was demolished in the early 20th century to make way for the modern road to Valletta. St. Francis Ravelin, also known as the Upper Ravelin – a pentagonal ravelin near St. Anne Curtain, between St. Philip and St. Francis Bastions. The Malta Environment and Planning Authority (MEPA) offices are located in the open area within the ravelin. The outworks are surrounded by a faussebraye, advanced ditch, covertway, and glacis. In the 1720s, a gate known as Porta dei Cannoni was built in the faussebraye. The gate was enlarged by the British, and became known as Porte des Bombes. It was eventually detached from the faussebraye to facilitate the flow of traffic, and it now looks like a triumphal arch. A crowned hornwork consisting of an inner hornwork with two demi-bastions and an outer crownwork with one full bastion and two demi-bastions is located near St. Francis Ravelin. The crownwork was protected by a musketry gallery overlooking Marsa and by two lunettes, one near its land front and another near its flank. Marsamxett enceinte The enceinte along the side facing Marsamxett Harbour starts from San Salvatore Bastion of the Floriana Land Front, and originally ended at St. Michael's Counterguard of the Valletta Land Front. It consists of the following: La Vittoria Bastion – a small casemated bastion grafted onto the Bastion of Provence which forms part of the land front. Polverista Curtain – a long casemated curtain wall between La Vittoria and Msida Bastions. It overlooks the AFM base at Hay Wharf. Msida Bastion – a polygonal asymmetrical bastion with a demi-bastioned retrenchment. A cemetery (today Msida Bastion Historic Garden) was built on its upper part in the 19th century. an unnamed curtain wall between Msida and Quarantine Bastions Quarantine Bastion – a polygonal asymmetrical bastion with a demi-bastioned retrenchment. It is breached by a modern road. In addition, a bastioned enceinte known as the North Entrenchment is located behind the entire Marsamxett enceinte, acting as a secondary line of defence. Grand Harbour enceinte The enceinte along the side facing the Grand Harbour starts from St. Francis Bastion of the Floriana Land Front, and ends at St. Peter and St. Paul Counterguard of the Valletta Land Front. It consists of the following: Capuchin Bastion, also known as Dhoccara, Magazine or Polverista Bastion – a demi-bastion linked to St. Francis Bastion of the land front. It contains an 18th-century gunpowder magazine. a curtain wall linking Capuchin Bastion to the platform near Crucifix Curtain a flat-faced platform or bastion near Crucifix Curtain Crucifix Curtain – curtain wall linking the platform to Crucifix Bastion Crucifix Bastion – a large asymmetrical bastion containing a 19th-century gunpowder magazine. It also had a concrete emplacement for a 9-inch BL gun, but this has been removed. 
Kalkara Curtain – curtain wall linking Crucifix and Kalkara Bastions. It is breached by a modern road. Kalkara Bastion – a bastioned enceinte linking to St. Peter & St. Paul Counterguard of the Valletta Land Front. References External links National Inventory of the Cultural Property of the Maltese Islands Floriana City walls in Malta Hospitaller fortifications in Malta Fortification lines Buildings and structures completed in the 18th century Limestone buildings in Malta Architectural controversies National Inventory of the Cultural Property of the Maltese Islands Controversies in Malta 17th-century controversies 17th-century fortifications 18th-century fortifications 18th Century military history of Malta
Floriana Lines
[ "Engineering" ]
2,327
[ "Architectural controversies", "Fortification lines", "Architecture" ]
8,632,926
https://en.wikipedia.org/wiki/Security%20bug
A security bug or security defect is a software bug that can be exploited to gain unauthorized access or privileges on a computer system. Security bugs introduce security vulnerabilities by compromising one or more of: Authentication of users and other entities Authorization of access rights and privileges Data confidentiality Data integrity Security bugs do not need to be identified or exploited in order to qualify as such, and they are assumed to be much more common than known vulnerabilities in almost any system. Causes Security bugs, like all other software bugs, stem from root causes that can generally be traced to either absent or inadequate: Software developer training Use case analysis Software engineering methodology Quality assurance testing and other best practices Taxonomy Security bugs generally fall into a fairly small number of broad categories that include: Memory safety (e.g. buffer overflow and dangling pointer bugs) Race condition Secure input and output handling Faulty use of an API Improper use case handling Improper exception handling Resource leaks, often but not always due to improper exception handling Preprocessing of input strings before they are checked for being acceptable Mitigation See software security assurance. See also Computer security Hacking: The Art of Exploitation IT risk Threat (computer) Vulnerability (computing) Hardware bug Secure coding References Further reading Computer security Software bugs Software testing
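To make the input-handling category above more concrete, here is a minimal illustrative sketch in Python (the directory, file and function names are hypothetical, not taken from any particular project): a path-traversal bug caused by trusting a user-supplied filename, followed by a hardened variant.

import os

BASE_DIR = "/srv/app/uploads"  # hypothetical document root

def read_file_insecure(filename):
    # Security bug: the user-controlled filename is joined to the base
    # directory without any validation, so an input such as
    # "../../etc/passwd" escapes BASE_DIR and discloses arbitrary files.
    path = os.path.join(BASE_DIR, filename)
    with open(path) as f:
        return f.read()

def read_file_safer(filename):
    # Mitigation: resolve the path and reject anything that does not
    # remain inside BASE_DIR after normalisation.
    base = os.path.realpath(BASE_DIR)
    path = os.path.realpath(os.path.join(base, filename))
    if not path.startswith(base + os.sep):
        raise ValueError("illegal path: " + filename)
    with open(path) as f:
        return f.read()

The same pattern (validate before use, fail closed) applies to most of the input-handling bugs listed above.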
Security bug
[ "Engineering" ]
250
[ "Software engineering", "Software testing" ]
8,632,927
https://en.wikipedia.org/wiki/Volker%20Heine
Volker Heine (born 19 September 1930) is a German-born New Zealand and British physicist who is a Professor Emeritus at University of Cambridge. He is considered a pioneer of theoretical and computational studies of the electronic structure of solids and liquids and the determination of physical properties derived from it. Biography Born in Hamburg, Germany, Volker Heine was educated at Wanganui Collegiate School and the University of Otago (New Zealand). In 1954, he came to University of Cambridge on a Shell Post-Graduate Scholarship to do his Ph.D. in physics (1956) as student of Sir Nevill Mott. In the following years he obtained a Fellowship at Clare College and became part of the new theory group in the Cavendish Laboratory and apart from a post-doc year and several sabbaticals and summer visits in the US, he stayed in Cambridge for the remainder of his career. In 1976, Heine became a professor and took over as head of the theory group which was by then called "Theory of Condensed Matter". He held that position until his retirement in 1997. Volker Heine has been a very active figure in the international scientific community, shaping in particular the landscape of the field of atomistic computer simulations in Europe. He initiated and later led the Psi-k network, a worldwide network of researchers working on the advancement of first-principles computational materials science. Psi-k's mission is to develop fundamental theory, algorithms, and computer codes in order to understand, predict, and design materials properties and functions. Key activities of Psi-k are the organization of conferences, workshops, tutorials and training schools as well as the dissemination of scientific thinking in society. Volker Heine was elected Fellow of the Royal Society in 1974 and of the American Physical Society in 1987. He was awarded the Maxwell Medal and Prize in 1972, the Royal Medal of the Royal Society (London) in 1993, the Dirac Medal of the Institute of Physics in 1994, and the Max Born Prize in 2001. He has been visiting professor at several universities around the world and External Scientific Member of the Max Planck Institute for Solid State Research in Stuttgart. Heine is married to Daphne Heine with three children. Research Volker Heine's research essentially covered three areas: (a) Understanding the behavior of materials from the calculation of their electronic structure; (b) Understanding the origin of incommensurately modulated materials; (c) Understanding the structure and properties of minerals from an atomic point of view. His main research topic is electronic structure theory and particularly the development of various fundamental concepts for condensed matter physics. Here, his pioneering work on pseudopotentials forms a basis of most presently undertaken electronic structure and total-energy calculations, in particular for semiconductors and so-called sp-bonded metals. He also developed the basic description of electron-phonon coupling, and much of our understanding of the structure and atomic relaxation at surfaces was established by Heine. Furthermore, his groundbreaking work on the complex band structure and pioneering ideas in the theory of surface states provides the basis of present-day description and understanding of electronic properties of bulk and interfaces. This includes the concept of metal-induced gap states at metal-semiconductor heterostructures and the understanding of Schottky barriers. 
Amongst his seminal contributions are also the formulation of a recursion method for electronic structure studies, a theory of incommensurate structures of polytypes of silicon carbide, and a model for incommensurate and framework structures of minerals. He also studied the magnetic properties of solids, various aspects of crystal phase transitions, thermal expansion, and more. Volker Heine has published more than 200 research papers, several review articles and one textbook. References External links Volker Heine, Cavendish Laboratory, University of Cambridge (TCM). Volker Heine, Fellows' Directory, Clare College 1930 births Living people Fellows of the Royal Society Fellows of Clare College, Cambridge Royal Medal winners People educated at Whanganui Collegiate School University of Otago alumni Maxwell Medal and Prize recipients Fellows of the American Physical Society Scientists from Hamburg Alumni of the University of Cambridge 20th-century British physicists 21st-century British physicists 20th-century New Zealand physicists 21st-century New Zealand physicists 20th-century German physicists 21st-century German physicists German emigrants to New Zealand Condensed matter physicists
Volker Heine
[ "Physics", "Materials_science" ]
891
[ "Condensed matter physicists", "Condensed matter physics" ]
8,632,932
https://en.wikipedia.org/wiki/ALGOL%2068-R
ALGOL 68-R was the first implementation of the Algorithmic Language ALGOL 68. In December 1968, the report on the Algorithmic Language ALGOL 68 was published. On 20–24 July 1970 a working conference was arranged by the International Federation for Information Processing (IFIP) to discuss the problems of implementing the language, a small team from the Royal Radar Establishment (RRE) attended to present their compiler, written by I. F. Currie, Susan G. Bond, and J. D. Morrison. In the face of estimates of up to 100 man-years to implement the language, using multi-pass compilers with up to seven passes, they described how they had already implemented a one-pass compiler which was in production for engineering and scientific uses. The compiler The ALGOL 68-R compiler was initially written in a local dialect of ALGOL 60 with extensions for address manipulation and list processing. The parser was written using J. M. Foster's Syntax Improving Device (SID) parser generator. The first version of the compiler occupied 34 K words. It was later rewritten in ALGOL 68-R, taking around 36 K words to compile most programs. ALGOL 68-R was implemented under the George 3 operating system on an ICL 1907F. The compiler was distributed at no charge by International Computers Limited (ICL) on behalf of the Royal Radar Establishment (RRE). Restrictions in the language compiled To allow one pass compiling, ALGOL 68-R implemented a subset of the language defined in the original report: Identifiers, modes and operators must be specified before use. No automatic proceduring Explicit VOID mode No formal declarers No parallel processing GOTO may not be omitted Uniting is only valid in strong positions Many of these restrictions were adopted by the revised report on ALGOL 68. Specification before use To allow compiling in one pass ALGOL 68-R insisted that all identifiers were specified (declared) before use. The standard program: PROC even = (INT number) BOOL: ( number = 0 | TRUE | odd (ABS (number - 1))); PROC odd = (INT number) BOOL: ( number = 0 | FALSE | even (ABS (number - 1))); would have to be rewritten as: PROC (INT) BOOL odd; PROC even = (INT number) BOOL : ( number = 0 | TRUE | odd (ABS (number - 1))); odd := (INT number) BOOL : ( number = 0 | FALSE | even (ABS (number - 1))); To allow recursive declarations of modes (types) a special stub mode declaration was used to inform the compiler that an up-coming symbol was a mode rather than an operator: MODE B; MODE A = STRUCT (REF B b); MODE B = [1:10] REF A; No proceduring In the standard language the proceduring coercion could, in a strong context, convert an expression of some type into a procedure returning that type. This could be used to implement call by name. Another case where proceduring was used was the declaration of procedures, in the declaration: PROC x plus 1 = INT : x + 1; the right hand side was a cast of x + 1 to integer, which was then converted to procedure returning integer. The ALGOL 68-R team found this too difficult to handle and made two changes to the language. 
The proceduring coercion was dropped, and the form mode : expression was redefined as a procedure denotation, casts being indicated by an explicit VAL symbol: REAL : x CO a cast to REAL in ALGOL 68 CO REAL VAL x CO a cast to REAL in ALGOL 68-R CO Code that had a valid use for call by name (for example, Jensen's device) could simply pass a procedure denotation: PROC sum = (INT lo, hi, PROC (INT) REAL term) REAL : BEGIN REAL temp := 0; FOR i FROM lo TO hi DO temp +:= term (i); temp END; print (sum (1, 100, (INT i) REAL: 1/i)) In the version of the language defined in the revised report these changes were accepted, although the form of the cast was slightly changed to mode (expression). REAL (x) CO a cast to REAL in revised ALGOL 68 CO Explicit void mode In the original language the VOID mode was represented by an empty mode: : x := 3.14; CO cast (x := 3.14) to void CO PROC endit = GOTO end; CO a procedure returning void CO The ALGOL 68-R team decided to use an explicit VOID symbol in order to simplify parsing (and increase readability): VOID VAL x := 3.14; CO cast (x := 3.14) to void CO PROC endit = VOID : GOTO end; CO a procedure returning void CO This modification to the language was adopted by the ALGOL 68 revised report. No formal declarers Formal declarers are the modes on the left hand side of an identity declaration, or the modes specified in a procedure declaration. In the original language, they could include array bounds and specified whether the matching actual declarer was fixed, FLEX or EITHER: [ 15 ] INT a; CO an actual declarer, bounds 1:15 CO REF [ 3 : ] INT b = a; CO This is an error CO PROC x = (REF [ 1 : EITHER] INT a) : ... The ALGOL 68-R team redefined formal declarers to be the same as virtual declarers, which include no bound information. They found that this reduced the ambiguities in parsing the language and felt that it was not a feature that would be used in working programs. If a procedure needed certain bounds for its arguments it could check them itself with the UPB (upper bound) and LWB (lower bound) operators. In ALGOL 68-R the example above could be recoded like this (the bounds of a in the procedure would depend on the caller): [ 15 ] INT a; CO an actual declarer, bounds 1:15 CO REF [] INT b = a [ AT 3]; CO use slice so b has bounds 3:17 CO PROC x = (REF [] INT a) VOID: ... CO bounds given by caller CO In the revised report on ALGOL 68 formal bounds were also removed, but the FLEX indication was moved in position so it could be included in formal declarers: [ 1: FLEX ] INT a; CO original ALGOL 68, or ALGOL 68-R CO FLEX [ 1: ] INT a; CO revised ALGOL 68 CO PROC x = (REF [ 1: FLEX ] INT a) : ... CO Original ALGOL 68 CO PROC x = (REF [ ] INT a) VOID: ... CO ALGOL 68-R CO PROC x = (REF FLEX [ ] INT a) VOID: ... CO Revised ALGOL 68 CO No parallel processing In ALGOL 68 code can be run in parallel by writing PAR followed by a collateral clause, for example in: PAR BEGIN producer, consumer END the procedures producer and consumer will be run in parallel. A semaphore type (SEMA) with the traditional P (DOWN) and V (UP) operators is provided for synchronizing between the parts of the parallel clause. This feature was not implemented in ALGOL 68-R. An extension named ALGOL 68-RT was written which used the subprogramming feature of the ICL 1900 to provide multithreading facilities to ALGOL 68-R programs with semantics similar to modern thread libraries. No changes were made to the compiler, only the runtime library and the linker. 
goto may not be omitted In ALGOL 68 the GOTO symbol could be omitted from a jump: PROC stop = : ...; ... BEGIN IF x > 3 THEN stop FI; CO a jump, not a call CO ... stop: SKIP END As ALGOL 68-R was a one pass compiler this was too difficult, so the GOTO symbol was made obligatory. The same restriction was made in the official sublanguage, ALGOL 68S. Uniting is only allowed in strong positions In ALGOL 68 uniting is the coercion that produces a UNION from a constituent mode, for example: MODE IBOOL = UNION (INT, BOOL); CO an IBOOL is an INT or a BOOL CO IBOOL a = TRUE; CO the BOOL value TRUE is united to an IBOOL CO In standard ALGOL 68 uniting was possible in firm or strong contexts, so for example could be applied to the operands of formulas: OP ISTRUE = (IBOOL a) BOOL: ...; IF ISTRUE 1 CO legal because 1 (INT) can be united to IBOOL CO THEN ... The ALGOL 68-R implementers found this gave too many ambiguous situations so restricted the uniting coercion to strong contexts. The effects of this restriction were rarely important and, if necessary, could be worked around by using a cast to provide a strong context at the required point in the program. F00L The ALGOL 68-R compiler initialised unused memory to the value -6815700. This value was chosen because: As an integer it was a large negative value As an address it was beyond the maximum address for any practical program on an ICL 1900 As an instruction it was illegal As text it displayed as F00L As a floating point number it had the overflow bit set The same value was used to represent NIL. Stropping In ALGOL family languages, it is necessary to distinguish between identifiers and basic symbols of the language. In printed texts this was usually accomplished by printing basic symbols in boldface or underlined (BEGIN or begin for example). In source code programs, some stropping technique had to be used. In many ALGOL like languages, before ALGOL 68-R, this was accomplished by enclosing basic symbols in single quote characters ('begin' for example). In 68-R, basic symbols could be distinguished by writing them in upper case, lower case being used for identifiers. As ALGOL 68-R was implemented on a machine with 6-bit bytes (and hence a 64 character set) this was quite complex and, at least initially, programs had to be composed on paper punched tape using a Friden Flexowriter. Partly based on the experience of ALGOL 68-R, the revised report on ALGOL 68 specified hardware representations for the language, including UPPER stropping. Extensions to ALGOL 68 ALGOL 68-R included extensions for separate compiling and low-level access to the machine. Separate compiling Since ALGOL 68 is a strongly typed language, the simple library facilities used by other languages on the ICL 1900 system were insufficient. ALGOL 68-R was delivered with its own library format and utilities which allowed sharing of modes, functions, variables, and operators between separately compiled segments of code which could be stored in albums. A segment to be made available to other segments would end with a list of declarations to be made available: graphlib CO the segment name CO BEGIN MODE GRAPHDATA = STRUCT ( ... ); MODE GRAPH = REF GRAPHDATA; PROC new graph = ( ... ) GRAPH : ...; PROC draw graph = (GRAPH g) VOID : ...; ... END KEEP GRAPH, new graph, draw graph FINISH And then the graph functions could be used by another segment: myprog WITH graphlib FROM graphalbum BEGIN GRAPH g = new graph (...); ... draw graph (g); ... 
END FINISH Low level system access As a strongly typed high level language, ALGOL 68 prevents programs from directly accessing the low level hardware. No operators exist for address arithmetic, for example. Since ALGOL 68-R didn't compile to standard ICL semicompiled (link-ready) format, it was necessary to extend the language to provide features in ALGOL 68-R to write code that would normally be written in assembly language. Machine instructions could be written inline, inside CODE ... EDOC sections and the address manipulation operators INC, DEC, DIF, AS were added. An example, using a George peri operation to issue a command: [1 : 120] CHAR buff; INT unitnumber; STRUCT (BITS typemode, reply, INT count, REF CHAR address) control area := (8r47400014,0,120,buff[1]); ...; CODE 0,6/unitnumber; 157,6/typemode OF control area EDOC Availability A copy of the ALGOL 68-R compiler, runnable under the George 3 operating system emulator, by David Holdsworth (University of Leeds), is available, with source code, under a GNU General Public License (GPL). References External links Algol 68 – Malvern Radar and Technology History Society R History of computing in the United Kingdom
ALGOL 68-R
[ "Technology" ]
2,768
[ "History of computing", "History of computing in the United Kingdom" ]
8,633,008
https://en.wikipedia.org/wiki/Weakly%20contractible
In mathematics, a topological space is said to be weakly contractible if all of its homotopy groups are trivial. Property It follows from Whitehead's theorem that if a CW-complex is weakly contractible then it is contractible. Example Define S^∞ to be the inductive limit of the spheres S^n under the standard inclusions S^n ⊂ S^(n+1). Then this space is weakly contractible. Since S^∞ is moreover a CW-complex, it is also contractible. See Contractibility of unit sphere in Hilbert space for more. The long line is an example of a space which is weakly contractible, but not contractible. This does not contradict Whitehead's theorem, since the long line does not have the homotopy type of a CW-complex. Another prominent example of this phenomenon is the Warsaw circle. References Topology Homotopy theory
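A brief sketch of the standard argument behind the sphere example, which the article only states: homotopy groups commute with this inductive limit, and any map from a sphere into S^∞ has compact image, so it lands in a finite stage and can be contracted in the next stage. In LaTeX notation:

\[
S^\infty = \varinjlim_n S^n, \qquad
\pi_k(S^\infty) \;\cong\; \varinjlim_n \pi_k(S^n) = 0 \quad \text{for every } k \ge 0,
\]
\[
\text{since } \pi_k(S^n) = 0 \text{ whenever } n > k.
\]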
Weakly contractible
[ "Physics", "Mathematics" ]
159
[ "Topology stubs", "Topology", "Space", "Geometry", "Spacetime" ]
8,633,062
https://en.wikipedia.org/wiki/Lake%20capture
In geology, lake capture is the process of capture (see Stream capture) of the waters collected in a lake by a neighbouring river basin. The occurrence of a lake capture is mainly controlled by the water balance in the lake's basin and by changes in topography due to erosion, sedimentation, and tectonism. If evaporation at the surface of a lake, plus the water losses through underground infiltration and plant evapotranspiration, is high enough to account for all the precipitation water collected by the lake, then the lake becomes endorheic, closed, or internally drained. This situation prevails until the water balance changes again and the lake overflows the limits of its basin, or until a lake capture occurs. Opening the drainage of an endorheic lacustrine basin by fluvial erosion generally implies a lake capture. Lake captures are therefore very sensitive to the preexisting topography as well as to climatic and lithological factors. A climatic change towards more humid conditions can result in a higher water level in the internally drained basin, eventually causing overflow. On a longer time-scale, sediment infilling (colmatation) of the lacustrine basin can also lead to overflow. Both processes can reduce the relative importance of the capture process carried out by erosion. Examples include the Late Neogene capture of the endorheic Ebro Basin and Pleistocene Lake Bonneville. See also River capture Regressive erosion References Endorheic lakes Geomorphology
Lake capture
[ "Environmental_science" ]
311
[ "Hydrology", "Hydrology stubs" ]
8,634,205
https://en.wikipedia.org/wiki/Jim%20Al-Khalili
Jameel Sadik "Jim" Al-Khalili (; born 20 September 1962) is an Iraqi-British theoretical physicist and science populariser. He is professor of theoretical physics and chair in the public engagement in science at the University of Surrey. He is a regular broadcaster and presenter of science programmes on BBC radio and television, and a frequent commentator about science in other British media. In 2014, Al-Khalili was named as a RISE (Recognising Inspirational Scientists and Engineers) leader by the UK's Engineering and Physical Sciences Research Council (EPSRC). He was President of Humanists UK between January 2013 and January 2016. Early life and education Al-Khalili was born in Baghdad in 1962. His father was an Iraqi Air Force engineer, and his English mother was a librarian. Al-Khalili settled permanently in the United Kingdom in 1979. After completing (and retaking) his A-levels over three years until 1982, he studied physics at the University of Surrey and graduated with a Bachelor of Science degree in 1986. He stayed on at Surrey to pursue a Doctor of Philosophy degree in nuclear reaction theory, which he obtained in 1989, rather than accepting a job offer from the National Physical Laboratory. Career and research In 1989, Al-Khalili was awarded a Science and Engineering Research Council (SERC) postdoctoral fellowship at University College London, after which he returned to Surrey in 1991, first as a research assistant, then as a lecturer. In 1994, Al-Khalili was awarded an Engineering and Physical Sciences Research Council (EPSRC) Advanced Research Fellowship for five years, during which time he established himself as a leading expert on mathematical models of exotic atomic nuclei. He has published widely in his field. Al-Khalili is a professor of physics at the University of Surrey, where he also holds a chair in the Public Engagement in Science. He has been a trustee (2006–2012) and vice president (2008–2011) of the British Science Association. He also held an EPSRC Senior Media Fellowship. Al-Khalili was awarded the Royal Society of London Michael Faraday Prize for science communication for 2007 and elected an Honorary Fellow of the British Association for the Advancement of Science. He has been a Fellow of the Institute of Physics since 2000, when he also received the Institute's Public Awareness of Physics Award. He has lectured widely both in the UK and around the world, particularly for the British Council. He is a member of the British Council Science and Engineering Advisory Group, a member of the Royal Society Equality and Diversity Panel, an external examiner for the Open University Department of Physics and Astronomy, a member of the Editorial Board for the open access Journal PMC Physics A, and Associate Editor of Advanced Science Letters. He is also a member of the Advisory Committee for the Cheltenham Science Festival. In 2007, he was a judge on the BBC Samuel Johnson Prize for non-fiction and has been a celebrity judge at the National Science & Engineering Competition Finals at The Big Bang Fair. He was appointed Officer of the Order of the British Empire (OBE) in the 2008 Birthday Honours. In 2012, he delivered the Gifford Lectures on Alan Turing: Legacy of a Code Breaker at the University of Edinburgh. In 2013 he was awarded an Honorary Degree (DSc) from the University of London. Al-Khalili was elected as a Fellow of the Royal Society in 2018 and elected an Honorary Fellow of the Royal Academy of Engineering in 2023. 
He was appointed Commander of the Order of the British Empire (CBE) in the 2021 Birthday Honours for services to science and public engagement in STEM. Broadcasting As a broadcaster, Al-Khalili is frequently on television and radio and also writes articles for the British press. In 2004, he co-presented the Channel 4 documentary The Riddle of Einstein's Brain, produced by Icon Films. His big break as a presenter came in 2007 with Atom, a three-part series on BBC Four about the history of our understanding of the atom and atomic physics. This was followed by a special archive edition of Horizon, "The Big Bang". In early 2009, Al-Khalili presented the BBC Four three-part series Science and Islam about the leap in scientific knowledge that took place in the Islamic world between the 8th and 14th centuries. He has contributed to programmes ranging from Tomorrow's World, BBC Four's Mind Games and The South Bank Show to BBC One's Bang Goes the Theory. In 2010 he presented the BBC documentary on the history of chemistry, Chemistry: A Volatile History. In October 2011, he began a programme on famous contemporary scientists on Radio Four, called The Life Scientific. The first of this series featured his interview with Paul Nurse. He has since interviewed a series of notable scientists, including Richard Dawkins, Alice Roberts, James Lovelock, Steven Pinker, Martin Rees, Jocelyn Bell Burnell, Mark Walport and Tim Hunt, and he has himself been interviewed on the show by Adam Rutherford. Al-Khalili hosts a regular "Jim meets..." interview series at the University of Surrey, which is published on the university's YouTube channel. Guests have included David Attenborough, Robert Winston, Brian Cox and Rowan Williams, Archbishop of Canterbury. In 2011, Al-Khalili hosted a three-part documentary series on BBC Four entitled Shock and Awe: The Story of Electricity. In 2012, Al-Khalili presented a Horizon special on BBC 2, which examined the latest scientific developments in the quest to discover the Higgs boson, with preliminary results from the Large Hadron Collider experiment at CERN suggesting that the elusive particle does indeed exist. Al-Khalili has been one of the experts interviewed in the Philomena Cunk mockumentaries Cunk on Earth (2022) and Cunk on Life (2024). Awards and honours 2007 – Royal Society Michael Faraday Prize for science communication 2008 – Appointed Officer of the Order of the British Empire (OBE) in the 2008 Birthday Honours 2013 – Warwick Prize for Writing, shortlist, Pathfinders 2014 – RISE leader award 2013 – Honorary Doctor of Science, Royal Holloway, University of London 2016 – Inaugural winner of the Stephen Hawking Medal for Science Communication 2017 – Honorary Doctorate, University of York 2018 – Elected a Fellow of the Royal Society (FRS) 2019 – Honorary Doctor of Science, University of St Andrews 2019 – Outstanding Achievement in Science & Technology at The Asian Awards 2021 – Commander of the Order of the British Empire (CBE), "for Services to Science and Public Engagement in STEM" 2022 – Honorary Doctor of Science, University of Birmingham 2023 – Elected Honorary Fellow of the Royal Academy of Engineering Personal life Al-Khalili lives in Southsea, Portsmouth, with his wife Julie. They have a son and daughter. Al-Khalili is an atheist and a humanist, remarking, "as the son of a Protestant Christian mother and a Shia Muslim father, I have nevertheless ended up without a religious bone in my body". 
Al-Khalili became vice president of Humanists UK in 2016 after stepping down as its president. He is also a patron of Guildford-based educational, cultural and social community hub, The Guildford Institute. Documentaries The Riddle of Einstein's Brain (2004) Atom (2007) Battle for the Beginning (2008) Science and Islam (2009) Genius of Britain: The Scientists Who Changed the World (2010) The Secret Life of Chaos (2010) Chemistry: A Volatile History (2010) Everything and Nothing (2011) Shock and Awe: The Story of Electricity (2011) Order and Disorder (2012) Light and Dark (2013) The Secrets of Quantum Physics (2014) Britain's Nuclear Secrets: Inside Sellafield (2015) The Beginning and End of the Universe (2016) Britain's Nuclear Bomb: The Inside Story (2017) Gravity and Me: The Force That Shapes Our Lives (2017) The Joy of AI (2018) Breakthrough: The Ideas That Changed the World (2019) Secrets of the Solar System (2020) Secrets of Size: Atoms to Supergalaxies (2022) Quantum Physics: The Laws That Govern Our Universe (2022) Publications A list of Jim Al-Khalili's peer reviewed research papers can be found on Google Scholar and Scopus. His published books include: Nucleus: A Trip into the Heart of Matter (2001) (co-author) The House of Wisdom: How Arabic Science Saved Ancient Knowledge and Gave Us the Renaissance (2010) a.k.a. The House of Wisdom: The Flourishing of a Glorious Civilisation and the Golden Age of Arabic Science a.k.a. Pathfinders: The Golden Age of Arabic Science Paradox: The Nine Greatest Enigmas in Science (2012) Life on the Edge: The Coming of Age of Quantum Biology (2014) (co-author) As editor The Euroschool Lectures on Physics with Exotic Beams, Vol. I (Lecture Notes in Physics) (2004) The Euroschool Lectures on Physics with Exotic Beams, Vol. II (Lecture Notes in Physics) (2006) The Euroschool Lectures on Physics with Exotic Beams, Vol. III (Lecture Notes in Physics) (2008) As consultant editor His essays, chapters and other contributions include: The Collins Encyclopedia of the Universe (2001) Scattering and Inverse Scattering in Pure and Applied Science (2001) Quantum Aspects of Life (2008) 30-second Theories: The 50 Most Thought-provoking Theories in Science (2009) Fiction Jim Al-Khalili has written one science fiction novel: References 1962 births Academics of the University of Surrey Academics of University College London Alumni of the University of Surrey British sceptics Commanders of the Order of the British Empire English atheists English humanists English people of Iraqi descent English physicists English television presenters Fellows of the Institute of Physics Fellows of the Royal Society Honorary Fellows of the Royal Academy of Engineering Iraqi atheists Iraqi emigrants to the United Kingdom Iraqi people of English descent Iraqi physicists Living people People educated at Priory School, Portsmouth People from Southsea Presidents_of_Humanists_UK Quantum biology British quantum physicists British science communicators British theoretical physicists Writers from Baghdad
Jim Al-Khalili
[ "Physics", "Biology" ]
2,082
[ "Quantum mechanics", "nan", "Quantum biology" ]
8,634,376
https://en.wikipedia.org/wiki/Dot%20plot%20%28bioinformatics%29
In bioinformatics a dot plot is a graphical method for comparing two biological sequences and identifying regions of close similarity after sequence alignment. It is a type of recurrence plot. History One way to visualize the similarity between two protein or nucleic acid sequences is to use a similarity matrix, known as a dot plot. These were introduced by Gibbs and McIntyre in 1970 and are two-dimensional matrices that have the sequences of the proteins being compared along the vertical and horizontal axes. For a simple visual representation of the similarity between two sequences, individual cells in the matrix can be shaded black if residues are identical, so that matching sequence segments appear as runs of diagonal lines across the matrix. Interpretation Some idea of the similarity of the two sequences can be gleaned from the number and length of matching segments shown in the matrix. Identical proteins will obviously have a diagonal line in the center of the matrix. Insertions and deletions between sequences give rise to disruptions in this diagonal. Regions of local similarity or repetitive sequences give rise to further diagonal matches in addition to the central diagonal. One way of reducing this noise is to only shade runs or 'tuples' of residues, e.g. a tuple of 3 corresponds to three residues in a row. This is effective because the probability of matching three residues in a row by chance is much lower than that of single-residue matches. Dot plots compare two sequences by organizing one sequence on the x-axis and the other on the y-axis of a plot. When the residues of both sequences match at the same location on the plot, a dot is drawn at the corresponding position. Note that the sequences can be written backwards or forwards; however, the sequences on both axes must be written in the same direction. Also note that the direction of the sequences on the axes will determine the direction of the lines on the dot plot. Once the dots have been plotted, they will combine to form lines. The more similar the two sequences are, the closer the plotted line lies to the main diagonal, the line expected for a perfect, direct match. This relationship is affected by certain sequence features such as frame shifts, direct repeats, and inverted repeats. Frame shifts include insertions, deletions, and mutations. The presence of one or more of these features will cause multiple lines to be plotted in various configurations, depending on the features present in the sequences. A feature that produces a very different result on the dot plot is the presence of one or more low-complexity regions. Low-complexity regions are regions of a sequence composed of only a few distinct amino acids, which creates redundancy within that limited region. These regions are typically found around the diagonal, and may or may not produce a square in the middle of the dot plot. See also Protein contact map Recurrence plot Self-similarity matrix References Statistical charts and diagrams Bioinformatics
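The windowed matching described above is straightforward to implement. The following short Python sketch (the function name and parameters are illustrative, not part of any standard bioinformatics library) marks a cell only when a run, or 'tuple', of consecutive residues matches, which suppresses single-residue noise:

def dot_plot(seq_a, seq_b, window=3):
    """Return a len(seq_a) x len(seq_b) matrix of 0/1 match marks.

    A cell (i, j) is marked when the windows seq_a[i:i+window] and
    seq_b[j:j+window] are identical, i.e. 'window' residues in a row match.
    """
    rows, cols = len(seq_a), len(seq_b)
    matrix = [[0] * cols for _ in range(rows)]
    for i in range(rows - window + 1):
        for j in range(cols - window + 1):
            if seq_a[i:i + window] == seq_b[j:j + window]:
                matrix[i][j] = 1
    return matrix

# Example: the shared "GATTACA" segment shows up as a diagonal run of marks.
m = dot_plot("CGATTACAT", "TGATTACAA", window=3)
for row in m:
    print("".join("#" if cell else "." for cell in row))

Increasing the window parameter trades sensitivity for noise suppression, exactly as the tuple-shading rule described in the article.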
Dot plot (bioinformatics)
[ "Engineering", "Biology" ]
597
[ "Bioinformatics", "Biological engineering" ]
8,634,485
https://en.wikipedia.org/wiki/Howe%20truss
A Howe truss is a truss bridge consisting of chords, verticals, and diagonals whose vertical members are in tension and whose diagonal members are in compression. The Howe truss was invented by William Howe in 1840, and was widely used as a bridge in the mid to late 1800s. Development The earliest bridges in North America were made of wood, which was abundant and cheaper than stone or masonry. Early wooden bridges were usually of the Towne lattice truss or Burr truss design. Some later bridges were McCallum trusses (a modification of the Burr truss). About 1840, iron rods were added to wooden bridges. The Pratt truss used wooden vertical members in compression with diagonal iron braces. The Howe truss used iron vertical rods in tension with wooden diagonal braces. Both trusses used counter-bracing, which was becoming essential now that heavy railroad trains were using bridges. In 1830, Stephen Harriman Long received a patent for an all-wood parallel chord truss bridge. Long's bridge contained diagonal braces which were prestressed with wedges. The Long truss did not require a connection between the diagonal and the truss, and was able to remain in compression even when the wood shrank somewhat. William Howe was a construction contractor in Massachusetts when he patented the Howe truss design in 1840. That same year, he established the Howe Bridge Works to build bridges using his design. The first Howe truss ever built was a single-lane, long bridge in Connecticut carrying a road. The second was a railroad bridge over the Connecticut River in Springfield, Massachusetts. This bridge, which drew extensive praise and attention, had seven spans and was in length. Both bridges were erected in 1840. One of Howe's workmen, Amasa Stone, purchased for $40,000 ($ in dollars) in 1842 the rights to Howe's patented bridge design. With his financial backer, Azariah Boody, Stone formed the bridge-building firm of Boody, Stone & Co., which erected a large number of Howe truss bridges throughout New England. Howe made additional improvements to his bridge, and patented a second Howe truss design in 1846. Bridge design The Howe truss bridge consists of an upper and lower "chord", each chord consisting of two parallel beams and each chord parallel to one another. The web consists of verticals, braces, and counter-braces. Vertical posts connect the upper and lower chords to one another, and create "panels". A diagonal brace in each panel strengthens the bridge, and a diagonal counter-brace in each panel enhances this strength. Howe truss bridges may be all wood, a combination of wood and iron, or all iron. Whichever design is used, wooden timbers should have square ends without mortise and tenons. The design of an all-metal Howe truss follows that of the wooden truss. The truss The parallels in each chord are usually built up out of smaller beams, each small beam fastened to one another to create a continuous beam. In wooden Howe trusses, these slender beams are usually no more than wide and deep. In iron trusses, the upper chord beams are the same length as the panel. Upper chord beams are usually made of cast iron, while the lower chord beams are of wrought iron. A minimum of three small beams are used, each uniform in width and depth. Fishplates are usually used to splice beams together. (Lower chord beams may have eyes on each end, in which case they are fastened together with bolts, pins, or rivets.) 
In wooden trusses, cotters and iron bolts are used every to connect the beams of the upper chord to one another. In the lower chord of a wooden bridge, clamps are used to couple beams together. Although generally of the same length, beams are positioned so that a splice (the point where the end of two beams meet) is near the point where two panels meet but not adjacent to the splice in an adjacent pair of beams. The individual small beams which make up a parallel in a chord are separated along their long side by a space equal to the diameter of the vertical posts, usually about . This allows the vertical posts to pass through the parallel in the chord. Batten plates are placed diagonally between the members of a chord, and nailed in place to reduce bending and to act as a shim to provide ventilation between chord members. The middle third of the lower chord is always reinforced by one or more beams bolted to the chord. This reinforcement is generally one-sixth the width of the cross-section of the lower chord. If a wood chord needs to be strengthened even more, additional slender beams may be bolted to the middle third of each side of the lower chord. When construction is complete, the upper chord of a Howe truss bridge will be in compression, while the lower chord is in tension. The web Vertical posts connect the upper and lower chords, and divide the truss into panels. The Howe truss usually uses iron or steel verticals. These are straight and round, slightly reduced in circumference at the ends, with a screw thread added. The vertical usually passes through the center of the angle block and then through the space left in the upper and lower chord. A nut is used to secure the vertical post to the chord. Special plates or washers of wood or metal are used to help distribute the stress induced by the vertical post onto the chords. Vertical posts are in tension, which is induced by tightening the nuts on the vertical bars. Braces are diagonal beams which connect the bottom of a vertical post to the top of the next vertical post. They are placed in the same plane as the chord. Unlike iron or steel braces which are built up, wooden braces are cut to length. Where the parallel in a chord has a thickness of X number of beams, each brace should have a thickness of X minus 1 beams. The depth-to-width ratio of each member of a diagonal brace should be no greater than that of the brace as a whole. Braces may be a single piece, or several pieces spliced together with fishplate. Braces are in compression due to the tightening of the nuts on the verticals. Counter-braces are diagonal beams which connect the bottom of a vertical post to the top of the next vertical post, and run roughly perpendicular to braces. They are placed in the same plane as the chord, are generally uniform in size, and should have a thickness one beam less than a brace. Unlike braces, counter-braces are a single piece. Generally speaking, a bridge of six panels or less (about long) needs no counter-bracing. An eight-panel truss requires counter-braces in every panel but the end panels, and these should be at least one-fourth as strong as the braces. A 10-panel truss requires counter-braces in every panel but the end panels, and these should be at least one-half as strong as the braces. A Howe truss bridge can be strengthened to achieve a live load to dead load ratio of 2-to-1. If this ratio is 2-to-1 or greater, then a six-panel truss must have counter-braces and these must be at least one-third as strong as the braces. 
The counter-braces in an eight-panel truss must be at least two-thirds as strong as the braces, and the counter-braces in a 10-panel truss must be at least equal in strength to the braces. If rapidly moving live loads of any ratio are expected on the Howe truss, the counter-braces used in the center panel should be equal in strength to the braces, and the panel next to the end panel should have counter-braces at least one-half as strong as the braces. Where diagonal braces and counter-braces meet, they are usually bolted together. Braces and counter-braces are held in place with angle blocks. Angle blocks are triangular in cross-section and should be the same height and width as the parallel of the chord. Angle blocks may be made of wood or iron, although iron is usually used for permanent structures. Angle blocks are attached upside down to the upper chord, and right side up to the lower chord. Angle blocks have lugs: flanges or projections used for carrying, seating, or supporting something. The ends of the braces and counter-braces should be cut or cast to rest squarely against the angle block. The upper lug may be a single flange that fits into a groove cut into the surface of the diagonal, or there may be two to four lugs which form an opening into which the brace and counter-brace are seated. The diagonals are kept in place by tightening the nuts on the vertical posts. Cleats can be nailed to a wooden angle block to help keep braces and counter-braces seated. Alternatively, a hole may be drilled in the lug and brace/counter-brace and a dowel inserted to hold the beam in place. Iron angle blocks should have a hole cast in the upper lugs so that a bolt may pass through the lug and brace/counter-brace, securing the braces in place. The lower lugs in an angle block also have holes cast in them, to permit the angle block to be bolted to the chord. Two or more holes are cast through the center of the angle block, to allow the vertical posts to pass through and be anchored on the other side of the chord. End panels are the four panels on either side of the end of a Howe truss bridge. These should be the same height as the chords, but not more. The upper chord does not extend past the portal (the space formed by the last four vertical posts at either end of the bridge). The end panels need only a brace, connected from the top of the last vertical post to the end of the lower chord. Struts are used to connect the two parallels of the chords to prevent lateral bending and reduce vibration. Two diagonals, connecting to the top of the vertical posts, are used. One of the diagonals should be a single piece, while the other is framed into the first piece or made of two pieces connected to it. X-braces, usually made of slender metal rods with threaded ends, are installed between vertical posts to help reduce sway. Knee braces, usually flat bars with eyelets on either end, are used to connect the last strut and last vertical posts on both ends of the bridge. Individual panels may be prefabricated off-site. When panels are connected to one another on-site, shims are used to pack any spaces and are bolted in place. The deck Floor beams extend between the parallels of a chord and are used to support the stringers and decking. Floor beams may sit atop the chord below them, or they may be hung from the vertical posts. Floor beams generally have the greatest depth of any beam in the bridge. Floor beams are usually placed where two panels meet. 
If they are placed somewhere mid-panel, the chord must be reinforced to resist bending, buckling, and shear stress. Stringers are beams set on top of the floor beams, parallel to the chords. A stringer may have a depth-to-width ratio anywhere from 2-to-1 to 6-to-1. A ratio greater than 6-to-1 is avoided in order to avoid buckling. In practice, most wood stringers are in width due to limitations in milling. There are usually six stringers in a bridge. Building the deck for a railroad bridge requires that a stringer lie directly beneath each rail, and that a stringer support each end of the railroad ties. Ties are usually in cross-section, and in length. They are set directly on top of the stringers, about apart. Guard rails in cross-section are set from the center of the ties, and bolted to every third tie. Physics of a Howe truss bridge The inner truss of a Howe truss is statically indeterminate. There are two paths for stress during loading, a pair of diagonals in compression and a pair in tension. This gives the Howe truss a level of redundancy which allows it to withstand excessive loading (such as the loss of a panel due to collision). Prestressing is critical to the proper function of a Howe truss. During its initial construction, the diagonals are connected only loosely to the joints, and rely on prestressing, done at a later stage, to perform correctly. Moreover, diagonals in tension can only withstand stress below the prestressing level. (The size of the member does not matter due to the loose fitting of the diagonal to the joint.) Proper prestressing during construction is therefore critical in the correct performance of the bridge. Maximum stress is placed on the center of the chords when a live load reaches the center of the bridge, or when the live load extends the length of the bridge. Both the vertical posts and braces at the end of the bridge suffer the highest amount of stress. The stress affecting counter-braces depends on the ratio of live load to dead load per unit of length, and how the live load is distributed across the bridge. A uniform distribution of live load will put no stress on the counter-braces, while putting live load on only a portion of the bridge will create maximum stress on the center counter-braces. Because of the stress placed on the bridge, the Howe truss is suitable for spans in length or less. No provision is made in a Howe truss for expansion or contraction due to changes in temperature. Howe truss bridges in use The Howe truss was highly economical due to its ease of construction. The wooden pieces can be designed using little but a steel square and scratch awl, and the truss can be framed using only an adze, auger, and saw. Panels could be prefabricated and transported to the construction site, and sometimes even entire trusses could be manufactured and assembled off-site and transported by rail to the intended location. Some sort of falsework, usually in the form of a trestle, is required to erect the bridge. The development of the Pratt and Howe trusses spurred the construction of iron bridges in the United States. Until 1850, few iron bridges in the country were longer than . The simple design, ease of manufacture, and ease of construction of the Pratt and Howe trusses spurred Benjamin Henry Latrobe II, chief engineer of the Baltimore and Ohio Railroad, to build large numbers of iron bridges. After two famous iron bridge collapses (one in the United States, the other in the United Kingdom), few of these were built in the North. 
This meant most iron bridges erected prior to the American Civil War were located in the South. About 1867, a surge in iron bridge building occurred throughout the United States. The most commonly used designs were the Howe truss, Pratt truss, Bollman truss, Fink truss, and Warren truss. The Howe and Pratt trusses found favor because they used far fewer members. The world's longest single-span wooden covered bridge, built in 1862 at Bridgeport State Park, California, uses a Burr arch in combination with a Howe truss to achieve its span of over 210 feet. The only maintenance a Howe truss requires is adjustment of the nuts on the vertical posts to equalize strain. The diagonals in a wooden Pratt truss proved difficult to keep in proper adjustment, so the Howe truss became the preferred design for a wooden bridge or for a "transitional" bridge of wood with iron verticals. Engineering professor Horace R. Thayer, writing in 1913, considered the Howe truss to be the best form of wooden truss bridge, and believed it to be the most commonly used truss bridge in the United States at that time. All-iron Howe trusses began to be built about 1845. Examples include a long iron Howe truss built for the Boston and Providence Railroad and a long railroad bridge over the Ohio and Erie Canal in Cleveland. Iron, however, was the preferred material for road and railroad bridges, and the Howe truss did not adapt well to all-iron construction. The Pratt truss's single diagonal bracing system meant lower cost, and this, along with its ability to use wrought-iron stringers under railroad rails and ties, led bridge builders to favor the Pratt over the Howe. Heavier live loads, particularly by railroads, led bridge builders to favor plate girder and Towne lattice bridges for spans less than , and Warren girder bridges for all other spans. Use in architecture Trusses have been widely used in architecture since ancient times. The Howe truss is widely used in wood buildings, particularly in providing roof support. See also White Mountain Central Railroad, a heritage railroad in New Hampshire with what "appears to be the only Howe railroad bridge left in the world." References Notes Citations Bibliography Trusses Truss bridges by type
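The stress pattern described in the physics section can be illustrated numerically. The following is a minimal sketch, assuming an idealized, simply supported parallel-chord truss under a uniform load and using the standard beam analogy (chord force is roughly the bending moment divided by truss height; web-member force is roughly the shear divided by the sine of the diagonal angle). The panel count, dimensions, and load are invented for illustration, and a real Howe truss with prestressed counters is statically indeterminate, so this is only a first approximation:

```python
import math

def panel_forces(n_panels=8, panel_len=3.0, height=3.0, w=10.0):
    """Approximate member forces; w is a uniform load per unit length."""
    span = n_panels * panel_len
    theta = math.atan2(height, panel_len)   # diagonal angle from horizontal
    rows = []
    for i in range(n_panels + 1):
        x = i * panel_len
        shear = w * (span / 2 - x)          # beam shear at this panel point
        moment = w * x * (span - x) / 2     # beam bending moment
        chord = moment / height             # ~ axial force in the chords
        diag = shear / math.sin(theta)      # ~ force in the adjacent diagonal
        rows.append((x, shear, chord, diag))
    return rows

for x, v, c, d in panel_forces():
    print(f"x={x:5.1f}  shear={v:7.1f}  chord~{c:8.1f}  diag~{d:8.1f}")
```

Running it shows the chord force peaking at mid-span and the diagonal force peaking at the ends, matching the qualitative description above.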
Howe truss
[ "Technology" ]
3,446
[ "Structural system", "Trusses" ]
8,634,709
https://en.wikipedia.org/wiki/Father%20figure
A father figure is usually an older man, normally one with power, authority, or strength, with whom one can identify on a deeply psychological level and who generates emotions generally felt towards one's father. Despite the literal term "father figure", the role of a father figure is not limited to the biological parent of a person (especially a child), but may be played by uncles, grandfathers, elder brothers, family friends, role models, or others. The similar term mother figure refers to an older woman. Several studies have suggested that positive father figures and mother figures (whether biological or not) are generally associated with healthy child development, both in boys and in girls. Definition The International Dictionary of Psychology defines "father figure" as "A man to whom a person looks up and whom he treats like a father." The APA Concise Dictionary of Psychology offers a more extensive definition: "a substitute for a person's biological father, who performs typical paternal functions and serves as an object of identification and attachment. [Father figures] may include such individuals as adoptive fathers, stepfathers, older brothers, teachers and others." This dictionary goes on to state that the term is synonymous with father surrogate and surrogate father. The former definition suggests that the term applies to any man, while the latter excludes biological fathers. Significance in child development As a primary caregiver, a father or father figure fills a key role in a child's life. Attachment theory offers some insight into how children relate to their fathers, and when they seek out a separate "father figure". According to a 2010 study by Posada and Kaloustian, the way that an infant models their attachment to their caregiver has a direct impact on how the infant responds to other people. These attachment-driven responses may persist throughout life. Studies by Parke and Clark-Stewart (2011) and Lamb (2010) have shown that fathers are more likely than mothers to engage in rough-and-tumble play with children. Other functions a father figure can provide include: helping establish personal boundaries between mother and child; promoting self-discipline, teamwork and a sense of gender identity; offering a window into the wider world; and providing opportunities for both idealization and its realistic working-through. Absence Studies have shown that the lack of a father figure in a child's life can have severe negative psychological impacts upon a child's personality and psychology, whereas positive father figures play a significant role in a child's development. Research has found a strong negative causal effect of father-figure absence on a child's social-emotional development, specifically an increase in externalizing behaviors. Further, if the absence occurred in early childhood, the effects are more pronounced for boys than for girls. In adolescence, there is also strong evidence that father-figure absence increases adolescent risk behaviors, such as substance use and early childbearing. There is a strong and consistent finding that absence negatively affects high school graduation, resulting in a lower graduation rate. There is little evidence that the absence of a father figure affects children's and adolescents' cognitive ability. Examinations of the long-term effects of father-figure absence into adulthood provide strong evidence of a causal effect of father absence on adult mental health. 
Results indicate that psychological harm due to father-figure absence in childhood persists throughout life. There is only weak evidence that father-figure absence influences adult financial or family outcomes. A few studies have indicated a negative correlation with adult employment. Evidence of negative effects on marriage and divorce, income, or college education is inconsistent. In psychoanalytic theory From a psychoanalytic point of view, Sigmund Freud described the father figure as essential in child development, especially in the pre-Oedipal and Oedipal stages. Particularly for boys, resolving the Oedipal stage by developing a loving attachment to the father figure is crucial and healthy. In Freud's theory, boys perceive the father figure as a rival: a figure who causes them to experience guilt and fear, who checks their incestuous sexual impulses, and who serves as an object of enmity and hatred. Dorothy Burlingham also mentioned that Freud perceived father figures in a more positive light, idealizing the figure as a "protector" who is "great" and "God-like" in the child's perspective. Examples in history and popular culture Leaders such as Franklin D. Roosevelt have been seen as acting as father figures for their followers, while a similar role may be played by the therapist in the transference. Lord Durham adopted his father-in-law, Charles Grey, as a father figure, the consequent ambivalence in their relationship impacting negatively on their work for the Great Reform Act. Harry Potter has been seen as seeking a succession of father figures, from Rubeus Hagrid to Albus Dumbledore, contrasted with the role of Lord Voldemort as the counterpart and negative aspect of the father figure. Kingsley Martin said of Leonard Woolf that "he was always ready to advise me, and became, I think, something of a Father Figure to me". See also References Developmental psychology Fatherhood Interpersonal relationships Male stock characters Positions of authority
Father figure
[ "Biology" ]
1,073
[ "Behavior", "Developmental psychology", "Behavioural sciences", "Interpersonal relationships", "Human behavior" ]
8,635,114
https://en.wikipedia.org/wiki/Pompeiu%27s%20theorem
Pompeiu's theorem is a result of plane geometry, discovered by the Romanian mathematician Dimitrie Pompeiu. The theorem is simple, but not classical. It states the following: Given an equilateral triangle ABC in the plane, and a point P in the plane of the triangle ABC, the lengths PA, PB, and PC form the sides of a (maybe, degenerate) triangle. The proof is quick. Consider a rotation of 60° about the point B. Assume A maps to C, and P maps to P′. Then PB = P′B, and ∠PBP′ = 60°. Hence triangle PBP′ is equilateral and PP′ = PB. The rotation also carries the segment PA onto P′C, so P′C = PA. Thus, triangle PCP′ has sides equal to PA, PB, and PC and the proof by construction is complete (see drawing). Further investigations reveal that if P is not in the interior of the triangle, but rather on the circumcircle, then PA, PB, PC form a degenerate triangle, with the largest being equal to the sum of the others; this observation is also known as Van Schooten's theorem. Generally, the point P and the lengths PA, PB, and PC to the vertices of the equilateral triangle define two equilateral triangles (the larger and the smaller) with sides b and d, given by b² = (PA² + PB² + PC²)/2 + 2√3·△ and d² = (PA² + PB² + PC²)/2 − 2√3·△. The symbol △ denotes the area of the triangle whose sides have lengths PA, PB, PC. Pompeiu published the theorem in 1936; however August Ferdinand Möbius had already published a more general theorem about four points in the Euclidean plane in 1852. In this paper Möbius also derived the statement of Pompeiu's theorem explicitly as a special case of his more general theorem. For this reason, the theorem is also known as the Möbius-Pompeiu theorem. External links MathWorld's page on Pompeiu's Theorem Pompeiu's theorem at cut-the-knot.org Notes Elementary geometry Theorems about equilateral triangles Theorems about triangles and circles Articles containing proofs
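The theorem and the side formulas above are easy to check numerically. Below is a minimal sketch (the unit triangle and the random sampling window are arbitrary choices): for each random point P it verifies the possibly degenerate triangle inequality for PA, PB, PC, and confirms that one of the two derived equilateral side lengths b, d recovers the side of the original triangle.

```python
import math, random

def check(n=10_000):
    # Equilateral triangle with unit side length.
    A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)
    for _ in range(n):
        P = (random.uniform(-2, 3), random.uniform(-2, 3))
        pa, pb, pc = (math.dist(P, V) for V in (A, B, C))
        # Triangle inequality, possibly degenerate (P on the circumcircle):
        assert pa + pb >= pc - 1e-9
        assert pb + pc >= pa - 1e-9
        assert pa + pc >= pb - 1e-9
        # Heron-type identity: 16*area^2 = (sum of squares)^2 - 2*(sum of 4th powers)
        s2 = pa*pa + pb*pb + pc*pc
        q = pa**4 + pb**4 + pc**4
        area = math.sqrt(max(s2 * s2 - 2 * q, 0.0)) / 4
        b2 = s2 / 2 + 2 * math.sqrt(3) * area
        d2 = s2 / 2 - 2 * math.sqrt(3) * area
        # One of the two derived sides must equal the original side (1 here):
        assert min(abs(b2 - 1), abs(d2 - 1)) < 1e-6

check()
print("All samples consistent with the theorem and the side formulas.")
```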
Pompeiu's theorem
[ "Mathematics" ]
429
[ "Elementary mathematics", "Articles containing proofs", "Elementary geometry" ]
8,635,379
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion%20system
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space. Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology and physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form ∂_t q = D ∇²q + R(q), where q(x, t) represents the unknown vector function, D is a diagonal matrix of diffusion coefficients, and R accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structure like dissipative solitons. Such patterns have been dubbed "Turing patterns". Each function, for which a reaction diffusion differential equation holds, represents in fact a concentration variable. One-component reaction–diffusion equations The simplest reaction–diffusion equation, in one spatial dimension in plane geometry, ∂_t u = D ∂_x² u + R(u), is also referred to as the Kolmogorov–Petrovsky–Piskunov equation. If the reaction term vanishes, then the equation represents a pure diffusion process. The corresponding equation is Fick's second law. The choice R(u) = u(1 − u) yields Fisher's equation that was originally used to describe the spreading of biological populations, the Newell–Whitehead-Segel equation with R(u) = u(1 − u²) to describe Rayleigh–Bénard convection, the more general Zeldovich–Frank-Kamenetskii equation with R(u) = u(1 − u)e^(−β(1−u)) and 0 < β < ∞ (the Zeldovich number) that arises in combustion theory, and its particular degenerate case with R(u) = u² − u³ that is sometimes referred to as the Zeldovich equation as well. The dynamics of one-component systems is subject to certain restrictions as the evolution equation can also be written in the variational form ∂_t u = −δF/δu and therefore describes a permanent decrease of the "free energy" F given by the functional F = ∫ [(D/2)(∂_x u)² − V(u)] dx with a potential V(u) such that R(u) = dV/du. In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form u(x, t) = û(ξ) with ξ = x − ct, where c is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonous stationary solutions (e.g. localized domains composed of a front-antifront pair) are unstable. For D = 1, there is a simple proof for this statement: if u₀(x) is a stationary solution and u = u₀(x) + ũ(x, t) is an infinitesimally perturbed solution, linear stability analysis yields the equation ∂_t ũ = ∂_x² ũ − U(x) ũ, with U(x) = −R′(u) evaluated at u = u₀(x). With the ansatz ũ = ψ(x) e^(−λt) we arrive at the eigenvalue problem Hψ = λψ with H = −∂_x² + U(x), an eigenvalue problem of Schrödinger type where negative eigenvalues result in the instability of the solution. Due to translational invariance ψ = ∂_x u₀(x) is a neutral eigenfunction with the eigenvalue λ = 0, and all other eigenfunctions can be sorted according to an increasing number of nodes, with the magnitude of the corresponding real eigenvalue increasing monotonically with the number of zeros. 
The eigenfunction ψ = ∂_x u₀(x) of a non-monotonic stationary solution has at least one zero, so the corresponding eigenvalue λ = 0 cannot be the lowest one, thereby implying instability. To determine the velocity c of a moving front, one may go to a moving coordinate system and look at stationary solutions: D ∂_ξ² û(ξ) + c ∂_ξ û(ξ) + R(û(ξ)) = 0. This equation has a nice mechanical analogue as the motion of a mass D with position û in the course of the "time" ξ under the force R with the damping coefficient c, which allows for a rather illustrative access to the construction of different types of solutions and the determination of c. When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability. Two-component reaction–diffusion equations Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion. A linear stability analysis however shows that when linearizing the general two-component system ∂_t u = D_u ∂_x² u + F(u, v), ∂_t v = D_v ∂_x² v + G(u, v), a plane wave perturbation (ũ, ṽ) e^(ikx) of the stationary homogeneous solution will satisfy d/dt (ũ, ṽ) = −k² (D_u ũ, D_v ṽ) + R′ (ũ, ṽ), where R′ denotes the Jacobian of the reaction terms. Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian of the reaction function. In particular, if a finite wave vector k is supposed to be the most unstable one, the Jacobian must have the signs (+ −; + −), (+ +; − −), (− +; − +) or (− −; + +). This class of systems is named activator-inhibitor system after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation ∂_t u = d_u² ∇²u + f(u) − σv, ∂_t v = d_v² ∇²v + u − v, with f(u) = λu − u³ − κ, which describes how an action potential travels through a nerve. Here, σ and λ are positive constants. When an activator-inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number k = 0 or a Turing bifurcation to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns. For the Fitzhugh–Nagumo example, neutral stability curves mark the boundary of the linearly stable region for the Turing and Hopf bifurcations. If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle. Three- and more-component reaction–diffusion equations For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. the Belousov–Zhabotinsky reaction, for blood clotting, fission waves or planar gas discharge systems. 
It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback). An introduction and systematic overview of the possible phenomena, depending on the properties of the underlying system, is given in the literature. Applications and universality In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology and may even be related to animal coats and skin pigmentation. Other applications of reaction–diffusion equations include ecological invasions, spread of epidemics, tumour growth, dynamics of fission waves, wound healing and visual hallucinations. Another reason for the interest in reaction–diffusion systems is that although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment. Experiments Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors or filled capillary tubes may be used. Second, temperature pulses on catalytic surfaces have been investigated. Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems. Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas or semiconductors can be described in a reaction–diffusion approach. For these systems various experiments on pattern formation have been carried out. Numerical treatments A reaction–diffusion system can be solved by using methods of numerical mathematics. Several numerical treatments exist in the research literature. Numerical solution methods for complex geometries have also been proposed. Reaction–diffusion systems are described at the highest level of detail by particle-based simulation tools such as SRSim or ReaDDy, which employ, among other methods, reversible interacting-particle reaction dynamics. See also Autowave Diffusion-controlled reaction Chemical kinetics Phase space method Autocatalytic reactions and order creation Pattern formation Patterns in nature Periodic travelling wave Self-similar solutions Diffusion equation Stochastic geometry MClone The Chemical Basis of Morphogenesis Turing pattern Multi-state modeling of biomolecules Examples Fisher's equation Zeldovich–Frank-Kamenetskii equation FitzHugh–Nagumo model Wrinkle paint References External links Reaction–Diffusion by the Gray–Scott Model: Pearson's parameterization a visual map of the parameter space of Gray–Scott reaction diffusion. A thesis on reaction–diffusion patterns with an overview of the field RD Tool: an interactive web application for reaction-diffusion simulation Mathematical modeling Parabolic partial differential equations Reaction mechanisms Functions of space and time
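As a concrete illustration of the one-component theory above, here is a minimal sketch that integrates the Fisher–KPP equation ∂_t u = D ∂_x² u + u(1 − u) with an explicit finite-difference scheme. The grid, time step, and initial condition are illustrative choices, with the time step kept inside the usual stability limit dt ≤ dx²/(2D):

```python
import numpy as np

D, L, nx = 1.0, 200.0, 1000
dx = L / nx
dt = 0.4 * dx * dx / D                   # inside the limit dt <= dx^2/(2D)
x = np.linspace(0.0, L, nx)
u = np.where(x < 10.0, 1.0, 0.0)         # step initial condition seeds a front

def step(u):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0               # crude no-flux ends for this sketch
    return u + dt * (D * lap + u * (1.0 - u))

t_steps = 5000
for _ in range(t_steps):
    u = step(u)

# The front should travel at roughly the classical KPP speed c = 2*sqrt(D):
front = x[np.argmin(np.abs(u - 0.5))]
print(f"front at x ~ {front:.1f}; expected ~ {10 + 2*np.sqrt(D)*t_steps*dt:.1f}")
```

Swapping the logistic reaction term for a two-component pair such as the FitzHugh–Nagumo system turns the same scheme into a pattern-forming simulation of the kind discussed above.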
Reaction–diffusion system
[ "Physics", "Chemistry", "Mathematics" ]
1,944
[ "Reaction mechanisms", "Mathematical modeling", "Functions of space and time", "Applied mathematics", "Physical organic chemistry", "Spacetime", "Chemical kinetics" ]
8,635,577
https://en.wikipedia.org/wiki/Tetrahydroxyborate
Tetrahydroxyborate is an inorganic anion with the chemical formula B(OH)4−, also written [B(OH)4]−. It contributes no colour to tetrahydroxyborate salts. It is found in the mineral hexahydroborite, Ca[B(OH)4]2·2H2O, originally formulated CaB2O4·6H2O. It is one of the boron oxoanions, and acts as a weak base. The systematic names are tetrahydroxyboranuide (substitutive) and tetrahydroxidoborate(1−) (additive). It can be viewed as the conjugate base of boric acid. Structure Tetrahydroxyborate has a symmetric tetrahedral geometry, isoelectronic with the hypothetical compound orthocarbonic acid, C(OH)4. Chemical properties Basicity Tetrahydroxyborate acts as a weak Brønsted–Lowry base because it can assimilate a proton (H+), yielding boric acid with release of water: B(OH)4− + H+ ⇌ B(OH)3 + H2O. It can also release a hydroxide anion OH−, thus acting as a classical Arrhenius base: B(OH)4− ⇌ B(OH)3 + OH− (pK = 9.14 to the left). Thus, when boric acid is dissolved in pure (neutral) water, most of it will remain as undissociated boric acid, with only a small fraction converted to tetrahydroxyborate ions. With diols In aqueous solution, the tetrahydroxyborate anion reacts with cis-vicinal diols (organic compounds containing similarly-oriented hydroxyl groups in adjacent carbon atoms), such as mannitol, sorbitol, glucose and glycerol, to form anion esters containing one or two five-member rings. For example, the reaction with mannitol proceeds through a mono-ester intermediate, giving the overall reaction B(OH)4− + 2 mannitol ⇌ bis(mannitol) borate ester− + 4 H2O. These mannitoborate esters are fairly stable and thus deplete the tetrahydroxyborate from the solution. The addition of mannitol to an initially neutral solution containing boric acid or borates lowers the pH enough for it to be titrated by a strong base such as NaOH, including with an automated potentiometric titrator. This is a reliable method to assay the amount of borate present in the solution. Other chemical reactions Upon treatment with a strong acid, a metal tetrahydroxyborate converts to boric acid and the metal salt. Oxidation of tetrahydroxyborate with hydrogen peroxide gives the perborate anion [B2(O2)2(OH)4]2−: 2 B(OH)4− + 2 H2O2 → [B2(O2)2(OH)4]2− + 4 H2O. When heated to a high temperature, tetrahydroxyborate salts decompose to produce metaborate salts and water, or to produce boric acid and a metal hydroxide: n B(OH)4− → (BO2−)n + 2n H2O; B(OH)4− → B(OH)3 + OH−. Production Tetrahydroxyborate salts are produced by treating boric acid with an alkali such as sodium hydroxide, with catalytic amounts of water. Other borate salts may be obtained by altering the process conditions. Uses Tetrahydroxyborate can be used as a cross-link in polymers. Occurrence The tetrahydroxyborate anion is found in Na[B(OH)4], Na2[B(OH)4]Cl and CuII[B(OH)4]Cl. See also Borate Tetrafluoroborate References Borates Anions
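Since the article quotes pK = 9.14 for the boric acid/tetrahydroxyborate couple, the speciation as a function of pH follows directly from the Henderson–Hasselbalch relation. A minimal sketch (the pH values are arbitrary choices):

```python
PKA = 9.14  # B(OH)3 + H2O <=> B(OH)4- + H+, pK quoted above

def borate_fraction(ph: float) -> float:
    """Fraction of total boron present as B(OH)4- at a given pH."""
    ratio = 10 ** (ph - PKA)        # [B(OH)4-] / [B(OH)3]
    return ratio / (1 + ratio)

for ph in (7.0, 9.14, 11.0):
    print(f"pH {ph:5.2f}: {100 * borate_fraction(ph):5.1f}% borate")
```

At pH 7 this gives well under 1% borate, consistent with boric acid remaining largely undissociated in neutral water, while at pH 11 the borate anion dominates.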
Tetrahydroxyborate
[ "Physics", "Chemistry" ]
688
[ "Ions", "Matter", "Anions" ]
8,635,864
https://en.wikipedia.org/wiki/Amorphous%20computing
Amorphous computing refers to computational systems that use very large numbers of identical, parallel processors each having limited computational ability and local interactions. The term amorphous computing was coined at MIT in 1996 in a paper entitled "Amorphous Computing Manifesto" by Abelson, Knight, Sussman, et al. Examples of naturally occurring amorphous computations can be found in many fields, such as developmental biology (the development of multicellular organisms from a single cell), molecular biology (the organization of sub-cellular compartments and intra-cell signaling), neural networks, and chemical engineering (non-equilibrium systems). The study of amorphous computation is hardware agnostic: it is not concerned with the physical substrate (biological, electronic, nanotech, etc.) but rather with the characterization of amorphous algorithms as abstractions with the goal of both understanding existing natural examples and engineering novel systems. Amorphous computers tend to have many of the following properties: Implemented by redundant, potentially faulty, massively parallel devices. Devices having limited memory and computational abilities. Devices being asynchronous. Devices having no a priori knowledge of their location. Devices communicating only locally. Exhibit emergent or self-organizational behavior (patterns or states larger than an individual device). Fault-tolerant, especially to the occasional malformed device or state perturbation. Algorithms, tools, and patterns (Some of these algorithms have no known names. Where a name is not known, a descriptive one is given.) "Fickian communication". Devices communicate by generating messages which diffuse through the medium in which the devices dwell. Message strength will follow the inverse square law as described by Fick's law of diffusion. Examples of such communication are common in biological and chemical systems. "Link diffusive communication". Devices communicate by propagating messages down links wired from device to device. Unlike "Fickian communication", there is not necessarily a diffusive medium in which the devices dwell and thus the spatial dimension is irrelevant and Fick's law is not applicable. Examples are found in Internet routing algorithms such as the diffusing update algorithm. Most algorithms described in the amorphous computing literature assume this kind of communication. "Wave Propagation". (Ref 1) A device emits a message with an encoded hop-count. Devices which have not seen the message previously increment the hop count and re-broadcast it. A wave propagates through the medium and the hop-count across the medium will effectively encode a distance gradient from the source. "Random ID". Each device gives itself a random id, the random space being sufficiently large to preclude duplicates. "Growing-point program". (Coore). Processes that move among devices according to 'tropism' (movement of an organism due to external stimuli). "Wave coordinates". DARPA PPT slides. To be written. "Neighborhood query". (Nagpal) A device samples the state of its neighbors by either a push or pull mechanism. "Peer-pressure". Each device maintains a state and communicates this state to its neighbors. Each device uses some voting scheme to determine whether or not to change state to its neighbor's state. The algorithm partitions space according to the initial distributions and is an example of a clustering algorithm. "Self maintaining line". (Lauren, Clement). 
A gradient is created from one end-point on a plane covered with devices via Link Diffusive Communication. Each device is aware of its value in the gradient and the id of its neighbor that is closer to the origin of the gradient. The opposite end-point detects the gradient and informs its closer neighbor that it is part of a line. This propagates up the gradient forming a line which is robust against disruptions in the field. (Illustration needed). "Club Formation". (Coore, Coore, Nagpal, Weiss). Local clusters of processors elect a leader to serve as a local communication hub. "Coordinate formation" (Nagpal). Multiple gradients are formed and used to form a coordinate system via triangulation. Researchers and labs Hal Abelson, MIT Jacob Beal, graduate student MIT (high level languages for amorphous computing) Daniel Coore, University of West Indies (growing point language, tropism, grown inverter series) Nikolaus Correll, University of Colorado (robotic materials) Tom Knight, MIT (computation with synthetic biology) Radhika Nagpal, Harvard (self-organizing systems) Zack Booth Simpson, Ellington Lab, Univ. of Texas at Austin. (Bacterial edge detector) Gerry Sussman, MIT AI Lab Ron Weiss, MIT (rule triggering, microbial colony language, coli pattern formation) See also Unconventional computing Documents The Amorphous Computing Home Page A collection of papers and links at the MIT AI lab Amorphous Computing (Communications of the ACM, May 2000) A review article showing examples from Coore's Growing Point Language as well as patterns created from Weiss's rule triggering language. "Amorphous computing in the presence of stochastic disturbances" A paper investigating the ability of Amorphous computers to deal with failing components. Amorphous Computing Slides from DARPA talk in 1998 An overview of ideas and proposals for implementations Amorphous and Cellular Computing PPT from 2002 NASA Lecture Almost the same as above, in PPT format Infrastructure for Engineered Emergence on Sensor/Actuator Networks, Beal and Bachrach, 2006. An amorphous computing language called "Proto". Self-repairing Topological Patterns Clement, Nagpal. Algorithms for self-repairing and self-maintaining line. Robust Methods of Amorphous Synchronization, Joshua Grochow Methods for inducing global temporal synchronization. Programmable Self-Assembly: Constructing Global Shape Using Biologically-Inspired Local Interactions and Origami Mathematics and Associated Slides Nagpal PhD Thesis A language to compile local-interaction instructions from a high-level description of an origami-like folded structure. Towards a Programmable Material, Nagpal Associated Slides Similar outline to previous paper Self-Healing Structures in Amorphous Computing Zucker Methods for detecting and maintaining topologies inspired by biological regeneration. Resilient serial execution on amorphous machines, Sutherland Master's Thesis A language for running serial processes on amorphous computers Paradigms for Structure in an Amorphous Computer, 1997 Coore, Nagpal, Weiss Techniques for creating hierarchical order in amorphous computers. Organizing a Global Coordinate System from Local Information on an Amorphous Computer, 1999 Nagpal. Techniques for creating coordinate systems by gradient formation and analyzes precision limits. Amorphous Computing: examples, mathematics and theory, 2013 W Richard Stark. 
This paper presents nearly 20 examples varying from simple to complex, standard mathematical tools are used to prove theorems and compute expected behavior, four programming styles are identified and explored, three uncomputability results are proved, and the computational foundations of a complex, dynamic intelligence system are sketched. Parallel computing Classes of computers
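The "Wave Propagation" gradient described above is straightforward to simulate. The sketch below assumes devices scattered uniformly at random in a unit square with a fixed radio radius (all parameters are invented for illustration) and computes each device's hop count from a source by breadth-first rebroadcast, with devices ignoring repeat messages as in the algorithm:

```python
import math, random
from collections import deque

def hop_gradient(n=200, radius=0.12, source=0, seed=1):
    """Hop-count distance gradient over a random geometric network."""
    random.seed(seed)
    pts = [(random.random(), random.random()) for _ in range(n)]

    def neighbors(i):
        return [j for j in range(n)
                if j != i and math.dist(pts[i], pts[j]) <= radius]

    hops = {source: 0}
    frontier = deque([source])
    while frontier:                      # breadth-first rebroadcast wave
        i = frontier.popleft()
        for j in neighbors(i):
            if j not in hops:            # devices ignore repeat messages
                hops[j] = hops[i] + 1
                frontier.append(j)
    return hops

hops = hop_gradient()
print(f"{len(hops)} devices reached; max hop distance {max(hops.values())}")
```

The resulting hop counts approximate geometric distance from the source, which is exactly what the "Coordinate formation" pattern triangulates from several such gradients.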
Amorphous computing
[ "Technology" ]
1,468
[ "Classes of computers", "Computers", "Computer systems" ]
8,635,906
https://en.wikipedia.org/wiki/Brown%20truss
A Brown truss is a type of bridge truss, used in covered bridges. It is noted for its economical use of materials and is named after the inventor, Josiah Brown Jr., of Buffalo, New York, who patented it July 7, 1857, as US patent 17,722. Description The Brown truss is a box truss that is a through truss (as contrasted with a deck truss) and consists of diagonal cross compression members connected to horizontal top and bottom stringers. There may be vertical or almost vertical tension members (the diagram shows these members, while the patent application diagram does not) but there are no vertical members in compression. In practice, when used in a covered bridge, the most common application, the truss is protected with outside sheathing. The floor and roof are also trusses, but are horizontal and serve to give the truss rigidity. The bottoms of the diagonals tend to protrude below the sheathing. The Brown truss is noted for economy of materials as it can be built with very little metal. Patent Brown's patent claims did not actually address the economy afforded by lack of vertical members ("braces"). Instead he focused on the improved strength over previous trusses that had members ("braces" in his terminology) come to the horizontal chord near to each other but not exactly together (at "gains" in his terminology), by having several members come together in the same place. From the patent text: I do not claim broadly furnishing the main or counter braces with gains and passing them between the timbers of the chords; What I do claim as my invention, and desire to secure by letters Patent, is— Providing each of the main and counter braces with two gains at top and bottom, and each of the timbers of the chord with a gain at the point where the braces are applied corresponding with the gains in the braces, and the braces thus formed up between the timber, with the gains of the braces in such relation to the gains of the timbers that when the timbers of the chords are brought together they are combined and become, as it were, only one piece, no part of which can be operated upon or affected independently of the other by the downward and upward thrusts common to truss bridges, even if the bolt which passes laterally through and intersects each set of braces and the timbers of the chord were removed. History The Brown truss enjoyed a brief period of favor in the 1860s, and is known to have been used in four covered bridges in Michigan, the Ada Covered Bridge, the Fallasburg Bridge, Whites Bridge and one other. The design did not appear to gain wide acceptance as modern bridges tend to be Howe, Pratt, bowstring or Warren trusses. See also Truss bridge for bridges employing various truss types References Truss bridges by type American inventions Trusses
Brown truss
[ "Technology" ]
579
[ "Structural system", "Trusses" ]
8,636,483
https://en.wikipedia.org/wiki/Taipoxin
Taipoxin is a potent myo- and neurotoxin that was isolated from the venom of the coastal taipan Oxyuranus scutellatus, also known as the common taipan. Taipoxin, like many other pre-synaptic neurotoxins, is a phospholipase A2 (PLA2) toxin; these toxins inhibit or completely block the release of the motor transmitter acetylcholine and lead to death by paralysis of the respiratory muscles (asphyxia). It is the most lethal neurotoxin isolated from any snake venom to date. The molecular mass of the heterotrimer is about 46,000 daltons, comprising α, β and γ monomers in a 1:1:1 ratio. The median lethal dose (LD50) for mice is around 1–2 μg/kg (subcutaneous injection). History Taipoxin and other PLA2 toxins have evolved from digestive PLA2 enzymes. The venom still functions with the almost identical multi-disulphide-bridged protein PLA2 scaffold, which provides the hydrolytic mechanism of the enzyme. It is thought, however, that the strict evolutionary selection pressures of prey immobilisation, and therefore extended feeding, led the PLA2 enzyme to lose its so-called pancreatic loop and to acquire mutations allowing the toxin to bind the pre-synaptic membranes of motor neuron end plates. Structure Taipoxin is a ternary complex consisting of three subunits of α, β and γ monomers in a 1:1:1 ratio, also called the A, B and C homologous subunits. These subunits are equally distributed across the structure, and together the three-dimensional structures of these three monomers form a shared core of three α-helices, a Ca2+ binding site and a hydrophobic channel to which the fatty acyl chains bind. The α and β subunits consist of 120 amino acid residues which are cross-linked by 7 disulfide bridges. The α subunit is very basic (pI > 10) and is the only one that shows neurotoxicity. The β subunit is neutral and can be separated into two isoforms; β1 and β2 are interchangeable but differ slightly in amino acid composition. The γ subunit contains 135 amino acid residues which are cross-linked by 8 disulfide bridges. It is very acidic due to 4 sialic acid residues, which might be important for complex formation. The γ subunit also seems to function as a protector of the α subunit, preventing fast renal clearance or proteolytic degradation. It also boosts the specificity for the target and could be involved in the binding of the α unit. The whole complex is slightly acidic with a pI of 5, but under a lower pH and/or high ionic strength the subunits dissociate. Like the PLA2 enzyme, the PLA2 toxin is Ca2+ dependent for hydrolysing fatty acyl ester bonds at the sn-2 position of glycerophospholipids. Depending on disulphide bridge positions and the lengths of the C-termini, these PLA2 enzymes/toxins are categorized into three classes. These classes are also an indication of toxicity, as PLA2s from pancreatic secretions, bee venom or the weak elapid venoms are grouped into class I, whereas PLA2s from the more potent viperid venoms, which cause inflammatory exudates, are grouped into class II. However, most snake venoms are capable of more than one toxic activity, such as cytotoxicity, myotoxicity, neurotoxicity, anticoagulant activity and hypotensive effects. Isolation process Taipoxin can be purified from the venom of the coastal taipan by gel filtration chromatography. In addition to taipoxin, the venom consists of many different components, responsible for the complex symptoms. Mechanism of action In the beginning, taipoxin was thought to be only neurotoxic. 
Studies showed an increase in acetylcholine release, indicating a presynaptic activity. Further experiments showed that taipoxin inhibited the responses to electrical stimuli more strongly than the response to additionally administered acetylcholine. This led to the conclusion that taipoxin has pre- and postsynaptic effects. In addition to the increased acetylcholine release, it inhibits vesicular recycling. More recent studies showed that the toxin has a myotoxic effect as well. The injection of taipoxin into the hind limbs of rats leads to oedema formation and muscle degeneration. The study also supports the findings by Fohlman that the α subunit yields the PLA2 potency, which is similar to the potency of notexin. Even so, the full potential of the raw toxin is only reached by the combination of the α and γ subunits. A similar experiment has been done refocusing on the neural components. 24 hours after the injection, the innervation was compromised to the extent that intact axons could no longer be identified. This showed that taipoxin-like toxins lead to the depletion of transmitters from the nerve terminals and to the degeneration of nerve terminals and intramuscular axons. In chromaffin cells taipoxin showed the ability to enter the cells via Ca2+-independent mechanisms. There it enhanced catecholamine release in depolarizing cells by disassembling F-actin in the cytoskeletal barrier. This could lead to a vesicle redistribution promoting immediate access into the subplasmalemmal area. Further research studies have found potential binding partners of taipoxin, which would give more insight into how taipoxin is transported to the nerve terminals and intramuscular axons. Toxicity The toxicity of taipoxin and other PLA2 toxins is often measured by their ability to cut short-chain phospholipids or phospholipid analogues. For taipoxin, PLA2 activity was measured at 0.4 mmol/min/mg, and the binding constant (K) of taipoxin would be equal to KTaipoxin = KA + KB + KC, as it consists of three enzymatic domains/subunits. However, no correlation was found between PLA2 activity and toxicity, as the pharmacokinetics and the membrane-binding properties are more important. More specific membrane binding would lead to accumulation of taipoxin in the plasma membranes of motor neurons. Treatment The treatment of choice is an antivenom produced by CSL Ltd in 1956 in Australia on the basis of immunised horse plasma. After being bitten, the majority of patients will develop systemic envenoming, of which clinical evidence is usually present within two hours. This effect can be delayed by applying first-aid measures, like immobilization. In addition to neurotoxins, taipan venom contains anticoagulants whose effect is also inhibited by the antivenom. Similar toxins Similar to taipoxin are toxins with different numbers of PLA2 subunits: notexin is a monomer from Notechis scutatus venom, β-bungarotoxin is a heterodimer from Chinese banded krait (Bungarus multicinctus) venom, and textilotoxin is a pentamer from eastern brown snake (Pseudonaja textilis) venom. References Neurotoxins Snake toxins Acetylcholine release inhibitors
Taipoxin
[ "Chemistry" ]
1,578
[ "Neurochemistry", "Neurotoxins" ]
8,636,724
https://en.wikipedia.org/wiki/Winnowing%20Basket%20%28constellation%29
The Winnowing Basket mansion (箕宿, pinyin: Jī Xiù) is one of the Twenty-Eight Mansions of the Chinese constellations. It is one of the eastern mansions of the Azure Dragon. Asterisms References Chinese constellations
Winnowing Basket (constellation)
[ "Astronomy" ]
51
[ "Chinese constellations", "Astronomy stubs", "Constellations" ]
8,636,753
https://en.wikipedia.org/wiki/%CE%93-Carotene
γ-Carotene (gamma-carotene) is a carotenoid, and is a biosynthetic intermediate for cyclized carotenoid synthesis in plants. It is formed from cyclization of lycopene by lycopene beta-cyclase. Along with several other carotenoids, γ-carotene is a vitamer of vitamin A in herbivores and omnivores. Carotenoids with a cyclized, beta-ionone ring can be converted to vitamin A, also known as retinol, by the enzyme beta-carotene 15,15'-dioxygenase; however, the bioconversion of γ-carotene to retinol has not been well characterized. γ-Carotene has tentatively been identified as a biomarker for green and purple sulfur bacteria in a sample from the 1.640 ± 0.003-Gyr-old Barney Creek Formation in Northern Australia, which comprises marine sediments. The tentative discovery of γ-carotene in marine sediments implies a past euxinic environment, where water columns were anoxic and sulfidic. This is significant for reconstructing past oceanic conditions, but so far γ-carotene has only been potentially identified in the one measured sample. Background γ-Carotene is a carotenoid, a class of pigments giving color to photosynthetic organisms. Specifically, γ-carotene may be derived from myxoxanthophyll found in cyanobacteria, Chlorobiaceae, and green non-sulfur bacteria (Chloroflexi). However, there are over 600 different carotenoids, each with a different structure and formula and thus a different absorption spectrum. In particular, Chromatiaceae lie between 1.5 and 24 meters deep in the water column, with more than 75% of the microbial blooms occurring above 12 meters deep. Other carotenoids such as chlorobactane and isorenieratene are also biomarkers for the presence of green sulfur bacteria. These carotenoids are indicators of the past aquatic geochemical environment of their source water. In particular, γ-carotene is an indicator of the depth at which oxic conditions give way to anoxic conditions, due to its relevance to green and purple sulfur bacteria, which occupy the boundary layer. Green sulfur bacteria are known to produce 2,3,6-trimethyl aryl isoprenoids, which are unambiguous, thus permitting the deduction of past aquatic geochemical environments. In γ-carotene, one end group of lycopene produces a β-ring via a β-cyclase enzyme. The other end retains an open-chain ψ-end. Preservation Biomarkers may be defined as the molecular remnants of lipids and other biological materials. Often, in sedimentary environments, lipids are decomposed into hydrocarbon skeletons, where they remain preserved in the geologic record over long timescales. Specifically, diagnostic biomarkers are used to investigate past paleo-environmental conditions such as salinity, temperature, and oxygen availability. In aquatic environments where green non-sulfur bacteria persist, organic carbon is remineralised into carbon dioxide and water such that only about 0.1% is deposited into the sedimentary record at the aquatic floor. Although γ-carotene is not a diagnostic biomarker, as it has only been tentatively discovered in a natural environment, it is considered a biomarker for green and purple sulfur bacteria. Unlike β-carotene, which occurs across a vast array of lineages in all three domains of life, γ-carotene is constrained to only a very few potential precursors. 
Genera of both the purple sulfur bacteria (Chromatiaceae) and the green sulfur bacteria contain γ-carotene, which after diagenesis retains a unique carbon skeleton; therefore, γ-carotene is identifiable through measurement techniques, namely gas chromatography–mass spectrometry. In some cases it is possible to discriminate between different sources of a biomarker using carbon isotopic fractionation techniques. Measurement techniques GC/MS Gas chromatography–mass spectrometry (GC/MS) is an analytical technique in geochemistry widely employed to identify and quantify organic compounds present in sedimentary rocks. The sample must be extracted from the source rock before the analysis may occur, and the yield is often less than 1% due to the thermal maturity of the source rock. The 1.640 ± 0.003-Gyr-old sample from the Barney Creek Formation underwent an extraction for γ-carotene and subsequent analysis with GC/MS, which showed a peak at m/z 125 indicating the presence of carotenoid derivatives eluting immediately after β-carotene and γ-carotene. Carbon Isotope Ratios Additional analysis of γ-carotene can be accomplished through the use of an isotope ratio mass spectrometer. Chromatiaceae are generally found to be depleted in δ13C, whereas Chlorobiaceae are enriched in δ13C in comparison to typical oxygenic bacteria, by roughly 7–8 per mil. The results from isotope ratio mass spectrometry and GC/MS together can accurately discriminate the presence of γ-carotene in an extraction from a sedimentary sample. The identification of γ-carotene through these methods would provide a compelling indication of a past euxinic environment, where water columns were anoxic and sulfidic. References Carotenoids Cyclohexenes
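The δ13C comparison above relies on standard delta notation, δ13C = (Rsample/Rstandard − 1) × 1000, reported in per mil against the VPDB standard. A minimal sketch, with the sample isotope ratio invented for illustration:

```python
R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta13c(r_sample: float) -> float:
    """delta-13C in per mil relative to VPDB."""
    return (r_sample / R_VPDB - 1) * 1000

# Hypothetical 13C/12C ratio for oxygenic-phototroph biomass:
r_oxygenic = 0.01095
print(f"oxygenic biomass: {delta13c(r_oxygenic):.1f} per mil")
# A Chlorobiaceae-derived carotenoid enriched by ~7-8 per mil would read:
print(f"Chlorobiaceae-like: {delta13c(r_oxygenic) + 7.5:.1f} per mil")
```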
Γ-Carotene
[ "Biology" ]
1,164
[ "Biomarkers", "Carotenoids" ]
8,637,261
https://en.wikipedia.org/wiki/Communications%2C%20Computers%2C%20and%20Networks
The Scientific American special issue on Communications, Computers, and Networks is a special issue of Scientific American dedicated to articles concerning impending changes to the Internet in the period prior to the expansion and mainstreaming of the World Wide Web via Mosaic and Netscape. This issue contained essays by a number of important computer science and internet pioneers. It bore the promotional cover title Scientific American presents the September 1991 Single Copy Issue: Communications, Computers, and Networks. Reviews University of California, Berkeley's September 1991 online journal, "Current Cites", commented: "Scientific American Special Issue on Communications, Computers and Networks 265(3) (September 1991): If you purchase a single issue of a magazine this year, this should be it. Filled with eleven articles by some of the biggest names in computer networking, this issue covers all bases and includes suggestions for further readings on the issues." In addition, a 4 September 1991 post to the University of Houston's "Computer System's Forum" also recommends the issue, stating: "These articles cover enough ground that I would recommend the issue to people getting ready to dive into the Internet or understand what is happening in networks these days." An additional post to this same forum on 21 August 1991 comments: "The authors are exceptional, including Mitch Kapor, Mark Weiser, Nicholas Negroponte, Alan Kay, Al Gore, and many others. An excellent issue." Response Of this issue, the Electronic Frontier Foundation stated in the article "Scientific American's September Issue to be Sent to All EFF Members" in its September 1991 newsletter: This month's Scientific American ("Communications, Computers, and Networks") must surely represent the most complete collection of articles and commentary on all aspects of networking to date. As such we feel strongly that it should be made available to as many people as possible. Because of this, we have purchased a large number of copies of this issue that we will be using for various purposes over the coming year. The first use will be to deliver a free copy of this issue to all our members. We are expecting the magazines to be delivered to us at the end of next week and they will go out to our members soon after. We realize that many of our members may already have a copy of their own, but if so we trust that they will use this extra copy to educate and enlighten someone else to the issues and potential of networking. Table of contents Gary Stix: "Profile: Information Theorist David A. Huffman" Michael Dertouzos: "Communications, Computers and Networks" Vint Cerf: "Networks" Larry Tesler: "Networked Computing in the 1990s" Mark Weiser: "The Computer for the 21st Century" Nicholas Negroponte: "Products and Services for Computer Networks" Lee Sproull and Sara Kiesler: "Computers, Networks and Work" Thomas W. Malone and John F. Rockart: "Computers, Networks and the Corporation" Alan Kay: "Computers, Networks and Education" Computers, Networks and Public Policy Al Gore: "Infrastructure for the Global Village" Anne W. Branscomb: "Common Law for the Electronic Frontier" Mitch Kapor: "Civil Liberties in Cyberspace" See also History of the Internet Footnotes References Scientific American September 1991 (Special Issue: Communications, Computers, and Networks), Volume 265, Number 3. External links UC Berkeley, "Current_Cites", Library Technology Watch Program - Sept. 1991 University of Houston Computer Science Forum - Sept. 
1991 Overview of the issue - Humanist Discussion Group, Sept. 1991 Texts related to the history of the Internet Computer books Scientific American
Communications, Computers, and Networks
[ "Technology" ]
734
[ "Works about computing", "Computer books" ]
8,638,682
https://en.wikipedia.org/wiki/List%20of%20wireless%20network%20protocols
A wide variety of different wireless data technologies exist, some in direct competition with one another, others designed for specific applications. Wireless technologies can be evaluated by a variety of different metrics, some of which are described in this entry. Standards can be grouped as follows in increasing range order: Personal area network (PAN) systems are intended for short range communication between devices typically controlled by a single person. Some examples include wireless headsets for mobile phones or wireless heart rate sensors communicating with a wrist watch. Some of these technologies include standards such as ANT, UWB, Bluetooth, Zigbee, and Wireless USB. Wireless Sensor Networks (WSN / WSAN) are, generically, networks of low-power, low-cost devices that interconnect wirelessly to collect, exchange, and sometimes act on data collected from their physical environments - "sensor networks". Nodes typically connect in a star or mesh topology. While most individual nodes in a WSAN are expected to have limited range (Bluetooth, Zigbee, 6LoWPAN, etc.), particular nodes may be capable of more expansive communications (Wi-Fi, Cellular networks, etc.) and any individual WSAN can span a wide geographical range. An example of a WSAN would be a collection of sensors arranged throughout an agricultural facility to monitor soil moisture levels, report the data back to a computer in the main office for analysis and trend modeling, and maybe turn on automatic watering spigots if the level is too low. For wider area communications, wireless local area network (WLAN) is used. WLANs are often known by their commercial product name Wi-Fi. These systems are used to provide wireless access to other systems on the local network such as other computers, shared printers, and other such devices or even the internet. Typically a WLAN offers much better speeds and delays within the local network than an average consumer's Internet access. Older systems that provide WLAN functionality include DECT and HIPERLAN. These, however, are no longer in widespread use. One typical characteristic of WLANs is that they are mostly very local, without the capability of seamless movement from one network to another. Cellular networks or WAN are designed for citywide/national/global coverage areas and seamless mobility from one access point (often defined as a base station) to another, allowing seamless coverage for very wide areas. Cellular network technologies are often split into second-generation (2G), 3G and 4G networks. Originally 2G networks were voice-centric or even voice-only digital cellular systems (as opposed to the analog 1G networks). Typical 2G standards include GSM and IS-95, with extensions via GPRS, EDGE and 1xRTT providing Internet access to users of originally voice-centric 2G networks. Both EDGE and 1xRTT are 3G standards, as defined by the ITU, but are usually marketed as 2.9G due to their comparatively low speeds and high delays when compared to true 3G technologies. True 3G systems such as EV-DO and W-CDMA (including HSPA and HSPA+) provide combined circuit-switched and packet-switched data and voice services from the outset, usually at far better data rates than 2G networks with their extensions. All of these services can be used to provide combined mobile voice access and Internet access at remote locations. 4G networks provide even higher bitrates and many architectural improvements, which are not necessarily visible to the consumer. 
The current 4G systems that are deployed widely are WiMAX and LTE. The two are pure packet-based networks without traditional voice circuit capabilities. These networks provide voice services via VoIP or VoLTE. Some systems are designed for point-to-point line-of-sight communications; once two such nodes get too far apart they can no longer communicate. Other systems are designed to form a wireless mesh network using one of a variety of routing protocols. In a mesh network, when nodes get too far apart to communicate directly, they can still communicate indirectly through intermediate nodes. Standards The following standards are included in this comparison. Wireless wide area network (WWAN) EDGE EV-DO x1 Rev 0, Rev A, Rev B and x3 standards. Flash-OFDM: FLASH (Fast Low-latency Access with Seamless Handoff)-OFDM (Orthogonal Frequency Division Multiplexing) GPRS HSPA D and U standards. LoRaWAN LTE RTT UMTS over W-CDMA UMTS-TDD WiMAX: 802.16 standard Narrowband IoT NR Wireless local area network (WLAN) Wi-Fi: 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ax standards. Wireless personal area network (WPAN) and most wireless sensor actor networks (WSAN) 6LoWPAN Bluetooth V4.0 with standard protocol and with low energy protocol IEEE 802.15.4-2006 (low-level protocol definitions corresponding to the OSI model physical and link layers. Zigbee, 6LoWPAN, etc. build upward in the protocol stack and correspond to the network and transport layers.) Thread (network protocol) UWB Wireless USB Zigbee ANT+ MiraOS, a wireless mesh network from LumenRadio Overview Peak bit rate and throughput When discussing throughput, there is often a distinction between the peak data rate of the physical layer, the theoretical maximum data throughput and typical throughput. The peak bit rate of the standard is the net bit rate provided by the physical layer in the fastest transmission mode (using the fastest modulation scheme and error code), excluding forward error correction coding and other physical layer overhead. The theoretical maximum throughput for the end user is clearly lower than the peak data rate due to higher-layer overheads. Even this is never possible to achieve unless the test is done under perfect laboratory conditions. The typical throughput is what users experience most of the time when well within the usable range of the base station. The typical throughput is hard to measure, and depends on many protocol issues such as transmission schemes (slower schemes are used at longer distance from the access point due to better redundancy), packet retransmissions and packet size. The typical throughput is often even lower because of other traffic sharing the same network or cell, interference, or limited fixed-line capacity from the base station onwards. Note that these figures cannot be used to predict the performance of any given standard in any given environment, but rather serve as benchmarks against which actual experience might be compared. Downlink is the throughput from the base station to the user handset or computer. Uplink is the throughput from the user handset or computer to the base station. Range is the maximum range at which data can be received at 25% of the typical rate. 
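The gap between peak bit rate and typical throughput can be made concrete with a back-of-the-envelope model. The sketch below is illustrative only: the overhead, retransmission, and airtime-sharing figures are invented placeholders, not measured values for any listed standard (the PHY rates are 802.11a/n-class peak rates, used purely as examples):

```python
def app_throughput(phy_mbps, header_overhead=0.15,
                   retransmit_rate=0.10, airtime_share=0.5):
    """Rough application-layer throughput from a PHY peak rate."""
    goodput = phy_mbps * (1 - header_overhead)   # strip headers/coding
    goodput *= (1 - retransmit_rate)             # lose resent packets
    goodput *= airtime_share                     # share the cell/channel
    return goodput

for phy in (54, 150, 600):                       # example 802.11-class rates
    print(f"PHY {phy:4d} Mbit/s -> typical ~ {app_throughput(phy):6.1f} Mbit/s")
```

Even this toy model shows why typical throughput sits at a fraction of the advertised peak rate, before range effects and slower fallback modulation schemes are taken into account.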
Typical spectral use Frequency See also Comparison of mobile phone standards Computer standards List of device bandwidths OFDM system comparison table Spectral efficiency comparison table NFC (Near field communication) RFID (Radio-frequency identification) CIR (Consumer Infrared) References External links Mobile WiMAX - Part I: A Technical Overview and Performance Evaluation Mobile WiMAX – Part II: A Comparative Analysis A Comparison of Bluetooth and IEEE 802.11 WLAN Trainer at different speeds IEEE 802.11 Standard Overview Computing comparisons Lists of standards Wireless networking Telecommunications standards
List of wireless network protocols
[ "Technology", "Engineering" ]
1,504
[ "Wireless networking", "Computer networks engineering", "Computing comparisons" ]
10,779,773
https://en.wikipedia.org/wiki/Trimethylenemethane
Trimethylenemethane (often abbreviated TMM) is a chemical compound with formula C(CH2)3, i.e. C4H6. It is a neutral free molecule with two unsatisfied valence bonds, and is therefore a highly reactive free radical. Formally, it can be viewed as an isobutylene molecule with two hydrogen atoms removed from the terminal methyl groups. Structure The electronic structure of trimethylenemethane was discussed in 1948. It is a neutral four-carbon molecule containing four pi molecular orbitals. When trapped in a cold solid matrix, the six hydrogen atoms of the molecule are equivalent. Thus, it can be described either as a zwitterion, or as the simplest conjugated hydrocarbon that cannot be given a Kekulé structure. It can be described as the superposition of three resonance states, each with the formal double bond directed to a different one of the three methylene groups. It has a triplet ground state (³A₂′/³B₂), and is therefore a diradical in the stricter sense of the term. Calculations predict a planar molecule with three-fold rotational symmetry, with approximate bond lengths 1.40 Å (C–C) and 1.08 Å (C–H). The H–C–H angle in each methylene is about 121°. Of the three singlet excited states, the first one, 1¹A₁ (1.17 eV above ground), is a closed-shell diradical with flat geometry and fully degenerate threefold (D3h) symmetry. The second one, 1¹B₂ (also at 1.17 eV), is an open-shell radical with a D3h-symmetric equilibrium between three equal geometries; each has a longer C–C bond (1.48 Å) and two shorter ones (1.38 Å), and is flat and bilaterally symmetric except that the longer methylene is twisted 79° out of the plane (C2 symmetry). The third singlet state, 2¹A₁/¹A₁′ (3.88 eV), is also a D3h-symmetric equilibrium of three geometries; each is planar with one shorter C–C bond and two longer ones (C2v symmetry). The next higher energy states are degenerate triplets, 1³A₁ and 2³B₂ (4.61 eV), with one excited electron; and a quintet state, ⁵B₂ (7.17 eV), with the p orbitals occupied by single electrons and D3h symmetry. Preparation Trimethylenemethane was first obtained from photolysis of the diazo compound 4-methylene-Δ¹-pyrazoline with expulsion of nitrogen, in a frozen dilute glassy solution. It was also obtained from photolysis of 3-methylenecyclobutanone, both in cold solution and in the form of a single crystal, with expulsion of carbon monoxide. In both cases, trimethylenemethane was detected by electron spin resonance spectroscopy. Trimethylenemethane has also been obtained by treating 2-iodomethyl-3-iodopropene (isobutylene diiodide, (ICH2)2C=CH2) with potassium in the gas phase. However, the product quickly dimerizes to yield 1,4-dimethylenecyclohexane, and also gives 2-methylpropene by abstracting two hydrogen atoms from other molecules (hydrocarbon or potassium hydride). Organometallic chemistry A number of organometallic complexes have been prepared, starting with Fe(TMM)(CO)3, which was obtained by the ring-opening of methylenecyclopropane with diiron nonacarbonyl (Fe2(CO)9). The same complex was prepared by the salt metathesis reaction of disodium tetracarbonylferrate (Na2Fe(CO)4) with 1,1-bis(chloromethyl)ethylene (H2C=C(CH2Cl)2). Related reactions give M(TMM)(CO)4 (M = Cr, Mo). The reaction leading to (TMM)Mo(CO)4 also gives a molybdenum tricarbonyl complex containing a dimerized TMM ligand. TMM complexes have been examined for their potential in organic synthesis, specifically in the trimethylenemethane cycloaddition reaction, with only modest success. One example is a palladium-catalyzed [3+2] cycloaddition of trimethylenemethane. References Free radicals
Trimethylenemethane
[ "Chemistry", "Biology" ]
929
[ "Senescence", "Free radicals", "Biomolecules" ]
10,780,895
https://en.wikipedia.org/wiki/Celestial%20cartography
Celestial cartography, uranography, astrography or star cartography is the aspect of astronomy and branch of cartography concerned with mapping stars, galaxies, and other astronomical objects on the celestial sphere. Measuring the position and light of charted objects requires a variety of instruments and techniques. These techniques have developed from angle measurements with quadrants and the unaided eye, through sextants combined with lenses for light magnification, up to current methods which include computer-automated space telescopes. Uranographers have historically produced planetary position tables, star tables, and star maps for use by both amateur and professional astronomers. More recently, computerized star maps have been compiled, and automated positioning of telescopes uses databases of stars and of other astronomical objects. Etymology The word "uranography" is derived from the Greek "ουρανογραφια" (Koine Greek ουρανος "sky, heaven" + γραφειν "to write") through the Latin "uranographia". In Renaissance times, Uranographia was used as the book title of various celestial atlases. During the 19th century, "uranography" was defined as the "description of the heavens". Elijah H. Burritt re-defined it as the "geography of the heavens". The German word for uranography is "Uranographie", the French is "uranographie" and the Italian is "uranografia". Astrometry Astrometry, the science of spherical astronomy, is concerned with precise measurements of the location of celestial bodies on the celestial sphere and their kinematics relative to a reference frame on the celestial sphere. In principle, astrometry can involve such measurements for any celestial body, from planets and stars to black holes and galaxies. Throughout human history, astrometry has played a significant role in shaping our understanding of the structure of the visible sky and the location of bodies in it, making it a fundamental tool of celestial cartography. Star catalogues The natural source of data for drawing star charts is a star table. This is apparent when comparing the imaginative "star maps" of the Poeticon Astronomicon – illustrations beside a narrative text from antiquity – to the star maps of Johann Bayer, based on Tycho Brahe's precise star-position measurements. Important historical star tables c. AD 150, Almagest – contains the last known star table from antiquity, prepared by Ptolemy, 1,028 stars. c. 964, Book of the Fixed Stars, Arabic version of the Almagest by al-Sufi. 1627, Rudolphine Tables – contains the first modern Western star table, based on measurements of Tycho Brahe, 1,005 stars. 1690, Prodromus Astronomiae – by Johannes Hevelius for his Firmamentum Sobiescanum, 1,564 stars. 1729, Britannic Catalogue – by John Flamsteed for his Atlas Coelestis, position of more than 3,000 stars to an accuracy of 10". 1903, Bonner Durchmusterung – by Friedrich Wilhelm Argelander and collaborators, circa 460,000 stars. Star atlases Naked-eye 15th century BC – The ceiling of the tomb TT71 for the Egyptian architect and minister Senenmut, who served Queen Hatshepsut, is adorned with a large and extensive star chart. c. 1 CE – Poeticon Astronomicon, allegedly by Gaius Julius Hyginus. 1092 – Xin Yi Xiang Fa Yao (新儀象法要), by Su Song, a horological treatise which contains the earliest extant star maps in printed form.
Su Song's star maps also featured the corrected position of the pole star, which had been deciphered thanks to the astronomical observations of Su's peer, the polymath scientist Shen Kuo. 1515 – First European printed star charts published in Nuremberg, Germany, engraved by Albrecht Dürer. 1603 – Uranometria, by Johann Bayer, the first modern Western star atlas, based on the precise star positions measured by Tycho Brahe (later published in Johannes Kepler's Tabulae Rudolphinae). 1627 – Julius Schiller published the star atlas Coelum Stellatum Christianum, which replaced pagan constellations with biblical and early Christian figures. 1660 – Jan Janssonius' 11th volume of Atlas Major (not to be confused with the similarly named and scoped Atlas Maior) featured the Harmonia Macrocosmica by Andreas Cellarius. 1693 – Firmamentum Sobiescanum sive Uranometria, by Johannes Hevelius, a star map updated with many new star positions based on Hevelius's Prodromus Astronomiae (1690) – 1,564 stars. Telescopic 1729 Atlas Coelestis by John Flamsteed 1801 Uranographia by Johann Elert Bode 1843 Uranometria Nova by Friedrich Wilhelm Argelander Photographic 1914 Franklin-Adams Charts, by John Franklin-Adams, a very early photographic atlas. The Falkau Atlas (Hans Vehrenberg). Stars to magnitude 13. Atlas Stellarum (Hans Vehrenberg). Stars to magnitude 14. True Visual Magnitude Photographic Star Atlas (Christos Papadopoulos). Stars to magnitude 13.5. The Cambridge Photographic Star Atlas, Axel Mellinger and Ronald Stoyan, 2011. Stars to magnitude 14, natural color, 1°/cm. Modern Bright Star Atlas – Wil Tirion (stars to magnitude 6.5) Cambridge Star Atlas – Wil Tirion (Stars to magnitude 6.5) Norton's Star Atlas and Reference Handbook – Ed. Ian Ridpath (stars to magnitude 6.5) Stars & Planets Guide – Ian Ridpath and Wil Tirion (stars to magnitude 6.0) Cambridge Double Star Atlas – James Mullaney and Wil Tirion (stars to magnitude 7.5) Cambridge Atlas of Herschel Objects – James Mullaney and Wil Tirion (stars to magnitude 7.5) Pocket Sky Atlas – Roger Sinnott (stars to magnitude 7.5) Deep Sky Reiseatlas – Michael Feiler, Philip Noack (Telrad Finder Charts – stars to magnitude 7.5) Atlas Coeli Skalnate Pleso (Atlas of the Heavens) 1950.0 – Antonín Bečvář (stars to magnitude 7.75 and about 12,000 clusters, galaxies and nebulae) SkyAtlas 2000.0, second edition – Wil Tirion & Roger Sinnott (stars to magnitude 8.5) 1987, Uranometria 2000.0 Deep Sky Atlas – Wil Tirion, Barry Rappaport, Will Remaklus (stars to magnitude 9.7; 11.5 in selected close-ups) Herald-Bobroff AstroAtlas – David Herald & Peter Bobroff (stars to magnitude 9 in main charts, 14 in selected sections) Millennium Star Atlas – Roger Sinnott, Michael Perryman (stars to magnitude 11) Field Guide to the Stars and Planets – Jay M. Pasachoff, Wil Tirion charts (stars to magnitude 7.5) SkyGX (still in preparation) – Christopher Watson (stars to magnitude 12) The Great Atlas of the Sky – Piotr Brych (2,400,000 stars to magnitude 12, galaxies to magnitude 18).
Interstellarum Deep Sky Atlas (2014) – Ronald Stoyan and Stephan Schurig (stars to magnitude 9.5) Computerized 100,000 Stars Cartes du Ciel Celestia Stars and Planets for Android Stars and Planets for iOS CyberSky GoSkyWatch Planetarium Google Sky KStars Stellarium SKY-MAP.ORG SkyMap Online WorldWide Telescope XEphem, for Unix-like systems Stellarmap.com – online map of the stars Star Walk and Kepler Explorer OpenLab: 2 celestial cartography apps for smartphones SpaceEngine Free and printable from files The TriAtlas Project Toshimi Taki Star Atlases DeepSky Hunter Star Atlas Andrew Johnson mag 7 See also Star chart Astrometry Cosmography Cheonsang Yeolcha Bunyajido History of cartography Planetarium PP3 References External links Star Maps from Ian Ridpath's Star Tales website. The Mag-7 Star Atlas Project Historical Celestial Atlases on the Web Felice Stoppa's ATLAS COELESTIS, an extensive collection of 51 star maps and other astronomy related books stored as a multitude of images. Monthly star maps for every location on Earth Easy to use monthly star maps for northern and southern hemispheres. Helpful target lists for naked eye, binocular, or telescope viewing. Collection of rare star atlases, charts, and maps available in full digital facsimile at Linda Hall Library. Navigable online map of the stars, Stellarmap.com. The Digital Collections of the Linda Hall Library include: "Astronomy: Star Atlases, Charts, and Maps", a collection of more than 60 star atlas volumes. "Astronomy: Selected Images", a collection of high-resolution star map images. "History of Cosmology: Views of the Stars", high-resolution scans of prints relating to the study of the structure of the cosmos. Works about astronomy
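Every star atlas listed above must flatten the celestial sphere onto paper or a screen, which requires a map projection. As a minimal illustration (an assumption-laden sketch written for this article, not code from any of the atlases or programs above), the following Python function applies the standard stereographic projection to equatorial coordinates; the chart centre and the two example stars are arbitrary choices.

import math

def stereographic(ra_deg, dec_deg, ra0_deg, dec0_deg):
    """Project (RA, Dec) onto a chart plane tangent at (ra0, dec0)."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    ra0, dec0 = math.radians(ra0_deg), math.radians(dec0_deg)
    # standard stereographic projection scale factor
    k = 2.0 / (1.0 + math.sin(dec0) * math.sin(dec)
               + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    x = k * math.cos(dec) * math.sin(ra - ra0)
    y = k * (math.cos(dec0) * math.sin(dec)
             - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0))
    return x, y

# Chart centred near Orion; approximate positions of Betelgeuse and Rigel.
print(stereographic(88.8, 7.4, 83.0, 0.0))
print(stereographic(78.6, -8.2, 83.0, 0.0))

The stereographic projection is a common choice for small charts because it preserves angles; real atlases use a variety of projections depending on the sky region covered.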
Celestial cartography
[ "Astronomy" ]
1,913
[ "Celestial cartography", "History of astronomy", "Works about astronomy", "Constellations", "Astronomical catalogues", "Sky regions", "Astronomical objects" ]
10,781,041
https://en.wikipedia.org/wiki/Enterocoely
Enterocoely (adjective forms: enterocoelic and enterocoelous) describes both the process by which some animal embryos develop and the origin of the cells involved. In enterocoely, a mesoderm (middle layer) is formed in a developing embryo, in which the coelom appears from pouches growing and separating from the digestive tract (also known as the embryonic gut, or archenteron). As the incipient coelomic epithelium originates from archenteral diverticula, the endoderm therefore gives rise to the mesodermal cells. Etymology The term enterocoely derives from the Ancient Greek words ἔντερον (énteron), meaning 'intestine', and κοιλία (koilía), meaning 'cavity'. This refers to the fact that the fluid-filled body cavities are formed from pockets related to the embryonic gut. Taxonomic distribution Enterocoely is the stage of embryological development of deuterostomes in which the coelom forms. Because coelom formation of this type occurs in deuterostome animals, they are also known as enterocoelomates. By contrast, in protostomes, the body cavity is often formed by schizocoely. Embryonic development Enterocoelous development begins once the embryo reaches the gastrula phase of development. At this point, there are two layers of cells: the ectoderm (outermost) and the endoderm (innermost) layers. The mesoderm begins to form as two "pockets" of tissue (one above the endoderm, and one below) are formed via folding of the endoderm. These "pockets" grow larger, and as they do so, they extend towards each other. When the two "pockets" of cells meet, the mesoderm is formed – a complete layer of tissue between the endoderm and ectoderm layers. This then leads to the formation of a coelom: as the archenteron forms, these pockets of migrating cells create the mesodermal layer between the endoderm and ectoderm, and they gradually expand to form the coelom. See also Deuterostome Development of the digestive system Developmental biology Embryology Embryonic development Ontogeny Protostome Schizocoely References Embryology Developmental biology
Enterocoely
[ "Biology" ]
505
[ "Behavior", "Developmental biology", "Reproduction" ]
10,781,248
https://en.wikipedia.org/wiki/Cass%20identity%20model
The Cass identity model is one of the fundamental theories of LGBT identity development, developed in 1979 by Vivienne Cass. This model was one of the first to treat LGBTQIA+ people as normal in a heterosexist society and in a climate of homophobia and biphobia, instead of treating homosexuality and bisexuality themselves as a problem. Cass described a process of six stages of LGBTQIA+ identity development. While these stages are sequential, some people might revisit stages at different points in their lives. The six stages of Cass's model Identity confusion In the first stage, identity confusion, the person is amazed to think of themselves as a queer person. "Could I be queer?" This stage begins with the person's first awareness of homosexual or bisexual thoughts, feelings, and attractions. The person typically feels confused and experiences turmoil. To the question "Who am I?", the answers can be acceptance, psychological self-denial and repression, or rejection. Denial This is a sub-stage in which one denies homosexuality. Possible responses can be: avoiding information about homosexuals or bisexuals; inhibited behavior; self-denial of homosexuality and bisexuality ("experimenting", "an accident", "just drunk", "just looking"). The possible needs can be: the person may explore internal positive and negative judgments. Will be allowed to be uncertain regarding sexual identity. May find support in knowing that sexual behavior occurs along a spectrum. May receive permission and encouragement to explore sexual identity as a normal experience (like career identity and social identity). Identity comparison The second stage is called identity comparison. In this stage, the person accepts the possibility of being homosexual or bisexual and examines the wider implications of that tentative commitment. "Maybe this does apply to me." The self-alienation becomes isolation. The task is to deal with the social alienation. Possible responses can be: the person may begin to grieve for losses and the things they give up by embracing their sexual orientation (marriage, children). They may compartmentalize their own sexuality—accept lesbian/gay definition of behavior but maintain "heterosexual" identity. Tells oneself, "It's only temporary"; "I'm just in love with this particular woman/man"; etc. The possible needs can be: it will be very important that the person develops their own definitions. Will need information about sexual identity, LGBTQIA+ community resources, encouragement to talk about loss of heterosexual life expectations. May be permitted to keep some "heterosexual" identity (as "not an all or none" issue). Identity tolerance In the third stage, identity tolerance, the person comes to the understanding that they are "not the only one". The person acknowledges that they are likely homosexual or bisexual and seeks out other homosexual or bisexual people to combat feelings of isolation. Increased commitment to being homosexual or bisexual. The task is to decrease social alienation by seeking out homosexual or bisexual people. Possible responses can be: beginning to have language to talk and think about the issue. Recognition that being homosexual or bisexual does not preclude other options. Accentuate difference between self and heterosexuals. Seek out LGBTQIA+ culture (positive contact leads to a more positive sense of self; negative contact leads to devaluation of the culture and stops growth). The person may try out a variety of stereotypical roles.
The possible needs can be: to be supported in exploring their own shame feelings derived from heterosexism, as well as internalized homophobia and biphobia. Receive support in finding positive homosexual or bisexual community connections. It is particularly important for the person to know community resources. Support can be found in many ways; examples include LGBTQIA+ clubs and organizations for the LGBTQIA+ community. Identity acceptance The identity acceptance stage means the person accepts themselves. "I will be okay." The person attaches a positive connotation to their homosexual or bisexual identity and accepts rather than tolerates it. There is continuing and increased contact with the LGBTQIA+ culture. The task is to deal with the inner tension of no longer subscribing to society's norm, and to attempt to bring congruence between the private and public view of self. Possible responses can be: accepts homosexual or bisexual self-identification. May compartmentalize "LGBTQIA+ life". Maintains less and less contact with the heterosexual community. Attempts to "fit in" and "not make waves" within the LGBTQIA+ community. Begins some selective disclosures of sexual identity. More social coming out; more comfortable being seen with groups of men or women that are identified as "gay". More realistic evaluation of situation. The possible needs can be: continue exploring grief and loss of heterosexual life expectation, continue exploring internalized homophobia and biphobia (learned shame from a heterosexist society). Find support in making decisions about where, when, and to whom to disclose. Identity pride In the identity pride stage, coming out of the closet sometimes arrives, and the main thinking is "I've got to let people know who I am!" The person divides the world into heterosexuals and queers, and is immersed in LGBTQIA+ culture while minimizing contact with heterosexuals. There is an us-them quality to the political/social viewpoint. The task is to deal with the incongruent views of heterosexuals. Possible responses include: splits world into "gay" (good) and "straight" (bad)—experiences disclosure crises with heterosexuals as they are less willing to "blend in"—identifies LGBTQIA+ culture as sole source of support, acquiring all-gay friends, business connections, and social connections. The possible needs can be: to receive support for exploring anger issues, to find support for exploring issues of heterosexism, to develop skills for coping with reactions and responses to disclosure of sexual identity, and to resist being defensive.
The nature of the social stigma and its management practices have changed since the inception of the model. The linear nature of the model would suggest that anyone who abandons the model or fails to go through each of the six stages could not be considered a well-adjusted queer person, which may no longer be true. See also Coming out Homosocialization Fassinger's model of gay and lesbian identity development Labeling theory Sexual orientation Books Vivienne Cass (1979, 1984, 1990). In Ritter and Terndrup (2002) Handbook of Affirmative Psychotherapy with Lesbians and Gay Men External links Stages in “Coming Out” Process Jen Anderson and Mario Brown, Gay, Lesbian, Bisexual, and Transgender Identity Development – Drury University Diagram of Cass Identity Model by Joe Kort (PDF) References 1979 in LGBTQ history Sociological theories Sexology Developmental stage theories Sexual identity models Sexual orientation and psychology
Cass identity model
[ "Biology" ]
1,583
[ "Behavioural sciences", "Behavior", "Sexology" ]
10,781,566
https://en.wikipedia.org/wiki/Grading%20%28earthworks%29
Grading in civil engineering and landscape architectural construction is the work of ensuring a level base, or one with a specified slope, for a construction work such as a foundation, the base course for a road or a railway, or landscape and garden improvements, or surface drainage. The earthworks created for such a purpose are often called the sub-grade or finished contouring (see diagram). Regrading Regrading is the process of grading for raising and/or lowering the levels of land. Such a project can also be referred to as a regrade. Regrading may be done on a small scale (as in preparation of a house site) or on quite a large scale (as in major reconfiguration of the terrain of a city, such as the Denny Regrade in Seattle). Regrading is typically performed to make land more level (flatter), in which case it is sometimes called levelling. Levelling can have the consequence of making other nearby slopes steeper, and potentially unstable or prone to erosion. Transportation In the case of gravel roads and earthworks for certain purposes, grading forms not just the base but the cover and surface of the finished construction, and is often called finished grade. Process After the existing conditions within the limit of work have been surveyed, surveyors set stakes in places that are to be regraded. These stakes have marks on them that either give a finished grade matching the design of the project, or have CUT/FILL marks which specify how much dirt is to be added or subtracted. All grade marks are relative to site benchmarks that have been established. The regrading work is then often done using heavy machinery like bulldozers and excavators to roughly prepare an area, after which a grader is used for a finer finish (a worked volume example appears at the end of this article). Environmental design In the environmental design professions, grading and regrading are a specifications and construction component in landscape design, landscape architecture, and architecture projects. It is used for buildings or outdoor amenities regarding foundations and footings, slope terracing and stabilizing, aesthetic contouring, and directing surface runoff drainage of stormwater and domestic/irrigation runoff flows. Purposes Reasons for regrading include: Enabling construction on lands that were previously too varied and/or steeply sloped. Enabling transportation along routes that were previously too varied and/or steep. Changing drainage patterns and rerouting surface flow. Improving the stability of terrain adjacent to developments. Consequences Potential problems and consequences from regrading include: Soil and/or slope instability Terrain prone to erosion Ecological impacts, habitat destruction, terrestrial and/or aquatic biological losses. Drainage problems (surface and/or subsurface flow) for areas not considered in the regrading plan. Loss of aesthetic natural landscape topography and/or historical cultural landscapes. See also Cut (earthmoving) Cut-and-cover Cut and fill Fill dirt Grade (slope) (civil engineering and geographical term) Regrading Slope (mathematical term) Subgrade Trench References External links Matusik, John. "Grading and Earthworks" in The Land Development Handbook, 2004. Gravel Roads Construction and Maintenance Guide, Federal Highway Administration (FHWA) and the South Dakota Local Technical Assistance Program (SDLTAP), 2015. "How to Grade Gravel Roads" in Gravel Roads, Soil Stabilization, Soil-Sement® by Frank Elswick, 2017.
Recommended Practices Manual: A Guideline for Maintenance and Service of Unpaved Roads, Choctawhatchee, Pea and Yellow Rivers Watershed Management Authority, 2000. Construction Artificial landforms Gardening aids Landscape architecture Road construction
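The CUT/FILL staking described in the Process section above corresponds to a simple computation: comparing existing and design elevations over a grid of cells yields the volumes of earth to remove or add. The Python sketch below illustrates this grid method under stated assumptions; the elevations and cell size are made-up example values, not data from any real survey.

# Grid-method earthwork estimate. Each grid cell covers cell_area square metres;
# a positive difference means cut (existing ground above design grade),
# a negative difference means fill.
existing = [
    [10.2, 10.4, 10.6],
    [9.8, 10.3, 10.5],
]
design = [
    [10.0, 10.0, 10.0],
    [10.0, 10.0, 10.0],
]
cell_area = 25.0  # 5 m x 5 m cells (assumed)

cut = fill = 0.0
for row_existing, row_design in zip(existing, design):
    for e, d in zip(row_existing, row_design):
        diff = e - d
        if diff > 0:
            cut += diff * cell_area
        else:
            fill += -diff * cell_area

print(f"cut: {cut:.1f} m3, fill: {fill:.1f} m3")  # cut: 50.0 m3, fill: 5.0 m3

In practice such estimates are refined with survey-grade methods (cross sections, triangulated surfaces), but the grid sketch captures the arithmetic behind the CUT/FILL marks on the stakes.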
Grading (earthworks)
[ "Engineering" ]
734
[ "Construction", "Architecture", "Landscape architecture", "Road construction" ]
10,782,073
https://en.wikipedia.org/wiki/Scale%20error
In developmental psychology, a scale error is a serious attempt made by a young child to perform a task that is behaviorally inappropriate for the object, caused by a mismatch between the perceived and actual size of the objects involved. The child does not consider the size of their body in relation to the object and may attempt to fit into miniature objects or toys. An example of this would be a child attempting to slide down a toy slide or attempting to enter and drive a miniature toy car. This phenomenon was first documented and studied by DeLoache et al. in 2004. Recent studies have added to the wealth of knowledge on the topic, including evidence of the prevalence of scale errors outside of the laboratory, as well as investigations into the frequency of scale errors in children. Criteria for scale errors For an action to be considered a scale error under the strictest definition, a child must: Perform or attempt to perform part or all of the actions done with the large object, on the smaller object. Make actual physical contact with the relevant body part. Perform the behavior with such seriousness that they are obviously not pretending; often the behavior is repetitive, and the lack of success becomes frustrating to the child. DeLoache study Psychologists DeLoache, Uttal, and Rosengren conducted and documented the first study of scale errors. In their study children were introduced to large (normal-sized) objects and given a chance to familiarize themselves with them. Some children were also prompted to engage in play behavior with the objects. After several minutes, the large objects were replaced with smaller versions of the same object. In several cases, regardless of prompting, the child attempted to interact with the small object in the same way they would have interacted with the large object. The researchers believed that the error was caused by underdeveloped coordination between the part of the brain that controls the actual physical movement and the part that controls the planning of the action, as well as a lack of inhibition. When the child sees the toy chair, the occipital lobe, the part of the brain responsible for seeing the object and planning the next action, is activated and recognizes the object as a part of the chair category, but does not take the size of the chair into account. Next, the motor cortex, which controls the physical movement of the action, recognizes the appropriate movement/action for a chair, and the child then takes the "appropriate" action—attempts to sit in the chair. During this step the child performs the action proportionate to the miniature object, and thus is able to carry out precise movements. However, the step itself is initiated based on a larger mental representation. In older, more developed children, these steps are usually inhibited by recognition and integration of the true miniature size of the object into the child's action plan. The child in this case would then go about playing with the toy normally. The study also found that if the child is given the choice, they will never choose to interact with the smaller object over the larger object. According to surveys taken by the researchers, the phenomenon is not common; parents more often reported that their child did not engage in the behavior. It is speculated, however, that parents may not remember less striking errors or they may not have been present to witness them. Additional studies Rosengren et al. 2009 Additional studies were conducted to document and quantify how often scale errors were committed by children daily. Rosengren et al. (2009) instructed parent participants to observe their children and note when they engaged in a scale error. Parents were also instructed to differentiate between a scale error and pretense, or pretend play. Rosengren et al. (2009) found that almost all parents documented an instance of scale error in their children. These results show that children will and do commit scale errors in early childhood. Ware et al. 2010 In order to provide evidence of scale error prevalence outside of the laboratory, Ware et al. (2010) conducted multiple studies to explore the presence of scale errors in children's daily lives. Throughout two studies, researchers had participants complete internet surveys asking whether the participant had ever seen a child engage in a scale error. Responses were screened, and in the second study participants were interviewed through a secondary phone call about the incident they had identified. Ware et al. (2010) concluded that scale errors occur both in and out of the laboratory setting. The study provided the first evidence of children making scale errors outside of the laboratory setting. Age The frequency of scale errors seems to differ for children across age ranges. Across 18- to 30-month-olds, the frequency of scale errors peaked around 24 months. See also Developmental Psychology Make Believe Play References Developmental psychology
Scale error
[ "Biology" ]
942
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
10,782,256
https://en.wikipedia.org/wiki/Dm-crypt
dm-crypt is a transparent block device encryption subsystem in Linux kernel versions 2.6 and later and in DragonFly BSD. It is part of the device mapper (dm) infrastructure, and uses cryptographic routines from the kernel's Crypto API. Unlike its predecessor cryptoloop, dm-crypt was designed to support advanced modes of operation, such as XTS, LRW and ESSIV, in order to avoid watermarking attacks. In addition, dm-crypt addresses some reliability problems of cryptoloop. dm-crypt is implemented as a device mapper target and may be stacked on top of other device mapper transformations. It can thus encrypt whole disks (including removable media), partitions, software RAID volumes, logical volumes, as well as files. It appears as a block device, which can be used to back file systems, swap or an LVM physical volume. Some Linux distributions support the use of dm-crypt on the root file system. These distributions use initrd to prompt the user to enter a passphrase at the console, or insert a smart card prior to the normal boot process. Frontends The dm-crypt device mapper target resides entirely in kernel space, and is concerned only with encryption of the block device; it does not interpret any data itself. It relies on user-space front-ends to create and activate encrypted volumes, and to manage authentication. At least two frontends are currently available: cryptsetup and cryptmount. cryptsetup The cryptsetup command-line interface, by default, does not write any headers to the encrypted volume, and hence only provides the bare essentials: encryption settings have to be provided every time the disk is mounted (although this is usually automated with scripts), and only one key can be used per volume; the symmetric encryption key is directly derived from the supplied passphrase. Because it lacks a "salt", using cryptsetup is less secure in this mode than is the case with Linux Unified Key Setup (LUKS). However, the simplicity of cryptsetup makes it useful when combined with third-party software, for example, with smart card authentication. cryptsetup also provides commands to deal with the LUKS on-disk format. This format provides additional features such as key management and key stretching (using PBKDF2), and remembers encrypted volume configuration across reboots. cryptmount The cryptmount interface is an alternative to the "cryptsetup" tool that allows any user to mount and unmount a dm-crypt file system when needed, without needing superuser privileges after the device has been configured by a superuser. Features The fact that disk encryption (volume encryption) software like dm-crypt only deals with transparent encryption of abstract block devices gives it a lot of flexibility. This means that it can be used for encrypting any disk-backed file systems supported by the operating system, as well as swap space; write barriers implemented by file systems are preserved. Encrypted volumes can be stored on disk partitions, logical volumes, whole disks as well as file-backed disk images (through the use of loop devices with the losetup utility). dm-crypt can also be configured to encrypt RAID volumes and LVM physical volumes. dm-crypt can also be configured to provide pre-boot authentication through an initrd, thus encrypting all the data on a computer except the bootloader, the kernel and the initrd image itself.
When using the cipher block chaining (CBC) mode of operation with predictable initialization vectors, as other disk encryption software has done, the disk is vulnerable to watermarking attacks. This means that an attacker is able to detect the presence of specially crafted data on the disk. To address this problem of its predecessors, dm-crypt included provisions for more elaborate, disk-encryption-specific modes of operation. Support for ESSIV (encrypted salt-sector initialization vector) was introduced in Linux kernel version 2.6.10, LRW in 2.6.20 and XTS in 2.6.24. A wide-block disk encryption algorithm, Adiantum, was added in 5.0, and its AES-based cousin HCTR2 in 6.0. The Linux Crypto API includes support for most popular block ciphers and hash functions, which are all usable with dm-crypt. Supported encrypted-volume formats include LUKS (versions 1 and 2) volumes, loop-AES, TrueCrypt/VeraCrypt (since Linux kernel 3.13), and BitLocker-encrypted NTFS (since cryptsetup 2.3.0). TrueCrypt/VeraCrypt (TCRYPT) and BitLocker (BITLK) support require the kernel userspace crypto API. Compatibility dm-crypt and LUKS encrypted disks can be accessed and used under MS Windows using the now defunct FreeOTFE (formerly DoxBox, LibreCrypt), provided that the filesystem used is supported by Windows (e.g. FAT/FAT32/NTFS). Encrypted ext2 and ext3 filesystems are supported by using Ext2Fsd or the so-called "Ext2 Installable File System for Windows"; FreeOTFE also supports them. Cryptsetup/LUKS and the required infrastructure have also been implemented on the DragonFly BSD operating system. See also Comparison of disk encryption software References External links Official , and websites All about dm-crypt and LUKS on one page (on archive.org) a page covering dm-crypt/LUKS, starting with theory and ending with many practical examples about its usage. Device mapper Disk encryption Cryptographic software
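To make the ESSIV construction mentioned above concrete: the initialization vector for each sector is the sector number encrypted under a salt key obtained by hashing the volume key. The Python sketch below is an illustrative reimplementation of that idea, not dm-crypt's kernel code; it uses the third-party pycryptodome package, and the key value, hash choice and sector-number layout are assumptions made for the example.

import hashlib
import struct
from Crypto.Cipher import AES  # pip install pycryptodome

def essiv_iv(volume_key: bytes, sector: int) -> bytes:
    """IV = E_salt(sector), where salt = hash(volume key)."""
    salt = hashlib.sha256(volume_key).digest()  # 32-byte salt key for AES-256
    # sector number as a little-endian 64-bit value, zero-padded to one AES block
    sector_block = struct.pack("<Q", sector) + b"\x00" * 8
    return AES.new(salt, AES.MODE_ECB).encrypt(sector_block)

volume_key = b"\x01" * 32              # arbitrary 256-bit example key
print(essiv_iv(volume_key, 42).hex())  # per-sector IV for sector 42

Because the salt is derived from the secret key, an attacker cannot predict the per-sector IVs, which is what defeats the watermarking attacks described above.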
Dm-crypt
[ "Mathematics" ]
1,233
[ "Cryptographic software", "Mathematical software" ]
10,782,668
https://en.wikipedia.org/wiki/Irregularity%20of%20distributions
The irregularity of distributions problem, first stated by Hugo Steinhaus, is a numerical problem with a surprising result. The problem is to find N numbers, x1, ..., xN, all between 0 and 1, for which the following conditions hold: The first two numbers must be in different halves (one less than 1/2, one greater than 1/2). The first 3 numbers must be in different thirds (one less than 1/3, one between 1/3 and 2/3, one greater than 2/3). The first 4 numbers must be in different fourths. The first 5 numbers must be in different fifths. etc. Mathematically, we are looking for a sequence of real numbers x1, ..., xN such that for every n ∈ {1, ..., N} and every k ∈ {1, ..., n} there is some i ∈ {1, ..., n} such that (k − 1)/n ≤ xi < k/n. Solution The surprising result is that there is a solution up to N = 17, but for N = 18 and above it is impossible. In a solution for N = 17, considering for instance the first 5 numbers, each of the five fifths of the unit interval contains exactly one of them. Mieczysław Warmus concluded that 768 (1536, counting symmetric solutions separately) distinct sets of intervals satisfy the conditions for N = 17. References H. Steinhaus, One hundred problems in elementary mathematics, Basic Books, New York, 1964, page 12 M. Warmus, "A Supplementary Note on the Irregularities of Distributions", Journal of Number Theory 8, 260–263, 1976. Fractions (mathematics)
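The cutoff at N = 17 can be verified by exhaustive search. The following Python sketch was written for this article (it is not from the cited references): it tracks, for each number, the interval of values still permitted, and at every step n tries all ways of placing the first n numbers into distinct n-ths, using exact rational arithmetic. It reports whether each N is solvable; the exhaustive N = 18 case may take noticeably longer to run.

from fractions import Fraction

def search(intervals, n, N):
    """intervals[i] is the interval of values still feasible for x_(i+1)."""
    if n > N:
        return intervals  # success: any point inside each interval works
    intervals = intervals + [(Fraction(0), Fraction(1))]  # introduce x_n

    def assign(i, used, acc):
        # choose an unused n-th for x_(i+1), then recurse to the next number
        if i == len(intervals):
            return search(acc, n + 1, N)
        lo, hi = intervals[i]
        for cell in range(n):
            if cell in used:
                continue
            new_lo = max(lo, Fraction(cell, n))
            new_hi = min(hi, Fraction(cell + 1, n))
            if new_lo < new_hi:  # the intersection is nonempty
                found = assign(i + 1, used | {cell}, acc + [(new_lo, new_hi)])
                if found:
                    return found
        return None

    return assign(0, set(), [])

for N in (17, 18):
    print(N, "solvable" if search([], 1, N) else "impossible")

Since every assignment of numbers to cells is explored, a "impossible" answer for N = 18 is a genuine exhaustion of cases, matching the stated result.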
Irregularity of distributions
[ "Mathematics" ]
339
[ "Fractions (mathematics)", "Arithmetic", "Mathematical objects", "Numbers" ]
10,782,986
https://en.wikipedia.org/wiki/Branch%20collar
A branch collar is the "shoulder" between the branch and trunk of woody plants; the inflammation formed at the base of the branch is caused by annually overlapping trunk tissue. The shape of the branch collar is due to two separate growth patterns, initially the branch grows basipetally, followed by seasonal trunk growth which envelops the branch. Branch collars serve as a strong foundation to the branch, and its orientation and internal characteristics allow the branch to withstand stress from numerous directions. Functionally the branch collars also influence the conductivity of nutrients and growth patterns. The branch collar which provides a protective barrier to prevent infection and decay, can also be useful in diagnosing bacterial diseases. Proper pruning techniques should accommodate for the branch collar structure, as by damaging the tree it is likely to decay or become diseased. Definition In arboriculture, the "shoulder" junction structure between the branch and the trunk is known as the branch collar. This structure can be identified as a raised ring of tissue around the base of the branch The branch collar and trunk collar are collectively called the branch collar. Morphology Growth stages Tree branches are attached to the trunk with a series of trunk collars that annually envelope the branch collar. The branch tissues develop a basal collar first in spring, then trunk tissue envelops the collar later during seasons of growth. This rhythm of growth results in a tissue arrangement that wraps around the branch, creating the branch collar.  This processes where the branch tissue develops basipetally and the trunk tissue develops perpendicular to the branch, results in the cambium cells of the upper segments of the branch collar to develop in a right-angle formation. The expanding cambium of the trunk, over time, slowly overtakes the newly forming branch tissue, which causes the branch collar to swallow up more of the branch as the tree grows. The development of xylem tissue within the tight pocket above the branch collar known as the "crotch", causes the cells to be compacted to form the hard zone of connective tissue between the branch and the trunk. The formation of narrow channels and loops within the branch collar tissue are the pathways left behind by the flowing of large volumes of hormonal signals. Function Structural integrity The branch collar forms a sturdy foundation structure, the enveloping of branch tissue by trunk tissue gives the branch unique properties of strength. The branch collar junction due to various regions of differing elasticity allows the branch collar to withstand mechanical loads by distributing stress within tissue regions of varying strengths. Additionally, the orientation of fibres within critical regions of the branch collar can change their physical orientation to withstand and match stress from various directions. Furthermore, microfibril angles and density are adapted locally within the branch and branch collar, to develop patterns within the branch collar that best protect the plant from stress damage caused by loads on the branch and tension from branch growth. The points on the branch closest to the branch collar structure can take the most duress, similarly the branch collar provides the length of the branch with a strong foundation. 
Low conductivity The presence of a visible branch collar is a good indicator of low branch-junction conductivity; this is because branch collars with perpendicular branches have significantly lower hydraulic conductivities than more upright branches. Within the branch collar there are water-flow restriction zones, which are the combination of narrow vascular elements and non-functional circular vessels; these structures help enhance the segmentation of the plant and promote the movement of water and sap up the central xylem. Influence on growth patterns The circular tissues within the branch junctions directly influence the growth and dimensions of the tree, by affecting the shedding of branches and by attenuating their ability to withstand mechanical load, and indirectly, by affecting the movement of growth regulators and the ascent of sap, which influence the development of branches, especially the dominance of the leader branch. Trees can also self-prune: the bark builds a ring notch at the branch collar which becomes a weak point, so that at some stage the branch will be knocked off. The bark then grows over the wound and seals the tree. This function allows plants such as the crack willow (Salix fragilis) to perform vegetative propagation, where the shed branch roots itself and grows. Ecology and disease Compartmentalising disease The branch collar inhibits infection by acting as a protective barrier. Trees compartmentalize their injury by producing antimicrobial substances and then growing over the area. Events such as storms or incorrect pruning activity can cause damage to the branch collar. When the trunk collar is injured, the trunk xylem below it is rapidly infected and decays. Within the branch collar there is a narrow cone of cells known as the branch defence zone; these cells activate the development of wound wood, a callus tissue that grows when the branch is broken off. Suberization followed by periderm formation may provide a barrier to further mycelial advance, and the abundant production of resin may constitute further protection. However, periderm barriers can be penetrated by hyphae, especially in weather favouring the rapid extension of canker, and it is common to find a succession of such barriers which have been crossed by the fungus. Epidemiology using the branch collar The branch collar can be used to diagnose dying trees: Liberibacter asiaticus bacteria were found in higher concentration in the branch collar than in the pith. Branch collar cortical tissue is soft compared to other tissues used for bacterial measurement, like the pith, making the branch collar an easier and more efficient tissue for the epidemiological diagnosis of infected trees. Furthermore, the ability to conduct an epidemiological study using the branch collar is useful because it can be sampled instead of leaves, which allows trees without any leaves to be diagnosed. Canker diseases While most infections commonly occur at the main branch crotches, cankers start at the branch collar. In young trees the branch crotch and collar can confine the infection; in older trees (more than 4 years of age) there are more stem cankers, which frequently originate on pruning scars. Wounds Infection of the stem also originates in wounds caused by large wildlife. Proper pruning of the branches can prevent the development of cankers.
Natural target pruning aims to retain the branch collar on the primary trunk while removing the rest of the branch, thereby promoting the development of the wood wound callus tissue free of defects and therefore possessing greater wood strength. Furthermore, Natural target pruning recommended guidelines aiming to retain the integrity of the branch collar has been shown to facilitate effective wound closure. The traditional method of pruning branches was to make an even level cut against the tree trunk, but this technique is currently avoided as evidence has shown that flush cuts increase the wound size and encourage the invasion of the wound by microorganisms and decay.  Therefore, the current recommendation encourages that branches should be removed outside the branch collar as this technique facilitates a circular closure around the wound, while flush cuts often result in a distorted closure that exposes the wood to discolouration and decay. Optimal pruning summary Pruning according to the branch collar is integral in maintaining the health of woody plants. When pruning injures or removes the branch collar, the trunk xylem above and below the cut is rapidly infected by the microorganisms inhabiting the wood and decay of the plant occurs. Optimal pruning is carried out by cutting with respect to the perimeter of the branch collar and cutting adjacent to it. When cutting it is important to use sharp equipment, as any crushing will damage the branch collar. Young trees should be pruned enough to control the direction of the plants growth and to correct any form of weakness along the branch. The tree should be pruned at its desired height. When pruning choose roughly five to seven main branches and prune the rest. Older trees need to be pruned more delicately – they are more susceptible to infections. When pruning older trees, prune out dead, weak, diseased and insect-infested branches and also remove low, broken and crossing branches. The quality of pruning has significant effect on the infection by fungal pathogens, which can consequently cause stem disease. Remove damaged, weak diseased, or insect infested growth or small unwanted branches anytime. It is most beneficial to prune prior to the annual period of most rapid growth, which is usually spring. Conversely, pruning when growth is nearly complete for the season tends to retard and stunt growth. The period of growth tends to vary for different trees, but generally; Deciduous trees should be pruned when dormant. Evergreen trees should be pruned before growth in spring. Spring flowering trees should be pruned towards the end of late spring as this tends to be their period of new growth, this can be indicated by the fading of flowers. Summer trees should be pruned before growth in late winter or spring. Pruning methodology The stages in pruning living branches with respect to branch collar:   1.    Decide where the branch collar begins and ends 2.    Identify the branch bark ridge (raised strip of bark at the top of the branch union or crotch that sits above the branch itself connecting to the trunk of the plant. 3.    Mark a point outside both the branch bark ridge  and the branch collar, mark a line angling down away following the angle of the branch collar. 4.    Ternary Method; the first cut should be done from the underside of the branch around 6 to 12 inches away from the branch's union to the trunk. 
This cut is done to prevent the falling weight of the branch from tearing the stem tissue as it pulls away from the tree, which can cause damage and infection. 5. The second cut, called the top cut, is made above and further along the branch than the undercut. As mentioned before, it is important to prevent any ripping while cutting and manipulating the branch. 6. Once both these cuts have been completed, the branch should fall and be removed. 7. Make a third and final cut outside the previously marked point (see step 3), at a 45 to 60 degree angle to the branch bark ridge; cut precisely so as to maintain the structural integrity of the branch collar. Consequences of optimal pruning Proper pruning techniques are integral to keeping the tree healthy. The branch collar has a variety of functions, one of which is acting as a natural defence system against disease and infection. Proper pruning techniques therefore maintain the structural integrity of the branch collar, allowing it to develop callus tissue that seals off the wound, minimizing disease and infection. Application of optimal pruning techniques Studies testing Kuala Lumpur City Hall (DBKL) tree maintenance workers on correct pruning techniques and conditions illustrated a need for improved education in optimal pruning practices. This would be beneficial, as a clear understanding of optimal pruning techniques would improve the quality of their roadside tree pruning and consequently the health of the trees and of the people living in communities nearby. See also Arboriculture Branch attachment Plant morphology Tree fork References Trees Plant morphology Plant anatomy Biomechanics Horticulture
Branch collar
[ "Physics", "Biology" ]
2,318
[ "Biomechanics", "Mechanics", "Plant morphology", "Plants" ]
10,783,043
https://en.wikipedia.org/wiki/Fagnano%27s%20problem
In geometry, Fagnano's problem is an optimization problem that was first stated by Giovanni Fagnano in 1775: for a given acute triangle, determine the inscribed triangle of minimal perimeter. The solution is the orthic triangle, with vertices at the base points of the altitudes of the given triangle. Solution The orthic triangle, with vertices at the base points of the altitudes of the given triangle, has the smallest perimeter of all triangles inscribed into an acute triangle; hence it is the solution of Fagnano's problem. Fagnano's original proof used calculus methods and an intermediate result given by his father Giulio Carlo de' Toschi di Fagnano. Later, however, several geometric proofs were discovered as well, amongst others by Hermann Schwarz and Lipót Fejér. These proofs use the geometrical properties of reflections to determine some minimal path representing the perimeter. Physical principles A solution from physics is found by imagining putting a rubber band that follows Hooke's law around the three sides of a triangular frame, such that it can slide around smoothly. Then the rubber band would end up in a position that minimizes its elastic energy, and therefore minimizes its total length. This position gives the minimal perimeter triangle. The tension inside the rubber band is the same everywhere in the rubber band, so in its resting position we have, by Lami's theorem, that the two segments of the band meeting at each side of the frame make equal angles with that side. This equal-angle (reflection) property is exactly the one satisfied by the orthic triangle; therefore, this minimal triangle is the orthic triangle. See also Set TSP problem, a more general task of visiting each of a family of sets by the shortest tour References Heinrich Dörrie: 100 Great Problems of Elementary Mathematics: Their History and Solution. Dover Publications 1965, pp. 359–360, problem 90 (restricted online version (Google Books)) Paul J. Nahin: When Least is Best: How Mathematicians Discovered Many Clever Ways to Make Things as Small (or as Large) as Possible. Princeton University Press 2004, p. 67 Coxeter, H. S. M.; Greitzer, S. L.: Geometry Revisited. Washington, DC: Math. Assoc. Amer. 1967, pp. 88–89. H.A. Schwarz: Gesammelte Mathematische Abhandlungen, vol. 2. Berlin 1890, pp. 344–345. (online at the Internet Archive, German) External links Fagnano's problem at cut-the-knot Fagnano's problem in the Encyclopaedia of Mathematics Triangle problems
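The minimality claim is easy to probe numerically. The Python sketch below was written for this article as an illustration (it is not taken from the cited references): it computes the orthic triangle of an example acute triangle from the feet of its altitudes and compares its perimeter against many randomly chosen inscribed triangles; the vertex coordinates are arbitrary assumptions.

import random

def foot(p, a, b):
    """Foot of the perpendicular from point p onto the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    return (a[0] + t * dx, a[1] + t * dy)

def perimeter(p, q, r):
    dist = lambda u, v: ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5
    return dist(p, q) + dist(q, r) + dist(r, p)

# An acute triangle (acuteness is required for Fagnano's problem).
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.5, 3.0)

# Orthic triangle: the feet of the altitudes from A, B and C.
orthic = (foot(A, B, C), foot(B, C, A), foot(C, A, B))
print("orthic perimeter:", perimeter(*orthic))

def random_point_on(a, b):
    t = random.random()
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

best = min(perimeter(random_point_on(B, C), random_point_on(C, A),
                     random_point_on(A, B)) for _ in range(100_000))
print("best of 100000 random inscribed triangles:", best)

No random inscribed triangle beats the orthic perimeter, which is what the theorem predicts; of course this is a numerical check, not a proof.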
Fagnano's problem
[ "Mathematics" ]
496
[ "Geometry problems", "Mathematical problems", "Triangle problems" ]
10,783,469
https://en.wikipedia.org/wiki/Capers%20Jones
Capers Jones is an American specialist in software engineering methodologies and measurement. He is often associated with the function point model of cost estimation. He is the author of thirteen books. He was born in St. Petersburg, Florida, United States and graduated from the University of Florida, having majored in English. He later became the President and CEO of Capers Jones & Associates and latterly Chief Scientist Emeritus of Software Productivity Research (SPR). In 2011, he co-founded Namcook Analytics LLC, where he is Vice President and Chief Technology Officer (CTO). He formed his own business, Software Productivity Research, in 1984, after holding positions at IBM and ITT. After retiring from Software Productivity Research in 2000, he remains active as an independent management consultant. He is a Distinguished Advisor to the Consortium for IT Software Quality (CISQ). Published Works Software Development Patterns and Antipatterns, Capers Jones, Routledge, 2021. A Guide to Selecting Software Measures and Metrics, Capers Jones, Auerbach Publications, 2017. Quantifying Software: Global and Industry Perspectives, Capers Jones, Auerbach Publications, 2017. Software Methodologies: A Quantitative Guide, Capers Jones, Auerbach Publications, 2017. The Technical and Social History of Software Engineering, Capers Jones, Addison-Wesley, 2013. The Economics of Software Quality, Capers Jones, Olivier Bonsignour and Jitendra Subramanyam, Addison-Wesley Longman, 2011. Software Assessments, Benchmarks, and Best Practices, Capers Jones, Addison-Wesley, 2010. Software Engineering Best Practices: lessons from successful projects in the top companies, Capers Jones, Universal Publishers, 2009. Applied Software Measurement: Global Analysis of Productivity and Quality, Capers Jones, McGraw-Hill, 2008. The History and Future of Narragansett Bay, Capers Jones, McGraw-Hill, 2008. Estimating Software Costs, 2nd Edition, Capers Jones, McGraw-Hill, 2007. Software Assessments, Benchmarks and Best Practices, Capers Jones, Addison-Wesley Professional, 2000. Assessment and Control of Software Risks, Capers Jones, Pearson, 1993. Programming Productivity, Capers Jones, McGraw-Hill, 1986. References Year of birth missing (living people) Living people People from St. Petersburg, Florida University of Florida College of Liberal Arts and Sciences alumni American computer specialists American technology chief executives Software engineering researchers Computer science writers IBM employees
Capers Jones
[ "Technology" ]
509
[ "Computing stubs", "Computer specialist stubs" ]
10,783,481
https://en.wikipedia.org/wiki/Electronic%20Document%20System
The Electronic Document System (EDS) was an early hypertext system – also known as the Interactive Graphical Documents (IGD) hypermedia system – focused on the creation of interactive documents such as equipment repair manuals or computer-aided instruction texts with embedded links and graphics. EDS was a 1978–1981 research project at Brown University by Steven Feiner, Sandor Nagy and Andries van Dam. EDS used a dedicated Ramtek raster display and a VAX-11/780 computer to create and navigate a network of graphic pages containing interactive graphic buttons. Graphic buttons had programmed behaviors such as invoking an animation, linking to another page, or exposing an additional level of detail. The system had three automatically created navigation aids: a timeline showing thumbnail images of pages traversed; a 'neighbors' display showing thumbnails of all pages linking to the current page on the left, and all pages reachable from the current page on the right; and a visual display of thumbnail page images arranged by page keyword, color-coded by chapter. Unlike most hypertext systems, EDS incorporated state variables associated with each page. For example, clicking a button indicating a particular hardware fault might set a state variable that would expose a new set of buttons with links to a relevant choice of diagnostic pages. The EDS model prefigured graphic hypertext systems such as Apple's HyperCard. References van Dam, Andries. (1988, July). Hypertext '87 keynote address. Communications of the ACM, 31, 887–895. Feiner, Steven; Nagy, Sandor; van Dam, Andries. (1981). An integrated system for creating and presenting complex computer-based documents. SIGGRAPH Proceedings of the 8th annual conference on Computer graphics and interactive techniques, Dallas, Texas. Brown University Department of Computer Science. (2019, 23 May). A Half-Century of Hypertext at Brown: A Symposium. Hypertext Brown University History of human–computer interaction
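The combination of linking buttons and per-page state variables described above can be summarized with a toy data structure. The following Python sketch is a speculative, modern illustration written for this article; it is not EDS code, and the page names, button labels and state variables are invented examples.

# Toy model of EDS-style pages: buttons either link to another page or set
# a state variable, and buttons can be exposed conditionally on state.
class Button:
    def __init__(self, label, target=None, sets=None, requires=None):
        self.label = label        # text on the graphic button
        self.target = target      # page to link to, if any
        self.sets = sets          # (variable, value) to set, if any
        self.requires = requires  # (variable, value) needed for the button to show

class Document:
    def __init__(self):
        self.state = {}
        self.pages = {}

    def visible_buttons(self, page):
        return [b for b in self.pages[page]
                if b.requires is None
                or self.state.get(b.requires[0]) == b.requires[1]]

    def press(self, page, label):
        for b in self.visible_buttons(page):
            if b.label == label:
                if b.sets:
                    self.state[b.sets[0]] = b.sets[1]
                return b.target or page
        raise KeyError(label)

doc = Document()
doc.pages["diagnosis"] = [
    Button("power fault", sets=("fault", "power")),
    Button("check supply", target="power-page", requires=("fault", "power")),
]
page = doc.press("diagnosis", "power fault")   # sets the state variable
page = doc.press("diagnosis", "check supply")  # now exposed; links onward
print(page)  # power-page

The second button only becomes visible once the state variable has been set, mirroring the hardware-fault example in the article.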
Electronic Document System
[ "Technology" ]
402
[ "History of human–computer interaction", "History of computing" ]
10,784,136
https://en.wikipedia.org/wiki/Negative%20base
A negative base (or negative radix) may be used to construct a non-standard positional numeral system. Like other place-value systems, each position holds multiples of the appropriate power of the system's base; but that base is negative, that is to say, the base is equal to −r for some natural number r (r ≥ 2). Negative-base systems can accommodate all the same numbers as standard place-value systems, but both positive and negative numbers are represented without the use of a minus sign (or, in computer representation, a sign bit); this advantage is countered by an increased complexity of arithmetic operations. The need to store the information normally contained by a negative sign often results in a negative-base number being one digit longer than its positive-base equivalent. The common names for negative-base positional numeral systems are formed by prefixing nega- to the name of the corresponding positive-base system; for example, negadecimal (base −10) corresponds to decimal (base 10), negabinary (base −2) to binary (base 2), negaternary (base −3) to ternary (base 3), and negaquaternary (base −4) to quaternary (base 4). Example Consider what is meant by the representation 12,243 in the negadecimal system, whose base is −10: 1×(−10)⁴ + 2×(−10)³ + 2×(−10)² + 4×(−10)¹ + 3×(−10)⁰. The representation 12,243 (which is intended to be negadecimal notation) is equivalent to 8,163 in decimal notation, because 10,000 + (−2,000) + 200 + (−40) + 3 = 8,163. Remark On the other hand, −8,163 in decimal would be written 9,977 in negadecimal. History Negative numerical bases were first considered by Vittorio Grünwald in an 1885 monograph published in Giornale di Matematiche di Battaglini. Grünwald gave algorithms for performing addition, subtraction, multiplication, division, root extraction, divisibility tests, and radix conversion. Negative bases were later mentioned in passing by A. J. Kempner in 1936 and studied in more detail by Zdzisław Pawlak and A. Wakulicz in 1957. Negabinary was implemented in the early Polish computer BINEG (and UMC), built 1957–59, based on ideas by Z. Pawlak and A. Lazarkiewicz from the Mathematical Institute in Warsaw. Implementations since then have been rare. zfp, a floating-point compression algorithm from the Lawrence Livermore National Laboratory, uses negabinary to store numbers. According to zfp's documentation: Unlike sign-magnitude representations, the leftmost one-bit in negabinary simultaneously encodes the sign and approximate magnitude of a number. Moreover, unlike two’s complement, numbers small in magnitude have many leading zeros in negabinary regardless of sign, which facilitates encoding. Notation and use Denoting the base as −r, every integer a can be written uniquely as a = dₙ(−r)ⁿ + ... + d₁(−r)¹ + d₀, where each digit dᵢ is an integer from 0 to r − 1 and the leading digit dₙ is nonzero (unless a = 0). The base −r expansion of a is then given by the string dₙdₙ₋₁...d₁d₀. Negative-base systems may thus be compared to signed-digit representations, such as balanced ternary, where the radix is positive but the digits are taken from a partially negative range. (In balanced ternary the digit of value −1 is often written as the single character T.) Some numbers have the same representation in base −r as in base r. For example, the numbers from 100 to 109 have the same representations in decimal and negadecimal. Similarly, 17 = 2⁴ + 1 = (−2)⁴ + 1 is represented by 10001 in binary and 10001 in negabinary.
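To check such expansions mechanically, a digit string in any base −r can be evaluated by Horner's rule. The following Python sketch is an illustration added here, not part of the original text; digit characters beyond 0–9 are not handled:

def from_negative_base(digits: str, r: int) -> int:
    """Evaluate a digit string (most significant digit first) in base -r."""
    value = 0
    for d in digits:
        value = value * (-r) + int(d)   # Horner's rule with radix -r
    return value

print(from_negative_base("12243", 10))  # 8163: the negadecimal example above
print(from_negative_base("10001", 2))   # 17: the same digits as binary 10001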
Some examples of numbers with their expansions in positive and the corresponding negative bases: decimal 6 is 11010 in negabinary and 110 in negaternary; decimal −6 is 1110 in negabinary and 20 in negaternary; decimal −1 is 11 in negabinary, 12 in negaternary, and 19 in negadecimal. Note that, with the exception of nega balanced ternary, the base −r expansions of negative integers have an even number of digits, while the base −r expansions of the non-negative integers have an odd number of digits. Calculation The base −r expansion of a number can be found by repeated division by −r, recording the non-negative remainders, and concatenating those remainders, starting with the last. Note that if a divided by −r is q with remainder m, then a = q·(−r) + m and therefore m = a − q·(−r). To arrive at the correct conversion, the value for q must be chosen such that m is non-negative and minimal. For the fourth line of the following example this means that q = 2 (giving m = 1) has to be chosen, and not q = 1 (giving m = −2) nor q = 3 (giving m = 4). For example, to convert 146 in decimal to negaternary:

146 ÷ (−3) = −48, remainder 2
−48 ÷ (−3) = 16, remainder 0
16 ÷ (−3) = −5, remainder 1
−5 ÷ (−3) = 2, remainder 1
2 ÷ (−3) = 0, remainder 2

Reading the remainders backward we obtain the negaternary representation of 146₁₀: 21102₋₃. Proof: −3 · (−3 · (−3 · (−3 · (2) + 1) + 1) + 0) + 2 = (((2 · (−3) + 1) · (−3) + 1) · (−3) + 0) · (−3) + 2 = 146₁₀. Reading the remainders forward we can obtain the negaternary least-significant-digit-first representation. Proof: 2 + (0 + (1 + (1 + (2) · (−3)) · (−3)) · (−3)) · (−3) = 146₁₀. Note that in most programming languages, the result (in integer arithmetic) of dividing a negative number by a negative number is rounded towards 0, usually leaving a negative remainder. In such a case we have a = q·(−r) + m = (q + 1)·(−r) + (m + r). Because |m| < r, (m + r) is the positive remainder. Therefore, to get the correct result in such a case, computer implementations of the above algorithm should add 1 and r to the quotient and remainder respectively. Example implementation code To negabinary C#

static string ToNegabinary(int val)
{
    if (val == 0) return "0";  // guard: the loop below would otherwise return ""
    string result = string.Empty;
    while (val != 0)
    {
        int remainder = val % -2;
        val = val / -2;
        if (remainder < 0)
        {
            remainder += 2;
            val += 1;
        }
        result = remainder.ToString() + result;
    }
    return result;
}

C++

#include <bitset>
#include <climits>
#include <cstdlib>

auto to_negabinary(int value)
{
    std::bitset<sizeof(int) * CHAR_BIT> result;
    std::size_t bit_position = 0;
    while (value != 0)
    {
        const auto div_result = std::div(value, -2);
        if (div_result.rem < 0)
            value = div_result.quot + 1;
        else
            value = div_result.quot;
        result.set(bit_position, div_result.rem != 0);
        ++bit_position;
    }
    return result;
}

To negaternary C#

static string Negaternary(int val)
{
    if (val == 0) return "0";  // guard: the loop below would otherwise return ""
    string result = string.Empty;
    while (val != 0)
    {
        int remainder = val % -3;
        val = val / -3;
        if (remainder < 0)
        {
            remainder += 3;
            val += 1;
        }
        result = remainder.ToString() + result;
    }
    return result;
}

Python

def negaternary(i: int) -> str:
    """Decimal to negaternary."""
    if i == 0:
        digits = ["0"]
    else:
        digits = []
        while i != 0:
            i, remainder = divmod(i, -3)
            if remainder < 0:
                i, remainder = i + 1, remainder + 3
            digits.append(str(remainder))
    return "".join(digits[::-1])

>>> negaternary(1000)
'2212001'

Common Lisp

(defun negaternary (i)
  (if (zerop i)
      "0"
      (let ((digits "")
            (rem 0))
        (loop while (not (zerop i))
              do (progn
                   (multiple-value-setq (i rem) (truncate i -3))
                   (when (minusp rem)
                     (incf i)
                     (incf rem 3))
                   (setf digits (concatenate 'string (write-to-string rem) digits))))
        digits)))

To any negative base Java

import java.util.ArrayList;
import java.util.Collections;

public ArrayList<Integer> negativeBase(int input, int base) {
    ArrayList<Integer> result = new ArrayList<>();
    int number = input;
    while (number != 0) {
        int i = number % base;
        number /= base;
        if (i < 0) {
            i += Math.abs(base);
            number++;
        }
        result.add(i);  // digits are produced least significant first
    }
    Collections.reverse(result);  // most significant digit first
    return result;
}

The above gives the result in an ArrayList of integers, so that the code does not have to handle how to represent digit values greater than 9 (which arise for bases smaller than −10).
To display the result as a string, one can decide on a mapping of digit values to characters. For example:

import java.util.ArrayList;
import java.util.stream.Collectors;

final String alphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@_";

public String toBaseString(ArrayList<Integer> lst) {
    // Would throw an exception if a digit value is beyond the 64 available characters
    return lst.stream()
              .map(n -> String.valueOf(alphabet.charAt(n)))
              .collect(Collectors.joining(""));
}

AutoLisp

(defun negabase (num baz / dig rst)
  ;; NUM is any number.
  ;; BAZ is any number in the interval [-10, -2].
  ;; (This is forced by how we do string notation.)
  ;;
  ;; NUM and BAZ will be truncated to an integer if they're floats (e.g. 14.25
  ;; will be truncated to 14, -123456789.87 to -123456789, etc.).
  (if (and (numberp num) (numberp baz) (<= (fix baz) -2) (> (fix baz) -11))
    (progn
      (setq baz (float (fix baz))
            num (float (fix num))
            dig (if (= num 0) "0" ""))
      (while (/= num 0)
        (setq rst (- num (* baz (setq num (fix (/ num baz))))))
        (if (minusp rst)
          (setq num (1+ num)
                rst (- rst baz)))
        (setq dig (strcat (itoa (fix rst)) dig)))
      dig)
    (progn
      (prompt
        (cond ((and (not (numberp num)) (not (numberp baz)))
               "\nWrong number and negabase.")
              ((not (numberp num)) "\nWrong number.")
              ((not (numberp baz)) "\nWrong negabase.")
              (t "\nNegabase must be inside [-10 -2] interval.")))
      (princ))))

Shortcut calculation The following algorithms assume that the input is available in bitstrings and coded in base +2 (digits in {0, 1}, as in most of today's digital computers), that there are add (+) and xor (^) operations which operate on such bitstrings, and that the set of output digits is standard, i.e. {0, 1} for base −2 and {0, 1, 2, 3} for base −4; the output is coded in the same bitstring format, but the meaning of the places is another one. To negabinary The conversion to negabinary (base −2; digits in {0, 1}) allows a remarkable shortcut (C implementation):

#include <stdint.h>

uint32_t toNegaBinary(uint32_t value)  // input in standard binary
{
    uint32_t Schroeppel2 = 0xAAAAAAAA;  // = 2/3*((2*2)^16-1) = ...1010
    return (value + Schroeppel2) ^ Schroeppel2;  // eXclusive OR
    // resulting unsigned int to be interpreted as string of elements ε {0,1} (bits)
}

JavaScript port for the same shortcut calculation:

function toNegaBinary(value) {
    const Schroeppel2 = 0xAAAAAAAA;
    // Convert as in C, then convert to a NegaBinary String
    return ((value + Schroeppel2) ^ Schroeppel2).toString(2);
}

The algorithm was first described by Schroeppel in HAKMEM (1972) as item 128. Wolfram MathWorld documents a version in the Wolfram Language by D. Librik (Szudzik). To negaquaternary The conversion to negaquaternary (base −4; digits in {0, 1, 2, 3}) allows a similar shortcut (C implementation):

#include <stdint.h>

uint32_t toNegaQuaternary(uint32_t value)  // input in standard binary
{
    uint32_t Schroeppel4 = 0xCCCCCCCC;  // = 4/5*((4*4)^8-1) = ...11001100 = ...3030
    return (value + Schroeppel4) ^ Schroeppel4;  // eXclusive OR
    // resulting unsigned int to be interpreted as string of elements ε {0,1,2,3} (pairs of bits)
}

JavaScript port for the same shortcut calculation:

function toNegaQuaternary(value) {
    const Schroeppel4 = 0xCCCCCCCC;
    // Convert as in C, then convert to NegaQuaternary String
    return ((value + Schroeppel4) ^ Schroeppel4).toString(4);
}
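As a cross-check, the bitwise shortcut and the repeated-division method of the Calculation section can be compared directly. The following Python sketch is an illustration added here (not part of the original text); it mirrors the C routine above and asserts agreement over a small range:

def to_negabinary_division(value: int) -> str:
    """Reference conversion by repeated division, as described above."""
    if value == 0:
        return "0"
    digits = []
    while value != 0:
        value, rem = divmod(value, -2)
        if rem < 0:                      # force a non-negative remainder
            value, rem = value + 1, rem + 2
        digits.append(str(rem))
    return "".join(reversed(digits))

SCHROEPPEL2 = 0xAAAAAAAA                 # ones in the odd bit positions

def to_negabinary_shortcut(value: int) -> str:
    """Schroeppel's trick, with an explicit 32-bit wrap as in the C version."""
    return format(((value + SCHROEPPEL2) & 0xFFFFFFFF) ^ SCHROEPPEL2, "b")

# Both methods agree wherever the result fits in 32 negabinary bits:
assert all(to_negabinary_division(n) == to_negabinary_shortcut(n)
           for n in range(-5000, 5000))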
Arithmetic operations The following describes the arithmetic operations for negabinary; calculations in larger bases are similar. Addition Adding negabinary numbers proceeds bitwise, starting from the least significant bits; the bits from each addend are summed with the (balanced ternary) carry from the previous bit (0 at the LSB). This sum is then decomposed into an output bit and a carry for the next iteration as shown in the table:

Sum   Output bit   Carry
−2    0             1
−1    1             1
 0    0             0
 1    1             0
 2    0            −1
 3    1            −1

The second row of this table, for instance, expresses the fact that −1 = 1 + 1 × −2; the fifth row says 2 = 0 + −1 × −2; etc. As an example, to add 1010101 (1 + 4 + 16 + 64 = 85) and 1110100 (4 + 16 − 32 + 64 = 52),

Carry:          1 −1  0 −1  1 −1  0  0  0
First addend:          1  0  1  0  1  0  1
Second addend:         1  1  1  0  1  0  0  +
               ----------------------------
Number:         1 −1  2  0  3 −1  2  0  1
Bit (result):   1  1  0  0  1  1  0  0  1
Carry:          0  1 −1  0 −1  1 −1  0  0

so the result is 110011001 (1 − 8 + 16 − 128 + 256 = 137). Another method While adding two negabinary numbers, every time a carry is generated an extra carry should be propagated to the next bit. Consider the same example as above:

Extra carry:    1  1  1  0  1  0  0  0
Carry:       0  1  1  0  1  0  0  0
First addend:         1  0  1  0  1  0  1
Second addend:        1  1  1  0  1  0  0  +
             -------------------------------
Answer:      1  1  0  0  1  1  0  0  1

Negabinary full adder A full adder circuit can be designed to add numbers in negabinary, with combinational logic producing the sum bit and the (three-valued) carry. Incrementing negabinary numbers Incrementing a negabinary number can be done by converting it to ordinary binary, adding one, and converting back; using the Schroeppel mask from the shortcut above, this collapses to the single formula x′ = ((x ^ 0xAAAAAAAA) + 1) ^ 0xAAAAAAAA. (The operations in this formula are to be interpreted as operations on regular binary numbers; ^ is a bitwise exclusive or.) Subtraction To subtract, multiply each bit of the second number by −1, and add the numbers, using the same table as above. As an example, to compute 1101001 (1 − 8 − 32 + 64 = 25) minus 1110100 (4 + 16 − 32 + 64 = 52),

Carry:           0  1 −1  1  0  0  0
First number:    1  1  0  1  0  0  1
Second number:  −1 −1 −1  0 −1  0  0  +
                --------------------
Number:          0  1 −2  2 −1  0  1
Bit (result):    0  1  0  0  1  0  1
Carry:           0  0  1 −1  1  0  0

so the result is 100101 (1 + 4 − 32 = −27). Unary negation, −x, can be computed as binary subtraction from zero, 0 − x. Multiplication and division Shifting to the left multiplies by −2, shifting to the right divides by −2. To multiply, multiply like normal decimal or binary numbers, but using the negabinary rules for adding the carry, when adding the numbers.

First number:               1 1 1 0 1 1 0
Second number:              1 0 1 1 0 1 1  ×
              -------------------------------
                            1 1 1 0 1 1 0
                          1 1 1 0 1 1 0
                      1 1 1 0 1 1 0
                    1 1 1 0 1 1 0
                1 1 1 0 1 1 0              +
              -------------------------------
Carry:          0 −1  0 −1 −1 −1 −1 −1  0 −1  0  0
Number:         1  0  2  1  2  2  2  3  2  0  2  1  0
Bit (result):   1  0  0  1  0  0  0  1  0  0  0  1  0
Carry:          0 −1  0 −1 −1 −1 −1 −1  0 −1  0  0

For each column, add the carry to the number, and divide the sum by −2 to get the new carry, with the resulting bit as the remainder. Comparing negabinary numbers It is possible to compare negabinary numbers by slightly adjusting a normal unsigned binary comparator. When comparing the numbers A and B, invert each odd-positioned bit of both numbers. After this, compare A and B using a standard unsigned comparator. Fractional numbers Base −r representation may of course be carried beyond the radix point, allowing the representation of non-integer numbers. As with positive-base systems, terminating representations correspond to fractions where the denominator is a power of the base; repeating representations correspond to other rationals, and for the same reason.
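As a check on the addition rules above, the bitwise procedure is easy to execute. The following Python sketch is an illustration added here (not from the original text); it adds two negabinary digit strings using the sum-to-(bit, carry) decomposition tabulated in the Addition section:

# Decomposition of a column sum s into (bit, carry) with s = bit + carry * (-2),
# matching the addition table above.
DECOMPOSE = {-2: (0, 1), -1: (1, 1), 0: (0, 0), 1: (1, 0), 2: (0, -1), 3: (1, -1)}

def add_negabinary(a: str, b: str) -> str:
    a, b = a.zfill(len(b)), b.zfill(len(a))      # pad to a common length
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):   # least significant bit first
        bit, carry = DECOMPOSE[int(x) + int(y) + carry]
        out.append(str(bit))
    while carry != 0:                            # flush any remaining carry
        bit, carry = DECOMPOSE[carry]
        out.append(str(bit))
    return "".join(reversed(out)).lstrip("0") or "0"

print(add_negabinary("1010101", "1110100"))      # '110011001' (85 + 52 = 137)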
Non-unique representations Unlike positive-base systems, where integers and terminating fractions have non-unique representations (for example, in decimal 0.999... = 1), in negative-base systems the integers have only a single representation. However, there do exist rationals with non-unique representations. For the digits {0, 1, ..., t} with t = r − 1 the biggest digit, we have 0.0t0t0t... = t/(r² − 1) = 1/(r + 1) as well as 1.t0t0t0... = 1 − t·r/(r² − 1) = 1/(r + 1). So every number of the form x + 1/(r + 1), with x a number having a terminating representation, has two distinct representations. For example, in negaternary, i.e. r = 3 and t = 2, there is 1.202020... = 0.020202... = 1/4. Such non-unique representations can be found by considering the largest and smallest possible representations with integer parts 0 and 1 respectively, and then noting that they are equal. (Indeed, this works with any integer-base system.) The rationals thus non-uniquely expressible are those of the form z/(rᵏ(r + 1)), with z an integer not divisible by r + 1 and k a non-negative integer. Imaginary base Just as using a negative base allows the representation of negative numbers without an explicit negative sign, using an imaginary base allows the representation of Gaussian integers. Donald Knuth proposed the quater-imaginary base (base 2i) in 1955. See also Quater-imaginary base Binary Balanced ternary Quaternary numeral system Numeral systems 1 − 2 + 4 − 8 + ⋯ (p-adic numbers) References Further reading External links Non-standard positional numeral systems Computer arithmetic
Negative base
[ "Mathematics" ]
4,389
[ "Computer arithmetic", "Arithmetic" ]
10,784,518
https://en.wikipedia.org/wiki/Ovomucin
Ovomucin is a glycoprotein found mainly in egg whites, as well as in the chalaza and vitelline membrane. The protein makes up around 2-4% of the protein content of egg whites; like other members of the mucin protein family, ovomucin confers gel-like properties. It is composed of two subunits, alpha-ovomucin (MUC5B) and beta-ovomucin (MUC6), of which the beta subunit is much more heavily glycosylated. The alpha subunit has a high number of acidic amino acids, while the beta subunit has more hydroxyl amino acids. The protein has a carbohydrate content of around 33%, featuring at least three unique types of carbohydrate side chains. It is known to possess a wide range of biological activities, including regulating cell functions and promoting the production of macrophages, lymphocytes, and cytokines, suggesting that it plays a role in the immune system. External links References Eggs Mucins Avian proteins
Ovomucin
[ "Chemistry", "Biology" ]
229
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
10,784,887
https://en.wikipedia.org/wiki/Quantum%20cellular%20automaton
A quantum cellular automaton (QCA) is an abstract model of quantum computation, devised in analogy to conventional models of cellular automata introduced by John von Neumann. The same name may also refer to quantum dot cellular automata, which are a proposed physical implementation of "classical" cellular automata by exploiting quantum mechanical phenomena. The latter have attracted a lot of attention as a result of their extremely small feature size (at the molecular or even atomic scale) and their ultra-low power consumption, making them one candidate for replacing CMOS technology. Usage of the term In the context of models of computation or of physical systems, quantum cellular automaton refers to the merger of elements of both (1) the study of cellular automata in conventional computer science and (2) the study of quantum information processing. In particular, the following are features of models of quantum cellular automata: The computation is considered to come about by parallel operation of multiple computing devices, or cells. The cells are usually taken to be identical, finite-dimensional quantum systems (e.g. each cell is a qubit). Each cell has a neighborhood of other cells. Altogether these form a network of cells, which is usually taken to be regular (e.g. the cells are arranged as a lattice with or without periodic boundary conditions). The evolution of all of the cells has a number of physics-like symmetries. Locality is one: the next state of a cell depends only on its current state and that of its neighbours. Homogeneity is another: the evolution acts the same everywhere, and is independent of time. The state space of the cells, and the operations performed on them, should be motivated by principles of quantum mechanics. Another feature that is often considered important for a model of quantum cellular automata is that it should be universal for quantum computation (i.e. that it can efficiently simulate quantum Turing machines, some arbitrary quantum circuit or simply all other quantum cellular automata). Models which have been proposed recently impose further conditions, e.g. that quantum cellular automata should be reversible and/or locally unitary, and have an easily determined global transition function from the rule for updating individual cells. Recent results show that these properties can be derived axiomatically, from the symmetries of the global evolution. Models Early proposals In 1982, Richard Feynman suggested an initial approach to quantizing a model of cellular automata. In 1985, David Deutsch presented a formal development of the subject. Later, Gerhard Grössing and Anton Zeilinger introduced the term "quantum cellular automata" to refer to a model they defined in 1988, although their model had very little in common with the concepts developed by Deutsch and so has not been developed significantly as a model of computation. Models of universal quantum computation The first formal model of quantum cellular automata to be researched in depth was that introduced by John Watrous. This model was developed further by Wim van Dam, as well as Christoph Dürr, Huong LêThanh, Miklos Santha, Jozef Gruska, and Pablo Arrighi. However it was later realised that this definition was too loose, in the sense that some instances of it allow superluminal signalling. A second wave of models includes those of Susanne Richter and Reinhard Werner, of Benjamin Schumacher and Reinhard Werner, of Carlos Pérez-Delgado and Donny Cheung, and of Pablo Arrighi, Vincent Nesme and Reinhard Werner.
These are all closely related, and do not suffer any such locality issue. In the end one can say that they all agree to picture quantum cellular automata as just some large quantum circuit, infinitely repeating across time and space. Recent reviews of the topic are available in the references. Models of physical systems Models of quantum cellular automata have been proposed by David Meyer, Bruce Boghosian and Washington Taylor, and Peter Love and Bruce Boghosian as a means of simulating quantum lattice gases, motivated by the use of "classical" cellular automata to model classical physical phenomena such as gas dispersion. Criteria determining when a quantum cellular automaton (QCA) can be described as a quantum lattice gas automaton (QLGA) were given by Asif Shakeel and Peter Love. Quantum dot cellular automata An approach to implementing classical cellular automata with systems designed around quantum dots has been proposed under the name "quantum cellular automata" by Doug Tougaw and Craig Lent, as a replacement for classical computation using CMOS technology. In order to better differentiate between this proposal and models of cellular automata which perform quantum computation, many authors working on this subject now refer to it as a quantum dot cellular automaton. See also References Cellular automata Quantum information science Richard Feynman
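To make the circuit picture above concrete, here is a minimal simulation sketch. It is an illustration added here, not any of the published models verbatim: a one-dimensional partitioned QCA on a ring of qubits, where a single fixed two-qubit unitary is applied in two shifted layers per time step, so the update is local, homogeneous, and unitary. All names are invented for the example, and the number of qubits n is assumed even.

import numpy as np

def apply_two_qubit(state, U, i, j, n):
    """Apply a 2-qubit unitary U to qubits i and j of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (i, j), (0, 1))            # bring targets to the front
    psi = (U @ psi.reshape(4, -1)).reshape([2] * n)   # act with U on the pair
    return np.moveaxis(psi, (0, 1), (i, j)).reshape(-1)

def qca_step(state, U, n):
    """One time step: the same local unitary in two shifted layers (n even)."""
    for offset in (0, 1):                             # even layer, then odd layer
        for k in range(0, n, 2):
            state = apply_two_qubit(state, U, (k + offset) % n,
                                    (k + 1 + offset) % n, n)
    return state

# Example: 4 qubits on a ring; a SWAP gate just shuttles an excitation around.
n = 4
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
state = np.zeros(2 ** n, dtype=complex)
state[1] = 1.0                                        # one excited qubit
state = qca_step(state, SWAP, n)
print(abs(np.vdot(state, state)))                     # 1.0: the update is unitary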
Quantum cellular automaton
[ "Mathematics" ]
977
[ "Recreational mathematics", "Cellular automata" ]
10,785,481
https://en.wikipedia.org/wiki/Pheophytin
Pheophytin or phaeophytin is a chemical compound that serves as the first electron carrier intermediate in the electron transfer pathway of Photosystem II (PS II) in plants, and in the type II photosynthetic reaction center (RC P870) found in purple bacteria. In both PS II and RC P870, light drives electrons from the reaction center through pheophytin, which then passes the electrons to a quinone (QA) in RC P870 and RC P680. The overall mechanisms, roles, and purposes of the pheophytin molecules in the two transport chains are analogous to each other. Structure In biochemical terms, pheophytin is a chlorophyll molecule lacking a central Mg2+ ion. It can be produced from chlorophyll by treatment with a weak acid, producing a dark bluish waxy pigment. The probable etymology comes from this description, with pheo meaning dusky and phyt meaning vegetation. History and discovery In 1977, the scientists Klevanik, Klimov, and Shuvalov performed a series of experiments to demonstrate that it is pheophytin and not plastoquinone that serves as the primary electron acceptor in photosystem II. Using several experiments, including electron paramagnetic resonance (EPR), they were able to show that pheophytin was reducible and, therefore, the primary electron acceptor between P680 and plastoquinone (Klimov, Allakhverdiev, Klevanik, Shuvalov). This discovery was met with fierce opposition, since many believed pheophytin to be only a byproduct of chlorophyll degradation. Therefore, more experiments ensued to prove that pheophytin is indeed the primary electron acceptor of PSII, occurring between P680 and plastoquinone (Klimov, Allakhverdiev, Shuvalov). The data obtained were as follows: Photo-reduction of pheophytin has been observed in various mixtures containing PSII reaction centers. The quantity of pheophytin is in direct proportion to the number of PSII reaction centers. Photo-reduction of pheophytin occurs at temperatures as low as 100 K, and is observed after the reduction of plastoquinone. These observations are all characteristic of photo-conversions of reaction center components. Reaction in purple bacteria Pheophytin is the first electron carrier intermediate in the photoreaction center (RC P870) of purple bacteria. Its involvement in this system can be broken down into five basic steps. The first step is excitation of the bacteriochlorophylls (Chl)2, the special pair of chlorophylls. This can be seen in the following reaction. (Chl)2 + 1 photon → (Chl)2* (excitation) The second step involves the (Chl)2 passing an electron to pheophytin, producing a negatively charged radical (the pheophytin) and a positively charged radical (the special pair of chlorophylls), which results in a charge separation. (Chl)2* + Pheo → ·(Chl)2+ + ·Pheo− (charge separation) The third step is the rapid electron movement to the tightly bound menaquinone, QA, which immediately donates the electrons to a second, loosely bound quinone (QB). The fourth step is the reduction of this second quinone: two electron transfers convert QB to its reduced form (QBH2). 2·Pheo− + 2H+ + QB → 2Pheo + QBH2 (quinone reduction) The fifth and final step involves the filling of the “hole” in the special pair by an electron from a heme in cytochrome c. This regenerates the substrates and completes the cycle, allowing for subsequent reactions to take place. Involvement in photosystem II In photosystem II, pheophytin plays a very similar role. It again acts as the first electron carrier intermediate in the photosystem.
After P680 becomes excited to P680*, it transfers an electron to pheophytin, which converts the molecule into a negatively charged radical. Two negatively charged pheophytin radicals quickly pass their extra electrons to two consecutive plastoquinone molecules. Eventually, the electrons pass through the cytochrome b6f complex and leave photosystem II. The reactions outlined above in the section concerning purple bacteria give a general illustration of the actual movement of the electrons through pheophytin and the photosystem. The overall scheme is: Excitation Charge separation Plastoquinone reduction Regeneration of substrates See also Photosynthesis Photosystem Chlorophyll Reaction center P680 Chlorophyllide References "Photosynthetic Molecules Section." Library of 3-D Molecular Structures. 22 April 2007. Xiong, Ling, and Richard Sayre. "The Identification of Potential Pheophytin Binding Sites in the Photosystem II Reaction Center of Chlamydomonas by Site-Directed Mutagenesis." (2000). American Society of Plant Biologists. 22 Apr. 2007. Photosynthetic pigments Tetrapyrroles
Pheophytin
[ "Chemistry" ]
1,103
[ "Photosynthetic pigments", "Photosynthesis" ]
10,786,148
https://en.wikipedia.org/wiki/Panaeolus%20foenisecii
Panaeolus foenisecii, commonly called the mower's mushroom, haymaker, haymaker's panaeolus, or brown hay mushroom, is a very common and widely distributed little brown mushroom often found on lawns; it is not edible. In 1963 Tyler and Smith found that this mushroom contains serotonin, 5-HTP and 5-hydroxyindoleacetic acid. In many field guides it is listed as psychoactive; however, the mushroom does not produce any hallucinogenic effects. Description Cap: 1 to 3 cm across, conic to convex, chestnut brown to tan, hygrophanous, often with a dark band around the margin which fades as the mushroom dries. Gills: Broad, adnate, brown with lighter edges, becoming mottled as the spores mature. Stipe: 3 to 8 cm by 1 to 3 mm, fragile, hollow, beige to light brown, fibrous, pruinose, and slightly striate. Taste: A slightly unpleasant nutty fungal taste. Odor: Nutty, slightly unpleasant. Spore print: Dark walnut brown. Microscopic features: Spores measure 12–17 × 7–11 μm, subfusoid to lemon shaped, rough, dextrinoid, with an apical germ pore. Cheilocystidia subfusoid to cylindric or subcapitate, often wavy, up to 50 μm long. Pleurocystidia absent, but some authors report inconspicuous "pseudocystidia". The pileipellis is a cellular cuticle with subglobose elements and has pileocystidia. Habitat In the Pacific Northwest of the United States, the species may be the most common mushroom to appear in lawns. It is also found on lawns along the east coast. Similar species Similar species include Agaricus campestris, Conocybe apala, Marasmius oreades, Psathyrella candolleana, and Psathyrella gracilis. It is sometimes mistaken for the psychedelic Panaeolus cinctulus or Panaeolus olivaceus, both of which share the same habitat and can be differentiated by their jet black spores. This is probably why Panaeolus foenisecii is occasionally listed as a psychoactive species in older literature. See also List of Panaeolus species References External links Mushroom Expert – Panaeolus foenisecii Mykoweb – Panaeolus foenisecii Mushroom Observer – Panaeolus foenisecii at mushroomobserver.org Rough Spored Panaeoloideae spore comparison foenisecii Fungi of Europe Fungi described in 1933 Inedible fungi Taxa named by Christiaan Hendrik Persoon Fungus species
Panaeolus foenisecii
[ "Biology" ]
608
[ "Fungi", "Fungus species" ]
10,786,653
https://en.wikipedia.org/wiki/Rhenium%20diboride
Rhenium diboride (ReB2) is a synthetic high-hardness material that was first synthesized in 1962. The compound is formed from a mixture of rhenium, noted for its resistance to high pressure, and boron, which forms short, strong covalent bonds with rhenium. It has regained popularity in recent times in hopes of finding a material that possesses hardness comparable to that of diamond. Unlike other high-hardness synthetic materials, such as cubic boron nitride (c-BN), rhenium diboride can be synthesized at ambient pressure, potentially simplifying mass production. However, the high cost of rhenium and the commercial availability of alternatives such as polycrystalline c-BN make the prospect of large-scale applications unlikely. Synthesis ReB2 can be synthesized by at least three different methods at standard atmospheric pressure: solid-state metathesis, melting in an electric arc, and direct heating of the elements. In the metathesis reaction, rhenium trichloride and magnesium diboride are mixed and heated in an inert atmosphere, and the magnesium chloride byproduct is washed away. Excess boron is needed to prevent the formation of other phases such as Re7B3 and Re3B. In the arc-melting method, rhenium and boron powders are mixed and a large electric current is passed through the mixture, also in an inert atmosphere. In the direct reaction method, the rhenium–boron mixture is sealed in a vacuum and held at a high temperature over a longer period (1,000 °C for five days). At least the last two methods are capable of producing pure ReB2 without any other phases, as confirmed by X-ray crystallography. Hardness Rhenium diboride is occasionally, and controversially, cited as a "superhard material" due to its high hardness. However, tested in the asymptotic-hardness region, as recommended for hard and superhard materials, rhenium diboride demonstrates a Vickers hardness of only 30.1 ± 1.3 GPa at 4.9 N, well below the generally accepted threshold of 40 GPa or more needed to classify it as "superhard". Another study estimated the hardness of fully dense ReB2 at about 22 GPa under an applied load of 2.94 N, comparable to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride. Values greater than 40 GPa have been observed only in tests with very low loads, which are not a suitable testing method for this type of solid. In one test, the lowest tested load of 0.49 N yielded an average hardness of 48 ± 5.6 GPa and a maximum hardness of 55.5 GPa, which is comparable to the hardness of c-BN under an equivalent load. This inverse relationship between the applied load and the measured hardness is known as the indentation size effect. In recent times, there has been a significant amount of research into improving the hardness and other properties of ReB2. In one study, the hardness of the ReB2(R-3m) polymorph was estimated at 41.7 GPa, while for ReB2(P63/mmc) it was placed at ca. 40.6 GPa. In another study, a fully dense B4C–ReB2 ceramic composite nanopowder was fabricated by spark plasma sintering. It exhibited a microhardness of 50 ± 3 GPa under a 49 N load in the asymptotic-hardness region and had a density of 3.2 g/cm³, comparable with the hardness and density of c-BN. The hardness of ReB2 exhibits considerable anisotropy because of its hexagonal layered structure, being greatest along the c axis. Two factors contribute to the high hardness of ReB2: a high density of valence electrons, and an abundance of short covalent bonds.
Rhenium has one of the highest valence electron densities of any transition metal (476 electrons/nm³, compared with 572 electrons/nm³ for osmium and 705 electrons/nm³ for diamond). The addition of boron requires only a 5% expansion of the rhenium lattice because the small boron atoms fill the existing spaces between the rhenium atoms. Furthermore, the electronegativities of rhenium and boron are close enough (1.9 and 2.04 on the Pauling scale) that they form covalent bonds in which the electrons are shared almost equally. See also Ruthenium boride Network covalent bonding Superhard material References Borides Rhenium compounds Superhard materials Substances discovered in the 1960s
Rhenium diboride
[ "Physics" ]
1,001
[ "Materials", "Superhard materials", "Matter" ]
10,787,286
https://en.wikipedia.org/wiki/Journal%20of%20Mass%20Spectrometry
The Journal of Mass Spectrometry is a peer-reviewed scientific journal covering all aspects of mass spectrometry including instrument design and development, ionization processes, mechanisms and energetics of gaseous ion reactions, spectroscopy of gaseous ions, theoretical aspects, ion structure, analysis of compounds of biological interest, methodology development, applications to elemental analysis and inorganic chemistry, computer-related applications and developments, and environmental chemistry and other fields that use innovative aspects of mass spectrometry. It was established in 1968 as Organic Mass Spectrometry and obtained its current title in 1995. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.982. See also Mass Spectrometry Reviews Rapid Communications in Mass Spectrometry References External links Mass spectrometry journals Academic journals established in 1995 Wiley (publisher) academic journals Monthly journals English-language journals
Journal of Mass Spectrometry
[ "Physics", "Chemistry" ]
182
[ "Spectrum (physical sciences)", "Biochemistry journal stubs", "Biochemistry stubs", "Mass spectrometry", "Mass spectrometry journals" ]
10,787,476
https://en.wikipedia.org/wiki/NGC%207052
NGC 7052 is an elliptical galaxy in the constellation Vulpecula. The galaxy harbours a supermassive black hole with mass c. 220–630 million solar masses in its nucleus. References External links Elliptical galaxies Vulpecula 7052 11718 66537
NGC 7052
[ "Astronomy" ]
57
[ "Vulpecula", "Constellations" ]
10,787,946
https://en.wikipedia.org/wiki/Natural%20prolongation%20principle
The natural prolongation principle or principle of natural prolongation is a legal concept introduced in maritime claims submitted to the United Nations. The phrase denotes a concept of political geography and international law that a nation's maritime boundary should reflect the 'natural prolongation' of its land territory beyond the point where it reaches the coast. Oceanographic descriptions of the land mass under coastal waters became conflated and confused with criteria that are deemed relevant in border delimitation. The concept was developed in the process of settling disputes in which the borders of adjacent nations were located on a contiguous continental shelf. An unresolved issue is whether a natural prolongation defined scientifically, without reference to equitable principles, is to be construed as a "natural prolongation" for the purpose of maritime border delimitation or maritime boundary disputes. History The phrase natural prolongation was established as a concept in the North Sea Continental Shelf Cases in 1969. The relevance and importance of natural prolongation as a factor in delimitation disputes and agreements has declined during the period in which international acceptance of UNCLOS III has expanded. The Malta/Libya Case in 1985 marked the eventual demise of the natural prolongation principle as a basis for delimiting adjoining national maritime boundaries. The Bay of Bengal cases in the early 2010s (Bangladesh v Myanmar and Bangladesh v India) likewise dealt a blow to natural prolongation as the guiding principle for delimitation of the continental shelf more than 200 nautical miles beyond baselines. See also Equidistance principle References Sources Capaldo, Giuliana Ziccardi. (1995). Répertoire de la jurisprudence de la cour internationale de justice (1947-1992). Dordrecht: Martinus Nijhoff Publishers. OCLC 30701545. Dallmeyer, Dorinda G., and Louis De Vorsey. (1989). Rights to Oceanic Resources: Deciding and Drawing Maritime Boundaries. Dordrecht: Martinus Nijhoff Publishers. OCLC 18981568. Francalanci, Giampiero; Tullio Scovazzi; and Daniela Romanò. (1994). Lines in the Sea. Dordrecht: Martinus Nijhoff Publishers. OCLC 30400059. Kaye, Stuart B. (1995). Australia's Maritime Boundaries. Wollongong, New South Wales: Centre for Maritime Policy (University of Wollongong). OCLC 38390208. Borders Maritime boundaries
Natural prolongation principle
[ "Physics" ]
506
[ "Spacetime", "Borders", "Space" ]
10,788,128
https://en.wikipedia.org/wiki/Precautionary%20statement
In United States safety standards, precautionary statements are sentences providing information on potential hazards and proper procedures. They are used in contexts ranging from consumer product labels and manuals to descriptions of physical activities. Various methods are used to draw attention to them, such as setting them apart from normal text, graphic icons, and changes in the text's font and color. A text will often define the types of statements it uses and their meanings. Common precautionary statements are described below. Danger Danger statements describe situations where an immediate hazard will cause death or serious injury to workers and/or the general public if not avoided. This designation is to be used only in extreme situations. ANSI Z535.5 Definition: "Indicates a hazardous situation that, if not avoided, will result in death or serious injury. The signal word "DANGER" is to be limited to the most extreme situations. DANGER [signs] should not be used for property damage hazards unless personal injury risk appropriate to these levels is also involved." OSHA 1910.145 Definition: "Shall be used in major hazard situations where an immediate hazard presents a threat of death or serious injury to employees. Danger tags shall be used only in these situations." Warning Warning statements describe situations where a potentially hazardous condition exists that could result in the death or serious injury of workers and/or the general public if not avoided. ANSI Z535.5 Definition: "Indicates a hazardous situation that, if not avoided, could result in death or serious injury. WARNING [signs] should not be used for property damage hazards unless personal injury risk appropriate to this level is also involved." OSHA 1910.145 Definition: "May be used to represent a hazard level between "Caution" and "Danger," instead of the required "Caution" tag, provided that they have a signal word of "Warning," an appropriate major message, and otherwise meet the general tag criteria of paragraph (f)(4) of this section." Caution Caution statements describe situations where a non-immediate or potential hazard presents a lesser threat of injury that could result in minor or moderate injuries to workers and/or the general public. ANSI Z535.5 Definition: "Indicates a hazardous situation that, if not avoided, could result in minor or moderate injury." OSHA 1910.145 Definition: "Shall be used in minor hazard situations where a non-immediate or potential hazard or unsafe practice presents a lesser threat of employee injury." Notice Notice statements describe situations where a non-immediate or potential hazard presents a risk of damage to property and equipment. They may also be used to indicate important operational characteristics. There is no "Safety Alert" or attention symbol present in this situation. ANSI Z535.5 Definition: "Indicates information considered important but not hazard related. The safety alert symbol (a triangle with the exclamation point) shall not be used with this signal word. For environmental/facility signs, NOTICE is typically the choice of signal word for messages relating to property damage, security, sanitation, and housekeeping rules." OSHA 1910.145 Definition: None. See also ANSI Z535 - An American safety sign standard that makes heavy use of precautionary statements in the form of a 'header'. Safety sign Notes References Warning systems
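The four signal words encode a severity-and-immediacy decision that can be summarized programmatically. The following sketch is an informal paraphrase of the definitions above, added for illustration; it is not part of either standard, and the function and parameter names are invented:

def signal_word(causes_injury: bool, severity: str, immediate: bool) -> str:
    """Pick an ANSI-style signal word; an informal paraphrase, not the standard."""
    if not causes_injury:
        return "NOTICE"               # property damage or information only
    if severity == "serious":         # death or serious injury
        return "DANGER" if immediate else "WARNING"
    return "CAUTION"                  # minor or moderate injury

print(signal_word(True, "serious", True))    # DANGER
print(signal_word(True, "minor", False))     # CAUTION
print(signal_word(False, "", False))         # NOTICE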
Precautionary statement
[ "Technology", "Engineering" ]
683
[ "Warning systems", "Safety engineering", "Measuring instruments" ]
10,788,796
https://en.wikipedia.org/wiki/List%20of%20mass%20spectrometry%20software
Mass spectrometry software is used for data acquisition, analysis, or representation in mass spectrometry. Proteomics software In protein mass spectrometry, tandem mass spectrometry (also known as MS/MS or MS2) experiments are used for protein/peptide identification. Peptide identification algorithms fall into two broad classes: database search and de novo search. The former search takes place against a database containing all amino acid sequences assumed to be present in the analyzed sample. In contrast, the latter infers peptide sequences without knowledge of genomic data. Database search algorithms De novo sequencing algorithms De novo peptide sequencing algorithms are, in general, based on the approach proposed in Bartels et al. (1990). Homology searching algorithms MS/MS peptide quantification Other software See also Mass spectrometry data format: for a list of mass spectrometry data viewers and format converters. List of protein structure prediction software References External links List Proteomics Lists of bioinformatics software
List of mass spectrometry software
[ "Physics", "Chemistry" ]
208
[ "Mass spectrometry software", "Mass spectrometry", "Spectrum (physical sciences)", "Chemistry software" ]
10,788,804
https://en.wikipedia.org/wiki/Mascot%20%28software%29
Mascot is a software search engine that uses mass spectrometry data to identify proteins from peptide sequence databases. Mascot is widely used by research facilities around the world. Mascot uses a probabilistic scoring algorithm for protein identification that was adapted from the MOWSE algorithm. Mascot is freely available to use on the website of Matrix Science. A license is required for in-house use, where more features can be incorporated. History MOWSE was one of the first algorithms developed for protein identification using peptide mass fingerprinting. It was originally developed in 1993 as a collaboration between Darryl Pappin of the Imperial Cancer Research Fund (ICRF) and Alan Bleasby of the Science and Engineering Research Council (SERC). MOWSE stood apart from other protein identification algorithms in that it produced a probability-based score for identification. It was also the first to take into account the non-uniform distribution of peptide sizes caused by the enzymatic digestion of a protein that is needed for mass spectrometry analysis. However, MOWSE was only applicable to peptide mass fingerprint searches and was dependent on pre-compiled databases, which were inflexible with regard to post-translational modifications and enzymes other than trypsin. To overcome these limitations, to take advantage of multi-processor systems, and to add non-enzymatic search functionality, development was begun again from scratch by David Perkins at the Imperial Cancer Research Fund. The first versions were developed for Silicon Graphics Irix and Digital Unix systems. Eventually this software was named Mascot, and to reach a wider audience an external bioinformatics company, Matrix Science, was created by David Creasy and John Cottrell to develop and distribute it. Legacy software versions exist for Tru64, Irix, AIX, Solaris, Microsoft Windows NT4 and Microsoft Windows 2000. Mascot has been available as a free service on the Matrix Science website since 1999 and has been cited in scientific literature over 5,000 times. Matrix Science continues to work on improving Mascot's functionality. Applications Mascot identifies proteins by interpreting mass spectrometry data. The prevailing experimental method for protein identification is a bottom-up approach, where a protein sample is typically digested with trypsin to form smaller peptides. While most intact proteins are too large to measure directly, peptides usually fall within the limited mass range that a typical mass spectrometer can measure. Mass spectrometers measure the molecular weights of peptides in a sample. Mascot then compares these molecular weights against a database of known peptides. The program cleaves every protein in the specified search database in silico according to specific rules depending on the cleavage enzyme used for digestion and calculates the theoretical mass for each peptide. Mascot then computes a score based on the probability that the peptides from a sample match those in the selected protein database. The more peptides Mascot identifies from a particular protein, the higher the Mascot score for that protein. Features Peptide Mass Fingerprint search Identifies proteins from an uploaded peak list using a technique known as peptide mass fingerprinting. Sequence query Combines peptide mass data with amino acid sequence and composition information, usually obtained from MS/MS tandem mass spectrometry data. Based on the peptide sequence tag approach.
MS/MS Ion Search Identifies fragment ions from uninterpreted MS/MS data of one or more peptides. The software processes data from mass spectrometers of the following companies: AB Sciex Agilent Technologies Bruker Shimadzu Corp. Thermo Fisher Scientific Waters Corporation Important parameters Modifications can be specified as fixed or variable. Fixed modifications are applied universally to every amino acid residue of the specified type or to the N-terminus or C-terminus of the peptide. The mass for the modification is added to each of the respective residues. When variable modifications are specified, the program tries to match every different combination of amino acid residues with and without modification. This can increase the number of comparisons dramatically and lead to lower scores and longer search times. By setting a taxonomy, a search can be restricted to certain species or groups of species. This will reduce search time and ensure that only relevant protein hits are included. Scoring Mascot's fundamental approach to identifying peptides is to calculate the probability that an observed match between experimental data and peptide sequences found in a reference database has occurred by chance. The match with the lowest probability of occurring by chance is returned as the most significant match. The significance of the match depends on the size of the database that is being queried. Mascot employs the widely used significance level of 0.05, meaning that in a single test the probability of observing an event at random is less than or equal to 1 in 20. In this light, a score of 10⁻⁵ might seem very promising. However, if the database being searched contains 10⁶ sequences, several scores of this magnitude would be expected by chance alone, because the algorithm carried out 10⁶ individual comparisons. For a database of that size, applying a Bonferroni correction to account for multiple comparisons drops the significance threshold to 5×10⁻⁸. In addition to the calculated peptide scores, Mascot also estimates the False Discovery Rate (FDR) by searching against a decoy database. When performing a decoy search, Mascot generates a randomized sequence of the same length for every sequence in the target database. The decoy sequence is generated such that it has the same average amino acid composition as the target database. The FDR is estimated as the ratio of decoy database matches to target database matches. This relates to the standard formula FDR = FP / (FP + TP), where FP are false positives and TP are true positives. The decoy matches are certain to be spurious identifications, but one cannot directly discriminate between true and false positives among the target database matches. FDR estimation was added in response to journals' guidelines on protein identification reports, such as the ones from Molecular and Cellular Proteomics. Mascot's FDR calculation incorporates ideas from different publications. Alternatives The most common alternative database search programs are listed in the Mass spectrometry software article. The performance of a variety of mass spectrometry software, including Mascot, can be observed in the 2011 iPRG study. Genome-based peptide fingerprint scanning is another method that compares the peptide fingerprints to the entire genome instead of only annotated genes. References Bioinformatics software Mass spectrometry software Proteomic sequencing
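The two calculations described in the Scoring section, the Bonferroni-style significance threshold and the decoy-based FDR estimate, amount to one-line formulas. The following Python sketch illustrates that arithmetic; it is not Matrix Science code, and the function names are invented:

def significance_threshold(db_size: int, alpha: float = 0.05) -> float:
    """Probability threshold after correcting for db_size random comparisons."""
    return alpha / db_size

def decoy_fdr(decoy_hits: int, target_hits: int) -> float:
    """Estimated false discovery rate: decoy matches / target matches."""
    return decoy_hits / target_hits if target_hits else 0.0

print(significance_threshold(10**6))   # 5e-08, as in the example above
print(decoy_fdr(12, 1000))             # 0.012: about 1.2% of hits estimated spurious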
Mascot (software)
[ "Physics", "Chemistry", "Biology" ]
1,319
[ "Spectrum (physical sciences)", "Chemistry software", "Bioinformatics software", "Proteomic sequencing", "Bioinformatics", "Molecular biology techniques", "Mass spectrometry software", "Mass spectrometry" ]
10,788,899
https://en.wikipedia.org/wiki/Lek%20paradox
The lek paradox is a conundrum in evolutionary biology that addresses the persistence of genetic variation in male traits within lek mating systems, despite strong sexual selection through female choice. This paradox arises from the expectation that consistent female preference for particular male traits should erode genetic diversity, theoretically leading to a loss of the benefits of choice. The lek paradox challenges our understanding of how genetic variation is maintained in populations subject to intense sexual selection, particularly in species where males provide only genes to their offspring. Several hypotheses have been proposed to resolve this paradox, including the handicap principle, condition-dependent trait expression, and parasite resistance models. Background A lek is a type of mating system characterized by the aggregation of male animals for the purpose of competitive courtship displays. This behavior, known as lekking, is a strategy employed by various species to attract females for mating. Leks are most commonly observed in birds, but also occur in other vertebrates such as some bony fish, amphibians, reptiles, and mammals, as well as certain arthropods including crustaceans and insects. In a typical lek, males gather in a specific area to perform elaborate displays, while females visit to select a mate. This system is notable for its strong female choice in mate selection and the absence of male parental care. Leks can be classified as classical (where male territories are in close proximity) or exploded (with more widely separated territories). The paradox The lek paradox is the conundrum of how additive or beneficial genetic variation is maintained in lek mating species in the face of consistent sexual selection based on female preferences. While many studies have attempted to explain how the lek paradox fits into Darwinian theory, the paradox remains. Persistent female choice for particular male trait values should erode genetic diversity in male traits and thereby remove the benefits of choice, yet choice persists. This paradox can be somewhat alleviated by the occurrence of mutations introducing potential differences, as well as the possibility that traits of interest have more or less favorable recessive alleles. Causes and conditions The basis of the lek paradox is continuous genetic variation in spite of strong female preference for certain traits. There are two conditions in which the lek paradox arises. The first is that males contribute only genes and the second is that female preference does not affect fecundity. Female choice should lead to directional runaway selection, resulting in a greater prevalence for the selected traits. Stronger selection should lead to impaired survival, as it decreases genetic variance and ensures that more offspring have similar traits. However, lekking species do not exhibit runaway selection. Male Characteristics and Female Benefits In a lekking reproductive system, what male sexual characteristics can signal to females is limited, as the males provide no resources to females or parental care to their offspring. This implies that a female gains indirect benefits from her choice in the form of "good genes" for her offspring. Hypothetically, in choosing a male that excels at courtship displays, females gain genes for their offspring that increase their survival or reproductive fitness. 
Handicap Principle Amotz Zahavi declared that male sexual characteristics only convey useful information to the females if these traits confer a handicap on the male. Otherwise, males could simply cheat: if the courtship displays have a neutral effect on survival, males could all perform equally and it would signify nothing to the females. But if the courtship display is somehow deleterious to the male’s survival—such as increased predator risk or time and energy expenditure—it becomes a test by which females can assess male quality. Under the handicap principle, males who excel at the courtship displays prove that they are of better quality and genotype, as they have already withstood the costs to having these traits. Resolutions have been formed to explain why strong female mate choice does not lead to runaway selection. The handicap principle describes how costly male ornaments provide females with information about the male’s inheritable fitness. The handicap principle may be a resolution to the lek paradox, for if females select for the condition of male ornaments, then their offspring have better fitness. Genic Capture Hypothesis One potential resolution to the lek paradox is Rowe and Houle's theory of condition-dependent expression of male sexually selected traits. Similar to the handicap principle, Rowe and Houle argue that sexually selected traits depend on physical condition. Condition, in turn, summarizes a large number of genetic loci, including those involved in metabolism, muscular mass, nutrition, etc. Rowe and Houle claim that condition dependence maintains genetic variation in the face of persistent female choice, as the male trait is correlated with abundant genetic variation in condition. This is the genic capture hypothesis, which describes how a significant amount of the genome is involved in shaping the traits that are sexually selected. There are two criteria in the genic capture hypothesis: the first is that sexually selected traits are dependent upon condition and the second is that general condition is attributable to high genetic variance. Genetic Variation and Environmental Effects Genetic variation in condition-dependent traits may be further maintained through mutations and environmental effects. Genotypes may be more effective in developing condition dependent sexual characteristics in different environments, while mutations may be deleterious in one environment and advantageous in another. Thus genetic variance remains in populations through gene flow across environments or generation overlap. According to the genic capture hypothesis, female selection does not deplete the genetic variance, as sexual selection operates on condition dependence traits, thereby accumulating genetic variance within the selected for trait. Therefore, females are actually selecting for high genetic variance. Parasite Resistance Hypothesis In an alternate but non-exclusionary hypothesis, W. D. Hamilton and M. Zuk proposed that successful development of sexually selected traits signal resistance to parasites. Parasites can significantly stress their hosts so that they are unable to develop sexually selected traits as well as healthy males. According to this theory, a male who vigorously displays demonstrates that he has parasite-resistant genes to the females. In support of this theory, Hamilton and Zuk found that male sexual ornaments were significantly correlated with levels of incidence of six blood diseases in North American passerine bird species. 
The Hamilton and Zuk model addresses the lek paradox by arguing that the cycles of co-adaptation between host and parasite prevent the system from settling at a stable equilibrium point. Hosts continue to evolve resistance to parasites, and parasites continue to bypass resistance mechanisms, continuously generating genetic variation. The genic capture and parasite resistance hypotheses could logically co-occur in the same population. One resolution to the lek paradox is that female preference alone does not produce directional selection drastic enough to deplete the genetic variance in fitness. Another conclusion is that the preferred trait is not naturally selected for or against, and that the trait is maintained because it makes the male more attractive. Thus, there may be no paradox. References Ethology Reproduction in animals Evolutionary biology
Lek paradox
[ "Biology" ]
1,396
[ "Evolutionary biology", "Reproduction in animals", "Behavior", "Reproduction", "Behavioural sciences", "Ethology" ]
10,789,014
https://en.wikipedia.org/wiki/C4H8
The molecular formula C4H8 (molar mass: 56.11 g/mol) may refer to: Butenes (butylenes) 1-Butene, or 1-butylene 2-Butene Isobutylene Cyclobutane Methylcyclopropane Molecular formulas
C4H8
[ "Physics", "Chemistry" ]
77
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
10,789,250
https://en.wikipedia.org/wiki/Television%20standards%20conversion
Television standards conversion is the process of changing a television transmission or recording from one video system to another. Converting video between different numbers of lines, frame rates, and color models in video pictures is a complex technical problem. However, the international exchange of television programming makes standards conversion necessary so that video may be viewed in another nation with a differing standard. Typically, video is fed into a video standards converter, which produces a copy according to a different video standard. One of the most common conversions is between the NTSC and PAL standards. History The first known case of television systems conversion was in Europe a few years after World War II, mainly with the RTF (France) and the BBC (UK) trying to exchange their black and white 441 line and 405 line programming. The problem got worse with the introduction of color standards PAL, SECAM (both 625 lines), and the French black and white 819 line service. Until the 1980s, standards conversion was so difficult that 24 frame/s 16 mm or 35 mm film was the preferred medium of programming interchange. Overview Perhaps the most technically challenging conversion to make is from PAL or SECAM to NTSC. PAL and SECAM use 625 lines at 50 fields/s (25 frames/s), while NTSC uses 525 lines at 59.94 fields/s (60000/1001), or about 30 frames/s. The NTSC standard is temporally and spatially incompatible with both PAL and SECAM. Aside from the line count being different, converting to a format that requires 60 fields every second from a format that has only 50 fields poses difficulty. Every second, an additional 10 fields must be generated: the converter has to create new frames (from the existing input) in real time. Conversion between PAL and SECAM does not require similar timing changes, but still requires color encoding and sound conversion. Hidden signals: not always transferred TV contains many hidden signals. One signal type that is not transferred, except on some very expensive converters, is the closed captioning signal. Teletext signals do not need to be transferred, but the captioning data stream should be if it is technologically possible to do so. With HDTV broadcasting, this is less of an issue, for the most part meaning only passing the captioning datastream on to the new source material. However, DVB and ATSC have significantly different captioning datastream types. Role of information theory Theory behind systems conversion Information theory and the Nyquist–Shannon sampling theorem imply that conversion from one television standard to another will be easier if the conversion: is from a higher framerate to a lower framerate (NTSC to PAL or SECAM, for example); is from a higher resolution to a lower resolution (HDTV to NTSC); is from one progressive-scan source to another progressive-scan source (interlaced PAL and NTSC are temporally and spatially incompatible with each other); has relatively little interframe motion, which reduces temporal or spatial judder; is from a source whose signal-to-noise ratio is not detrimentally low; and is from a source that has no continuous (or periodic) signal defect that would inhibit translation. Sampling systems and ratios The subsampling in a video system is usually expressed as a three-part ratio. The three terms of the ratio are the number of brightness ("luminance", "luma", "Y") samples and the numbers of samples of the two color ("chroma") components (U/Cb then V/Cr) for each complete sample area. 
For quality comparison, only the ratio between those values is important, so 4:4:4 could easily be called 1:1:1; but traditionally the value for brightness is always 4, with the rest of the values scaled accordingly. The sampling principles above apply to both digital and analog television. Telecine judder The "3:2 pulldown" conversion process for 24 frame/s film to television (telecine) creates a slight error in the video signal compared to the original film frames. This is one reason why motion in 24-fps films viewed on typical NTSC home equipment may not appear as smooth as when viewed in a cinema. The phenomenon is particularly apparent during slow, steady camera movements, which appear slightly jerky when telecined. This process is commonly called telecine judder. PAL material to which 2:2:2:2:2:2:2:2:2:2:2:3 pulldown has been applied suffers from a similar lack of smoothness, though this effect is not usually called telecine judder. Every 12th film frame is displayed for the duration of 3 PAL fields (60 milliseconds), whereas each of the 11 other frames is displayed for the duration of 2 PAL fields (40 milliseconds). This causes a slight "hiccup" in the video about twice a second. Television systems converters must avoid creating telecine judder effects during the conversion process. Avoiding this judder is economically important, because much NTSC (60 Hz, technically 29.97 frame/s) material that originates from film will have this problem when converted to PAL or SECAM (both 50 Hz, 25 frame/s). Historical standards conversion techniques Orthicon to orthicon This method was used by Ireland to convert 625 line service to 405 line service. It is perhaps the most basic television standard conversion technique. RTÉ used this method during the latter years of its use of the 405 line system. A standards converter was used to provide the 405 line service, but according to more than one former RTÉ engineering source the converter blew up and afterwards the 405 line service was provided by a 405 line camera pointing at a monitor. This is not the best conversion technique but it can work if one is going from a higher resolution to a lower one – at the same frame rate. Slow phosphors are required on both orthicons. The first video standards converters were analog. That is, a special professional video camera that used a video camera tube would be pointed at a cathode ray tube video monitor. Both the camera and the monitor could be switched to either NTSC or PAL, to convert both ways. Robert Bosch GmbH's Fernseh division made a large three rack analog video standards converter. These were the high-end converters of the 1960s and 1970s. Image Transform in Universal City, California, used the Fernseh converter and in the 1980s made their own custom digital converter. This was also a larger three-rack device. As digital memory size became larger in smaller packages, converters became the size of a microwave oven. Today one can buy a very small consumer converter for home use. SSTV to PAL and NTSC The Apollo Moon missions (late 1960s, early 1970s) used slow-scan television (SSTV) as opposed to normal bandwidth television; this was mostly done to save battery power (and transmission bandwidth, since the SSTV video from the Apollo missions was multiplexed with all other voice and telemetry communications from the spacecraft). The camera used only 7 watts of power. 
SSTV was used to transmit images from inside Apollo 7, Apollo 8, and Apollo 9, as well as the Apollo 11 Lunar Module television from the Moon; see Apollo TV camera. The SSTV system used in NASA's early Apollo missions transferred ten frames per second with a resolution of 320 frame lines using less bandwidth than a normal TV transmission. The early SSTV systems used by NASA differ significantly from the SSTV systems currently in use by amateur radio enthusiasts today. Standards conversion was necessary so that the missions could be seen by a worldwide audience in both PAL/SECAM (625 lines, 50 Hz) and NTSC (525 lines, 60 Hz) resolutions. Later Apollo missions featured color field sequential cameras that output 60-frame/s video. Each frame corresponded to one of the RGB primary colors. This method is compatible with black and white NTSC, but incompatible with color NTSC. In fact, even NTSC monochrome TV compatibility is marginal. A monochrome set could have reproduced the pictures, but the pictures would have flickered terribly. The camera color video ran at only 10  frame/s. Also, Doppler shift in the lunar signal would have caused pictures to tear and flip. For these reasons, the Apollo Moon pictures required special conversion techniques. The conversion steps were completely electromechanical, and they took place in nearly real time. First, the downlink station corrected the pictures for Doppler shift. Next, in an analog disc recorder, the downlink station recorded and replayed every video field six times. On the six-track recorder, recording and playback took place simultaneously. After the recorder, analog video processors added the missing components of the NTSC color signal: These components included: The 3.58-MHz color burst, The high-resolution monochrome signal, The sound, The I and Q color signals. The conversion delay lasted only some 10 seconds. Then color Moon pictures left the downlink station for world distribution. Standards conversion methods in common use Nyquist subsampling This conversion technique may become popular with manufacturers of HDTV --> NTSC and HDTV --> PAL converter boxes for the ongoing global conversion to HDTV. Multiple Nyquist subsampling was used by the defunct MUSE HDTV system that was used in Japan. MUSE chipsets that can be used for systems conversion do exist, or can be revised for the needs of HDTV --> Analog TV converter boxes. How it works In a typical image transmission setup, all stationary images are transmitted at full resolution. Moving pictures possess a lower resolution visually, based on complexity of interframe image content. When one uses Nyquist subsampling as a standards conversion technique, the horizontal and vertical resolution of the material are reduced – this is an excellent method for converting HDTV to standard definition television, but it works very poorly in reverse. As the horizontal and vertical content change from frame to frame, moving images will be blurred (in a manner similar to using 16 mm movie film for HDTV projection). In fact, whole-camera pans would result in a loss of 50% of the horizontal resolution. The Nyquist subsampling method of systems conversion only works for HDTV to Standard Definition Television, so as a standards conversion technology it has a very limited use. Phase Correlation is usually preferred for HDTV to standard definition conversion. Framerate conversion There is a large difference in frame rate between film (24.0 frames per second) and NTSC (approximately 29.97 frames per second). 
Unlike conversion to the two other most common video formats, PAL and SECAM, this difference cannot be overcome by a simple speed-up, because the required 25% speed-up would be clearly noticeable. To convert 24 frame/s film to 29.97 frame/s (presented as 59.94 interlaced fields per second) NTSC, a process called "3:2 pulldown" is used: the film is first slowed imperceptibly to 23.976 frame/s (the audio is slowed from the 24 frame/s source to match), and every other film frame is then spread across an additional interlaced field to reach the 59.94 field rate. This produces irregularities in the sequence of images which some people can perceive as a stutter during slow and steady pans of the camera in the source material. See telecine for more details. For viewing native PAL or SECAM material (such as European television series and some European movies) on NTSC equipment, a standards conversion has to take place. There are basically two ways to accomplish this: The framerate can be slowed from 25 to 23.976 frames per second (a slowdown of about 4%) to subsequently apply 3:2 pulldown. Interpolation of the contents of adjacent frames in order to produce new intermediate frames; this introduces artifacts, and even the most modestly trained of eyes can quickly spot video that has been converted between formats. Linear interpolation When converting PAL (625 lines @ 25 frame/s) to NTSC (525 lines @ 30 frame/s), the converter must eliminate 100 lines per frame. The converter must also create five frames per second. To reduce the 625-line signal to 525, less expensive converters drop 100 lines. These converters maintain picture fidelity by evenly spacing removed lines. (For example, the system might discard every sixth line from each PAL field. After the 50th discard, this process would stop. By then the system would have passed the viewable area of the field. In the following field, the process would repeat, completing one frame.) To create the five additional frames, the converter repeats every fifth frame. If there is little inter-frame motion, this conversion algorithm is fast, inexpensive and effective. Many inexpensive consumer television system converters have employed this technique. Yet in practice, most video features significant inter-frame motion. To reduce conversion artefacts, more modern or expensive equipment may use sophisticated techniques. Doubler The most basic and literal way to double lines is to repeat each scanline, though the results of this are generally very crude. Linear interpolators use digital interpolation to recreate the missing lines in an interlaced signal, and the resulting quality depends on the technique used. Generally the "bob" version of a linear deinterlacer will only interpolate within a single field, rather than merging information from adjacent fields, to preserve the smoothness of motion, resulting in a frame rate equal to the field rate (i.e. a 60i signal would be converted to 60p). A motion-adaptive doubler can apply the former technique (within-field interpolation) in moving areas and the latter (merging adjacent fields) in static areas, which improves overall sharpness. Interfield interpolation Interfield interpolation is a technique in which new frames are created by blending adjacent frames, rather than repeating a single frame. This is more complex and computationally expensive than linear interpolation, because it requires the interpolator to have knowledge of the preceding and the following frames to produce an intermediate blended frame. Deinterlacing may also be required in order to produce images which can be interpolated smoothly. 
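The frame-blending idea just described can be illustrated at the level of whole frames. The following is a minimal, hypothetical numpy sketch, not how any particular converter is implemented: it assumes progressive frames rather than interlaced fields, skips deinterlacing entirely, and simply blends the two nearest source frames to synthesise each output frame for a 25 to roughly 29.97 frame/s conversion.

```python
import numpy as np

def convert_frame_rate(frames, src_fps=25.0, dst_fps=30000 / 1001):
    """Blend the two nearest source frames to make each output frame.

    frames: array-like of shape (N, H, W) or (N, H, W, C).
    Returns roughly N * dst_fps / src_fps blended output frames.
    """
    frames = np.asarray(frames, dtype=float)
    n_out = int(len(frames) * dst_fps / src_fps)
    out = []
    for i in range(n_out):
        t = i * src_fps / dst_fps            # output time, in source-frame units
        lo = int(t)                          # earlier bracketing source frame
        hi = min(lo + 1, len(frames) - 1)    # later bracketing source frame
        w = t - lo                           # blend weight towards the later frame
        out.append((1.0 - w) * frames[lo] + w * frames[hi])
    return np.stack(out)

# Example: 50 synthetic grey frames at 25 fps become ~59 frames at ~29.97 fps.
synthetic = np.linspace(0, 255, 50)[:, None, None] * np.ones((1, 4, 4))
print(convert_frame_rate(synthetic).shape)
```

The judder-versus-smear trade-off discussed above appears directly in the blend weight: the heavier the blending between bracketing frames, the smoother the motion but the more the picture smears.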
Interpolation can also be used to reduce the number of scanlines in the image by averaging the colour and intensity of pixels on neighbouring lines, a technique similar to bilinear filtering, but applied to only one axis. There are simple 2-line and 4-line converters. The 2-line converter creates a new line by comparing two adjacent lines, whereas a 4-line model compares four lines to average the fifth. Interfield interpolation reduces judder, but at the expense of picture smearing. The greater the blending applied to smooth out the judder, the greater the smear caused by blending. Adaptive motion interpolation Some more advanced techniques measure the nature and degree of inter-frame motion in the source, and use adaptive algorithms to blend the image based on the results. Some such techniques are known as motion compensation algorithms, and are computationally much more expensive than the simpler techniques, thus requiring more powerful hardware to be effective in real-time conversion. Adaptive motion algorithms capitalize on the way the human eye and brain process moving images – in particular, detail is perceived less clearly on moving objects. Adaptive interpolation requires the converter to analyze multiple successive fields and to detect the amount and type of motion in different areas of the picture. Where little motion is detected, the converter can use linear interpolation. When greater motion is detected, the converter can switch to an inter-field technique which sacrifices detail for smoother motion. Adaptive motion interpolation has many variations and is commonly found in midrange converters. The quality and cost are dependent upon the accuracy in analyzing the type and amount of motion, and the selection of the most appropriate algorithm for processing the type of motion. Adaptive motion interpolation + block matching Block matching involves dividing the image into mosaic blocks – say, for the sake of explanation, 8x8 pixels. The blocks are then stored in memory. The next field read out is also divided up into the same number and size of mosaic blocks. The converter's computer then goes to work and starts matching up blocks. The blocks that stayed in the same relative position (read: there was no motion in this part of the image) receive relatively little processing. For each block that changed, the converter searches in every direction through its memory, looking for a match to find out where the "block" went (if there's motion, the block obviously had to have gone somewhere). The search starts at the immediate surrounding blocks (assuming little motion). If a match isn't found, then it searches further and further out until it finds a match. When the matching block is found, the converter then knows how far the block moved and in which direction. This data is then stored as a motion vector for this block. Since interframe motion is often predictable owing to Newton's laws of motion in the real world, the motion vector can then be used to calculate where the block will probably be in the next field. The Newtonian method saves a lot of search and processing time. When panning from left to right is taking place (over say 10 fields) it is safe to assume that the 11th field will be similar or very close. Block matching can be seen as the "cutting and pasting" of image blocks. The technique is highly effective but it does require a tremendous amount of computing power. Consider a block of only 8x8 pixels. 
For each block, the computer has 64 possible directions and 64 pixels to be matched to the block in the next field. Also consider that the greater the motion, the further out the search must be conducted. Just to find an adjacent block in the next field would entail making a search of 9 blocks. Two blocks out would require a search and match of 25 blocks; three blocks out and it grows to 49, and so on. The type of motion can exponentially compound the compute power required. Consider a rotating object, where a simple straight-line motion vector is of little help in predicting where the next block should match. It can quickly be seen that the more inter-frame motion introduced, the greater the processing power required. This is the general concept of block matching (a toy sketch of such a search appears below). Block match converters can vary widely in price and performance depending on the attention to detail and complexity. One odd artifact of block matching is due to the size of the block itself. If a moving object is smaller than the mosaic block, consider that it is the entire block that gets moved. In most cases this is not an issue, but consider a thrown baseball. The ball itself has a high motion vector, but the background that makes up the rest of the block might not have any motion. The background gets transported in the moved block as well, based on the motion vector of the baseball. What you might see is the ball with a small amount of outfield tagging along. As it is in motion, the block may be "soft" depending upon what additional techniques were used, and barely noticeable unless you are looking for it. Block matching requires a staggering amount of processing horsepower, but today's microprocessors are making it a viable solution. Phase correlation Phase correlation is perhaps the most computationally complex of the general algorithms. Phase correlation's success lies in the fact that it is effective at coping with rapid motion and random motion. Phase correlation does not easily get confused by rotating or twirling objects that confuse most other kinds of systems converters. Phase correlation is elegant as well as technically and conceptually complex. Its operation relies on applying a Fourier transform to each field of video. A fast Fourier transform (FFT) is an algorithm which deals with the transformation of discrete values (in this case image pixels). When applied to a sample of finite values, a fast Fourier transform expresses any changes (motion) in terms of frequency components. Since the result of the FFT represents only the inter-frame changes in terms of frequency distribution, there is far less data that has to be processed in order to calculate the motion vectors. DTV to analog converters for consumers A digital television adapter (DTA), commonly known as a converter box or decoder box, is a device that receives, by means of an antenna, a digital television (DTV) transmission, and converts that signal into an analog signal that can be received and displayed on an analog television. These boxes cheaply convert HDTV (16:9 at 720 or 1080 lines) to NTSC or PAL at 4:3. Very little is known about the specific conversion technologies used by these converter boxes in the PAL and NTSC regions. Downconversion is usually required, hence very little image quality loss is perceived by viewers at the recommended viewing distance with most television sets. Offline conversion A lot of cross-format television conversion is done offline. 
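To make the block-matching description above concrete, here is a toy, exhaustive ("full search") sketch in Python. It is an illustration only: the block size, search range, and sum-of-absolute-differences (SAD) cost are arbitrary choices for the example, and real converters use the hierarchical, predictive searches discussed above rather than this brute-force loop.

```python
import numpy as np

def block_motion_vector(prev, curr, top, left, block=8, search=8):
    """Full-search block matching for one block.

    Finds the displacement (dy, dx), within +/- `search` pixels, from the
    block at (top, left) in `curr` back to its best SAD match in `prev`.
    """
    h, w = prev.shape
    ref = curr[top:top + block, left:left + block].astype(float)
    best_cost, best_vec = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                                  # candidate falls off the field
            cand = prev[y:y + block, x:x + block].astype(float)
            sad = np.abs(ref - cand).sum()                # sum of absolute differences
            if sad < best_cost:
                best_cost, best_vec = sad, (dy, dx)
    return best_vec, best_cost

# Example: a bright 8x8 square moves 3 px right and 1 px down between fields.
prev = np.zeros((64, 64)); prev[20:28, 20:28] = 255
curr = np.zeros((64, 64)); curr[21:29, 23:31] = 255
print(block_motion_vector(prev, curr, top=21, left=23))   # -> ((-1, -3), 0.0)
```

The cost growth described in the article is visible in the nested loop: widening the search window from 1 to 2 to 3 blocks multiplies the number of candidate positions roughly as 9, 25, 49.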
There are several DVD packages that offer offline PAL ↔ NTSC conversion – including cross conversion (technically MPEG ↔ DTV) from the myriad of MPEG-based web video formats. Cross conversion can use any method commonly in use for TV system format conversion, but typically (in order to reduce complexity and memory use) it is left up to the codec to do the conversion. Most modern DVDs are converted from 525 <--> 625 lines in this way, as it is very economical for most programming that originates at EDTV resolution. See also Three-two pull down Reverse Standards Conversion IEEE papers on systems conversion AES/EBU papers on systems conversions ATSC tuner Digital television Digital television adapter DTV transition in the United States Set-top box References External links http://www.hawestv.com/moon_cam/moonctel.htm Film and video technology Television transmission standards Video hardware
Television standards conversion
[ "Engineering" ]
4,495
[ "Electronic engineering", "Video hardware" ]
10,789,539
https://en.wikipedia.org/wiki/Label%20Information%20Base
Label Information Base (LIB) is the software table maintained by IP/MPLS-capable routers to store the details of each port and the corresponding MPLS label to be popped from or pushed onto incoming and outgoing MPLS packets. Entries are populated by label-distribution protocols. LDP (Label Distribution Protocol) automatically generates and exchanges labels between routers: each router locally generates labels for its prefixes and then advertises the label values to its neighbors. LDP is a standard based on Cisco's proprietary TDP (Tag Distribution Protocol), which it has largely replaced. Unlike most routing protocols, LDP first establishes a neighbor adjacency before it exchanges label information. The LIB functions in the control plane of the router's MPLS layer. It is used by the label distribution protocol for mapping next-hop labels. References MPLS networking
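As a rough illustration of the data structure described above, the sketch below models a LIB as a per-prefix record of the locally assigned label plus the labels learned from each LDP peer. The class and field names are invented for the example; a real router's LIB also feeds a separate forwarding table (the LFIB), and that selection step is omitted here.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class LibEntry:
    prefix: str                                   # e.g. "10.1.1.0/24"
    local_label: Optional[int] = None             # label this router generated and advertises
    remote_labels: Dict[str, int] = field(default_factory=dict)  # LDP peer -> learned label

class LabelInformationBase:
    """Toy control-plane LIB: keeps every label binding learned for every prefix."""

    def __init__(self):
        self.entries: Dict[str, LibEntry] = {}

    def _entry(self, prefix: str) -> LibEntry:
        return self.entries.setdefault(prefix, LibEntry(prefix))

    def add_local_binding(self, prefix: str, label: int) -> None:
        self._entry(prefix).local_label = label           # advertised to all LDP neighbours

    def add_remote_binding(self, prefix: str, peer: str, label: int) -> None:
        self._entry(prefix).remote_labels[peer] = label   # learned from one LDP neighbour

# Example: a label generated locally for 10.1.1.0/24 plus bindings from two peers.
lib = LabelInformationBase()
lib.add_local_binding("10.1.1.0/24", 17)
lib.add_remote_binding("10.1.1.0/24", peer="192.0.2.1", label=24)
lib.add_remote_binding("10.1.1.0/24", peer="192.0.2.2", label=31)
print(lib.entries["10.1.1.0/24"])
```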
Label Information Base
[ "Technology" ]
189
[ "Computing stubs", "Computer network stubs" ]
10,789,602
https://en.wikipedia.org/wiki/Nakam
Nakam ('revenge') was a paramilitary organisation of about fifty Holocaust survivors who, after 1945, sought revenge for the murder of six million Jews during the Holocaust. Led by Abba Kovner, the group sought to kill six million Germans in a form of indiscriminate revenge, "a nation for a nation". Kovner went to Mandatory Palestine in order to secure large quantities of poison for poisoning water mains to kill large numbers of Germans. His followers infiltrated the water system of Nuremberg. However, Kovner was arrested upon arrival in the British zone of occupied Germany and had to throw the poison overboard. After this failure, Nakam turned their attention to "Plan B", targeting German prisoners of war held by the United States military in the American zone. They obtained arsenic locally and infiltrated the bakeries that supplied these prison camps. The conspirators poisoned 3,000 loaves of bread at the Consumer Cooperative Bakery in Nuremberg, which sickened more than 2,000 German prisoners of war at Langwasser internment camp. However, no known deaths can be attributed to the group. Although Nakam is considered by some to have been a terrorist organisation, German public prosecutors dismissed a case against two of its members in 2000 due to the "unusual circumstances". Background During the Holocaust, Nazi Germany, its allies and collaborators murdered about six million Jews, by a variety of methods, including mass shootings and gassing. Many survivors, having lost their entire families and communities, had difficulty imagining a return to a normal life. The desire for revenge, either against Nazi war criminals or the entire German people, was widespread. From late 1942, as news of the Holocaust arrived in Mandatory Palestine, Jewish newspapers were full of calls for retribution. One of the leaders of the Warsaw Ghetto uprising, Yitzhak Zuckerman, later said he "didn't know a Jew who wasn't obsessed with revenge". However, very few survivors acted on these fantasies, instead focusing on rebuilding their lives and communities and commemorating those who had perished. In all, Israeli historian Dina Porat estimates that about 200 or 250 Holocaust survivors attempted to exact violent revenge, of which Nakam was a significant portion. Including assassinations carried out by Mossad, these operations claimed the lives of as many as 1,000 to 1,500 people. Formation In 1945, Abba Kovner, after visiting the site of the Ponary massacre and the extermination camp at Majdanek, and meeting survivors of Auschwitz in Romania, decided to take revenge. He recruited about 50 Holocaust survivors, mostly former Jewish partisans, but including a few who had escaped to the Soviet Union. Recruited for their ability to live undercover and not break down, most were in their early twenties and hailed from Vilnius, Rovno, Częstochowa, or Kraków. Generally known as Nakam ("revenge"), the organisation also used a Hebrew name meaning "judgement", which doubled as an acronym of a phrase meaning "the blood of Israel avenges". The group's members believed that the defeat of Nazi Germany did not mean that Jews were safe from another Holocaust-level genocide. Kovner believed that a proportional revenge, killing six million Germans, was the only way to teach enemies of the Jews that they could not act with impunity: "The act should be shocking. The Germans should know that after Auschwitz there can be no return to normality." According to survivors, Kovner's "hypnotic" eloquence put words to the emotions that they were feeling. 
Members of the group believed that the laws of the time were unable to adequately punish such an extreme event as the Holocaust and that the complete moral bankruptcy of the world could be cured only by catastrophic retributive violence. Porat hypothesises that Nakam was "a necessary stage" before the embittered survivors would be prepared "to return to a life of society and laws". The group's leaders formed two plans: Plan A, to kill a large number of Germans, and Plan B, to poison several thousand SS prisoners held in US prisoner of war camps. From Romania Kovner's group traveled to Italy, where Kovner received a warm reception from Jewish Brigade soldiers who wanted him to help organise Aliyah Bet (illegal immigration to Mandate Palestine). Kovner refused because he was already set on revenge. Nakam developed a network of underground cells and immediately set out raising money, infiltrating German infrastructure, and securing poison. The group received a large supply of German-forged British currency from a Hashomer Hatzair emissary, forced speculators to contribute, and also obtained some money from sympathisers in the Jewish Brigade. Plan A (planned mass poisoning in Nuremberg) Joseph Harmatz, posing as a Polish displaced person (DP) named "Maim Mendele", attempted to infiltrate the municipal water supply in Nuremberg; Nakam targeted the city because it had been the stronghold of the Nazi Party. Harmatz had difficulty finding rooms for the conspirators to rent due to the housing shortage caused by the destruction of most of the city by Allied bombing. Through the use of bribes, he managed to place Willek Schwerzreich (Wilek Shinar), an engineer from Kraków who spoke fluent German, in a position with the municipal water company. Schwarzreich obtained the plan of the water system and control of the main water valve, and plotted where the poison should be introduced so as to kill the largest possible number of Germans. In Paris, Reichman was in charge of a Nakam cell including Vitka Kempner, Kovner's future wife and former comrade in the Vilna Ghetto underground. Reichman reportedly spoke to David Ben-Gurion during the latter's trip to a DP camp in Germany, but Ben-Gurion preferred to work towards Israeli independence rather than seek revenge for the Holocaust. It fell to Kovner to obtain the poison from leaders in the Yishuv, the Jewish leadership in Mandatory Palestine. In July 1945, Kovner left the Jewish Brigade for Milan, disguising himself as a Jewish Brigade soldier on leave, and boarded a ship for Palestine the following month. Reichman became leader in Europe in his absence. Upon reaching Palestine, Kovner was held for three days in an apartment by the Mossad LeAliyah Bet and was personally interrogated by Mossad chief Shaul Meirov. Kovner negotiated with Haganah chiefs Moshe Sneh and Israel Galili in hopes of convincing them to give him poison for a smaller revenge operation in return for not linking the murder to the Yishuv. In September, Kovner informed Nakam in Europe that he had not had any success in locating poison, and therefore they should recruit Yitzhak Ratner, a chemist and former Vilna Ghetto insurgent, and focus on Plan B. Kovner was eventually introduced to Ephraim and Aharon Katzir, chemists at the Hebrew University of Jerusalem, via one of their students who was a member of the Haganah. The Katzir brothers were sympathetic to Kovner's revenge plot and convinced the head of chemical storage at the Hebrew University to give him poison. 
Decades after the fact, Kovner claimed that he had pitched Plan B to Chaim Weizmann, then president of the World Zionist Organisation, who had directed him to the Katzir brothers. However, according to his biographer, if Kovner met Weizmann at all it was in February or March 1946, as Weizmann was out of the country before that. After several delays, Kovner travelled to Alexandria, Egypt, in December 1945 carrying false papers that identified him as a Jewish Brigade soldier returning from leave, and a duffel bag with gold hidden in toothpaste tubes and cans full of poison. Shortly after Kovner boarded a ship headed to Toulon, France, his name, along with three others, was called over the public address system. Kovner told a friend, Yitzik Rosenkranz, to convey the duffel bag to Kempner in Paris, and then threw half the poison overboard. After this, he turned himself in and was arrested by the British police. Nakam members later claimed that Kovner had been betrayed by the Haganah, but Porat writes that it is more likely that he was arrested as a suspected organiser of Aliyah Bet. Kovner, who spoke no English and had not attended the Jewish Brigade training, was not questioned about Nakam; after two months in jails in Egypt and Palestine, he was released. His involvement in Nakam ended at that time. Plan B (mass poisoning of SS prisoners) Because Kovner had not managed to secure the quantity of poison required, the Nuremberg cell decided definitively, during the first months of 1946, to switch to poisoning SS prisoners. Most of the Nakam action groups disbanded as ordered and their members dispersed into displaced persons camps, promised by the leaders that in future they would be reactivated to implement Plan A. The cells in Nuremberg and Dachau remained active because of the large US prisoner of war camps nearby. Yitzhak Ratner was recruited into the group to obtain poison locally. In October 1945, he set up a laboratory in the Nakam headquarters in Paris, where he tested various formulations in order to find a tasteless, odorless poison that would have delayed effects. Ratner eventually formulated a mixture of arsenic, glue, and other additives which could be painted onto loaves of bread; tests on cats proved the lethality of the mixture. He obtained arsenic from friends who worked in the tanning industry; it was smuggled into Germany. Nakam focused on Langwasser internment camp near Nuremberg (formerly Stalag XIII-D), where 12,000 to 15,000 prisoners, mainly former SS officers or prominent Nazis, were imprisoned by the United States. Initially, two Nakam members were hired by the camp, one as a driver, another as a storehouse worker. The bread for Langwasser came from a single bakery in Nuremberg, the Consumer Cooperative Bakery. Leipke Distel, a survivor of several Nazi concentration camps, posed as a Polish displaced person awaiting a visa to work at an uncle's bakery in Canada. He asked the manager if he could work for free and eventually secured access to the bakery storeroom after bribing him with cigarettes, alcohol, and chocolate. The Nakam operatives met each night in a rented room in Fürth to discuss their findings, especially how to confine their attack to the German prisoners and avoid harming the American guards. When Harmatz placed a few of the workers in clerical positions in the camp, they discovered that on Sundays, the black bread would be eaten only by the German prisoners because the American guards were specially issued white bread. 
Therefore, they decided to execute the attack on a Saturday night. Similar preparations were made with regard to a prison camp near Dachau and the bakery supplying it, an effort led by Warsaw Ghetto uprising veteran Simcha Rotem. After becoming friends with Poles who worked in the bakery, Rotem got the manager drunk, made copies of his keys, and then returned them before he sobered up. A few days before the planned attack, Reichman received a tip-off from a Jewish intelligence officer in the United States Army that two of the operatives were wanted by the police. As ordered, the Dachau Nakam operatives aborted on 11 April 1946. Reichman feared that the failure of one attack would cause the United States to increase its security measures at prison camps, preventing a second attack. By this time, six Nakam members worked at the bakery in Nuremberg. Subverting tight security aimed at preventing the theft of food, they smuggled the arsenic in over several days, hiding it under raincoats, and stashed it beneath the floorboards. Because experiments had shown that the arsenic mixture did not spread evenly, the operatives decided to paint it onto the bottom of each loaf. On Saturday 13 April, the bakery workers were on strike, delaying the Nakam operatives and preventing three of them from entering the bakery. As a result, Distel and his two accomplices had only enough time to poison some 3,000 loaves of bread instead of 14,000 as originally planned. After painting the loaves, they fled to Czechoslovakia, helped by an Auschwitz survivor named Yehuda Maimon, continuing on through Italy to southern France. On 23 April 1946, The New York Times reported that 2,283 German prisoners of war had fallen ill from poisoning, with 207 hospitalized and seriously ill. However, the operation ultimately caused no known deaths. According to documents obtained by a Freedom of Information request to the National Archives and Records Administration, the amount of arsenic found in the bakery was enough to kill approximately 60,000 persons. It is unknown why the poisoners failed, but it is suspected to be either that they spread the poison too thinly, or else that the prisoners realised that the bread had been poisoned and did not eat very much. Aftermath and legacy About 30 former Nakam operatives boarded the ship Biriya on 23 June 1946 and arrived by the end of July following brief detention by the British authorities. They received a warm welcome at Kovner's kibbutz, Ein HaHoresh, from leading members of the Haganah and the Israeli Labor Party, and were invited to travel through the country. Although Kovner, and the majority of the operatives, considered that the time for revenge had passed, a small group led by Bolek Ben-Ya'akov returned to Europe to continue the mission. Nine other Nakam operatives broke away in the spring of 1947 and returned to Europe the following year, helped by Labor Party politician Abba Hushi. The breakaway groups faced mounting challenges, both logistical and financial, and the foundation of the Federal Republic of Germany in 1949 made illegal operations even more difficult. Many of the members turned to a life of crime to support themselves, and then tried to escape from German jails with the help of former French Resistance members. Most returned to Israel between 1950 and 1952. Ben-Ya'akov said in an interview that he "could not have looked at himself in the mirror" if he had not tried to get revenge, and that he still deeply regretted that it did not succeed. 
After coming to Israel, former Nakam members refused to speak about their experiences for several decades, beginning to discuss the issue only in the 1980s. Porat writes that Kovner "committed political suicide" by participating in Nakam; she describes its failure as a "miracle." Members of the group showed no remorse, said that the Germans "deserved it," and wanted recognition, rather than forgiveness, for their actions. In 1999, Harmatz and Distel appeared in a documentary and discussed their role in Nakam. Distel maintained that Nakam's actions were moral and that the Jews "had a right to revenge against the Germans." German prosecutors opened an investigation against them for attempted murder, but halted the preliminary investigation in 2000 because of the "unusual circumstances." Four members of the group are reported to still be alive. Historiography and popular culture An early journalistic account of Nakam's mission is in Michael Bar-Zohar's 1969 book The Avengers. The story was given a fictionalised treatment in Forged in Fury by Michael Elkins in 1971. Jonathan Freedland's novel The Final Reckoning is based on the story. The story of Nakam has also entered German popular culture. In 2009, Daniel Kahn & the Painted Bird, a Germany-based klezmer band, recorded a song called "Six Million Germans (Nakam)". Based on tapes Kovner made on his deathbed describing his activities in Nakam, a television documentary was produced by Channel 4 for its Secret History series titled Holocaust – The Revenge Plot, which was first broadcast on Holocaust Memorial Day, 27 January 2018. According to Israeli counterterrorism experts Ehud Sprinzak and Idith Zertal, Nakam's worldview was similar to that of messianic groups or cults because of its belief that the world was so evil as to deserve large-scale catastrophe. The Nakam operatives came from "heavily brutalized communities" which, according to Sprinzak and Zertal, sometimes consider catastrophic violence. Dina Porat is the first academic historian to systematically study the group, meeting with many of the survivors and gaining access to their private documents. She hypothesises that the failure of the attack may have been deliberate, as Kovner and other leaders began to realise that it could have greatly harmed the Jewish people. She struggled to reconcile the personalities of Nakam's members with the actions that they tried to carry out. When asked how he could plan an attack in which many innocent people would have been killed, one survivor explained that "[i]f you had been there with me, at the end of the war, you wouldn't talk that way". Porat's 2019 book on Nakam is titled Vengeance and Retribution Are Mine, a phrase from Deuteronomy which expresses her belief that the Jewish people are best served by leaving vengeance in the hands of the God of Israel. In 2021, the film Plan A, directed and produced by Doron Paz and Yoav Paz, adapted the plot for cinema. See also Tilhas Tizig Gesheften Gmul References Bibliography Further reading Anti-German sentiment Chemical warfare Collective punishment History of Nuremberg Jewish ethics Jewish extremist terrorism Nazi hunters Revenge Terrorism in Germany
Nakam
[ "Chemistry" ]
3,630
[ "nan" ]
10,789,680
https://en.wikipedia.org/wiki/Residue%20field
In mathematics, the residue field is a basic construction in commutative algebra. If R is a commutative ring and m is a maximal ideal, then the residue field is the quotient ring k = R/m, which is a field. Frequently, R is a local ring and m is then its unique maximal ideal. In abstract algebra, the splitting field of a polynomial is constructed using residue fields. Residue fields are also applied in algebraic geometry, where to every point x of a scheme X one associates its residue field k(x). One can say a little loosely that the residue field of a point of an abstract algebraic variety is the natural domain for the coordinates of the point. Definition Suppose that R is a commutative local ring, with maximal ideal m. Then the residue field is the quotient ring R/m. Now suppose that X is a scheme and x is a point of X. By the definition of scheme, we may find an affine neighbourhood U = Spec(A) of x, with A some commutative ring. Considered in the neighbourhood U, the point x corresponds to a prime ideal p of A (see Zariski topology). The local ring of X at x is by definition the localization A_p of A at p, and has maximal ideal m = p·A_p. Applying the construction above, we obtain the residue field of the point x: k(x) = A_p / p·A_p. One can prove that this definition does not depend on the choice of the affine neighbourhood U. A point x is called K-rational for a certain field K, if k(x) = K. Example Consider the affine line A^1(k) = Spec(k[t]) over a field k. If k is algebraically closed, there are exactly two types of prime ideals, namely (t − a) for a in k, and (0), the zero-ideal. The residue fields are k and k(t), the function field over k in one variable. If k is not algebraically closed, then more types arise, for example if k is the field of real numbers, then the prime ideal (t^2 + 1) has residue field isomorphic to the complex numbers. Properties For a scheme locally of finite type over a field k, a point x is closed if and only if k(x) is a finite extension of the base field k. This is a geometric formulation of Hilbert's Nullstellensatz. In the above example, the points of the first kind are closed, having residue field k, whereas the second point is the generic point, having transcendence degree 1 over k. A morphism Spec(K) → X, K some field, is equivalent to giving a point x of X and an extension K/k(x). The dimension of a scheme of finite type over a field is equal to the transcendence degree of the residue field of the generic point. See also Arithmetic zeta function References Further reading
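As an elementary numerical illustration of the statement that the quotient of a ring by a maximal ideal is a field, the short Python check below looks at Z/nZ: for a prime p the ideal (p) is maximal and every nonzero class has an inverse, while for a composite n some classes do not. This is only a sanity check on the simplest possible example, not part of the article's formal development.

```python
def is_field_mod(n):
    """Check that every nonzero residue class in Z/nZ is invertible."""
    try:
        for a in range(1, n):
            pow(a, -1, n)        # modular inverse; raises ValueError when none exists
        return True
    except ValueError:
        return False

print(is_field_mod(7))   # True:  (7) is maximal in Z, the residue field is F_7
print(is_field_mod(6))   # False: (6) is not maximal, Z/6Z has zero divisors
```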
Residue field
[ "Mathematics" ]
485
[ "Fields of abstract algebra", "Algebraic geometry" ]
10,789,878
https://en.wikipedia.org/wiki/Est%3A%20Playing%20the%20Game
est: Playing the Game the New Way is a non-fiction book by Carl Frederick, first published in 1976, by Delacorte Press, New York. The book describes in words the basic message of Werner Erhard's Erhard Seminars Training (est) theatrical experience. Erhard/est sued in federal court in the United States to stop the book from publication, but the suit failed. The book takes a 'trainer's' approach to the est experience, in that it essentially duplicates the est training, citing examples and using jargon from the actual experience. The title became a New York Times #2 best-seller, with more than a million copies in print, but overall critical reception was negative. The New York Times Book Review called it "a semi-literate rehash of Erhard-speak", and Library Journal noted, "The est disdain for critical thought and its fondness for its own jargon are painfully obvious in this book". Author Frederick graduated from Penn State University with a Bachelor of Science degree in Psychology. He also received an MBA degree from the University of Chicago, and gained professional experience in advertising and marketing. He worked for Procter & Gamble in Cincinnati, Ohio, as a product manager in the company's Advertising Department. Subsequently, he went to Heublein in Hartford, Connecticut, where he held the position of Director of New Products. Finally, Frederick became VP/Director of Marketing for Hot Wheels at Mattel Toys in Los Angeles. A graduate and devotee of est, Carl wrote the book while working to become a Seminar Leader for the est training. After the book became a best-seller, Frederick traveled to Hawaii, then traversed the globe in a sailboat. Along the way, he held management positions on various consulting projects in marketing and advertising, and also worked in journalism at a newspaper in New Zealand. In 1985, Carl started a U.S. business which distributed Harley Davidson accessories. The company carried out operations in the South Pacific, Hawaii, and California. Frederick eventually settled in Costa Rica, where he constructed and managed a tourist resort in 1997. Litigation Werner Erhard/est sued Frederick, reportedly in an attempt to prevent publication of the book. In an article "30 Years After the 'EST' Experience", which appeared in the 2003 Collectors Edition of his book, Frederick discussed the litigation initiated against him by Werner Erhard. Frederick wrote that his 200-plus page manuscript for the book was initially titled The Game of Life and How To Win It. According to Frederick, he sent a copy of the book to about 12 publishers, and also sent a personal copy to Erhard. Frederick wrote that Erhard responded by suing him in U.S. Federal Court, claiming "I had infringed his copyrighted material - BUT, he didn't attach any material to his complaint. Moreover, I never saw anything printed in est, it was all live theater, and I was never asked to sign anything that said what I could (or couldn't) do as a result of taking the est training." Frederick described the litigation: "Erhard sent seven lawyers to the courtroom; I had one. They argued I was taking illegal liberty with the most incredible educational system in existence, and that I must be stopped immediately. The judge must have thought that we were arguing over some mighty trivial scribblings, because he just looked up quizzingly over his Franklin specs and threw the whole case right out the door." Erhard filed three lawsuits against Frederick, claiming copyright infringement. 
All of est/Erhard's suits were ruled 'nuisance claims'and summarily "thrown out of court" in 1976. Publication After the lawsuits concluded, Frederick signed a book deal with Dell/Delacorte Press, New York, receiving "more money in advance than any unknown author in the history of the United States." Frederick's agent, Ron Bernstein of Candida Donadio & Associates, commented on the book's successful publication in the face of the multiple lawsuits by Erhard/est: "... we feel this book's publication is a major victory against est's attempts to control the media." Originally copyrighted in 1974 by Delacorte Press, the book was first published in 1976, and republished in 1981. A Collector's Edition was published in 2003 by Synergy International of the Americas, and the author issued a revised Edition in 2012, which is now being sold on Amazon's Kindle and a paperback is available from Amazon Create Space. Contents The book essentially duplicates the est training in words. It is dedicated "To Werner .. the ultimate experience in beingness". Frederick takes the reader through the experience of the est training by providing vivid episodes which present the est perspective. He uses language to incite the reader in an attempt to reproduce this experience. Frederick utilizes stylistic techniques in the book such as text in all capital letters, instructing the reader, "THE TRUTH IS THAT THERE IS NO INHERENT SIGNIFICANCE TO ANYTHING YOU ARE, YOU DO, OR YOU HAVE." Frederick incorporates jargon from the est training in the work. He alternates between referring to the reader as "an ass" or "baby". The book contains short segments on various themes titled: "Total Acceptance and Responsibility", "Winners and Losers", and "The Game of Life". Chapter headings include: "How to Get All the Cheese in Life" and "How to Get Where You Really Want to Go in Life". Frederick emphasizes the importance of self-awareness, writing: "You are the Supreme being, in Spirit". He attempts to convince the reader that life should be considered a game, and that it is more desirable to win at a goal than it is to be right about it. Frederick's thesis is that individuals cause all events which occur in their lives, and that they should work to consciously control their circumstances. Critical reception The book became a best-seller. It hit number four on The New York Times' list of best selling trade paperbacks in May 1976, and later reached the number two spot on the list. The book sold 900,000 copies. In Psychology as Religion, author Paul C. Vitz describes the book as a "popularization". An article in New York Magazine characterizes Frederick's work as a "do-it-yourself book" on the est training. Writing in Sporting with the Gods, Michael Oriard comments that Frederick's book "celebrated" the est training. Carl Ferdinand Howard Henry notes in God, Revelation, and Authority that Frederick devotes text in the book to "expounding the view promoted by the self-assertion cult est". Writing in a review of the book for Library Journal, M. E. Monbeck comments: "The est disdain for critical thought and its fondness for its own jargon are painfully obvious in this book, est is certainly a most innovative approach, one which seems to have helped many adults and harmed few. There is, however, very little appreciation in est of the unique psychology of children, and est's effects on them seem to be potentially very harmful." Booklist criticizes the author, due to the ambiguous stylistic nature of the book. 
The review notes, "Whereas the dust jacket identifies him as an est graduate who is interpreting that experience for others, this book itself says nothing of the relationship." Booklist complained that it was, "difficult to separate interpretation from the original version" of Frederick's recounting of the est training. In her review in The New York Times Book Review, Vivian Gornick notes: "[Of this book] the less said the better. ... In short: Frederick's book is a semi-literate rehash of Erhard-speak as it is practiced by Erhard, his 'trainers,' and his 'graduates.'" A review of the book in Kirkus Reviews was negative; the review writes critically of the author, "Now we have priests like Carl Frederick, EST graduate, ad man and 'simply another human being,' who addresses his reader as 'baby' when not calling him 'asshole.' The original EST marathon entails four days of this kind of insult." Kirkus Reviews concludes, characterizing the book as, "Low blows at high decibels." See also Getting It: The psychology of est Human Potential Movement Large Group Awareness Training New age Outrageous Betrayal References Further reading Book reviews 1976 non-fiction books 1981 non-fiction books 2003 non-fiction books Human Potential Movement Personal development New Age books Werner Erhard
Est: Playing the Game
[ "Biology" ]
1,766
[ "Personal development", "Behavior", "Human behavior" ]
10,790,339
https://en.wikipedia.org/wiki/Analog%20%28program%29
Analog is a free web log analysis computer program that runs under Windows, macOS, Linux, and most Unix-like operating systems. It was first released on June 21, 1995, by Stephen Turner as generic freeware; the license was changed to the GNU General Public License in November 2004. The software can be downloaded for several computing platforms, or the source code can be downloaded and compiled if desired. Analog has support for 35 languages, and provides the ability to do reverse DNS lookups on log files, to indicate where web site hits originate. It can analyze several different types of web server logs, including Apache, IIS, and iPlanet. It has over 200 configuration options and can generate 32 reports. It also supports log files for multiple virtual hosts. The program is comparable to Webalizer or AWStats, though it does not use as many images, preferring to stick with simple bar charts and lists to communicate similar information. Analog can export reports in a number of formats including HTML, XHTML, XML, Latex and a delimited output mode (for example CSV) for importing into other programs. Delimited or "computer" output from Analog is often used to generate more structured and graphically rich reports using the third party Report Magic program. The popularity of Analog is largely unknown as no download count information has been released on its historic dissemination. In a 1998 survey by the Graphic, Visualization, & Usability Center (GVU), Analog was reportedly used by 24.9% (up from 19.9% the year before), with its nearest rival, Web Trends holding some 20.3% of the market. It is not clear how Analog's usage has changed in the decade leading up to 2010, nor how its usage profile has been impacted by on-line analysis services such as Google Analytics. Analog can operate on an individual or web-farm basis from a single process, requiring no modification of web page or web script code in order to use it. It is a stand-alone utility, and it is not possible for visiting clients to block all of the logging of traffic directly from the client. Analog has not been officially updated since the version 6.0 release in December 2004. The original author moved on to commercial traffic analysis. Updates to Analog continued informally by its user community up until the end of 2009 on the official mailing list. Currently the only formally compiled updated redistributable of Analog is that of Analog CE, which has focused on fixing issues in Analog's XML DTD and on adding new operating system and web browser detection to the original code branch. History Analog was first released in June 1995, as research project by its creator Dr. Stephen Turner, then working as a research fellow in Sidney Sussex College in the University of Cambridge. Some of the larger release milestones include: 14 June 1995 Analog 0.8b, the initial full testing build. 29 June 1995 Analog 0.9b was the first public release of Analog. 12 September 1995 Analog 1.0 was the first stable release. 10 February 1997 Analog 2.0 was the initial release of a native Win32 version of Analog. 15 June 1998 Analog 3.0 included support for HTTP/1.1 status codes and included a more refined log parsing engine in addition to the ability to parse non-standard log file formats. 16 November 1999 Analog 4.0 supported new reports including the Organisation Report, Operating System Report, Search Word Report, Search Query Report and Processing Time Report. 
1 May 2001 Analog 5.0 is released with support for 24 languages, a range of new configuration commands and a new LaTeX output format. 19 December 2004 Analog 6.0 is released, including support for Palm OS and Symbian OS detection and all other changes from its 21-month beta period. Analog 6.0 was the first stable release made available under GPL license terms. 2 October 2007 Analog 6.0.1 C:Amie Edition was the first release of the C:Amie maintenance branch. It included support for Windows Vista, improved support for Windows 3.11 and Windows NT 3.5 detection and allowed for the detection of the NetFront browser. 4 April 2009 Analog 6.0.4 C:Amie Edition was a bug fix release to Analog 6.0, containing bug fixes to Analog's XML output rendering and new configuration options. 18 July 2011 Analog 6.0.8 C:Amie Edition, with support for Windows Phone 7.5 (Mango), Apple iOS 5.0 and all current Android releases. 17 August 2012 Analog 6.0.9 C:Amie Edition, with new operating system identification profiles for Android 4.1 Jelly Bean, Windows Phone 6.5, Windows Phone 8.0, Windows 8 and Windows Server 2012. For the first time, the release expands out the previously grouped operating system detection for Mac OS X so that version number breakdowns are provided where information is available via user agent entries in the log file. The release also includes a number of bug fixes. 7 October 2013 Analog 6.0.11 C:Amie Edition, with improved accuracy in MSIE compatibility mode detection and new detection profiles for iOS, Android, Windows Phone and Windows 8/Server 2012 R2. 28 June 2015 Analog 6.0.12 C:Amie Edition, with new detection profiles for iOS, OS X, Android, Edge, Windows Phone and Windows 10. 27 July 2019 Analog CE 6.0.16 added a new ANONYMIZERURL setting to allow the use of a URL forwarding service on reports, a new LINKNOFOLLOW setting to enable/disable hyperlink rel="nofollow" on reports (set to on by default), replaced Mac OS X branding with macOS and made other improvements to the Operating System report. A full list of the changes in each release is recorded in the Analog What's New Changelog. A full list of changes in the maintenance release is recorded on the Analog C:Amie Edition page. References External links Analog CE Analog Mailing List Internet Protocol based network software Free web analytics software Web log analysis software
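Analog itself is a compiled C program with its own highly configurable log parser, but the general task the article describes, reading web server log lines and tallying them into reports, can be illustrated with a short, hypothetical Python sketch. The regular expression below for Apache-style "combined" log lines is a simplification invented for this example and is not Analog's actual log-format handling.

```python
import re
from collections import Counter

# Simplified NCSA combined log format (host, time, request, status, bytes, referer, agent).
COMBINED = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)'
    r'(?: "(?P<referer>[^"]*)" "(?P<agent>[^"]*)")?'
)

def tally(lines):
    """Count requests per requesting host and per HTTP status code."""
    hosts, statuses = Counter(), Counter()
    for line in lines:
        m = COMBINED.match(line)
        if not m:
            continue                      # skip lines that do not parse
        hosts[m['host']] += 1
        statuses[m['status']] += 1
    return hosts, statuses

sample = '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'
print(tally([sample]))
```

A real analyzer such as Analog adds the pieces the sketch omits: configurable log formats, reverse DNS lookups, many report types, and output in HTML, XML, LaTeX or delimited form.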
Analog (program)
[ "Technology" ]
1,261
[ "Web log analysis software", "Computer logging" ]
10,790,676
https://en.wikipedia.org/wiki/Modulated%20complex%20lapped%20transform
The modulated complex lapped transform (MCLT) is a lapped transform, similar to the modified discrete cosine transform, that explicitly represents the phase (complex values) of the signal. References H. Malvar, "A Modulated Complex Lapped Transform And Its Applications to Audio Processing". Proc. International Conference on Acoustics, Speech and Signal Processing, 1999. H. Malvar, "Fast Algorithm for the Modulated Complex Lapped Transform", IEEE Signal Processing Letters, vol. 10, No. 1, 2003. See also Modified discrete cosine transform Fourier analysis
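The relationship between the MCLT and the MDCT/MDST pair can be illustrated with a small numerical sketch: the real part of each MCLT coefficient behaves like an MDCT coefficient and the imaginary part like an MDST coefficient. The exact sign, window, and phase conventions differ between Malvar's papers, so the direct, unoptimized computation below (using a sine analysis window) is an illustrative assumption rather than the canonical definition, and it is not the fast algorithm.

```python
import numpy as np

def mclt_block(x, M):
    """Illustrative direct MCLT of one 2M-sample block (O(M^2), not the fast algorithm).

    Assumed conventions: sine analysis window and
    X[k] = sum_n h[n] * x[n] * exp(-j*(pi/M)*(n + (M+1)/2)*(k + 1/2)),
    whose real and imaginary parts are MDCT- and MDST-like coefficients.
    """
    assert len(x) == 2 * M
    n = np.arange(2 * M)
    h = np.sin(np.pi / (2 * M) * (n + 0.5))            # sine window (one common choice)
    k = np.arange(M).reshape(-1, 1)
    phase = (np.pi / M) * (n + (M + 1) / 2) * (k + 0.5)
    return (h * x * np.exp(-1j * phase)).sum(axis=1)   # M complex coefficients

# Example: transform one block of a test tone and inspect the magnitude spectrum.
M = 8
x = np.cos(2 * np.pi * 3 * np.arange(2 * M) / (2 * M))
X = mclt_block(x, M)
print(np.abs(X).round(3))
```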
Modulated complex lapped transform
[ "Technology" ]
123
[ "Computing stubs" ]
10,791,959
https://en.wikipedia.org/wiki/E%E2%80%93Z%20notation
E–Z configuration, or the E–Z convention, is the IUPAC preferred method of describing the absolute stereochemistry of double bonds in organic chemistry. It is an extension of cis–trans isomer notation (which only describes relative stereochemistry) that can be used to describe double bonds having two, three or four substituents. E and Z notation can only be used when neither carbon of the double bond bears two identical substituents. Following the Cahn–Ingold–Prelog priority rules (CIP rules), each substituent on a double bond is assigned a priority, then the positions of the higher of the two substituents on each carbon are compared to each other. If the two groups of higher priority are on opposite sides of the double bond (trans to each other), the bond is assigned the configuration E (from entgegen, the German word for "opposite"). If the two groups of higher priority are on the same side of the double bond (cis to each other), the bond is assigned the configuration Z (from zusammen, the German word for "together"). The letters E and Z are conventionally printed in italic type, within parentheses, and separated from the rest of the name with a hyphen. They are always printed as full capitals (not in lowercase or small capitals), but do not constitute the first letter of the name for English capitalization rules. Another example: the CIP rules assign a higher priority to bromine than to chlorine, and a higher priority to chlorine than to hydrogen, hence the following (possibly counterintuitive) nomenclature. For organic molecules with multiple double bonds, it is sometimes necessary to indicate the alkene location for each E or Z symbol. For example, the chemical name of alitretinoin is (2E,4E,6Z,8E)-3,7-dimethyl-9-(2,6,6-trimethyl-1-cyclohexenyl)nona-2,4,6,8-tetraenoic acid, indicating that the alkenes starting at positions 2, 4, and 8 are E while the one starting at position 6 is Z. Undefined ene stereochemistry The prefix 'E/Z-' can be used to indicate uncertainty in the E or Z isomers for an ene bond. For graphical representations, wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double bond has sometimes been used; it is no longer considered an acceptable style for general use by IUPAC but may still be required by computer software. See also Descriptor (chemistry) Geometric isomerism Molecular geometry References Chemistry prefixes Stereochemistry
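The final assignment step is mechanical once CIP priorities are known: on each doubly bonded carbon, identify which of its two substituents ranks higher, then check whether the two winners lie on the same side of the double bond. The sketch below shows only that comparison step with hypothetical numeric priorities as inputs; real CIP ranking of substituents is considerably more involved and is not implemented here.

```python
def ez_label(left_pair, right_pair):
    """Return 'Z' if the higher-priority substituents are on the same side, else 'E'.

    Each pair is (priority_up, priority_down): hypothetical CIP priorities
    (larger number = higher priority) for the substituents drawn "up" and "down"
    on that carbon of the double bond.
    """
    left_winner_up = left_pair[0] > left_pair[1]
    right_winner_up = right_pair[0] > right_pair[1]
    return "Z" if left_winner_up == right_winner_up else "E"

# 2-butene sketch: CH3 (priority 2) vs H (priority 1) on each carbon.
print(ez_label((2, 1), (2, 1)))  # both CH3 groups on the same side -> 'Z' (cis)
print(ez_label((2, 1), (1, 2)))  # CH3 groups on opposite sides -> 'E' (trans)
```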
E–Z notation
[ "Physics", "Chemistry" ]
606
[ "Stereochemistry", "Space", "nan", "Spacetime", "Chemistry prefixes" ]
10,792,481
https://en.wikipedia.org/wiki/Herbimycin
Herbimycin is a benzoquinone ansamycin antibiotic that binds to Hsp90 (Heat Shock Protein 90) and alters its function. Hsp90 client proteins play important roles in the regulation of the cell cycle, cell growth, cell survival, apoptosis, angiogenesis and oncogenesis. It was originally identified by its herbicidal activity, and thus named. The most recent herbimycins to be discovered, herbimycins D–F, were isolated from a Streptomyces strain recovered from thermal vents associated with the Ruth Mullins coal fire in Appalachian Kentucky. Synonyms Antibiotic Tan 420F Herbimycin A Biological activity Herbimycin induces the degradation of proteins that are mutated in tumor cells, such as v-Src, Bcr-Abl and p53, preferentially over their normal cellular counterparts. This effect is mediated via Hsp90. See also Geldanamycin Satoshi Ōmura References External links Herbimycin A from Center for Pharmaceutical Research and Innovation Antibiotics 1,4-Benzoquinones Carbamates Lactams Ethers Ansamycins
Herbimycin
[ "Chemistry", "Biology" ]
245
[ "Biotechnology products", "Functional groups", "Organic compounds", "Antibiotics", "Ethers", "Biocides" ]
10,792,995
https://en.wikipedia.org/wiki/Frequency%20synthesizer
A frequency synthesizer is an electronic circuit that generates a range of frequencies from a single reference frequency. Frequency synthesizers are used in devices such as radio receivers, televisions, mobile telephones, radiotelephones, walkie-talkies, CB radios, cable television converter boxes, satellite receivers, and GPS systems. A frequency synthesizer may use the techniques of frequency multiplication, frequency division, direct digital synthesis, frequency mixing, and phase-locked loops to generate its frequencies. The stability and accuracy of the frequency synthesizer's output are related to the stability and accuracy of its reference frequency input. Consequently, synthesizers use stable and accurate reference frequencies, such as those provided by a crystal oscillator. Types Three types of synthesizer can be distinguished. The first and second types are routinely found as stand-alone architectures: direct analog synthesis (also called a mix–filter–divide architecture, as found in 1960s instruments such as the HP 5100A) and the more modern direct digital synthesizer (DDS) (table lookup). The third type is routinely used as a communication system IC building block: indirect digital (PLL) synthesizers, including integer-N and fractional-N. The recently emerged TAF-DPS is also a direct approach; it directly constructs the waveform of each pulse in the clock pulse train. Digiphase synthesizer A digiphase synthesizer is in some ways similar to a DDS, but it has architectural differences. One of its advantages is that it allows a much finer resolution than other types of synthesizer for a given reference frequency. Time-Average-Frequency Direct Period Synthesis (TAF-DPS) Time-Average-Frequency Direct Period Synthesis (TAF-DPS) focuses on frequency generation for clock signals driving integrated circuits. Unlike all other techniques, it uses the novel concept of time-average frequency. Its aim is to address the two long-standing problems in the field of on-chip clock signal generation: arbitrary frequency generation and instantaneous frequency switching. Starting from a base time unit, TAF-DPS first creates two types of cycle, TA and TB. These two types of cycle are then used in an interleaved fashion to produce the clock pulse train. As a result, TAF-DPS can address the problems of arbitrary-frequency generation and instantaneous-frequency switching effectively. The first circuit technology utilizing the TAF concept was the flying-adder or flying-adder PLL, which was developed in the late 1990s. Since the introduction of the TAF concept in 2008, development of frequency synthesis technology based on TAF has formally been under way. A detailed description of this technology can be found in the books and short tutorial cited below. As development has progressed, it has gradually become clear that TAF-DPS is a circuit-level enabler for system-level innovation. It can be used in many areas other than clock signal generation. Its impact is significant since the clock signal is the most important signal in electronics, establishing the flow of time inside the electronic world. This influence can be seen in a proposed directional change in Moore's Law from space to time. History Prior to the widespread use of synthesizers, in order to pick up stations on different frequencies, radio and television receivers relied on manual tuning of a local oscillator, which used a resonant circuit composed of an inductor and capacitor, or sometimes resonant transmission lines, to determine the frequency. 
The receiver was adjusted to different frequencies by either a variable capacitor, or a switch which chose the proper tuned circuit for the desired channel, such as with the turret tuner commonly used in television receivers prior to the 1980s. However the resonant frequency of a tuned circuit is not very stable; variations in temperature and aging of components caused frequency drift, causing the receiver to drift off the station frequency. Automatic frequency control (AFC) solves some of the drift problem, but manual retuning was often necessary. Since transmitter frequencies are stabilized, an accurate source of fixed, stable frequencies in the receiver would solve the problem. Quartz crystal resonators are many orders of magnitude more stable than LC circuits and when used to control the frequency of the local oscillator offer adequate stability to keep a receiver in tune. However the resonant frequency of a crystal is determined by its dimensions and cannot be varied to tune the receiver to different frequencies. One solution is to employ many crystals, one for each frequency desired, and switch the correct one into the circuit. This "brute force" technique is practical when only a handful of frequencies are required, but quickly becomes costly and impractical in many applications. For example, the FM radio band in many countries supports 100 individual channel frequencies from about 88 to 108 MHz; the ability to tune in each channel would require 100 crystals. Cable television can support even more frequencies or channels over a much wider band. A large number of crystals increases cost and requires greater space. The solution to this was the development of circuits which could generate multiple frequencies from a "reference frequency" produced by a crystal oscillator. This is called a frequency synthesizer. The new "synthesized" frequencies would have the frequency stability of the master crystal oscillator, since they were derived from it. Many techniques have been devised over the years for synthesizing frequencies. Some approaches include phase locked loops, double mix, triple mix, harmonic, double mix divide, and direct digital synthesis (DDS). The choice of approach depends on several factors, such as cost, complexity, frequency step size, switching rate, phase noise, and spurious output. Coherent techniques generate frequencies derived from a single, stable master oscillator. In most applications, a crystal oscillator is common, but other resonators and frequency sources can be used. Incoherent techniques derive frequencies from a set of several stable oscillators. The vast majority of synthesizers in commercial applications use coherent techniques due to simplicity and low cost. Synthesizers used in commercial radio receivers are largely based on phase-locked loops or PLLs. Many types of frequency synthesizer are available as integrated circuits, reducing cost and size. High end receivers and electronic test equipment use more sophisticated techniques, often in combination. System analysis and design A well-thought-out design procedure is considered to be the first significant step to a successful synthesizer project. In the system design of a frequency synthesizer, states Manassewitsch, there are as many "best" design procedures as there are experienced synthesizer designers. 
System analysis of a frequency synthesizer involves output frequency range (or frequency bandwidth or tuning range), frequency increments (or resolution or frequency tuning), frequency stability (or phase stability, compare spurious outputs), phase noise performance (e.g., spectral purity), switching time (compare settling time and rise time), and size, power consumption, and cost. James A. Crawford says that these are mutually contradictive requirements. Influential early books on frequency synthesis techniques include those by Floyd M. Gardner (his 1966 Phaselock techniques) and by Venceslav F. Kroupa (his 1973 Frequency Synthesis). Mathematical techniques analogous to mechanical gear-ratio relationships can be employed in frequency synthesis when the frequency synthesis factor is a ratio of integers. This method allows for effective planning of distribution and suppression of spectral spurs. Variable-frequency synthesizers, including DDS, are routinely designed using Modulo-N arithmetic to represent phase. Principle of PLL synthesizers A phase locked loop is a feedback control system. It compares the phases of two input signals and produces an error signal that is proportional to the difference between their phases. The error signal is then low pass filtered and used to drive a voltage-controlled oscillator (VCO) which creates an output frequency. The output frequency is fed through a frequency divider back to the input of the system, producing a negative feedback loop. If the output frequency drifts, the phase error signal will increase, driving the frequency in the opposite direction so as to reduce the error. Thus the output is locked to the frequency at the other input. This other input is called the reference and is usually derived from a crystal oscillator, which is very stable in frequency. The block diagram below shows the basic elements and arrangement of a PLL based frequency synthesizer. The key to the ability of a frequency synthesizer to generate multiple frequencies is the divider placed between the output and the feedback input. This is usually in the form of a digital counter, with the output signal acting as a clock signal. The counter is preset to some initial count value, and counts down at each cycle of the clock signal. When it reaches zero, the counter output changes state and the count value is reloaded. This circuit is straightforward to implement using flip-flops, and because it is digital in nature, is very easy to interface to other digital components or a microprocessor. This allows the frequency output by the synthesizer to be easily controlled by a digital system. Example Suppose the reference signal is 100 kHz, and the divider can be preset to any value between 1 and 100. The error signal produced by the comparator will only be zero when the output of the divider is also 100 kHz. For this to be the case, the VCO must run at a frequency which is 100 kHz x the divider count value. Thus it will produce an output of 100 kHz for a count of 1, 200 kHz for a count of 2, 1 MHz for a count of 10 and so on. Note that only whole multiples of the reference frequency can be obtained with the simplest integer N dividers. Fractional N dividers are readily available. Practical considerations In practice this type of frequency synthesizer cannot operate over a very wide range of frequencies, because the comparator will have a limited bandwidth and may suffer from aliasing problems. This would lead to false locking situations, or an inability to lock at all. 
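Before turning to further practical limitations, the integer-N relationship from the Example above can be made concrete with a short sketch. The values are the hypothetical ones used in that example, and the calculation assumes an ideal, already-locked loop with no loop dynamics; it is not a model of any real synthesizer chip.

```python
def integer_n_output(f_ref_hz, n):
    """Ideal locked integer-N PLL: the VCO must run at N times the reference for zero phase error."""
    if n < 1:
        raise ValueError("divider count must be a positive integer")
    return f_ref_hz * n

f_ref = 100_000  # 100 kHz reference, as in the example above
for n in (1, 2, 10, 100):
    print(f"N = {n:3d} -> output = {integer_n_output(f_ref, n) / 1e6:.1f} MHz")
# N =   1 -> output = 0.1 MHz
# N =   2 -> output = 0.2 MHz
# N =  10 -> output = 1.0 MHz
# N = 100 -> output = 10.0 MHz
```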
In addition, it is hard to make a high frequency VCO that operates over a very wide range. This is due to several factors, but the primary restriction is the limited capacitance range of varactor diodes. However, in most systems where a synthesizer is used, we are not after a huge range, but rather a finite number over some defined range, such as a number of radio channels in a specific band. Many radio applications require frequencies that are higher than can be directly input to the digital counter. To overcome this, the entire counter could be constructed using high-speed logic such as ECL, or more commonly, using a fast initial division stage called a prescaler which reduces the frequency to a manageable level. Since the prescaler is part of the overall division ratio, a fixed prescaler can cause problems designing a system with narrow channel spacings – typically encountered in radio applications. This can be overcome using a dual-modulus prescaler. Further practical aspects concern the amount of time the system can switch from channel to channel, time to lock when first switched on, and how much noise there is in the output. All of these are a function of the loop filter of the system, which is a low-pass filter placed between the output of the frequency comparator and the input of the VCO. Usually the output of a frequency comparator is in the form of short error pulses, but the input of the VCO must be a smooth noise-free DC voltage. (Any noise on this signal naturally causes frequency modulation of the VCO.) Heavy filtering will make the VCO slow to respond to changes, causing drift and slow response time, but light filtering will produce noise and other problems with harmonics. Thus the design of the filter is critical to the performance of the system and in fact the main area that a designer will concentrate on when building a synthesizer system. Use as a frequency modulator Many PLL frequency synthesizers can also generate frequency modulation (FM). The modulating signal is added to the output of the loop filter, directly varying the frequency of the VCO and the synthesizer output. The modulation will also appear at the phase comparator output, reduced in amplitude by any frequency division. Any spectral components in the modulating signal too low to be blocked by the loop filter end up back at the VCO input with opposite polarity to the modulating signal, thus cancelling them out. (The loop effectively sees these components as VCO noise to be tracked out.) Modulation components above the loop filter cutoff frequency cannot return to the VCO input so they remain in the VCO output. This simple scheme therefore cannot directly handle low frequency (or DC) modulating signals but this is not a problem in the many AC-coupled video and audio FM transmitters that use this method. Such signals may also be placed on a subcarrier above the cutoff frequency of the PLL loop filter. PLL frequency synthesizers can also be modulated at low frequency and down to DC by using two-point modulation to overcome the above limitation. Modulation is applied to the VCO as before, but now is also applied digitally to the synthesizer in sympathy with the analog FM signal using a fast delta sigma ADC. See also Superheterodyne receiver Digitally controlled oscillator Dual-modulus prescaler Wadley Loop References . Also PDF version. Xiu, Liming (2012), Nanometer Frequency Synthesis beyond Phase Locked Loop, Aug. 2012, John Wiley IEEE press (IEEE Press Series on Microelectronic Systems), . 
Xiu, Liming (2015), From Frequency to Time-Average-Frequency: A Paradigm Shift in the Design of Electronic system, May 2015, John Wiley IEEE press (IEEE Press Series on Microelectronic Systems), . Further reading Ulrich L. Rohde "Digital PLL Frequency Synthesizers – Theory and Design ", Prentice-Hall, Inc., Englewood Cliffs, NJ, January 1983 Ulrich L. Rohde " Microwave and Wireless Synthesizers: Theory and Design ", John Wiley & Sons, August 1997, External links Frequency Synthesizer U.S. Patent 3,555,446, Braymer, N. B., (1971, January 12) . HP 5100A Direct synthesizer: comb generator; filter, mix, divide. Given 3.0bcd MHz, mix with 24 MHz and filter to get 27.0bcd MHz, mix with 3.a MHz and filter to get 30.abcd MHz; divide by 10 and filter to get 3.0abcd MHz; feed to next stage to get another digit or mix up to 360.abcd MHz and start mixing and filtering with other frequencies in 1 MHz (30–39 MHz) and 10 MHz (350–390 MHz) steps. Spurious signals are -90 dB (p. 2). . HP 8660A/B Multiloop PLL synthesizer. Electronic oscillators Communication circuits Radio technology Electronic test equipment Second-harmonic generation
Frequency synthesizer
[ "Technology", "Engineering" ]
3,061
[ "Information and communications technology", "Telecommunications engineering", "Electronic test equipment", "Radio technology", "Measuring instruments", "Communication circuits" ]
10,793,106
https://en.wikipedia.org/wiki/Trachealis%20muscle
The trachealis muscle is a sheet of smooth muscle in the trachea. Structure The trachealis muscle lies posterior to the trachea and anterior to the oesophagus. It bridges the gap between the free ends of C-shaped rings of cartilage at the posterior border of the trachea, adjacent to the oesophagus. This completes the ring of cartilages of the trachea. The trachealis muscle also supports a thin cartilage on the inside of the trachea. It is the only smooth muscle present in the trachea. Function The primary function of the trachealis muscle is to constrict the trachea, allowing air to be expelled with more force, such as during coughing. Clinical significance Tracheomalacia may involve hypotonia of the trachealis muscle. The trachealis muscle may become stiffer during ageing, which makes the whole trachea less elastic. In infants, the insertion of an oesophagogastroduodenoscope into the oesophagus may compress the trachealis muscle, and narrow the trachea. This can result in reduced airflow to the lungs. Infants may be intubated to make sure that the trachea is fixed open. See also Muscles of respiration References Respiratory system Respiration
Trachealis muscle
[ "Biology" ]
285
[ "Organ systems", "Respiratory system" ]
14,426,202
https://en.wikipedia.org/wiki/XLDB
XLDB (eXtremely Large DataBases) was a yearly conference about databases, data management and analytics held from 2007 to 2019. The definition of extremely large refers to data sets that are too big in terms of volume (too much), velocity (too fast), and/or variety (too many places, too many formats) to be handled using conventional solutions. This conference dealt with the high end of very large databases (VLDB). It was conceived and chaired by Jacek Becla. History In October 2007, data experts gathered at SLAC National Accelerator Lab for the First Workshop on Extremely Large Databases. As a result, the XLDB research community was formed to meet the rapidly growing demands of the largest data systems. In addition to the original invitational workshop, an open conference, tutorials, and annual satellite events on different continents were added. The main event, held annually at Stanford University, gathered over 300 attendees. XLDB was one of the data systems events catering to both academic and industry communities. For 2009, the workshop was co-located with VLDB 2009 in France to reach out to non-US research communities. XLDB 2019 followed Stanford's Conference on Systems and Machine Learning (SysML). Goals The main goals of this community included: Identify trends, commonalities and major roadblocks related to building extremely large databases Bridge the gap between users trying to build extremely large databases and database solution providers worldwide Facilitate development and growth of practical technologies for extremely large data stores XLDB Community As of 2013, the community consisted of over one thousand members including: Scientists from laboratories who develop, use, or plan to develop or use XLDB for their research. Commercial users of XLDB. Providers of database products, including commercial vendors and representatives from open source database communities. Academic database researchers. XLDB Conferences, Workshops and Tutorials The community met annually at Stanford University through 2019. Occasional satellite events were held in Asia and Europe. A detailed report or videos were produced after each workshop. Tangible results XLDB events led to an effort to build a new open-source science database called SciDB. The XLDB organizers also started defining a science benchmark for scientific data management systems called SS-DB. At XLDB 2012 the XLDB organizers announced that two major databases that support arrays as first-class objects (MonetDB SciQL and SciDB) had formed a working group in conjunction with XLDB. This working group proposed a common syntax (provisionally named "ArrayQL") for manipulating arrays, including array creation and query. See also International Conference on Very Large Data Bases References Further reading Pavlo A., Paulson E., Rasin A., Abadi D. J., Dewitt D. J., Madden S., and Stonebraker M., "A Comparison of Approaches to Large-Scale Data Analysis," Proceedings of the 2009 ACM SIGMOD, https://web.archive.org/web/20090611174944/http://database.cs.brown.edu/sigmod09/benchmarks-sigmod09.pdf Becla, J., & Wang, D. L. 2005, Lessons Learned from Managing a Petabyte, downloaded from https://web.archive.org/web/20110604223735/http://www.slac.stanford.edu/pubs/slacpubs/10750/slac-pub-10963.pdf on 2007-11-25. Duellmann, D. 1999, Petabyte Databases, ACM SIGMOD Record, vol. 28, p. 506, https://web.archive.org/web/20071012015357/http://www.sigmod.org/sigmod/record/issues/9906/index.html#TutorialSessions. Hanushevsky, A., & Nowak, M. 
1999, Pursuit of a Scalable High Performance Multi-Petabyte Database, 16th IEEE Symposium on Mass Storage Systems, pp. 169–175, http://citeseer.ist.psu.edu/217883.html. Shiers, J., Building Very Large, Distributed Object Databases'', downloaded from https://web.archive.org/web/20070915101842/http://wwwasd.web.cern.ch/wwwasd/cernlib/rd45/papers/dbprog.html on 2007-11-25. External links Official website Computer science conferences Types of databases Data management
XLDB
[ "Technology" ]
973
[ "Data management", "Computer science", "Computer science conferences", "Data" ]
14,426,255
https://en.wikipedia.org/wiki/Trans-regulatory%20element
Trans-regulatory elements (TREs) are DNA sequences encoding upstream regulators (i.e. trans-acting factors), which may modify or regulate the expression of distant genes. Trans-acting factors interact with cis-regulatory elements to regulate gene expression. TREs mediate the expression profiles of a large number of genes via trans-acting factors. While TRE mutations affect gene expression, they are also among the main driving factors for evolutionary divergence in gene expression. Trans vs cis elements Trans-regulatory elements work through an intermolecular interaction between two different molecules and so are said to be "acting in trans": for example, (1) a transcribed and translated transcription factor protein derived from the trans-regulatory element; and (2) a DNA regulatory element that is adjacent to the regulated gene. This is in contrast to cis-regulatory elements that work through an intramolecular interaction between different parts of the same molecule: (1) a gene; and (2) an adjacent regulatory element for that gene in the same DNA molecule. Additionally, each trans-regulatory element affects a large number of genes on both alleles, while a cis-regulatory element is allele-specific and only controls nearby genes. Exonic and promoter sequences of genes are significantly more conserved than cis- and trans-regulatory elements. Hence, they have higher resistance to genetic divergence, yet retain their susceptibility to mutations in upstream regulators. This accentuates the significance of genetic divergence within species due to cis- and trans-regulatory variants. Trans- and cis-regulatory elements co-evolved rapidly and on a large scale to maintain gene expression. They often act in opposite directions, one up-regulating while the other down-regulates, to compensate for their effects on the exonic and promoter sequences they act on. Other evolutionary models, such as the independent evolution of trans- or cis-regulatory elements, were deemed incompatible with observed regulatory systems. Co-evolution of the two regulatory elements was suggested to arise from the same lineage. TREs are more evolutionarily constrained than cis-regulatory elements, suggesting the hypothesis that TRE mutations are compensated by CRE mutations to maintain stability in gene expression. This makes biological sense, given a TRE's effect on a broad range of genes and a CRE's compensatory effect on specific genes. Following a TRE mutation, accumulated CRE mutations act to fine-tune the mutation's effect. Examples Trans-acting factors can be categorized by their interactions with the regulated genes, the cis-acting elements of the genes, or the gene products. DNA binding DNA-binding trans-acting factors regulate gene expression by interfering with the gene itself or with cis-acting elements of the gene, which leads to changes in transcriptional activity. This can be direct initiation of transcription, or promotion or repression of the activity of transcriptional proteins. Specific examples include: Transcription factors DNA editing DNA editing proteins edit and permanently change the gene sequence, and subsequently the gene expression of the cell. All progeny of the cell will inherit the edited gene sequence. DNA editing proteins often take part in the immune response system of both prokaryotes and eukaryotes, providing high variance in gene expression in adaptation to various pathogens. 
Specific examples include: RAG1/RAG2 TdT Cas1/Cas2 mRNA processing mRNA processing acts as a form of post-transcriptional regulation, which mostly happens in eukaryotes. 3′ cleavage/polyadenylation and 5′ capping increase overall RNA stability, and the presence of a 5′ cap allows ribosome binding for translation. RNA splicing allows the expression of various protein variants from the same gene. Specific examples include: SR proteins Ribonucleoprotein hnRNP snRNP mRNA binding mRNA binding allows repression of protein translation through direct blocking, degradation or cleavage of mRNA. Certain mRNA binding mechanisms have high specificity, which can act as a form of intrinsic immune response during certain viral infections. Certain segmented RNA viruses can also regulate viral gene expression through RNA binding of another genome segment; however, the details of this mechanism are still unclear. Specific examples include: RNA binding protein siRNA miRNA piRNA See also Cis-regulatory element References Gene expression
Trans-regulatory element
[ "Chemistry", "Biology" ]
849
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
14,427,315
https://en.wikipedia.org/wiki/GPR12
Probable G-protein coupled receptor 12 is a protein that in humans is encoded by the GPR12 gene. The gene product of GPR12 is an orphan receptor, meaning that its endogenous ligand is currently unknown. Gene disruption of GPR12 in mice results in dyslipidemia and obesity. Ligands Inverse agonists Cannabidiol Evolution Paralogues Source: GPR6 GPR3 S1PR5 CNR1 CNR2 MC4R S1PR1 MC3R MC2R S1PR2 MC1R S1PR3 LPAR2 MC5R LPAR1 S1PR4 LPAR3 GPR119 References Further reading G protein-coupled receptors
GPR12
[ "Chemistry" ]
149
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,342
https://en.wikipedia.org/wiki/GPR15
G protein-coupled receptor 15 is a protein that in humans is encoded by the GPR15 gene. GPR15 is a class A orphan G protein-coupled receptor (GPCR; a receptor that couples to heterotrimeric guanine nucleotide-binding proteins). The GPR15 gene is localized at chromosome 3q11.2-q13.1. It is found in epithelial cells, synovial macrophages, endothelial cells and lymphocytes, especially T cells. From the mRNA sequence, a molecular weight of 40.8 kDa is predicted for GPR15. In an epithelial tumour cell line (HT-29), however, a 36 kDa band, composed of GPR15 and galactosyl ceramide, was detected. Protein expression in lymphocytes is strongly associated with hypomethylation of its gene. Tissue distribution High gene expression was described for colonic mucosa, small bowel mucosa, liver and spleen. Moderate gene expression was found in blood, lymph node, thymus, testis and prostate. In peripheral blood, GPR15 is mainly found on T cells, especially on CD4+ T helper cells, and less prominently on B cells. By immunohistochemistry, GPR15 is found specifically in glandular cells of the stomach, α-cells of the islets of Langerhans in the pancreas, the surface epithelium of the small intestine and colon, hepatocytes in the liver, the tubular epithelium of the kidney and in diverse tumour tissues such as glioblastoma, melanoma, small cell lung carcinoma or colon carcinoma. Function The overall physiological role remains elusive. GPR15 seems to play a role in the homing of particular T cell types to the colon. In humans, GPR15, together with α4β7-integrin, controls the homing of effector T cells to the inflamed gut in ulcerative colitis. With respect to the homing of GPR15-expressing immune cells to the colon, there are divergent mechanisms between humans and rodents such as the mouse. Ligands At least two endogenous ligands have been identified recently. One ligand, encoded by the human gene GPR15LG, was identified as a robust marker for psoriasis whose abundance decreased after therapeutic treatment with an anti-interleukin-17 antibody. Transcripts of GPR15LG are abundant in the cervix and colon. It is currently unknown whether GPR15LG causes disease symptoms or is the consequence of a disturbed epithelial barrier. It does not act as a chemotactic agent but rather decreases T cell migration, suggesting a mechanism of heterologous receptor desensitization. The second ligand is a fragment of thrombomodulin exerting an anti-inflammatory function in mice. Clinical significance Human GPR15 was originally cloned as a co-receptor for HIV or the simian immunodeficiency virus. HIV-induced activation of GPR15 in enterocytes seems to cause HIV enteropathy, accompanied by diarrhea and lipid malabsorption. In inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis, the proportion of GPR15-expressing cells among regulatory T cells is slightly increased in peripheral blood. In mice, GPR15-deficient animals were prone to developing severe large-intestine inflammation, which was rescued by the transfer of GPR15-sufficient Tregs. Lifestyle Chronic tobacco smoking is a very strong inducer of GPR15-expressing T cells in peripheral blood. Although the proportion of GPR15-expressing cells among T cells in peripheral blood is a highly sensitive and specific biomarker for chronic tobacco smoking, it does not indicate disturbed homeostasis in the lung. References Further reading G protein-coupled receptors
GPR15
[ "Chemistry" ]
811
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,362
https://en.wikipedia.org/wiki/GPR17
Uracil nucleotide/cysteinyl leukotriene receptor is a G protein-coupled receptor that in humans is encoded by the GPR17 gene located on chromosome 2 at position q21. The actual activating ligands for and some functions of this receptor are disputed. History Initially discovered in 1998 as an orphan receptor, i.e. a receptor whose activating ligand(s) and function were unknown, GPR17 was "deorphanized" in a study that reported it to be a receptor for LTC4, LTD4, and uracil nucleotides. In consequence, GPR17 attracted attention as a potential mediator of reactions caused by LTC4 and LTD4, viz. asthma, rhinitis, and urticaria triggered by allergens, nonsteroidal anti-inflammatory drugs, and exercise (see Aspirin-induced asthma). Subsequent reports, however, have varied in their results: studies focusing on the allergen and non-allergen reactions find that GPR17-bearing cells do not respond to LTC4, LTD4, and uracil nucleotides, while studies focusing on nerve tissue find that certain types of GPR17-bearing oligodendrocytes do indeed respond to them. In 2013 and 2014 reports, the International Union of Basic and Clinical Pharmacology took no position on which of these are true ligands for GPR17. GPR17 is a constitutively active receptor, i.e. a receptor that has baseline activity which is independent of, although potentially increased by, its ligands. Biochemistry GPR17 has a structure which is intermediate between the cysteinyl leukotriene receptor group (i.e. cysteinyl leukotriene receptor 1 and cysteinyl leukotriene receptor 2) and the purine P2Y subfamily of 12 receptors (see P2Y receptors), sharing 28 to 48% amino acid identity with them. GPR17 is a G protein coupled receptor that acts primarily through G proteins linked to the Gi alpha subunit but also to the Gq alpha subunit. Matching these structural relationships, GPR17 has been reported to be activated by cysteinyl leukotrienes (i.e. LTC4 and LTD4) as well as by purines (i.e. uridine, uridine diphosphate (UDP), and UDP-glucose). Further relating these receptors, GPR17 may dimerize with (i.e. associate with) certain of the cited cysteinyl leukotriene or purine receptors in mediating cell responses, and this dimerization may explain some of the discrepancies reported for the ability of these ligands to activate GPR17 as expressed in different cell types (see the Function section below). GPR17 is also activated by the emergency-signaling and atherosclerosis-promoting oxysterols and by synthetic compounds with broadly different structures. Relevant to its activating ligands as well as its reported interaction with other G protein coupled receptors, GPR17 is a promiscuous receptor. Montelukast, which inhibits cysteinyl leukotriene receptor 1 and is in clinical use for the chronic and preventative treatment of LTC4- and LTD4-promoted allergic and non-allergic diseases, and cangrelor, which inhibits P2Y purinergic receptors and is approved in the USA as an antiplatelet drug, both inhibit the GPR17 receptor. Distribution GPR17 was first cloned from, and is highly expressed in, certain precursors of oligodendrocytes in the nerve tissue of the central nervous system (CNS); it is overexpressed in CNS tissues experiencing demyelination injuries; within 48 hours of the latter types of injuries, GPR17 expression is induced in dying neurons within and on the borders of the injury, in infiltrating microglia and macrophages, and in activated oligodendrocyte precursor cells. 
Function Studies focusing on allergic and hypersensitivity reactions, which are mediated by the binding of LTC4 and LTD4 to cysteinyl leukotriene receptor 1 (CysLTR1) and cysteinyl leukotriene receptor 2, have disputed the finding that LTC4 and LTD4 are ligands for GPR17. They have shown that cells co-expressing both CysLTR1 and GPR17 receptors exhibit a marked reduction in binding LTC4 and that mice lacking GPR17 are hyper-responsive to IgE-induced passive cutaneous anaphylaxis. They have therefore nominated GPR17 as functioning to inhibit CysLTR1 in these model systems, where it might serve to dampen the acute reactions involving the cited leukotrienes. Studies focusing on nerve tissue indicate that GPR17 is: a) highly expressed in precursors to mature oligodendrocytes but not expressed in mature oligodendrocytes, suggesting that GPR17 must be down-regulated in order for precursor cells to proceed to terminal oligodendrocyte differentiation; b) activated by uridine, uridine diphosphate (UDP) and UDP-glucose to stimulate outward K+ channels and the aforementioned maturation responses in oligodendrocyte precursor cells; c) also activated by LTC4 and LTD4; d) more highly expressed in central nervous system (CNS) tissues of animal models undergoing ischemia, experimental autoimmune encephalomyelitis, and focal demyelination, as well as in the CNS tissues of humans suffering brain damage due to ischemia, trauma, and multiple sclerosis; e) expressed in injured neurons and associated with the rapid death and clearance of these neurons in a model of mouse spinal cord crush injury; f) acting to reduce the extent of spinal cord injury in the latter model, based on the increased extent of injury in GPR17-depleted mice; and g) acting to reduce inflammation, elevate hippocampal neurogenesis, and improve learning and memory in a rat model of age-related cognitive impairment, based on the effects of the GPR17 antagonist montelukast as well as of GPR17 depletion. The studies suggest that GPR17 is a sensor of damage in the CNS and participates in the resolution of this damage by clearing injured neurons and/or promoting their re-myelination following a variety of insults, perhaps including old age. The GPR17 gene has also been found to regulate the food intake response mediated by FOXO1. Clinical significance GPR17 has been proposed as a potential pharmacological target for the treatment of multiple sclerosis and traumatic brain injury in humans. References Further reading G protein-coupled receptors
GPR17
[ "Chemistry" ]
1,422
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,375
https://en.wikipedia.org/wiki/NAGly%20receptor
N-Arachidonyl glycine receptor (NAGly receptor), also known as G protein-coupled receptor 18 (GPR18), is a protein that in humans is encoded by the GPR18 gene. Along with the other previously orphan receptors GPR55 and GPR119, GPR18 has been found to be a receptor for endogenous lipid neurotransmitters, several of which also bind to cannabinoid receptors. It has been found to be involved in the regulation of intraocular pressure. Research supports the hypothesis that GPR18 is the abnormal cannabidiol receptor and that N-arachidonoyl glycine (NAGly), the endogenous lipid metabolite of anandamide, initiates directed microglial migration in the CNS through activation of GPR18, though a recent study failed to show NAGly acting as a GPR18 agonist in rat sympathetic neurons. Resolvin D2 (RvD2), a member of the specialized proresolving mediators (SPM) class of polyunsaturated fatty acid metabolites, is an activating ligand for GPR18; RvD2 and its activation of GPR18 contribute to the resolution of inflammatory responses as well as of inflammation-based and other diseases in animal models, and are proposed to do so in humans. Furthermore, RvD2 is a metabolite of the omega-3 fatty acid docosahexaenoic acid (DHA); the metabolism of DHA to RvD2 and RvD2's activation of GPR18 are proposed to be one among many mechanisms for the anti-inflammatory and other beneficial effects attributed to omega-3 fatty acid-rich diets. Ligands Agonists Ligands found to bind to GPR18 as agonists include: N-Arachidonoylglycine (NAGly) Abnormal cannabidiol (Abn-CBD) AM-251 - partial agonist Cannabidiol - partial agonist CBG-DMH O-1602 Δ9-Tetrahydrocannabinol (Δ9-THC) - THC is actually a more potent agonist at GPR18 than at CB1 or CB2, with a Ki of 0.96 nM at GPR18, 8.1 nM at GPR55, 25.1 nM at CB1 and 35.2 nM at CB2. Anandamide (N-arachidonoyl ethanolamine, AEA) Arachidonylcyclopropylamide (ACPA) Resolvin D2 (RvD2) Antagonists Amauromine O-1918 CID-85469571 References Further reading G protein-coupled receptors Biology of bipolar disorder
NAGly receptor
[ "Chemistry" ]
582
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,389
https://en.wikipedia.org/wiki/GPR19
Probable G-protein coupled receptor 19 is a protein that in humans is encoded by the GPR19 gene. GPR19 has been proposed as the receptor for the peptide hormone adropin. References Further reading G protein-coupled receptors
GPR19
[ "Chemistry" ]
49
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,399
https://en.wikipedia.org/wiki/GPR20
Probable G-protein coupled receptor 20 is a protein that in humans is encoded by the GPR20 gene. References Further reading G protein-coupled receptors
GPR20
[ "Chemistry" ]
32
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,402
https://en.wikipedia.org/wiki/MPP%2B
MPP+ (1-methyl-4-phenylpyridinium) is a positively charged organic molecule with the chemical formula C12H12N+. It is a monoaminergic neurotoxin that acts by interfering with oxidative phosphorylation in mitochondria by inhibiting complex I, leading to the depletion of ATP and eventual cell death. MPP+ arises in the body as the toxic metabolite of the closely related compound MPTP. MPTP is converted in the brain into MPP+ by the enzyme MAO-B, ultimately causing parkinsonism in primates by killing certain dopamine-producing neurons in the substantia nigra. The ability of MPP+ to induce Parkinson's disease has made it an important compound in Parkinson's research since this property was discovered in 1983. The chloride salt of MPP+ found use in the 1970s as an herbicide under the trade name cyperquat. Though no longer in use as an herbicide, cyperquat's closely related structural analog paraquat still finds widespread usage, raising some safety concerns. History MPP+ has been known since at least the 1920s, with a synthesis of the compound being published in a German chemistry journal in 1923. Its neurotoxic effects, however, were not known until much later, with the first paper definitively identifying MPP+ as a Parkinson's-inducing poison being published in 1983. This paper followed a string of poisonings that took place in San Jose, California in 1982 in which users of an illicitly synthesized analog of meperidine were presenting to hospital emergency rooms with symptoms of Parkinson's. Since most of the patients were young and otherwise healthy and Parkinson's disease tends to afflict people at a much older age, researchers at the hospital began to scrutinize the illicitly synthesized opiates that the patients had ingested. The researchers discovered that the opiates were tainted with MPTP, which is the biological precursor to the neurotoxic MPP+. The MPTP was present in the illicitly synthesized meperidine analog as an impurity, which had a precedent in a 1976 case involving a chemistry graduate student synthesizing meperidine and injecting the resulting product into himself. The student came down with symptoms of Parkinson's disease, and his synthesized product was found to be heavily contaminated with MPTP. The discovery that MPP+ could reliably and irreversibly induce Parkinson's disease in mammals reignited interest in Parkinson's research, which had previously been dormant for decades. Following the revelation, MPP+ and MPTP sold out in virtually all chemical catalogs, reappearing months later with a 100-fold price increase. Synthesis Laboratory MPP+ can be readily synthesized in the laboratory, with Zhang and colleagues publishing a representative synthesis in 2017. The synthesis involves reacting 4-phenylpyridine with methyl iodide in acetonitrile solvent at reflux for 24 hours. An inert atmosphere is used to ensure a quantitative yield. The product is formed as the iodide salt, and the reaction proceeds via an SN2 pathway. The industrial synthesis of MPP+ for sale as the herbicide cyperquat used methyl chloride as the source of the methyl group. Biological MPP+ is produced in vivo from the precursor MPTP. The process involves two successive oxidations of the molecule by monoamine oxidase B to form the final MPP+ product. This metabolic process occurs predominantly in astrocytes in the brain. 
Mechanism of toxicity MPP+ exhibits its toxicity mainly by promoting the formation of reactive free radicals in the mitochondria of dopaminergic neurons in the substantia nigra. MPP+ can siphon electrons from the mitochondrial electron transport chain at complex I and be reduced, in the process forming radical reactive oxygen species which go on to cause further, generalized cellular damage. In addition, the overall inhibition of the electron transport chain eventually leads to stunted ATP production and eventual death of the dopaminergic neurons, which ultimately displays itself clinically as symptoms of Parkinson's disease. MPP+ also displays toxicity by inhibiting the synthesis of catecholamines, reducing levels of dopamine and cardiac norepinephrine, and inactivating tyrosine hydroxylase. The mechanism of uptake of MPP+ is important to its toxicity. MPP+ injected as an aqueous solution into the bloodstream causes no symptoms of Parkinsonism in test subjects, since the highly charged molecule is unable to diffuse through the blood-brain barrier. Furthermore, MPP+ shows little toxicity to cells other than dopaminergic neurons, suggesting that these neurons have a unique process by which they can uptake the molecule, since, being charged, MPP+ cannot readily diffuse across the lipid bilayer that composes cellular membranes. Unlike MPP+, its common biological precursor MPTP is a lipid-soluble molecule that diffuses readily across the blood-brain barrier. MPTP itself is not cytotoxic, however, and must be metabolized to MPP+ by MAO-B to show any signs of toxicity. The oxidation of MPTP to MPP+ is a process that can be catalyzed only by MAO-B, and cells that express other forms of MAO do not show any MPP+ production. Studies in which MAO-B was selectively inhibited showed that MPTP had no toxic effect, further cementing the crucial role of MAO-B in MPTP and MPP+ toxicity. Studies in rats and mice show that various compounds, including nobiletin, a flavonoid found in citrus, can rescue dopaminergic neurons from degeneration caused by treatment with MPP+. The specific mechanism of protection, however, remains unknown. Uses In scientific research MPP+ and its precursor MPTP are widely used in animal models of Parkinson's disease to irreversibly induce the disease. Excellent selectivity and dose control can be achieved by injecting the compound directly into cell types of interest. Most modern studies use rats as a model system, and much research is directed at identifying compounds that can attenuate or reverse the effects of MPP+. Commonly studied compounds include various MAO inhibitors and general antioxidants. While some of these compounds are quite effective at stopping the neurotoxic effects of MPP+, further research is needed to establish their potential efficacy in treating clinical Parkinson's. The revelation that MPP+ causes the death of dopaminergic neurons and ultimately induces symptoms of Parkinson's disease was crucial in establishing the lack of dopamine as central to Parkinson's disease. Levodopa or L-DOPA came into common use as an anti-Parkinson's medication thanks to the results brought about by research using MPP+. Further medications are in trial to treat the progression of the disease itself as well as the motor and non-motor symptoms associated with Parkinson's, with MPP+ still being widely used in early trials to test efficacy. 
As a pesticide MPP+, sold as the chloride salt under the brand name cyperquat, was used briefly in the 1970s as an herbicide to protect crops against nutsedge, a member of the cyperus genus of plants. MPP+ as a salt has much lower acute toxicity than its precursor MPTP due to the inability of the former to pass through the blood-brain barrier and ultimately access the only cells that will permit its uptake, the dopaminergic neurons. While cyperquat is no longer used as an herbicide, a closely related compound named paraquat is. Given the structural similarities, some have raised concerns about paraquat's active use as an herbicide for those handling it. However, studies have shown paraquat to be far less neurotoxic than MPP+, since paraquat does not bind to complex I in the mitochondrial electron transport chain, and thus its toxic effects cannot be realized. Safety MPP+ is commonly sold as the water-soluble iodide salt and is a white-to-beige powder. Specific toxicological data on the compound is somewhat lacking, but one MSDS quotes an LD50 of 29 mg/kg via an intraperitoneal route and 22.3 mg/kg via a subcutaneous route of exposure. Both values come from a mouse model system. MPP+ encountered in the salt form is far less toxic by ingestion, inhalation, and skin exposure than its biological precursor MPTP, due to the inability of MPP+ to cross the blood-brain barrier and freely diffuse across cellular membranes. There is no specific antidote to MPP+ poisoning. Clinicians are advised to treat exposure symptomatically. References Herbicides Human drug metabolites Human pathological metabolites Monoaminergic neurotoxins Pyridinium compounds
MPP+
[ "Chemistry", "Biology" ]
1,882
[ "Chemicals in medicine", "Biocides", "Herbicides", "Human drug metabolites" ]
14,427,412
https://en.wikipedia.org/wiki/GPR21
Probable G-protein coupled receptor 21 is a protein that in humans is encoded by the GPR21 gene. References Further reading G protein-coupled receptors
GPR21
[ "Chemistry" ]
32
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,427
https://en.wikipedia.org/wiki/GPR22
Probable G-protein coupled receptor 22 is a protein that in humans is encoded by the GPR22 gene. References Further reading G protein-coupled receptors
GPR22
[ "Chemistry" ]
32
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,441
https://en.wikipedia.org/wiki/LPAR4
Lysophosphatidic acid receptor 4 also known as LPA4 is a protein that in humans is encoded by the LPAR4 gene. LPA4 is a G protein-coupled receptor that binds the lipid signaling molecule lysophosphatidic acid (LPA). See also Lysophospholipid receptor P2Y receptor References Further reading G protein-coupled receptors
LPAR4
[ "Chemistry" ]
85
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,463
https://en.wikipedia.org/wiki/GPR25
Probable G-protein coupled receptor 25 is a protein that in humans is encoded by the GPR25 gene. References Further reading G protein-coupled receptors
GPR25
[ "Chemistry" ]
32
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,484
https://en.wikipedia.org/wiki/GPR26
Probable G-protein coupled receptor 26 is a protein that in humans is encoded by the GPR26 gene. GPR26 expression is found to peak perinatally, when the visual system is first challenged, and the gene lies within a 53 kb LD block enriched for association with introgressed Neanderthal-derived SNPs. Additionally, GPR26 is known to form oligomeric structures with the 5-HT1A receptor. References Further reading G protein-coupled receptors
GPR26
[ "Chemistry" ]
98
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,502
https://en.wikipedia.org/wiki/GPR27
Probable G-protein coupled receptor 27 is a protein that in humans is encoded by the GPR27 gene. See also SREB References G protein-coupled receptors
GPR27
[ "Chemistry" ]
34
[ "G protein-coupled receptors", "Signal transduction" ]
14,427,523
https://en.wikipedia.org/wiki/GPR31
G-protein coupled receptor 31, also known as 12-(S)-HETE receptor, is a protein that in humans is encoded by the GPR31 gene. The human gene is located on chromosome 6q27 and encodes a G-protein coupled receptor protein composed of 319 amino acids. Function The GPR31 receptor shares close amino acid sequence similarity with the oxoeicosanoid receptor 1, a G-protein coupled receptor encoded by the GPR170 gene. Ligand binding and activation The oxoeicosanoid receptor 1 is the receptor for a group of arachidonic acid metabolites produced by 5-lipoxygenase, such as 5-hydroxyeicosatetraenoic acid (5-HETE), 5-oxo-eicosatetraenoic acid (5-oxo-ETE), and other members of this family, which are potent bioactive cell stimuli. In contrast, the GPR31 receptor binds to a different arachidonic acid metabolite, 12-hydroxyeicosatetraenoic acid (12-HETE), synthesized by 12-lipoxygenase. This conclusion is supported by studies that cloned the receptor from the PC-3 prostate cancer cell line. The cloned receptor, when expressed in other cell types, bound 12-HETE with high affinity (Kd = 5 nM) and mediated the effects of low concentrations of the S but not the R stereoisomer of 12-HETE. In a [35S]GTPγS binding assay, which estimates receptor activation by measuring agonist-stimulated [35S]GTPγS binding, 12(S)-HETE activated GPR31 with an EC50 (effective concentration causing 50% of maximal [35S]GTPγS binding) of less than 0.3 nM. In comparison, the EC50 was 42 nM for 15(S)-HETE, 390 nM for 5(S)-HETE, and undetectable for 12(R)-HETE. It is currently unknown whether GPR31 interacts with structural analogs of 12(S)-HETE, such as 12-oxo-ETE (a metabolite of 12(S)-HETE), various 5,12-diHETEs including LTB4, or other bioactive metabolites like the hepoxilins. Further research is required to determine whether GPR31 exclusively binds and mediates the effects of 12(S)-HETE or, like the oxoeicosanoid receptor 1, interacts with a broader family of analogs. Signaling pathways Like the oxoeicosanoid receptor, GPR31 activates the MEK-ERK1/2 signaling pathway, but unlike oxoeicosanoid receptor 1, it does not cause an increase in cytosolic Ca2+ concentration. It also activates NFκB. GPR31 exhibits stereospecificity and other properties expected of a true G-protein coupled receptor (GPCR). Additional receptors activated by 12(S)-HETE 12(S)-HETE also: a) binds to and activates the leukotriene B4 receptor-2 (BLT2), a GPCR for the 5-lipoxygenase-derived metabolite LTB4; b) binds to, but inhibits, the GPCR for prostaglandin H2 and thromboxane A2, two arachidonic acid metabolites; c) binds with high affinity to a 50 kilodalton (kDa) subunit of a 650 kDa cytosolic and nuclear protein complex; and d) binds with low affinity to and activates the intracellular peroxisome proliferator-activated receptor gamma. Complications in determining GPR31 function These alternate binding sites complicate the determination of 12(S)-HETE's reliance on GPR31 for cell activation and of the overall function of GPR31. Studies utilizing GPR31 gene knockout models will be crucial for understanding its role in vivo. Tissue distribution GPR31 receptor mRNA is highly expressed in the PC-3 prostate cancer cell line and, to a lesser extent, in the DU145 prostate cancer cell line, human umbilical vein endothelial cells (HUVEC), human brain microvascular endothelial cells (HBMEC), and human pulmonary aortic endothelial cells (HPAC). 
Its mRNA is also expressed, but at rather low levels, in several other human cell lines including: K562 cells (human myelogenous leukemia cells); Jurkat cells (T lymphocyte cells); Hut78 cells (T cell lymphoma cells); HEK 293 cells (primary embryonic kidney cells); MCF-7 cells (mammary adenocarcinoma cells); and EJ cells (bladder carcinoma cells). Mice express an ortholog of human GPR31 in their circulating blood platelets. Clinical significance Prostate cancer The GPR31 receptor appears to mediate the responses of PC-3 prostate cancer cells to 12(S)-HETE in stimulating the MEK-ERK1/2 and NFκB pathways and therefore may contribute to the growth-promoting and metastasis-promoting actions that 12(S)-HETE is proposed to have in human prostate cancer. However, LNCaP and PC3 human prostate cancer cells also express BLT2 receptors; in LNCaP cells, BLT2 receptors stimulate the expression of the growth- and metastasis-promoting androgen receptor; in PC3 cells, BLT2 receptors stimulate the NF-κB pathway to inhibit the apoptosis induced by cell detachment from surfaces (i.e. anoikis); and, in BLT2-overexpressing PWR-1E non-malignant prostate cells, 12(S)-HETE diminished anoikis-associated apoptotic cell death. Thus, the roles of 12(S)-HETE in human prostate cancer, if any, may involve its activation of either or both GPR31 and BLT2 receptors. Other diseases The many other actions of 12(S)-HETE (see 12-Hydroxyeicosatetraenoic acid), and any other ligands found to interact with this receptor, will require studies similar to those conducted on PC3 cells and mesenteric arteries to determine the extent to which they interact with BLT2, TXA2/PGH2, and PPARgamma receptors and thereby may contribute in part or in whole to their activity. Clues implicating GPR31, as opposed to the other receptors, in the actions of 12(S)-HETE include findings that GPR31 receptors neither respond to 12(R)-HETE nor induce rises in cytosolic Ca2+, whereas the other receptors mediate one or both of these actions. These studies will be important because, in addition to prostate cancer, preliminary studies suggest that the GPR31 receptor is implicated in several other diseases such as malignant megakaryocytosis (acute megakaryoblastic leukemia), arthritis, Alzheimer's disease, progressive B-cell chronic lymphocytic leukemia, diabetic neuropathy, and high-grade astrocytoma. References G protein-coupled receptors
GPR31
[ "Chemistry" ]
1,560
[ "G protein-coupled receptors", "Signal transduction" ]