| source | text |
|---|---|
https://en.wikipedia.org/wiki/Singularity%20spectrum | The singularity spectrum is a function used in multifractal analysis to describe the fractal dimension of the set of points of a function that share the same Hölder exponent. Intuitively, the singularity spectrum gives a value for how "fractal" a set of points in a function is.
More formally, the singularity spectrum D(α) of a function f is defined as:
$$D(\alpha) = \dim_H\{\, x : h(x) = \alpha \,\}$$
where h(x) is the function describing the Hölder exponent of f at the point x, and dim_H denotes the Hausdorff dimension of a point set.
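As a standard companion relation in the multifractal formalism (not stated in this excerpt, but the usual route to the spectrum in practice), D(α) can be obtained as a Legendre transform of the scaling exponents τ(q) of a partition function:

```latex
D(\alpha) = \inf_{q \in \mathbb{R}} \bigl( q\,\alpha - \tau(q) \bigr)
```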
See also
Fractal
Fractional Brownian motion
Hausdorff dimension |
https://en.wikipedia.org/wiki/The%20Invisible%20Boy%3A%20Second%20Generation | The Invisible Boy: Second Generation is a 2018 Italian fantasy-superhero film directed by Gabriele Salvatores. It is the sequel to the 2014 film The Invisible Boy.
Plot
Three years after the events of the first film, Michele Silenzi's adoptive mother Giovanna has died in a car accident, and Michele's love interest Stella – whose memories were mentally erased by Michele's father Andrey – remembers nothing of what happened in the submarine and has become engaged to Brando, who made her believe he was her savior.
A new girl, Natasha, arrives in Michele's class; she has the elemental ability of pyrokinesis. During a party at Brando's home, Natasha lights a fire to create a stir so as to be able to stun Michele, who wakes up in the bedroom of an abandoned noble house and finds Natasha and a woman named Yelena, who reveals to him that the girl is his sister and that she is their mother. In the meantime, Specials scattered around the world are being kidnapped.
Yelena explains to Michele that she needs blood transfusions from him and Natasha in order to regain her strength, since she belongs to the first generation of Specials. The transfusions are performed by Doctor KA, a physician belonging to the "normals" who has worked with Yelena for some time; she tells her son that Andrey has been mysteriously kidnapped and is convinced that the abductor is the cruel Igor Zavarov, a Russian oligarch known to her as a bitter enemy and exploiter of the Specials. Yelena's plan is to kidnap Zavarov with the help of Michele and Natasha in order to stop his campaign against the Specials and save Andrey. The kidnapping must be carried out while Zavarov is in Trieste for the inauguration of his new pipeline, and Yelena entrusts Michele with the task of entering the offices of the police station and stealing the security plans for the day of the inauguration, so as to discover Zavarov's exact movements. Through his invisibility, Michele succ |
https://en.wikipedia.org/wiki/Energizer%20Bunny | The Energizer Bunny is the marketing mascot of Energizer batteries in North America. It is a pink mechanical toy rabbit wearing sunglasses and blue and black striped flip-flops that beats a bass drum bearing the Energizer logo.
History
The Energizer Bunny was first created as a parody of the Duracell Bunny, which had first appeared in television advertising in 1973 in Duracell's "Drumming Bunny" commercial. Duracell had purportedly trademarked the drumming-bunny character, but whether it had or not, the trademark had lapsed by 1988, giving Energizer an opening to create its own.
The first Energizer Bunny commercial was broadcast on United States television on October 30, 1988. Produced by DDB Needham Worldwide, the spot began as a direct parody of Duracell's "Drumming Bunny" ad. In the original Duracell ads, a set of battery-powered drum-playing toy rabbits gradually slow to a halt until only the toy powered by a Duracell copper-top battery remains active. In Energizer's parody, the Energizer Bunny enters the screen midway through the ad, beating a huge bass drum and swinging a mallet over his head.
The Energizer Bunny is promoted as being able to continue operating indefinitely, or at least much longer than similar toys (or other products) using rival brands' batteries. Energizer's criticism was that Duracell compared its batteries with carbon-zinc batteries rather than with comparable alkaline batteries such as Energizer's. The creative team at D.D.B. Chicago who conceived and designed the bunny chose the All Effects special-effects company to build the original Energizer Bunny, a remote-controlled prop. All Effects operated the Energizer Bunny in most of its first commercials.
In subsequent commercials, the Bunny left the studio in which it had performed the "Drumming Bunny" ad to wander onto the sets of realistic-looking commercials for fictional products, interrupting their action. As the campaign progressed, many of these ads were standalone (for fake products such as " |
https://en.wikipedia.org/wiki/Alexander%20van%20Oudenaarden | Alexander van Oudenaarden (born 19 March 1970) is a Dutch biophysicist and systems biologist. He is a leading researcher in stem cell biology, specialising in single-cell techniques. In 2012 he became director of the Hubrecht Institute, and he has been awarded an ERC Advanced Grant three times, in 2012, 2017, and 2022. He was awarded the Spinoza Prize in 2017.
Biography
Van Oudenaarden was born 19 March 1970, in Zuidland, a small town in the Dutch province of South Holland. He studied at the Delft University of Technology, where he obtained an MSc degree in Materials Science and Engineering (cum laude) and an MSc degree in Physics, both in 1993, and subsequently a PhD degree in Physics (cum laude) in 1998 in experimental condensed matter physics, under the supervision of professor J.E. Mooij. He received the Andries Miedema Award (best doctoral research in the field of condensed matter physics in the Netherlands) for his thesis on "Quantum vortices and quantum interference effects in circuits of small tunnel junctions". In 1998, he moved to Stanford University, where he was a postdoctoral researcher in the departments of Biochemistry and of Microbiology & Immunology, working on force generation of polymerising actin filaments in the Theriot lab, and a postdoctoral researcher in the department of Chemistry, working on micropatterning of supported phospholipid bilayers in the Boxer lab. In 2000 he joined the department of Physics at MIT as an assistant professor, was tenured in 2004 and became a full professor. In 2001 he received the NSF CAREER award, and he was both an Alfred Sloan Research Fellow and the Keck Career Development Professor in Biomedical Engineering. In 2012 Alexander became the director of the Hubrecht Institute as the successor of Hans Clevers. In 2017 he received his second ERC Advanced Grant, for his study titled "a single-cell genomics approach integrating gene expression, lineage, and physical interactions". In 2022 he received his thi |
https://en.wikipedia.org/wiki/Multidrug-resistant%20Gram-negative%20bacteria | Multidrug-resistant Gram-negative bacteria (MDRGN bacteria) are a type of Gram-negative bacteria with resistance to multiple antibiotics. They can cause bacterial infections that pose a serious and rapidly emerging threat for hospitalized patients, and especially patients in intensive care units. Infections caused by MDR strains are correlated with increased morbidity, mortality, and prolonged hospitalization. Thus, not only do these bacteria pose a threat to global public health, but they also create a significant burden for healthcare systems.
Emerging threat
These bacteria pose a great threat to public health due to the limited treatment options available as well as the lack of newly developed antimicrobial medications. MDR strains of Enterobacteriaceae, Pseudomonas aeruginosa, and Acinetobacter baumannii have become of most concern because they have been reported by hospitals all around the United States. Many factors contribute to the existence and spread of MDR Gram-negative bacteria, such as: the overuse or misuse of existing antimicrobial agents, which has led to the development of adaptive resistance mechanisms by bacteria; a lack of responsible antimicrobial stewardship, such that the use of multiple broad-spectrum agents has helped perpetuate the cycle of increasing resistance; and a lack of good infection control practices.
Treatment options
Although there is currently a shortage of new drugs in the antimicrobial realm, a few antibiotics are being studied and tested for the treatment of serious Gram-negative bacterial infections. These include cephalosporins such as ceftobiprole, ceftaroline and FR-264205. The lack of newly emerging antimicrobial drugs has resulted in the revisiting of old antibiotics such as colistin (a polymyxin) and fosfomycin, which are traditionally considered to be toxic but have gained a principal role in the treatment of the most problematic MDR Gram-negative pathogens including Pseudomonas aeruginosa, |
https://en.wikipedia.org/wiki/Edgar%20T.%20Westbury | Edgar T. Westbury was perhaps best known as a major contributor to the English recreational magazine Model Engineer. He contributed under his own name, and also under the pseudonyms 'Artificer', 'Ned', 'Kinemette' and 'Exactus'. From 1925 until his death in 1970, he made over 1,474 authored contributions to Model Engineer under his real name. As Artificer, he wrote a further 135 articles from 1936 to 1970, on a range of topics including basic workshop skills and techniques, and construction of a light vertical milling machine. Ned was the nom-de-plume for writing about workshop equipment, under which he wrote about 159 articles. As Kinemette came a further 67 contributions from 1936 to 1959, on making optical equipment including slide and film projectors, and enlargers.
Westbury was born in 1896. He served in the Royal Navy during the latter part of the First World War, and in the late 1920s he was an instructor in the RAF. His "Atom Minor" engine of 1926 was the first of his engines to fly a model aeroplane; in collaboration with Colonel C. E. Bowden it set a new model flight endurance record, and it later broke a model hydroplane speed record in one of Bowden's boats. During World War Two, he developed a number of small petrol-driven generators for use by the armed forces. He was editor of Model Engineer for a time, and from 1966 was technical consultant under the magazine's new management, with Martin Evans as editor. E.T.W. died on 3 May 1970. A significant collection of Westbury's engines is held by the Society of Model and Experimental Engineers and is currently undergoing restoration by students of West Dean College.
His era spanned what might be called the 'late industrial period' in Britain, a time when Britain was in the throes of austerity and manufactured goods were expensive, mechanical skills were relatively common, and the only way to obtain extra luxuries was to make them yourself. His aptitude, and the mechanical craftsman ethos he represented, was eulogis |
https://en.wikipedia.org/wiki/Distributed%20File%20System%20%28Microsoft%29 | Distributed File System (DFS) is a set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a distributed file system. DFS has two components to its service: Location transparency (via the namespace component) and Redundancy (via the file replication component). Together, these components enable data availability in the case of failure or heavy load by allowing shares in multiple different locations to be logically grouped under one folder, the "DFS root".
Microsoft's DFS is referred to interchangeably as 'DFS' and 'Dfs' by Microsoft and is unrelated to the DCE Distributed File System, which held the 'DFS' trademark but was discontinued in 2005.
It is also called "MS-DFS" or "MSDFS" in some contexts, e.g. in the Samba user space project.
Overview
There is no requirement to use the two components of DFS together; it is perfectly possible to use the logical namespace component without using DFS file replication, and it is perfectly possible to use file replication between servers without combining them into one namespace.
A DFS root can only exist on a server version of Windows (from Windows NT 4.0 and up), on OpenSolaris (in kernel space), or on a computer running Samba (in user space). The Enterprise and Datacenter Editions of Windows Server can host multiple DFS roots on the same server. OpenSolaris intends on supporting multiple DFS roots in "a future project based on Active Directory (AD) domain-based DFS namespaces".
There are two ways of implementing DFS on a server (a toy namespace-resolution sketch follows this list):
Standalone DFS namespace - allows for a DFS root that exists only on the local computer, and thus does not use Active Directory. A Standalone DFS can only be accessed on the computer on which it is created. It does not offer any fault tolerance and cannot be linked to any other DFS. This is the only option available on Windows NT 4.0 Server systems. Standalone DFS roots are rarely encountered because of their |
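A minimal sketch of the location-transparency idea in Python: a namespace maps logical folders under one DFS root to SMB shares on different servers, so clients see a single tree regardless of where the data actually lives. The server, share, and root names are invented for the example, and a real DFS client obtains referrals from the namespace server rather than consulting a local table.

```python
# Toy DFS-style namespace: logical paths under one root map to SMB
# share targets on different servers (all names are hypothetical).
NAMESPACE = {
    r"\\corp\dfs\projects": [r"\\server-a\projects"],
    r"\\corp\dfs\archive":  [r"\\server-b\archive", r"\\server-c\archive"],
}

def resolve(path):
    # A real client picks among targets by site cost and ordering;
    # this sketch simply takes the first listed target.
    for root, targets in NAMESPACE.items():
        if path.lower().startswith(root.lower()):
            return targets[0] + path[len(root):]
    raise LookupError("not under a known DFS root: " + path)

print(resolve(r"\\corp\dfs\archive\2005\report.doc"))
# -> \\server-b\archive\2005\report.doc
```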
https://en.wikipedia.org/wiki/Particle%20beam | A particle beam is a stream of charged or neutral particles. In particle accelerators, these particles can move with a velocity close to the speed of light. There is a difference between the creation and control of charged particle beams and neutral particle beams, as only the first type can be manipulated to a sufficient extent by devices based on electromagnetism. The manipulation and diagnostics of charged particle beams at high kinetic energies using particle accelerators are main topics of accelerator physics.
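To make "close to the speed of light" concrete, here is a small Python check of the relativistic speed reached by a proton at a few kinetic energies (the sample energies are illustrative choices, not taken from the text):

```python
import math

# gamma = 1 + T / (m c^2); beta = sqrt(1 - 1/gamma^2)
m_p_c2 = 938.272e6  # proton rest energy in eV

for T_eV in (1e6, 1e9, 7e12):  # 1 MeV, 1 GeV, 7 TeV
    gamma = 1 + T_eV / m_p_c2
    beta = math.sqrt(1 - 1 / gamma**2)
    print(f"T = {T_eV:.0e} eV -> v/c = {beta:.9f}")
```

Already at 1 GeV a proton moves at about 87% of c; at TeV-scale energies the speed is indistinguishable from c to many decimal places.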
Sources
Charged particles such as electrons, positrons, and protons may be separated from their common surroundings. This can be accomplished by e.g. thermionic emission or arc discharge. The following devices are commonly used as sources for particle beams:
Ion source
Cathode ray tube, or more specifically one of its parts, the electron gun; electron guns were also part of traditional television sets and computer monitors.
Photocathodes may also be built in as a part of an electron gun, using the photoelectric effect to separate particles from their substrate.
Neutron beams may be created by energetic proton beams which impact on a target, e.g. of beryllium material. (see article Particle therapy)
Firing a petawatt laser at a titanium foil to produce a proton beam.
Manipulation
Acceleration
Charged beams may be further accelerated by resonant, sometimes superconducting, microwave cavities. These devices accelerate particles by interaction with an electromagnetic field. Since the wavelength of hollow macroscopic, conducting devices is in the radio frequency (RF) band, the design of such cavities and other RF devices is also a part of accelerator physics.
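A quick check of the wavelength claim: a resonant cavity's size is of the order of the free-space wavelength c/f, so typical accelerating frequencies imply macroscopic structures. The sample frequencies below are common choices assumed for illustration:

```python
c = 299_792_458.0  # speed of light in m/s

for f_hz in (100e6, 500e6, 1.3e9, 3e9):
    print(f"{f_hz / 1e9:5.2f} GHz -> wavelength {c / f_hz:.3f} m")
# Centimetres to metres: hence hollow, macroscopic conducting cavities.
```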
More recently, plasma acceleration has emerged as a possibility to accelerate particles in a plasma medium, using the electromagnetic energy of pulsed high-power laser systems or the kinetic energy of other charged particles. This technique is under active development, but cannot provide |
https://en.wikipedia.org/wiki/3D%20cell%20culture | A 3D cell culture is an artificially created environment in which biological cells are permitted to grow or interact with their surroundings in all three dimensions. Unlike 2D environments (e.g. a Petri dish), a 3D cell culture allows cells in vitro to grow in all directions, similar to how they would in vivo. These three-dimensional cultures are usually grown in bioreactors, small capsules in which the cells can grow into spheroids, or 3D cell colonies. Approximately 300 spheroids are usually cultured per bioreactor.
3D cell cultures can also be performed on microfluidic devices such as the Organoplate® to generate perfusable 3D tissues, and hanging drop devices to generate 3D spheroids.
Background
3D cell cultures have been used in research for several decades. One of the first recorded approaches for their development was at the beginning of the 20th century, with the efforts of Alexis Carrel to develop methods for prolonged in vitro tissue cultures. Early studies in the 1980s, led by Mina Bissell from the Lawrence Berkeley National Laboratory, highlighted the importance of 3D techniques for creating accurate in vitro culturing models. This work focused on the importance of the extracellular matrix and the ability of cultures in artificial 3D matrices to produce physiologically relevant multicellular structures, such as acinar structures in healthy and cancerous breast tissue models. These techniques have been applied to in vitro disease models used to evaluate cellular responses to pharmaceutical compounds.
Eric Simon, in a 1988 NIH SBIR grant report, showed that electrospinning could be used to produce nano- and submicron-scale polystyrene and polycarbonate fibrous mats (now known as scaffolds) specifically intended for use as in vitro cell substrates. This early use of electrospun fibrous lattices for cell culture and tissue engineering showed that various cell types including Human Foreskin Fibroblasts (HFF), transformed Human Carcinoma (HEp-2), and Mink |
https://en.wikipedia.org/wiki/Oligocrystalline%20material | Oligocrystalline material has a microstructure consisting of a few coarse grains, often columnar and parallel to the longitudinal ingot axis. This microstructure can be found in ingots produced by electron beam melting (EBM). |
https://en.wikipedia.org/wiki/Hoshi-imo | Hoshiimo is a sweet potato snack popular in Japan, similar to a number of other dried foods in Asia.
This food generally consists of steamed, dried sweet potatoes that are skinned and sliced, with no artificial sweeteners added. In some cases the sweet potatoes may be roasted rather than steamed. The surface may be covered with a white powder; not to be mistaken for mold, this is a form of crystallized sugar that emerges as the sweet potatoes dry. With a chewy texture and sweet potato flavor, this food can be eaten raw or roasted.
This dish is particularly rich in vitamins A, B1, C, and E and contains much potassium and calcium as well as dietary fiber. As an alkaline food, it reputedly helps the body maintain a healthy pH balance.
This product goes by other names in Japan, such as 'kanso-imo', 'mushi-kirihoshi', and 'kiriboshi kansho'. It is similar to two Korean dishes, 'goguma-malaengi' and 'mallin-goguma'; the latter is cut into thinner slices and has a crisper texture than the former. In China a very similar snack known as 'dìguā gàn' is popular, and in Vietnam a product known as 'khoai lang sấy dẻo' is common.
Varieties in Japan
In Japan hoshiimo is available in many shapes ranging from French fry-like rods to broad, flat chunks roughly 10–15 cm in length and 5 cm in width. Sometimes oblong, unsliced varieties, known as "maru-boshi", are also marketed.
Many types of sweet potato (Ipomoea batatas) are used to make this product. Common varieties include Beniharu, Tamayutaka, Silk Sweet, and Anno-Mitsuki sweet potatoes. Since China ranks above Japan in sweet potato production and the price of sweet potatoes in China is generally lower than in Japan, much of the hoshiimo sold in Japan today is in fact produced in China.
History
This product supposedly originated in Omaezaki City in what today is Shizuoka Prefecture. Around 1824 a merchant named Shozo Kuribayashi began manufacturing this d |
https://en.wikipedia.org/wiki/Third%20medium%20contact%20method | The third medium contact (TMC) is an implicit formulation for contact mechanics. Contacting bodies are embedded in a highly compliant medium (the third medium), which becomes increasingly stiff under compression. The stiffening of the third medium allows tractions to be transferred between the contacting bodies when the third medium between the bodies is compressed. In itself, the method is inexact; however, in contrast to most other contact methods, the third medium approach is continuous and differentiable, which makes it applicable to applications such as topology optimization.
The method was first proposed by Peter Wriggers et al. where an Ogden material model was used to model the third medium. This approach requires explicit treatment of surface normals. A simplification to the method was offered by Bog et al. by applying a Hencky material with the inherent property of becoming rigid under ultimate compression. This property has made the explicit treatment of surface normals redundant, thereby transforming the third medium contact method into a fully implicit method. The addition of a new void regularization by Bluhm et al. further extended the method to applications involving moderate sliding, rendering it practically applicable |
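A one-dimensional cartoon of the mechanism, assuming a Hencky-like stored energy with an arbitrary small stiffness k: the filler's energy grows without bound as it is squeezed flat, so resistance to interpenetration emerges from a smooth, differentiable potential rather than from inequality constraints. This is purely illustrative, not the actual finite-element formulation:

```python
import numpy as np

def medium_energy(stretch, k=1e-3):
    # Hencky-type log-strain energy; diverges as stretch -> 0 (full
    # compression), mimicking the stiffening third medium. k is made up.
    return k * np.log(stretch) ** 2

gap0 = 1.0  # initial thickness of the third medium
for gap in (0.5, 0.1, 0.01, 0.001):
    print(f"gap {gap:6.3f} -> medium energy {medium_energy(gap / gap0):.6f}")
```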
https://en.wikipedia.org/wiki/Certified%208-VSB%20specialist | Certified 8-VSB Specialist (8-VSB) is a specialist title granted to an individual who successfully meets the prerequisite certification and examination requirements in the United States. The certification is regulated by the Society of Broadcast Engineers (SBE), and demonstrates competence in the various aspects of 8-VSB (ATSC) digital television facilities. The "Certified 8-VSB Specialist" title is protected by copyright laws. Individuals who use the title without consent from the Society of Broadcast Engineers could face legal action.
The SBE certifications were created to recognize individuals who practice in career fields which are not regulated by state licensing or Professional Engineering programs. Marine Radio and radar systems still require a Federal Communications Commission (FCC) license apart from an SBE certification. Broadcast Engineering is regulated at the national level and not by individual states.
External links
Certified 8-VSB Specialist (8-VSB) Requirements & Application
SBE Official Website
See also
List of post-nominal letters
Broadcast engineering
Professional titles and certifications |
https://en.wikipedia.org/wiki/African%20Biosafety%20Network%20of%20Expertise | The African Biosafety Network of Expertise is a continental network hosted in Ouagadougou, Burkina Faso.
Origin
The African Biosafety Network of Expertise was launched on 23 February 2010 with the signing of a host agreement between the New Partnership for Africa's Development (NEPAD) and the Government of Burkina Faso.
It was conceptualized in Africa’s Science and Technology Consolidated Plan of Action (2005) and fulfils the recommendation of the High-Level African Panel on Modern Biotechnology, entitled Freedom to Innovate. The network is funded by the Bill and Melinda Gates Foundation.
Mission and activities
The network serves as a resource for regulators dealing with safety issues related to the introduction and development of genetically modified organisms. In addition to providing regulators with access to policy briefs and other relevant information online in English and French, the network organizes national and subregional workshops on specific topics. For instance, one-week biosafety courses for African regulators were run by the network in Burkina Faso in November 2013 and in Uganda in July 2014, in partnership with the University of Michigan (USA). Twenty-two regulators from Ethiopia, Kenya, Malawi, Mozambique, Tanzania, Uganda, and Zimbabwe took part in the latter course.
In April 2014, the network ran a training workshop in Nigeria at the request of the Federal Ministry of Environment for 44 participants drawn from government ministries, regulatory agencies, universities, and research institutions. The aim was to strengthen the regulatory capacity of institutional biosafety committees. This training was considered important to ensure continued regulatory compliance for ongoing confined field trials and multilocation trials for Maruca-resistant cowpea and biofortified sorghum. The workshop was run in partnership with the International Food Policy Research Institute's Program for Biosafety Systems.
From 28 April to 2 May 2014, Togo's Minist |
https://en.wikipedia.org/wiki/Interlobar%20arteries | The interlobar arteries are vessels of the renal circulation which supply the renal lobes. The interlobar arteries branch from the lobar arteries, which branch from the segmental arteries arising from the renal artery. They give rise to arcuate arteries. |
https://en.wikipedia.org/wiki/Super%20VGA | Super VGA (SVGA) is a broad term that covers a wide range of computer display standards that extended IBM's VGA specification.
When used as shorthand for a resolution, as VGA and XGA often are, SVGA refers to a resolution of 800 × 600.
History
In the late 1980s, after the release of IBM's VGA, third-party manufacturers began making graphics cards based on its specifications with extended capabilities. As these cards grew in popularity they began to be referred to as "Super VGA."
This term was not an official standard, but a shorthand for enhanced VGA cards which had become common by 1988. The first cards that explicitly used the term were Genoa Systems's SuperVGA and SuperVGA HiRes in 1987.
Super VGA cards broke compatibility with the IBM VGA standard, requiring software developers to provide specific display drivers and implementations for each card their software could operate on. Initially, the heavy restrictions this placed on software developers slowed the uptake of Super VGA cards, which motivated VESA to produce a unifying standard, the VESA BIOS Extensions (VBE), first introduced in 1989, to provide a common software interface to all cards implementing the VBE specification.
Eventually, Super VGA graphics adapters supported innumerable modes.
Specifications
The Super VGA standardized the following resolutions (a framebuffer sizing sketch follows the list):
640 × 400 or 640 × 480 with 256 colors
800 × 600 with 24-bit color depth
1024 × 768 with 24-bit color depth
1280 × 1024 with 24-bit color depth
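A sketch of what these modes imply for video memory, computed as width × height × bytes per pixel; "256 colors" is taken as 8 bits per pixel, and the card size mentioned in the comment is a common-sense reading rather than a claim from the text:

```python
modes = [(640, 480, 8), (800, 600, 24), (1024, 768, 24), (1280, 1024, 24)]

for w, h, bpp in modes:
    size = w * h * bpp // 8  # bytes of framebuffer required
    print(f"{w}x{h} @ {bpp}-bit: {size / 2**20:.2f} MiB")
# 1280x1024 at 24-bit needs ~3.75 MiB, i.e. a 4 MB card for true color.
```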
SVGA uses the same DE-15 VGA connector as the original standard, and otherwise operates over the same cabling and interfaces as VGA.
Early manufacturers
Some early Super VGA manufacturers and some of their models, where available:
Ahead Technologies (Not related to Nero AG, formerly Ahead Software)
Amdek: VGA ADAPTER/132 (Tseng Labs chipset)
AST Research, Inc.: VGA Plus (rebranded Paradise)
ATI Technologies: VIP (82C451), VGA Wonder
Chips and Technologies: 82C451
Cirrus Logic: CL-GD410/4 |
https://en.wikipedia.org/wiki/Equity-indexed%20annuity | An indexed annuity (the word "equity", previously tied to indexed annuities, has been dropped to help prevent the assumption that stock market investing is present in these products) in the United States is a type of tax-deferred annuity whose credited interest is linked to an equity index, typically the S&P 500 or an international index. It guarantees a minimum interest rate (typically between 1% and 3%) if held to the end of the surrender term and protects against a loss of principal. An equity index annuity is a contract with an insurance or annuity company. The returns may be higher than fixed instruments such as certificates of deposit (CDs), money market accounts, and bonds, but not as high as market returns. Equity index annuities are insured by each state's guarantee fund; coverage is not as strong as the insurance provided by the FDIC. For example, in California the fund will cover "80%, not to exceed $250,000." The guarantees in the contract are backed by the relative strength of the insurer.
The contracts may be suitable for a portion of the asset portfolio for those who want to avoid risk and are in retirement or nearing retirement age. The objective of purchasing an equity index annuity is to realize greater gains than those provided by CDs, money markets or bonds, while still protecting principal. The long term ability of Equity Index Annuities to beat the returns of other fixed instruments is a matter of debate.
Indexed annuities represented about 25.3% of all fixed annuity sales in 2020, according to My Annuity Store, Inc.
Equity-indexed annuities may also be referred to as fixed indexed annuities or simple indexed annuities. The mechanics of equity-indexed annuities are often complex and the returns can vary greatly depending on the month and year the annuity is purchased. Like many other types of annuities, equity-indexed annuities usually carry a surrender charge for early withdrawal. These "surrender periods" range between 3 and 16 years; typically |
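As a hedged illustration of why credited returns vary so much, here is a toy annual point-to-point crediting calculation. Participation rates and caps are typical contract features assumed for the example; none of the numbers come from this article:

```python
def credited_rate(index_return, participation=0.80, cap=0.07, floor=0.0):
    # Credit a share of the index gain, capped, and never below 0% --
    # the separate 1-3% minimum guarantee applies over the whole
    # surrender term, not to each contract year.
    return max(floor, min(cap, participation * index_return))

for r in (-0.20, 0.03, 0.12):
    print(f"index return {r:+.0%} -> credited {credited_rate(r):.2%}")
```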
https://en.wikipedia.org/wiki/Benefits%20Supervisor%20Sleeping | Benefits Supervisor Sleeping is a 1995 oil on canvas painting by the British artist Lucian Freud depicting a fat, naked woman lying on a couch. It is a portrait of Sue Tilley, a Jobcentre supervisor, who then weighed about .
Tilley is the author of a biography of the Australian performer Leigh Bowery titled Leigh Bowery, The Life and Times of an Icon. Tilley was introduced to Freud by Bowery, who was already modelling for him. Freud painted a number of large portraits of her around the period 1994–96, and came to call her "Big Sue". He said of her body: "It's flesh without muscle and it has developed a different kind of texture through bearing such a weight-bearing thing."
The painting held the world record for the highest price paid for a painting by a living artist when it was sold by Guy Naggar for US$33.6 million (£17.2 million) at Christie's in New York City in May 2008 to Roman Abramovich.
Freud's painting The Brigadier was sold for £35.8 million ($56.2 million) in 2015, four years after his death, replacing Benefits Supervisor Sleeping as the most expensive Freud painting sold at auction.
The painting was exhibited twice at Flowers Gallery:
1996: Naked – Flowers East at London Fields
1997: British Figurative Art - Part 1: Painting at Flowers East |
https://en.wikipedia.org/wiki/Schoenflies%20notation | The Schoenflies (or Schönflies) notation, named after the German mathematician Arthur Moritz Schoenflies, is a notation primarily used to specify point groups in three dimensions. Because a point group alone is completely adequate to describe the symmetry of a molecule, the notation is often sufficient and commonly used for spectroscopy. However, in crystallography, there is additional translational symmetry, and point groups are not enough to describe the full symmetry of crystals, so the full space group is usually used instead. The naming of full space groups usually follows another common convention, the Hermann–Mauguin notation, also known as the international notation.
Although Schoenflies notation without superscripts is a pure point group notation, optionally, superscripts can be added to further specify individual space groups. However, for space groups, the connection to the underlying symmetry elements is much more clear in Hermann–Mauguin notation, so the latter notation is usually preferred for space groups.
Symmetry elements
Symmetry elements are denoted by i for centers of inversion, C for proper rotation axes, σ for mirror planes, and S for improper rotation axes (rotation-reflection axes). C and S are usually followed by a subscript number (abstractly denoted n) denoting the order of rotation possible.
By convention, the axis of proper rotation of greatest order is defined as the principal axis. All other symmetry elements are described in relation to it. A vertical mirror plane (containing the principal axis) is denoted σv; a horizontal mirror plane (perpendicular to the principal axis) is denoted σh.
Point groups
In three dimensions, there are an infinite number of point groups, but all of them can be classified into several families (a small naming helper for the cyclic families follows the list).
Cn (for cyclic) has an n-fold rotation axis.
Cnh is Cn with the addition of a mirror (reflection) plane perpendicular to the axis of rotation (horizontal plane).
Cnv is Cn with the addition of n mirror pla |
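A small Python helper showing how mechanical the naming of the cyclic families is; the example molecules in the comments (water as C2v, ammonia as C3v) are standard textbook assignments, not drawn from this excerpt:

```python
def schoenflies_cyclic(n, mirror=""):
    """Assemble a symbol for the cyclic families: '' -> Cn,
    'h' -> horizontal mirror plane, 'v' -> n vertical mirror planes."""
    assert mirror in ("", "h", "v")
    return "C%d%s" % (n, mirror)

print(schoenflies_cyclic(2, "v"))  # C2v, e.g. the water molecule
print(schoenflies_cyclic(3, "v"))  # C3v, e.g. ammonia
print(schoenflies_cyclic(6, "h"))  # C6h
```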
https://en.wikipedia.org/wiki/Crungus | A Crungus is an imaginary creature found in artificial intelligence text-to-image models, sometimes also referred to as a digital cryptid. Twitch streamer and voice actor Guy Kelly found that typing the made-up word into the Craiyon image generator consistently produced pictures of a monstrous, hairy humanoid.
He later tweeted about this, which resulted in a very long thread of reactions and experiments with "Crungus" and variations of this AI prompt, including on different engines.
Since Craiyon version 2 was introduced in 2023, the prompt "Crungus" no longer produces the same imaginary creature.
Origins
It is unclear how the Crungus in Craiyon's output came into existence. Kelly thinks an error in the AI software models is the most likely explanation. In any case, the Krampus was quickly ruled out as a source: although the horned mythical figure from the Alpine region has a similar name and its mask looks almost exactly the same, the prompt "Krampus" on Craiyon produces different images.
Kelly also speculated that the AI was responding to the "-rungus" suffix, noting the similar appearance of heavy metal performer Oderus Urungus, and that it could be interpreting the word as something "orc-based", in reference to the fantasy creatures.
See also
Artificial intelligence art
Loab, another AI generated cryptid |
https://en.wikipedia.org/wiki/Jonathan%20Steuer | Jonathan Steuer (born December 3, 1965, in Wisconsin) is a pioneer in online publishing.
Steuer led the launch teams of a number of early and influential online publishing ventures, including Cyborganic, a pioneering online/offline community, HotWired, the first ad-supported web magazine, and c|net's online operations. Steuer's article "Defining virtual realities: Dimensions determining telepresence", is widely cited in academic and industry literature. Originally published in 1992 in the Journal of Communication 42, 73-9, it has been reprinted in Communication in the Age of Virtual Reality (1995), F. Biocca & M. R. Levy (Eds.).
Steuer's vividness and interactivity matrix from that article appeared in Wired circa 1995 and has been particularly influential in shaping the discourse by defining virtual reality in terms of human experience, rather than technological hardware, and setting out vividness and interactivity as axial dimensions of that experience. Steuer's notability in diverse arenas as a scholar, architect, and instigator of new media is documented in multiple, independent, non-trivial, published works.
Steuer has been a consultant and senior executive for a number of other online media startups: CNet, ZDTV, Sawyer Media Systems and Scient.
Steuer has an AB in philosophy from Harvard University, and a PhD in communication theory & research from Stanford University. There, his doctoral dissertation concerned Vividness and Source of Evaluation as Determinants of Social Responses Toward Mediated Representations of Agency.
Personal Life
He is married to Marjorie Ingall. A longtime resident of the Bay Area, Steuer today resides in New York City. |
https://en.wikipedia.org/wiki/Breviograph | A breviograph or brevigraph (from Latin brevis, short, and Greek grapho, to write) is a type of scribal abbreviation in the form of an easily written symbol, character, flourish or stroke, based on a modified letter form to take the place of a common letter combination, especially those occurring at the beginning or end of a word. Breviographs were used frequently by stenographers, law clerks and scriveners, and they were also found in early printed books and tracts. Their use declined after the 17th century.
Examples
Examples of breviographs:
& — et (e.g. &c = etc)
⋅i⋅ — id est
ꝑ — per-, pre-, or par- (e.g. ꝑson = person)
ß — ser-, sur-, or sir- (e.g. ßuaunt = seruaunt = servant)
X — Christ- (e.g. Xian = Christian)
See also
Acronym and initialism
Palaeography
Tironian notes
Classical abbreviations
Medieval abbreviations
Scribal abbreviations |
https://en.wikipedia.org/wiki/Single-cell%20DNA%20template%20strand%20sequencing | Single-cell DNA template strand sequencing, or Strand-seq, is a technique for the selective sequencing of a daughter cell's parental template strands.
This technique offers a wide variety of applications, including the identification of sister chromatid exchanges in the parental cell prior to segregation, the assessment of non-random segregation of sister chromatids, the identification of misoriented contigs in genome assemblies, de novo genome assembly of both haplotypes in diploid organisms including humans, whole-chromosome haplotyping, and the identification of germline and somatic genomic structural variation, the latter of which can be detected robustly even in single cells.
Background
Strand-seq (single-cell and single-strand sequencing) was one of the first single-cell sequencing protocols, described in 2012. This genomic technique selectively sequences the parental template strands in DNA libraries from single daughter cells. As a proof-of-concept study, the authors demonstrated the ability to acquire sequence information from the Watson and/or Crick chromosomal strands in an individual DNA library, depending on the mode of chromatid segregation; a typical DNA library will always contain DNA from both strands. The authors were specifically interested in showing the utility of Strand-seq in detecting sister chromatid exchanges (SCEs) at high resolution. They successfully identified eight putative SCEs in a murine (mouse) embryonic stem (mES) cell line with resolution up to 23 bp. This methodology has also been shown to hold great utility in discerning patterns of non-random chromatid segregation, especially in stem cell lineages. Furthermore, SCEs have been implicated as diagnostic indicators of genome stress, information that has utility in cancer biology. Most research on this topic involves observing the assortment of chromosomal template strands through many cell development cycles and correlating non-random assortment with particular cell fates. Single |
https://en.wikipedia.org/wiki/Chiral%20derivatizing%20agent | In analytical chemistry, a chiral derivatizing agent (CDA), also known as a chiral resolving reagent, is a derivatization reagent that is a chiral auxiliary used to convert a mixture of enantiomers into diastereomers in order to analyze the quantities of each enantiomer present and determine the optical purity of a sample. Analysis can be conducted by spectroscopy or by chromatography. Some analytical techniques such as HPLC and NMR, in their most common forms, cannot distinguish enantiomers within a sample, but can distinguish diastereomers. Therefore, converting a mixture of enantiomers to a corresponding mixture of diastereomers can allow analysis. The use of chiral derivatizing agents has declined with the popularization of chiral HPLC. Besides analysis, chiral derivatization is also used for chiral resolution, the actual physical separation of the enantiomers.
History
Since NMR spectroscopy became available to chemists, there have been numerous studies on the applications of this technique. One of these noted the difference in the chemical shift (i.e. the distance between the peaks) of two diastereomers. In contrast, two compounds that are enantiomers have the same NMR spectral properties. It was reasoned that if a mix of enantiomers could be converted into a mix of diastereomers by bonding them to another chemical that was itself chiral, it would be possible to distinguish this new mixture using NMR, and therefore learn about the original enantiomeric mixture. The first popular example of this technique was published in 1969 by Harry S. Mosher. The chiral agent used was a single enantiomer of MTPA (α-methoxy-α-(trifluoromethyl)phenylacetic acid), also known as Mosher's acid. The corresponding acid chloride is also known as Mosher's acid chloride, and the resultant diastereomeric esters are known as Mosher's esters. Another system is Pirkle's alcohol, developed in 1977.
Requirements
The general use and design of CDAs obey the following rules so that the CD |
https://en.wikipedia.org/wiki/Lysophosphatidic%20acid%20phosphatase%20type%206 | Lysophosphatidic acid phosphatase type 6 is an acid phosphatase enzyme that is encoded in humans by the ACP6 gene.
It acts as a phosphomonoesterase at low pH. It is responsible for the hydrolysis of lysophosphatidic acids (LPAs) to their respective monoacylglycerols, releasing a free phosphate group in the process. The enzyme has higher activity for myristate-LPA (14-carbon chain), oleate-LPA (18-carbon chain with one unsaturated carbon–carbon bond), laurate-LPA (12-carbon chain) and palmitate-LPA (16-carbon chain). When the substrate is stearate-LPA (18-carbon chain), the enzyme has reduced activity. Phosphatidic acids can also be hydrolyzed by lysophosphatidic acid phosphatase, but at a significantly lower rate: the addition of the second fatty chain makes fitting into the active site much harder.
LPAs are necessary for healthy cell growth and survival and act as pro-angiogenic factors, both in vivo and in vitro. Unbalanced concentrations of lysophosphatidic acid phosphatase can frequently lead to unbalanced LPA concentrations, which can cause metabolic disorders and lead to ovarian cancer in women.
Structure
Lysophosphatidic acid phosphatase is a monomer composed of two domains. One domain functions as a cap on the enzyme, while the second comprises the body of the enzyme. The enzyme has two alpha (α) helices on one side, seven beta (β) sheets in the middle, and two more α helices on the opposite side. The space between the two domains serves as a large substrate pocket, as well as a channel through which water molecules can move. This channel is lined with hydrophilic residues that lead the water molecule to the active site, where the terminal water molecule interacts with the Asp-335 residue and is then activated. This catalyzes the bond formation to the phosphate group. Lysophosphatidic acid phosphatase also has two disulfide bridges: one binds α12 and α4 together, and the other binds a turn at the edge of the β7 strand. Analysis of the po |
https://en.wikipedia.org/wiki/Friability | In materials science, friability, the condition of being friable, describes the tendency of a solid substance to break into smaller pieces under duress or contact, especially by rubbing. The opposite of friable is indurate.
Substances that are designated hazardous, such as asbestos or crystalline silica, are often said to be friable if small particles are easily dislodged and become airborne, and hence respirable (able to enter human lungs), thereby posing a health hazard.
Tougher substances, such as concrete, may also be mechanically ground down and reduced to finely divided mineral dust. However, such substances are not generally considered friable because of the degree of difficulty involved in breaking the substance's chemical bonds through mechanical means. Some substances, such as polyurethane foams, show an increase in friability with exposure to ultraviolet radiation, as in sunlight.
Friable is sometimes used metaphorically to describe "brittle" personalities who can be "rubbed" by seemingly-minor stimuli to produce extreme emotional responses.
General
A friable substance is any substance that can be reduced to fibers or finer particles by the action of a small amount of pressure or friction, such as rubbing or inadvertently brushing up against the substance. The term could also apply to any material that exhibits these properties, such as:
Ionically bound substances that are less than 1 kg/L in density
Clay tablets
Crackers
Mineral fibers
Polyurethane (foam)
Aerogel
Geological
Friable and indurated are terms used commonly in soft-rock geology, especially with sandstones, mudstones, and shales to describe how well the component rock fragments are held together.
Examples:
Clumps of dried clay
Chalk
Perlite
Medical
The term friable is also used to describe tumors in medicine. This is an important determination because tumors that are easily torn apart have a higher risk of malignancy and metastasis.
Examples:
Some forms of cancer, such |
https://en.wikipedia.org/wiki/Hard%20hexagon%20model | In statistical mechanics, the hard hexagon model is a 2-dimensional lattice model of a gas, where particles are allowed to be on the vertices of a triangular lattice but no two particles may be adjacent.
The model was solved by R. J. Baxter in 1980, who found that it was related to the Rogers–Ramanujan identities.
The partition function of the hard hexagon model
The hard hexagon model occurs within the framework of the grand canonical ensemble, where the total number of particles (the "hexagons") is allowed to vary naturally, and is fixed by a chemical potential. In the hard hexagon model, all valid states have zero energy, and so the only important thermodynamic control variable is the ratio of chemical potential to temperature μ/(kT). The exponential of this ratio, z = exp(μ/(kT)) is called the activity and larger values correspond roughly to denser configurations.
For a triangular lattice with N sites, the grand partition function is
$$\mathcal{Z}(z) = \sum_{n \ge 0} g(n, N)\, z^n$$
where g(n, N) is the number of ways of placing n particles on distinct lattice sites such that no 2 are adjacent. The function κ is defined by
$$\kappa(z) = \lim_{N \to \infty} \mathcal{Z}(z)^{1/N}$$
so that log(κ) is the free energy per unit site. Solving the hard hexagon model means (roughly) finding an exact expression for κ as a function of z.
The mean density ρ is given for small z by
$$\rho = z\,\frac{d}{dz}\log\kappa(z) = z - 7z^2 + 58z^3 - \cdots$$
The vertices of the lattice fall into 3 classes numbered 1, 2, and 3, given by the 3 different ways to fill space with hard hexagons. There are 3 local densities ρ1, ρ2, ρ3, corresponding to the 3 classes of sites. When the activity is large the system approximates one of these 3 packings, so the local densities differ, but when the activity is below a critical point the three local densities are the same. The critical point separating the low-activity homogeneous phase from the high-activity ordered phase is at $z_c = \varphi^5 = \tfrac{11 + 5\sqrt{5}}{2} \approx 11.09$, with golden ratio φ. Above the critical point the local densities differ, and in the phase where most hexagons are on sites of type 1, ρ1 can be expanded as
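The definitions above can be checked by brute force on a tiny patch. The sketch below enumerates hard-hexagon configurations on a 3 × 3 triangular lattice with periodic boundaries to tabulate g(n, N) and evaluate the grand partition function; the patch size is an arbitrary choice, and the exact solution of course concerns the N → ∞ limit:

```python
from itertools import combinations

L = 3                     # 3x3 patch of the triangular lattice, N = 9
sites = [(i, j) for i in range(L) for j in range(L)]

def neighbors(i, j):
    # Six triangular-lattice neighbours, with periodic wraparound.
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    return {((i + di) % L, (j + dj) % L) for di, dj in steps}

def is_valid(config):
    occ = set(config)     # no two occupied sites may be adjacent
    return all(occ.isdisjoint(neighbors(i, j)) for (i, j) in occ)

# g(n, N): number of valid placements of n particles on the N sites.
g = {n: sum(1 for c in combinations(sites, n) if is_valid(c))
     for n in range(len(sites) + 1)}

def grand_Z(z):
    return sum(gn * z**n for n, gn in g.items())

print(g)             # {0: 1, 1: 9, 2: 9, 3: 3, 4: 0, ...}
print(grand_Z(1.0))  # 22 valid configurations in total
```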
Solution
The solution is given for small values of z |
https://en.wikipedia.org/wiki/Index%20of%20physics%20articles%20%28G%29 | The index of physics articles is split into multiple pages due to its size.
To navigate by individual letter use the table of contents below.
G
G-factor (physics)
G-force
G-parity
G. B. Pegram
G. C. Danielson
G. M. B. Dobson
G. Michael Morris
G. Michael Purdy
G. N. Glasoe
G. N. Ramachandran
G. V. Skrotskii
G. W. Pierce
GALLEX
GEANT (program)
GEKKO XII
GENERIC formalism
GEOBASE (database)
GEO 600
GHP formalism
GHZ experiment
GIM mechanism
GLEEP
GLORIA sidescan sonar
GRAPES-3
GRENOUILLE
GROMACS
GROMOS
GRS 1915+105
GRTensorII
GSI Helmholtz Centre for Heavy Ion Research
GSI anomaly
GSO projection
GW approximation
GYRO
Gabriel Gabrielsen Holtsmark
Gabriel Lippmann
Gabriele Rabel
Gabriele Veneziano
Gabrio Piola
Gadolinium yttrium garnet
Gaetano Crocco
Gaetano Vignola
Gain-switching
Gaja Alaga
Gal (unit)
Galactic archaeology
Galactic cosmic ray
Galaxy cloud
Galaxy filament
Galaxy formation and evolution
Galaxy merger
Galaxy rotation curve
Galilean cannon
Galilean invariance
Galilean transformation
Galilei number
Galileo's Leaning Tower of Pisa experiment
Galileo Ferraris
Galileo Galilei
Galileo thermometer
Gallium arsenide phosphide
Gallium indium arsenide antimonide phosphide
Galvanoluminescence
Galvanometer
Gamma-ray astronomy
Gamma-ray burst
Gamma-ray burst emission mechanisms
Gamma-ray burst progenitors
Gamma camera
Gamma counter
Gamma matrices
Gamma ray
Gamma spectroscopy
Gamow-Teller Transition
Gamow factor
Ganapathy Baskaran
Gans theory
Gareth Roberts (physicist)
Gargamelle
Garrett Jernigan
Garshelis effect
Gary Bold
Gary C. Bjorklund
Gary Gibbons
Gary L. Bennett
Gary S. Grest
Gary Westfall
Gas
Gas-discharge lamp
Gas-filled tube
Gas-phase ion chemistry
Gas Electron Multiplier
Gas centrifuge
Gas compressor
Gas constant
Gas discharge
Gas dynamic laser
Gas dynamics
Gas focusing
Gas immersion laser doping
Gas in a box
Gas in a harmonic trap
Gas laser
Gas laws
Gas thermometer
Gaseous diffusion
Gaseous ionization detectors
Gaspar Schott
Gaspard-Gustave Coriolis
Gasparo |
https://en.wikipedia.org/wiki/Vakarel%20radio%20transmitter | The Vakarel Transmitter was a large broadcasting facility for long- and medium wave near Vakarel, Bulgaria. The Vakarel Transmitter was inaugurated in 1937. It had one directional antenna consisting of three guyed masts and another consisting of two masts.
The most remarkable mast of the Vakarel Transmitter was the Blaw-Knox tower, built in 1937 by the company Telefunken. Along with Lakihegy Tower, Hungary, Riga LVRTC Transmitter, Latvia and Lisnagarvey Radio Mast, Northern Ireland it was one of the few Blaw-Knox towers in Europe until its demolition on 16 September 2020.
The transmitter was shut down at 22:00 UTC on 31 December 2014.
Transmitter internal structure
The modulation method used by the transmitter in Vakarel is called tube voltage modulation and was used successfully in all powerful AM transmitters of that era. The Vakarel transmitter is supplied with electricity from a substation in Samokov via a medium-voltage transmission line. The transmitter uses six stages of amplification. The first stage contains a single radio tube, which generates alternating current at the carrier frequency of 850 kHz. The electrical oscillations of the anode circuit in the tube are coupled in series to the second and third stages. The signals in these three stages are only amplified, without any other changes.
In the special fourth modulation stage, the form of signals is modulated with speech or music. The audio recordings are sent to the transmitter with an underground communication cable from the main radio studio in Sofia. Due to the large distance of almost , the audio signal is amplified at both ends by separate blocks of amplifiers.
The fifth stage consists of six transmitting tubes, two of which are in reserve, and four others can be switched on, if necessary. All of them are water-cooled.
The final sixth stage consists of four high-power transmitting tubes amplifying the final output up to 100 kW. The energy is filtered by a high-power tuned circuit and sent |
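A minimal sketch of the amplitude modulation performed in the fourth stage, at Vakarel's 850 kHz carrier. The 1 kHz test tone, 50% modulation depth, and sample rate are assumptions for the example; no tube physics is modelled:

```python
import numpy as np

fs = 8_000_000                      # sample rate in Hz, well above 2x 850 kHz
t = np.arange(0, 0.002, 1 / fs)     # 2 ms of signal

audio = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # 1 kHz tone, 50% depth
carrier = np.cos(2 * np.pi * 850_000 * t)     # 850 kHz carrier
am = (1.0 + audio) * carrier                  # envelope carries the audio

print("peak amplitude:", round(float(am.max()), 2))  # ~1.5 at modulation peaks
```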
https://en.wikipedia.org/wiki/World%20Uranium%20Hearing | The World Uranium Hearing was held in Salzburg, Austria, in September 1992. Anti-nuclear speakers from all continents, including indigenous speakers and scientists, testified to the health and environmental problems of uranium mining and processing, nuclear power, nuclear weapons, nuclear tests, and radioactive waste disposal.
People who spoke at the 1992 Hearing include: Thomas Banyacya, Katsumi Furitsu, Manuel Pino and Floyd Red Crow Westerman. They said they were deeply dismayed by the atomic bombings of Hiroshima and Nagasaki and highlighted what they called the inherently destructive nature of all phases of the nuclear supply chain. They recalled the disastrous impact of nuclear weapons testing in places such as the Nevada Test Site, Bikini Atoll and Eniwetok, Tahiti, Maralinga, and Central Asia. They highlighted the threat of radioactive contamination to all peoples, especially indigenous communities and said that their survival requires self-determination and emphasis on spiritual and cultural values. Increased renewable energy commercialization was advocated.
The proceedings were published as a book, Poison Fire, Sacred Earth: Testimonies, Lectures, Conclusions. The outcome document, the Declaration of Salzburg, was accepted by the United Nations Working Group on Indigenous Populations.
See also
International Uranium Film Festival
Uranium in the environment
History of the anti-nuclear movement
The Navajo People and Uranium Mining
Uranium mining debate
List of Nuclear-Free Future Award recipients
Hibakusha |
https://en.wikipedia.org/wiki/Radical%20behaviorism | Radical behaviorism is a "philosophy of the science of behavior" developed by B. F. Skinner. It refers to the philosophy behind behavior analysis, and is to be distinguished from methodological behaviorism—which has an intense emphasis on observable behaviors—by its inclusion of thinking, feeling, and other private events in the analysis of human and animal psychology. The research in behavior analysis is called the experimental analysis of behavior and the application of the field is called applied behavior analysis (ABA), which was originally termed "behavior modification."
Radical behaviorism as natural science
Radical behaviorism inherits from behaviorism the position that the science of behavior is a natural science, a belief that animal behavior can be studied profitably and compared with human behavior, a strong emphasis on the environment as cause of behavior, and an emphasis on the operations involved in the modification of behavior. Radical behaviorism does not claim that organisms are tabula rasa whose behavior is unaffected by biological or genetic endowment. Rather, it asserts that experiential factors play a major role in determining the behavior of many complex organisms, and that the study of these matters is a major field of research in its own right.
Operant psychology
Skinner believed that classical conditioning did not account for the behavior that many people are interested in, such as riding a bike or writing a book. His observations led him to propose a theory about how these and similar behaviors, called "operants", come about.
Roughly speaking, in operant conditioning, an operant is actively emitted and produces changes in the world (i.e., produces consequences) that alter the likelihood that the behavior will occur again.
As represented in the table below, operant conditioning involves two basic actions (increasing or decreasing the probability that a specific behavior will occur in the future), which are accomplished by adding or remo |
https://en.wikipedia.org/wiki/Feature%20toggle | A feature toggle in software development provides an alternative to maintaining multiple feature branches in source code. A condition within the code enables or disables a feature during runtime. In agile settings, the toggle is used in production to enable a feature on demand, for some or all users. Feature toggles thus make it easier to release often, and advanced roll-out strategies such as canary releases and A/B testing are easier to handle.
Even if new releases are not deployed to production continuously, feature toggles support continuous delivery: the feature is integrated into the main branch even before it is completed, the version is deployed into a test environment, and the toggle allows the feature to be turned on and tested. Software integration cycles get shorter, and a version ready to go to production can be provided.
A third use of the technique is to allow developers to release a version of a product that has unfinished features. These unfinished features are hidden (toggled off) so that they do not appear in the user interface. Less effort is needed to merge features into and out of the production branch, which allows many small incremental versions of software.
A feature toggle is also called feature switch, feature flag, feature gate, feature flipper, or conditional feature.
Implementation
Feature toggles are essentially variables that are used inside conditional statements. Therefore, the blocks inside these conditional statements can be toggled 'on or off' depending on the value of the feature toggle. This allows developers to control the flow of their software and bypass features that are not ready for deployment. A block of code behind a runtime variable is usually still present and can be conditionally executed, sometimes within the same application lifecycle; a block of code behind a preprocessor directive or commented out would not be executable. A feature flag approach could use any of these methods to separate |
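A minimal Python sketch of the pattern: the flag is an ordinary runtime value, and a conditional selects the code path. The flag name, environment variable, and checkout functions are hypothetical:

```python
import os

# Toggle read once at startup; both code paths ship in the same build.
FLAGS = {"new_checkout": os.environ.get("NEW_CHECKOUT", "off") == "on"}

def checkout_v1(cart):
    return f"legacy checkout for {len(cart)} items"

def checkout_v2(cart):
    return f"new checkout for {len(cart)} items"

def checkout(cart):
    # The unfinished path stays dark until the flag is flipped.
    if FLAGS["new_checkout"]:
        return checkout_v2(cart)
    return checkout_v1(cart)

print(checkout(["book", "pen"]))
```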
https://en.wikipedia.org/wiki/Gyroelongated%20triangular%20cupola | In geometry, the gyroelongated triangular cupola is one of the Johnson solids (J22). It can be constructed by attaching a hexagonal antiprism to the base of a triangular cupola (J3). This is called "gyroelongation", which means that an antiprism is joined to the base of a solid, or between the bases of more than one solid.
The gyroelongated triangular cupola can also be seen as a gyroelongated triangular bicupola (J44) with one triangular cupola removed. Like all cupolae, the base polygon has twice as many sides as the top (in this case, the bottom polygon is a hexagon because the top is a triangle).
Formulae
If all faces are regular with edge length a, the volume and surface area have closed forms; the surface area is reconstructed below from the face inventory.
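A sketch of the surface-area reconstruction, assuming the standard face inventory of J22 (16 equilateral triangles, 3 squares, and 1 regular hexagon, which follows from gluing a hexagonal antiprism beneath the cupola):

```latex
A = 16 \cdot \frac{\sqrt{3}}{4} a^{2} + 3 a^{2} + \frac{3\sqrt{3}}{2} a^{2}
  = \left( 3 + \frac{11\sqrt{3}}{2} \right) a^{2} \approx 12.526\, a^{2}
```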
Dual polyhedron
The dual of the gyroelongated triangular cupola has 15 faces: 6 kites, 3 rhombi, and 6 pentagons. |
https://en.wikipedia.org/wiki/General-Purpose%20Serial%20Interface | General-Purpose Serial Interface, also known as GPSI, 7-wire interface, or 7WS, is a 7-wire communications interface. It is used as an interface between Ethernet MAC and PHY blocks.
Data is received and transmitted using separate data paths (TXD, RXD) and separate data clocks (TXCLK, RXCLK). Other signals consist of transmit enable (TXEN), receive carrier sense (CRS), and collision (COL).
See also
Media-independent interface (MII) |
https://en.wikipedia.org/wiki/Nuts%20%26%20Milk | Nuts & Milk is a puzzle-platform game developed and published by Japanese software developer Hudson Soft in 1983. The game was released for the FM-7, MSX, NEC PC-8801, and NEC PC-6001, and later for the Famicom in Japan. Along with Lode Runner, it was the first third-party video game to be released on a Nintendo console.
Gameplay
Both versions of Nuts & Milk involve the player moving through various levels while collecting an assortment of fruit scattered throughout each of them. By gathering all the fruit on a particular screen, the player will gain access to a previously unopened house door containing Milk's fiancée, Yogurt. When the player makes contact with the female blob, they are advanced to the next level to start the process anew. Movement through these levels is accomplished by using the directional pad or keyboard to move Milk across the stage while avoiding pitfalls and other obstacles, most notably the character's rival, Nuts. If contact is made at any time during game play with Nuts or other harmful objects such as miniature blimps, the player will lose a life and have to restart the current level, with all fruit reset back to their initial positions. Once all three of Milk's lives are lost in this fashion, the game ends.
In the Famicom version, Milk can jump a short distance vertically or horizontally, allowing him to traverse pits or quickly gain access to an adjacent platform. If the player falls from too great a distance, Milk will become momentarily dazed and unable to move until the player joggles him awake with the jump button. Rope bridges are suspended in mid-air on most levels, and by using the directional pad, the player can climb them up or down as well as walk across them once they reach the top. In all, 50 individual levels exist on the Famicom version, and each one can be skipped freely by pressing the select button. Once a player has cycled through all fifty of them, he will return to the first level and restart the sequence until all of Milk's lives |
https://en.wikipedia.org/wiki/Automated%20X-ray%20inspection | Automated X-ray inspection (AXI) is a technology based on the same principles as automated optical inspection (AOI). It uses X-rays as its source, instead of visible light, to automatically inspect features which are typically hidden from view.
Automated X-ray inspection is used in a wide range of industries and applications, predominantly with two major goals:
Process optimization, i.e. the results of the inspection are used to optimize subsequent processing steps,
Anomaly detection, i.e. the results of the inspection serve as a criterion to reject a part (for scrap or rework).
Whilst AOI is mainly associated with electronics manufacturing (due to widespread use in PCB manufacturing), AXI has a much wider range of applications. It ranges from the quality check of alloy wheels to the detection of bone fragments in processed meat. Wherever large numbers of very similar items are produced according to a defined standard, automatic inspection using advanced image processing and pattern recognition software (Computer vision) has become a useful tool to ensure quality and improve yield in processing and manufacturing.
Principle of Operation
While optical inspection produces full color images of the surface of the object, x-ray inspection transmits x-rays through the object and records gray scale images of the shadows cast. The image is then processed by image processing software that detects the position and size/shape of expected features (for process optimization) or the presence/absence of unexpected/unintended objects or features (for anomaly detection).
X-rays are generated by an x-ray tube, usually located directly above or below the object under inspection. A detector located on the opposite side of the object records an image of the x-rays transmitted through the object. The detector either first converts the x-rays into visible light, which is imaged by an optical camera, or detects them directly using an x-ray sensor array. The object under inspection may be imaged at highe |
https://en.wikipedia.org/wiki/Johann%20Radon | Johann Karl August Radon (; 16 December 1887 – 25 May 1956) was an Austrian mathematician. His doctoral dissertation was on the calculus of variations (in 1910, at the University of Vienna).
Life
Radon was born in Tetschen, Bohemia, Austria-Hungary, now Děčín, Czech Republic. He received his doctoral degree at the University of Vienna in 1910. He spent the winter semester 1910/11 at the University of Göttingen, then he was an assistant at the German Technical University in Brno, and from 1912 to 1919 at the Technical University of Vienna. In 1913/14, he passed his habilitation at the University of Vienna. Due to his near-sightedness, he was exempt from the draft during wartime.
In 1919, he was called to become Professor extraordinarius at the newly founded University of Hamburg; in 1922, he became Professor ordinarius at the University of Greifswald, and in 1925 at the University of Erlangen. Then he was Ordinarius at the University of Breslau from 1928 to 1945.
After a short stay at the University of Innsbruck he became Ordinarius at the Institute of Mathematics of the University of Vienna on 1 October 1946. In 1954/55, he was rector of the University of Vienna.
In 1939, Radon became corresponding member of the Austrian Academy of Sciences, and in 1947, he became a member. From 1952 to 1956, he was Secretary of the Class of Mathematics and Science of this Academy. From 1948 to 1950, he was president of the Austrian Mathematical Society.
Johann Radon married Maria Rigele, a secondary school teacher, in 1916. They had three sons who died young or very young. Their daughter Brigitte, born in 1924, obtained a Ph.D. in mathematics at the University of Innsbruck and married the Austrian mathematician Erich Bukovics in 1950. Brigitte lives in Vienna.
Radon, as Curt C. Christian described him in 1987 at the occasion of the unveiling of his brass bust at the University of Vienna, was a friendly, good-natured man, highly esteemed by students and colleagues alike, a n |
https://en.wikipedia.org/wiki/Business%20Process%20Model%20and%20Notation | Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
Originally developed by the Business Process Management Initiative (BPMI), BPMN has been maintained by the Object Management Group (OMG) since the two organizations merged in 2005. Version 2.0 of BPMN was released in January 2011, at which point the name was amended to Business Process Model and Notation to reflect the introduction of execution semantics alongside the existing notational and diagramming elements. Though it is an OMG specification, BPMN is also ratified as ISO 19510. The latest version is BPMN 2.0.2, published in January 2014.
Overview
Business Process Model and Notation (BPMN) is a standard for business process modeling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams from Unified Modeling Language (UML). The objective of BPMN is to support business process management, for both technical users and business users, by providing a notation that is intuitive to business users, yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation and the underlying constructs of execution languages, particularly Business Process Execution Language (BPEL).
BPMN has been designed to provide a standard notation readily understandable by all business stakeholders, typically including business analysts, technical developers and business managers. BPMN can therefore be used to support the generally desirable aim of all stakeholders on a project adopting a common language to describe processes, helping to avoid communication gaps that can arise between business process design and implementation.
BPMN is one of a number of business process modeling language standards used by modeling tools and processes. While the cu |
https://en.wikipedia.org/wiki/IBM%20Tivoli%20Server-free%20backup | IBM introduced Server-Free backup with IBM Tivoli Storage Manager 5.1 in 2002 for Windows 2000 servers only.
Server-Free backup functionality (with DATAMOVER TYPE=SCSI) was included in IBM Tivoli Storage Manager versions 5.1, 5.2, and 5.3, but not in 5.4 or later; DATAMOVER TYPE=NAS is supported in 5.4 and later.
SCSI-3 Extended Copy
Server-Free data movement uses the SCSI-3 EXTENDED COPY command and is carried out by a data mover device that must exist on the SAN; the data mover device is responsible for copying the data, either from a SAN-attached (client-owned) disk to a SAN-attached (server-owned) tape drive, or vice versa.
From the T10 working group, "The EXTENDED COPY command provides a means to copy data from one set of logical units to another set of logical units or to the same set of logical units. The entity within a SCSI device that receives and performs the EXTENDED COPY command is called the copy manager. The copy manager is responsible for copying data from the source devices to the destination devices. The copy source and destination devices are logical units that may reside in different SCSI devices or the same SCSI device."
Notes
External links
IBM Almaden Research Center
IBM - Server-Free Data Movement Information
IBM Tivoli Storage Manager Administrator's Guide, Chapter 7. Setting Up Server-Free Data Movement
IBM Tivoli Storage Manager for Windows, Version 5.3, SAN-based server-free data movement
IBM - Tivoli Storage Manager Administration Center: support for new and existing function
Introduction to T10
T10/1731-D Information technology ISO/IEC 14776-314 SCSI Primary Commands - 4 (SPC-4)
Tivoli Storage Manager Version 5.1 Technical Guide
Storage software
Tivoli Server-free backup |
https://en.wikipedia.org/wiki/Gaussian%20blur | In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function (named after mathematician and scientist Carl Friedrich Gauss).
It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination.
Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales—see scale space representation and scale space implementation.
Mathematics
Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform. By contrast, convolving by a circle (i.e., a circular box blur) would more accurately reproduce the bokeh effect.
Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian blur has the effect of reducing the image's high-frequency components; a Gaussian blur is thus a low-pass filter.
The Gaussian blur is a type of image-blurring filter that uses a Gaussian function (which also expresses the normal distribution in statistics) for calculating the transformation to apply to each pixel in the image. The formula of a Gaussian function in one dimension is

G(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-x^2 / (2\sigma^2)}

In two dimensions, it is the product of two such Gaussian functions, one in each dimension:

G(x, y) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2) / (2\sigma^2)}

where x is the distance from the origin in the horizontal axis, y is the distance from the origin in the vertical axis, and σ is the standard deviation of the Gaussian distribution. It is important to note that the origin of these axes is at the center (0, 0). When applied in two dimensions, this formula produces a surface whose contours are concentric circles w |
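A sketch of these formulas in use, assuming NumPy; the kernel is truncated at three standard deviations, a common practical choice rather than part of the definition:

    import numpy as np

    def gaussian_kernel_1d(sigma, radius):
        # Sampled, normalized 1-D Gaussian G(x) on x = -radius..radius.
        x = np.arange(-radius, radius + 1, dtype=float)
        k = np.exp(-x**2 / (2.0 * sigma**2))
        return k / k.sum()          # normalize so brightness is preserved

    def gaussian_blur(image, sigma):
        radius = int(3 * sigma)     # kernel is effectively zero beyond ~3 sigma
        k = gaussian_kernel_1d(sigma, radius)
        # The 2-D Gaussian is a product of two 1-D Gaussians, so the blur
        # is separable: convolve every row, then every column.
        rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

    img = np.random.rand(64, 64)
    out = gaussian_blur(img, sigma=2.0)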
https://en.wikipedia.org/wiki/Image%20registration | Image registration is the process of transforming different sets of data into one coordinate system. Data may be multiple photographs, data from different sensors, times, depths, or viewpoints. It is used in computer vision, medical imaging, military automatic target recognition, and compiling and analyzing images and data from satellites. Registration is necessary in order to be able to compare or integrate the data obtained from these different measurements.
Algorithm classification
Intensity-based vs feature-based
Image registration or image alignment algorithms can be classified into intensity-based and feature-based. One of the images is referred to as the moving or source and the others are referred to as the target, fixed or sensed images. Image registration involves spatially transforming the source/moving image(s) to align with the target image. The reference frame in the target image is stationary, while the other datasets are transformed to match to the target. Intensity-based methods compare intensity patterns in images via correlation metrics, while feature-based methods find correspondence between image features such as points, lines, and contours. Intensity-based methods register entire images or sub-images. If sub-images are registered, centers of corresponding sub images are treated as corresponding feature points. Feature-based methods establish a correspondence between a number of especially distinct points in images. Knowing the correspondence between a number of points in images, a geometrical transformation is then determined to map the target image to the reference images, thereby establishing point-by-point correspondence between the reference and target images. Methods combining intensity-based and feature-based information have also been developed.
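One classical intensity-based method for the special case of a pure translation is phase correlation; a sketch assuming NumPy only (function and variable names are illustrative):

    import numpy as np

    def phase_correlation(fixed, moving):
        # The cross-power spectrum keeps only phase, whose inverse FFT
        # peaks at the translation between the two images.
        F = np.fft.fft2(fixed)
        M = np.fft.fft2(moving)
        cross = F * np.conj(M)
        cross /= np.abs(cross) + 1e-12
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint correspond to negative shifts (wrap-around).
        if dy > fixed.shape[0] // 2: dy -= fixed.shape[0]
        if dx > fixed.shape[1] // 2: dx -= fixed.shape[1]
        return int(dy), int(dx)   # shift that re-aligns moving onto fixed

    fixed = np.zeros((64, 64)); fixed[20:30, 20:30] = 1.0
    moving = np.roll(np.roll(fixed, -5, axis=0), 3, axis=1)
    print(phase_correlation(fixed, moving))   # (5, -3)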
Transformation models
Image registration algorithms can also be classified according to the transformation models they use to relate the target image space to the reference image space. Th |
https://en.wikipedia.org/wiki/List%20of%20proposed%20quantum%20registers | A practical quantum computer must use a physical system as a programmable quantum register. Researchers are exploring several technologies as candidates for reliable qubit implementations.
Superconducting quantum computing (qubit implemented by the state of nonlinear resonant superconducting circuits containing Josephson junctions)
Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)
Neutral atoms in optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)
Quantum dot computer, spin-based (e.g. the Loss-DiVincenzo quantum computer) (qubit given by the spin states of trapped electrons)
Quantum dot computer, spatial-based (qubit given by electron position in double quantum dot)
Quantum computing using engineered quantum wells, which could in principle enable the construction of a quantum computer that operates at room temperature
Coupled quantum wire (qubit implemented by a pair of quantum wires coupled by a quantum point contact)
Nuclear magnetic resonance quantum computer (NMRQC) implemented with the nuclear magnetic resonance of molecules in solution, where qubits are provided by nuclear spins within the dissolved molecule and probed with radio waves
Solid-state NMR Kane quantum computer (qubit realized by the nuclear spin state of phosphorus donors in silicon)
Vibrational quantum computer (qubits realized by vibrational superpositions in cold molecules)
Electrons-on-helium quantum computer (qubit is the electron spin)
Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of trapped atoms coupled to high-finesse cavities)
Molecular magnet (qubit given by spin states)
Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerenes)
Nonlinear optical quantum computer (qubits realized by processing states of different modes of light through both linear and nonlinear elements)
Linear optical quantum computer (LOQC) |
https://en.wikipedia.org/wiki/List%20of%20stochastic%20processes%20topics | In the mathematics of probability, a stochastic process is a random function. In practical applications, the domain over which the function is defined is a time interval (time series) or a region of space (random field).
Familiar examples of time series include stock market and exchange rate fluctuations, signals such as speech, audio and video; medical data such as a patient's EKG, EEG, blood pressure or temperature; and random movement such as Brownian motion or random walks.
Examples of random fields include static images, random topographies (landscapes), or composition variations of an inhomogeneous material.
Stochastic processes topics
This list is currently incomplete. See also Category:Stochastic processes.
Basic affine jump diffusion
Bernoulli process: discrete-time processes with two possible states.
Bernoulli schemes: discrete-time processes with N possible states; every stationary process in N outcomes is a Bernoulli scheme, and vice versa.
Bessel process
Birth–death process
Branching process
Branching random walk
Brownian bridge
Brownian motion
Chinese restaurant process
CIR process
Continuous stochastic process
Cox process
Dirichlet processes
Finite-dimensional distribution
First passage time
Galton–Watson process
Gamma process
Gaussian process – a process where all linear combinations of coordinates are normally distributed random variables.
Gauss–Markov process (cf. below)
GenI process
Girsanov's theorem
Hawkes process
Homogeneous processes: processes where the domain has some symmetry and the finite-dimensional probability distributions also have that symmetry. Special cases include stationary processes, also called time-homogeneous.
Karhunen–Loève theorem
Lévy process
Local time (mathematics)
Loop-erased random walk
Markov processes are those in which the future is conditionally independent of the past given the present.
Markov chain
Markov chain central limit theorem
Conti |
https://en.wikipedia.org/wiki/Rudolf%20Clausius | Rudolf Julius Emanuel Clausius (; 2 January 1822 – 24 August 1888) was a German physicist and mathematician and is considered one of the central founding fathers of the science of thermodynamics. By his restatement of Sadi Carnot's principle known as the Carnot cycle, he gave the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the basic ideas of the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which applied to heat.
Life
Clausius was born in Köslin (now Koszalin, Poland) in the Province of Pomerania in Prussia. His father was a Protestant pastor and school inspector, and Rudolf studied in the school of his father. In 1838, he went to the Gymnasium in Stettin. Clausius graduated from the University of Berlin in 1844 where he had studied mathematics and physics since 1840 with, among others, Gustav Magnus, Peter Gustav Lejeune Dirichlet and Jakob Steiner. He also studied history with Leopold von Ranke. During 1848, he got his doctorate from the University of Halle on optical effects in Earth's atmosphere. In 1850 he became professor of physics at the Royal Artillery and Engineering School in Berlin and Privatdozent at the Berlin University. In 1855 he became professor at the ETH Zürich, the Swiss Federal Institute of Technology in Zürich, where he stayed until 1867. During that year, he moved to Würzburg and two years later, in 1869 to Bonn.
In 1870 Clausius organized an ambulance corps in the Franco-Prussian War. He was wounded in battle, leaving him with a lasting disability. He was awarded the Iron Cross for his services.
His wife, Adelheid Rimpau, died in 1875, leaving him to raise their six children. In 1886, he married Sophie Sack, and then had another child. Two years later, on 24 August 1888, he died in Bonn, Germany.
Work
Clausius's PhD thesis concerning the refraction of light proposed that we see a bl |
https://en.wikipedia.org/wiki/Naval%20Radiological%20Defense%20Laboratory | The United States Naval Radiological Defense Laboratory (NRDL) was an early military lab created to study the effects of radiation and nuclear weapons. The facility was based at the Hunter's Point Naval Shipyard in San Francisco, California.
History
The NRDL was formed in 1946 to manage testing, decontamination, and disposition of US Navy ships contaminated by the Operation Crossroads nuclear tests in the Pacific. A number of ships that survived the atomic detonations were towed to Hunter's Point for detailed study and decontamination. Some of the ships were cleaned and sold for scrap. The aircraft carrier USS Independence, which had been heavily damaged and contaminated with nuclear fallout by the Operation Crossroads explosions in July 1946, was brought to the NRDL for study. After years of trying in vain to decontaminate the ship enough that it could be safely sold for scrap, the Navy ultimately packed the ship full of nuclear waste and scuttled the radioactive hulk off California near the Farallon Islands in January 1951. The ship's wreck was discovered resting upright under 790 m of water in 2009.
The NRDL used several buildings at the Hunter's Point shipyard from 1946 to 1969. Working with the newly formed US Atomic Energy Commission (predecessor to the U.S. Nuclear Regulatory Commission established in 1974), the Navy conducted a wide variety of radiation experiments on materials and animals at the lab, including the construction of a cyclotron on the site for use in radiation experiments and storage for various nuclear materials.
Activities
An article published 2 May 2001 in SF Weekly detailed various aspects of nuclear testing at NRDL from declassified records:
Contamination
The first use of radioactive materials at NRDL predated the issuing of licenses by the Atomic Energy Commission, but the AEC later issued licenses for a broad spectrum of radioactive materials to be used in research at the NRDL. Radioactive materials specific to nuclear weapon testing were exempted from |
https://en.wikipedia.org/wiki/Free-flow%20electrophoresis | Free-flow electrophoresis (FFE), also known as carrier-free electrophoresis, is a matrix-free electrophoretic separation technique. FFE is analogous to capillary electrophoresis, with comparable resolution, and can be used for scientific questions where semi-preparative and preparative amounts of sample are needed. It is used to quantitatively separate samples according to differences in charge or isoelectric point. Because of the versatility of the technique, a wide range of protocols exists for the separation of samples such as rare metal ions, protein isoforms, multiprotein complexes, peptides, organelles, cells, DNA origami, blood serum and nanoparticles. The advantage of FFE is the fast and gentle separation of samples dissolved in a liquid solvent without any need for a matrix, such as polyacrylamide in gel electrophoresis. This ensures a very high recovery rate since analytes do not adhere to any carrier or matrix structure. Because of its continuous nature and high volume throughput, this technique allows a fast separation of preparative amounts of samples with a very high resolution. Furthermore, the separations can be conducted under native or denaturing conditions.
History
FFE was developed in the 1960s by Kurt Hannig at the Max-Planck-Institute in Germany. Until the 1980s, it was a standardized technology for the separation of cells and organelles, and FFE was even tested in space to minimize the sedimentation under zero gravity. As flow cytometry became the standard method for cell sorting, FFE developments focused on the separation of proteins and charged particles. Some groups are also working on miniaturized versions of FFE systems or micro FFEs.
Technique
The separation chamber consists of a backplate and a front plate. The backplate usually consists of a cooled aluminum block covered with a plastic-covered glass mirror. The front plate is nowadays made of PMMA; in earlier times glass was used. The distance between the front- |
https://en.wikipedia.org/wiki/Superior%20thoracic%20aperture | The superior thoracic aperture, also known as the thoracic outlet or thoracic inlet, refers to the opening at the top of the thoracic cavity. It is also clinically referred to as the thoracic outlet, in the case of thoracic outlet syndrome. A lower thoracic opening is the inferior thoracic aperture.
Structure
The superior thoracic aperture is essentially a hole surrounded by a bony ring, through which several vital structures pass. It is bounded by: the first thoracic vertebra (T1) posteriorly; the first pair of ribs laterally, forming lateral C-shaped curves posterior to anterior; and the costal cartilage of the first rib and the superior border of the manubrium anteriorly.
Dimensions
The adult thoracic outlet is around 6.5 cm antero-posteriorly and 11 cm transversely. Because of the obliquity of the first pair of ribs, the aperture slopes antero-inferiorly.
Relations
The clavicle articulates with the manubrium to form the anterior border of the thoracic outlet. Above the superior thoracic outlet is the root of the neck, and the superior mediastinum is inferiorly related. The brachial plexus is a superolateral relation of the thoracic outlet. The brachial plexus emerges between the anterior and middle scalene muscles, superior to the first rib, and passes obliquely and inferiorly, underneath the clavicle, into the shoulder and then the arm. Impingement of the plexus in the region of the scalenes, ribs, and clavicles is responsible for thoracic outlet syndrome.
Function
Structures that pass through the thoracic inlet include:
trachea
oesophagus
thoracic duct
apices of the lungs
nerves
phrenic nerve
vagus nerve
recurrent laryngeal nerves
sympathetic trunks
vessels
arteries
left and right common carotid arteries
left subclavian arteries
veins
internal jugular veins
brachiocephalic veins
subclavian veins
lymph nodes and lymphatic vessels
This is not an exhaustive list. There are several other minor, but important, vessels and nerves passing |
https://en.wikipedia.org/wiki/Hopf%20lemma | In mathematics, the Hopf lemma, named after Eberhard Hopf, states that if a continuous real-valued function in a domain in Euclidean space with sufficiently smooth boundary is harmonic in the interior and the value of the function at a point on the boundary is greater than the values at nearby points inside the domain, then the derivative of the function in the direction of the outward pointing normal is strictly positive. The lemma is an important tool in the proof of the maximum principle and in the theory of partial differential equations. The Hopf lemma has been generalized to describe the behavior of the solution to an elliptic problem as it approaches a point on the boundary where its maximum is attained.
In the special case of the Laplacian, the Hopf lemma had been discovered by Stanisław Zaremba in 1910. In the more general setting for elliptic equations, it was found independently by Hopf and Olga Oleinik in 1952, although Oleinik's work is not as widely known as Hopf's in Western countries. There are also extensions which allow domains with corners.
Statement for harmonic functions
Let Ω be a bounded domain in Rn with smooth boundary. Let f be a real-valued function continuous on the closure of Ω and harmonic on Ω. If x is a boundary point such that f(x) > f(y) for all y in Ω sufficiently close to x, then the (one-sided) directional derivative of f in the direction of the outward pointing normal to the boundary at x is strictly positive.
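In symbols, with ν the outward unit normal at x (a reconstruction of the standard formulation, since the source's own formulas were stripped):

\Delta f = 0 \ \text{in}\ \Omega, \qquad f(y) < f(x)\ \text{for all}\ y \in \Omega\ \text{near}\ x \quad \Longrightarrow \quad \frac{\partial f}{\partial \nu}(x) = \lim_{t \to 0^{+}} \frac{f(x) - f(x - t\nu)}{t} > 0.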
Proof for harmonic functions
Subtracting a constant, it can be assumed that f(x) = 0 and f is strictly negative at interior points near x. Since the boundary of Ω is smooth there is a small ball contained in Ω the closure of which is tangent to the boundary at x and intersects the boundary only at x. It is then sufficient to check the result with Ω replaced by this ball. Scaling and translating, it is enough to check the result for the unit ball in Rn, assuming f(x) is zero for some unit vector x and f(y) < 0 if |y| < |
https://en.wikipedia.org/wiki/Cantharellus%20tabernensis | Cantharellus tabernensis is a species of fungus in the family Cantharellaceae that was described as new to science in 1996. It is found in the southern United States, where it grows in mixed pine and hardwood forests, close to mature Pinus elliottii trees. Fruit bodies have a yellowish-brown cap with a slightly darker brown center, and bright orange gills and stipe. The specific epithet tabernensis refers to the meeting house at the Stennis Space Center Recreation area, near the type locality. |
https://en.wikipedia.org/wiki/E.%20T.%20S.%20Appleyard | Edgar Thomas Snowden Appleyard (14 June 1904 – 15 June 1939) was a physicist and pioneer in the fields of thin films and superconductivity.
Biography
He was born on 14 June 1904, the son of Edgar Snowden Appleyard and Elizabeth Whitehead of Huddersfield, England.
Appleyard attended Almondbury Grammar School and was then admitted to Cambridge as a King's College scholar. In the Natural Sciences Tripos he selected physics as one of the key science subjects on which to focus his interest. He spent several years on research in the Cavendish Laboratory. In 1929, Appleyard was appointed to a George Wills research associateship at the University of Bristol's H. H. Wills Physics Laboratory. He was awarded a Rockefeller fellowship for the 1931–1932 academic year, which he spent at the University of Chicago.
Appleyard died on 15 June 1939 from injuries caused by a fall.
Noteworthy collaborators
H. W. B. Skinner
John J. Hopfield
A.C.B. Lovell
A. D. Misener
Heinz London
Research interests
Excitation of polarized light
Preparation of Schumann plates
Thin metal films: Conductivity, Resistance
Superconductivity
Select publications
Appleyard, E. T. S. "Electronic Structure of the a-X Band System of N2." Physical Review 41.2 (1932): 254.
Appleyard, E. T. S. "Discussion of the papers by Finch, Appleyard and Lennard-Jones." Proceedings of the Physical Society 49.4S (1937): 151. |
https://en.wikipedia.org/wiki/Spectrophotometry | Spectrophotometry is a branch of electromagnetic spectroscopy concerned with the quantitative measurement of the reflection or transmission properties of a material as a function of wavelength. Spectrophotometry uses photometers, known as spectrophotometers, that can measure the intensity of a light beam at different wavelengths. Although spectrophotometry is most commonly applied to ultraviolet, visible, and infrared radiation, modern spectrophotometers can interrogate wide swaths of the electromagnetic spectrum, including x-ray, ultraviolet, visible, infrared, and/or microwave wavelengths.
Overview
Spectrophotometry is a tool that hinges on the quantitative analysis of molecules depending on how much light is absorbed by colored compounds. Important features of spectrophotometers are spectral bandwidth (the range of colors it can transmit through the test sample), the percentage of sample-transmission, the logarithmic range of sample-absorption, and sometimes a percentage of reflectance measurement.
A spectrophotometer is commonly used for the measurement of transmittance or reflectance of solutions, transparent or opaque solids such as polished glass, or gases. Although many biochemicals are colored (that is, they absorb visible light and can therefore be measured by colorimetric procedures), even colorless biochemicals can often be converted via chromogenic color-forming reactions to colored compounds suitable for colorimetric analysis. However, spectrophotometers can also be designed to measure the diffusivity of light over any of the listed ranges, which usually cover around 200–2500 nm, using different controls and calibrations. Within these ranges of light, calibrations are needed on the machine using standards that vary in type depending on the wavelength of the photometric determination.
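For instance, absorbance follows from measured transmittance as A = -log10(T), and the Beer–Lambert law A = εlc relates absorbance to concentration. A small sketch (the molar absorptivity value is illustrative only):

    import math

    def absorbance(transmittance: float) -> float:
        """A = -log10(T), with T given as a fraction (0 < T <= 1)."""
        return -math.log10(transmittance)

    def concentration(A: float, epsilon: float, path_cm: float = 1.0) -> float:
        """Solve A = epsilon * l * c for c (mol/L); epsilon in L/(mol*cm)."""
        return A / (epsilon * path_cm)

    A = absorbance(0.25)                   # 25% transmitted -> A ~ 0.602
    c = concentration(A, epsilon=6220.0)   # illustrative molar absorptivity
    print(round(A, 3), c)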
An example of an experiment in which spectrophotometry is used is the determination of the equilibrium constant of a solution. A certain chemical reaction within a |
https://en.wikipedia.org/wiki/Robert%20MacMillan | Robert "Judy" Gordon MacMillan (3 April 1865 – 3 April 1936) was a Scottish international rugby union player.
Rugby Union career
Amateur career
MacMillan played club rugby for Edinburgh University, West of Scotland and London Scottish.
Provincial career
MacMillan was capped by Glasgow District to play in the inter-city on 3 December 1887.
He was selected for Middlesex to play against Yorkshire in the 1893 English County Championship. Five Scots were selected for Middlesex: Gregor MacGregor, George Campbell, William Wotherspoon, MacMillan and Frederick Goodhue, all with London Scottish who played in the county. He played in that match, but Yorkshire won and then secured the championship.
On 22 December 1894 he played for the Provinces District against the Cities District side.
International career
MacMillan played international rugby for Scotland over 11 seasons, and in 1891 he represented the British Isles team on their tour of South Africa.
Administrative career
MacMillan was made vice-president of the Scottish Rugby Union in 1899, when he was still with London Scottish.
He became the 27th President of the Scottish Rugby Union. He served the 1900–1901 term in office.
Family
MacMillan was born in 1865, the eldest son of John Gordon MacMillan and Margaret Holmes.
Outside of rugby
MacMillan was an insurance underwriter for Lloyds. He was an underwriter from 1890 to 1923. In 1924 he became a non-underwriting member.
He played cricket while at Merchiston Castle School. He also liked rowing and golf.
Death
MacMillan owned Somerford House - an old vicarage, which he bought in 1922, rebuilt and added stables - in Somerford Keynes near Cirencester. He also had a house in Chelsea and other lands. He was killed when out on a fox hunt in 1936 with the Vale of White Horse Hounds in Cirencester Park. These were owned by Earl Bathurst; MacMillan had hunted with Bathurst for years. He was killed on his 71st birthday - the hunt was organised to celebrate his birthday - when his horse refused to |
https://en.wikipedia.org/wiki/P-i%20mechanism | The p-i concept refers to the pharmacological interaction of drugs with immune receptors. It explains a form of drug hypersensitivity, namely T cell stimulation, which can lead to various acute inflammatory manifestations such as exanthems, eosinophilia and systemic symptoms, Stevens–Johnson syndrome, toxic epidermal necrolysis, and complications upon withdrawing the drug.
Principle
The p-i concept links pharmacology with immunology: It implies that drugs bind directly, as an off-target activity to immune receptors which results in various forms of T cell stimulations. P-i thus starts with an off-target pharmacological activity of the drug followed by a cascade of immunological events which always starts with T cell activation, even if the drug did not bind to the T cell itself but to an antigen presenting cell (APC).
The drug binding occurs by non-covalent bonds (e.g. hydrogen bonds, electrostatic interactions, van der Waals forces) to some of the highly polymorphic T cell receptors for antigen (TCR) and/or human leukocyte antigens (HLA). The binding occurs mostly on the cell surface and is labile, reversible and transient. It interacts with the crucial molecules of antigen-dependent T cell activation, and may alter the self-HLA to make it look like an allo-HLA allele, to which T cells strongly react. Some drug binding to the TCR itself may – together with HLA-peptide interaction – elicit TCR-CDR signalling or alter the TCR conformation, thereby enhancing its interaction with HLA-peptide (allogeneic effect). Certain drugs may not only interact with the immune receptors on the surface but also inside the cell (in the endoplasmic reticulum, e.g. abacavir with HLA-B*57:01). This may cause a change in the presented peptides (altered peptide model).
The polymorphism of the immune receptors explains to a large extent the notoriously unpredictable “idiosyncrasy” of drug hypersensitivity reactions (DHR), as some of the individually distinct protein sequences may bind the drug bette |
https://en.wikipedia.org/wiki/Characteristic%20admittance | Characteristic admittance is the mathematical inverse of the characteristic impedance.
The general expression for the characteristic admittance of a transmission line is:

Y_0 = \sqrt{\frac{G + j\omega C}{R + j\omega L}}

where
R is the resistance per unit length,
L is the inductance per unit length,
G is the conductance of the dielectric per unit length,
C is the capacitance per unit length,
j is the imaginary unit, and
\omega is the angular frequency.

The current and voltage phasors on the line are related by the characteristic admittance as:

I^{+} = Y_0 V^{+} \qquad I^{-} = -Y_0 V^{-}

where the superscripts + and − represent forward- and backward-traveling waves, respectively.
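A numerical sketch of the expression above in Python; the per-unit-length constants are illustrative values, not taken from the source:

    import cmath, math

    def characteristic_admittance(R, L, G, C, f):
        """Y0 = sqrt((G + jwC) / (R + jwL)) for a line at frequency f."""
        w = 2 * math.pi * f
        return cmath.sqrt((G + 1j * w * C) / (R + 1j * w * L))

    # Example per-metre constants for a generic line at 100 MHz.
    Y0 = characteristic_admittance(R=0.1, L=250e-9, G=1e-6, C=100e-12, f=100e6)
    print(Y0, 1 / Y0)   # admittance and the corresponding impedance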
See also
Characteristic impedance |
https://en.wikipedia.org/wiki/Patatin-like%20phospholipase | The family of patatin-like phospholipases consists of various patatin glycoproteins from the total soluble protein of potato tubers, as well as some proteins found in vertebrates. Patatin is a storage protein, but it also has the enzymatic activity of a phospholipase, catalysing the cleavage of fatty acids from membrane lipids.
Subfamilies
Protein of unknown function UPF0028
Human proteins containing this domain
PNPLA1; PNPLA2; PNPLA3; PNPLA4; PNPLA5; PNPLA6; PNPLA7; PNPLA8; |
https://en.wikipedia.org/wiki/Heat%20of%20dilution | In thermochemistry, the heat of dilution, or enthalpy of dilution, refers to the enthalpy change associated with the dilution process of a component in a solution at a constant pressure. If the initial state of the component is a pure liquid (presuming the solution is liquid), the dilution process is equal to its dissolution process and the heat of dilution is the same as the heat of solution. Generally, the heat of dilution is normalized by the amount of the solution and its dimensional units are energy per unit mass or amount of substance, commonly expressed in the unit of kJ/mol (or J/mol).
Definition
The heat of dilution can be defined from two perspectives: the differential heat and the integral heat.
The differential heat of dilution is viewed on a micro scale, which is associated with the process in which a small amount of solvent is added to a large quantity of solution. The molar differential heat of dilution is thus defined as the enthalpy change caused by adding a mole of solvent at a constant temperature and pressure to a very large amount of solution. Because of the small amount of addition, the concentration of dilute solution remains practically unchanged. Mathematically, the molar differential heat of dilution is denoted as:
where ∂∆ni is the infinitesimal change or differential of mole number of the dilution.
The integral heat of dilution, however, is viewed on a macro scale. With respect to the integral heat, consider a process in which a certain amount of solution diluted from an initial concentration to a final concentration. The enthalpy change in this process, normalized by the mole number of solute, is evaluated as the molar integral heat of dilution. Mathematically, the molar integral heat of dilution is denoted as:
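The two formulas stripped from this passage can be reconstructed, in one common notation (an assumption about the original typesetting, not a quotation of it), as

\Delta_{\text{dil}} H_m = \left( \frac{\partial (\Delta H)}{\partial (\Delta n_i)} \right)_{T,\,P} \qquad \text{and} \qquad \Delta_{\text{dil}} H_m^{\,1 \to 2} = \frac{\Delta H_{1 \to 2}}{n_{\text{solute}}}

for the molar differential and molar integral heats of dilution, respectively.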
If an infinite amount of solvent is added to a solution with a known concentration of solute, the corresponding change of enthalpy is called the integral heat of dilution to infinite dilution.
The dilution between two conce |
https://en.wikipedia.org/wiki/BSD%20Authentication | BSD Authentication, otherwise known as BSD Auth, is an authentication framework and software API employed by OpenBSD and accompanying software such as OpenSSH. It originated with BSD/OS, and although the specification and implementation were donated to the FreeBSD project by BSDi, OpenBSD chose to adopt the framework in release 2.9. Pluggable Authentication Modules (PAM) serves a similar purpose on other operating systems such as Linux, FreeBSD and NetBSD.
BSD Auth performs authentication by executing scripts or programs as separate processes from the one requiring the authentication. This prevents the child authentication process from interfering with the parent except through a narrowly defined inter-process communication API, a technique inspired by the principle of least privilege and known as privilege separation. This behaviour has significant security benefits, notably improved fail-safeness of software, and robustness against malicious and accidental software bugs.
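BSD Auth's own C API is not shown here; the following Python sketch only illustrates the privilege-separation pattern the paragraph describes, in which the parent learns nothing from the checker process except an exit status:

    import subprocess, sys, textwrap

    # Source of the separate checker process (illustrative check only; a
    # real checker would consult a credential store with its own privileges).
    CHECKER = textwrap.dedent('''
        import sys
        user, password = sys.stdin.readline().split(":", 1)
        sys.exit(0 if user == "demo" and password.rstrip("\\n") == "secret" else 1)
    ''')

    def authenticate(user: str, password: str) -> bool:
        # Run the checker as a separate process; the only channel back to
        # the parent is the exit status, a deliberately narrow IPC interface.
        proc = subprocess.run([sys.executable, "-c", CHECKER],
                              input=f"{user}:{password}\n", text=True)
        return proc.returncode == 0

    print(authenticate("demo", "secret"))   # True
    print(authenticate("demo", "wrong"))    # False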
See also
Name Service Switch |
https://en.wikipedia.org/wiki/Mathematical%20notation | Mathematical notation consists of using symbols for representing operations, unspecified numbers, relations, and any other mathematical objects and assembling them into expressions and formulas. Mathematical notation is widely used in mathematics, science, and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way.
For example, Albert Einstein's equation E = mc² is the quantitative representation in mathematical notation of the mass–energy equivalence.
Mathematical notation was first introduced by François Viète at the end of the 16th century and largely expanded during the 17th and 18th centuries by René Descartes, Isaac Newton, Gottfried Wilhelm Leibniz, and overall Leonhard Euler.
Symbols
The use of many symbols is the basis of mathematical notation. They play a similar role to words in natural languages. They may play different roles in mathematical notation, similarly to how verbs, adjectives and nouns play different roles in a sentence.
Letters as symbols
Letters are typically used for naming (in mathematical jargon, one says representing) mathematical objects. The Latin and Greek alphabets are those typically used, but some letters of the Hebrew alphabet are sometimes used as well. Uppercase and lowercase letters are considered different symbols. For the Latin alphabet, different typefaces also provide different symbols. For example, the same letter set in six different typefaces could theoretically appear in the same mathematical text with six different meanings. Normally, upright roman typeface is not used for symbols, except for symbols formed of several letters, such as the symbol "sin" of the sine function.
In order to have more symbols, and to allow related mathematical objects to be represented by related symbols, diacritics, subscripts and superscripts are often used. For example, \hat{f'} may denote the Fourier transform of the derivative of a function called f.
Other symbols
Symbols are not only used for naming mathematical objects. They can be used fo |
https://en.wikipedia.org/wiki/Abiotic%20stress | Abiotic stress is the negative impact of non-living factors on the living organisms in a specific environment. The non-living variable must influence the environment beyond its normal range of variation to adversely affect the population performance or individual physiology of the organism in a significant way.
Whereas a biotic stress would include living disturbances such as fungi or harmful insects, abiotic stress factors, or stressors, are naturally occurring, often intangible and inanimate factors such as intense sunlight, temperature or wind that may cause harm to the plants and animals in the area affected. Abiotic stress is essentially unavoidable. Abiotic stress affects animals, but plants are especially dependent, if not solely dependent, on environmental factors, so it is particularly constraining. Abiotic stress is the most harmful factor concerning the growth and productivity of crops worldwide. Research has also shown that abiotic stressors are at their most harmful when they occur together, in combinations of abiotic stress factors.
Examples
Abiotic stress comes in many forms. The most common of the stressors are the easiest for people to identify, but there are many other, less recognizable abiotic stress factors which affect environments constantly.
The most basic stressors include:
High winds
Extreme temperatures
Drought
Flood
Other natural disasters, such as tornadoes and wildfires.
Cold
Heat
Nutrient deficiency
Lesser-known stressors generally occur on a smaller scale. They include: poor edaphic conditions like rock content and pH levels, high radiation, compaction, contamination, and other, highly specific conditions like rapid rehydration during seed germination.
Effects
Abiotic stress, as a natural part of every ecosystem, will affect organisms in a variety of ways. Although these effects may be either beneficial or detrimental, the location of the area is crucial in determining the extent of the impact that abiotic stress w |
https://en.wikipedia.org/wiki/Nurikabe%20%28puzzle%29 | Nurikabe (hiragana: ぬりかべ) is a binary determination puzzle named for Nurikabe, an invisible wall in Japanese folklore that blocks roads and delays foot travel. Nurikabe was apparently invented and named by Nikoli; other names (and attempts at localization) for the puzzle include Cell Structure and Islands in the Stream.
Rules
The puzzle is played on a typically rectangular grid of cells, some of which contain numbers. Cells are initially of unknown color, but can only be black or white. Two same-color cells are considered "connected" if they are adjacent vertically or horizontally, but not diagonally. Connected white cells form "islands", while connected black cells form the "sea".
The challenge is to paint each cell black or white, subject to the following rules:
Each numbered cell is an island cell; the number in it is the number of cells in that island.
Each island must contain exactly one numbered cell.
There must be only one sea, which is not allowed to contain "pools", i.e. 2×2 areas of black cells.
Human solvers typically dot the non-numbered cells they've determined to be certain to belong to an island.
Like most other pure-logic puzzles, a unique solution is expected, and a grid containing random numbers is highly unlikely to provide a uniquely solvable Nurikabe puzzle.
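Rule 3 above can be checked mechanically; a small sketch (True marks a black, i.e. sea, cell):

    def has_pool(grid):
        # Return True if any 2x2 block of cells is entirely black.
        rows, cols = len(grid), len(grid[0])
        return any(grid[r][c] and grid[r][c + 1] and grid[r + 1][c] and grid[r + 1][c + 1]
                   for r in range(rows - 1) for c in range(cols - 1))

    sea = [[True, True, False],
           [True, True, False],    # the top-left 2x2 block is all black
           [False, False, False]]
    print(has_pool(sea))   # True, so this shading would be illegal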
History
Nurikabe was first developed by "renin (れーにん)," whose pen name is the Japanese pronunciation of "Lenin" and whose autonym can be read as such, in the 33rd issue of (Puzzle Communication) Nikoli in March 1991.
It soon created a sensation, and has appeared in all issues of that publication from the 38th to the present.
As of 2005, seven books consisting entirely of Nurikabe puzzles have been published by Nikoli.
(This paragraph mainly depends on "Nikoli complete works of interesting-puzzles(ニコリ オモロパズル大全集)." https://web.archive.org/web/20060707011243/http://www.nikoli.co.jp/storage/addition/omopadaizen/)
Solution methods
No blind guessing should be required to solve a Nu |
https://en.wikipedia.org/wiki/The%20Emperor%27s%20New%20Mind | The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics is a 1989 book by the mathematical physicist Sir Roger Penrose.
Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.
Most of the book is spent reviewing, for the scientifically-minded lay-reader, a plethora of interrelated subjects such as Newtonian physics, special and general relativity, the philosophy and limitations of mathematics, quantum physics, cosmology, and the nature of time. Penrose intermittently describes how each of these bears on his developing theme: that consciousness is not "algorithmic". Only the later portions of the book address the thesis directly.
Overview
Penrose states that his ideas on the nature of consciousness are speculative, and his thesis is considered erroneous by experts in the fields of philosophy, computer science, and robotics.
The Emperor's New Mind attacks the claims of artificial intelligence using the physics of computing: Penrose notes that the present home of computing lies more in the tangible world of classical mechanics than in the imponderable realm of quantum mechanics. The modern computer is a deterministic system that for the most part simply executes algorithms. Penrose shows that, by reconfiguring the boundaries of a billiard table, one might make a computer in which the billiard balls act as message carriers and their interactions act as logical decisions. The billiard-ball computer was first designed some years ago by Edward Fredkin and Tommaso Toffoli of the Massachusetts Institute of Technology.
Reception
Following the publication of the book, Penrose began to collaborate with Stuart Hameroff on a biological a |
https://en.wikipedia.org/wiki/Acoustic%20camera | An acoustic camera (or noise camera) is an imaging device used to locate sound sources and to characterize them. It consists of a group of microphones, also called a microphone array, from which signals are simultaneously collected and processed to form a representation of the location of the sound sources.
Terminology
The term acoustic camera first appeared at the end of the 19th century: the physiologist J. R. Ewald, investigating the function of the inner ear, introduced an analogy with Chladni plates (a domain nowadays called cymatics), a device that makes the modes of vibration of a plate visible. He called this device an acoustic camera. The term was then widely used during the 20th century to designate various types of acoustic devices, such as underwater localization systems or active systems used in medicine. Nowadays it designates any transducer array used to localize sound sources (the medium is usually air), especially when coupled with an optical camera.
Technology
General principles
An acoustic camera generally consists of a microphone array and optionally an optical camera. The microphones – analog or digital – are acquired simultaneously or with known relative time delays to be able to use the phase difference between the signals. As the sound propagates in the medium (air, water...) at a finite known speed, a sound source is perceived by the microphones at different time instants and at different sound intensities that depend on both the sound source location and the microphone location.
One popular method to obtain an acoustic image from the microphone measurements is beamforming: by appropriately delaying each microphone signal and adding them, the signal coming from a specific direction is amplified while signals coming from other directions are canceled. The power of this resulting signal is then calculated and reported on a power map at a pixel corresponding to that direction. The process is iterated |
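A sketch of the delay-and-sum idea for a single look direction, assuming NumPy; the array geometry, sampling rate, and nearest-sample rounding of the delays are illustrative simplifications (sign conventions for the steering delays vary between formulations):

    import numpy as np

    def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
        """signals: (n_mics, n_samples); direction: unit vector; fs in Hz."""
        delays = mic_positions @ direction / c       # seconds, per microphone
        shifts = np.round(delays * fs).astype(int)   # nearest-sample delays
        shifts -= shifts.min()                       # keep all shifts >= 0
        n = signals.shape[1] - shifts.max()
        aligned = np.array([s[k:k + n] for s, k in zip(signals, shifts)])
        beam = aligned.sum(axis=0)                   # in-phase signals add up
        return np.mean(beam**2)                      # power for this pixel

    # Tiny example: 4-microphone line array, source broadside (zero delays).
    rng = np.random.default_rng(0)
    mics = np.array([[i * 0.05, 0.0] for i in range(4)])   # 5 cm spacing
    sig = np.tile(rng.standard_normal(4800), (4, 1))       # identical arrivals
    print(delay_and_sum(sig, mics, np.array([0.0, 1.0]), fs=48000))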
https://en.wikipedia.org/wiki/Nucleic%20acid%20thermodynamics | Nucleic acid thermodynamics is the study of how temperature affects the nucleic acid structure of double-stranded DNA (dsDNA). The melting temperature (Tm) is defined as the temperature at which half of the DNA strands are in the random coil or single-stranded (ssDNA) state. Tm depends on the length of the DNA molecule and its specific nucleotide sequence. DNA, when in a state where its two strands are dissociated (i.e., the dsDNA molecule exists as two independent strands), is referred to as having been denatured by the high temperature.
Concepts
Hybridization
Hybridization is the process of establishing a non-covalent, sequence-specific interaction between two or more complementary strands of nucleic acids into a single complex, which in the case of two strands is referred to as a duplex. Oligonucleotides, DNA, or RNA will bind to their complement under normal conditions, so two perfectly complementary strands will bind to each other readily. In order to reduce the diversity and obtain the most energetically preferred complexes, a technique called annealing is used in laboratory practice. However, due to the different molecular geometries of the nucleotides, a single inconsistency between the two strands will make binding between them less energetically favorable. Measuring the effects of base incompatibility by quantifying the temperature at which two strands anneal can provide information as to the similarity in base sequence between the two strands being annealed. The complexes may be dissociated by thermal denaturation, also referred to as melting. In the absence of external negative factors, the processes of hybridization and melting may be repeated in succession indefinitely, which lays the ground for polymerase chain reaction. Most commonly, the pairs of nucleic bases A=T and G≡C are formed, of which the latter is more stable.
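The sequence dependence of the melting temperature can be illustrated with the Wallace rule, a rough estimate for short oligonucleotides (not mentioned in the text above, but it reflects the greater stability of G≡C pairs):

    def wallace_tm(seq: str) -> int:
        """Tm ~ 2(A+T) + 4(G+C) in degrees C; valid only for short oligos."""
        seq = seq.upper()
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        return 2 * at + 4 * gc

    print(wallace_tm("ACGTACGTACGTACGT"))   # 48 for this 16-mer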
Denaturation
DNA denaturation, also called DNA melting, is the process by which double-stranded deoxyribonucleic acid unwinds an |
https://en.wikipedia.org/wiki/Polyglutamine%20tract | A polyglutamine tract or polyQ tract is a portion of a protein consisting of a sequence of several glutamine units. A tract typically consists of about 10 to a few hundred such units.
A multitude of genes, in various eukaryotic species (including humans), contain a number of repetitions of the nucleotide triplet CAG or CAA. When the gene is translated into a protein, each of these triplets gives rise to a glutamine unit, resulting in a polyglutamine tract. Different alleles of such a gene often have different numbers of triplets since the highly repetitive sequence is prone to contraction and expansion.
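A sketch of how the tract length could be read off a coding sequence, with the reading frame assumed known (illustrative only):

    def polyq_length(cds: str) -> int:
        """Longest in-frame run of glutamine codons (CAG or CAA)."""
        codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
        best = run = 0
        for codon in codons:
            run = run + 1 if codon in ("CAG", "CAA") else 0
            best = max(best, run)
        return best

    # 21 CAG repeats followed by 2 CAA repeats encode a 23-glutamine tract.
    print(polyq_length("ATG" + "CAG" * 21 + "CAA" * 2 + "TGA"))   # 23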
Several inheritable neurodegenerative disorders, the polyglutamine diseases, occur if a mutation causes a polyglutamine tract in a specific gene to become too long. Important examples of polyglutamine diseases are spinocerebellar ataxia and Huntington's disease. Trinucleotide repeat expansion occurring in a parental germline cell can lead to children that are more affected or display an earlier onset and greater severity of the condition. Trinucleotide repeat expansion is considered to be a consequence of slipped strand mispairing either during DNA replication or DNA repair synthesis. It is believed that cells cannot properly dispose of proteins with overlong polyglutamine tracts, which over time leads to damage in nerve cells. The longer the polyglutamine tract, the earlier in life these diseases tend to appear.
History
Nucleotide sequences encoding a lengthy polyQ tract were first noted in the gene encoding the Notch receptor. Variation of the length of this Notch polyQ tract, as caused by triplet repeat instability, was later found to cause developmental defects. The significance of similarly expanded tracts in humans became evident when polyQ tracts were found to underlie Huntington's disease and several spinocerebellar ataxias. In general, several neurodegenerative disorders were found to involve nucleotide repeat expansions in protein coding sequences. |
https://en.wikipedia.org/wiki/Ger%20%28magazine%29 | Ger (Ger is the Mongolian word for home and also for the traditional tent dwelling) was an online magazine launched in Mongolia in the late 1990s. The country's first online magazine, Ger became a much-cited source on the effects of the transition to free markets and democracy the country experienced throughout the 1990s.
Overview
Ger was launched on September 9, 1998. The theme of youth in the transition was explored by a combined team of Mongolian and foreign journalists. The Ger Magazine project had three main goals: first, to raise the quality of journalism in the country; second, to introduce the country to a wider global audience; and third, as the country's first online magazine, to prove that the internet was an effective way to communicate. Stories tackled the struggle to find work in the free market, the booming pop music scene and how it was leading the way in business entrepreneurship, reproductive health, the basics of Mongolian culture, and vox pop views from Mongolian youth.
Issue 2 of the magazine investigated modern life in Mongolia during transition. Stories probed the proliferation of bars and the problem of alcoholism, corrupt banking practices and the loss of savings, how the young were the country's leading entrepreneurs, Mongolia's meat and milk diet, "girl power" and the strong role played by women, the burgeoning new media, the rise and rise of Buddhism, and Mongolia's dynamic fashion designers (this article inspired foreign fashion designers to embrace the Mongolian "look" in the next season's designs).
An online survey of the state of Mongolia's media and its history (http://www.pressreference.com/Ma-No/Mongolia.html) had this to say: "An interesting variation from some of the other publications available is Ger Magazine (published online with guidance from the United Nations Development Program, UNDP), which is concerned with Mongolian youth in cultural transition. The name of the magazine is meant to be ironic because a ger is the Mongolian |
https://en.wikipedia.org/wiki/The%20Logical%20Structure%20of%20Linguistic%20Theory | The Logical Structure of Linguistic Theory or LSLT is a major work in linguistics by American linguist Noam Chomsky. It was written in 1955 and published in 1975. In 1955, Chomsky submitted a part of this book as his PhD thesis titled Transformational Analysis, setting out his ideas on transformational grammar; he was awarded a Ph.D. for it, and it was privately distributed among specialists on microfilm. Chomsky offered the manuscript of LSLT for publication, but MIT's Technology Press refused to publish it. It was published by Springer in 1975. |
https://en.wikipedia.org/wiki/Contig | A contig (from contiguous) is a set of overlapping DNA segments that together represent a consensus region of DNA.
In bottom-up sequencing projects, a contig refers to overlapping sequence data (reads); in top-down sequencing projects, contig refers to the overlapping clones that form a physical map of the genome that is used to guide sequencing and assembly. Contigs can thus refer both to overlapping DNA sequences and to overlapping physical segments (fragments) contained in clones depending on the context.
Original definition of contig
In 1980, Staden wrote: "In order to make it easier to talk about our data gained by the shotgun method of sequencing we have invented the word 'contig'. A contig is a set of gel readings that are related to one another by overlap of their sequences. All gel readings belong to one and only one contig, and each contig contains at least one gel reading. The gel readings in a contig can be summed to form a contiguous consensus sequence and the length of this sequence is the length of the contig."
Sequence contigs
A sequence contig is a continuous (not contiguous) sequence resulting from the reassembly of the small DNA fragments generated by bottom-up sequencing strategies. This meaning of contig is consistent with the original definition by Rodger Staden (1979). The bottom-up DNA sequencing strategy involves shearing genomic DNA into many small fragments ("bottom"), sequencing these fragments, reassembling them back into contigs and eventually the entire genome ("up"). Because current technology allows for the direct sequencing of only relatively short DNA fragments (300–1000 nucleotides), genomic DNA must be fragmented into small pieces prior to sequencing. In bottom-up sequencing projects, amplified DNA is sheared randomly into fragments appropriately sized for sequencing. The subsequent sequence reads, which are the data that contain the sequences of the small fragments, are put into a database. The assembly software then searches |
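As a toy illustration of that assembly step, the following sketch greedily merges reads sharing a suffix-prefix overlap into contigs. The reads and the minimum-overlap threshold are illustrative assumptions; real assemblers use indexing, quality scores, and graph algorithms:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(reads):
    """Greedily merge overlapping reads; each surviving string is a contig."""
    contigs = list(reads)
    merged = True
    while merged:
        merged = False
        for i in range(len(contigs)):
            for j in range(len(contigs)):
                if i != j and (k := overlap(contigs[i], contigs[j])):
                    contigs[i] += contigs[j][k:]
                    del contigs[j]
                    merged = True
                    break
            if merged:
                break
    return contigs

print(assemble(["AGCTTA", "TTAGGC", "GGCATT"]))  # -> ['AGCTTAGGCATT']
```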
https://en.wikipedia.org/wiki/Diode%20modelling | In electronics, diode modelling refers to the mathematical models used to approximate the actual behaviour of real diodes to enable calculations and circuit analysis. A diode's I-V curve is nonlinear.
A very accurate, but complicated, physical model composes the I-V curve from three exponentials with a slightly different steepness (i.e. ideality factor), which correspond to different recombination mechanisms in the device; at very large and very tiny currents the curve can be continued by linear segments (i.e. resistive behaviour).
In a relatively good approximation a diode is modelled by the single-exponential Shockley diode law. This nonlinearity still complicates calculations in circuits involving diodes, so even simpler models are often used.
This article discusses the modelling of p-n junction diodes, but the techniques may be generalized to other solid state diodes.
Large-signal modelling
Shockley diode model
The Shockley diode equation relates the diode current $I_D$ of a p-n junction diode to the diode voltage $V_D$. This relationship is the diode I-V characteristic:
$$I_D = I_S \left( e^{V_D / (n V_T)} - 1 \right),$$
where $I_S$ is the saturation current or scale current of the diode (the magnitude of the current that flows for negative $V_D$ in excess of a few $V_T$, typically $10^{-12}$ A). The scale current is proportional to the cross-sectional area of the diode. Continuing with the symbols: $V_T$ is the thermal voltage ($kT/q$, about 26 mV at normal temperatures), and $n$ is known as the diode ideality factor (for silicon diodes $n$ is approximately 1 to 2).
When $V_D \gg n V_T$, the formula can be simplified to:
$$I_D \approx I_S \, e^{V_D / (n V_T)}.$$
This expression is, however, only an approximation of a more complex I-V characteristic. Its applicability is particularly limited in case of ultrashallow junctions, for which better analytical models exist.
Diode-resistor circuit example
To illustrate the complications in using this law, consider the problem of finding the voltage across the diode in Figure 1.
Because the current flowing through the diode is the same as the current thro |
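One standard way around this difficulty is to solve the resulting transcendental equation numerically. The sketch below applies a simple fixed-point iteration to a series diode-resistor circuit; the supply voltage, resistance, and diode parameters are illustrative assumptions, not values from the article:

```python
import math

# Series circuit: V_S = I*R + V_D with I = I_S*(exp(V_D/(n*V_T)) - 1).
# Iterate V_D <- n*V_T*ln(1 + (V_S - V_D)/(R*I_S)) until it stabilizes.
V_S = 5.0       # supply voltage (V), assumed
R = 1000.0      # series resistance (ohm), assumed
I_S = 1e-12     # diode saturation current (A), assumed
n = 1.0         # ideality factor, assumed
V_T = 0.02585   # thermal voltage near 300 K (V)

v_d = 0.6       # initial guess near a typical silicon forward drop
for _ in range(100):
    v_new = n * V_T * math.log1p((V_S - v_d) / (R * I_S))
    if abs(v_new - v_d) < 1e-12:
        break
    v_d = v_new

i_d = (V_S - v_d) / R
print(f"V_D = {v_d:.4f} V, I_D = {i_d * 1e3:.4f} mA")
```

For this operating point the iteration converges in a handful of steps; stiffer cases are usually handled with Newton's method or, in closed form, the Lambert W function.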
https://en.wikipedia.org/wiki/Aeronautical%20Message%20Handling%20System | Air Traffic Services Message Handling Services (AMHS) is a standard for aeronautical ground-ground communications (e.g. for the transmission of NOTAM, Flight Plans or Meteorological Data) based on X.400 profiles. It has been defined by the ICAO.
Levels of service
ICAO Doc 9880 Part II defines two fundamental levels of service within the ATSMHS:
Basic ATSMHS and
the Extended ATSMHS.
Additionally, ICAO Doc 9880 (Part II, section 3.4) outlines different subsets of the Extended ATSMHS. The Basic ATSMHS performs an operational role similar to the
Aeronautical Fixed Telecommunication Network with a few enhancements. The Extended ATSMHS provides enhanced features and also includes the Basic level of service
capability; in this way it is ensured that users with Extended Service capabilities can inter-operate, at a basic level, with users having Basic Service capabilities and vice versa.
The ATSMHS is provided by a set of end systems, which collectively comprise the ATS Message Handling System. The systems co-operate to provide users (human or automated) with a data communication service. The AMHS network is composed of interconnected ATS Message Servers that perform message switching at the application layer (Layer 7 in the OSI model).
Direct users connect to ATS Message Servers by means of ATS Message User Agents. An ATS Message User Agent supporting the Extended level of service will use
the Basic level of service to allow communication with users who only support the Basic ATSMHS.
Interoperability
In order to ensure unobstructed communication between the ANSPs, the European Air Navigation Planning Group (EANPG) of ICAO has defined 59 test cases in its EUR AMHS Manual (V5.0), 17/06/2010 (Appendix D, AMHS Conformance Tests), ASIA/PAC AMHS Manual (Annex B, AMHS Conformance and Compatibility Test, V2.0, 22/09/08) which have to be performed prior to establishment of bilateral links between the ANSPs. Those tests are conducted using a test engine (AMHS Conformance Test Too |
https://en.wikipedia.org/wiki/Time%27s%20Arrow%20and%20Archimedes%27%20Point | Time's Arrow and Archimedes Point: New Directions for the Physics of Time is a 1996 book by Huw Price, on the physics and philosophy of the Arrow of Time. It explores the problem of the direction of time, looking at issues in thermodynamics, cosmology, electromagnetism, and quantum mechanics. Price argues that it is fruitful to think about time from a hypothetical Archimedean Point - a viewpoint outside of time. In later chapters, Price argues that retrocausality can resolve many of the philosophical issues facing quantum mechanics and along these lines proposes an interpretation involving what he calls 'advanced action'.
Summary
Chapter 1 - The View From Nowhen
Price briefly introduces the stock philosophical questions about time, starting with Saint Augustine's observations in Confessions, highlighting the questions 'What is the difference between past and future?', 'Could the future affect the past?' and 'What gives time its direction?'.
He then introduces the block universe view where the 'present' is regarded as a subjective notion, which changes from observer to observer, in the same way that the concept of 'here' changes depending on where the observer is. The block universe view rejects the notion that there exists an objective present and grants that the past, present and future are all equally real. He then surveys reasons to favour this view and common objections to it. Price then introduces the idea of viewing the block universe from an Archimedean Point from outside of time, which is the view that is taken in the rest of the book.
Finally, Price introduces two problems regarding the Arrow of Time, which he calls the taxonomy problem and the genealogy problem. The taxonomy problem is the problem of characterizing and finding the relationship between different arrows of time (e.g. the thermodynamic and cosmological arrows of time). The genealogy problem is to explain why asymmetries (i.e. arrows) exist in time, given that the laws of physics seem to be r |
https://en.wikipedia.org/wiki/Experimentation%20on%20prisoners | Throughout history, prisoners have been frequent participants in scientific, medical and social human subject research. Some of the research involving prisoners has been exploitative and cruel. Many of the modern protections for human subjects evolved in response to the abuses in prisoner research. Research involving prisoners is still conducted today, but prisoners are now one of the most highly protected groups of human subjects.
Requirements of research involving prisoners
According to the Common Rule (45 CFR 46), prisoners may only be included in human subjects research when the research involves no more than a minimal risk of harm.
Prisoner consent
Because incarceration constrains free choice, prisoners cannot give fully voluntary consent. Their status as imprisoned human subjects becomes even more ethically problematic when investigators offer incentives such as parole, phone calls, or objects that are normally unavailable to prisoners.
Historical abuses
Ancient history
Herophilos of Chalcedon was reputed by Celsus, among others, to have vivisected prisoners received from the Ptolemaic kings.
Second Sino-Japanese War and World War II
In Japan, Unit 731, located near Harbin (Manchukuo), experimented with prisoner vivisection, dismemberment and induced epidemics on a very large scale from 1935 to 1945 during the Second Sino-Japanese War. With the expansion of the Empire of Japan during World War II, many other units were implemented in conquered cities such as Nanking (Unit 1644), Beijing (Unit 1855), Guangzhou (Unit 8604) and Singapore (Unit 9420). After the war, Supreme Commander for the Allied Powers Douglas MacArthur gave immunity in the name of the United States to all members of the units in exchange for the results of a fraction of the conducted experiments, so that in post-war Japan, Shiro Ishii and others continued to hold honoured positions. The United States blocked Soviet access to this information. However, some unit members were judged by the Soviets during the Khabarovsk War Crime Trials. The effects were las |
https://en.wikipedia.org/wiki/Traveling%20plane%20wave | In mathematics and physics, a traveling plane wave is a special case of plane wave, namely a field whose evolution in time can be described as simple translation of its values at a constant wave speed $c$, along a fixed direction of propagation $\hat{n}$.
Such a field can be written as
$$F(\vec{x}, t) = G\left(\vec{x} \cdot \hat{n} - c t\right),$$
where $G(u)$ is a function of a single real parameter $u = d - ct$. The function $G$ describes the profile of the wave, namely the value of the field at time $t = 0$, for each displacement $d = \vec{x} \cdot \hat{n}$. For each displacement $d$, the moving plane perpendicular to $\hat{n}$ at distance $d + ct$ from the origin is called a wavefront. This plane too travels along the direction of propagation $\hat{n}$ with velocity $c$; and the value of the field is then the same, and constant in time, at every one of its points.
The wave may be a scalar or vector field; its values are the values of $G$.
A sinusoidal plane wave is a special case, when $G(u)$ is a sinusoidal function of $u$.
Properties
A traveling plane wave can be studied by ignoring the dimensions of space perpendicular to the vector $\hat{n}$; that is, by considering the wave on a one-dimensional medium, with a single position coordinate $d$.
For a scalar traveling plane wave in two or three dimensions, the gradient of the field is always collinear with the direction $\hat{n}$; specifically, $\nabla F(\vec{x}, t) = \hat{n}\, G'(\vec{x} \cdot \hat{n} - ct)$, where $G'$ is the derivative of $G$. Moreover, a traveling plane wave $F$ of any shape satisfies the partial differential equation
$$\nabla F = -\frac{\hat{n}}{c} \frac{\partial F}{\partial t}.$$
Plane traveling waves are also special solutions of the wave equation in an homogeneous medium.
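A numerical sanity check of this identity, assuming a Gaussian profile $G$ (an arbitrary smooth choice made here for illustration), can be done with central finite differences:

```python
import numpy as np

# Verify grad F = -(n/c) * dF/dt for F(x, t) = G(x . n - c t).
c = 2.0
n_hat = np.array([0.6, 0.8])        # unit propagation direction
G = lambda u: np.exp(-u ** 2)       # assumed smooth wave profile

def F(x, t):
    return G(x @ n_hat - c * t)

x0, t0, h = np.array([0.3, -0.2]), 0.1, 1e-6

grad = np.array([
    (F(x0 + h * np.eye(2)[k], t0) - F(x0 - h * np.eye(2)[k], t0)) / (2 * h)
    for k in range(2)
])
dFdt = (F(x0, t0 + h) - F(x0, t0 - h)) / (2 * h)

print(np.allclose(grad, -n_hat / c * dFdt, atol=1e-6))  # -> True
```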
See also
Spherical wave
Spherical sinusoidal wave
Standing wave |
https://en.wikipedia.org/wiki/Life%20Sciences%20in%20Space%20Research | Life Sciences in Space Research is a quarterly peer-reviewed scientific journal covering astrobiology, origins of life, life in extreme environments, habitability, effects of spaceflight on the human body, radiation risks, and other aspects of life sciences relevant in space research. It was established in 2014 and is published by Elsevier. It is an official journal of the Committee on Space Research (COSPAR), publishing papers in the areas that were previously covered by the Life Sciences section of Advances in Space Research, another official journal of COSPAR. The editor-in-chief is Tom Hei (Columbia University Medical Center).
Abstracting and indexing
The journal is abstracted and indexed in the Emerging Sources Citation Index, Index Medicus/MEDLINE/PubMed, and Scopus. |
https://en.wikipedia.org/wiki/Adult%20stem%20cell | Adult stem cells are undifferentiated cells, found throughout the body after development, that multiply by cell division to replenish dying cells and regenerate damaged tissues. Also known as somatic stem cells (from Greek σωματικóς, meaning of the body), they can be found in juvenile and adult animals and humans, unlike embryonic stem cells.
Scientific interest in adult stem cells is centered around two main characteristics: the first is their ability to divide or self-renew indefinitely, and the second is their ability to generate all the cell types of the organ from which they originate, potentially regenerating the entire organ from a few cells. Unlike embryonic stem cells, the use of human adult stem cells in research and therapy is not considered to be controversial, as they are derived from adult tissue samples rather than human embryos designated for scientific research. The main functions of adult stem cells are to replace cells that are at risk of dying as a result of disease or injury and to maintain a state of homeostasis within the cell. There are three main methods to determine if the adult stem cell is capable of becoming a specialized cell: the adult stem cell can be labeled in vivo and tracked, it can be isolated and then transplanted back into the organism, and it can be isolated in vivo and manipulated with growth hormones. They have mainly been studied in humans and model organisms such as mice and rats.
Structure
Defining properties
A stem cell possesses two properties:
Self-renewal is the ability to go through numerous cycles of cell division while still maintaining its undifferentiated state. Stem cells can replicate several times and can result in the formation of two stem cells, one stem cell more differentiated than the other, or two differentiated cells.
Multipotency or multidifferentiative potential is the ability to generate progeny of several distinct cell types, (for example glial cells and neurons) as opposed to u |
https://en.wikipedia.org/wiki/Reproductive%20biology | Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal reproductive biology
Animal reproduction oc |
https://en.wikipedia.org/wiki/Mesaxon | In neurobiology, a mesaxon is a pair of parallel plasma membranes of a Schwann cell. It marks the point of edge-to-edge contact by the Schwann cell encircling the axon. A single Schwann cell of the peripheral nervous system will wrap around and support only one individual axon (then myelinated; ratio of 1:1), while the oligodendrocytes found in the central nervous system can wrap around and support 5-8 axons. Thin unmyelinated axons are often bundled, with several unmyelinated axons to a single mesaxon (and several such groups to a single Schwann cell).
The outer mesaxon (Terminologia histologica: Mesaxon externum) is the connection of the outer cell membrane to the compact myelin sheath. The inner mesaxon (Terminologia histologica: Mesaxon internum) is the connection between the myelin sheath and the inner part of the cell membrane of the Schwann cell, which is directly opposite the axolemma, i.e. the cell membrane of the nerve fibre ensheathed by the Schwann cell. |
https://en.wikipedia.org/wiki/Mie%20scattering | The Mie solution to Maxwell's equations (also known as the Lorenz–Mie solution, the Lorenz–Mie–Debye solution or Mie scattering) describes the scattering of an electromagnetic plane wave by a homogeneous sphere. The solution takes the form of an infinite series of spherical multipole partial waves. It is named after Gustav Mie.
The term Mie solution is also used for solutions of Maxwell's equations for scattering by stratified spheres or by infinite cylinders, or other geometries where one can write separate equations for the radial and angular dependence of solutions. The term Mie theory is sometimes used for this collection of solutions and methods; it does not refer to an independent physical theory or law. More broadly, the "Mie scattering" formulas are most useful in situations where the size of the scattering particles is comparable to the wavelength of the light, rather than much smaller or much larger.
Mie scattering (sometimes referred to as non-molecular scattering or aerosol particle scattering) takes place in the lower portion of the atmosphere, where many essentially spherical particles with diameters approximately equal to the wavelength of the incident ray may be present. Mie scattering theory has no upper size limitation, and converges to the limit of geometric optics for large particles.
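The comparison of particle size to wavelength in this paragraph is usually expressed through the dimensionless size parameter $x = 2\pi r / \lambda$. The sketch below classifies a particle by this parameter; the numeric thresholds are rough rules of thumb, not limits prescribed by Mie theory:

```python
import math

# Size parameter x = 2*pi*r / lambda; regime boundaries are conventions.
def scattering_regime(radius_m, wavelength_m):
    x = 2 * math.pi * radius_m / wavelength_m
    if x < 0.1:
        return f"x = {x:.3g}: Rayleigh regime (particle << wavelength)"
    if x < 50:
        return f"x = {x:.3g}: Mie regime (particle ~ wavelength)"
    return f"x = {x:.3g}: geometric-optics regime (particle >> wavelength)"

# An assumed ~1 micron diameter water droplet in 550 nm visible light:
print(scattering_regime(0.5e-6, 550e-9))  # falls in the Mie regime
```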
Introduction
A modern formulation of the Mie solution to the scattering problem on a sphere can be found in many books, e.g., J. A. Stratton's Electromagnetic Theory. In this formulation, the incident plane wave, as well as the scattering field, is expanded into radiating vector spherical harmonics. The internal field is expanded into regular vector spherical harmonics. By enforcing the boundary condition on the spherical surface, the expansion coefficients of the scattered field can be computed.
For particles much larger or much smaller than the wavelength of the scattered light there are simple and accurate approximations that suffice to describe the beh |
https://en.wikipedia.org/wiki/Etiquette%20in%20technology | Etiquette in technology, colloquially referred to as netiquette, is the unofficial code of policies that encourage good behavior on the Internet and that is used to regulate respect and polite behavior on social media platforms, online chatting sites, web forums, and other online engagement websites. The rules of etiquette that apply when communicating over the Internet are different from those applied when communicating in person or by audio (such as telephone) or videophone. It is a social code that is used in all places where one can interact with other human beings via the Internet, including text messaging, email, online games, Internet forums, chat rooms, and many more. Although social etiquette in real life is ingrained into our social life, netiquette is a fairly recent concept.
It can be a challenge to communicate on the Internet without misunderstandings mainly because input from facial expressions and body language is absent in cyberspace. Therefore, several rules, in an attempt to safeguard against these misunderstandings and to discourage unfriendly behavior, are regularly put in place at many websites, and often enforced by moderation by the website's users or administrators.
Netiquette
Netiquette, a colloquial portmanteau of network and etiquette or Internet and etiquette, is a set of social conventions that facilitate interaction over networks, ranging from Usenet and mailing lists to blogs and forums.
Like the network itself, these developing norms remain in a state of flux and vary from community to community. The points most strongly emphasised about Usenet netiquette often include using simple electronic signatures, and avoiding multiposting, cross-posting, off-topic posting, hijacking a discussion thread, and other techniques used to minimize the effort required to read a post or a thread. Similarly, some Usenet guidelines call for use of unabbreviated English while users of instant messaging protocols like SMS o |
https://en.wikipedia.org/wiki/Association%20for%20Automatic%20Identification%20and%20Mobility | Association for Automatic Identification and Mobility (AIM) is an industry trade group that developed and standardized bar codes, automatic identification and data capture. It is based in Cranberry Township, Butler County, Pennsylvania.
When AIM was formed in 1973, it consisted of four member organizations. In the years since, it has grown to over 400 members worldwide, including Intel and the Food and Drug Administration (FDA). |
https://en.wikipedia.org/wiki/Privilege%20sign | A privilege sign is a retail store sign provided by a manufacturer, with the manufacturer's branding on it. The signs may be provided to the store at no cost, in return for the manufacturer's advertising on the sign. Examples include Coca-Cola signs, bar/tavern signage provided by breweries containing that brewery's brand logo above the establishment's name, and painted signs on sides of shops.
Privilege signs are no longer popular with manufacturers or stores in the United States and are slowly disappearing from storefronts in that country. However, they remain a common fixture in other countries, such as on sari-sari stores in the Philippines, where common sponsors of privilege signs include soft drinks, telecommunications services, and soap brands.
Similar such signs still appear on independent newsagents in the United Kingdom with Lycamobile and Coca-Cola being among the most prominent of brands to advertise.
See also
Ghost sign |
https://en.wikipedia.org/wiki/Nitrospira%20inopinata | Nitrospira inopinata is a bacterium from the phylum Nitrospirota. This phylum contains nitrite-oxidizing bacteria that play a role in nitrification. However, N. inopinata was shown to perform complete oxidation of ammonia to nitrate, making it the first comammox bacterium to be discovered.
N. inopinata was cultivated in enrichment culture. The initial inoculum was obtained in 2011 from a microbial biofilm growing on the metal surface of a pipe covered by hot water (56 °C, pH 7.5) raised from a 1,200 m deep oil exploration well. The well was located in Aushiger, North Caucasus, Russia. Growth in pure culture was achieved in 2017.
The genome of N. inopinata was released in 2015; it comprises 3.3 Mbp, with 3,116 genes and a GC content of 59.2%. The NCBI accession number is LN885086. |
https://en.wikipedia.org/wiki/Gr%C3%A4fenberg%27s%20ring | Gräfenberg's ring is a flexible ring of silk suture, later versions of which were wrapped in silver wire. It was an early IUD, a birth control device. Gräfenberg's ring was the first IUD used by a significant number of women. The ring was introduced by German gynecologist Ernst Gräfenberg in 1929. It ceased to be in wide use circa 1939.
Inserting a foreign device into the uterus causes an inflammatory response, which creates a hostile environment for sperm. The silver wire used to construct later versions of Gräfenberg's ring was contaminated with copper, which increases this spermicidal effect.
In 1934, Japanese physician Tenrei Ota developed a variation of the Gräfenberg ring that contained a supportive structure in the center. The addition of this central disc lowered the IUD's expulsion rate. However, insertion of these devices caused high rates of infection, and they were condemned by the medical community. Furthermore, their use and development was stifled by World War II politics: contraception was forbidden in both Nazi Germany and Axis-allied Japan. The rest of the Western world did not learn of the work of Gräfenberg and Ota until well after the war ended. |
https://en.wikipedia.org/wiki/Andrewsiphiinae | The Andrewsiphiinae is an extinct subfamily of early whales of the family Remingtonocetidae. Thewissen & Bajpai (2009) proposed the clade when Andrewsiphius and Kutchicetus were accepted as separate genera. Kutchicetus was originally synonymized with Andrewsiphius in 2001 by Gingerich et al., but later authors still accept both as separate genera. |
https://en.wikipedia.org/wiki/Killing%20horizon | In physics, a Killing horizon is a geometrical construct used in general relativity and its generalizations to delineate spacetime boundaries without reference to the dynamic Einstein field equations. Mathematically a Killing horizon is a null hypersurface defined by the vanishing of the norm of a Killing vector field (both are named after Wilhelm Killing). It can also be defined as a null hypersurface generated by a Killing vector, which in turn is null at that surface.
After Hawking showed that quantum field theory in curved spacetime (without reference to the Einstein field equations) predicted that a black hole formed by collapse will emit thermal radiation, it became clear that there is an unexpected connection between spacetime geometry (Killing horizons) and thermal effects for quantum fields. In particular, there is a very general relationship between thermal radiation and spacetimes that admit a one-parameter group of isometries possessing a bifurcate Killing horizon, which consists of a pair of intersecting null hypersurfaces that are orthogonal to the Killing field.
Flat spacetime
In Minkowski space-time, in pseudo-Cartesian coordinates $(t, x, y, z)$ with signature $(-, +, +, +)$, an example of a Killing horizon is provided by the Lorentz boost (a Killing vector of the space-time)
$$V = x \frac{\partial}{\partial t} + t \frac{\partial}{\partial x}.$$
The square of the norm of $V$ is
$$g(V, V) = -x^2 + t^2 = -(x - t)(x + t).$$
Therefore, $V$ is null only on the hyperplanes of equations
$$x = t \qquad \text{and} \qquad x = -t,$$
that, taken together, are the Killing horizons generated by $V$.
Black hole Killing horizons
Exact black hole metrics such as the Kerr–Newman metric contain Killing horizons, which can coincide with their ergospheres. For this spacetime, the corresponding Killing horizon is located at
$$r_E = M + \sqrt{M^2 - Q^2 - a^2 \cos^2\theta},$$
the outer boundary of the ergosphere (in geometrized units), where the Killing vector field $\partial_t$ becomes null.
In the usual coordinates, outside the Killing horizon, the Killing vector field is timelike, whilst inside it is spacelike.
Furthermore, considering a particular linear combination of $\partial_t$ and $\partial_\phi$, both of which are Killing vector fields, gives rise to a Killing horizon that coincides with the event horizon.
Associated with a Killi |
https://en.wikipedia.org/wiki/List%20of%20protein%20subcellular%20localization%20prediction%20tools | This list of protein subcellular localisation prediction tools includes software, databases, and web services that are used for protein subcellular localization prediction.
Some tools are included that are commonly used to infer location through predicted structural properties, such as signal peptide or transmembrane helices, and these tools output predictions of these features rather than specific locations. These tools, being related to protein structure prediction, may also appear in lists of protein structure prediction software.
Tools
Descriptions sourced from the entry in the https://bio.tools/ registry (used under CC-BY license) are indicated by link |
https://en.wikipedia.org/wiki/Thinking%2C%20Fast%20and%20Slow | Thinking, Fast and Slow is a 2011 popular science book by psychologist Daniel Kahneman.
The book's main thesis is a differentiation between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical.
The book delineates rational and non-rational motivations or triggers associated with each type of thinking process, and how they complement each other, starting with Kahneman's own research on loss aversion. From framing choices to people's tendency to replace a difficult question with one which is easy to answer, the book summarizes several decades of research to suggest that people have too much confidence in human judgment. Kahneman performed much of this research himself, often in collaboration with Amos Tversky, and the book draws on that experience. It covers different phases of his career: his early work concerning cognitive biases, his work on prospect theory and happiness, and his work with the Israel Defense Forces.
The book was a New York Times bestseller and was the 2012 winner of the National Academies Communication Award for best creative work that helps the public understanding of topics in behavioral science, engineering and medicine. The integrity of some priming studies cited in the book has been called into question in the midst of the psychological replication crisis.
Two systems
In the book's first section, Kahneman describes two different ways the brain forms thoughts:
System 1: Fast, automatic, frequent, emotional, stereotypic, unconscious. Examples (in order of complexity) of things system 1 can do:
determine that an object is at a greater distance than another
localize the source of a specific sound
complete the phrase "war and ..."
display disgust when seeing a gruesome image
solve 2+2=?
read text on a billboard
drive a car on an empty road
think of a good chess move (if you're a chess master)
understand simple sentences
associate the description 'quiet and structured p |
https://en.wikipedia.org/wiki/Waller%20Gunnery%20Trainer | The Waller Gunnery Trainer was a simulator for training World War II aerial gunners using multiple film projectors. Its inventor, Fred Waller, later invented the Cinerama film format.
See also
First Motion Picture Unit |
https://en.wikipedia.org/wiki/Cache%20coherence | In computer architecture, cache coherence is the uniformity of shared resource data that ends up stored in multiple local caches. When clients in a system maintain caches of a common memory resource, problems may arise with incoherent data, which is particularly the case with CPUs in a multiprocessing system.
In the illustration on the right, consider that both clients have a cached copy of a particular memory block from a previous read. Suppose the client on the bottom updates/changes that memory block; the client on the top could then be left with an invalid cache of memory without any notification of the change. Cache coherence is intended to manage such conflicts by maintaining a coherent view of the data values in multiple caches.
Overview
In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.
The following are the requirements for cache coherence:
Write Propagation: changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches.
Transaction Serialization: reads/writes to a single memory location must be seen by all processors in the same order.
Theoretically, coherence can be performed at the load/store granularity. However, in practice it is generally performed at the granularity of cache blocks.
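A minimal sketch of these two requirements in a write-invalidate snooping arrangement (one of several possible protocols, with write-through memory update, greatly simplified; class and method names are illustrative):

```python
# Write propagation via invalidation: a write to a block removes all peer
# copies, so any later peer read misses and fetches the up-to-date value.
class Memory:
    def __init__(self):
        self.data = {}
        self.caches = []

class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}                  # address -> cached value
        memory.caches.append(self)

    def read(self, addr):
        if addr not in self.lines:       # miss: fill from memory
            self.lines[addr] = self.memory.data.get(addr, 0)
        return self.lines[addr]

    def write(self, addr, value):
        for peer in self.memory.caches:  # snoop: invalidate peer copies
            if peer is not self:
                peer.lines.pop(addr, None)
        self.lines[addr] = value
        self.memory.data[addr] = value   # write-through keeps memory current

mem = Memory()
c0, c1 = Cache(mem), Cache(mem)
c0.read(0x10)
c1.read(0x10)          # both caches now hold the block
c1.write(0x10, 42)     # invalidates c0's stale copy
print(c0.read(0x10))   # -> 42: the write was propagated
```

Serializing the writes themselves, so that all processors observe them in one order, is handled in real systems by arbitration on the shared bus or by a directory, which this sketch does not model.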
Definition
Coherence defines the behavior of reads and writes to a single address location.
When the same data appears simultaneously in different cache memories, keeping those copies consistent is the task of cache coherence; in some systems, the resulting single view of memory is referred to as global memory.
In a multiprocessor system, consider that more than one processor has cached a copy o |
https://en.wikipedia.org/wiki/PAdES | PAdES (PDF Advanced Electronic Signatures) is a set of restrictions and extensions to PDF and ISO 32000-1 making it suitable for advanced electronic signatures (AdES). This is published by ETSI as EN 319 142.
Description
While PDF and ISO 32000-1 provide a framework for digitally signing documents, PAdES specifies precise profiles making it compliant with ETSI standards for digital signatures (Advanced Electronic Signature - AES and Qualified Electronic Signature - QES). ETSI (the European Telecommunications Standards Institute) has the function of issuing technical standards by delegation in the EU eIDAS Regulation (the European Union Regulation on electronic identification and trust services for electronic transactions in the internal market). The eIDAS Regulation enhances and repeals the Electronic Signatures Directive 1999/93/EC. eIDAS has been legally binding in all EU member states since July 2014, and unlike the Directive it replaces, eIDAS as a Regulation is directly applicable without implementing or interpreting legislation. Any electronic signature recognised under eIDAS (including ‘click accept’) cannot be denied validity and effectiveness by reason of being electronic. If it is a ‘digital signature’, that is, an electronic signature implementing digital certificates in compliance with the advanced or qualified levels described in eIDAS (and their implementations developed by ETSI at a technology level), it can support PAdES. AES and QES have a higher evidentiary value than simple or ‘standard’ electronic signatures. A QES is recognised as having the same legal value as a handwritten signature.
PAdES standards travel in the same direction and have the same aims as digital signatures (AES and QES). This means they can be easily verified in any PDF reader, and a PAdES signature has the properties required of an advanced electronic signature:
it is uniquely linked to the signatory (in QES to the identity of the signatory);
it is capable of identifying the signatory (‘attribution’);
only the signatory has control of the data used for the signature creation (i |
https://en.wikipedia.org/wiki/Degrees%20of%20freedom%20%28mechanics%29 | In physics, the degrees of freedom (DOF) of a mechanical system is the number of independent parameters that define its configuration or state. It is important in the analysis of systems of bodies in mechanical engineering, structural engineering, aerospace engineering, robotics, and other fields.
The position of a single railcar (engine) moving along a track has one degree of freedom because the position of the car is defined by the distance along the track. A train of rigid cars connected by hinges to an engine still has only one degree of freedom because the positions of the cars behind the engine are constrained by the shape of the track.
An automobile with highly stiff suspension can be considered to be a rigid body traveling on a plane (a flat, two-dimensional space). This body has three independent degrees of freedom consisting of two components of translation and one angle of rotation. Skidding or drifting is a good example of an automobile's three independent degrees of freedom.
The position and orientation of a rigid body in space is defined by three components of translation and three components of rotation, which means that it has six degrees of freedom.
The exact constraint mechanical design method manages the degrees of freedom to neither underconstrain nor overconstrain a device.
Motions and dimensions
The position of an n-dimensional rigid body is defined by the rigid transformation, [T] = [A, d], where d is an n-dimensional translation and A is an n × n rotation matrix, which has n translational degrees of freedom and n(n − 1)/2 rotational degrees of freedom. The number of rotational degrees of freedom comes from the dimension of the rotation group SO(n).
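As a quick check of the counts just given, a one-line sketch of the formula n + n(n − 1)/2 from the paragraph above:

```python
# DOF of a rigid body in n dimensions: n translations plus n*(n-1)/2
# rotations, the latter being the dimension of the rotation group SO(n).
def rigid_body_dof(n: int) -> int:
    return n + n * (n - 1) // 2

print(rigid_body_dof(2))  # -> 3: e.g. a car on a plane (x, y, heading)
print(rigid_body_dof(3))  # -> 6: a free rigid body in space
```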
A non-rigid or deformable body may be thought of as a collection of many minute particles (infinite number of DOFs), this is often approximated by a finite DOF system. When motion involving large displacements is the main objective of study (e.g. for analyzing the motion of satellites), |
https://en.wikipedia.org/wiki/Hexagonal%20fast%20Fourier%20transform | The fast Fourier transform (FFT) is an important tool in the fields of image and signal processing. The hexagonal fast Fourier transform (HFFT) uses existing FFT routines to compute the discrete Fourier transform (DFT) of images that have been captured with hexagonal sampling. The hexagonal grid serves as the optimal sampling lattice for isotropically band-limited two-dimensional signals and has a sampling efficiency which is 13.4% greater than the sampling efficiency obtained from rectangular sampling. Several other advantages of hexagonal sampling include consistent connectivity, higher symmetry, greater angular resolution, and equidistant neighbouring pixels. Sometimes, more than one of these advantages compound together, thereby increasing the efficiency by 50% in terms of computation and storage when compared to rectangular sampling. Despite all of these advantages of hexagonal sampling over rectangular sampling, its application has been limited because of the lack of an efficient coordinate system. However, that limitation has been removed with the recent development of the hexagonal efficient coordinate system (HECS, formerly known as array set addressing or ASA) which includes the benefit of a separable Fourier kernel. The existence of a separable Fourier kernel for a hexagonally sampled image allows the use of existing FFT routines to efficiently compute the DFT of such an image.
Preliminaries
Hexagonal Efficient Coordinate System (HECS)
The hexagonal efficient coordinate system (formerly known as array set addressing (ASA)) was developed based on the fact that a hexagonal grid can be represented as a combination of two interleaved rectangular arrays. It is easy to address each individual array using familiar integer-valued row and column indices, and the individual arrays are distinguished by a single binary coordinate. Therefore, a full address for any point in the hexagonal grid can be uniquely represented by three coordinates.
where the coordinate |
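As an illustration of this three-coordinate addressing, the sketch below maps a HECS-style address (a, r, c) to Cartesian sample positions. The particular offset convention (sub-array a = 1 shifted by half a sample spacing in both x and y) is an assumption chosen for illustration, not taken from the HECS specification:

```python
import math

# Hexagonal grid as two interleaved rectangular arrays: binary coordinate a
# selects the sub-array, (r, c) are integer row/column indices within it.
def hecs_to_cartesian(a: int, r: int, c: int, pitch: float = 1.0):
    x = pitch * (c + 0.5 * a)
    y = pitch * math.sqrt(3) * (r + 0.5 * a)
    return x, y

# Under this convention, the six nearest neighbours of (0, 0, 0) all lie at
# distance 1, as expected for a hexagonal lattice with unit sample spacing.
for addr in [(1, 0, 0), (1, -1, 0), (1, 0, -1), (1, -1, -1), (0, 0, 1), (0, 0, -1)]:
    x, y = hecs_to_cartesian(*addr)
    print(addr, round(math.hypot(x, y), 6))  # each prints ... 1.0
```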
https://en.wikipedia.org/wiki/Automorphic%20L-function | In mathematics, an automorphic L-function is a function L(s,π,r) of a complex variable s, associated to an automorphic representation π of a reductive group G over a global field and a finite-dimensional complex representation r of the Langlands dual group LG of G, generalizing the Dirichlet L-series of a Dirichlet character and the Mellin transform of a modular form. They were introduced by Robert Langlands.
Surveys of automorphic L-functions were later given by Armand Borel and by James Arthur and Stephen Gelbart.
Properties
Automorphic $L$-functions should have the following properties (which have been proved in some cases but are still conjectural in other cases).
The $L$-function $L(s, \pi, r)$ should be a product over the places $v$ of the global field $F$ of local $L$-functions:
$$L(s, \pi, r) = \prod_v L(s, \pi_v, r_v).$$
Here the automorphic representation $\pi = \otimes_v \pi_v$ is a tensor product of the representations $\pi_v$ of local groups.
The $L$-function is expected to have an analytic continuation as a meromorphic function of all complex $s$, and satisfy a functional equation
$$L(s, \pi, r) = \epsilon(s, \pi, r)\, L(1 - s, \pi, r^\vee),$$
where the factor $\epsilon(s, \pi, r)$ is a product of "local constants"
$$\epsilon(s, \pi, r) = \prod_v \epsilon(s, \pi_v, r_v, \psi_v),$$
almost all of which are 1.
General linear groups
Godement and Jacquet constructed the automorphic L-functions for general linear groups with r the standard representation (so-called standard L-functions) and verified analytic continuation and the functional equation, by using a generalization of the method in Tate's thesis. Ubiquitous in the Langlands Program are Rankin-Selberg products of representations of GL(m) and GL(n). The resulting Rankin-Selberg L-functions satisfy a number of analytic properties, their functional equation being first proved via the Langlands–Shahidi method.
In general, the Langlands functoriality conjectures imply that automorphic L-functions of a connected reductive group are equal to products of automorphic L-functions of general linear groups. A proof of Langlands functoriality would also lead towards a thorough understanding of the analytic properties of automorphic L-functions. |
https://en.wikipedia.org/wiki/Toyota%20Active%20Control%20Suspension | Toyota Active Control Suspension was (according to Toyota) the world's first fully active suspension.
Two versions of Toyota's Active Control Suspension system went into production - the first was a very limited production run from 1990 to 1991 of 300 units of the ST183 Celica, called the Active Sports. This was the first production car in the world to utilise an active suspension system. The suspension employed conventional coil-spring struts and 4-wheel steering. No anti-roll (stabiliser) bars were fitted as the strut damping was actively controlled by a combined power steering/suspension fluid pump and valve body that counteracted roll and pitch forces. This system of controlling damping force while utilising conventional springs was largely achieved with the much simpler Toyota Electronic Modulated Suspension system (TEMS).
The second version of the Active Suspension system came with the UZZ32 Soarer produced between 1991 and 1996. It was a complex, computer-controlled system that removed both conventional springs and anti-roll (stabiliser) bars in favour of fully hydropneumatic struts controlled by an array of sensors (such as axis accelerometers, suspension height and wheel speed) that detected cornering, acceleration and braking forces. The system worked well and gave an unusually controlled yet smooth ride with no body roll. However, the additional weight and power requirements of the system affected straight-line performance somewhat.
Due to the complexity and cost of the UZZ32 Soarer, only 873 were produced.
Mercedes-Benz introduced a very similar active suspension, called Active Body Control, on the Mercedes-Benz CL-Class in 1999.
Vehicles
Toyota Celica (ST183) 1990-1991
Toyota Soarer (UZZ32) 1991–1996
Toyota Curren (ST207) 1994-1995 XS Touring Selection
See also
Active Body Control
Toyota Electronic Modulated Suspension |
https://en.wikipedia.org/wiki/Trouton%E2%80%93Noble%20experiment | The Trouton–Noble experiment was an attempt to detect motion of the Earth through the luminiferous aether, and was conducted in 1901–1903 by Frederick Thomas Trouton and H. R. Noble. It was based on a suggestion by George FitzGerald that a charged parallel-plate capacitor moving through the aether should orient itself perpendicular to the motion. Like the earlier Michelson–Morley experiment, Trouton and Noble obtained a null result: no motion relative to the aether could be detected. This null result was reproduced, with increasing sensitivity, by Rudolf Tomaschek (1925, 1926), Chase (1926, 1927) and Hayden in 1994. Such experimental results are now seen, consistent with special relativity, to reflect the validity of the principle of relativity and the absence of any absolute rest frame (or aether). The experiment is a test of special relativity.
The Trouton–Noble experiment is also related to thought experiments such as the "Trouton–Noble paradox," and the "right-angle lever" or "Lewis–Tolman paradox". Several solutions have been proposed to solve this kind of paradox, all of them in agreement with special relativity.
Trouton–Noble experiment
In the experiment, a suspended parallel-plate capacitor is held by a fine torsion fiber and is charged. If the aether theory were correct, the change in Maxwell's equations due to the Earth's motion through the aether would lead to a torque causing the plates to align perpendicular to the motion. This is given by:
$$\tau = -E' \frac{v^2}{c^2} \sin 2\alpha',$$
where $\tau$ is the torque, $E'$ the energy of the condenser, and $\alpha'$ the angle between the normal of the plate and the velocity.
On the other hand, the assertion of special relativity that Maxwell's equations are invariant for all frames of reference moving at constant velocities would predict no torque (a null result). Thus, unless the aether were somehow fixed relative to the Earth, the experiment is a test of which of these two descriptions is more accurate. Its null result thus confirms Lorentz invariance of special rela |
https://en.wikipedia.org/wiki/Bharat%20Operating%20System%20Solutions | Bharat Operating System Solutions (BOSS GNU/Linux) is an Indian Linux distribution based on Debian. Its latest stable version is 9.0 ("Urja"), which was released in February 2021.
Editions
BOSS Linux was released in four editions for different purposes:
BOSS Desktop: Designed for personal, home, and office use.
EduBOSS: Designed for schools and the education community.
BOSS Advanced Server: The server-oriented edition.
BOSS MOOL: Minimalistic Object Oriented Linux, a specialized edition for specific purposes.
History
BOSS Linux was developed by the Centre for Development of Advanced Computing with the aim of promoting the adoption of free and open-source software throughout India. As a vital deliverable of the National Resource Centre for Free and Open Source Software, it has an enhanced desktop environment that includes support for various Indian languages and instructional software.
The software has been endorsed by the Government of India for adoption and implementation in India. BOSS Linux has been certified by the Linux Foundation for compliance with the Linux Standard Base. BOSS Linux supported Intel and AMD IA-32/x86-64 architecture until version 6 ("Anoop"). From version 7 ("Drishti"), the development shifted to x86-64 architecture only.
Versions
BOSS Linux has nine major releases:
BOSS 5.0 (Anokha)
This release came with many new applications that focused mainly on enhanced security and user-friendliness. The distribution included over 12,800 new packages, for a total of over . Most of the software in the distribution had been updated as well: over software packages (70% of all packages in Savir). BOSS 5.0 supported Linux Standard Base (LSB) version 4.1. It also featured XBMC to allow users to easily browse and view videos, photos, podcasts, and music from a hard drive, optical disc, local network, and the Internet.
BOSS 6.0 (Anoop)
There are several significant updates in BOSS Linux 6.0 (Anoop) from 5.0 (Anokha). Notable changes include a kernel update from 3.10 t |
https://en.wikipedia.org/wiki/WARP%20%28systolic%20array%29 | The Warp machines were a series of increasingly general-purpose systolic array processors, created by Carnegie Mellon University (CMU), in conjunction with industrial partners G.E., Honeywell and Intel, and funded by the U.S. Defense Advanced Research Projects Agency (DARPA).
The Warp projects were started in 1984 by H. T. Kung at Carnegie Mellon University. The Warp projects yielded research results, publications and advancements in general purpose systolic hardware design, compiler design and systolic software algorithms.
There were three distinct machine designs known as the WW-Warp (Wire Wrap Warp), PC-Warp (Printed Circuit Warp), and iWarp (integrated circuit Warp, conveniently also a play on the “i” for Intel).
Each successive generation became increasingly general-purpose by increasing memory capacity and loosening the coupling between processors. Only the original WW-Warp forced a truly lock step sequencing of stages, which severely restricted its programmability but was in a sense the purest “systolic-array” design.
Warp machines were attached to Sun workstations (UNIX based). Software development for all models of Warp machines was done on Sun workstations.
A research compiler, for a language known as “W2,” targeted all three machines and was the only compiler for the WW-Warp and PC-Warp, while it served as an early compiler during development of the iWarp. The production compiler for iWarp was a C and Fortran compiler based on the AT&T pcc compiler for UNIX, ported under contract for Intel and then extensively modified and extended by Intel.
The WW-Warp and PC-Warp machines were systolic array computers with a linear array of ten or more cells, each of which is a programmable processor capable of performing 10 million single precision floating-point operations per second (10 MFLOPS). A 10-cell machine had a peak performance of 100 MFLOPS. The iWarp machines doubled this performance, delivering 20 MFLOPS single precision and supporting double precision f |
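The lock-step multiply-accumulate style of such cells can be illustrated with a toy software model. The sketch below is a conceptual, greatly simplified systolic-style FIR pipeline (a semi-systolic design in which the input is broadcast and partial sums pulse through the cells); it is not a model of the actual Warp architecture:

```python
# Toy 1-D systolic-style FIR pipeline: each cell holds one fixed weight,
# every cell fires one multiply-accumulate per step, and partial sums
# shift one cell to the right per step. Computes y[j] = sum_k w[k]*x[j-k].
def systolic_fir(weights, xs):
    n = len(weights)
    taps = list(reversed(weights))        # cell k holds weight w[n-1-k]
    y_reg = [0.0] * n                     # partial sum resident in each cell
    out = []
    for x in list(xs) + [0.0] * (n - 1):  # trailing zeros flush the pipeline
        y_reg = [0.0] + y_reg[:-1]        # partial sums shift right
        y_reg = [y + t * x for y, t in zip(y_reg, taps)]  # all cells fire
        out.append(y_reg[-1])             # the last cell emits a result
    return out

ws, xs = [1.0, 2.0, 3.0], [1.0, 0.5, -1.0, 2.0]
print(systolic_fir(ws, xs))  # -> [1.0, 2.5, 3.0, 1.5, 1.0, 6.0], the convolution
```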
https://en.wikipedia.org/wiki/Selenium%20responsive%20proteins | Selenium responsive proteins, within human biology, are the class of proteins sensitive to selenium in healthy human beings, in cancer patients, in in-vivo models, or in in-vitro cell culture models.
The original gi accession (version) numbers have been updated to NCBI accession numbers, and protein names have been updated accordingly. |
https://en.wikipedia.org/wiki/Somatomedin | Somatomedins are a group of proteins produced predominantly by the liver when growth hormones act on target tissue. Somatomedins inhibit the release of growth hormones by acting directly on the anterior pituitary and by stimulating the secretion of somatostatin from the hypothalamus.
Somatomedins are a group of proteins that promote cell growth and division in response to stimulation by growth hormone (GH), also known as somatotropin (STH).
Somatomedins have similar biological effects to somatotropin.
In addition to their actions that stimulate growth, somatomedins also stimulate production of somatostatin, which suppresses growth hormone release. Thus, levels of somatomedins are controlled via negative feedback through the intermediates of somatostatin and growth hormone. Somatomedins are produced in many tissues and have autocrine and paracrine actions in addition to their endocrine action. The liver is thought to be the predominant source of circulating somatomedins.
Three forms include:
Somatomedin A, which is another name for insulin-like growth factor 2 (IGF-2)
Somatomedin B, which is derived from vitronectin
Somatomedin C, which is another name for insulin-like growth factor 1 (IGF-1) |
https://en.wikipedia.org/wiki/Chebyshev%E2%80%93Gauss%20quadrature | In numerical analysis Chebyshev–Gauss quadrature is an extension of the Gaussian quadrature method for approximating the value of integrals of the following kind:
$$\int_{-1}^{1} \frac{f(x)}{\sqrt{1 - x^2}}\,dx$$
and
$$\int_{-1}^{1} f(x)\,\sqrt{1 - x^2}\,dx.$$
In the first case
$$\int_{-1}^{1} \frac{f(x)}{\sqrt{1 - x^2}}\,dx \approx \sum_{i=1}^{n} w_i f(x_i),$$
where
$$x_i = \cos\left(\frac{2i - 1}{2n}\,\pi\right)$$
and the weight
$$w_i = \frac{\pi}{n}.$$
In the second case
$$\int_{-1}^{1} f(x)\,\sqrt{1 - x^2}\,dx \approx \sum_{i=1}^{n} w_i f(x_i),$$
where
$$x_i = \cos\left(\frac{i}{n + 1}\,\pi\right)$$
and the weight
$$w_i = \frac{\pi}{n + 1} \sin^2\left(\frac{i}{n + 1}\,\pi\right).$$
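A short Python sketch of the first-kind rule above; the test integrand is an assumption chosen so the exact value, $\pi/2$ for $f(x) = x^2$, is easy to verify:

```python
import math

# Chebyshev-Gauss quadrature of the first kind: approximates the integral
# of f(x)/sqrt(1 - x^2) over [-1, 1] using n nodes and equal weights pi/n.
def chebyshev_gauss(f, n):
    w = math.pi / n
    return sum(w * f(math.cos((2 * i - 1) * math.pi / (2 * n)))
               for i in range(1, n + 1))

# f(x) = x^2: the integral of x^2/sqrt(1-x^2) over [-1, 1] equals pi/2.
print(chebyshev_gauss(lambda x: x * x, 8))  # ~1.5707963
print(math.pi / 2)
```

The rule is exact for polynomial integrands f up to degree 2n − 1, so with n = 8 the result matches π/2 to machine precision.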
See also
Chebyshev polynomials
Chebyshev nodes |
https://en.wikipedia.org/wiki/SCN5A | Sodium channel protein type 5 subunit alpha, also known as NaV1.5 is an integral membrane protein and tetrodotoxin-resistant voltage-gated sodium channel subunit. NaV1.5 is found primarily in cardiac muscle, where it mediates the fast influx of Na+-ions (INa) across the cell membrane, resulting in the fast depolarization phase of the cardiac action potential. As such, it plays a major role in impulse propagation through the heart. A vast number of cardiac diseases is associated with mutations in NaV1.5 (see paragraph genetics). SCN5A is the gene that encodes the cardiac sodium channel NaV1.5.
Gene structure
SCN5A is a highly conserved gene located on human chromosome 3, where it spans more than 100 kb. The gene consists of 28 exons, of which exon 1 and in part exon 2 form the 5' untranslated region (5’UTR) and exon 28 the 3' untranslated region (3’UTR) of the RNA. SCN5A is part of a family of 10 genes that encode different types of sodium channels, i.e. brain-type (NaV1.1, NaV1.2, NaV1.3, NaV1.6), neuronal channels (NaV1.7, NaV1.8 and NaV1.9), skeletal muscle channels (NaV1.4) and the cardiac sodium channel NaV1.5.
Expression pattern
SCN5A is mainly expressed in the heart, where expression is abundant in working myocardium and conduction tissue. In contrast, expression is low in the sinoatrial node and atrioventricular node. Within the heart, a transmural expression gradient from subendocardium to subepicardium is present, with higher expression of SCN5A in the endocardium as compared to the epicardium. SCN5A is also expressed in the gastrointestinal tract.
Splice variants
More than 10 different splice isoforms have been described for SCN5A, of which several harbour different functional properties. In the heart, two isoforms are mainly expressed (ratio 1:2), of which the least predominant one contains an extra glutamine at position 1077 (1077Q). Moreover, different isoforms are expressed during fetal life and adult, differing in the inclusion of an alternati |