https://en.wikipedia.org/wiki/Dirac%20bracket
The Dirac bracket is a generalization of the Poisson bracket developed by Paul Dirac to treat classical systems with second class constraints in Hamiltonian mechanics, and thus to allow them to undergo canonical quantization. It is an important part of Dirac's development of Hamiltonian mechanics to elegantly handle more general Lagrangians; specifically, when constraints are at hand, so that the number of apparent variables exceeds that of dynamical ones. More abstractly, the two-form implied by the Dirac bracket is the restriction of the symplectic form to the constraint surface in phase space. This article assumes familiarity with the standard Lagrangian and Hamiltonian formalisms, and their connection to canonical quantization. Details of Dirac's modified Hamiltonian formalism are also summarized to put the Dirac bracket in context. Inadequacy of the standard Hamiltonian procedure The standard development of Hamiltonian mechanics is inadequate in several specific situations: When the Lagrangian is at most linear in the velocity of at least one coordinate; in which case, the definition of the canonical momentum leads to a constraint. This is the most frequent reason to resort to Dirac brackets. For instance, the Lagrangian (density) for any fermion is of this form. When there are gauge (or other unphysical) degrees of freedom which need to be fixed. When there are any other constraints that one wishes to impose in phase space. Example of a Lagrangian linear in velocity An example in classical mechanics is a particle with charge e and mass m confined to the x–y plane with a strong constant, homogeneous perpendicular magnetic field, pointing in the z-direction with strength B. The Lagrangian for this system with an appropriate choice of parameters involves A, the vector potential for the magnetic field B; c, the speed of light in vacuum; and V, an arbitrary external scalar potential; one could easily take it to be quadratic in x and y, witho
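The Lagrangian formula itself was lost in extraction; a sketch of the conventional form (a reconstruction, not a quote from the article, assuming charge e, mass m, field B in the z-direction, and the symmetric-gauge vector potential) is:

```latex
L = \frac{m}{2}\left(\dot{x}^{2}+\dot{y}^{2}\right)
    + \frac{e}{c}\left(\dot{x}\,A_{x}+\dot{y}\,A_{y}\right) - V(x,y),
\qquad
\vec{A} = \frac{B}{2}\,(-y,\;x).
```

In the strong-field (or negligible-mass) limit the kinetic term drops out, leaving a Lagrangian linear in the velocities, roughly L ≈ (eB/2c)(x ẏ − y ẋ) − V(x, y); the canonical momenta are then constrained functions of the coordinates, which is exactly the situation the Dirac bracket is designed to handle.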
https://en.wikipedia.org/wiki/Email%20archiving
Email archiving is the act of preserving and making searchable all email to/from an individual. Email archiving solutions capture email content either directly from the email application itself or during transport. The messages are typically then stored on magnetic disk storage and indexed to simplify future searches. In addition to simply accumulating email messages, these applications index and provide quick, searchable access to archived messages independent of the users of the system, using one of several technical methods of implementation. The reasons a company may opt to implement an email archiving solution include protecting mission-critical data, meeting the retention and supervision requirements of applicable regulations, and supporting e-discovery. It is predicted that the email archiving market will grow from nearly $2.1 billion in 2009 to over $5.1 billion in 2013. Definition Email archiving is an automated process for preserving and protecting all inbound and outbound email messages (as well as attachments and metadata) so they can be accessed at a later date should the need arise. The benefits of email archiving include the recovery of lost or accidentally deleted emails, accelerated audit response, preservation of the intellectual property contained in business email and its attachments, and "eDiscovery" in the case of litigation or internal investigations (what happened when, who said what). Overview Email archiving is the process of capturing, preserving, and making easily searchable all email traffic to and from a given individual, organization, or service. Email archiving solutions capture email content either directly from the email server itself (journaling) or during message transit. The email archive can then be stored on magnetic tape, disk arrays, or, now more often than not, in the cloud. Regardless of the location of the email archive, it gets indexed in order to speed future searches, and most archive vendors provide a search
https://en.wikipedia.org/wiki/Email%20bankruptcy
Email bankruptcy is deleting or ignoring all emails older than a certain date, due to an overwhelming volume of messages. The term is usually attributed to author Lawrence Lessig in 2004, though it can also be attributed to Sherry Turkle in 2002. An insurmountable volume or backlog of legitimate messages (e.g. on return from an extended absence) usually leads to bankruptcy. During the act of declaring email bankruptcy, a message is usually sent to all senders explaining the problem, that their message has been deleted, and that if their message still requires a response they should resend their message. Similarly, the inability to maintain an overview over messages in an instant messenger chat room may be referred to as "chat room bankruptcy".
https://en.wikipedia.org/wiki/KZNO-LD
KZNO-LD (channel 12) is a low-power television station in Los Angeles, California, United States. Owned by the Venture Technologies Group, it transmits from Mount Harvard, a peak adjacent to Mount Wilson in Los Angeles County, as a Spanish-language religious radio station that can be received at 87.7 FM. Its ATSC 3.0 video feed broadcasts Jewelry Television on digital channel 6.1. History The station was founded on August 7, 1996, as translator K06MU in Big Bear Lake. It was also available to area subscribers of Charter Cable on channel 6. The station's programming at the time was similar to a public-access cable television channel, consisting primarily of news, public affairs and travel programs produced by local residents of the Big Bear Valley. In 2016, ownership was transferred from Bear Valley Broadcasting to Venture Technologies, the previous owner of KSFV-CA, another station that operated on analog channel 6 as a radio station; as KSFV-CD, that station now shares transmitting facilities on Mount Harvard (a peak adjacent to Mount Wilson) in Los Angeles County with Ontario-licensed KPOM-CD. While operating as an analog TV station, KZNO-LP audio could be heard locally by radio receivers at 87.7 FM, since TV channel 6 is in the 82–88 MHz range. Because it was a low-power station, it was not obligated to switch to a digital signal on June 12, 2009, as was required of all full-power TV stations in the United States. As of July 9, 2021, KZNO-LP had ceased its analog TV transmissions, prior to the July 13, 2021, Federal Communications Commission (FCC) deadline for LPTV stations to end analog TV transmissions. The station was licensed for digital operation effective July 15, 2021, changing its call sign to KZNO-LD at the same time. Effective July 27, 2021, the station was granted a six-month special temporary authority to resume audio transmissions receivable at 87.7 FM. On July 20, 2023, an FCC "Report and Order" included this station as one of 13 "FM6" stations a
https://en.wikipedia.org/wiki/Generic%20Security%20Service%20Algorithm%20for%20Secret%20Key%20Transaction
GSS-TSIG (Generic Security Service Algorithm for Secret Key Transaction) is an extension to the TSIG DNS authentication protocol for secure key exchange. It is a GSS-API algorithm which uses Kerberos for passing security tokens to provide authentication, integrity and confidentiality. GSS-TSIG (RFC 3645) uses a mechanism like SPNEGO with Kerberos or NTLM. In Windows, this implementation is called Secure Dynamic Update. GSS-TSIG uses TKEY records for key exchange between the DNS client and server in GSS-TSIG mode. For authentication between the DNS client and Active Directory, the AS-REQ, AS-REP, TGS-REQ, TGS-REP exchanges must take place for granting of ticket and establishing a security context. The security context has a limited lifetime during which dynamic updates to the DNS server can take place.
https://en.wikipedia.org/wiki/Emil%20Rupp
Philipp Heinrich Emil Rupp (1 July 1898 – 10 April 1979) was a German physicist, regarded by many as a respectable and important experimentalist in the late 1920s. He was later forced to recant all five of the papers he had published in 1935, admitting that his findings and experiments had been fictions. There is evidence that most if not all of his earlier experimental results were forged as well. Canal ray experiments In 1926 Rupp's canal ray experiments seemed to corroborate Albert Einstein's theories on wave–particle duality. He published these results in a paper that was printed next to a theoretical paper on the same subject by Einstein, who evidently accepted Rupp's alleged findings as confirming his (Einstein's) theoretical model. Rupp's experimental results were later shown to have been falsified (although subsequent experimental work re-confirmed Einstein's model). Exposure of fraud Although the validity of Rupp's experimental results had been challenged by other workers in the field repeatedly throughout his career, it was not until 1935 that his misdeeds were fully exposed. In 1935 experimentalists Walther Gerlach and Eduard Rüchardt published a corrected version of Einstein's mirror diagram in an article that argued that Rupp had falsely claimed to have carried out the rotated mirror experiment. Some fellow physicists at the AEG labs grew suspicious of Rupp when he claimed to have accelerated protons at 500 kV, something for which he lacked the technical facilities. Rupp had to publicly retract five publications from the previous year. He attached a psychiatric diagnosis stating that he had written them under the influence of "dreamlike states" caused by psychasthenia. Rupp never worked again as a physicist, and all other physicists ceased to refer to any of his alleged results. See also List of experimental errors and frauds in physics
https://en.wikipedia.org/wiki/Colloid%20vibration%20current
Colloid vibration current is an electroacoustic phenomenon that arises when ultrasound propagates through a fluid that contains ions and either solid particles or emulsion droplets. The pressure gradient in an ultrasonic wave moves particles relative to the fluid. This motion disturbs the double layer that exists at the particle-fluid interface. The picture illustrates the mechanism of this distortion. Practically all particles in fluids carry a surface charge. This surface charge is screened with an equally charged diffuse layer; this structure is called the double layer. Ions of the diffuse layer are located in the fluid and can move with the fluid. Fluid motion relative to the particle drags these diffuse ions in the direction of one or the other of the particle's poles. The picture shows ions dragged towards the left hand pole. As a result of this drag, there is an excess of negative ions in the vicinity of the left hand pole and an excess of positive surface charge at the right hand pole. As a result of this charge excess, particles gain a dipole moment. These dipole moments generate an electric field that in turn generates measurable electric current. This phenomenon is widely used for measuring zeta potential in concentrated colloids. See also Electric sonic amplitude Electroacoustic phenomena Interface and colloid science Zeta potential
https://en.wikipedia.org/wiki/Nutation%20%28botany%29
Nutation refers to the bending movements of stems, roots, leaves and other plant organs caused by differences in growth in different parts of the organ. Circumnutation refers specifically to the circular movements often exhibited by the tips of growing plant stems, caused by repeating cycles of differences in growth around the sides of the elongating stem. Nutational movements are usually distinguished from 'variational' movements caused by temporary differences in the water pressure inside plant cells (turgor). Simple nutation occurs in flat leaves and flower petals, caused by unequal growth of the two sides of the surface. For example, in young leaf buds the outer surface of each leaflet grows faster, causing it to curve over its neighbors and form a compact bud. As the bud expands, growth becomes more rapid on the inner surface of the leaves, causing the bud to open and the leaves to flatten out. Similar inequality of growth, but more sharply localized, leads to the folding and rolling of the leaf in the bud, and to the changing shapes of flower petals. Circumnutational movements are most obvious in growing seedlings, where the combination of circular movement and upward growth causes the tip to move up in a spiral path. The first detailed analysis of circumnutation was Charles Darwin's The Power of Movement in Plants; he concluded that most plant movements were modifications of circumnutation, but many counterexamples are now known. Circumnutation is not a direct response to gravity or the direction of illumination, but these factors and many physiological processes can influence its direction, timing and amplitude. Although the function of circumnutation in most plants is not known, many twining plants have adapted these movements to help them find and twine around vertical objects such as tree trunks, and to help tendrils find and wind around smaller supports. The growing tip of the vine or tendril initially swings in wide circles that maximize its
https://en.wikipedia.org/wiki/Rydberg%20matter
Rydberg matter is an exotic phase of matter formed by Rydberg atoms; it was predicted around 1980 by É. A. Manykin, M. I. Ozhovan and P. P. Poluéktov. It has been formed from various elements like caesium, potassium, hydrogen and nitrogen; studies have been conducted on theoretical possibilities like sodium, beryllium, magnesium and calcium. It has been suggested as a material from which diffuse interstellar bands may arise. Circular Rydberg states, where the outermost electron is found in a planar circular orbit, are the most long-lived, with lifetimes of up to several hours, and are the most common. Physical Rydberg matter usually consists of hexagonal planar clusters; these cannot be very big because of the retardation effect caused by the finite speed of light. Hence, they are not gases or plasmas; nor are they solids or liquids; they are most similar to dusty plasmas with small clusters in a gas. Though Rydberg matter can be studied in the laboratory by laser probing, the largest cluster reported consists of only 91 atoms, but it has been shown to be behind extended clouds in space and the upper atmosphere of planets. Bonding in Rydberg matter is caused by delocalisation of the high-energy electrons to form an overall lower energy state. The way in which the electrons delocalise is to form standing waves on loops surrounding nuclei, creating quantised angular momentum and the defining characteristics of Rydberg matter. It is a generalised metal by way of the quantum numbers influencing loop size but restricted by the bonding requirement for strong electron correlation; it shows exchange-correlation properties similar to covalent bonding. Electronic excitation and vibrational motion of these bonds can be studied by Raman spectroscopy. Lifetime For reasons still debated by the physics community, in part because of the lack of methods to observe clusters, Rydberg matter is highly stable against disintegration by emission of radiation; the character
https://en.wikipedia.org/wiki/Chromosome%20segregation
Chromosome segregation is the process in eukaryotes by which two sister chromatids formed as a consequence of DNA replication, or paired homologous chromosomes, separate from each other and migrate to opposite poles of the nucleus. This segregation process occurs during both mitosis and meiosis. Chromosome segregation also occurs in prokaryotes. However, in contrast to eukaryotic chromosome segregation, replication and segregation are not temporally separated. Instead segregation occurs progressively following replication. Mitotic chromatid segregation During mitosis chromosome segregation occurs routinely as a step in cell division (see mitosis diagram). As indicated in the mitosis diagram, mitosis is preceded by a round of DNA replication, so that each chromosome forms two copies called chromatids. These chromatids separate to opposite poles, a process facilitated by a protein complex referred to as cohesin. Upon proper segregation, a complete set of chromatids ends up in each of two nuclei, and when cell division is completed, each DNA copy previously referred to as a chromatid is now called a chromosome. Meiotic chromosome and chromatid segregation Chromosome segregation occurs at two separate stages during meiosis called anaphase I and anaphase II (see meiosis diagram). In a diploid cell there are two sets of homologous chromosomes of different parental origin (e.g. a paternal and a maternal set). During the phase of meiosis labeled “interphase s” in the meiosis diagram there is a round of DNA replication, so that each of the chromosomes initially present is now composed of two copies called chromatids. These chromosomes (paired chromatids) then pair with the homologous chromosome (also paired chromatids) present in the same nucleus (see prophase I in the meiosis diagram). The process of alignment of paired homologous chromosomes is called synapsis (see Synapsis). During synapsis, genetic recombination usually occurs. Some of the recombination even
https://en.wikipedia.org/wiki/Myclobutanil
Myclobutanil is a triazole chemical used as a fungicide. It is a steroid demethylation inhibitor, specifically inhibiting ergosterol biosynthesis. Ergosterol is a critical component of fungal cell membranes. Stereoisomerism Safety The Safety Data Sheet indicates the following hazards: Suspected of damaging fertility or the unborn child. Toxic to aquatic life with long lasting effects. The first hazard has caused this chemical to be placed on the 1986 California Proposition 65 toxics list. When heated, myclobutanil decomposes to produce corrosive and/or toxic fumes, including carbon monoxide, carbon dioxide, hydrogen chloride, hydrogen cyanide, and nitrogen oxides. Banned for cannabis cultivation Myclobutanil is banned in Canada, Colorado, Washington, Oregon, and Oklahoma for the production of medical and recreational cannabis. In 2014, a Canadian news investigation by The Globe and Mail reported the discovery of myclobutanil in medical cannabis produced by at least one government licensed grower. In September 2019, NBC News commissioned CannaSafe to test THC cartridges for heavy metals, pesticides, and residual solvents like Vitamin E; pesticides, including myclobutanil, were found in products from unlicensed dealers. In Michigan, the current state action limit for myclobutanil is 200 ppb in cannabis products.
https://en.wikipedia.org/wiki/Radiophysics
Radiophysics (also written "radio physics") is a branch of physics focused on the theoretical and experimental study of certain kinds of radiation: their emission, propagation and interaction with matter. The term is used in the following major meanings: study of radio waves (the original area of research) study of radiation used in radiology study of other ranges of the spectrum of electromagnetic radiation in some specific applications Among the main applications of radiophysics are radio communications, radiolocation, radio astronomy and radiology. Branches Classical radiophysics deals with radio wave communications and detection Quantum radiophysics (physics of lasers and masers; Nikolai Basov was the founder of quantum radiophysics in the Soviet Union) Statistical radiophysics
https://en.wikipedia.org/wiki/Cut%20locus
The cut locus is a mathematical structure defined for a closed set S in a space X as the closure of the set of all points p that have two or more distinct shortest paths or geodesics in X from S to p. For example, the cut locus of every point on the regular 2-sphere consists of exactly one point, namely the antipodal point. Definition in a special case Let (M, d) be a metric space and let p be a point in M. The cut locus of p in M, CL(p), is the locus of all the points x in M for which there exist at least two distinct shortest paths from x to p in M. More formally, x belongs to CL(p) for a point x in M if and only if there exist two shortest paths γ1, γ2 : [0, 1] → M such that γ1(0) = γ2(0) = x, γ1(1) = γ2(1) = p, and the trajectories of the two paths are distinct. Examples For example, let S be the boundary of a simple polygon, and X the interior of the polygon. Then the cut locus is the medial axis of the polygon. The points on the medial axis are centers of maximal disks that touch the polygon boundary at two or more points, corresponding to two or more shortest paths to the disk center. As a second example, let S be a point x on the surface of a convex polyhedron P, and X the surface itself. Then the cut locus of x is what is known as the ridge tree of P with respect to x. This ridge tree has the property that cutting the surface along its edges unfolds P to a simple planar polygon. This polygon can be viewed as a net for the polyhedron.
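A discrete analogue of the definition can be computed directly: in an unweighted graph, the vertices reached by two or more distinct shortest paths from a source play the role of the cut locus. The sketch below (an illustration, not taken from the article) uses a cycle graph as a discrete circle, where the cut locus of a vertex is its antipodal vertex, mirroring the 2-sphere example.

```python
from collections import deque

def cut_locus(adj, src):
    """Vertices of an unweighted graph reached by two or more
    distinct shortest paths from src (a discrete cut locus)."""
    dist = {src: 0}
    npaths = {src: 1}          # number of distinct shortest paths
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:            # first time v is reached
                dist[v] = dist[u] + 1
                npaths[v] = npaths[u]
                q.append(v)
            elif dist[v] == dist[u] + 1: # another shortest path to v
                npaths[v] += npaths[u]
    return {v for v, k in npaths.items() if k >= 2}

# Cycle graph C8, a discrete circle: the cut locus of vertex 0
# is the antipodal vertex 4.
n = 8
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
locus = cut_locus(adj, 0)
```

On an odd cycle the set comes out empty, since no vertex has two equal-length paths from the source; the continuous circle, by contrast, always has the single antipodal point in the cut locus.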
https://en.wikipedia.org/wiki/Enation
Enations are scaly leaflike structures, differing from leaves in their lack of vascular tissue. They are created by some leaf diseases and occur normally on Psilotum. Enations are also found on some early plants such as Rhynia, where they are hypothesized to have aided in photosynthesis.
https://en.wikipedia.org/wiki/Matutinal
Matutinal, matinal (in entomological writings), and matutine are terms used in the life sciences to indicate something of, relating to, or occurring in the early morning. The term may describe crepuscular animals that are significantly active during the predawn or early morning hours. During the morning twilight period and shortly thereafter, these animals partake in important tasks, such as scanning for mates, mating, and foraging. Matutinal behaviour is thought to be adaptive because there may be less competition between species, and sometimes even a higher prevalence of food during these hours. It may also serve as an anti-predator adaptation by allowing animals to avoid the dangers that may come with both diurnal and nocturnal activity. Etymology The word matutinal is derived from the Latin word , meaning "of or pertaining to the morning", from Mātūta, the Roman goddess of the morning or dawn (+ -īnus '-ine' + -ālis '-al'). Adaptive relevance Selection pressures, such as high predatory activity or low food availability, may require animals to change their behaviours to adapt. An animal changing the time of day at which it carries out significant tasks (e.g., mating and/or foraging) is recognized as one of these adaptive behaviours. For example, human activity, which is more predominant during daylight hours, has forced certain species (most often larger mammals) living in urban areas to shift their schedules to crepuscular ones. When observed in environments where there is little or no human activity, these same species often do not exhibit this temporal shift. It may be argued that if the goal is to avoid human activity, or any other diurnal predator's activity, a nocturnal schedule would be safer. However, many of these animals depend on sight, so a matutinal or crepuscular schedule is especially advantageous as it allows animals to both avoid predation and have sufficient light to mate and forage. Matutinal mating For certain species, commencing mating
https://en.wikipedia.org/wiki/Vanillic%20acid
Vanillic acid (4-hydroxy-3-methoxybenzoic acid) is a dihydroxybenzoic acid derivative used as a flavoring agent. It is an oxidized form of vanillin. It is also an intermediate in the production of vanillin from ferulic acid. Occurrence in nature The highest amount of vanillic acid in plants known so far is found in the root of Angelica sinensis, an herb indigenous to China, which is used in traditional Chinese medicine. Occurrences in food Açaí oil, obtained from the fruit of the açaí palm (Euterpe oleracea), is rich in vanillic acid. It is one of the main natural phenols in argan oil. It is also found in wine and vinegar. Metabolism Vanillic acid is one of the main catechin metabolites found in humans after consumption of green tea infusions. Synthesis Vanillic acid can be obtained from the oxidation of vanillin by various oxidizing agents. Using Pd/C, NaBH4, and KOH, the conversion was reported to occur in ~89% yield.
https://en.wikipedia.org/wiki/Hybrid%20Insect%20Micro-Electro-Mechanical%20Systems
Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) is a project of DARPA, a unit of the United States Department of Defense. The project, created in 2006, aims at the creation of tightly coupled machine-insect interfaces by placing micro-mechanical systems inside the insects during the early stages of metamorphosis. After implantation, the "insect cyborgs" could be controlled by sending electrical impulses to their muscles. The primary application is surveillance. The project was created with the ultimate goal of delivering an insect within 5 meters of a target located 100 meters away from its starting point. In 2008, a team from the University of Michigan demonstrated a cyborg unicorn beetle at an academic conference in Tucson, Arizona. The beetle was able to take off and land, turn left or right, and demonstrate other flight behaviors. Researchers at Cornell University demonstrated the successful implantation of electronic probes into tobacco hornworms in the pupal stage.
https://en.wikipedia.org/wiki/Daikon%20%28system%29
Daikon is a computer program that detects likely invariants of programs. An invariant is a condition that always holds true at certain points in the program. It is mainly used for debugging programs in late development, or checking modifications to existing code. Properties Daikon can detect properties in C, C++, Java, Perl, and IOA programs, as well as spreadsheet files or other data sources. Daikon is easy to extend and is free software. External links Daikon official home site Source repository on GitHub Dynamically Discovering Likely Program Invariants, Michael D. Ernst's PhD thesis (using Daikon)
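The core idea of dynamic invariant detection can be illustrated with a toy sketch (not Daikon's actual implementation or API): record variable values observed at a program point, then keep only those candidate invariants that hold on every observed trace.

```python
def detect_invariants(traces, candidates):
    """Keep the candidate invariants that hold on every observed trace —
    a toy version of the idea behind dynamic invariant detection."""
    return [name for name, pred in candidates.items()
            if all(pred(t) for t in traces)]

# Variable values observed at a program point (e.g. a function's exit),
# here for a function computing r = x + y.
traces = [
    {"x": 2, "y": 3, "r": 5},
    {"x": 0, "y": 7, "r": 7},
    {"x": 4, "y": 1, "r": 5},
]

# Candidate invariants, as predicates over one trace record.
candidates = {
    "x >= 0":     lambda t: t["x"] >= 0,
    "r == x + y": lambda t: t["r"] == t["x"] + t["y"],
    "y > x":      lambda t: t["y"] > t["x"],
}

likely = detect_invariants(traces, candidates)
```

Here "x >= 0" and "r == x + y" survive all three traces while "y > x" is falsified by the last one; like Daikon's output, the survivors are only *likely* invariants, since they are inferred from observed executions rather than proved.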
https://en.wikipedia.org/wiki/Sarcoscypha%20coccinea
Sarcoscypha coccinea, commonly known as the scarlet elf cup, scarlet elf cap, or the scarlet cup, is a species of fungus in the family Sarcoscyphaceae of the order Pezizales. The fungus, widely distributed in the Northern Hemisphere, has been found in Africa, Asia, Europe, North and South America, and Australia. The type species of the genus Sarcoscypha, S. coccinea has been known by many names since its first appearance in the scientific literature in 1772. Phylogenetic analysis shows the species to be most closely related to other Sarcoscypha species that contain numerous small oil droplets in their spores, such as the North Atlantic island species S. macaronesica. Due to similar physical appearances and sometimes overlapping distributions, S. coccinea has often been confused with S. occidentalis, S. austriaca, and S. dudleyi. The saprobic fungus grows on decaying sticks and branches in damp spots on forest floors, generally buried under leaf litter or in the soil. The cup-shaped fruit bodies are usually produced during the cooler months of winter and early spring. The brilliant red interior of the cups—from which both the common and scientific names are derived—contrasts with the lighter-colored exterior. The edibility of the fruit bodies is well established, but its small size, small abundance tough texture and insubstantial fruitings would dissuade most people from collecting for the table. The fungus has been used medicinally by the Oneida Native Americans, and also as a colorful component of table decorations in England. In the northern part of Russia, where fruitings are more frequent, it is consumed in salads, fried with smetana, or just used as colored dressing for meals. Molliardiomyces eucoccinea is the name given to the imperfect form of the fungus that lacks a sexually reproductive stage in its life cycle. Taxonomy, naming, and phylogeny The species was originally named Helvella coccinea by the Italian naturalist Giovanni Antonio Scopoli in 1772. Oth
https://en.wikipedia.org/wiki/Voice%20search
Voice search, also called voice-enabled search, allows the user to use a voice command to search the Internet, a website, or an app. In a broader definition, voice search includes open-domain keyword query on any information on the Internet, for example in Google Voice Search, Cortana, Siri and Amazon Echo. Voice search is often interactive, involving several rounds of interaction that allow a system to ask for clarification. Voice search is a type of dialog system. Voice search is not a replacement for typed search. Rather, the search terms, experience and use cases can differ heavily depending on the input type. Method Voice searching is a method of search which allows users to search using spoken voice commands rather than typing. The search can be done on any device with a voice input. Three common methods to activate voice search: Click on the voice command icon Call out the name of the virtual assistant Click on the home button or gesture on interface Activate the virtual assistant Apple: Hey, Siri Google: OK, Google Amazon: Hey, Alexa Microsoft: Hey, Cortana Samsung: Hi, Bixby Supported language Language is the most essential factor for a system to understand, and provide the most accurate results of what the user searches. This covers languages, dialects, and accents, as users want a voice assistant that both understands them and speaks to them understandably. While spoken and written languages differ, voice search should support natural spoken language instead of only transforming voice into text and doing a regular text search with the help of speech recognition. For example, in typed search an eCommerce user can easily copy and paste an alphanumeric product code into the search field, but when speaking the search terms can be very different, such as "show me the new Bluetooth headphones by Samsung". How it works The difference between text and voice search is not only the input type. The mechanism must include an automatic speech recog
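The pipeline described above (speech recognition, then query understanding, then search) can be sketched in a few lines. Everything here is a toy illustration: the ASR stage is faked with a canned transcript, and the stopword list, product catalog, and matching rule are invented placeholders, not any real system's API.

```python
import re

def fake_asr(audio):
    """Stand-in for automatic speech recognition; a real system
    would transcribe the audio input here."""
    return "show me the new Bluetooth headphones by Samsung"

STOPWORDS = {"show", "me", "the", "new", "by"}  # toy list

def to_query(text):
    """Very rough natural-language-to-keywords step."""
    return [w for w in re.findall(r"[a-z0-9]+", text.lower())
            if w not in STOPWORDS]

PRODUCTS = [  # toy catalog
    "samsung bluetooth headphones",
    "wired earbuds",
    "samsung phone charger",
]

def search(keywords):
    """Return catalog entries containing every keyword."""
    return [p for p in PRODUCTS if all(k in p for k in keywords)]

def voice_search(audio):
    return search(to_query(fake_asr(audio)))

results = voice_search(audio=None)  # audio ignored by the fake ASR
```

The point of the sketch is the contrast drawn in the text: the spoken query "show me the new Bluetooth headphones by Samsung" must be reduced to searchable terms, whereas a typed query might already be an exact product code.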
https://en.wikipedia.org/wiki/Synchronizing%20word
In computer science, more precisely, in the theory of deterministic finite automata (DFA), a synchronizing word or reset sequence is a word in the input alphabet of the DFA that sends any state of the DFA to one and the same state. That is, if an ensemble of copies of the DFA are each started in different states, and all of the copies process the synchronizing word, they will all end up in the same state. Not every DFA has a synchronizing word; for instance, a DFA with two states, one for words of even length and one for words of odd length, can never be synchronized. Existence Given a DFA, the problem of determining if it has a synchronizing word can be solved in polynomial time using a theorem due to Ján Černý. A simple approach considers the power set of states of the DFA, and builds a directed graph where nodes belong to the power set, and a directed edge describes the action of the transition function. A path from the node of all states to a singleton state shows the existence of a synchronizing word. This algorithm is exponential in the number of states. A polynomial algorithm results however, due to a theorem of Černý that exploits the substructure of the problem, and shows that a synchronizing word exists if and only if every pair of states has a synchronizing word. Length The problem of estimating the length of synchronizing words has a long history and was posed independently by several authors, but it is commonly known as the Černý conjecture. In 1969, Ján Černý conjectured that (n − 1)² is the upper bound for the length of the shortest synchronizing word for any n-state complete DFA (a DFA with complete state transition graph). If this is true, it would be tight: in his 1964 paper, Černý exhibited a class of automata (indexed by the number n of states) for which the shortest reset words have this length. The best upper bound known is 0.1654n³, far from the lower bound. For n-state DFAs over a k-letter input alphabet, an algorithm by David Eppstein fi
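The power-set approach described above can be implemented directly: do a breadth-first search over subsets of states, applying each letter to a whole subset, until some word collapses the full state set to a singleton. As the text notes, this is exponential in the number of states, so the sketch below is only practical for small DFAs; it is run on the standard Černý automaton with n = 4 states, whose shortest reset word has length (n − 1)² = 9.

```python
from collections import deque

def shortest_synchronizing_word(states, alphabet, delta):
    """BFS over subsets of states (exponential in |states|) to find a
    shortest synchronizing word, or None if the DFA has none."""
    start = frozenset(states)
    if len(start) == 1:
        return ""
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        subset, word = queue.popleft()
        for a in alphabet:
            nxt = frozenset(delta[(s, a)] for s in subset)
            if len(nxt) == 1:          # all copies collapsed: done
                return word + a
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + a))
    return None                        # no subset reaches a singleton

# Černý automaton with n = 4 states: letter 'a' cycles the states,
# letter 'b' sends state 0 to 1 and fixes every other state.
n = 4
delta = {}
for s in range(n):
    delta[(s, "a")] = (s + 1) % n
    delta[(s, "b")] = 1 if s == 0 else s

w = shortest_synchronizing_word(range(n), "ab", delta)
```

Since BFS explores words in order of length, the first word found is a shortest one; for this automaton it has length 9 (for example "baaabaaab" synchronizes every state to state 1).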
https://en.wikipedia.org/wiki/Viaspan
Viaspan was the trademark under which the University of Wisconsin cold storage solution (also known as University of Wisconsin solution or UW solution) was sold. Currently, UW solution is sold under the Belzer UW trademark and others like Bel-Gen or StoreProtect. UW solution was the first solution designed for use in organ transplantation, and became the first intracellular-like preservation medium. Developed in the late 1980s by Folkert Belzer and James Southard for pancreas preservation, the solution soon displaced EuroCollins solution as the preferred medium for cold storage of livers and kidneys, as well as the pancreas. The solution has also been used for hearts and other organs. University of Wisconsin cold storage solution remains what is often called the gold standard for organ preservation, despite the development of other solutions that are in some respects superior.

Development
The guiding principles for the development of UW solution were:
- Osmotic concentration is maintained by the use of metabolically inert substances like lactobionate and raffinose, rather than glucose.
- Hydroxyethyl starch (HES) is used to prevent edema.
- Substances are added to scavenge free radicals, along with steroids and insulin.

Composition
- Potassium lactobionate: 100 mM
- KH2PO4: 25 mM
- MgSO4: 5 mM
- Raffinose: 30 mM
- Adenosine: 5 mM
- Glutathione: 3 mM
- Allopurinol: 1 mM
- Hydroxyethyl starch: 50 g/L

See also HTK Solution (Histidine-tryptophan-ketoglutarate) Biostasis Organ transplant
https://en.wikipedia.org/wiki/Seapine%20Software
Seapine Software was a privately held Mason, Ohio-based software and services company. The company developed a suite of software products that managed the full software development lifecycle. Seapine's tools included testing tools, configuration management, test-case management, and requirements management. The company was best known for its TestTrack line of application lifecycle management (ALM) software. The company was acquired in 2016 by Minneapolis, Minnesota-based Perforce Software, and TestTrack was rebranded as Helix ALM. History Seapine was established in 1995 by Rick and Kelly Riccetti. The company shipped its first product, TestTrack Pro, in 1996. In 2012, Seapine built a new 50,000-square-foot technology headquarters in Mason, Ohio, for its more than 100 employees. In 2016, the company was acquired by Minneapolis, Minnesota-based Perforce Software. Six months after the acquisition, Perforce renamed TestTrack as Helix ALM to match other products in Perforce's suite. Awards and recognition Seapine was on the SD Times 100 list every year from 2006 through 2015. In 2013, Seapine was rated a Champion in the application lifecycle management space by Info-Tech Research Group. Seapine products won several Jolt Awards, including QA Wizard in 2011, Surround SCM in 2008, and TestTrack Studio in 2007. See also List of revision control software Comparison of issue tracking systems
https://en.wikipedia.org/wiki/Shrikhande%20graph
In the mathematical field of graph theory, the Shrikhande graph is a graph discovered by S. S. Shrikhande in 1959. It is a strongly regular graph with 16 vertices and 48 edges, with each vertex having degree 6. Every pair of nodes has exactly two other neighbors in common, whether the pair of nodes is connected or not. Construction The Shrikhande graph can be constructed as a Cayley graph. The vertex set is Z4 × Z4, the group of pairs of integers modulo 4. Two vertices are adjacent if and only if their difference lies in the connection set {±(1,0), ±(0,1), ±(1,1)}. Properties In the Shrikhande graph, any two vertices I and J have two distinct neighbors in common (excluding the two vertices I and J themselves), which holds true whether or not I is adjacent to J. In other words, it is strongly regular with parameters (16, 6, 2, 2), i.e., λ = μ = 2. This equality implies that the graph is associated with a symmetric BIBD. The Shrikhande graph shares these parameters with exactly one other graph, the 4×4 rook's graph, i.e., the line graph L(K4,4) of the complete bipartite graph K4,4. The latter graph is the only line graph L(Kn,n) for which the strong regularity parameters do not determine that graph uniquely but are shared with a different graph, namely the Shrikhande graph (which is not a rook's graph). The Shrikhande graph is locally hexagonal; that is, the neighbors of each vertex form a cycle of six vertices. As with any locally cyclic graph, the Shrikhande graph is the 1-skeleton of a Whitney triangulation of some surface; in the case of the Shrikhande graph, this surface is a torus in which each vertex is surrounded by six triangles. Thus, the Shrikhande graph is a toroidal graph. The embedding forms a regular map in the torus, with 32 triangular faces. The skeleton of the dual of this map (as embedded in the torus) is the Dyck graph, a cubic symmetric graph. The Shrikhande graph is not a distance-transitive graph. It is the smallest distance-regular graph that is not distance-transitive. The automorphism group of the Shrikhande graph is of order 192.
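The Cayley-graph construction can be verified directly. The sketch below (the tuple encoding of Z4 × Z4 is our own) builds the graph from the connection set {±(1,0), ±(0,1), ±(1,1)}, then counts edges and common neighbours; note that over Z4, −(1,0) = (3,0), −(0,1) = (0,3) and −(1,1) = (3,3).

```python
from itertools import product

# Connection set of the Shrikhande graph in Z4 x Z4, closed under negation.
S = {(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)}
V = list(product(range(4), repeat=2))     # the 16 group elements
adj = {v: set() for v in V}
for (a, b) in V:
    for (da, db) in S:
        adj[(a, b)].add(((a + da) % 4, (b + db) % 4))

edges = sum(len(adj[v]) for v in V) // 2  # each edge counted from both ends
print(edges)  # 48
```

A quick check confirms the strong regularity parameters (16, 6, 2, 2): every vertex has degree 6 and every pair of distinct vertices, adjacent or not, has exactly two common neighbours.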
https://en.wikipedia.org/wiki/Shotgun%20proteomics
Shotgun proteomics refers to the use of bottom-up proteomics techniques in identifying proteins in complex mixtures using a combination of high performance liquid chromatography combined with mass spectrometry. The name is derived from shotgun sequencing of DNA which is itself named after the rapidly expanding, quasi-random firing pattern of a shotgun. The most common method of shotgun proteomics starts with the proteins in the mixture being digested and the resulting peptides are separated by liquid chromatography. Tandem mass spectrometry is then used to identify the peptides. Targeted proteomics using SRM and data-independent acquisition methods are often considered alternatives to shotgun proteomics in the field of bottom-up proteomics. While shotgun proteomics uses data-dependent selection of precursor ions to generate fragment ion scans, the aforementioned methods use a deterministic method for acquisition of fragment ion scans. History Shotgun proteomics arose from the difficulties of using previous technologies to separate complex mixtures. In 1975, two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) was described by O’Farrell and Klose with the ability to resolve complex protein mixtures. The development of matrix-assisted laser desorption ionization (MALDI), electrospray ionization (ESI), and database searching continued to grow the field of proteomics. However these methods still had difficulty identifying and separating low-abundance proteins, aberrant proteins, and membrane proteins. Shotgun proteomics emerged as a method that could resolve even these proteins. Advantages Shotgun proteomics allows global protein identification as well as the ability to systematically profile dynamic proteomes. It also avoids the modest separation efficiency and poor mass spectral sensitivity associated with intact protein analysis. Disadvantages The dynamic exclusion filtering that is often used in shotgun proteomics maximizes the number of identified
https://en.wikipedia.org/wiki/Swarm%20Development%20Group
The Swarm Development Group (SDG) is an American non-profit organization to advance the development of complex adaptive system-oriented agent-based modeling (ABM) tools, initiated at the Santa Fe Institute (SFI) in Santa Fe, New Mexico, US. History The Swarm Development Group was formed in 1999 by a group of multidisciplinary scientists, researchers, and software developers, led by Chris Langton. Langton was also the founder of the emerging field of research called artificial life. The initial primary role of the SDG was to house continued development of the Swarm simulation software after the software became independent of the SFI in 1999. The role of the Swarm Development Group has expanded to include the coordination of a long-running conference, SwarmFest, held during May, June, or July each year – typically hosted by a different research university. Developers, users, and researchers gather to present research papers and discuss the state of Swarm and other agent-based modeling platforms like RePast (University of Chicago) and Ascape (Brookings). Typically, a wide range of academic, corporate, and government organizations are represented at SwarmFest. The first SwarmFest was in 1998, while Swarm was still sponsored by the Santa Fe Institute. From SwarmFest 2000 onwards, after the SDG was formed in late 1999, SwarmFests were organized directly by the SDG. Recent SwarmFests have been held at a variety of institutions. SwarmFest 2007 was held at DePaul University's School of Computer Science, Telecommunications, and Information Systems, in Chicago. SwarmFest 2008 was held at Northwestern Memorial Hospital/Northwestern University Feinberg School of Medicine, in Chicago. SwarmFest 2008 had special focus areas on agent-based modeling in systems biology, and the implementation of agent-based models in high-performance computing environments. Between 2009 and 2012, SwarmFest was held at the Santa Fe Complex in Santa Fe, New Mexico. Swarmfest 2013 was
https://en.wikipedia.org/wiki/Plant%20%28control%20theory%29
A plant in control theory is the combination of process and actuator. A plant is often represented by a transfer function (commonly in the s-domain) which indicates the relation between an input signal and the output signal of a system without feedback, commonly determined by physical properties of the system. An example would be an actuator, with its transfer function relating the input of the actuator to its physical displacement. In a system with feedback, the plant still has the same transfer function, but a control unit and a feedback loop (with their respective transfer functions) are added to the system.
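As a minimal illustration (all parameter values here are invented), consider a first-order plant with transfer function G(s) = K/(τs + 1) — for instance an actuator whose displacement lags its input. Its open-loop response to a unit step can be sketched by integrating the equivalent differential equation τ·dy/dt + y = K·u:

```python
# Forward-Euler simulation of a hypothetical first-order plant
# G(s) = K / (tau*s + 1), i.e. tau * dy/dt + y = K * u, with no feedback loop.
K, tau, dt = 2.0, 0.5, 0.001   # invented gain, time constant, and step size
y, t, u = 0.0, 0.0, 1.0        # output, time, unit-step input
while t < 5 * tau:             # run for five time constants
    y += dt * (K * u - y) / tau
    t += dt
print(round(y, 3))             # close to the steady-state gain K = 2.0
```

After five time constants the output has settled to within about 1% of K·u, as expected for a first-order system.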
https://en.wikipedia.org/wiki/Common%20Industrial%20Protocol
The Common Industrial Protocol (CIP) is an industrial protocol for industrial automation applications. It is supported by ODVA. Previously known as Control and Information Protocol, CIP encompasses a comprehensive suite of messages and services for the collection of manufacturing automation applications – control, safety, synchronization, motion, configuration and information. It allows users to integrate these manufacturing applications with enterprise-level Ethernet networks and the Internet. It is supported by hundreds of vendors around the world, and is media-independent. CIP provides a unified communication architecture throughout the manufacturing enterprise. It is used in EtherNet/IP, DeviceNet, CompoNet and ControlNet. ODVA is the organization that supports network technologies built on the Common Industrial Protocol (CIP). These also currently include application extensions to CIP: CIP Safety, CIP Motion and CIP Sync.
https://en.wikipedia.org/wiki/Orthostochastic%20matrix
In mathematics, an orthostochastic matrix is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some orthogonal matrix. The detailed definition is as follows. A square matrix B of size n is doubly stochastic (or bistochastic) if all its rows and columns sum to 1 and all its entries are nonnegative real numbers. It is orthostochastic if there exists an orthogonal matrix O such that B_ij = (O_ij)² for all i, j. All 2-by-2 doubly stochastic matrices are orthostochastic (and also unistochastic), since any such matrix has the form [[t, 1 − t], [1 − t, t]] with 0 ≤ t ≤ 1, and for any such t we can find a corresponding orthogonal matrix — the rotation by an angle φ with cos²φ = t — whose squared entries reproduce it. For larger n, the set of bistochastic matrices includes the set of unistochastic matrices, which includes the set of orthostochastic matrices, and these inclusion relations are proper.
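The 2-by-2 case can be checked numerically (the value of t below is arbitrary): pick the rotation angle with cos²φ = t and square the entries of the rotation matrix.

```python
import math

# For any t in [0, 1], the doubly stochastic matrix [[t, 1-t], [1-t, t]] is
# orthostochastic: take the rotation O = [[cos p, -sin p], [sin p, cos p]]
# with cos^2(p) = t, and square its entries.
t = 0.3
p = math.acos(math.sqrt(t))
O = [[math.cos(p), -math.sin(p)], [math.sin(p), math.cos(p)]]
B = [[O[i][j] ** 2 for j in range(2)] for i in range(2)]
print([[round(x, 3) for x in row] for row in B])  # [[0.3, 0.7], [0.7, 0.3]]
```

Since sin²p = 1 − t, the squared entries automatically give rows and columns summing to 1.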
https://en.wikipedia.org/wiki/Brian%20Bowditch
Brian Hayward Bowditch (born 1961) is a British mathematician known for his contributions to geometry and topology, particularly in the areas of geometric group theory and low-dimensional topology. He is also known for solving the angel problem. Bowditch holds a chaired Professor appointment in Mathematics at the University of Warwick. Biography Brian Bowditch was born in 1961 in Neath, Wales. He obtained a B.A. degree from Cambridge University in 1983. He subsequently pursued doctoral studies in Mathematics at the University of Warwick under the supervision of David Epstein where he received a PhD in 1988. Bowditch then had postdoctoral and visiting positions at the Institute for Advanced Study in Princeton, New Jersey, the University of Warwick, Institut des Hautes Études Scientifiques at Bures-sur-Yvette, the University of Melbourne, and the University of Aberdeen. In 1992 he received an appointment at the University of Southampton where he stayed until 2007. In 2007 Bowditch moved to the University of Warwick, where he received a chaired Professor appointment in Mathematics. Bowditch was awarded a Whitehead Prize by the London Mathematical Society in 1997 for his work in geometric group theory and geometric topology. He gave an Invited address at the 2004 European Congress of Mathematics in Stockholm. Bowditch is a former member of the Editorial Board for the journal Annales de la Faculté des Sciences de Toulouse and a former Editorial Adviser for the London Mathematical Society. Mathematical contributions Early notable results of Bowditch include clarifying the classic notion of geometric finiteness for higher-dimensional Kleinian groups in constant and variable negative curvature. In a 1993 paper Bowditch proved that five standard characterisations of geometric finiteness for discrete groups of isometries of hyperbolic 3-space and hyperbolic plane, (including the definition in terms of having a finitely-sided fundamental polyhedron) remain equivalent for g
https://en.wikipedia.org/wiki/Alfiz
The alfiz (from Andalusi Arabic alḥíz, from Standard Arabic alḥáyyiz, meaning 'the container') is an architectural adornment, consisting of a moulding, usually a rectangular panel, which encloses the outward side of an arch. It is an architectonic ornament of Etruscan origin, used in Visigothic, Asturian, Moorish, Mozarabic, Mudéjar and Isabelline Gothic architecture. It is frequent in Islamic Hispanic art and Mozarabic art (usually in connection with the horseshoe arch). As the image illustrates, there are two alfiz variants: an alfiz starting from the impost, and an alfiz starting from the floor. The space between the arch and the alfiz is called the enjuta or arrabá, and is usually richly decorated (iron-gray in the illustration). Each curved triangle is called an albanega (spandrel).
https://en.wikipedia.org/wiki/Ion%20vibration%20current
The ion vibration current (IVI) and the associated ion vibration potential constitute an electric signal that arises when an acoustic wave propagates through a homogeneous fluid. Historically, the IVI was the first known electroacoustic phenomenon. It was predicted by Peter Debye in 1933. When a longitudinal sound wave travels through a solvent, the associated pressure gradients push the fluid particles back and forth, and it is easy in practice to create accelerations of thousands or even millions of g's. If a solute molecule is more dense or less dense than the surrounding liquid, then in this accelerating environment the molecule will move relative to the surrounding liquid. This relative motion is essentially the same phenomenon that occurs in a centrifuge, or more simply, it is essentially the same phenomenon that occurs when low-density objects float to the top of a glass of water and high-density particles sink to the bottom (see the equivalence principle, which states that gravity is just like any other acceleration). The amount of relative motion depends on the balance between the molecule's effective mass (which includes both the mass of the molecule itself and any solvent molecules that are so tightly bound to the molecule that they follow along with the molecule's motion), its effective volume (related to buoyant force), and the viscous drag (friction) between the molecule and the surrounding fluid. IVI concerns the case where the particles in question are anions and cations. In general, they will have different amounts of motion relative to the fluid during the sound wave oscillations, and that discrepancy creates an alternating electric potential between various points in a sound wave. This effect was extensively used in the 1950s and 1960s for characterizing ion solvation. These works are mostly associated with the names of Zana and Yeager, who published a review of their studies in 1982. This effect can be studied with modern devices that em
https://en.wikipedia.org/wiki/Streaming%20vibration%20current
The streaming vibration current (SVI) and the associated streaming vibration potential is an electric signal that arises when an acoustic wave propagates through a porous body in which the pores are filled with fluid. Streaming vibration current was experimentally observed in 1948 by M. Williams. A theoretical model was developed some 30 years later by Dukhin and coworkers. This effect opens another possibility for characterizing the electric properties of the surfaces in porous bodies. See also Interface and colloid science
https://en.wikipedia.org/wiki/Mass%20attenuation%20coefficient
The mass attenuation coefficient, or mass narrow beam attenuation coefficient of a material is the attenuation coefficient normalized by the density of the material; that is, the attenuation per unit mass (rather than per unit of distance). Thus, it characterizes how easily a mass of material can be penetrated by a beam of light, sound, particles, or other energy or matter. In addition to visible light, mass attenuation coefficients can be defined for other electromagnetic radiation (such as X-rays), sound, or any other beam that can be attenuated. The SI unit of mass attenuation coefficient is the square metre per kilogram (m²/kg). Other common units include cm²/g (the most common unit for X-ray mass attenuation coefficients) and L⋅g⁻¹⋅cm⁻¹ (sometimes used in solution chemistry). Mass extinction coefficient is an old term for this quantity. The mass attenuation coefficient can be thought of as a variant of absorption cross section where the effective area is defined per unit mass instead of per particle. Mathematical definitions The mass attenuation coefficient is defined as μ/ρm, where μ is the attenuation coefficient (linear attenuation coefficient) and ρm is the mass density. When using the mass attenuation coefficient, the Beer–Lambert law is written in the alternative form I = I0 exp(−(μ/ρm)λ), where λ = ρm ℓ is the area density, known also as mass thickness, and ℓ is the length over which the attenuation takes place. Mass absorption and scattering coefficients When a narrow (collimated) beam passes through a volume, the beam will lose intensity to two processes: absorption and scattering. The mass absorption coefficient μa/ρm and the mass scattering coefficient μs/ρm are defined analogously, where μa is the absorption coefficient and μs is the scattering coefficient. In solutions In chemistry, mass attenuation coefficients are often used for a chemical species dissolved in a solution. In that case, the mass attenuation coefficient is defined by the same equation, except that the "density" is the density of only that one chemical s
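The mass-attenuation form of the Beer–Lambert law is a one-liner to evaluate. The numbers below are purely illustrative (not measured values for any particular material or beam energy):

```python
import math

# Beer-Lambert law in mass-attenuation form: I = I0 * exp(-(mu/rho) * lam),
# where lam = rho * l is the area density (mass thickness) of the absorber.
# All numerical values below are illustrative only.
mu_over_rho = 0.2          # mass attenuation coefficient, cm^2/g
rho = 2.7                  # mass density, g/cm^3
l = 1.0                    # path length, cm
lam = rho * l              # area density, g/cm^2
transmission = math.exp(-mu_over_rho * lam)   # I / I0
print(round(transmission, 4))
```

Working in area density rather than path length is what makes tabulated μ/ρ values reusable across samples of different physical density.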
https://en.wikipedia.org/wiki/Indoxacarb
Indoxacarb is an oxadiazine pesticide developed by DuPont that acts against lepidopteran larvae. It is marketed under the names Indoxacarb Technical Insecticide, Steward Insecticide and Avaunt Insecticide. It is also used as the active ingredient in the Syngenta line of commercial pesticides: Advion and Arilon. Its main mode of action is via blocking of neuronal sodium channels. It is fairly lipophilic, with a log Kow of 4.65. This pesticide should be used with caution, since some insects such as the oriental tobacco budworm (Helicoverpa assulta) become resistant when exposed. In 2021, the European Union chose not to renew indoxacarb for use as an insecticide. The United Kingdom still allows use of the compound until 2025. Development Indoxacarb was developed by the team of McCann et al. at E. I. du Pont de Nemours. Household products Indoxacarb is the active ingredient in a number of household insecticides, including cockroach and ant baits, and can remain active after digestion. In 2012, DuPont's Professional Products line, including the Advion and Arilon products, was purchased by Syngenta. Indoxacarb is the active ingredient in the pet product Activyl, from Merck Animal Health. It is marketed to kill fleas on dogs and cats. Toxicity to humans While toxicity to humans has not been formally studied, there is a reported case of a person consuming indoxacarb in a suicide attempt. The patient developed methemoglobinemia following ingestion. Methemoglobinemia (also known as blue baby syndrome) is a condition which ultimately decreases the effectiveness of red blood cells in exchanging oxygen with organs. Methemoglobinemia can be fatal if left untreated; however, when the cause is exposure to a chemical agent (not genetic), a variety of treatments are available and effective.
https://en.wikipedia.org/wiki/Heuser%27s%20membrane
Heuser's membrane (or the exocoelomic membrane) is a short-lived combination of hypoblast cells and extracellular matrix. At days 9–10 of embryonic development, cells from the hypoblast begin to migrate to the embryonic pole, forming a layer of cells just beneath the cytotrophoblast, called Heuser's membrane. It surrounds the exocoelomic cavity (primary yolk sac), i.e. it lines the inner surface of the cytotrophoblast. At this point, the exocoelomic cavity replaces the blastocyst cavity. At days 11 to 12, there is further delineation of the trophoblastic cells, giving rise to a layer of loosely arranged cells that inserts between Heuser's membrane and both the syncytiotrophoblast and cytotrophoblast. The Heuser's membrane cells (hypoblast cells) that migrated along the inner cytotrophoblast lining of the blastocoel secrete an extracellular matrix along the way. Cells of the hypoblast migrate along the outer edges of this reticulum and form the extraembryonic mesoderm (splanchnic and somatic); this disrupts the extraembryonic reticulum. Soon pockets form in the reticulum, which ultimately coalesce to form the chorionic cavity (extraembryonic coelom).
https://en.wikipedia.org/wiki/Intraembryonic%20coelom
In the development of the human embryo, the intraembryonic coelom (or somatic coelom) is a portion of the conceptus forming in the mesoderm during the third week of development. During the third week of development, the lateral plate mesoderm splits into a dorsal somatic mesoderm (somatopleure) and a ventral splanchnic mesoderm (splanchnopleure). The resulting cavity between the somatopleure and splanchnopleure is called the intraembryonic coelom. This space will give rise to the thoracic and abdominal cavities. The coelomic spaces in the lateral mesoderm and cardiogenic area are isolated. The isolated coelom begins to organize into a horseshoe shape. The spaces soon join together and form a single horseshoe-shaped cavity: the intraembryonic coelom. It then separates the mesoderm into two layers. It briefly has a connection with the extraembryonic coelom. See also Cavitation (embryology)
https://en.wikipedia.org/wiki/Top-down%20proteomics
Top-down proteomics is a method of protein identification that either uses an ion trapping mass spectrometer to store an isolated protein ion for mass measurement and tandem mass spectrometry (MS/MS) analysis or other protein purification methods such as two-dimensional gel electrophoresis in conjunction with MS/MS. Top-down proteomics is capable of identifying and quantitating unique proteoforms through the analysis of intact proteins. The name is derived from the similar approach to DNA sequencing. During mass spectrometry intact proteins are typically ionized by electrospray ionization and trapped in a Fourier transform ion cyclotron resonance (Penning trap), quadrupole ion trap (Paul trap) or Orbitrap mass spectrometer. Fragmentation for tandem mass spectrometry is accomplished by electron-capture dissociation or electron-transfer dissociation. Effective fractionation is critical for sample handling before mass-spectrometry-based proteomics. Proteome analysis routinely involves digesting intact proteins followed by inferred protein identification using mass spectrometry (MS). Top-down MS (non-gel) proteomics interrogates protein structure through measurement of an intact mass followed by direct ion dissociation in the gas phase. Advantages The main advantages of the top-down approach include the ability to detect degradation products, protein isoforms, sequence variants, combinations of post-translational modifications as well as simplified processes for data normalization and quantitation. Top-down proteomics, when accompanied with polyacrylamide gel electrophoresis, can help to complement the bottom-up proteomic approach. Top-down proteomic methods can assist in exposing large deviations from predictions and has been very successfully pursued by combining Gel Elution Liquid-based Fractionation Entrapment Electrophoresis fractionation, protein precipitation, and reverse phase HPLC with electrospray ionization and MS/MS. Characterization of small proteins r
https://en.wikipedia.org/wiki/Port%20Revel
The Port Revel Shiphandling Training Centre is a French maritime pilotage school that trains pilots, masters, and officers on large ships like supertankers, container ships, LNG carriers and cruise ships. The facility uses manned models at a 1:25 scale on a man-made lake designed to simulate natural conditions including harbours, canals, and open seas. It was the first such facility in the world. The Centre was created in 1967 near Grenoble, France, by Laboratoire Dauphinois d'Hydraulique (now Artelia). The courses are given by former maritime pilots. Since 1967, the Centre has trained over 6,500 maritime pilots, captains and officers from all over the world. French, European, Australian, Brazilian and North American pilots make up 90% of the Centre's students. The manned model training regime is now recommended by the International Maritime Organization under Resolution A 960 (23) of December 2005. The facility was written about by John McPhee in an October 1998 article for The Atlantic Monthly, later republished as Chapter Two in his book Uncommon Carriers (2006). History The centre's origin goes back to the 1950s, when Port Revel's mother company, Sogreah, was studying bank erosion on the Suez Canal using model ships sailing on a scale model with a movable bed (i.e. granular material subjected to erosion by turbulent water movement). At the end of the 1960s this experience with free-sailing model ships was used by Esso to anticipate the manoeuvring behaviour of the new, much larger, oil tankers. After three years spent with Esso captains between 1967 and 1970, the Centre was taken over by Sogreah in 1970. During the 1970s, most students were captains, while the first maritime pilots came to discover the centre. In the 1990s, the first refresher courses were organised for pilots, who returned every 5 years. These courses are less directive and leave more room for customisation, which is a way of optimising port operations to increase port accessibi
https://en.wikipedia.org/wiki/Similitude%20of%20ship%20models
Manned models Many research workers, hydraulics specialists and engineers have used scale models for over a century, in particular in towing tanks. Manned models are small-scale models that can carry and be handled by at least one person on an open expanse of water. They must behave just like real ships, giving the shiphandler the same sensations. Physical conditions such as wind, currents, waves, water depths, channels and berths must be reproduced realistically. Manned models are used for research (e.g. ship behaviour), engineering (e.g. port layout) and for training in shiphandling (e.g. maritime pilots, masters and officers). They are usually at 1:25 scale. Similitude of manned models Worldwide, manned model schools have chosen to apply the similitude law of William Froude (1810–1879) for their manned models. This means that gravity is considered to be preponderant over the other forces acting on the hull (viscosity, capillarity, cavitation, compressibility, etc.). The different aspects of similitude may thus be defined as follows: Physical similitude Similitude of shape: The model has exactly the same geometric shape as the real ship. This means that all the length (L) dimensions of the real ship are divided by the same factor, the scale factor. The designers of Port Revel chose a scale (S) of 1:25, so: S(L) = 25 (smaller, hence distance is 25 times less). In this similitude, the proportions are kept (the ratios between the various dimensions of the ship are identical). This is also the case with the block coefficient. Furthermore, the angles are a length ratio, so they are also identical to the original ones. The scale factors of the areas and volumes are deduced from this, i.e.: S²(L) = 25² = 625 and S³(L) = 25³ = 15 625. Similitude of mass (M): The model used for shiphandling training must not only resemble the original but also move in the same way as the original when subjected to similar forces. Consequently, the scale factor for the mass (M) and
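The scale factors above follow mechanically from the length scale. The sketch below also includes the square-root scaling of times and speeds, which is a standard consequence of Froude similitude (preserving the Froude number V/√(gL)), though it is not derived explicitly in the text above:

```python
# Scale factors implied by the 1:25 length scale used for manned models.
S = 25                     # length scale: real dimensions are 25x the model's
area_scale = S ** 2        # S^2(L) = 625
volume_scale = S ** 3      # S^3(L) = 15 625; also the mass scale at equal density
froude_scale = S ** 0.5    # times and speeds scale by sqrt(S) under Froude similitude
print(area_scale, volume_scale, froude_scale)  # 625 15625 5.0
```

The √S factor is why events on a 1:25 manned model unfold five times faster than on the real ship, which is part of what makes the training realistic yet compressed.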
https://en.wikipedia.org/wiki/Bottom-up%20proteomics
Bottom-up proteomics is a common method to identify proteins and characterize their amino acid sequences and post-translational modifications by proteolytic digestion of proteins prior to analysis by mass spectrometry. The major alternative workflow used in proteomics is called top-down proteomics, where intact proteins are purified prior to digestion and/or fragmentation either within the mass spectrometer or by 2D electrophoresis. Essentially, bottom-up proteomics is a relatively simple and reliable means of determining the protein make-up of a given sample of cells, tissues, etc. In bottom-up proteomics, the crude protein extract is enzymatically digested, followed by one or more dimensions of separation of the peptides by liquid chromatography coupled to mass spectrometry, a technique known as shotgun proteomics. By comparing the masses of the proteolytic peptides or their tandem mass spectra with those predicted from a sequence database or with annotated spectra in a peptide spectral library, peptides can be identified and multiple peptide identifications assembled into a protein identification. Advantages For high-throughput bottom-up methods, there is better front-end separation of peptides compared with proteins and higher sensitivity than the (non-gel) top-down methods. Disadvantages There is limited protein sequence coverage by identified peptides, loss of labile PTMs, and ambiguity of the origin for redundant peptide sequences. Recently, the combination of bottom-up and top-down proteomics, so-called middle-down proteomics, has been receiving a lot of attention, as this approach not only can be applied to the analysis of large protein fragments but also avoids redundant peptide sequences. See also Protein mass spectrometry Shotgun proteomics Peptide mass fingerprinting Top-down proteomics
https://en.wikipedia.org/wiki/Spoken%20dialog%20system
A spoken dialog system (SDS) is a computer system able to converse with a human with voice. It has two essential components that do not exist in a written text dialog system: a speech recognizer and a text-to-speech module (written text dialog systems usually use other input systems provided by an OS). It can be further distinguished from command and control speech systems that can respond to requests but do not attempt to maintain continuity over time. Components An automatic speech recognizer (ASR) decodes speech into text. Domain-specific recognizers can be configured for language designed for a given application. A "cloud" recognizer will be suitable for domains that do not depend on very specific vocabularies. Natural language understanding transforms a recognition into a concept structure that can drive system behavior. Some approaches combine recognition and understanding processing, but these are thought to be less flexible, since interpretation has to be coded into the grammar. The dialog manager controls turn-by-turn behavior. A simple dialog system may ask the user questions and then act on the response. Such directed dialog systems use a tree-like structure for control; frame- (or form-) based systems allow for some user initiative and accommodate different styles of interaction. More sophisticated dialog managers incorporate mechanisms for dealing with misunderstandings and clarification. The domain reasoner, or more simply the back-end, makes use of a knowledge base to retrieve information and helps formulate system responses. In simple systems, this may be a database which is queried using information collected through the dialog. The domain reasoner, together with the dialog manager, maintains the context of interaction and allows the system to reflect some human conversational abilities (for example using anaphora). Response generation is similar to text-based natural language generation, but takes into account the needs of spoken communication. Thi
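The frame-based dialog management mentioned above can be sketched in a few lines: the frame lists the slots the system must fill, and each turn the manager prompts for a missing slot and stores the (already-recognized) user answer. The class name, slot names, and prompt wording below are all invented for illustration; a real system would sit between an ASR/NLU front end and a domain reasoner.

```python
# Minimal sketch of a frame- (form-) based dialog manager.
class FrameDialogManager:
    def __init__(self, slots):
        self.frame = {slot: None for slot in slots}  # slots still to be filled

    def next_prompt(self):
        """Return a prompt for the first unfilled slot, or None when done."""
        for slot, value in self.frame.items():
            if value is None:
                return f"What {slot} would you like?"
        return None  # frame complete: hand off to the domain reasoner

    def fill(self, slot, value):
        self.frame[slot] = value  # value would come from the NLU module

dm = FrameDialogManager(["destination", "date"])
print(dm.next_prompt())          # What destination would you like?
dm.fill("destination", "Paris")
dm.fill("date", "Friday")
print(dm.next_prompt())          # None: all slots filled
```

Unlike a rigid tree-structured directed dialog, this style accommodates some user initiative: slots can be filled in any order, and the manager simply asks for whatever is still missing.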
https://en.wikipedia.org/wiki/Society%20for%20Developmental%20Biology
The Society for Developmental Biology (SDB), originally the Society for the Study of Development and Growth, is a professional society for scientists and professionals around the world whose research is focused on the study of developmental biology, embryology, and related disciplines. History The "Society for the Study of Development and Growth" (SDB) was founded in 1939. In August 1939, the SDB held its first conference, a symposium on Development and Growth, in a small village schoolhouse in North Truro, Massachusetts. In 1965, it was renamed the "Society for Developmental Biology" to reflect the SDB's advocacy of developmental biology. Mission The Society for Developmental Biology's mission is to employ, "... an inclusive philosophy to further the study of developmental biology and related disciplines; to foster, support, and provide a forum for all investigators in these fields; to educate non-specialists, educators, the general public, and policymakers about developmental biology and related disciplines; and to promote fair, respectful, ethical and equitable practices throughout the scientific enterprise." SDB seeks to: Foster excellence in research and education in developmental biology and related areas. Organize scientific meetings, workshops, and courses that focus on developmental biology and related areas. Provide resources on careers and professional development in developmental biology and related fields. Provide information for the public on relevant topics in developmental biology. Serve as a communication hub for all developmental biologists worldwide. Membership SDB has more than 2,000 members and provides an international forum for research, education, and career development in developmental biology. Membership is open to all with discounted rates for students, postdoctoral researchers, and affiliates. SDB Emerging Research Organisms Grant supports the development of techniques, approaches, community resources, collaborations, and
https://en.wikipedia.org/wiki/Records%20of%20the%20Parliaments%20of%20Scotland
The Records of the Parliaments of Scotland to 1707 is an online publication of the Scottish Parliament and the University of St Andrews arising from a project to create a comprehensive online database of the proceedings of the Parliament of Scotland from 1235 to the Act of Union. The website was launched in 2008. The project was formulated by Professor Keith Brown of St Andrews University in 1996. Funding was quickly approved by then-Secretary of State for Scotland Michael Forsyth and announced by then-Prime Minister John Major on 4 July 1996. As well as the initial funding by the Scottish Office, monies for what became the Scottish Parliament Project were provided by the Scottish Government, the Arts and Humanities Research Board, and the Strathmartine Trust. Under the general editorship of Professor Brown, the eleven-year project to complete the database created a work of around fifteen million words in size. It includes parallel translations from the original Latin, Norman French, and Scots. The primary editors of the text of each period were: Alastair Mann (Early Modern) Gillian McIntosh (Early Modern) Pamela Ritchie (Early Modern) Roland Tanner (Medieval and Latin) The lead website developer was Swithun Crowe. See also National Archives of Scotland Advocates' Library
https://en.wikipedia.org/wiki/Mathematics%20of%20bookmaking
In gambling parlance, making a book is the practice of laying bets on the various possible outcomes of a single event. The phrase originates from the practice of recording such wagers in a hard-bound ledger (the 'book') and gives the English language the term bookmaker for the person laying the bets and thus 'making the book'. Making a 'book' (and the notion of overround) A bookmaker strives to accept bets on the outcome of an event in the right proportions in order to make a profit regardless of which outcome prevails. See Dutch book and coherence (philosophical gambling strategy). This is achieved primarily by adjusting what are determined to be the true odds of the various outcomes of an event in a downward fashion (i.e. the bookmaker will pay out using his actual odds, an amount which is less than the true odds would have paid, thus ensuring a profit). The odds quoted for a particular event may be fixed but are more likely to fluctuate in order to take account of the size of wagers placed by the bettors in the run-up to the actual event (e.g. a horse race). This article explains the mathematics of making a book in the (simpler) case of the former event. For the second method, see Parimutuel betting. It is important to understand the relationship between fractional and decimal odds. Fractional odds are written a − b (a/b or a to b), meaning a winning bettor will receive their money back plus a units for every b units they bet. Decimal odds are a single value, greater than 1, representing the amount to be paid out for each unit bet. For example, a bet of £40 at 6 − 4 (fractional odds) will pay out £40 + £60 = £100. The equivalent decimal odds are 2.5; £40 × 2.5 = £100. We can convert fractional to decimal odds by the formula D = a/b + 1. Hence, fractional odds of a − 1 (i.e. b = 1) can be obtained from decimal odds by a = D − 1. It is also important to understand the relationship between odds and implied probabilities: Fractional odds of a − b (with corresponding dec
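The conversions above, and the idea of summing implied probabilities to measure a book's margin (the overround), can be sketched as follows. The two-horse book at the end is a hypothetical example, not taken from the article.

```python
from fractions import Fraction

def decimal_from_fractional(a, b):
    """Decimal odds D = a/b + 1: the total return per unit staked."""
    return Fraction(a, b) + 1

def payout(stake, a, b):
    """Total returned to a winning bettor: stake plus winnings."""
    return stake * decimal_from_fractional(a, b)

# The article's example: £40 at 6-4 returns £40 + £60 = £100.
assert payout(40, 6, 4) == 100

def overround(decimal_odds):
    """Sum of implied probabilities 1/D; a value above 1 means the
    bookmaker has built in a profit margin."""
    return sum(Fraction(1) / d for d in decimal_odds)

# A hypothetical two-horse book with both runners at even money (6-4
# would be Fraction(5, 2) = 2.5 in decimal form):
book = [decimal_from_fractional(1, 1), decimal_from_fractional(1, 1)]
print(float(overround(book)))  # 1.0 -> a fair book with no margin
```

Using `Fraction` keeps the arithmetic exact, which matters when checking whether a set of quoted odds sums to exactly 100% implied probability.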
https://en.wikipedia.org/wiki/Dynamic%20electrophoretic%20mobility
Dynamic electrophoretic mobility is a parameter that determines the intensity of electroacoustic phenomena, such as Colloid Vibration Current and Electric Sonic Amplitude, in colloids. It is similar to electrophoretic mobility, but at high frequency, on a scale of megahertz. The usual electrophoretic mobility is the low-frequency limit of the dynamic electrophoretic mobility.
https://en.wikipedia.org/wiki/In-band%20control
In-band control is a characteristic of network protocols with which data control is regulated. In-band control passes control data on the same connection as main data. Protocols that use in-band control include HTTP and SMTP. This is as opposed to out-of-band control, used by protocols such as FTP. Example Here is an example of an SMTP client-server interaction:

Server: 220 example.com
Client: HELO example.net
Server: 250 Hello example.net, pleased to meet you
Client: MAIL FROM: <jane.doe@example.net>
Server: 250 jane.doe@example.net... Sender ok
Client: RCPT TO: <john.doe@example.com>
Server: 250 john.doe@example.com ... Recipient ok
Client: DATA
Server: 354 Enter mail, end with "." on a line by itself
Client: Do you like ketchup?
Client: How about pickles?
Client: .
Server: 250 Message accepted for delivery
Client: QUIT
Server: 221 example.com closing connection

SMTP is in-band because the control messages, such as "HELO" and "MAIL FROM", are sent in the same stream as the actual message content. See also Out-of-band control
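The in-band property can be made concrete by building the client's side of the exchange as a single ordered sequence of lines: control commands and message data are interleaved on one connection. This is a sketch of the line sequence only (it does not open a network connection), using the same example addresses as the transcript above.

```python
# Illustrative sketch: an SMTP session is one stream in which control
# commands (HELO, MAIL FROM, ...) and message content are interleaved.
def smtp_client_stream(sender, recipient, body_lines):
    """Return the lines an SMTP client would send, in order."""
    lines = [
        "HELO example.net",             # control
        f"MAIL FROM: <{sender}>",       # control
        f"RCPT TO: <{recipient}>",      # control
        "DATA",                         # control
    ]
    lines += body_lines                 # data, on the same connection
    lines += [".",                      # control: end-of-data marker
              "QUIT"]                   # control: close the session
    return lines

stream = smtp_client_stream("jane.doe@example.net", "john.doe@example.com",
                            ["Do you like ketchup?", "How about pickles?"])
print("\n".join(stream))
```

Because the "." end-of-data marker travels in the same stream as the message body, SMTP must escape body lines that begin with a dot, which is a classic consequence of in-band signalling.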
https://en.wikipedia.org/wiki/Out-of-band%20control
Out-of-band control is a characteristic of network protocols with which data control is regulated. Out-of-band control passes control data on a separate connection from main data. Protocols such as FTP use out-of-band control. FTP sends its control information, which includes user identification, password, and put/get commands, on one connection, and sends data files on a separate parallel connection. Because it uses a separate connection for the control information, FTP uses out-of-band control. See also Out-of-band management In-band control
https://en.wikipedia.org/wiki/CNET
CNET (short for "Computer Network") is an American media website that publishes reviews, news, articles, blogs, podcasts, and videos on technology and consumer electronics globally. CNET originally produced content for radio and television in addition to its website before applying new media distribution methods through its internet television network, CNET Video, and its podcast and blog networks. Founded in 1992 by Halsey Minor and Shelby Bonnie, it was the flagship brand of CNET Networks and became a brand of CBS Interactive through that unit's acquisition of CNET Networks in 2008. It has been owned by Red Ventures since October 30, 2020. Other than English, CNET's region- and language-specific editions include Chinese, French, German, Japanese, Korean, and Spanish. History Origins After leaving PepsiCo, Halsey Minor and Shelby Bonnie launched c/net, a 24-hour cable network about computers and technology in 1992. With help from Fox Network co-founder Kevin Wendle and former Disney creative associate Dan Baker, CNET produced four pilot television programs about computers, technology, and the Internet. CNET TV was composed of CNET Central, The Web, and The New Edge. CNET Central was created first and aired in syndication in the United States on the USA Network. Later, it began airing on USA's sister network Sci-Fi Channel along with The Web and The New Edge. These were later followed by TV.com in 1996. Media personality Ryan Seacrest first came to national prominence at CNET, as the host of The New Edge and doing various voice-over work for CNET. CNET online launched in June 1995. CNET, Inc., the site's owner, had its initial public offering (IPO) in July 1996. In 1998, CNET, Inc. was sued by Snap Technologies, operators of the education service CollegeEdge, for trademark infringement relating to CNET, Inc.'s ownership of the domain name Snap.com, due to Snap Technologies already owning a trademark on its name. CNET produced another television technology ne
https://en.wikipedia.org/wiki/Microaggression
Microaggression is a term used for commonplace verbal, behavioral or environmental slights, whether intentional or unintentional, that communicate hostile, derogatory, or negative attitudes toward stigmatized or culturally marginalized groups. The term was coined by Harvard University psychiatrist Chester M. Pierce in 1970 to describe insults and dismissals which he regularly witnessed non-black Americans inflicting on African Americans. By the early 21st century, use of the term was applied to the casual disparagement of any socially marginalized group, including LGBT people, poor people, and disabled people. Psychologist Derald Wing Sue defines microaggressions as "brief, everyday exchanges that send denigrating messages to certain individuals because of their group membership". The persons making the comments may be otherwise well-intentioned and unaware of the potential impact of their words. A number of scholars and social commentators have criticized the concept of microaggression for its lack of a scientific basis, over-reliance on subjective evidence, and promotion of psychological fragility. Critics argue that avoiding behaviors that one interprets as microaggressions restricts one's own freedom and causes emotional self-harm, and that employing authority figures to address microaggressions (i.e. call-out culture) can lead to an atrophy of those skills needed to mediate one's own disputes. Some argue that, because the term "microaggression" uses language connoting violence to describe verbal conduct, it can be abused to exaggerate harm, resulting in retribution and the elevation of victimhood. D. W. Sue, who popularized the term microaggressions, has expressed doubts on how the concept is being used: "I was concerned that people who use these examples would take them out of context and use them as a punitive rather than an exemplary way." In the 2020 edition of his book with Lisa Spanierman and in a 2021 book with his doctoral students, Dr. Sue introduce
https://en.wikipedia.org/wiki/Net3
Net3 was a Wi-Fi-like system developed, manufactured and commercialised by Olivetti in the early 1990s. It could wirelessly connect PCs to an Ethernet fixed LAN at a speed of up to 512kbit/s, over a very wide area. It was a micro-cellular system, in which each base station had an effective range of about 100m indoors, 300m outdoors, and the system supported seamless handover between base stations. The system was based on the DECT standard, published in 1992. A prototype system was first demonstrated at the Telecom '91 show in Geneva in October 1991, and is believed to be the first public demonstration of the DECT transmission system. The product was launched in June 1993, and was the first product based on the DECT standard to reach the market, narrowly beating Siemens' highly successful Gigaset cordless telephone. It is also believed to be the first wireless LAN to be sold on the European market. In its first version, the adapter cards consisted of half-size PC cards connected to an external desk-seated radio unit of modest dimensions. The second version, launched at Telecom '95, consisted of a PCMCIA card and a small external radio unit suitable for portable use. The system was developed in the laboratories of Olivetti Sixtel, the telecommunications technology division of Olivetti in Ivrea, Italy. At a time when knowledge of commercial digital radio technology was scarce in Italy, the group began research in 1988 and developed in-house a high level of capability in DECT technology, including patented technology that became fundamental to the standard. The development was funded partly from corporate venture resources, partly from ESPRIT funding, and partly from an unusual but highly effective tool of industrial policy, invented by Ing. Augusto Vighi of the Istituto Superiore delle Poste e Telecomunicazioni. Vighi placed a contract for proof-of-concept DECT demonstration systems with a consortium of Italian technology companies, covering the full range
https://en.wikipedia.org/wiki/Evolution%20of%20morality
The concept of the evolution of morality refers to the emergence of human moral behavior over the course of human evolution. Morality can be defined as a system of ideas about right and wrong conduct. In everyday life, morality is typically associated with human behavior rather than animal behavior. The emerging fields of evolutionary biology, and in particular evolutionary psychology, have argued that, despite the complexity of human social behaviors, the precursors of human morality can be traced to the behaviors of many other social animals. Sociobiological explanations of human behavior remain controversial. Social scientists have traditionally viewed morality as a construct, and thus as culturally relative, although others such as Sam Harris argue that there is an objective science of morality. Animal sociality Though other animals may not possess what humans may perceive as moral behavior, all social animals have had to modify or restrain their behaviors for group living to be worthwhile. Typical examples of behavioral modification can be found in the societies of ants, bees and termites. Ant colonies may possess millions of individuals. E. O. Wilson argues that the single most important factor that leads to the success of ant colonies is the existence of a sterile worker caste. This caste of females are subservient to the needs of their mother, the queen, and in so doing, have given up their own reproduction in order to raise brothers and sisters. The existence of sterile castes among these social insects significantly restricts the competition for mating and in the process fosters cooperation within a colony. Cooperation among ants is vital, because a solitary ant has an improbable chance of long-term survival and reproduction. However, as part of a group, colonies can thrive for decades. As a consequence, ants are one of the most successful families of species on the planet, accounting for a biomass that rivals that of the human species. The basic reason
https://en.wikipedia.org/wiki/Language%20Integrated%20Query
Language Integrated Query (LINQ, pronounced "link") is a Microsoft .NET Framework component that adds native data querying capabilities to .NET languages, originally released as a major part of .NET Framework 3.5 in 2007. LINQ extends the language by the addition of query expressions, which are akin to SQL statements, and can be used to conveniently extract and process data from arrays, enumerable classes, XML documents, relational databases, and third-party data sources. Other uses, which utilize query expressions as a general framework for readably composing arbitrary computations, include the construction of event handlers or monadic parsers. It also defines a set of method names (called standard query operators, or standard sequence operators), along with translation rules used by the compiler to translate query syntax expressions into expressions using fluent-style (called method syntax by Microsoft) with these method names, lambda expressions and anonymous types. Ports of LINQ exist for PHP (PHPLinq), JavaScript (linq.js), TypeScript (linq.ts), and ActionScript (ActionLinq), although none are strictly equivalent to LINQ in the .NET inspired languages C#, F# and VB.NET (where it is a part of the language, not an external library, and where it often addresses a wider range of needs). Architecture of LINQ in the .NET Framework Standard Query Operator API In what follows, the descriptions of the operators are based on the application of working with collections. Many of the operators take other functions as arguments. These functions may be supplied in the form of a named method or anonymous function. The set of query operators defined by LINQ is exposed to the user as the Standard Query Operator (SQO) API. The query operators supported by the API are: Select The Select operator performs a projection on the collection to select interesting aspects of the elements. The user supplies an arbitrary function, in the form of a named or lambda expression, which p
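The Select operator described above is a projection: each element of the source collection is passed through a user-supplied function. As an illustration of that idea only (not the .NET implementation itself), a Select-like operator can be sketched in Python, where the lambda plays the role of the C# lambda expression:

```python
# A projection analogous to LINQ's Select operator: lazily apply a
# user-supplied selector function to each element of the source.
def select(source, selector):
    for element in source:
        yield selector(element)

# Roughly mirrors the C# query 'from w in words select w.Length':
words = ["arrays", "XML", "databases"]
lengths = list(select(words, lambda w: len(w)))
print(lengths)  # [6, 3, 9]
```

Like the LINQ operator, this version is lazy: nothing is computed until the result is enumerated, here by wrapping the generator in `list`.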
https://en.wikipedia.org/wiki/Resource%20fragmentation%20hypothesis
The resource fragmentation hypothesis was first proposed by Janzen & Pond (1975), and says that as species richness becomes large there is not a linear increase in the number of parasitoid species that can be supported. The suggested mechanism for this hyperbolic relationship is that each of the new host species is too rare to support the evolution of specialist parasitoids (Janzen & Pond, 1975). The resource fragmentation hypothesis is one of two hypotheses that seek to explain the distribution of the Ichneumonidae.
https://en.wikipedia.org/wiki/Quantitative%20proteomics
Quantitative proteomics is an analytical chemistry technique for determining the amount of proteins in a sample. The methods for protein identification are identical to those used in general (i.e. qualitative) proteomics, but include quantification as an additional dimension. Rather than just providing lists of proteins identified in a certain sample, quantitative proteomics yields information about the physiological differences between two biological samples. For example, this approach can be used to compare samples from healthy and diseased patients. Quantitative proteomics is mainly performed by two-dimensional gel electrophoresis (2-DE), preparative one-dimensional gel electrophoresis, or mass spectrometry (MS). However, a recently developed method of quantitative dot blot (QDB) analysis is able to measure both the absolute and relative quantities of individual proteins in the sample in a high-throughput format, thus opening a new direction for proteomic research. In contrast to 2-DE, which requires MS for downstream protein identification, MS technology can both identify and quantify the changes. Quantification using spectrophotometry The concentration of a certain protein in a sample may be determined using spectrophotometric procedures. The concentration of a protein can be determined by measuring the OD at 280 nm on a spectrophotometer, which can be used with a standard curve assay to quantify the presence of tryptophan, tyrosine, and phenylalanine. However, this method is not the most accurate because the composition of proteins can vary greatly and this method would not be able to quantify proteins that do not contain the aforementioned amino acids. This method is also inaccurate due to the possibility of nucleic acid contamination. Other more accurate spectrophotometric procedures for protein quantification include the Biuret, Lowry, BCA, and Bradford methods. An alternative method for label free protein quantification in clear liquid is cuvette-based SPR te
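The standard-curve approach mentioned above amounts to fitting a line through the absorbances of known standards and reading an unknown off that line. The sketch below shows the arithmetic; the standard concentrations and A280 readings are invented example values, not real measurements.

```python
# Hedged sketch of protein quantification from absorbance at 280 nm
# using a linear standard curve (A = slope * concentration + intercept).
def fit_standard_curve(concentrations, absorbances):
    """Least-squares fit of a line A = m*c + b through the standards."""
    n = len(concentrations)
    mean_c = sum(concentrations) / n
    mean_a = sum(absorbances) / n
    m = (sum((c - mean_c) * (a - mean_a)
             for c, a in zip(concentrations, absorbances))
         / sum((c - mean_c) ** 2 for c in concentrations))
    b = mean_a - m * mean_c
    return m, b

def concentration_from_a280(a280, slope, intercept):
    """Invert the standard curve for an unknown sample's reading."""
    return (a280 - intercept) / slope

# Example standards: 0, 0.5 and 1.0 mg/mL giving A280 of 0.0, 0.3, 0.6.
m, b = fit_standard_curve([0.0, 0.5, 1.0], [0.0, 0.3, 0.6])
print(round(concentration_from_a280(0.45, m, b), 3))  # 0.75 (mg/mL)
```

The caveats in the text apply directly here: the fitted slope is only valid for proteins with a similar content of tryptophan, tyrosine, and phenylalanine to the standard used.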
https://en.wikipedia.org/wiki/Comparison%20of%20JavaScript-based%20source%20code%20editors
This article provides basic feature comparison between some of the JavaScript-based source code editors available today. Overview List of features Feature testing was performed with Firefox 3.0.6 against the current demo version, and results may not match those in other browsers or downloadable versions. See also Comparison of online source code playgrounds HTML editor Online JavaScript IDE
https://en.wikipedia.org/wiki/HPCx
HPCx was a supercomputer (actually a cluster of IBM eServer p5 575 high-performance servers) located at the Daresbury Laboratory in Cheshire, England. The supercomputer was maintained by the HPCx Consortium, UoE HPCX Ltd, which was led by the University of Edinburgh: EPCC, with the Science and Technology Facilities Council and IBM. The project was funded by EPSRC. The HPCx service ended in January 2010.
https://en.wikipedia.org/wiki/Polish%20Logic
Polish Logic is an anthology of papers by several authors—Kazimierz Ajdukiewicz, Leon Chwistek, Stanisław Jaśkowski, Zbigniew Jordan, Tadeusz Kotarbiński, Stanisław Leśniewski, Jan Łukasiewicz, Jerzy Słupecki, and Mordchaj Wajsberg—published in 1967 and covering the period 1920–1939. The work focuses on the contributions of Polish logicians, more particularly, mathematical logicians, to modern logic. Library of Congress cataloging data LC Control No.: 67106639 Type of Material: Book (Print, Microform, Electronic, etc.) Personal Name: McCall, Storrs, comp. Main Title: Polish logic, 1920-1939 papers by Ajdukiewicz [and others]; Published/Created: Oxford, Clarendon P., 1967. Description: [2] viii, 406 p. 23 cm. Subjects: Logic, Symbolic and mathematical--Addresses, essays, lectures. LC Classification: BC135 .M18
https://en.wikipedia.org/wiki/Degree%20of%20frost
A degree of frost is a non-standard unit of measure for air temperature meaning degrees below melting point (also known as "freezing point") of water (0 degrees Celsius or 32 degrees Fahrenheit). "Degree" in this case can refer to degree Celsius or degree Fahrenheit. When based on Celsius, 0 degrees of frost is the same as 0 °C, and any other value is simply the negative of the Celsius temperature. When based on Fahrenheit, 0 degrees of frost is equal to 32 °F. Conversion formulas: T [degrees of frost] = 32 °F − T [°F] T [°F] = 32 °F − T [degrees of frost] The term "degrees of frost" was widely used in accounts of the Heroic Age of Antarctic Exploration in the early 20th century. The term appears frequently in Ernest Shackleton's books South and Heart of the Antarctic, Apsley Cherry-Garrard's account of his Antarctic adventures in The Worst Journey in the World (wherein he recorded 109.5 degrees [Fahrenheit] of frost, –77.5 °F or –60.8 °C), in Jack London's "To Build A Fire", as well as Admiral Richard E. Byrd's book Alone.
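The conversion formulas above can be expressed directly in code, using Cherry-Garrard's reading from the article as a check:

```python
# Degrees of frost = degrees below the freezing point of water.
def frost_from_fahrenheit(temp_f):
    """Fahrenheit-based degrees of frost: 32 °F minus the temperature."""
    return 32 - temp_f

def fahrenheit_from_frost(frost):
    return 32 - frost

def frost_from_celsius(temp_c):
    """Celsius-based degrees of frost: simply the negated temperature."""
    return -temp_c

# Cherry-Garrard's 109.5 degrees (Fahrenheit) of frost:
print(fahrenheit_from_frost(109.5))   # -77.5 (°F)
print(frost_from_celsius(-60.8))      # 60.8 degrees of frost (Celsius)
```

Note that the Fahrenheit-based and Celsius-based figures differ for the same physical temperature, since the two scales have different degree sizes; –60.8 °C is 60.8 Celsius degrees of frost but 109.5 Fahrenheit degrees of frost.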
https://en.wikipedia.org/wiki/Auxostat
An auxostat is a continuous culture device which, while in operation, uses feedback from a measurement taken on the growth chamber to control the media flow rate, maintaining the measurement at a constant level. Auxo was the Greek goddess of spring growth; as a prefix, "auxo-" here refers to nutrients. The most typical auxostats are pH-auxostats, with feedback between the growth rate and a pH meter. Other auxostats may measure oxygen tension, ethanol concentration, or sugar concentration.
https://en.wikipedia.org/wiki/Turbidostat
A turbidostat is a continuous microbiological culture device, similar to a chemostat or an auxostat, which has feedback between the turbidity of the culture vessel and the dilution rate. The theoretical relationship between growth in a chemostat and growth in a turbidostat is somewhat complex, in part because they are similar. A chemostat has a fixed volume and flow rate, and thus a fixed dilution rate. A turbidostat dynamically adjusts the flow rate (and therefore the dilution rate) to make the turbidity constant. At steady state, operation of both the chemostat and turbidostat are identical. It is only when classical chemostat assumptions are violated (for instance, out of equilibrium; or the cells are mutating) that a turbidostat is functionally different. One case may be while cells are growing at their maximum growth rate, in which case it is difficult to set a chemostat to the appropriate constant dilution rate. While most turbidostats use a spectrophotometer/turbidimeter to measure the optical density for control purposes, there exist other methods, such as dielectric permittivity. The morbidostat is a similar device built to study the evolution of antimicrobial resistance. The aim is also to maintain constant turbidity levels, but this is controlled using the addition of antimicrobials.
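The feedback loop described above, in which the dilution rate is adjusted to hold turbidity constant, can be illustrated with a toy simulation. All constants here (growth rate, setpoint, controller gain) are invented for the sketch, and a simple proportional controller stands in for whatever control law a real instrument uses.

```python
# Toy turbidostat simulation: cells grow exponentially, and the dilution
# rate is adjusted each step to hold the measured optical density (a
# proxy for turbidity) near a setpoint.
def simulate_turbidostat(steps=200, dt=0.05, mu=0.8, setpoint=1.0, gain=5.0):
    od = 0.1
    history = []
    for _ in range(steps):
        # Proportional controller: dilute faster when OD exceeds setpoint.
        dilution = max(0.0, gain * (od - setpoint))
        # Growth minus washout, as in the chemostat balance equation.
        od += (mu * od - dilution * od) * dt
        history.append(od)
    return history

trace = simulate_turbidostat()
# After the start-up transient, OD settles where the dilution rate
# balances the growth rate mu - the chemostat steady-state condition.
print(round(trace[-1], 2))  # 1.16 (setpoint plus the offset mu/gain)
```

The small steady-state offset above the setpoint is an artifact of pure proportional control; it also shows the text's point that at steady state the turbidostat behaves like a chemostat whose dilution rate equals the growth rate.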
https://en.wikipedia.org/wiki/Tectin%20%28secretion%29
Tectin is an organic substance secreted by certain ciliates. Tectin may form an adhesive stalk, disc or other sticky secretion. Tectin may also form a gelatinous envelope or membrane enclosing some ciliates as a protective capsule or lorica. Tectin is also called pseudochitin. Granules or rods (called protrichocysts) in the pellicle of some ciliates are also thought to be involved in tectin secretion. See also Chitin Conchiolin Sporopollenin
https://en.wikipedia.org/wiki/Bisymmetric%20matrix
In mathematics, a bisymmetric matrix is a square matrix that is symmetric about both of its main diagonals. More precisely, an n × n matrix A is bisymmetric if it satisfies both A = AT and AJ = JA, where J is the n × n exchange matrix. For example, any matrix of the form is bisymmetric. The associated exchange matrix for this example is Properties Bisymmetric matrices are exactly the matrices that are both symmetric and centrosymmetric (equivalently, both symmetric and persymmetric). The product of two bisymmetric matrices is a centrosymmetric matrix. Real-valued bisymmetric matrices are precisely those symmetric matrices whose eigenvalues remain the same aside from possible sign changes following pre- or post-multiplication by the exchange matrix. If A is a real bisymmetric matrix with distinct eigenvalues, then the matrices that commute with A must be bisymmetric. The inverse of a bisymmetric matrix can be represented by recurrence formulas.
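The two defining conditions, A = AT and AJ = JA, can be checked directly for a small example matrix (a hypothetical 3 × 3 instance chosen for illustration), using only plain Python lists:

```python
# Check the bisymmetry conditions A = A^T and A J = J A.
def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def exchange_matrix(n):
    """J has ones on the anti-diagonal and zeros elsewhere."""
    return [[1 if i + j == n - 1 else 0 for j in range(n)] for i in range(n)]

def is_bisymmetric(A):
    J = exchange_matrix(len(A))
    return A == transpose(A) and matmul(A, J) == matmul(J, A)

# A 3x3 example that is symmetric about both diagonals:
A = [[1, 2, 3],
     [2, 4, 2],
     [3, 2, 1]]
print(is_bisymmetric(A))  # True
```

Right-multiplying by J reverses the columns of A while left-multiplying reverses its rows, so AJ = JA is exactly the statement that A reads the same along both diagonals.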
https://en.wikipedia.org/wiki/Analysis%20of%20molecular%20variance
Analysis of molecular variance (AMOVA) is a statistical model for the molecular variation in a single species, typically biological. The name and model are inspired by ANOVA. The method was developed by Laurent Excoffier, Peter Smouse and Joseph Quattro at Rutgers University in 1992. Since developing AMOVA, Excoffier has written a program for running such analyses. This program, which runs on Windows, is called Arlequin and is freely available on Excoffier's website. There are also implementations in the R language in the ade4 and pegas packages, both available on CRAN (Comprehensive R Archive Network). Another implementation is in Info-Gen, which also runs on Windows. The student version is free and fully functional. The native language of the application is Spanish, but an English version is also available. An additional free statistical package, GenAlEx, is geared toward teaching as well as research and allows for complex genetic analyses to be employed and compared within the commonly used Microsoft Excel interface. This software allows for calculation of analyses such as AMOVA, as well as comparisons with other types of closely related statistics including F-statistics and Shannon's index, and more.
https://en.wikipedia.org/wiki/Terminal%20countdown%20demonstration%20test
A terminal countdown demonstration test (TCDT) is a simulation of the final hours of a launch countdown and serves as a practice exercise in which both the launch team and flight crew rehearse launch day timelines and procedures. In the specific case of a TCDT for the Space Shuttle, the test culminated in a simulated ignition and RSLS abort (automated shutdown of the orbiter's main engines). Following the simulated abort, the flight crew was briefed on emergency egress procedures and use of the fixed service structure slidewire system. On some earlier shuttle missions, and Apollo missions, the test would conclude with the flight crew evacuating the launch pad by use of these emergency systems, but this is no longer part of the test. Unmanned carrier rocket launches also undergo TCDTs, during which countdown procedures are rehearsed. These vary for specific rockets; for example, solid-fuelled rockets would not simulate an engine shutdown, as it is impossible to shut down a solid rocket after it has been lit. TCDTs typically are carried out a few days before launch. See also Space Shuttle program Ares (rocket)
https://en.wikipedia.org/wiki/Succedaneous%20tooth
The succedaneous teeth are the permanent teeth that replace the deciduous teeth. Permanent molars are not succedaneous teeth because they do not replace any primary teeth. Succedaneous teeth originate from successional laminae, whereas permanent molars originate from the general dental lamina. They begin to form as early as 24 weeks. See also Dental anatomy.
https://en.wikipedia.org/wiki/Robert%20B.%20Wilson
Robert Butler Wilson, Jr. (born May 16, 1937) is an American economist and the Adams Distinguished Professor of Management, Emeritus at Stanford University. He was jointly awarded the 2020 Nobel Memorial Prize in Economic Sciences, together with his Stanford colleague and former student Paul R. Milgrom, "for improvements to auction theory and inventions of new auction formats". Two more of his students, Alvin E. Roth and Bengt Holmström, are also Nobel Laureates in their own right. Wilson is known for his contributions to management science and business economics. His doctoral thesis introduced sequential quadratic programming, which became a leading iterative method for nonlinear programming. With other mathematical economists at Stanford, he helped to reformulate the economics of industrial organization and organization theory using non-cooperative game theory. His research on nonlinear pricing has influenced policies for large firms, particularly in the energy industry, especially electricity. Early life and academic career Wilson was born on May 16, 1937, in Geneva, Nebraska. He graduated from Lincoln High School in Lincoln, Nebraska and earned a full scholarship to Harvard University. He received his A.B. from Harvard College in 1959. He then completed his M.B.A. in 1961 and his D.B.A. in 1963 from the Harvard Business School. He worked at the University of California, Los Angeles for a very brief time and then joined the faculty at Stanford University. He has been on the faculty of the Stanford Business School since 1964. He was also an affiliated faculty member of Harvard Law School from 1993 to 2001. Research Wilson is known for research and teaching on market design, pricing, negotiation, and related topics concerning industrial organization and information economics. He is an expert on game theory and its applications. He has been a major contributor to auction designs and competitive bidding strategies in the oil, communication, and power industries,
https://en.wikipedia.org/wiki/Standalone%20program
A standalone program, also known as a freestanding program, is a computer program that does not load any external module, library function or program and that is designed to boot with the bootstrap procedure of the target processor – it runs on bare metal. In early computers like the ENIAC, without the concept of an operating system, standalone programs were the only way to run a computer. Standalone programs are usually written in assembly language for specific hardware. Later standalone programs typically were provided for utility functions such as disk formatting. Also, computers with very limited memory may use standalone programs, i.e. most computers until the mid-1950s, and later still embedded processors. Standalone programs are now mainly limited to SoCs or microcontrollers (where battery life, price, and data space are at a premium) and critical systems, where an operating system would add unacceptable complexity and uncertainty: in extreme cases every possible set of inputs and errors must be tested and thus every potential output known; fully independent (separate physical suppliers and programming teams) yet fully parallel system-state monitoring may be required; or the attack surface must be minimized. Examples include industrial operator safety interrupts, commercial airliners, medical devices, ballistic missile launch controls, and lithium-battery charge controllers in consumer devices (where fire hazard matters and the chip costs approximately 10 cents). Resource-limited microcontrollers can also be made more tolerant of varied environmental conditions than the more powerful hardware needed for an operating system; this is possible because the much lower clock frequency, wider pin spacing, lack of large data buses (e.g. DDR4 RAM modules), and limited transistor count allow for wider design margins and thus the potential for more robust electrical and physical properties, both in circuit layout and material choices. See also Bare machine
https://en.wikipedia.org/wiki/Plant%20lifecycle%20management
Plant lifecycle management (PLM) is the process of managing an industrial facility's data and information throughout its lifetime. Plant lifecycle management differs from product lifecycle management by its primary focus on the integration of logical, physical and technical plant data in a combined plant model. A PLM model can be used through a plant's whole lifecycle, covering: Design, Construction, Erection, Commissioning, Handover, Operation, Maintenance/Refurbishment/Life Extension, Decommissioning, Land rehabilitation. Parts of the model Logical model The logical plant model may cover: Process & instrumentation diagrams (P&ID) Pipe & instrumentation diagrams (P&ID) P&I schematic Process flow Massflow diagram (similar to the process flow but used in the mineral industry) Electrical key diagram Cabling diagram Electrical Hydraulic Pneumatic Heating venting & air-conditioning (HVAC) Water and wastewater Physical model Physical parts of a plant are usually represented by 3D CAD models. The CAD system used would typically focus on top-down design, routing, and DMU, and would differ on many points from the systems used in the mechanical industry or for architectural engineering. Sometimes the CAD system would be supplemented by software to generate 3D views or walk-through features. Technical model The technical data is typically managed by an ERP system or some other database. There could also be links to systems for handling unstructured data, like EDM systems. Integration Integration with Enterprise (EPCM, ePCM), Integration with Enterprise (Owner/Operator), Integration with Regulator. Applicability New Build Return to Service (RTS) See also Lifecycle management ISO 10303 - Industrial automation systems and integration—Product data representation and exchange ISO 15926 - Process Plants including Oil and Gas facilities life-cycle data Notes Further reading about Virtual Mill about Augmented reality Product lifecycle management Computer-aided design Man
https://en.wikipedia.org/wiki/Willmore%20conjecture
In differential geometry, the Willmore conjecture is a lower bound on the Willmore energy of a torus. It is named after the English mathematician Tom Willmore, who conjectured it in 1965. A proof by Fernando Codá Marques and André Neves was announced in 2012 and published in 2014. Willmore energy Let v : M → R3 be a smooth immersion of a compact, orientable surface. Giving M the Riemannian metric induced by v, let H : M → R be the mean curvature (the arithmetic mean of the principal curvatures κ1 and κ2 at each point). In this notation, the Willmore energy W(M) of M is given by the integral of the squared mean curvature over the surface, W(M) = ∫M H² dA. It is not hard to prove that the Willmore energy satisfies W(M) ≥ 4π, with equality if and only if M is an embedded round sphere. Statement Calculation of W(M) for a few examples suggests that there should be a better bound than W(M) ≥ 4π for surfaces with genus g(M) > 0. In particular, calculation of W(M) for tori with various symmetries led Willmore to propose in 1965 the following conjecture, which now bears his name: for every smooth immersed torus M in R3, W(M) ≥ 2π². In 1982, Peter Wai-Kwong Li and Shing-Tung Yau proved the conjecture in the non-embedded case, showing that if an immersion of a compact surface is not an embedding, then W(M) is at least 8π. In 2012, Fernando Codá Marques and André Neves proved the conjecture in the embedded case, using the Almgren–Pitts min-max theory of minimal surfaces. Martin Schmidt claimed a proof in 2002, but it was not accepted for publication in any peer-reviewed mathematical journal (although it did not contain a proof of the Willmore conjecture, he proved some other important conjectures in it). Prior to the proof of Marques and Neves, the Willmore conjecture had already been proved for many special cases, such as tube tori (by Willmore himself), and for tori of revolution (by Langer & Singer).
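The conjectured bound can be checked numerically for tori of revolution. The sketch below is not part of the article; it uses the standard parametrization of a torus with centre-circle radius R and tube radius r, whose principal curvatures are 1/r and cos t/(R + r cos t), and integrates H² over the surface. At the optimal ratio R/r = √2 it recovers exactly 2π²:

```python
import math

def willmore_energy_torus(R, r, n=2000):
    """Numerically integrate W = integral of H^2 dA over a torus of
    revolution with tube radius r and centre-circle radius R (R > r).
    Principal curvatures: k1 = 1/r, k2 = cos(t)/(R + r cos(t));
    area element: dA = r (R + r cos t) dt dphi."""
    total = 0.0
    for i in range(n):  # midpoint rule in t; the phi integral gives 2*pi
        t = 2.0 * math.pi * (i + 0.5) / n
        H = 0.5 * (1.0 / r + math.cos(t) / (R + r * math.cos(t)))
        total += H * H * r * (R + r * math.cos(t))
    return 2.0 * math.pi * total * (2.0 * math.pi / n)

# The minimum over tori of revolution is attained at R/r = sqrt(2),
# where the energy equals the conjectured bound 2*pi^2 ~ 19.7392.
print(round(willmore_energy_torus(math.sqrt(2.0), 1.0), 4))  # -> 19.7392
print(round(2.0 * math.pi ** 2, 4))                          # -> 19.7392
```

Any other radius ratio gives a strictly larger energy, consistent with Willmore's original computation for tori of revolution.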
https://en.wikipedia.org/wiki/Thicket
A thicket is a very dense stand of trees or tall shrubs, often dominated by only one or a few species, to the exclusion of all others. They may be formed by species that shed large numbers of highly viable seeds that are able to germinate in the shelter of the maternal plants. In some conditions, the formation or spread of thickets may be assisted by human disturbance of an area. Where a thicket is formed of briar (also spelled brier), which is a common name for any of a number of unrelated thorny plants, it may be called a briar patch. Plants termed briar include species in the genera Rosa (Rose), Rubus, and Smilax.
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Israel
The following is a list of Israeli flags. National flag and state flag Governmental flags Military and police flags Army flags Navy flags Air Force flags Police flags Knesset Guard flag Intelligence flags Prison Service Israel Fire and Rescue Services Municipal flags Organization flags Historical flags National flag proposals Lifeguard flags Minority flags See also Emblem of Israel Flag of Israel External links Flags Lists and galleries of flags National symbols of Israel
https://en.wikipedia.org/wiki/Differential%20inclusion
In mathematics, differential inclusions are a generalization of the concept of ordinary differential equation, taking the form dx/dt(t) ∈ F(t, x(t)), where F is a multivalued map, i.e. F(t, x) is a set rather than a single point. Differential inclusions arise in many situations including differential variational inequalities, projected dynamical systems, Moreau's sweeping process, linear and nonlinear complementarity dynamical systems, discontinuous ordinary differential equations, switching dynamical systems, and fuzzy set arithmetic. For example, the basic rule for Coulomb friction is that the friction force has magnitude μN in the direction opposite to the direction of slip, where N is the normal force and μ is a constant (the friction coefficient). However, if the slip is zero, the friction force can be any force in the correct plane with magnitude smaller than or equal to μN. Thus, writing the friction force as a function of position and velocity leads to a set-valued function. In a differential inclusion, not only is the right-hand side a set-valued map, but the values may also be constrained to a subset of a Euclidean space; the main purpose is then to find a function satisfying the differential inclusion almost everywhere in an open bounded set. Theory Existence theory usually assumes that F(t, x) is an upper hemicontinuous function of x, measurable in t, and that F(t, x) is a closed, convex set for all t and x. Existence of solutions for the initial value problem for a sufficiently small time interval [t0, t0 + ε), ε > 0 then follows. Global existence can be shown provided F does not allow "blow-up" (the solution norm tending to infinity as t approaches some finite time). Existence theory for differential inclusions with non-convex F(t, x) is an active area of research. Uniqueness of solutions usually requires other conditions. For example, suppose F satisfies a one-sided Lipschitz condition: ⟨v1 − v2, x1 − x2⟩ ≤ C‖x1 − x2‖² for some C, for all x1 and x2 and all v1 ∈ F(t, x1), v2 ∈ F(t, x2). Then the initial value problem has a unique solution. This is closely relate
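As a toy illustration of the set-valued right-hand side described above (a minimal sketch of my own, not from the article), consider the scalar inclusion dx/dt ∈ −Sign(x), where Sign(0) is the whole interval [−1, 1] — a one-dimensional caricature of Coulomb friction. An implicit Euler step selects the one element of the set that keeps the state consistent, which reduces to soft-thresholding and reproduces the characteristic finite-time "sticking" at zero:

```python
def step(x, h):
    """One implicit-Euler step for the differential inclusion
    dx/dt in -Sign(x), with the set-valued convention Sign(0) = [-1, 1].
    Solving x_next = x - h*s with s in Sign(x_next) yields the
    soft-thresholding (shrinkage) operator."""
    if x > h:
        return x - h
    if x < -h:
        return x + h
    return 0.0   # the set-valued right-hand side lets the state stick at zero

x, h = 1.0, 0.1
for _ in range(20):
    x = step(x, h)
print(x)   # -> 0.0: exact finite-time convergence, impossible for a smooth ODE
```

Note that F(x) = −Sign(x) here is upper hemicontinuous with closed convex values and satisfies a one-sided Lipschitz condition with C = 0, so the existence and uniqueness theory sketched above applies.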
https://en.wikipedia.org/wiki/List%20of%20public%20transport%20smart%20cards
The following tables list smart cards used for public transport and other electronic purse applications. Africa Americas Asia and Oceania Europe Gallery See also Calypso, an international electronic ticketing standard, originally designed by a group of transit operators CIPURSE, is an open security standard for transit fare collection systems Smartcards on buses and trams in Great Britain Smartcards on National Rail (Great Britain)
https://en.wikipedia.org/wiki/Outline%20of%20botany
The following outline is provided as an overview of and topical guide to botany: Botany – biological discipline which involves the study of plants. Core concepts of botany Bud Cell wall Chlorophyll Chloroplast Flora Flower Fruit Forest Leaf Meristem Photosynthesis Plant Plant cell Pollen Seed Seedling Spore Tree Vine Wood Subdisciplines of botany Branches of botany Agronomy Bryology (mosses and liverworts) Dendrology (woody plants) Ethnobotany Lichenology (lichens) Mycology (fungi) Paleobotany Palynology (spores and pollen) Phycology (algae) Phytosociology Plant anatomy Plant ecology Plant evolution Plant morphology Plant pathology Plant physiology Plant taxonomy Pteridology (ferns) History of botany History of botany History of plant systematics Kinds of plants Major plant groups Algae Cyanobacteria Brown algae Charophyta Chlorophyta Desmid Diatom Red algae Green algae Bryophytes Anthocerotophyta (hornworts) Bryophyta (mosses) Marchantiophyta (liverworts) Pteridophytes Lycopodiophyta (club mosses) Pteridophyta (ferns & horsetails) Rhyniophyta (early plants) Gymnosperms Pteridospermatophyta (seed "ferns") Cycadophyta Ginkgophyta Gnetophyta Pinophyta (conifers) Angiosperms Dicotyledon Asteraceae (sunflower family) Cactaceae (cactus family) Fabaceae (legume family) Lamiaceae (mint family) Rosaceae (rose family) Monocotyledon Araceae (arum family) Arecaceae (palm family) Iridaceae (iris family) Orchidaceae (orchid family) Poaceae (grass family) Some well-known plants List of culinary fruits List of edible seeds List of culinary herbs and spices List of culinary nuts List of vegetables List of woods General plant species concepts Plant taxonomy Cultivated plant taxonomy List of systems of plant taxonomy Clades Monophyletic Polyphyletic Speciation Isolating mechanisms Concept of species Species problem Notable botanists In alphabetical order by surname: Aristotle Arthur C
https://en.wikipedia.org/wiki/Inertia%20wheel%20pendulum
An inertia wheel pendulum is a pendulum with an inertia wheel attached. It can be used as a pedagogical problem in control theory. This type of pendulum is often confused with the gyroscopic effect, which has a completely different physical nature. See also Inverted pendulum Robotic unicycle Spinning top
https://en.wikipedia.org/wiki/Solids%20with%20icosahedral%20symmetry
Solids with full icosahedral symmetry Platonic solids - regular polyhedra (all faces of the same type) Archimedean solids - polyhedra with more than one polygon face type. Catalan solids - duals of the Archimedean solids. Platonic solids Achiral Archimedean solids Achiral Catalan solids Kepler-Poinsot solids Achiral nonconvex uniform polyhedra Chiral Archimedean and Catalan solids Archimedean solids: Catalan solids: Chiral nonconvex uniform polyhedra See also The Fifty Nine Icosahedra Rotational symmetry
https://en.wikipedia.org/wiki/Sten%20scores
The results for some scales of some psychometric instruments are returned as sten scores, sten being an abbreviation for 'Standard Ten' and thus closely related to stanine scores. Definition A sten score indicates an individual's approximate position (as a range of values) with respect to the population of values and, therefore, to other people in that population. The individual sten scores are defined by reference to a standard normal distribution. Unlike stanine scores, which have a midpoint of five, sten scores have no midpoint (the midpoint is the value 5.5). Like stanines, individual sten scores are demarcated by half standard deviations. Thus, a sten score of 5 includes all standard scores from -.5 to zero and is centered at -0.25, and a sten score of 4 includes all standard scores from -1.0 to -0.5 and is centered at -0.75. A sten score of 1 includes all standard scores below -2.0. Sten scores of 6-10 "mirror" scores 5-1. The table below shows the standard scores that define stens and the percent of individuals drawn from a normal distribution that would receive each sten score. Percentiles are the percentile of the sten score (which is the mid-point of a range of z-scores). Sten scores (for the entire population of results) have a mean of 5.5 and a standard deviation of 2. Calculation of sten scores When the score distribution is approximately normally distributed, sten scores can be calculated by a linear transformation: (1) the scores are first standardized; (2) then multiplied by the desired standard deviation of 2; and finally, (3) the desired mean of 5.5 is added. The resulting decimal value may be used as-is or rounded to an integer. For example, suppose that scale scores are found to have a mean of 23.5, a standard deviation of 4.2, and to be approximately normally distributed. Then sten scores for this scale can be calculated using the formula sten = 2(x − 23.5)/4.2 + 5.5. It is also usually necessary to truncate such scores, particularly if the scores are skewed.
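The three-step linear transformation described above can be written directly in code. This is a generic sketch (the clamping to the 1–10 range reflects the truncation the text mentions for skewed distributions):

```python
def sten(raw, mean, sd):
    """Convert a raw scale score to a sten score:
    standardize, multiply by the target SD of 2, add the target
    mean of 5.5, then truncate to the valid 1-10 range."""
    s = 2.0 * (raw - mean) / sd + 5.5
    return max(1.0, min(10.0, s))

# Worked example from the text: scale mean 23.5, SD 4.2.
print(sten(23.5, 23.5, 4.2))             # score at the mean -> 5.5
print(round(sten(27.7, 23.5, 4.2), 2))   # one SD above the mean -> 7.5
```

A score one standard deviation above the mean lands at 7.5, i.e. on the boundary between sten 7 and sten 8, as expected from the half-standard-deviation band widths.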
https://en.wikipedia.org/wiki/Mott%20the%20Hoople%20%28album%29
Mott the Hoople is the debut studio album by the band of the same name. It was produced by Guy Stevens and released in 1969 by Island Records in the UK (ILPS 9108), and in 1970 by Atlantic Records in the US (SD 8258). It was re-issued by Angel Air in 2003 (SJPCD157). Background Stevens, the group's initial mentor and guide, wanted to create an album that would suggest Bob Dylan singing with the Rolling Stones. This was partially achieved, with the album including several Dylanesque cover versions along with aggressive rock originals. Years later, vocalist Ian Hunter - who had only just joined the band prior to Mott the Hoople's recording and had yet to play live with them - would insinuate, in an August 1980 Trouser Press magazine interview, that the Stones' 1971 track "Bitch" bore more than a passing resemblance to this album's "Rock and Roll Queen." (Both songs are in the key of A, and use the pentatonic scale.) An instrumental version of The Kinks' "You Really Got Me" introduces the album, though a vocal version was recorded and is available on Mott's compilation release Two Miles From Heaven. Doug Sahm's "At the Crossroads" (originally recorded by Sahm's Sir Douglas Quintet in 1968) and Sonny Bono's "Laugh at Me" (originally issued by Sonny & Cher on their second full-length album in 1966, but without vocals from Cher) are suitably reminiscent of Bob Dylan, as is Hunter's "Backsliding Fearlessly." Initial copies of the UK album were wrongly pressed with the song "The Road to Birmingham," (the B-side of their debut single in the UK) at the end of side one, with "Backsliding Fearlessly" replacing "Rock and Roll Queen" at the start of side two. The album's cover is a colorized reproduction of M. C. Escher's lithograph "Reptiles." In an interesting coincidence considering Guy Stevens' desire for Mott to sound like the Rolling Stones, in early 1969 Mick Jagger had approached Escher wanting to commission a painting for the cover of the Stones' upcoming album L
https://en.wikipedia.org/wiki/Entropy%20%28computing%29
In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other purposes that require random data. This randomness is often collected from hardware sources (such as variance in fan noise or hard-drive timings), either pre-existing ones such as mouse movements or specially provided randomness generators. A lack of entropy can have a negative impact on performance and security. Linux kernel The Linux kernel generates entropy from keyboard timings, mouse movements, and integrated drive electronics (IDE) timings and makes the random character data available to other operating system processes through the special files /dev/random and /dev/urandom. This capability was introduced in Linux version 1.3.30. There are some Linux kernel patches allowing one to use more entropy sources. The audio_entropyd project, which is included in some operating systems such as Fedora, allows audio data to be used as an entropy source. Also available are video_entropyd, which calculates random data from a video source, and entropybroker, which includes these three and can be used to distribute the entropy data to systems not capable of running any of these (e.g. virtual machines). Furthermore, one can use the HAVEGE algorithm through haveged to pool entropy. In some systems, network interrupts can be used as an entropy source as well. OpenBSD kernel OpenBSD has integrated cryptography as one of its main goals and has always worked on increasing its entropy, for encryption but also for randomising many parts of the OS, including various internal operations of its kernel. Around 2011, two of the random devices were dropped and linked into a single source, as it could produce hundreds of megabytes per second of high-quality random data on an average system. This made depletion of random data by userland programs impossible on OpenBSD once enough entropy has initially been gathered. Hurd kernel A driver ported from the Linux kernel has been made available f
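Applications normally consume the kernel's collected entropy through these interfaces rather than reading the pool directly. A minimal sketch in Python — os.urandom asks the operating system for random bytes, which on Linux is backed by the same source exposed as /dev/urandom:

```python
import os

# Request 16 bytes of cryptographic-quality random data from the OS.
# On Linux this is served by the kernel's entropy-seeded CSPRNG.
key = os.urandom(16)
print(len(key))    # -> 16
print(key.hex())   # different on every run
```

Using the OS interface rather than a userspace PRNG matters for cryptographic keys: the kernel mixes in the hardware-derived entropy described above, which a purely deterministic generator seeded from, say, the clock would lack.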
https://en.wikipedia.org/wiki/Windbelt
The Windbelt is a wind power harvesting device invented by Shawn Frayne in 2004 for converting wind power to electricity. It consists of a flexible polymer ribbon stretched between supports transverse to the wind direction, with magnets glued to it. When the wind blows across it, the ribbon vibrates due to vortex shedding, similar to the action of an aeolian harp. The vibrating movement of the magnets induces current in nearby pickup coils by electromagnetic induction. One prototype has powered two LEDs, a radio, and a clock (separately) using wind generated from a household fan. The cost of the materials was well under US$10; $2–$5 for 40 mW is a cost of $50–$125 per watt. There are three sizes in development: The microBelt, a 12 cm version. This could be put into production in around six months. It is expected to produce 1 milliwatt on average. Charging a pair of ideal rechargeable AA cells (2.5 Ah, 1.2 V) at this rate would take 6000 hours, or 250 days. The Windcell, a 1-metre version that could be used to power meshed WiFi repeaters, charge cellphones, or run LED lights. This could go into production within 18 to 24 months. It is hoped that a square-metre panel at 6 m/s average windspeed can generate 10 W on average. An experimental 10-metre model that has no production date. The Windbelt's inventor, Shawn Frayne, was a winner of the 2007 Breakthrough Award from the publishers of the magazine Popular Mechanics. He is trying to make the Windbelt cheaper. The inventor's claims that the device is 10–30 times more efficient than small wind turbines have been refuted by tests. The microWindbelt could generate 0.2 mW at a wind speed of 3.5 m/s and 5 mW at 7.5 m/s, which represent efficiencies (ηCp) of 0.21% and 0.53% respectively. Wind turbines typically have efficiencies of 1% to 10%. Since the Windbelt, a number of other "flutter" wind harvester devices have been designed, but like the Windbelt almost all have efficiencies below turbine machines. Footnotes
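The quoted charging-time figure follows from a simple energy balance. A back-of-envelope check, using the cell specification given in the text (two AA cells at 2.5 Ah and 1.2 V, charged at the microBelt's 1 mW average, with losses ignored):

```python
# Energy capacity of the two-cell pack: amp-hours * volts = watt-hours.
cell_energy_wh = 2.5 * 1.2            # 3 Wh per cell
pack_energy_wh = 2 * cell_energy_wh   # 6 Wh for the pair
power_w = 1e-3                        # microBelt average output: 1 mW

hours = pack_energy_wh / power_w
print(round(hours), round(hours / 24))  # -> 6000 250
```

Six watt-hours at one milliwatt is 6000 hours, i.e. 250 days — matching the figure in the text, and illustrating why such harvesters suit trickle-powered sensors rather than battery charging.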
https://en.wikipedia.org/wiki/Chronozone
A chronozone or chron is a unit in chronostratigraphy, defined by events such as geomagnetic reversals (magnetozones), or based on the presence of specific fossils (biozone or biochronozone). According to the International Commission on Stratigraphy, the term "chronozone" refers to the rocks formed during a particular time period, while "chron" refers to that time period. Although non-hierarchical, chronozones have been recognized as useful markers or benchmarks of time in the rock record. Chronozones are non-hierarchical in that chronozones do not need to correspond across geographic or geologic boundaries, nor be equal in length, although an early constraint required that a chronozone be defined as smaller than a geological stage. Another early use was hierarchical, in that Harland et al. (1989) used "chronozone" for the slice of time smaller than a faunal stage defined in biostratigraphy. The ICS superseded these earlier usages in 1994. The key factor in designating an internationally acceptable chronozone is whether the overall fossil column is clear, unambiguous, and widespread. Some accepted chronozones contain others, and certain larger chronozones have been designated which span whole defined geological time units, both large and small. For example, the chronozone Pliocene is a subset of the chronozone Neogene, and the chronozone Pleistocene is a subset of the chronozone Quaternary. See also Body form Chronology (geology) European Mammal Neogene Geologic time scale North American Land Mammal Age Type locality (geology) List of GSSPs
https://en.wikipedia.org/wiki/Darcy%20friction%20factor%20formulae
In fluid dynamics, the Darcy friction factor formulae are equations that allow the calculation of the Darcy friction factor, a dimensionless quantity used in the Darcy–Weisbach equation, for the description of friction losses in pipe flow as well as open-channel flow. The Darcy friction factor is also known as the Darcy–Weisbach friction factor, resistance coefficient or simply friction factor; by definition it is four times larger than the Fanning friction factor. Notation In this article, the following conventions and definitions are to be understood: The Reynolds number Re is taken to be Re = V D / ν, where V is the mean velocity of fluid flow, D is the pipe diameter, and where ν is the kinematic viscosity μ / ρ, with μ the fluid's dynamic viscosity, and ρ the fluid's density. The pipe's relative roughness ε / D, where ε is the pipe's effective roughness height and D the pipe (inside) diameter. f stands for the Darcy friction factor. Its value depends on the flow's Reynolds number Re and on the pipe's relative roughness ε / D. The log function is understood to be base-10 (as is customary in engineering fields): if x = log(y), then y = 10x. The ln function is understood to be base-e: if x = ln(y), then y = ex. Flow regime Which friction factor formula may be applicable depends upon the type of flow that exists: Laminar flow Transition between laminar and turbulent flow Fully turbulent flow in smooth conduits Fully turbulent flow in rough conduits Free surface flow. Transition flow Transition (neither fully laminar nor fully turbulent) flow occurs in the range of Reynolds numbers between 2300 and 4000. The value of the Darcy friction factor is subject to large uncertainties in this flow regime. Turbulent flow in smooth conduits The Blasius correlation is the simplest equation for computing the Darcy friction factor. Because the Blasius correlation has no term for pipe roughness, it is valid only for smooth pipes. However, the Blasius correlation is som
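The laminar result and the Blasius correlation can be combined into a small calculator. A sketch (the laminar formula f = 64/Re is standard; 0.316 is the commonly quoted Blasius coefficient; the transition range is deliberately left undefined, reflecting the large uncertainty noted above):

```python
def darcy_friction_factor(Re):
    """Darcy friction factor for smooth pipes, covering the two
    simple regimes: laminar (f = 64/Re) and fully turbulent
    (Blasius correlation, f = 0.316 * Re**-0.25)."""
    if Re < 2300:
        return 64.0 / Re            # laminar flow
    if Re > 4000:
        return 0.316 / Re ** 0.25   # Blasius, smooth-pipe turbulence
    raise ValueError("transition regime: no reliable correlation")

print(darcy_friction_factor(1000))              # laminar -> 0.064
print(round(darcy_friction_factor(1e5), 4))     # Blasius -> 0.0178
```

Note that because the Blasius form omits roughness, this sketch should not be used for rough conduits, where a Colebrook-type relation involving ε / D is needed instead.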
https://en.wikipedia.org/wiki/Salvia%20verbenaca
Salvia verbenaca, also known as wild clary or wild sage, is native to the British Isles, the Mediterranean region in Southern Europe, North Africa, and the Near East, and to the Caucasus. It can be found as an introduced species that has naturalized in meadows in the Eastern United States. S. verbenaca is a tall perennial herb with hairy stems and branches that sprawl out erectly. Its leaves are basal and toothed, and vary in length. It has soft purple to violet flowers in midsummer. It is in flower from June to September, and the seeds ripen from July to October. The flowers are bisexual and are pollinated by bees. Some are also cleistogamous and pollinate themselves. The plant is noted for attracting pollinators and wildlife. It prefers neutral and alkaline soils and needs full sun. This aromatic sage is used as a flavoring in foods and to make tea; the flowers can be added to salads. Resources USDA treatment -Salvia verbenaca Salvia verbenaca - U.K. Floral images R.B.Garden-Sydney: Salvia verbenaca Databases verbenaca Herbs Flora of North Africa Flora of Western Asia Flora of Great Britain Flora of Europe Matorral shrubland Garden plants of Europe Garden plants of Africa Garden plants of Asia Edible plants Plants described in 1753 Taxa named by Carl Linnaeus
https://en.wikipedia.org/wiki/Electroneurogram
An electroneurogram is a method used to visualize directly recorded electrical activity of neurons in the central nervous system (brain, spinal cord) or the peripheral nervous system (nerves, ganglions). The acronym ENG is often used. An electroneurogram is similar to an electromyogram (EMG), but the latter is used to visualize muscular activity. An electroencephalogram (EEG) is a particular type of electroneurogram in which several electrodes are placed around the head and the general activity of the brain is recorded, without having very high resolution to distinguish between the activity of different groups of neurons. An electroneurogram is usually obtained by placing an electrode in the neural tissue. The electrical activity generated by neurons is recorded by the electrode and transmitted to an acquisition system, which usually allows one to visualize the activity of the neuron. Each vertical line in an electroneurogram represents one neuronal action potential. Depending on the precision of the electrode used to record neural activity, an electroneurogram can contain the activity of anything from a single neuron to thousands of neurons. Researchers adapt the precision of their electrode to either focus on the activity of a single neuron or the general activity of a group of neurons, both strategies having their advantages. External links An example of a neural recording Electrophysiology
https://en.wikipedia.org/wiki/Immune%20privilege
Certain sites of the mammalian body have immune privilege, meaning they are able to tolerate the introduction of antigens without eliciting an inflammatory immune response. Tissue grafts are normally recognised as foreign antigens by the body and attacked by the immune system. However, in immune privileged sites, tissue grafts can survive for extended periods of time without rejection occurring. Immunologically privileged sites include: the eyes the placenta and fetus the testicles the central nervous system Immune privilege is also believed to occur to some extent or able to be induced in articular cartilage. This was once thought to also include the brain, but this is now known to be incorrect, as it has been shown that immune cells of the central nervous system contribute to the maintenance of neurogenesis and spatial learning abilities in adulthood. Immune privilege is thought to be an evolutionary adaptation to protect vital structures from the potentially damaging effects of an inflammatory immune response. Inflammation in the brain or eye can lead to loss of organ function, while immune responses directed against a fetus can lead to miscarriage. Medically, a cornea transplant takes advantage of this, as does knee meniscal transplantation. Mechanisms Antigens from immune privileged regions have been found to interact with T cells in an unusual way: inducing tolerance of normally rejected stimuli. Immune privilege has emerged as an active rather than a passive process. Physical structures surrounding privileged sites cause a lack of lymphatic drainage, limiting the immune system's ability to enter the site. Other factors that contribute to the maintenance of immune privilege include: low expression of classical MHC class Ia molecules expression of immunoregulatory nonclassical, low polymorphic class Ib MHC molecules increased expression of surface molecules that inhibit complement activation local production of immunosuppressive cytokines such as T
https://en.wikipedia.org/wiki/G.I.%20Joe%3A%20The%20Rise%20of%20Cobra
G.I. Joe: The Rise of Cobra is a 2009 American military science fiction action film based on the G.I. Joe toy line. It is the first installment in the G.I. Joe film series. Directed by Stephen Sommers from a screenplay by Stuart Beattie, David Elliot, and Paul Lovett, the film features an ensemble cast based on the various characters of the toy line. The story follows two American soldiers, Duke and Ripcord, who join the G.I. Joe Team after being attacked by Military Armaments Research Syndicate (M.A.R.S.) troops. After leaked drafts of the script were criticized by fans, Larry Hama, writer of the comic book series G.I. Joe: A Real American Hero, was hired as creative consultant, and rewrites were made. Filming took place in Downey, California and Prague's Barrandov Studios, and six companies handled the visual effects. G.I. Joe: The Rise of Cobra premiered at the Andrews Air Force Base on July 31, 2009, and was released in the United States on August 7, by Paramount Pictures, following an extensive marketing campaign focused on the Mid-American public. The film received generally negative reviews from critics and grossed over $302 million worldwide against a $175 million budget. A sequel, titled G.I. Joe: Retaliation, was released in 2013. Plot In the near future, weapons master James McCullen has created a nanotech-based weapon—nanomites designed to devour metal and other materials, capable of destroying anything from tanks to cities. The nanobots can only be stopped by activating the kill switch. His company M.A.R.S. sells four warheads to NATO, and NATO troops led by American soldiers Duke and Ripcord deliver the warheads. Their convoy is ambushed by the Baroness, whom Duke recognizes to be his ex-fiancée Ana Lewis. Duke and Ripcord are rescued by Scarlett, Snake Eyes, Breaker, and Heavy Duty. They take the warheads to The Pit, G.I. Joe's command center in Egypt, and upon arriving, rendezvous with the head of the G.I. Joe Team, General Hawk. Hawk takes comm
https://en.wikipedia.org/wiki/Helium%20atom%20scattering
Helium atom scattering (HAS) is a surface analysis technique used in materials science. It provides information about the surface structure and lattice dynamics of a material by measuring the diffracted atoms from a monochromatic helium beam incident on the sample. History The first recorded helium diffraction experiment was completed in 1930 by Immanuel Estermann and Otto Stern on the (100) crystal face of lithium fluoride. This experimentally established the feasibility of atom diffraction when the de Broglie wavelength, λ, of the impinging atoms is on the order of the interatomic spacing of the material. At the time, the major limit to the experimental resolution of this method was due to the large velocity spread of the helium beam. It wasn't until the development of high pressure nozzle sources capable of producing intense and strongly monochromatic beams in the 1970s that HAS gained popularity for probing surface structure. Interest in studying the collision of rarefied gases with solid surfaces was helped by a connection with aeronautics and space problems of the time. Plenty of studies showing the fine structures in the diffraction pattern of materials using helium atom scattering were published in the 1970s. However, it wasn't until a third generation of nozzle beam sources was developed, around 1980, that studies of surface phonons could be made by helium atom scattering. These nozzle beam sources were capable of producing helium atom beams with an energy resolution of less than 1 meV, making it possible to explicitly resolve the very small energy changes resulting from the inelastic collision of a helium atom with the vibrational modes of a solid surface, so HAS could now be used to probe lattice dynamics. The first measurement of such a surface phonon dispersion curve was reported in 1981, leading to a renewed interest in helium atom scattering applications, particularly for the study of surface dynamics. Basic principles Surface sensitivity
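The feasibility condition in the History section — a de Broglie wavelength on the order of the interatomic spacing — is easy to check numerically via λ = h / √(2mE). The beam energy of 64 meV below is an assumed value typical of a room-temperature nozzle source, not a figure from the article:

```python
import math

# Physical constants (SI)
h = 6.62607015e-34    # Planck constant, J*s
u = 1.66053907e-27    # atomic mass unit, kg
eV = 1.602176634e-19  # electron-volt, J

m_he = 4.0026 * u     # mass of a helium-4 atom

def de_broglie(E_meV):
    """de Broglie wavelength (in metres) of a helium atom with
    kinetic energy E: lambda = h / sqrt(2 m E)."""
    E = E_meV * 1e-3 * eV
    return h / math.sqrt(2.0 * m_he * E)

# For an assumed ~64 meV beam, the wavelength comes out just under
# an angstrom -- comparable to typical interatomic spacings, which is
# what makes helium diffraction from crystal surfaces observable.
lam = de_broglie(64.0)
print(f"{lam * 1e10:.2f} angstrom")
```

Halving the beam energy lengthens the wavelength by a factor of √2, which is why nozzle sources with a narrow velocity (hence energy) spread were essential for sharp diffraction patterns.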
https://en.wikipedia.org/wiki/Weighted%20Companion%20Cube
The Weighted Companion Cube (also simply called the Companion Cube) is a fictional item featured in the Portal series of video games by Valve Corporation. Initially featured in a single level of the original Portal, Test Chamber 17, as one of Aperture Science's ubiquitous Weighted Storage Cubes with heart symbols printed on the outside, it is given to the game's main character, Chell, as part of the antagonist GLaDOS's sinister testing initiative. After carrying it through the entire level and ostensibly anthropomorphizing and "bonding" with the Cube, the malevolent AI forces her to unceremoniously dispose of it in an incinerator device. Companion Cubes later re-appear in the game's sequel with a slightly different design. The original Companion Cube is shown to have survived the events of both Portal and Portal 2, appearing as part of an ending gag. While GLaDOS has suggested the Companion Cube may be sentient, it is unclear whether this was solely to psychologically torment Chell. However, in the comic Portal 2: Lab Rat, Doug Rattman's Companion Cube is shown speaking to him, possibly as part of a schizophrenic hallucination. Following the game's release, the Weighted Companion Cube quickly increased in popularity among fans, spawning a wide array of official merchandise and fan works. It has since become a mascot for Valve's games, and has also been referenced in other, unrelated games as Easter eggs. Appearances Portal series The Companion Cube first appears in Portal's Test Chamber 17, where Chell is given it as a necessary tool to progress through the level. After being used to reach higher platforms, it must be taken with the player and used as a shield. The player must use the Companion Cube to activate three devices and jump across multiple platforms, before being forced to incinerate the cube. The Cube returns at the end of Portal 2, where it is discharged from the facility by GLaDOS following Chell's egress, still bearing burn marks from its suppose
https://en.wikipedia.org/wiki/Bioretention
Bioretention is the process in which contaminants and sedimentation are removed from stormwater runoff. The main objectives of a bioretention cell are to attenuate peak runoff and to remove stormwater runoff pollutants. Construction of a bioretention area Stormwater is first directed into the designed treatment area, which conventionally consists of a sand bed (which serves as a transition to the actual soil), a filter media layer (which consists of layered materials of various composition), and plants atop the filter media. Various soil amendments, such as water treatment residue (WTR), coconut husk, and biochar, have been proposed over the years. These materials were reported to enhance performance in terms of pollutant removal. Runoff passes first over or through the sand bed, which slows its velocity and distributes it evenly along the length of the ponding area, which consists of a surface organic layer and/or groundcover and the underlying planting soil. Water stored in the bioretention area's planting soil exfiltrates over a period of days into the underlying soils. Filtration Each of the components of the bioretention area is designed to perform a specific function. The grass buffer strip reduces incoming runoff velocity and filters particulates from the runoff. The sand bed also reduces the velocity, filters particulates, and spreads flow over the length of the bioretention area. Aeration and drainage of the planting soil are provided by the deep sand bed. The ponding area provides a temporary storage location for runoff prior to its evaporation or infiltration. Some particulates not filtered out by the grass filter strip or the sand bed settle within the ponding area. The organic or mulch layer also filters pollutants and provides an environment conducive to the growth of microorganisms, which degrade petroleum-based products and other organic material. This layer acts in a similar way to the leaf litter in a forest and prevents the erosion of the underlying soil.
https://en.wikipedia.org/wiki/Molar%20mass%20constant
The molar mass constant, usually denoted by Mu, is a physical constant defined as one twelfth of the molar mass of carbon-12: Mu = M(12C)/12. The molar mass of any element or compound is its relative atomic mass (atomic weight) multiplied by the molar mass constant. The mole and the relative atomic mass were originally defined in the International System of Units (SI) in such a way that the constant was exactly 1 g/mol. That is, the numerical value of the molar mass of an element, in grams per mole of atoms, was equal to its atomic mass relative to the atomic mass constant, mu. Thus, for example, the average atomic mass of chlorine is approximately 35.45, making the mass of one mole of chlorine atoms approximately 35.45 g. On 20 May 2019, the SI definition of the mole changed in such a way that the molar mass constant remains nearly but no longer exactly 1 g/mol. However, the difference is insignificant for all practical purposes. According to the SI, the value of Mu now depends on the mass of one atom of carbon-12, which must be determined experimentally. As of that date, the 2018 CODATA recommended value of Mu is 0.99999999965(30) × 10^-3 kg/mol. The molar mass constant is important in writing dimensionally correct equations. While one may informally say "the molar mass of an element M is the same as its atomic weight A", the atomic weight (relative atomic mass) A is a dimensionless quantity, whereas the molar mass M has the units of mass per mole. Formally, M is A times the molar mass constant Mu. Prior to 2019 redefinition The molar mass constant was unusual (but not unique) among physical constants in having an exactly defined value rather than being measured experimentally. From the old definition of the mole, the molar mass of carbon-12 was exactly 12 g/mol. From the definition of relative atomic mass, the relative atomic mass of carbon-12, that is, the atomic weight of a sample of pure carbon-12, is exactly 12. The molar mass constant was thus given by Mu = M(12C)/12 = 1 g/mol exactly. The molar mass constant is related to the mass of a carbon-12 atom.
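The dimensional bookkeeping described above (molar mass M = A × Mu, with A dimensionless) can be sketched in a few lines of Python. The function name is our own; the value of Mu is the 2018 CODATA recommended value quoted in the text:

```python
# Molar mass from a dimensionless atomic weight: M = A * Mu.
M_u = 0.99999999965e-3  # molar mass constant, kg/mol (CODATA 2018)

def molar_mass(relative_atomic_mass: float) -> float:
    """Molar mass in kg/mol from a dimensionless relative atomic mass."""
    return relative_atomic_mass * M_u

# Chlorine: atomic weight ~35.45, so one mole weighs ~35.45 g.
m_cl = molar_mass(35.45)
print(round(m_cl * 1000, 4))  # grams per mole, ~35.45
```

Note that before the 2019 redefinition the same code with `M_u = 1e-3` would have been exact rather than approximate.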
https://en.wikipedia.org/wiki/Helmholtz%20Centre%20for%20Environmental%20Research
The work of the Helmholtz Centre for Environmental Research – UFZ (prior to 28 November 2006, UFZ-Umweltforschungszentrum Leipzig-Halle GmbH) covers both basic research and applied research. The UFZ was established on 12 December 1991. The Centre commenced its research activities on 2 January 1992. The UFZ has locations in Leipzig, Halle and Magdeburg in Germany. In addition, it operates an experimental station in Bad Lauchstädt. The UFZ employs a total of about 1,200 employees (as of 2021). The UFZ has been operating KUBUS, a modern communications and event venue in Leipzig, since 2004. KUBUS has flexible event spaces in a range of sizes and accommodates up to 550 people. Research areas As an international competence centre for environmental sciences, the UFZ investigates interrelationships between humans and nature under the influence of global change. The research activities of UFZ scientists focus on the terrestrial environment: on densely populated urban and industrial conurbations, on agricultural landscapes and on near-natural landscapes. They examine issues relating to future land use, the preservation of biological diversity and of ecosystem services, the sustainable management of soil and water resources and the effect of chemicals on humans and the environment – from the level of single cells and organisms up to the scale of regions. The work of the UFZ is characterised by integrated environmental research that overcomes disciplinary boundaries between the natural and social sciences (interdisciplinarity) and brings together decision-makers from business, government and society (transdisciplinarity). Major scientific infrastructures include climate and land-use experiments (e.g., the GCEF Global Change Experimental Facility and the ProVis Centre for the visualisation of biochemical processes at cellular level), platforms and technologies for environmental monitoring (e.g., the TERENO terrestrial environmental observatories), and modelling and visualisation (e.g., TESSIN/V
https://en.wikipedia.org/wiki/Symmetric%20scale
In music, a symmetric scale is a scale which divides the octave equally. The concept and term appear to have been introduced by Joseph Schillinger and further developed by Nicolas Slonimsky as part of his famous Thesaurus of Scales and Melodic Patterns. In twelve-tone equal temperament, the octave can only be equally divided into two, three, four, six, or twelve parts, which consequently may be filled in by adding the same exact interval or sequence of intervals to each resulting note (called "interpolation of notes"). Examples include the octatonic scale (also known as the symmetric diminished scale; its mirror image is known as the inverse symmetric diminished scale) and the two-semitone tritone scale. As explained above, both are composed of repeating sub-units within an octave. This property allows these scales to be transposed to other notes, yet retain exactly the same notes as the original scale (translational symmetry). This may be seen quite readily with the whole tone scale on C: {C, D, E, F♯, G♯, A♯, C} If transposed up a whole tone to D, it contains exactly the same notes in a different permutation: {D, E, F♯, G♯, A♯, C, D} In the case of inversionally symmetrical scales, the inversion of the scale is identical. Thus the intervals between scale degrees are symmetrical if read from the "top" (end) or "bottom" (beginning) of the scale (mirror symmetry). Examples include the Ukrainian Dorian b9 scale (sixth mode of the Hungarian Major scale), the Jazz Minor b5 scale (third mode of the Hungarian Major Inverse), the Neapolitan Major scale (fourth mode of the Major Locrian scale), the Javanese slendro, the chromatic scale, the whole-tone scale, the Dorian scale, the Aeolian Dominant scale (fifth mode of the melodic minor), the Harmonic Minor scale, the Major Locrian Major 7th/Harmonic Major b5 scale, the Chromatic Lydian scale (fourth mode of the Blues Leading-Tone scale), the Phrygian Major Lydian scale (fourth mode of the Neapolitan Major b5 scale), and the double ha
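The translational symmetry described above is easy to verify by treating notes as pitch classes modulo 12. The following small illustration (our own, with C = 0) checks that the whole-tone scale is invariant under transposition by a whole tone and the octatonic scale under transposition by a minor third:

```python
# Pitch classes as integers mod 12, with C = 0.
WHOLE_TONE_ON_C = {0, 2, 4, 6, 8, 10}        # C D E F# G# A#
OCTATONIC       = {0, 1, 3, 4, 6, 7, 9, 10}  # semitone/tone alternation

def transpose(scale, semitones):
    """Transpose a pitch-class set by the given number of semitones."""
    return {(p + semitones) % 12 for p in scale}

# Transposing the whole-tone scale up a whole tone (2 semitones)
# yields exactly the same pitch-class set:
assert transpose(WHOLE_TONE_ON_C, 2) == WHOLE_TONE_ON_C
# The octatonic scale is invariant under a minor-third (3-semitone) shift:
assert transpose(OCTATONIC, 3) == OCTATONIC
# A diatonic scale, by contrast, is not invariant under any such shift.
```

The set comparison ignores order, which matches the article's point that the transposed scale contains the same notes in a different permutation.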
https://en.wikipedia.org/wiki/Magnetorheological%20damper
A magnetorheological damper or magnetorheological shock absorber is a damper filled with magnetorheological fluid, which is controlled by a magnetic field, usually generated by an electromagnet. This allows the damping characteristics of the shock absorber to be continuously controlled by varying the power of the electromagnet. The viscosity of the fluid within the damper increases as the intensity of the electromagnet increases. This type of shock absorber has several applications, most notably in semi-active vehicle suspensions, which may adapt to road conditions as they are monitored through sensors in the vehicle, and in prosthetic limbs. Types Mono tube Twin tube Double-ended MR damper VIP MR damper Commercial applications Many applications have been proposed for magnetorheological (MR) dampers. While vehicle applications are the most common use of MR dampers, useful medical applications have arisen as well, including implants and rehabilitation methods. Since MR dampers are not yet perfected, they are limited in their applications. Disadvantages exist when using a large-scale MR damper; for example, particle settling within the carrier fluid may occur, which inhibits some possible applications. History The technology was originally developed by General Motors' Delphi Automotive Division, based in the USA, and then developed further by BeijingWest Industries in China after BeijingWest Industries bought the technology from General Motors. BeijingWest Industries has subsequently introduced improvements, including a redesigned ECU and the introduction of a dual-coil system. The first car to use the technology was the 2002.5 Cadillac Seville STS, and the first sports car to use it was the 2003 C5 Corvette. Automotive These types of systems are available from OEMs for several vehicles, including the Acura MDX, Audi TT and R8, Buick Lucerne, Cadillac ATS, CTS-V, DTS, XLR, SRX, STS, Chevrolet Corvette, Camaro ZL1, Ferrari 458 Italia, 599GTB, F12 Berlinetta, Mustang Mach-E, Shelby GT
https://en.wikipedia.org/wiki/Sales%20and%20operations%20planning
Sales and operations planning (S&OP) is an integrated business management process through which the executive/leadership team continually achieves focus, alignment, and synchronization among all organization functions. The S&OP process includes an updated forecast that leads to a sales plan, production plan, inventory plan, customer lead time (backlog) plan, new product development plan, strategic initiative plan, and resulting financial plan. Plan frequency and planning horizon depend on the specifics of the context. Short product life cycles and high demand volatility require a tighter S&OP process than steadily consumed products. Done well, the S&OP process also enables effective supply chain management. The sales and operations planning process has a twofold scope. The first is horizontal alignment, balancing supply and demand through integration between the company's departments and with suppliers and customers. The second is vertical alignment, between the strategic plan and the operational plan of a company. A properly implemented S&OP process routinely reviews customer demand and supply resources and "re-plans" quantitatively across an agreed 'rolling' horizon. The re-planning process focuses on changes from the previously agreed sales and operations plan; while it helps the management team to understand how the company achieved its current level of performance, its focus is on future actions and anticipated results. Definitions S&OP grew out of the concept of aggregate production planning (APP) in the early 1950s, then shifted toward manufacturing resource planning (MRP II) around 1985, before arriving at the current definition as a business process for the alignment of supply and demand. The term S&OP and its modern meaning were conceived of in the 1980s and are generally attributed to Richard Ling, then a consultant with the management consulting firm Oliver Wight. APICS defines S&OP as the "function of setting the overall level of manufacturing output"
https://en.wikipedia.org/wiki/Panjer%20recursion
The Panjer recursion is an algorithm to compute the probability distribution approximation of a compound random variable S = X_1 + ... + X_N, where both N and the X_i are random variables of special types. In more general cases the distribution of S is a compound distribution. The recursion for the special cases considered was introduced in a paper by Harry Panjer (Distinguished Emeritus Professor, University of Waterloo). It is heavily used in actuarial science (see also systemic risk). Preliminaries We are interested in the compound random variable S = X_1 + ... + X_N, where N and the X_i fulfill the following preconditions. Claim size distribution We assume the X_i to be i.i.d. and independent of N. Furthermore, the X_i have to be distributed on a lattice with lattice width h, so that f_j = P(X_i = h j). In actuarial practice, the f_j are obtained by discretisation of the claim density function (upper, lower...). Claim number distribution The number of claims N is a random variable, which is said to have a "claim number distribution", and which can take values 0, 1, 2, etc. For the Panjer recursion, the probability distribution of N has to be a member of the Panjer class, otherwise known as the (a,b,0) class of distributions. This class consists of all counting random variables which fulfill the relation p_k = (a + b/k) p_{k-1} for k ≥ 1, where p_k = P(N = k), for some a and b which fulfill a + b ≥ 0. The initial value p_0 is determined such that the p_k sum to 1. The Panjer recursion makes use of this iterative relationship to specify a recursive way of constructing the probability distribution of S. In the following, W_N(x) denotes the probability generating function of N; for this see the table in (a,b,0) class of distributions. In the case that the claim number is known in advance, note the De Pril algorithm, which is suitable for computing the sum distribution of discrete random variables. Recursion The algorithm now gives a recursion to compute the g_k = P(S = h k). The starting value is g_0 = W_N(f_0), with the special cases g_0 = p_0 exp(b f_0) if a = 0 and g_0 = p_0 / (1 - f_0 a)^(1 + b/a) if a ≠ 0, and one proceeds with g_k = (1 / (1 - f_0 a)) Σ_{j=1}^{k} (a + b j/k) f_j g_{k-j}. Example The following example shows the approximated density of a compound random variable S for given claim number and claim size distributions, computed on a lattice with a small lattice w
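As a concrete illustration, here is a minimal sketch of the recursion for the Poisson special case, where the (a, b, 0) parameters are a = 0 and b = λ; the function name and the sanity check against the Poisson pmf are our own:

```python
import math

def panjer_poisson(f, lam, n_max):
    """Panjer recursion with N ~ Poisson(lam), i.e. (a, b, 0) parameters
    a = 0, b = lam.  f[j] = P(X = j*h); returns g with g[k] = P(S = k*h)."""
    g = [0.0] * (n_max + 1)
    g[0] = math.exp(lam * (f[0] - 1.0))  # g_0 = W_N(f_0) for Poisson
    for k in range(1, n_max + 1):
        # With a = 0: g_k = (lam / k) * sum_{j=1}^{k} j * f_j * g_{k-j}
        g[k] = (lam / k) * sum(j * f[j] * g[k - j]
                               for j in range(1, min(k, len(f) - 1) + 1))
    return g

# Sanity check: if every claim has size 1*h with certainty, then S = N,
# so g_k must reproduce the Poisson(lam) pmf exactly.
g = panjer_poisson([0.0, 1.0], lam=2.0, n_max=6)
```

The same loop handles the general (a, b, 0) case by replacing the factor `lam / k * j` with `a + b * j / k` and dividing by `1 - a * f[0]`, as in the recursion above.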
https://en.wikipedia.org/wiki/Immunosenescence
Immunosenescence is the gradual deterioration of the immune system brought on by natural age advancement. A 2020 review concluded that the adaptive immune system is affected more than the innate immune system. Immunosenescence involves both the host's capacity to respond to infections and the development of long-term immune memory. Age-associated immune deficiency is found in both long- and short-lived species as a function of their age relative to life expectancy rather than elapsed time. It has been studied in animal models including mice, marsupials and monkeys. Immunosenescence is a contributory factor to the increased frequency of morbidity and mortality among the elderly. Along with anergy and T-cell exhaustion, immunosenescence is among the major dysfunctional states of the immune system. However, while T-cell anergy is a reversible condition, as of 2020 no techniques for reversing immunosenescence had been developed. Immunosenescence is not a random deteriorative phenomenon; rather, it appears to inversely recapitulate an evolutionary pattern. Most of the parameters affected by immunosenescence appear to be under genetic control. Immunosenescence can be envisaged as the result of the continuous challenge of unavoidable exposure to a variety of antigens such as viruses and bacteria. Age-associated decline in immune function Aging of the immune system is a controversial phenomenon. Senescence refers to replicative senescence from cell biology, which describes the condition in which the upper limit of cell divisions (the Hayflick limit) has been exceeded, and such cells undergo apoptosis or lose their functional properties. Immunosenescence, by contrast, generally means a robust shift in both structural and functional parameters that has a clinically relevant outcome. Thymus involution is probably the most relevant factor responsible for immunosenescence. Thymic involution is common in most mammals; in humans it begins after puberty, as the immunological defense against most nov
https://en.wikipedia.org/wiki/Lehmer%27s%20conjecture
Lehmer's conjecture, also known as the Lehmer's Mahler measure problem, is a problem in number theory raised by Derrick Henry Lehmer. The conjecture asserts that there is an absolute constant μ > 1 such that every polynomial P(x) with integer coefficients satisfies one of the following properties: The Mahler measure of P(x) is greater than or equal to μ. P(x) is an integral multiple of a product of cyclotomic polynomials or the monomial x, in which case the Mahler measure is 1. (Equivalently, every complex root of P(x) is a root of unity or zero.) There are a number of definitions of the Mahler measure, one of which is to factor P(x) over the complex numbers as P(x) = a(x - α_1)(x - α_2)...(x - α_D) and then set M(P(x)) = |a| Π_{i=1}^{D} max(1, |α_i|). The smallest known Mahler measure (greater than 1) is that of "Lehmer's polynomial" P(x) = x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1, for which the Mahler measure is the Salem number 1.176280818... It is widely believed that this example represents the true minimal value: that is, μ = 1.176280818... in Lehmer's conjecture. Motivation Consider the Mahler measure for one variable. Jensen's formula shows that if P(x) = a(x - α_1)...(x - α_D), then m(P) = log|a| + Σ_{i : |α_i| ≥ 1} log|α_i|. In this paragraph denote M(P) = exp(m(P)), which is also called the Mahler measure. If P has integer coefficients, this shows that M(P) is an algebraic number, so m(P) is the logarithm of an algebraic integer. It also shows that m(P) ≥ 0, and that if m(P) = 0 then P is a product of cyclotomic polynomials, i.e. monic polynomials all of whose roots are roots of unity, or a monomial polynomial of x, i.e. a power x^n for some n. Lehmer noticed that m(P) is an important value in the study of the integer sequences Δ_n = Π_i (α_i^n - 1) for monic P. If P does not vanish on the unit circle then lim |Δ_n|^{1/n} = M(P). If P does vanish on the circle but not at any root of unity, then the same convergence holds by Baker's theorem (in fact an earlier result of Gelfond is sufficient for this, as pointed out by Lind in connection with his study of quasihyperbolic toral automorphisms). As a result, Lehmer was led to ask whether there is a constant c > 1 such that M(P) > c provided P is not cyclotomic, or, given c > 1, whether there are P with integer coefficients for which 1 < M(P) < c. Some positive answers have been provided as follows, but Lehmer's conjecture is not yet completely solved.
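All roots of Lehmer's polynomial except one reciprocal pair lie on the unit circle, so its Mahler measure equals its single real root greater than 1, the Salem number quoted above. A pure-Python bisection (our own illustration, relying only on the polynomial given in the text) recovers that value numerically:

```python
def lehmer_poly(x):
    # Lehmer's polynomial: x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
    return x**10 + x**9 - x**7 - x**6 - x**5 - x**4 - x**3 + x + 1

def salem_root(lo=1.1, hi=1.3, iters=60):
    """Bisection for the real root > 1; lehmer_poly is negative at lo
    and positive at hi, so the root is bracketed throughout."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if lehmer_poly(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(salem_root(), 6))  # ~1.176281
```

Because the remaining roots have absolute value at most 1, the product |a| Π max(1, |α_i|) collapses to this one factor, which is why the bisection alone suffices here.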
https://en.wikipedia.org/wiki/Chiron%20FS
Chiron Filesystem is a fault-tolerant replication file system. Chiron FS is a FUSE-based filesystem that implements replication at the filesystem level, as RAID 1 does at the device level. The replicated filesystems may be of any kind; the only requirement is that they are mounted. There is no need for special configuration files; the setup is as simple as one mount command (or one line in fstab). There is no specific communication protocol: at mount time, the invoking parameters indicate two or more paths to directories which will be the replicated underlying filesystems (they must already be in sync). This allows the client to use any kind of underlying filesystem, such as Ext3, NFS or SSHFS, and even mix them. Every write in the Chiron FS mount point subtree is echoed to the underlying filesystems. Any read from the Chiron FS mount point subtree is made from only one of the underlying filesystems, chosen by a prioritized round-robin algorithm. If one or more underlying filesystems fail, the virtualized filesystem provided by Chiron FS continues operating as long as there is at least one replica available. In this case, the failures are reported to a log file. If the failure is on a write operation, the failed replica is disabled and is not used by Chiron FS until it is available and resynchronized with the others. Synchronization is not yet implemented in Chiron FS, so it must be done manually. If all replicas fail, the calling application receives the same error message it would receive if it were accessing a non-replicated filesystem. In this case there will be no log report. See also List of file systems External links Chiron FS web page Chiron FS repository Announcement list Discussion list Issues list
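The behaviour described above, echoing each write to every replica directory, serving each read from a single replica chosen round-robin, and disabling a replica whose write fails, can be modelled in a few lines. This is a toy illustration of the idea only, entirely our own and not Chiron FS code:

```python
import itertools
import os
import tempfile

class ReplicatedStore:
    """Toy model of replicate-on-write with round-robin reads."""

    def __init__(self, replica_dirs):
        self.replicas = list(replica_dirs)
        self._counter = itertools.count()

    def write(self, name, data):
        # Echo the write to every replica; disable a replica whose write fails.
        for d in list(self.replicas):
            try:
                with open(os.path.join(d, name), "w") as fh:
                    fh.write(data)
            except OSError:
                self.replicas.remove(d)

    def read(self, name):
        # Serve the read from a single replica, chosen round-robin.
        d = self.replicas[next(self._counter) % len(self.replicas)]
        with open(os.path.join(d, name)) as fh:
            return fh.read()

# Demo: two replica directories stay in sync and serve reads alternately.
r1, r2 = tempfile.mkdtemp(), tempfile.mkdtemp()
store = ReplicatedStore([r1, r2])
store.write("hello.txt", "hi")
contents = {store.read("hello.txt") for _ in range(2)}  # read from each replica
```

The real filesystem does this transparently at the FUSE layer for ordinary file operations, with prioritization and failure logging that this sketch omits.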
https://en.wikipedia.org/wiki/Bridge%20and%20torch%20problem
The bridge and torch problem (also known as The Midnight Train and Dangerous crossing) is a logic puzzle that deals with four people, a bridge and a torch. It is in the category of river crossing puzzles, where a number of objects must move across a river, with some constraints. Story Four people come to a river in the night. There is a narrow bridge, but it can only hold two people at a time. They have one torch and, because it's night, the torch has to be used when crossing the bridge. Person A can cross the bridge in 1 minute, B in 2 minutes, C in 5 minutes, and D in 8 minutes. When two people cross the bridge together, they must move at the slower person's pace. The question is, can they all get across the bridge if the torch lasts only 15 minutes? Solution An obvious first idea is that the cost of returning the torch to the people waiting to cross is an unavoidable expense which should be minimized. This strategy makes A the torch bearer, shuttling each person across the bridge. This strategy does not permit a crossing in 15 minutes. To find the correct solution, one must realize that forcing the two slowest people to cross individually wastes time which can be saved if they both cross together. A second, equivalent solution swaps the return trips. Basically, the two fastest people cross together on the 1st and 5th trips, the two slowest people cross together on the 3rd trip, EITHER of the fastest people returns on the 2nd trip, and the other fastest person returns on the 4th trip. Thus, for crossing times a ≤ b ≤ c ≤ d, the minimum time for four people is 2a + b + c + d when 2b ≥ a + c (the fastest person shuttles everyone across), and a + 3b + d when 2b ≤ a + c (the two slowest cross together); when 2b = a + c, both strategies tie. A semi-formal approach Assume that a solution minimizes the total number of crossings. This gives a total of five crossings: three pair crossings and two solo crossings. Also, assume we always choose the fastest person for the solo crossings. First, we show that if the two slowest persons (C and D) cross separately, they accumulate a total crossing time of 15. This is done by taking pe
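The two candidate strategies, and the closed-form minimum above, can be checked against an exhaustive search. The sketch below (our own illustration; distinct crossing times assumed) explores only pair-forward/single-back move sequences, which is sufficient to reach the optimum:

```python
from itertools import combinations

def min_formula(times):
    # Closed form: compare the "fastest shuttles" and "slowest pair" strategies.
    a, b, c, d = sorted(times)
    return min(2*a + b + c + d, a + 3*b + d)

def min_bruteforce(times):
    """Exhaustive search: a pair crosses forward, then (if anyone remains)
    one person on the far side brings the torch back."""
    best = [float("inf")]

    def go(start_side, elapsed):
        if elapsed >= best[0]:
            return                      # prune: already worse than best found
        if not start_side:
            best[0] = elapsed           # everyone has crossed
            return
        for pair in combinations(sorted(start_side), 2):
            remaining = start_side - set(pair)
            if not remaining:           # last pair crosses, no return needed
                go(remaining, elapsed + max(pair))
                continue
            far_side = set(times) - remaining
            for p in far_side:          # someone returns with the torch
                go(remaining | {p}, elapsed + max(pair) + p)

    go(frozenset(times), 0)
    return best[0]

print(min_formula((1, 2, 5, 8)), min_bruteforce((1, 2, 5, 8)))  # 15 15
```

With the article's times (1, 2, 5, 8) we have 2b = 4 < a + c = 6, so the formula selects a + 3b + d = 1 + 6 + 8 = 15, matching the brute-force search.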