https://en.wikipedia.org/wiki/Paraproctitis
Paraproctitis is a purulent inflammation of the cellular tissues surrounding the rectum. The most frequent cause is penetration of bacterial flora from the rectum into the surrounding cellular tissues, which may occur through an anal fissure. The inflammation is sometimes limited to the formation of an anorectal abscess, and in some cases it spreads for a considerable distance and may be complicated by sepsis. The symptoms are acute pain in the rectal region, tenderness during defecation, elevated body temperature, and the appearance of an infiltrate in the anal region or on the buttocks. An unlanced abscess may burst and a fistula form. The disease becomes chronic after recurrences. Treatment includes administration of antibiotics and anti-inflammatory agents and, in the suppurative stage, surgical lancing of any anorectal abscess.
https://en.wikipedia.org/wiki/Peg%20%2B%20Cat
Peg + Cat is an animated children's television series based on the children's picture book "The Chicken Problem", which was published in 2012. The series, which featured the voice acting of Hayley Faith Negrin and Dwayne Hill, was created by Billy Aronson and Jennifer Oxley and produced by Fred Rogers Productions and 9 Story Media Group. It debuted on most PBS stations on October 7, 2013, as part of the revamped PBS Kids brand, and aired 63 episodes through April 23, 2018. In Canada the show is broadcast on Treehouse TV. The show is targeted to children 3 to 5 years old. The goal is to "inspire preschoolers’ natural curiosity about math and help them develop new skills and strategies for solving problems creatively in their daily lives." In keeping with the math theme, the animation is presented as if it were drawn on graph paper. On March 3, 2015, PBS Kids renewed Peg + Cat for a second season, which started on April 4, 2016. On March 28, 2016, a one-hour two-part film aired on PBS Kids. This new film, titled Peg + Cat Save the World, focused on the duo being called upon by the President of the United States (voiced by Sandra Oh), to prevent a global disaster. On March 14, 2016, PBS Kids released the first part of the film on its YouTube channel. Characters Peg – The main character, together with her talking cat, Cat. She is a young girl who wears a red hat and blue dress. She explains the situation in each episode directly to the "camera", announces when they have "a big problem", and reasons out solutions to math-related problems. During songs, she pulls out a ukulele and plays it. She has a special blue marble hidden under her hat. Her favorite crayon is "little bluey". She is voiced by Hayley Faith Negrin. Cat – An indigo cat whose best friend is Peg. He loves circles and accompanies Peg on her adventures. Cat often inspires Peg to realize a solution to a problem without being aware of it himself. Often helps to calm Peg when she is "totally freaking out". H
https://en.wikipedia.org/wiki/SIN3A
Paired amphipathic helix protein Sin3a is a protein that in humans is encoded by the SIN3A gene. Function The protein encoded by this gene is a transcriptional regulatory protein. It contains paired amphipathic helix (PAH) domains, which are important for protein-protein interactions and may mediate repression by the Mad-Max complex. Interactions SIN3A has been shown to interact with: CABIN1 HBP1, HDAC1, HDAC9, Histone deacetylase 2, Host cell factor C1, IKZF1, ING1, KLF11, MNT, MXD1, Methyl-CpG-binding domain protein 2, Nuclear receptor co-repressor 2, OGT, PHF12, Promyelocytic leukemia protein, RBBP4, RBBP7, SAP130, SAP30, SMARCA2, SMARCA4, SMARCC1, SUDS3, TAL1, and Zinc finger and BTB domain-containing protein 16. See also Transcription coregulator
https://en.wikipedia.org/wiki/Tropical%20salt%20pond%20ecosystem
Salt ponds are a natural feature of both temperate and tropical coastlines. These ponds form a vital buffer zone between terrestrial and marine ecosystems. Contaminants such as sediment, nitrates and phosphates are filtered out by salt ponds before they can reach the ocean. The depth, salinity and overall chemistry of these dynamic salt ponds fluctuate depending on temperature, rainfall, and anthropogenic influences such as nutrient runoff. The flora and fauna of tropical salt ponds differ markedly from those of temperate ponds. Mangrove trees are the dominant vegetation of tropical salt pond ecosystems, which also serve as vital feeding and breeding grounds for shore birds. Formation and cycle Tropical salt ponds form as bays are gradually closed off with berms of rubble from the reef. Mangroves grow atop the berms, which gradually close off the area to create a salt pond. These typically form at the base of watersheds with steep slopes, as sediments transported during storm events begin to fill in and cover up the rubble berm. Mangroves may grow over the berm, also contributing to the isolation of the salt pond. Typically, the ponds communicate with the open sea through ground seepage. Evaporation and precipitation cycles in salt ponds create variable environments with wide ranges of salinity and depth. Due to fluctuations in depth and temperature, a salt pond may be classified as hyposaline (3–20 ppt), mesosaline (20–50 ppt), or hypersaline (more than 50 ppt). Another important aspect of salt ponds is their permanence. Salt ponds can eventually become filled in over time, and transition into an extension of the land. Some are intermittent ponds due to predictable dry and wet seasons while others are episodic (if the region has highly unpredictable weather). Flora and fauna Organisms typically found in and around tropical salt ponds include cyanobacteria, marine invertebrates, birds, algae and mangrove trees. For example, a typical Caribbean salt pond
https://en.wikipedia.org/wiki/Napkin%20psoriasis
Napkin psoriasis, or psoriasis in the diaper area, is characteristically seen in infants between two and eight months of age. See also Psoriasis Skin lesion
https://en.wikipedia.org/wiki/Food%20Safety%20and%20Inspection%20Service
The Food Safety and Inspection Service (FSIS), an agency of the United States Department of Agriculture (USDA), is the public health regulatory agency responsible for ensuring that the United States' commercial supply of meat, poultry, and egg products is safe, wholesome, and correctly labeled and packaged. The FSIS draws its authority from the Federal Meat Inspection Act of 1906, the Poultry Products Inspection Act of 1957 and the Egg Products Inspection Act of 1970. The FSIS also acts as a national health department and is responsible for the safety of public food-related establishments as well as business investigation. Food products under the jurisdiction of the FSIS, and thus subject to inspection, are those that contain more than 3% meat or 2% poultry products, with several exceptions, and egg products (liquid, frozen or dried). Shell eggs, meat and poultry products not under the jurisdiction of the FSIS are under the jurisdiction of the United States Food and Drug Administration (FDA). The FSIS is led by the Under Secretary of Agriculture for Food Safety. Overview More than 7,800 FSIS inspection program personnel are assigned to about 6,200 federal slaughter, food processing, and import establishments in the United States. They verify the processing of tens of billions of pounds of meat and poultry, and billions of pounds of egg products. At slaughter establishments, inspectors perform antemortem inspections to prevent slaughter of diseased animals. Then, postmortem examinations are performed to identify diseased carcasses not evident antemortem. Regulations for rapid chilling, adequate trimming, and sanitary washing are enforced to reduce microbial contamination. Samples are collected for residue testing to ensure antibiotic, pesticide and other residues are below regulatory limits. For cattle, tissue samples are tested for the presence of bovine spongiform encephalopathy. In processing plants, procedures and formulations are monitored to ensure that FSIS req
https://en.wikipedia.org/wiki/Instrument%20effect
The instrument effect is an issue in experimental methodology: any change in the measurement, or in the instrument itself, during a study may undermine the validity of the research. For example, in a control-group design experiment, if different instruments are used to measure the performance of the experimental group and the control group, a wrong conclusion about the experiment may be reached and the research result would be invalid.
https://en.wikipedia.org/wiki/Electron%20electric%20dipole%20moment
The electron electric dipole moment is an intrinsic property of an electron such that the potential energy is linearly related to the strength of the electric field: The electron's electric dipole moment (EDM) must be collinear with the direction of the electron's magnetic moment (spin). Within the Standard Model of elementary particle physics, such a dipole is predicted to be non-zero but very small, at most , where e stands for the elementary charge. The discovery of a substantially larger electron electric dipole moment would imply a violation of both parity invariance and time reversal invariance. Implications for Standard Model and extensions In the Standard Model, the electron EDM arises from the CP-violating components of the CKM matrix. The moment is very small because the CP violation involves quarks, not electrons directly, so it can only arise by quantum processes where virtual quarks are created, interact with the electron, and then are annihilated. If neutrinos are Majorana particles, a larger EDM (around ) is possible in the Standard Model. Many extensions to the Standard Model have been proposed in the past two decades. These extensions generally predict larger values for the electron EDM. For instance, the various technicolor models predict that ranges from 10⁻²⁷ to 10⁻²⁹ e⋅cm. Some supersymmetric models predict that but some other parameter choices or other supersymmetric models lead to smaller predicted values. The present experimental limit therefore eliminates some of these technicolor/supersymmetric theories, but not all. Further improvements, or a positive result, would place further limits on which theory takes precedence. Formal definition As the electron has a net charge, the definition of its electric dipole moment is ambiguous in that depends on the point about which the moment of the charge distribution is taken. If we were to choose to be the center of charge, then would be identically zero. A more interesting choice wou
https://en.wikipedia.org/wiki/Erythromycin/sulfafurazole
Erythromycin/sulfafurazole (trade name Pediazole) is a combination drug with the antibiotics erythromycin and sulfafurazole (the latter is also known as sulfisoxazole). It is indicated in acute otitis media in children, particularly when Hemophilus influenzae is the suspected pathogen.
https://en.wikipedia.org/wiki/Inforex%201300%20Systems
Inforex, Inc. manufactured and sold key-to-disk data entry systems from the 1970s to the mid-1980s. The company was founded by ex-IBM engineers to develop direct data entry systems that allowed information to be entered on terminals and stored directly on disk drives, replacing keypunch machines using punched cards or paper tape, which had been the dominant tools for data entry since the turn of the twentieth century. Background information Key-to-disk systems were systems that took data entered by users from keypunch-like keyboards and held the information on a hard disk. The information was then transferred from disk to 1/2-inch magnetic tape for processing on the user's mainframe computer. At the time, large-scale entry of data for processing on a mainframe computer was labor-intensive and expensive. For example, a typical sales order might go through the following steps: 1) Order written on contract, collected by the salesman. 2) Order transferred to paper order sheet (usually with multiple carbon copies), transcribed by the salesman or a secretary. 3) Order sheet, after verification and approval, passed to the Data Center for entry into the computer system for processing. 4) Order sheet entered by a keypunch operator onto cards for processing. 5) Order card(s) verified by a second keypunch operator by repeating the card-punching, to verify accuracy. 6) Order card read by computer. 7) Parts ordered, equipment purchased. The same tried and practised methods were used to bill the customer, record customer payments, and pay outgoing expenses. The advantage of key-to-disk systems over card punches was the ability to see the entire content of an 80-byte card on a monitor to edit and correct mistakes. The Inforex Key-to-Disk-to-Tape system allowed an operator to directly read, edit, and write back any single tape record directly onto the original 9 track output tape, in the record's original position on the tape, allowing keying errors to be corrected q
https://en.wikipedia.org/wiki/Treatise%20on%20Natural%20Philosophy
Treatise on Natural Philosophy was an 1867 textbook by William Thomson (later Lord Kelvin) and Peter Guthrie Tait, published by Oxford University Press. The Treatise was often referred to as T and T′, as explained by Alexander Macfarlane: Maxwell had facetiously referred to Thomson as T and Tait as T′. Hence the Treatise on Natural Philosophy came to be commonly referred to as T and T′ in conversation with mathematicians. Reception The first volume was received with an enthusiastic review in Saturday Review: The grand result of all concurrent research in modern times has been to confirm what was but perhaps a dream of genius, or an instinct of the keen Greek intellect, that all the operations of nature are rooted and grounded in number and figure. The Treatise was also reviewed as Elements of Natural Philosophy (1873). Thomson & Tait's Treatise on Natural Philosophy was reviewed by J. C. Maxwell in Nature of 3 July 1879, indicating the importance given to kinematics: "The guiding idea … is that geometry itself is part of the science of motion." In 1892 Karl Pearson noted that T and T′ perpetuated a "subjectivity of force" that originated with Newton. In 1902 Alexander Macfarlane ascribed much of the inspiration of the book to William Rankine's 1865 paper "Outlines of the Science of Energetics": The main object of Thomson and Tait's Treatise on Natural Philosophy was to fill up Rankine's outlines — to expound all branches of physics from the standpoint of the doctrine of energy. The plan contemplated four volumes; the printing of the first volume began in 1862 and was completed in 1867. The other three volumes never appeared. When a second edition was called for, the matter of the first volume was increased by a number of appendices and appeared as two separately bound parts. The volume which did appear, although judged rather difficult reading even by accomplished mathematicians, has achieved great success. It has been translated into French and German; it has educated the new
https://en.wikipedia.org/wiki/Fang%20Lizhi
Fang Lizhi (方励之, pinyin: Fāng Lìzhī; also Li-Zhi; February 12, 1936 – April 6, 2012) was a Chinese astrophysicist, vice-president of the University of Science and Technology of China, and activist whose liberal ideas inspired the pro-democracy student movement of 1986–87 and, finally, the Tiananmen Square protests of 1989. Because of his activism, he was expelled from the Chinese Communist Party in January 1987. For his work, Fang was a recipient of the Robert F. Kennedy Human Rights Award in 1989, given each year to an individual whose courageous activism is at the heart of the human rights movement and in the spirit of Robert F. Kennedy's vision and legacy. He was elected an academician of the Chinese Academy of Sciences in 1980, but his position was revoked after 1989. Life and career in China Fang was born on 12 February 1936 in Beijing. His father worked on the railway. In 1948, a year before the People's Liberation Army took over the city, as a student of the Beijing No. 4 High School, he joined an underground youth organization that was associated with the Chinese Communist Party (CCP). One of his extracurricular activities was assembling radio receivers from used parts. In 1952, he enrolled in the Physics Department at Peking University, where he met his future wife, Li Shuxian. Both Fang and Li were among the top students in their class. After graduating, he joined the CCP, started working at the Institute of Modern Physics and became involved in China's secret atomic bomb program, while Li stayed at Peking University as a junior faculty member. In 1957, during the Hundred Flowers Campaign, people were strongly encouraged by the CCP to openly express their opinions and criticisms. As party members, Li, Fang and another person in the physics department planned to write a letter to the party to offer their suggestions on education. This letter was still unfinished by the time the Hundred Flowers Campaign abruptly came to an end and the Anti-Rightist Cam
https://en.wikipedia.org/wiki/Large%20woody%20debris
Large woody debris (LWD) are the logs, sticks, branches, and other wood that fall into streams and rivers. This debris can influence the flow and the shape of the stream channel. Large woody debris, grains, and the shape of the bed of the stream are the three main providers of flow resistance, and are thus a major influence on the shape of the stream channel. Some stream channels have less LWD than they would naturally because of removal by watershed managers for flood control and aesthetic reasons. The study of woody debris is important for its forestry management implications. Plantation thinning can reduce the potential for recruitment of LWD into proximal streams. The presence of large woody debris is important in the formation of pools which serve as salmon habitat in the Pacific Northwest. Entrainment of the large woody debris in a stream can also cause erosion and scouring around and under the LWD. The amount of scouring and erosion is determined by the ratio of the diameter of the piece to the depth of the stream, and the embedding and orientation of the piece. Influence on stream flow around bends Large woody debris slow the flow through a bend in the stream, while accelerating flow in the constricted area downstream of the obstruction. See also Beaver dam Coarse woody debris Driftwood Log jam Stream restoration
https://en.wikipedia.org/wiki/Gradient%20echo
Gradient echo is a magnetic resonance imaging (MRI) sequence that has a wide variety of applications, from magnetic resonance angiography to perfusion MRI and diffusion MRI. Rapid imaging acquisition allows it to be applied to 2D and 3D MRI imaging. Gradient echo uses magnetic gradients to generate a signal, instead of the 180-degree radiofrequency pulse used in spin echo, leading to a faster image acquisition time. Mechanism Unlike a spin-echo sequence, a gradient echo sequence does not use a 180-degree RF pulse to make the spins of particles coherent. Instead, the gradient echo uses magnetic gradients to manipulate the spins, allowing the spins to dephase and rephase when required. After an excitation pulse (usually less than 90 degrees), the spins are dephased over a period of time (due to free induction decay) and also by the application of a reversed magnetic gradient. No signal is produced because the spins are not coherent. When the spins are rephased via a magnetic gradient, they become coherent, and thus a signal (or "echo") is generated to form images. Unlike spin echo, gradient echo does not need to wait for transverse magnetisation to decay completely before initiating another sequence, so it can use very short repetition times (TR) and therefore acquire images in a short time. After the echo is formed, some transverse magnetisation remains because of the short TR. Manipulating gradients during this time will produce images with different contrast. There are three main methods of manipulating contrast at this stage, namely steady-state free-precession (SSFP) that does not spoil the remaining transverse magnetisation, but attempts to recover them in subsequent RF pulses (thus producing T2-weighted images); the sequence with spoiler gradient that averages the transverse magnetisations in subsequent RF pulses by rotating residual transverse magnetisation into longitudinal plane and longitudinal magnetisation into transverse planes (thus producing
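For a spoiled gradient echo with a given short TR, the excitation flip angle that maximizes the steady-state signal is the Ernst angle, cos θ_E = exp(−TR/T1). A minimal sketch (the function names and the T1 value are illustrative, not from the article):

```python
import math

def ernst_angle_deg(tr_ms: float, t1_ms: float) -> float:
    """Flip angle (degrees) maximizing steady-state signal in a
    spoiled gradient echo: cos(theta_E) = exp(-TR/T1)."""
    return math.degrees(math.acos(math.exp(-tr_ms / t1_ms)))

def spgr_signal(flip_deg: float, tr_ms: float, t1_ms: float) -> float:
    """Relative steady-state spoiled gradient-echo signal
    (T2* decay at the echo time is ignored here)."""
    a = math.radians(flip_deg)
    e1 = math.exp(-tr_ms / t1_ms)
    return math.sin(a) * (1 - e1) / (1 - math.cos(a) * e1)

# Example: TR = 50 ms and an illustrative tissue T1 of 800 ms
theta = ernst_angle_deg(50, 800)   # roughly 20 degrees
```

This is why the article notes the excitation pulse is "usually less than 90 degrees": with a short TR, a small flip angle leaves more longitudinal magnetisation for the next repetition and actually yields more signal.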
https://en.wikipedia.org/wiki/Asynchronous%20procedure%20call
Asynchronous procedure call is a unit of work in a computer. Usually a program works by executing a series of synchronous procedure calls on some thread. But if some data are not ready (for example, the program is waiting for a user to reply), then keeping the thread in a wait state is impractical, as a thread allocates a considerable amount of memory for its procedure stack, and this memory goes unused. Instead, such a procedure call is formed as an object with a small amount of memory for input data, and this object is passed to the service which receives user input. When the user's reply is received, the service puts it in the object and passes that object to an execution service. The execution service consists of one or more dedicated worker threads and a queue of tasks. Each worker thread reads the task queue in a loop and, when a task is retrieved, executes it. When there are no tasks, the worker threads wait, so their memory is not used, and the number of worker threads can be kept small (there is no sense in having more threads than there are processors on the machine). So the life cycle of an asynchronous procedure call consists of two stages: a passive stage, when it waits for input data, and an active stage, when that data is processed in the same way as in a usual procedure call. The object of the asynchronous procedure call can be reused for subsequent procedure calls with new data received later. This allows computed output data to accumulate in that object, as is usually done in objects programmed in the object-oriented paradigm. Special care should be taken to avoid simultaneous execution of the same procedure call, in order to keep the computed data in a consistent state. Such a reusable asynchronous procedure is called an Actor. Programming using Actors is described in the Actor model and Dataflow programming. The difference is that an Actor in the Actor model has exactly two ports: one port to receive input data, and another (hidden) port to provide serial handling of input messages, while an Actor in D
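The execution service described above can be sketched in a few lines of Python: a task queue drained by a small pool of worker threads, with a lock guarding the shared output so that concurrent executions cannot leave it inconsistent. The class and function names here are illustrative, not any particular API:

```python
import queue
import threading

class ExecutionService:
    """A task queue plus a small pool of worker threads.
    Workers sleep while the queue is empty, so idle tasks cost no CPU."""

    def __init__(self, workers: int = 2):
        self.tasks = queue.Queue()
        for _ in range(workers):
            threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            func, arg = self.tasks.get()   # blocks until a task arrives
            func(arg)                      # active stage: execute the call
            self.tasks.task_done()

    def submit(self, func, arg):
        """Passive stage: the call is queued as an object holding its input."""
        self.tasks.put((func, arg))

# Shared output guarded by a lock, keeping accumulated data consistent.
results, lock = [], threading.Lock()

def handle(reply):
    with lock:
        results.append(reply.upper())

svc = ExecutionService()
for reply in ["yes", "no"]:          # e.g. user replies arriving later
    svc.submit(handle, reply)
svc.tasks.join()                     # wait until all queued calls finish
```

The `submit`/`join` pair mirrors the two stages in the text: the queued tuple is the passive object holding input data, and a worker thread supplies the stack only while the call is actually running.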
https://en.wikipedia.org/wiki/Linear%20amplifier
A linear amplifier is an electronic circuit whose output is proportional to its input, but capable of delivering more power into a load. The term usually refers to a type of radio-frequency (RF) power amplifier, some of which have output power measured in kilowatts, and are used in amateur radio. Other types of linear amplifier are used in audio and laboratory equipment. Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input. A linear amplifier responds to different frequency components independently, and tends not to generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity, however, because the amplifying devices—transistors or vacuum tubes—follow nonlinear transfer functions; designers rely on circuit techniques to reduce those effects. There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy. Explanation Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input, generally at increased power levels. Load impedance, supply voltage, input base current, and power output capabilities can affect the efficiency of the amplifier. Class-A amplifiers can be designed to have good linearity in both single-ended and push-pull topologies. Amplifiers of classes AB1, AB2 and B can be linear only when a tuned tank circuit is employed, or in the push-pull topology, in which two active elements (tubes, transistors) are used to amplify positive and negative parts of the RF cycle respectively. Class-C amplifiers are not linear in any topology. Amplifier classes There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy. Their use in RF applications is listed briefly below: Class-A amplifiers are very inefficient; they can never have an efficiency better than 50%. The semiconductor or vacuum tube cond
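The link between a nonlinear transfer function and harmonic distortion can be shown numerically. The sketch below compares an ideal linear gain with a compressive (soft-clipping) transfer curve; the cubic curve is a generic illustration, not any particular device model:

```python
import math

def harmonic_amp(transfer, k, n=2048):
    """Magnitude of the k-th harmonic at the amplifier output for a
    unit-amplitude sine input, via numerical Fourier correlation."""
    re = im = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        y = transfer(math.sin(t))
        re += y * math.cos(k * t)
        im += y * math.sin(k * t)
    return 2 * math.hypot(re, im) / n

linear = lambda x: 10 * x                  # ideal linear gain of 10
soft_clip = lambda x: 10 * (x - x**3 / 6)  # gain compresses near the peaks

h3_linear = harmonic_amp(linear, 3)    # essentially zero
h3_clipped = harmonic_amp(soft_clip, 3)  # nonzero third harmonic
```

The linear amplifier puts out only the input frequency, while the compressive curve generates a third harmonic (for this cubic, sin³t expands to (3 sin t − sin 3t)/4, giving a 3rd-harmonic amplitude of 10/24). This is exactly the distortion that tank circuits or push-pull topologies must suppress in the less linear amplifier classes.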
https://en.wikipedia.org/wiki/Comparison%20of%20video%20codecs
A video codec is software or a device that provides encoding and decoding for digital video, and which may or may not include the use of video compression and/or decompression. Most codecs are typically implementations of video coding formats. The compression may employ lossy data compression, so that quality-measurement issues become important. Shortly after the compact disc became widely available as a digital-format replacement for analog audio, it became feasible to also store and use video in digital form. A variety of technologies soon emerged to do so. The primary goal for most methods of compressing video is to produce video that most closely approximates the fidelity of the original source, while simultaneously delivering the smallest file-size possible. However, there are also several other factors that can be used as a basis for comparison. Introduction to comparison The following characteristics are compared in video codec comparisons: Video quality per bitrate (or range of bitrates). Video quality is commonly considered the main characteristic in codec comparisons. Video quality comparisons can be subjective or objective. Performance characteristics such as compression/decompression speed, supported profiles/options, supported resolutions, supported rate control strategies, etc. General software characteristics, for example: Manufacturer Supported OS (Linux, macOS, Windows) Version number Date of release Type of license (commercial, free, open source) Supported interfaces (VfW, DirectShow, etc.) Price (value for money, volume discounts, etc.) Video quality The quality the codec can achieve is heavily based on the compression format the codec uses. A codec is not a format, and there may be multiple codecs that implement the same compression specification. For example, MPEG-1 codecs typically do not achieve a quality/size ratio comparable to codecs that implement the more modern H.264 specification. But quality/size ratio of output produced b
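Objective video-quality comparisons of the kind listed above often score each codec by the peak signal-to-noise ratio (PSNR) between an original frame and its decoded reconstruction. A minimal sketch (the toy 4-pixel "frames" and codec names are purely illustrative):

```python
import math

def psnr(original, decoded, max_val=255):
    """Peak signal-to-noise ratio in dB between two equally sized
    frames, given as flat sequences of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")   # identical frames: lossless reconstruction
    return 10 * math.log10(max_val ** 2 / mse)

src = [100, 120, 130, 140]
codec_a = [101, 119, 131, 139]   # off by 1 everywhere -> small MSE
codec_b = [110, 110, 140, 130]   # off by 10 everywhere -> large MSE

score_a = psnr(src, codec_a)     # higher PSNR, closer to the source
score_b = psnr(src, codec_b)
```

Comparing PSNR at equal bitrates is one simple way to make the "video quality per bitrate" criterion objective, though perceptual metrics are also used since PSNR does not always track subjective quality.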
https://en.wikipedia.org/wiki/Turn%20restriction%20routing
A routing algorithm decides the path followed by a packet from the source router to the destination router in a network. An important aspect to be considered while designing a routing algorithm is avoiding a deadlock. Turn restriction routing is a routing algorithm for the mesh family of topologies which avoids deadlocks by restricting the types of turns that are allowed while determining the route from the source node to the destination node in a network. Reason for deadlock A deadlock (shown in fig 1) is a situation in which no further transportation of packets can take place due to the saturation of network resources like buffers or links. The main reason for a deadlock is the cyclic acquisition of channels in the network. For example, consider a network with four channels. Four packets have filled up the input buffers of these four channels and need to be forwarded to the next channel. Now assume that the output buffers of all these channels are also filled with packets that need to be transmitted to the next channel. If these four channels form a cycle, it is impossible to transmit packets any further because the output buffers and input buffers of all channels are already full. This is known as cyclic acquisition of channels, and it results in a deadlock. Solution to deadlock Deadlocks can either be detected and broken, or avoided altogether. Detecting and breaking deadlocks in the network is expensive in terms of latency and resources. So an easy and inexpensive solution is to avoid deadlocks by choosing routing techniques that prevent cyclic acquisition of channels. Logic behind turn restriction routing The logic behind turn restriction routing derives from a key observation. A cyclic acquisition of channels can take place only if all the four possible clockwise (or anti-clockwise) turns have occurred. This means deadlocks can be avoided by prohibiting at least one of the clockwise turns and one of the anti-clockwise turns. All the clockwis
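One well-known instance of turn restriction is dimension-order (XY) routing in a 2D mesh: a packet first travels fully along the X dimension, then along Y, so every Y-to-X turn is prohibited and no channel cycle can form. A minimal sketch (nodes as (x, y) grid coordinates; the helper names are illustrative):

```python
def route_xy(src, dst):
    """Dimension-order (XY) routing in a 2D mesh: resolve the X offset
    first, then the Y offset. Y-to-X turns never occur on the path."""
    x, y = src
    path = [src]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

def has_y_to_x_turn(path):
    """Detect the turns that XY routing prohibits."""
    moves = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(path, path[1:])]
    return any(m1[1] != 0 and m2[0] != 0 for m1, m2 in zip(moves, moves[1:]))

p = route_xy((0, 3), (2, 0))   # east twice, then south three times
```

Because every packet makes at most one X-to-Y turn and never the reverse, the four turns needed to close a clockwise or anti-clockwise cycle can never all occur, which is precisely the observation the article describes.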
https://en.wikipedia.org/wiki/Safety%20of%20magnetic%20resonance%20imaging
Magnetic resonance imaging (MRI) is in general a safe technique, although injuries may occur as a result of failed safety procedures or human error. During the last 150 years, thousands of papers focusing on the effects or side effects of magnetic or radiofrequency fields have been published. These effects can be categorized as incidental and physiological. Contraindications to MRI include most cochlear implants and cardiac pacemakers, shrapnel and metallic foreign bodies in the eyes. The safety of MRI during the first trimester of pregnancy is uncertain, but it may be preferable to other options. Since MRI does not use any ionizing radiation, its use generally is favored in preference to CT when either modality could yield the same information. (In certain cases, MRI is not preferred as it may be more expensive, time-consuming and claustrophobia-exacerbating.) Structure and certification In an effort to standardize the roles and responsibilities of MRI professionals, an international consensus document, written and endorsed by major MRI and medical physics professional societies from around the globe, has been published formally. The document outlines specific responsibilities for the following positions: MR Medical Director / Research Director (MRMD) – This individual is the supervising physician who has oversight responsibility for the safe use of MRI services. MR Safety Officer (MRSO) – Roughly analogous to a radiation safety officer, the MRSO acts on behalf of, and on the instruction of, the MRMD to execute safety procedures and practices at the point of care. MR Safety Expert (MRSE) – This individual serves in a consulting role to both the MRMD and MRSO, assisting in the investigation of safety questions that may include the need for extrapolation, interpolation, or quantification to approximate the risk of a specific study. The American Board of Magnetic Resonance Safety (ABMRS) provides testing and board certification for each of the three positions, MRMD, MRSO,
https://en.wikipedia.org/wiki/Sedleian%20Professor%20of%20Natural%20Philosophy
The Sedleian Professor of Natural Philosophy is the name of a chair at the Mathematical Institute of the University of Oxford. Overview The Sedleian Chair was founded by Sir William Sedley who, by his will dated 20 October 1618, left the sum of £2,000 to the University of Oxford for purchase of lands for its endowment. Sedley's bequest took effect in 1621 with the purchase of an estate at Waddesdon in Buckinghamshire to produce the necessary income. It is regarded as the oldest of Oxford's scientific chairs. Holders of the Sedleian Professorship have, since the mid 19th century, worked in a range of areas of applied mathematics and mathematical physics. They are simultaneously elected to fellowships at Queen's College, Oxford. The Sedleian Professors in the past century have been Augustus Love (1899-1940), who was distinguished for his work in the mathematical theory of elasticity, Sydney Chapman (1946-1953), who is renowned for his contributions to the kinetic theory of gases and solar-terrestrial physics, George Temple (1953-1968), who made significant contributions to mathematical physics and the theory of generalized functions, Brooke Benjamin (1979-1995), who did highly influential work in the areas of mathematical analysis and fluid mechanics, and Sir John Ball (1996-2019), who is distinguished for his work in the mathematical theory of elasticity, materials science, the calculus of variations, and infinite-dimensional dynamical systems. List of Sedleian Professors Notes
https://en.wikipedia.org/wiki/Biryukov%20equation
In the study of dynamical systems, the Biryukov equation (or Biryukov oscillator), named after Vadim Biryukov (1946), is a non-linear second-order differential equation used to model damped oscillators. The equation is given by $$\frac{d^2 y}{dt^2} + f(y)\,\frac{dy}{dt} + y = 0, \qquad (1)$$ where $f(y)$ is a piecewise constant function which is positive except for small $|y|$. Eq. (1) is a special case of the Liénard equation; it describes auto-oscillations. The solution of (1) on each separate time interval where $f(y)$ is constant is given by $$y_k(t) = A_k e^{s_1 t} + B_k e^{s_2 t}, \qquad (2)$$ where $e$ denotes the exponential function and $s_{1,2}$ are the roots of the characteristic equation $s^2 + f s + 1 = 0$, that is, $$s_{1,2} = \frac{-f \pm \sqrt{f^2 - 4}}{2}.$$ Expression (2) can be used for real and complex values of $s_{1,2}$. The first half-period's solution and the second half-period's solution each take the form (2) with their own pair of constants. The solution thus contains four constants of integration; these, together with the period $T$ and the boundary between the constant-damping intervals, need to be found. A boundary condition is derived from the continuity of $y(t)$ and $dy/dt$. The solution of (1) in the stationary mode is thus obtained by solving a system of algebraic equations; the integration constants are obtained by the Levenberg–Marquardt algorithm. With $f(y) = \mu (y^2 - 1)$, $\mu = \mathrm{const} > 0$, Eq. (1) is the Van der Pol oscillator, whose solution cannot be expressed by elementary functions in closed form.
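The piecewise-constant damping makes an oscillator of this kind easy to integrate numerically interval by interval. The Python sketch below uses illustrative parameter values not taken from the text (a damping magnitude of 1.5 and a switching threshold of |y| = 1): it integrates y'' + f(y) y' + y = 0 with a classical Runge–Kutta step, and a small initial displacement grows (negative damping near the origin) into a bounded self-sustained oscillation (positive damping for large |y|).

```python
# Numerical sketch of a Biryukov-type oscillator
#   y'' + f(y) y' + y = 0
# with piecewise constant damping f(y): negative (energy input) for
# small |y|, positive (dissipation) otherwise. Parameter values are
# illustrative, not taken from the article.

def f(y, eps=1.5, threshold=1.0):
    """Piecewise constant damping coefficient."""
    return -eps if abs(y) < threshold else eps

def step_rk4(y, v, dt):
    """One classical Runge-Kutta step for the first-order system
    y' = v, v' = -f(y) v - y."""
    def deriv(y, v):
        return v, -f(y) * v - y
    k1y, k1v = deriv(y, v)
    k2y, k2v = deriv(y + 0.5 * dt * k1y, v + 0.5 * dt * k1v)
    k3y, k3v = deriv(y + 0.5 * dt * k2y, v + 0.5 * dt * k2v)
    k4y, k4v = deriv(y + dt * k3y, v + dt * k3v)
    y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y, v

def simulate(y0=0.1, v0=0.0, dt=0.01, steps=20000):
    y, v = y0, v0
    traj = []
    for _ in range(steps):
        y, v = step_rk4(y, v, dt)
        traj.append(y)
    return traj

traj = simulate()
# The oscillation settles into a bounded limit cycle whose amplitude
# exceeds the switching threshold:
print(max(abs(y) for y in traj[-5000:]))
```

The interval-by-interval analytic solution (2) could replace the numerical stepping here; RK4 is used only to keep the sketch short.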
https://en.wikipedia.org/wiki/Posthitis
Posthitis is the inflammation of the foreskin (prepuce) of the penis. It is characterised by swelling and redness on the skin and it may be accompanied by a malodorous discharge. The term posthitis comes from the Greek "posthe", meaning foreskin, and "-itis", meaning inflammation. Causes Posthitis can have infectious causes such as bacteria or fungi, or non-infectious causes such as contact dermatitis or psoriasis. The inflammation may be caused by irritants in the environment. Common causative organisms include candida, chlamydia, and gonorrhea. The cause must be properly diagnosed before a treatment can be prescribed. A common risk factor is diabetes. Posthitis can lead to phimosis, the tightening of the foreskin which makes it difficult to retract over the glans. Posthitis can also lead to superficial ulcerations and diseases of the inguinal lymph nodes. Prevention Hygiene, in particular the regular cleaning of the glans, is generally considered sufficient to prevent infection and inflammation of the foreskin. Full retraction of the foreskin may not be possible in boys younger than about ten years and some may not be able to fully retract their foreskin for cleaning until their late teens. Treatment If contact dermatitis is suspected, soaps and other external irritants should be discontinued and a latex allergy should be investigated. The treatment depends on identification of the cause. Irritants in the environment should be removed. Antibiotics and antifungals can be used to treat the infection, but good hygiene such as keeping the area dry is essential to stop recurrence, however excessive washing with soap can cause contact dermatitis. If infection is sexually transmitted, sexual partners should be notified and treated. Posthitis and balanitis (inflammation of the glans penis) usually occur together as balanoposthitis. Circumcision can prevent balanoposthitis, though balanitis can still occur separately.
https://en.wikipedia.org/wiki/David%20Easley
David Alan Easley (born 1950s) is an American economist. Easley is the Henry Scarborough Professor of Social Science and is a professor of information science at Cornell University. He was previously an overseas fellow of Churchill College at Cambridge University. His research is in the fields of economics, finance and decision theory. In economics, he focuses on learning, wealth dynamics and natural selection in markets. In finance, his work focuses on market microstructure and asset pricing. In decision theory, he works on modeling decision making in complex environments. In networks, he works on network formation and trading networks with colleagues in the computer science department at Cornell. He is a fellow of the Econometric Society and is a chair of the NASDAQ-OMX economic advisory board. Several of his scientific papers are listed among the most read in finance, according to the Social Science Research Network. Easley earned his Ph.D. from Northwestern University in 1979. Notable publications The Postal Savings System in the Depression, with Maureen O'Hara, Journal of Economic History, Vol. 39, No. 3, September, 1979. Stochastic Equilibrium and Optimality with Rolling Plans, with Daniel Spulber, International Economic Review, Vol. 22, No. 1, February, 1981. Learning to be Rational, with Lawrence Blume, Journal of Economic Theory, Vol. 26, No. 2, April, 1982. Introduction to the Stability of Rational Expectations Equilibrium, with Lawrence Blume and Margaret Bray, Journal of Economic Theory, Vol. 26, No. 2, April, 1982. Characterization of Optimal Plans for Stochastic Dynamic Programs, with Lawrence Blume and Maureen O'Hara, Journal of Economic Theory, Vol. 28, No. 2, December, 1982. Consensus Beliefs Equilibrium and Market Efficiency, with Robert Jarrow, Journal of Finance, Vol. 38, No. 3, June, 1983. The Economic Role of the Nonprofit Firm, with Maureen O'Hara, Bell Journal of Economics, Autumn, 1983. Rational Expectations Equilibrium: An Alternative A
https://en.wikipedia.org/wiki/Model%20transformation%20language
A model transformation language in systems and software engineering is a language intended specifically for model transformation. Overview The notion of model transformation is central to model-driven development. A model transformation, which is essentially a program which operates on models, can be written in a general-purpose programming language, such as Java. However, special-purpose model transformation languages can offer advantages, such as syntax that makes it easy to refer to model elements. For writing bidirectional model transformations, which maintain consistency between two or more models, a specialist bidirectional model transformation language is particularly important, because it can help avoid the duplication that would result from writing each direction of the transformation separately. Currently, most model transformation languages are being developed in academia. The OMG has standardised a family of model transformation languages called QVT, but the field is still immature. There are ongoing debates regarding the benefits of specialised model transformation languages, compared to the use of general-purpose programming languages (GPLs) such as Java. While GPLs have advantages in terms of more widely-available practitioner knowledge and tool support, the specialised transformation languages do provide more declarative facilities and more powerful specialised features to support model transformations. Available transformation languages ATL : a transformation language developed by the INRIA Beanbag (see ) : an operation-based language for establishing consistency over data incrementally GReAT : a transformation language available in the GME Epsilon family (see ) : a model management platform that provides transformation languages for model-to-model, model-to-text, update-in-place, migration and model merging transformations. F-Alloy : a DSL reusing part of the Alloy syntax and allowing the concise specification of efficiently computable m
https://en.wikipedia.org/wiki/Economic%20voting
In political science, economic voting is a theoretical perspective which argues that voter behavior is heavily influenced by the economic conditions in their country at the time of the election. According to the classical form of this perspective, voters tend to vote more in favor of the incumbent candidate and party when the economy is doing well than when it is doing poorly. This view has been supported by considerable empirical evidence. There is a substantial literature which shows that across the world's democracies, economic conditions shape electoral outcomes. Economic voting is less likely when it is harder for voters to attribute economic performance to specific parties and candidates. Research on economic voting combines the disciplines of political science and economics using econometric techniques. Economic voting has been divided into several categories, including pocketbook voting (based on individual concerns) versus sociotropic voting (based on the economy at large), as well as retrospective voting (based on previous economic trends) versus prospective voting (based on expected future economic trends). Research conducted in the United States has indicated that, in presidential elections, American voters tend to be sociotropic and retrospective. However, when the incumbent candidate in a United States presidential election is not running, economic voter choice tends to be overwhelmingly prospective. One of the most prominent expressions of the economic voting perspective came when James Carville, the chief strategist for Bill Clinton's 1992 presidential campaign, placed a sign in the campaign office reading "It's the economy, stupid!". Research shows in the United States that voters punish the president's party in presidential, Senate, House, gubernatorial and state legislative elections when the local economy is doing poorly. There is empirical evidence to show that in India a positive relationship between economic growth and overall re-election p
https://en.wikipedia.org/wiki/Opioidergic
An opioidergic agent (or drug) is a chemical which functions to directly modulate the opioid neuropeptide systems (i.e., endorphin, enkephalin, dynorphin, nociceptin) in the body or brain. Examples include opioid analgesics such as morphine and opioid antagonists such as naloxone. Opioidergics also comprise allosteric modulators and enzyme affecting agents like enkephalinase inhibitors. Allosteric modulators BMS-986121: μ-PAM BMS-986122: μ-PAM Ignavine Oxytocin: μ-PAM δ-PAM (see reference) Cannabidiol Tetrahydrocannabinol Sodium (Na+) See also List of opioids Adenosinergic Adrenergic Cannabinoidergic Cholinergic Dopaminergic GABAergic Glycinergic Histaminergic Melatonergic Monoaminergic Serotonergic
https://en.wikipedia.org/wiki/Offset%20dish%20antenna
An offset dish antenna or off-axis dish antenna is a type of parabolic antenna. It is so called because the antenna feed is offset to the side of the reflector, in contrast to the common "front-feed" parabolic antenna where the feed antenna is suspended in front of the dish, on its axis. As in a front-fed parabolic dish, the feed is located at the focal point of the reflector, but the reflector is an asymmetric segment of a paraboloid, so the focus is located to the side. The purpose of this design is to move the feed antenna and its supports out of the path of the incoming radio waves. In an ordinary front-fed dish antenna, the feed structure and its supports are located in the path of the incoming beam of radio waves, partially obstructing them, casting a "shadow" on the dish, reducing the radio power received. In technical terms this reduces the aperture efficiency of the antenna, reducing its gain. In the offset design, the feed is positioned outside the area of the beam, usually below it on a boom sticking out from the bottom edge of the dish. The beam axis of the antenna, the axis of the incoming or outgoing radio waves, is skewed at an angle to the plane of the dish mouth. The design is most widely used for small parabolic antennas or "mini-dishes", such as common home satellite television dishes, where the feed structure is large enough in relation to the dish to block a significant proportion of the signal. Another application is on satellites, particularly the direct broadcast satellites which use parabolic dishes to beam television signals to homes on Earth. Because of the limited transmitter power provided by their solar cells, satellite antennas must function as efficiently as possible. The offset design is also widely used in radar antennas. These must collect as much signal as possible in order to detect faint return signals from faraway targets. Offset dish antennas are more difficult to design than front-fed antennas because the di
https://en.wikipedia.org/wiki/ALL-IN-1
ALL-IN-1 was an office automation product developed and sold by Digital Equipment Corporation in the 1980s. It was one of the first off-the-shelf electronic mail products available for purchase. It was later known as Office Server V3.2 for OpenVMS Alpha and OpenVMS VAX systems before being discontinued. Overview ALL-IN-1 was advertised as an office automation system including functionality in Electronic Messaging, Word Processing and Time Management. It offered an application development platform and customization capabilities that ranged from scripting to code-level integration. ALL-IN-1 was designed and developed by Skip Walter, John Churin and Marty Skinner of Digital Equipment Corporation, who began work in 1977. Sheila Chance was hired as the software engineering manager in 1981. The first version of the software, called CP/OSS, the Charlotte Package of Office System Services, named after the location of the developers, was released in May 1982. In 1983, the product was renamed ALL-IN-1 and the Charlotte group continued to develop versions 1.1 through 1.3. Digital then made the decision to move most of the development activity to its central engineering facility in Reading, United Kingdom, where a group took responsibility for the product from version 2.0 (released in field test in 1984 and to customers in 1985) onward. The Charlotte group continued to work on the Time Management subsystem until version 2.3, and other contributions were made from groups based in Sophia Antipolis, France (System for Customization Management and the integration with VAX Notes), Reading (Message Router and MAILbus), and Nashua, New Hampshire (FMS). ALL-IN-1 V3.0 introduced shared file cabinets and the File Cabinet Server (FCS) to lay the foundation for an eventual integration with TeamLinks, Digital's PC office client. Previous integrations with PCs included PC ALL-IN-1, a DOS-based product introduced in 1989 that never proved popular with customers. Bob Wyman was the first produc
https://en.wikipedia.org/wiki/Wound%20response%20in%20plants
Plants are constantly exposed to different stresses that result in wounding. Plants have adapted to defend themselves against wounding events, like herbivore attacks or environmental stresses. There are many defense mechanisms that plants rely on to help fight off pathogens and subsequent infections. Wounding responses can be local, like the deposition of callose, while others are systemic, involving a variety of hormones like jasmonic acid and abscisic acid. Overview There are many forms of defense that plants use to respond to wounding events. Some plants deploy physical defense mechanisms through structural components like lignin and the cuticle. The structure of the plant cell wall is also important for wound responses, as it protects the plant from pathogenic infection by preventing various molecules from entering the cell. Plants are capable of activating innate immunity by responding to wounding events with damage-associated molecular patterns (DAMPs). Additionally, plants rely on microbe-associated molecular patterns (MAMPs) to defend themselves upon sensing a wounding event. There are examples of both rapid and delayed wound responses, depending on where the damage took place. MAMPs/DAMPs & Signaling Pathways Plants have pattern recognition receptors (PRRs) that recognize MAMPs, or microbe-associated molecular patterns. Upon entry of a pathogen, plants are vulnerable to infection and lose a fair amount of nutrients to said pathogen. The constitutive defenses are the physical barriers of the plant, including the cuticle, or even metabolites that are toxic and deter herbivores. Plants maintain an ability to sense when they have an injured area and induce a defensive response. Within wounded tissues, endogenous molecules are released and act as damage-associated molecular patterns (DAMPs), inducing a defensive response. DAMPs are typically caused by insects that feed off the plant. Such responses to wounds are found at th
https://en.wikipedia.org/wiki/Lightening%20holes
Lightening holes are holes in structural components of machines and buildings used by a variety of engineering disciplines to make structures lighter. The edges of the hole may be flanged to increase the rigidity and strength of the component. The holes can be circular, triangular, elliptical, or rectangular and should have rounded edges, but they should never have sharp corners, to avoid the risk of stress risers, and they must not be too close to the edge of a structural component. Usage Aviation Lightening holes are often used in the aviation industry. This allows the aircraft to be as lightweight as possible while retaining the durability and airworthiness of its structure. Maritime Lightening holes have also been used in marine engineering to increase the seaworthiness of a vessel. Motorsports Lightening holes became a prominent feature of motor racing in the 1920s and 1930s. Chassis members, suspension components, engine housings and even connecting rods were drilled with a range of holes, of sizes almost as large as the component. Military Lightening holes have been used in various military vehicles, aircraft, equipment and weaponry platforms. This allows equipment to be lighter in weight while retaining ruggedness and durability. They are usually made by drilling, press stamping or machining, and their use can also save strategic materials and cost during wartime production. Architecture Lightening holes have been used in various architectural designs. During the 1980s and early 1990s, lightening holes were fashionable, seen as somewhat futuristic, and were used in the likes of industrial units, car showrooms, shopping precincts, sports centres, etc. Parsons House in London is a notable building that has used lightening holes since its renovation in 1988. Ringwood Health & Leisure Centre in Hampshire is another notable example. See also Honeycomb structure Hollow structural section Isogrid Truss
https://en.wikipedia.org/wiki/Discrete%20Mathematics%20%26%20Theoretical%20Computer%20Science
Discrete Mathematics & Theoretical Computer Science is a peer-reviewed open access scientific journal covering discrete mathematics and theoretical computer science. It was established in 1997 by Daniel Krob (Paris Diderot University). Since 2001, the editor-in-chief has been Jens Gustedt (Institut National de Recherche en Informatique et en Automatique). Abstracting and indexing The journal is abstracted and indexed in Mathematical Reviews and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2011 impact factor of 0.465.
https://en.wikipedia.org/wiki/Macro%20domain
In molecular biology, the Macro domain (often also written macrodomain) or A1pp domain is a module of about 180 amino acids which can bind ADP-ribose, an NAD metabolite, or related ligands. Binding to ADP-ribose can be either covalent or non-covalent: in certain cases it is believed to bind non-covalently, while in other cases (such as Aprataxin) it appears to bind both non-covalently through a zinc finger motif, and covalently through a separate region of the protein. Function The domain was described originally in association with the ADP-ribose 1-phosphate (Appr-1-P)-processing activity (A1pp) of the yeast YBR022W protein and called A1pp. However, the domain has been renamed Macro as it is the C-terminal domain of mammalian core histone macro-H2A. Macro domain proteins can be found in eukaryotes, in (mostly pathogenic) bacteria, in archaea and in ssRNA viruses, such as coronaviruses, Rubella and Hepatitis E viruses. In vertebrates the domain occurs in e.g. histone macroH2A, predicted poly-ADP-ribose polymerases (PARPs) and B aggressive lymphoma (BAL) protein. ADP-ribosylation of proteins is an important post-translational modification that occurs in a variety of biological processes, including DNA repair, regulation of transcription, chromatin biology, maintenance of genomic stability, telomere dynamics, cell differentiation and proliferation, necrosis and apoptosis, and long-term memory formation. The Macro domain recognises the ADP-ribose nucleotide and in some cases poly-ADP-ribose, and is thus a high-affinity ADP-ribose-binding module found in a number of otherwise unrelated proteins. ADP-ribosylation of DNA is relatively uncommon and has only been described for a small number of toxins that include pierisin, scabin and DarT. The Macro domain from the antitoxin DarG of the toxin-antitoxin system DarTG, both binds and removes the ADP-ribose modification added to DNA by the toxin DarT. The Macro domain from human, macroH2A1.1, binds an NAD metabolite O-acety
https://en.wikipedia.org/wiki/Negation%20as%20failure
Negation as failure (NAF, for short) is a non-monotonic inference rule in logic programming, used to derive $\mathrm{not}\,p$ (i.e. that $p$ is assumed not to hold) from failure to derive $p$. Note that $\mathrm{not}\,p$ can be different from the statement $\neg p$ of the logical negation of $p$, depending on the completeness of the inference algorithm and thus also on the formal logic system. Negation as failure has been an important feature of logic programming since the earliest days of both Planner and Prolog. In Prolog, it is usually implemented using Prolog's extralogical constructs. More generally, this kind of negation is known as weak negation, in contrast with the strong (i.e. explicit, provable) negation. Planner semantics In Planner, negation as failure could be implemented as follows: if (not (goal p)), then (assert ¬p) which says that if an exhaustive search to prove p fails, then assert ¬p. This states that proposition p shall be assumed as "not true" in any subsequent processing. However, Planner not being based on a logical model, a logical interpretation of the preceding remains obscure. Prolog semantics In pure Prolog, NAF literals of the form $\mathrm{not}\,p$ can occur in the body of clauses and can be used to derive other NAF literals. For example, given only the four clauses $p \leftarrow q \wedge \mathrm{not}\,r$, $q \leftarrow s$, $q \leftarrow t$, $t \leftarrow$, NAF derives $\mathrm{not}\,s$ and $\mathrm{not}\,r$, as well as $p$, $q$ and $t$. Completion semantics The semantics of NAF remained an open issue until 1978, when Keith Clark showed that it is correct with respect to the completion of the logic program, where, loosely speaking, "only" and "$\leftarrow$" are interpreted as "if and only if", written as "iff" or "$\equiv$". For example, the completion of the four clauses above is $p \leftrightarrow q \wedge \mathrm{not}\,r$, $q \leftrightarrow s \vee t$, $t \leftrightarrow \mathrm{true}$, $r \leftrightarrow \mathrm{false}$, $s \leftrightarrow \mathrm{false}$. The NAF inference rule simulates reasoning explicitly with the completion, where both sides of the equivalence are negated and negation on the right-hand side is distributed down to atomic formulae. For example, to show $\mathrm{not}\,p$, NAF simulates reasoning with these equivalences. In the non-propositional case, the completion needs to be augmented wit
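The failure-driven reading of "not" can be sketched in a few lines of a backward-chaining prover. The propositional Python program below is an illustrative re-creation, not Prolog: the tuple-based clause encoding and the three-clause example program are our own inventions, chosen only to show that a goal "not p" succeeds exactly when every attempt to prove p fails.

```python
# A minimal propositional sketch of negation as failure: the goal
# ("not", p) succeeds iff the positive goal p cannot be derived.
# The clause encoding (head, [body literals]) is illustrative, not
# Prolog syntax.

def holds(literal, program, depth=0):
    """Backward chaining with NAF; assumes the program terminates."""
    if depth > 100:                      # crude guard against loops
        raise RecursionError("depth bound exceeded")
    if isinstance(literal, tuple) and literal[0] == "not":
        # negation as failure: succeed iff the positive goal fails
        return not holds(literal[1], program, depth + 1)
    return any(
        all(holds(b, program, depth + 1) for b in body)
        for head, body in program
        if head == literal
    )

# p holds if q holds and r cannot be proved; q is a fact;
# r would hold only if s held, and s is underivable.
program = [
    ("p", ["q", ("not", "r")]),
    ("q", []),
    ("r", ["s"]),
]

print(holds("p", program))            # True: r fails, so "not r" holds
print(holds(("not", "s"), program))   # True: s is underivable
```

Note the non-monotonicity: adding the fact ("s", []) to the program would make r derivable and hence retract p.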
https://en.wikipedia.org/wiki/BATF2
Basic leucine zipper transcription factor, ATF-like 2 is a protein that, in humans, is encoded by the BATF2 gene.
https://en.wikipedia.org/wiki/Code%20Hero
Code Hero is a planned educational video game by Primer Labs, designed by Alex Peake. The game is supposed to teach players how to write code in real programming languages by having them do so in a 3D world. The game drew controversy following its Kickstarter campaign, when the studio ran out of funds, missed release deadlines and funding rewards, and communicated little with the community. Some financial backers threatened legal action following long periods of no communication. The Code Hero website was offline for an extended period in 2014, and Peake has not commented publicly about the state of the project since 2013. From May 2015 the website was again inactive; however, it resurfaced in August 2016. Gameplay The main aim of Code Hero is to teach players programming in an engaging way. Players use a gun which can copy code and place it in other areas of the level in order to create a full program in a language such as JavaScript or UnityScript, whilst moving around a 3D world from a first-person perspective. Players start in a world called Gamebridge Unityversity's API, from which they can choose a series of levels which teach the basics of the programming languages; after this they move to the Humantheon, from which the player moves on to the rest of the game world, led by a robotic Ada Lovelace. Development Development on Code Hero began in January 2011, and in 2012 Peake started a Kickstarter campaign to raise $100,000 USD to fund further development of the game over the next six months. The Kickstarter concluded in February having raised $170,000 USD, at which point a further $30,000 USD had been raised through the studio's website. Peake hired a development team, and in March 2012, they began working in space provided by IGN Indie Open House in San Francisco. The project experienced staff turnover, and by October 2012, ran out of money. After failing to deliver the backing rewards by the original date, and not updating the website or Twitte
https://en.wikipedia.org/wiki/Carath%C3%A9odory%20function
In mathematical analysis, a Carathéodory function (or Carathéodory integrand) is a multivariable function that allows us to solve the following problem effectively: a composition of two Lebesgue-measurable functions does not have to be Lebesgue-measurable. A composition of a measurable function with a continuous function is Lebesgue-measurable, but in many situations continuity is too restrictive an assumption. Carathéodory functions are more general than continuous functions, but still allow the composition with a Lebesgue-measurable function to be measurable. Carathéodory functions play a significant role in the calculus of variations, and they are named after the Greek mathematician Constantin Carathéodory. Definition A function $f : \Omega \times \mathbb{R}^n \to \mathbb{R}$, for $\Omega \subseteq \mathbb{R}^d$ endowed with the Lebesgue measure, is a Carathéodory function if: 1. the mapping $x \mapsto f(x, y)$ is Lebesgue-measurable for every $y \in \mathbb{R}^n$; 2. the mapping $y \mapsto f(x, y)$ is continuous for almost every $x \in \Omega$. The main merit of Carathéodory functions is the following: if $f$ is a Carathéodory function and $u : \Omega \to \mathbb{R}^n$ is Lebesgue-measurable, then the composition $x \mapsto f(x, u(x))$ is Lebesgue-measurable. Example Many problems in the calculus of variations are formulated in the following way: find the minimizer of the functional $\mathcal{F}[u] = \int_\Omega f(x, u(x), \nabla u(x)) \, dx$ over the Sobolev space $W^{1,p}(\Omega)$, the space consisting of all functions that are weakly differentiable and such that the function itself and all its first-order derivatives are in $L^p(\Omega)$; here $f$ is, for some $p > 1$, a Carathéodory function. The fact that $f$ is a Carathéodory function ensures that $\mathcal{F}[u]$ is well-defined. p-growth If $f$ is Carathéodory and satisfies a bound of the form $|f(x, y, z)| \le a(x) + C\,(|y|^p + |z|^p)$ with $a \in L^1(\Omega)$ and $C \ge 0$ (this condition is called "p-growth"), then $\mathcal{F}$ is finite, and continuous in the strong topology (i.e. in the norm) of $W^{1,p}(\Omega)$.
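A standard illustrative example, not taken from the text above: a product of a measurable factor in $x$ and a continuous factor in $y$ is Carathéodory, and compositions with it are therefore measurable.

```latex
\[
  f(x,y) = a(x)\,g(y),
  \qquad a \colon \Omega \to \mathbb{R} \ \text{Lebesgue-measurable},
  \quad g \colon \mathbb{R}^n \to \mathbb{R} \ \text{continuous}.
\]
% Condition 1: for fixed y, the map x \mapsto a(x) g(y) is a scalar
% multiple of a, hence Lebesgue-measurable.
% Condition 2: for every x, the map y \mapsto a(x) g(y) is a scalar
% multiple of g, hence continuous.
% Consequently, for any measurable u : \Omega \to \mathbb{R}^n,
\[
  x \mapsto f(x, u(x)) = a(x)\, g(u(x))
\]
% is Lebesgue-measurable, as a product of measurable functions.
```

By contrast, swapping the roles (continuous in $x$, merely measurable in $y$) would not help: the composition $x \mapsto f(x, u(x))$ could then fail to be measurable.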
https://en.wikipedia.org/wiki/Nathan%20Netanyahu
Nathan S. Netanyahu (; born 28 November 1951) is an Israeli computer scientist and a professor of computer science at Bar-Ilan University. Netanyahu is the son of mathematician Elisha Netanyahu and Supreme Court of Israel justice Shoshana Netanyahu, the nephew of historian Benzion Netanyahu, and the cousin of current Prime Minister of Israel Benjamin Netanyahu. He did his graduate studies at the University of Maryland, College Park, earning a Ph.D. in 1992 under the supervision of David Mount and Azriel Rosenfeld. Netanyahu has co-authored highly cited research papers on nearest neighbor search and k-means clustering. He has published many papers on computer chess, was the local organizer of the 12th World Computer Chess Championship in 2004, and was program co-chair for the 4th International Conference on Computers and Games, colocated with the WCCC. Another frequent topic in his research is image registration.
https://en.wikipedia.org/wiki/Object-modeling%20technique
The object-modeling technique (OMT) is an object modeling approach for software modeling and designing. It was developed around 1991 by Rumbaugh, Blaha, Premerlani, Eddy and Lorensen as a method to develop object-oriented systems and to support object-oriented programming. OMT describes the object model, or static structure, of the system. OMT was developed as an approach to software development. The purposes of modeling according to Rumbaugh are: testing physical entities before building them (simulation), communication with customers, visualization (alternative presentation of information), and reduction of complexity. OMT has proposed three main types of models: Object model: The object model represents the static and most stable phenomena in the modeled domain. Main concepts are classes and associations with attributes and operations. Aggregation and generalization (with multiple inheritance) are predefined relationships. Dynamic model: The dynamic model represents a state/transition view on the model. Main concepts are states, transitions between states, and events to trigger transitions. Actions can be modeled as occurring within states. Generalization and aggregation (concurrency) are predefined relationships. Functional model: The functional model handles the process perspective of the model, corresponding roughly to data flow diagrams. Main concepts are process, data store, data flow, and actors. OMT is a predecessor of the Unified Modeling Language (UML). Many OMT modeling elements are common to UML. Functional Model in OMT: In brief, a functional model in OMT describes the whole of a model's internal processes with the help of data flow diagrams (DFDs), detailing how each process is performed independently.
https://en.wikipedia.org/wiki/Lazy%20systematic%20unit%20testing
Lazy Systematic Unit Testing is a software unit testing method based on the two notions of lazy specification, the ability to infer the evolving specification of a unit on the fly by dynamic analysis, and systematic testing, the ability to explore and test the unit's state space exhaustively to bounded depths. A testing toolkit, JWalk, exists to support lazy systematic unit testing in the Java programming language. Lazy Specification Lazy specification refers to a flexible approach to software specification, in which a specification evolves rapidly in parallel with frequently modified code. The specification is inferred by a semi-automatic analysis of a prototype software unit. This can include static analysis (of the unit's interface) and dynamic analysis (of the unit's behaviour). The dynamic analysis is usually supplemented by limited interaction with the programmer. The term lazy specification was coined by analogy with lazy evaluation in functional programming. The latter describes the delayed evaluation of sub-expressions, which are only evaluated on demand. The analogy is with the late stabilization of the specification, which evolves in parallel with the changing code until the code is deemed stable. Systematic Testing Systematic testing refers to a complete, conformance-testing approach to software testing, in which the tested unit is shown to conform exhaustively to a specification, up to the testing assumptions. This contrasts with exploratory, incomplete or random forms of testing. The aim is to provide repeatable guarantees of correctness after testing is finished. Examples of systematic testing methods include the Stream X-Machine testing method and equivalence partition testing with full boundary value analysis.
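The bounded-depth exploration at the heart of systematic testing can be sketched briefly. The Python program below is an illustrative re-creation of the idea, not the JWalk tool itself (which targets Java): it drives every operation sequence of a toy unit up to a fixed depth and checks an invariant after each step, giving the exhaustive-to-bounded-depth guarantee described above.

```python
# Sketch of bounded-depth systematic testing: exhaustively run every
# operation sequence on a unit up to a depth bound and check an
# invariant after each step. The BoundedCounter unit and the depth
# bound are illustrative choices.
from itertools import product

class BoundedCounter:
    """Toy unit under test: a counter clamped to [0, limit]."""
    def __init__(self, limit=3):
        self.limit = limit
        self.value = 0
    def inc(self):
        self.value = min(self.value + 1, self.limit)
    def dec(self):
        self.value = max(self.value - 1, 0)

def explore(depth=4):
    """Run every operation sequence of length <= depth; return the
    set of reachable states and any invariant violations found."""
    ops = ["inc", "dec"]
    states, violations = set(), []
    for n in range(1, depth + 1):
        for seq in product(ops, repeat=n):
            c = BoundedCounter()
            for op in seq:
                getattr(c, op)()
                if not (0 <= c.value <= c.limit):
                    violations.append(seq)
            states.add(c.value)
    return states, violations

states, violations = explore()
print(sorted(states))   # every counter value reachable within depth 4
print(violations)       # [] if the clamping invariant always holds
```

A lazy-specification layer, as in JWalk, would additionally record each sequence's observed outcome and ask the programmer to confirm only the novel ones; that interactive part is omitted here.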
https://en.wikipedia.org/wiki/Tolerance%20to%20infections
Tolerance to infection, or disease tolerance, is a mechanism that host organisms can use to fight parasites or pathogens that attack the host. Tolerance is not equivalent to resistance. Disease resistance is the host trait that prevents infection or reduces the number of pathogens and parasites within or on a host. Tolerance to infection can be illustrated by plotting host performance against increasing pathogen load. This is a reaction norm in which host performance is regressed against increasing disease burden. The slope of the reaction norm defines the degree of tolerance. High tolerance is indicated by a flat slope, i.e., host performance is not influenced by increasing burden. A steep downward slope indicates low tolerance, in which host performance is strongly reduced with increasing burden. An upward slope indicates overcompensation, in which a host increases its performance with increasing burden. Genetic variation in tolerance, and its correlation with resistance, can be quantified using a random regression model. In livestock science, tolerance to infections is sometimes termed disease resilience. A variety of reactions to pathogens are thought to be involved in tolerance, including superior immune system regulation and supplying pathogens with sufficient nutrients to blunt attacks on cells. Human tolerance Humans experience tolerance. For example, 90% of people infected with tuberculosis experience no symptoms. Similarly, many humans tolerate helminth infestations. Research Much research makes use of the lethal dose 50 protocol. Subjects are given enough pathogen to kill half of them. The remaining half presumably exhibit the desired tolerance. In many cases, the survivors not only survive but are unaffected by the pathogen. Research is complicated by the fact that animal protocols typically involve expecting some of the subjects to die, which is not ethical in humans.
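The slope-of-the-reaction-norm definition can be made concrete with a small computation. In the Python sketch below, the burden values and the two hosts' performance data are invented purely for illustration; an ordinary-least-squares slope stands in for the tolerance estimate (a random regression model in the quantitative-genetics sense would add host-specific random effects, which this sketch omits).

```python
# Tolerance as the slope of a reaction norm: regress host performance
# on pathogen burden. All data below are invented for illustration.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

burden = [0, 1, 2, 3, 4]

# A tolerant host: performance barely declines with burden.
tolerant = [10.0, 9.9, 9.8, 9.8, 9.7]
# A low-tolerance host: performance drops steeply with burden.
sensitive = [10.0, 8.0, 6.1, 4.0, 2.0]

print(ols_slope(burden, tolerant))    # near-flat slope (about -0.07)
print(ols_slope(burden, sensitive))   # steep negative slope (about -2.0)
```

The flatter (less negative) the slope, the higher the tolerance; an overcompensating host would yield a positive slope.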
https://en.wikipedia.org/wiki/Cover-abundance
Cover-abundance is a measure of plant cover, used in phytosociology (or vegetation science). It is based on percentages at the top end, but uses abundance estimates for species with a low plant cover. Several scales of cover-abundance are used, e.g. the original 5-point cover scale of Braun-Blanquet or the Domin scale, with finer subdivisions (from simple presence through 10 grades of linked cover-abundance). External links Definition of cover-abundance at encyclopedia.com Ecology Ecology terminology Ecological metrics
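As an illustration, a cover percentage can be mapped to a Braun-Blanquet class along the following lines; the cut-offs below follow one common version of the 5-point scale, exact boundaries vary between authors, so treat this Python sketch as indicative only:

```python
def braun_blanquet(cover_percent, individuals=None):
    """Map a percentage cover to a Braun-Blanquet cover-abundance
    class.  Above ~5% cover the class is driven by percentage;
    below that, the scale falls back on abundance estimates
    (number of individuals).  Boundaries are one common variant."""
    if cover_percent > 75:
        return "5"
    if cover_percent > 50:
        return "4"
    if cover_percent > 25:
        return "3"
    if cover_percent > 5:
        return "2"
    # sparse cover: classify by abundance instead
    if individuals is not None and individuals <= 2:
        return "r" if individuals == 1 else "+"
    return "1"

print(braun_blanquet(80))                # "5": dominant cover
print(braun_blanquet(10))                # "2": 5-25% cover
print(braun_blanquet(2, individuals=1))  # "r": a solitary individual
```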
https://en.wikipedia.org/wiki/National%20Numeracy
National Numeracy is an independent charity (registered no. 1145669 in England and Wales) based in Brighton, UK, that promotes the importance of numeracy and "everyday maths". The charity was founded in 2012; its chair is Perdita Fraser and its vice chair is Andy Haldane. Its current chief executive is Sam Sims, who replaced Mike Ellicock in 2020. The charity aims to challenge negative attitudes towards maths and promotes effective approaches to improving functional numeracy skills. Chris Humphries, former chair of National Numeracy and a former chief executive of the UK Commission for Employment and Skills, said: "It is simply inexcusable for anyone to say: 'I can't do maths.' It is a peculiarly British disease which we aim to eradicate." The charity's Theory of Change is detailed on its website. National Numeracy has been critical of the UK mathematics curriculum, claiming that it is flawed and requires radical improvement to ensure that everyone leaves compulsory education with essential numeracy skills. National Numeracy is supported by a number of celebrities, including Rachel Riley, financial journalist Martin Lewis of Money Saving Expert, author, television presenter and mathematics teacher Bobby Seagull, financial writer Iona Bain, Strictly Come Dancing's Katya Jones, Great British Bake Off 2020 winner Peter Sawkins, and the poet and comedian Harry Baker. It is also supported by organisations including TP ICAP, KPMG, Experian, Ufi VocTech Trust, the Garfield Weston Foundation and the Edge Foundation. History of National Numeracy A 2010 report commissioned by Lord Moser from New Philanthropy Capital recommended the creation of a national numeracy trust. The report, which focused on low levels of numeracy in the UK, showed how charities and funders can help people to become confidently numerate. These problems are a focus of National Numeracy's strategy. National Numeracy was legally registered as a charity in January 2012 with the press launch of the charity in Ma
https://en.wikipedia.org/wiki/FlexGen%20Power%20Systems
FlexGen is a United States energy storage technology company. The company is headquartered in Durham, North Carolina and was founded in 2009. FlexGen is the developer of the FlexGen HybridOS energy management system, which is capable of automating the dispatch of energy storage, renewable, and conventional power generation to provide enhanced capability and a lower cost of energy. FlexGen originally designed energy storage products for the United States military; these hybrid power systems were sold to the US Marine Corps, US Army, US Navy SEALs, and the Joint Special Operations Forces-Afghanistan. Systems fielded in those military branches showed at least a 52% reduction in fuel consumption and an 80% reduction in generator runtime. Venture Capital Funding On August 3, 2015, FlexGen Power Systems completed a $25.5M Series A venture capital funding round. Led by Denver-based Altira Group, the round also included investments from General Electric Ventures and Caterpillar Ventures. On August 25, 2021, it was announced that Apollo Global Management Inc. had invested $150M into FlexGen.
https://en.wikipedia.org/wiki/Bitwise%20operation
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands. On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources. Bitwise operators In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first (i.e., the rightmost) one. NOT The bitwise NOT, or bitwise complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example: NOT 0111 (decimal 7) = 1000 (decimal 8) NOT 10101011 (decimal 171) = 01010100 (decimal 84) The result is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = -x − 1. For unsigned integers, the bitwise complement of a number is the "mirror reflection" of the number across the half-way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 - x, which can be visualized on a graph as a downward line that effectively "flips" an increasing range from 0 to 255, to a decreasing range from 255 to 0. A simple but illus
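These identities are easy to check directly. In the Python sketch below, `~` operates on arbitrary-precision signed integers, so the result is masked to 8 bits to model an unsigned byte:

```python
def not8(x):
    """Bitwise NOT restricted to an 8-bit unsigned value."""
    return ~x & 0xFF

# ones' complement: every bit is flipped
assert not8(0b10101011) == 0b01010100   # 171 -> 84

# "mirror reflection": NOT x == 255 - x for all 8-bit unsigned x
assert all(not8(x) == 255 - x for x in range(256))

# two's complement relation on signed integers: NOT x == -x - 1
assert ~7 == -8
print("all identities hold")
```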
https://en.wikipedia.org/wiki/Golden%20rhombus
In geometry, a golden rhombus is a rhombus whose diagonals are in the golden ratio: Equivalently, it is the Varignon parallelogram formed from the edge midpoints of a golden rectangle. Rhombi with this shape form the faces of several notable polyhedra. The golden rhombus should be distinguished from the two rhombi of the Penrose tiling, which are both related in other ways to the golden ratio but have different shapes than the golden rhombus. Angles (See the characterizations and the basic properties of the general rhombus for angle properties.) The internal supplementary angles of the golden rhombus are: Acute angle: ; by using the arctangent addition formula (see inverse trigonometric functions): Obtuse angle: which is also the dihedral angle of the dodecahedron. Note: an "anecdotal" equality: Edge and diagonals By using the parallelogram law (see the basic properties of the general rhombus): The edge length of the golden rhombus in terms of the diagonal length is: Hence: The diagonal lengths of the golden rhombus in terms of the edge length are: Area By using the area formula of the general rhombus in terms of its diagonal lengths and : The area of the golden rhombus in terms of its diagonal length is: By using the area formula of the general rhombus in terms of its edge length : The area of the golden rhombus in terms of its edge length is: Note: , hence: As the faces of polyhedra Several notable polyhedra have golden rhombi as their faces. They include the two golden rhombohedra (with six faces each), the Bilinski dodecahedron (with 12 faces), the rhombic icosahedron (with 20 faces), the rhombic triacontahedron (with 30 faces), and the nonconvex rhombic hexecontahedron (with 60 faces). The first five of these are the only convex polyhedra with golden rhomb faces, but there exist infinitely many nonconvex polyhedra having this shape for all of their faces. See also Golden triangle
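The quantities discussed above can be verified numerically. The following Python sketch assumes a unit short diagonal and diagonals in the ratio 1 : φ:

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio
p, q = 1.0, phi                # short and long diagonals, q/p = phi

# the half-diagonals meet at right angles, so the edge follows
# from the parallelogram law (equivalently, Pythagoras)
edge = math.sqrt(p**2 + q**2) / 2

# acute vertex angle from two right triangles with legs p/2 and q/2
acute = 2 * math.degrees(math.atan(p / q))
obtuse = 180 - acute           # equals the dodecahedron's dihedral angle

# the area of any rhombus is half the product of its diagonals
area = p * q / 2

print(round(acute, 4), round(obtuse, 4), round(edge, 5), round(area, 5))
```

Numerically the acute angle comes out near 63.435° (which is arctan 2) and the obtuse angle near 116.565°, matching the dodecahedron's dihedral angle noted above.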
https://en.wikipedia.org/wiki/List%20of%20heaviest%20land%20mammals
The heaviest land mammal is the African bush elephant, which has a weight of up to . It measures 10–13 ft at the shoulder and consumes around of vegetation a day. Its tusks have been known to reach in length, although in modern populations they are most commonly recorded at a length of . The average walking speed of an elephant is , but they can run at recorded speeds of up to . Heaviest extant land mammals See also Largest organisms Notes
https://en.wikipedia.org/wiki/14th%20meridian%20west
The meridian 14° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Greenland, Iceland, the Atlantic Ocean, Africa, the Southern Ocean, and Antarctica to the South Pole. The 14th meridian west forms a great circle with the 166th meridian east. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 14th meridian west passes through: {| class="wikitable plainrowheaders" ! scope="col" width="125" | Co-ordinates ! scope="col" | Country, territory or sea ! scope="col" | Notes |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Arctic Ocean | style="background:#b0e0e6;" | |- | ! scope="row" | | |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Atlantic Ocean | style="background:#b0e0e6;" | Greenland Sea |- | ! scope="row" | | |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Atlantic Ocean | style="background:#b0e0e6;" | |- | ! scope="row" | | Island of Fuerteventura |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Atlantic Ocean | style="background:#b0e0e6;" | |- | ! scope="row" | Western Sahara | Claimed by |- | ! scope="row" | | |- | ! scope="row" | | |- | ! scope="row" | | |- | ! scope="row" | | |- | ! scope="row" | | |- | ! scope="row" | | |-valign="top" | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Atlantic Ocean | style="background:#b0e0e6;" | Passing just east of Ascension Island, (at ) |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Southern Ocean | style="background:#b0e0e6;" | |- | ! scope="row" | Antarctica | Queen Maud Land, claimed by |- |} See also 13th meridian west 15th meridian west w014 meridian west
https://en.wikipedia.org/wiki/Dynamic%20range%20compression
Dynamic range compression (DRC) or simply compression is an audio signal processing operation that reduces the volume of loud sounds or amplifies quiet sounds, thus reducing or compressing an audio signal's dynamic range. Compression is commonly used in sound recording and reproduction, broadcasting, live sound reinforcement and in some instrument amplifiers. A dedicated electronic hardware unit or audio software that applies compression is called a compressor. In the 2000s, compressors became available as software plugins that run in digital audio workstation software. In recorded and live music, compression parameters may be adjusted to change the way they affect sounds. Compression and limiting are identical in process but different in degree and perceived effect. A limiter is a compressor with a high ratio and, generally, a short attack time. Types There are two types of compression, downward and upward. Both downward and upward compression reduce the dynamic range of an audio signal. Downward compression reduces the volume of loud sounds above a certain threshold. The quiet sounds below the threshold remain unaffected. This is the most common type of compressor. A limiter can be thought of as an extreme form of downward compression as it compresses the sounds over the threshold especially hard. Upward compression increases the volume of quiet sounds below a certain threshold. The louder sounds above the threshold remain unaffected. Some compressors also have the ability to do the opposite of compression, namely expansion. Expansion increases the dynamic range of the audio signal. Like compression, expansion comes in two types, downward and upward. Downward expansion makes the quiet sounds below the threshold even quieter. A noise gate can be thought of as an extreme form of downward expansion as the noise gate makes the quiet sounds (for instance: noise) quieter or even silent, depending on the floor setting. Upward expansion makes the louder sounds a
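As an illustration of downward compression, the sketch below computes the static input/output level curve of an idealized hard-knee compressor; the function and parameter names, and the dB convention, are illustrative rather than taken from any particular unit:

```python
def compressed_level(level_db, threshold_db=-20.0, ratio=4.0):
    """Static input/output curve of a hard-knee downward compressor:
    levels above the threshold are reduced by the ratio; levels
    below it pass through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# 12 dB over a -20 dB threshold at 4:1 leaves only 3 dB over
print(compressed_level(-8.0))               # -17.0
# below the threshold the signal is untouched
print(compressed_level(-30.0))              # -30.0
# a limiter is the same curve with a very high ratio
print(compressed_level(-8.0, ratio=100.0))  # close to the -20 dB ceiling
```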
https://en.wikipedia.org/wiki/Autoimmune%20enteropathy
Autoimmune enteropathy (AIE) is a rare disorder of the immune system that affects infants, young children and (rarely) adults, causing severe diarrhea, vomiting, and other morbidities of the digestive tract. AIE causes malabsorption of food, vitamins, and minerals, often necessitating replacement fluids and total parenteral nutrition. Some disorders, such as IPEX syndrome, include autoimmune enteropathy as well as autoimmune "pathies" of the skin, thyroid, other glands, or kidneys. Symptoms The main symptoms of AIE include: Diarrhea (frequent loss of fluids) Intestinal inflammation Vomiting Intestinal bleeding Difficulty or inability to gain weight Rapid weight loss Decreased urine output from dehydration Diagnosis There is a diagnostic test for AIE that looks for an antibody against the enterocyte. The test uses a Western blot, which can identify IgG or IgA antibodies, together with immunohistochemistry, which can localize these antibodies. Endoscopy with biopsies of the colon, small intestine, stomach, and other locations may be helpful in diagnosing. This test is done to look at the stomach and small intestines and to see what cells are infiltrating the digestive tract. There are also documented cases of autoimmune enteropathy where the auto-antibodies were undetectable and the diagnosis was made on the basis of clinical presentation and response to treatment. Types There are 3 types of autoimmune enteropathy: Type 1: IPEX syndrome: Immune dysregulation, Polyendocrinopathy, Enteropathy, X-linked syndrome, which is caused by a mutation in the FOXP3 gene. This can only affect boys. Type 2: IPEX-like, which manifests similarly to IPEX syndrome but without recognizable mutations in the FOXP3 gene. This can affect both sexes and includes a variety of manifestations of varying severity. Type 3: Autoimmune manifestations primarily limited to the GI tract. This can affect both sexes and may also be considered IPEX-like. There is consid
https://en.wikipedia.org/wiki/OpenAP
OpenAP was the first open source Linux distribution released to replace the factory firmware on a number of commercially available IEEE 802.11b wireless access points, all based on the Eumitcom WL11000SA-N board. The idea of releasing third party and open source firmware for commercially available wireless access points has been followed by a number of more recent projects, such as OpenWrt and HyperWRT. OpenAP was released in early 2002 by Instant802 Networks, now known as Devicescape Software, complete with instructions for reprogramming the flash on any of the supported devices, full source code under the GNU General Public License, and a mailing list for discussions. External links http://savannah.nongnu.org/projects/openap/ Wi-Fi Free routing software Custom firmware
https://en.wikipedia.org/wiki/G20%20World%20Brain%20Mapping%20%26%20Therapeutic%20Scientific%20Summit
The G20 World Brain Mapping & Therapeutic Scientific Summit aims to contribute to President Obama's BRAIN initiative and to expand action on the current and upcoming initiatives across the G20 nations, bringing together the finest scientists, engineers, physicians and surgeons from across the globe in order to rapidly introduce clinical solutions for neurological disorders, which cost the world economy hundreds of billions of dollars annually. The G20 World Brain Mapping Summit was launched in 2014 on the initiative of The Society for Brain Mapping and Therapeutics (SBMT). History First Annual G20 World Brain Mapping Summit 2014 Amen Clinics, Compumedics Inc, SBMT, and BMF held the first annual summit on the G20 World Brain Mapping and Therapeutics Initiative in Brisbane, Australia, on 13 November at the Mercure Hotel. The summit started with messages of cooperation from the U.S. Congressman Chaka Fattah, the U.S. Congressman Blumenauer (Chairman of the Congressional Neuroscience Caucus) and Member of the Canadian Parliament Kirsty Duncan. The summit program included talks from top US, Australian, Italian, Turkish, and European Brain Mapping Initiative scientists covering topics such as advanced imaging in diagnosis of Alzheimer's disease, psychiatric disorders, brain cancers, neurodegenerative disorders, big data in brain mapping, strategies for global clinical trials, policies that could facilitate translation, integration and commercialization of devices and therapeutics such as nanoneurosurgery/nanoneuroscience, neurotrauma, and military medicine, as well as a roundtable discussion with US and Canadian policymakers. Second Annual G20 World Brain Mapping Summit 2015 Üsküdar University, Üsküdar University NPIstanbul Hospital and SBMT held the second annual summit on the G20 World Brain Mapping and Therapeutics Initiative in Istanbul, Turkey, on 13 November. The speakers then met in Antalya, where the G20 Summit was held on 15 November; the result of the preliminary report was announced by Prof. Nevza
https://en.wikipedia.org/wiki/LynxSecure
LynxSecure is a least-privilege real-time separation kernel hypervisor from Lynx Software Technologies designed for safety- and security-critical applications found in military, avionic, industrial, and automotive markets. Overview Leveraging multi-core CPU hardware virtualization features, and smaller than a microkernel (as small as 15 kB), LynxSecure is primarily targeted at raising the assurance of systems that perform critical computing functions in regulated environments. Common use cases include: separating critical apps from internet domains, isolating security functions from application domains, and verifying and filtering inter-domain communication. LynxSecure lives underneath applications and operating systems, runs completely transparently and cannot be tampered with. The software can be embedded into a broad class of devices, from embedded systems to IT platforms. The stripped-down design aims to raise the assurance of the host by removing the possibility of CPU privilege escalation and providing extremely tight control over CPU scheduling. Rather than attempting to shape system behavior indirectly by issuing commands to platform APIs according to a programming manual, LynxSecure allows developers to directly control system behavior through a unique system architecture specification written by the developer and enforced solely by the processor. With a traditional architecture, all hardware resources are owned by the real-time operating system (RTOS). This controls the CPU cores, memory, and peripherals. Applications must request access to those resources via APIs like fork(), malloc(), and write(). The RTOS is a monolithic collection of libraries that manages task scheduling, memory partitioning, and device I/O. This large block of code needs to be safety-certified and bug-free to be secure. A separation kernel relies on hardware virtualization functionality to do the heavy lifting. This creates efficient, tamper-proof, and non-bypassable virtual machines. Hardware resources a
https://en.wikipedia.org/wiki/PTFE%20fiber
PTFE fiber is a chemically resistant material. It is used in woven form in certain pump packings, as well as in nonwoven form in hot gas bag filters for industries with corrosive exhausts. Because PTFE is relatively insoluble and has a very high melting point, PTFE fibers cannot be produced by conventional melt or solution spinning. Instead they are made by combining particles of PTFE with cellulose, forming fibers of the cellulose and then sintering the PTFE particles (and charring the cellulose). The remnant char gives the fiber a brown color. It can be bleached white, although this reduces the strength.
https://en.wikipedia.org/wiki/Spheroidal%20wave%20function
Spheroidal wave functions are solutions of the Helmholtz equation that are found by writing the equation in spheroidal coordinates and applying the technique of separation of variables, just as the use of spherical coordinates leads to spherical harmonics. They are called oblate spheroidal wave functions if oblate spheroidal coordinates are used and prolate spheroidal wave functions if prolate spheroidal coordinates are used. If instead of the Helmholtz equation, the Laplace equation is solved in spheroidal coordinates using the method of separation of variables, the spheroidal wave functions reduce to the spheroidal harmonics. With oblate spheroidal coordinates, the solutions are called oblate harmonics, and with prolate spheroidal coordinates, prolate harmonics. Both types of spheroidal harmonics are expressible in terms of Legendre functions. See also Oblate spheroidal coordinates, especially the section Oblate spheroidal harmonics, for a more extensive discussion. Oblate spheroidal wave function
https://en.wikipedia.org/wiki/Terminator%20%28solar%29
A terminator or twilight zone is a moving line that divides the daylit side and the dark night side of a planetary body. The terminator is defined as the locus of points on a planet or moon where the line through the center of its parent star is tangent. An observer on the terminator of such an orbiting body with an atmosphere would experience twilight due to light scattering by particles in the gaseous layer. Earth's terminator On Earth, the terminator is a circle with a diameter that is approximately that of Earth. The terminator passes through any point on Earth's surface twice a day, at sunrise and at sunset, apart from polar regions where this only occurs when the point is not experiencing midnight sun or polar night. The circle separates the portion of Earth experiencing daylight from that experiencing darkness (night). While a little over one half of Earth is illuminated at any point in time (with exceptions during eclipses), the terminator path varies by time of day due to Earth's rotation on its axis. The terminator path also varies by time of year due to Earth's orbital revolution around the Sun; thus, the plane of the terminator is nearly parallel to planes created by lines of longitude during the equinoxes, and its maximum angle is approximately 23.5° to the pole during the solstices. Surface transit speed At the equator, under flat conditions (without obstructions like mountains or at a height above any such obstructions), the terminator moves at approximately . This speed can appear to increase when near obstructions, such as the height of a mountain, as the shadow of the obstruction will be cast over the ground in advance of the terminator along a flat landscape. The speed of the terminator decreases as it approaches the poles, where it can reach a speed of zero (full-day sunlight or darkness). Supersonic aircraft like jet fighters or Concorde and Tupolev Tu-144 supersonic transports are the only aircraft able to overtake the maximum speed of the
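The cosine dependence of the terminator's ground speed on latitude can be sketched as follows; the figures derive from Earth's equatorial circumference of roughly 40,075 km and a 24-hour solar day, so they are approximate:

```python
import math

EQUATORIAL_CIRCUMFERENCE_KM = 40075.0   # Earth's equatorial circumference
SOLAR_DAY_H = 24.0                      # one rotation relative to the Sun

def terminator_speed_kmh(latitude_deg):
    """Approximate ground speed of the terminator at a given
    latitude: the equatorial speed scaled by cos(latitude)."""
    equatorial = EQUATORIAL_CIRCUMFERENCE_KM / SOLAR_DAY_H
    return equatorial * math.cos(math.radians(latitude_deg))

print(round(terminator_speed_kmh(0), 1))    # roughly 1670 km/h at the equator
print(round(terminator_speed_kmh(60), 1))   # half the equatorial speed at 60 degrees
print(round(terminator_speed_kmh(90), 1))   # 0.0 at the poles
```

This is why only the fastest aircraft can keep pace with the terminator near the equator, while at high latitudes even a walker can outrun it.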
https://en.wikipedia.org/wiki/Martha%20Aliaga
Martha Beatriz Bilotti-Aliaga (1937 – October 15, 2011) was an Argentine statistics educator, who served as the president of the Caucus for Women in Statistics. Early life and education Martha Beatriz Bilotti was born in Mendoza, Argentina, and did her undergraduate studies at the University of Buenos Aires. She earned a master's degree in Santiago, Chile, at the Inter-American Center for the Teaching of Statistics. She completed a doctorate in statistics at the University of Michigan in 1986; her dissertation, supervised by Michael B. Woodroofe, was A problem in sequential analysis. Personal life She married Alfredo Aliaga of Columbia, Maryland, and they had three children: Viviana, Pablo and Eduardo. Career After teaching in the Dominican Republic, she moved to Ann Arbor, Michigan, to become an associate professor at the University of Michigan in 1972. She taught from 1981 to 1985 at American University, and in the late 1980s at both the University of the District of Columbia and the University of Michigan (commuting between the two). She was president of the Caucus for Women in Statistics in 2002, and moved from Michigan to the American Statistical Association in 2003 as director of education. With Brenda Gunderson, she wrote a statistics textbook, Interactive Statistics (Prentice Hall, 1999; 4th ed., 2017). In 1999, Aliaga was elected as a Fellow of the American Statistical Association, and a member of the International Statistical Institute. Death Aliaga died on October 15, 2011, of gallbladder cancer at her home in Columbia.
https://en.wikipedia.org/wiki/Male
Male (symbol: ♂) is the sex of an organism that produces the gamete (sex cell) known as sperm, which fuses with the larger female gamete, or ovum, in the process of fertilization. A male organism cannot reproduce sexually without access to at least one ovum from a female, but some organisms can reproduce both sexually and asexually. Most male mammals, including male humans, have a Y chromosome, which codes for the production of larger amounts of testosterone to develop male reproductive organs. In humans, the word male can also be used to refer to gender, in the social sense of gender role or gender identity. The use of "male" in regard to sex and gender has been subject to discussion. Overview The existence of separate sexes has evolved independently at different times and in different lineages, an example of convergent evolution. The repeated pattern is sexual reproduction in isogamous species with two or more mating types with gametes of identical form and behavior (but different at the molecular level) to anisogamous species with gametes of male and female types to oogamous species in which the female gamete is very much larger than the male and has no ability to move. There is a good argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Accordingly, sex is defined across species by the type of gametes produced (i.e.: spermatozoa vs. ova) and differences between males and females in one lineage are not always predictive of differences in another. Male/female dimorphism between organisms or reproductive organs of different sexes is not limited to animals; male gametes are produced by chytrids, diatoms and land plants, among others. In land plants, female and male designate not only the female and male gamete-producing organisms and structures but also the structures of the sporophytes that give rise to male and female plants. Evolution The evolution of ani
https://en.wikipedia.org/wiki/Land%20navigation
Land navigation is the discipline of following a route through unfamiliar terrain on foot or by vehicle, using maps with reference to terrain, a compass, and other navigational tools. It is distinguished from travel by traditional groups, such as the Tuareg across the Sahara and the Inuit across the Arctic, who use subtle cues to travel across familiar, yet minimally differentiated terrain. Land navigation is a core military discipline, which uses courses or routes that are an essential part of military training. Often, these courses are several miles long in rough terrain and are performed under adverse conditions, such as at night or in the rain. In the late 19th century, land navigation developed into the sport of orienteering. The earliest use of the term 'orienteering' appears to be in 1886. Nordic military garrisons began orienteering competitions in 1895. United States In the United States military, land navigation courses are required for the Marine Corps and the Army. Air Force escape and evasion training includes aspects of land navigation. Army Training Circular 3-25.26 is devoted to land navigation. See also History of orienteering Navigation Piloting Wayfinding
https://en.wikipedia.org/wiki/B%C3%B6rje%20Langefors
Börje Langefors (; 21 May 1915 – 13 December 2009) was a Swedish engineer and computer scientist, Emeritus Professor of Business Information Systems at the Department of Computer and Systems Science, Stockholm University and Royal Institute of Technology, Stockholm, and "one of those who made systems development a science." His children are Eva and Ola Langefors; his grandchildren are Charlotte Rosenmuller, Philip Krensler, Victor Krensler, Anna Langefors Bräutigam and Per Langefors. Biography Langefors was born in Ystad, Sweden, in 1915, and received his training from the Royal Institute of Technology, Stockholm. He started his career in Nordic Armature Factories (NAF) industries, and in 1949 he was recruited by the SAAB aircraft company. In 1965 he went to Stockholm and was stationed at the University at the Department of Mathematical Statistics. From 1967 to 1980 he was Professor of Business Information Systems at the Department of Computer and Systems Science, Stockholm University and Royal Institute of Technology, Stockholm. In 1974/75 he was a fellow at the Netherlands Institute for Advanced Study in Wassenaar, the Netherlands, where he completed the book "Information and Control in Organizations", on information systems architecture. Furthermore, Langefors was one of the key players in founding the IFIP TC8 Technical Committee on Information Systems in 1976. Among his former students and later colleagues in Stockholm were Janis Bubenko, Göran Goldkuhl, John Impagliazzo, Kristo Ivanov & Arne Sølvberg. In 1999 he received the LEO Award of the Association for Information Systems for his lifetime achievement. In commemoration of his contribution in the field of IS, a book was published with the title The Infological Equation: Essays in Honor of Börje Langefors. An annual award titled Börje Langeforspriset has been awarded by the Swedish Information Systems Academy since 2011 for the best doctoral dissertation in Sweden. Work A major achievem
https://en.wikipedia.org/wiki/Sacral%20architecture
Sacral architecture (also known as sacred architecture or religious architecture) is a religious architectural practice concerned with the design and construction of places of worship or sacred or intentional space, such as churches, mosques, stupas, synagogues, and temples. Many cultures devoted considerable resources to their sacred architecture and places of worship. Religious and sacred spaces are amongst the most impressive and permanent monolithic buildings created by humanity. Conversely, sacred architecture as a locale for meta-intimacy may also be non-monolithic, ephemeral and intensely private, personal and non-public. Sacred, religious and holy structures often evolved over centuries and were the largest buildings in the world, prior to the modern skyscraper. While the various styles employed in sacred architecture sometimes reflected trends in other structures, these styles also remained unique from the contemporary architecture used in other structures. With the rise of Christianity and Islam, religious buildings increasingly became centres of worship, prayer and meditation. The Western scholarly discipline of the history of architecture itself closely follows the history of religious architecture from ancient times until the Baroque period, at least. Sacred geometry, iconography, and the use of sophisticated semiotics such as signs, symbols and religious motifs are endemic to sacred architecture. Spiritual aspects of religious architecture Sacred or religious architecture is sometimes called sacred space. Architect Norman L. Koonce has suggested that the goal of sacred architecture is to make "transparent the boundary between matter and mind, flesh and the spirit." In discussing sacred architecture, Protestant minister Robert Schuller suggested that "to be psychologically healthy, human beings need to experience their natural setting—the setting we were designed for, which is the garden." Meanwhile, Richard Kieckhefer suggests that entering into
https://en.wikipedia.org/wiki/Weitzenb%C3%B6ck%27s%20inequality
In mathematics, Weitzenböck's inequality, named after Roland Weitzenböck, states that for a triangle of side lengths $a$, $b$, $c$, and area $A$, the following inequality holds: $a^2 + b^2 + c^2 \geq 4\sqrt{3}\,A$. Equality occurs if and only if the triangle is equilateral. Pedoe's inequality is a generalization of Weitzenböck's inequality. The Hadwiger–Finsler inequality is a strengthened version of Weitzenböck's inequality. Geometric interpretation and proof Rewriting the inequality above as $\frac{\sqrt{3}}{4}a^2 + \frac{\sqrt{3}}{4}b^2 + \frac{\sqrt{3}}{4}c^2 \geq 3A$ allows for a more concrete geometric interpretation, which in turn provides an immediate proof. Now the summands on the left side are the areas of equilateral triangles erected over the sides of the original triangle, and hence the inequality states that the sum of the areas of the equilateral triangles is always greater than or equal to threefold the area of the original triangle. This can now be shown by replicating the area of the triangle three times within the equilateral triangles. To achieve that, the Fermat point is used to partition the triangle into three obtuse subtriangles with a 120° angle, and each of those subtriangles is replicated three times within the equilateral triangle next to it. This only works if every angle of the triangle is smaller than 120°, since otherwise the Fermat point is not located in the interior of the triangle and becomes a vertex instead. However, if one angle is greater than or equal to 120°, it is possible to replicate the whole triangle three times within the largest equilateral triangle, so the sum of the areas of all equilateral triangles stays greater than the threefold area of the triangle anyhow. Further proofs The proof of this inequality was set as a question in the International Mathematical Olympiad of 1961. Even so, the result is not too difficult to derive using Heron's formula for the area of a triangle: First method It can be shown that the area of the inner Napoleon's triangle, which must be nonnegative, is $\frac{\sqrt{3}}{24}\left(a^2 + b^2 + c^2 - 4\sqrt{3}\,A\right)$, so the expression in parentheses must be greater than or equal to 0. Secon
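Weitzenböck's inequality (a² + b² + c² ≥ 4√3·A for a triangle with side lengths a, b, c and area A) can be checked numerically with Heron's formula, as in the following Python sketch:

```python
import math

def heron_area(a, b, c):
    """Triangle area from its side lengths (Heron's formula)."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def weitzenboeck_holds(a, b, c):
    """Check a^2 + b^2 + c^2 >= 4*sqrt(3)*area, with a tiny
    slack for floating-point rounding."""
    return a**2 + b**2 + c**2 >= 4 * math.sqrt(3) * heron_area(a, b, c) - 1e-12

# holds for an arbitrary triangle...
print(weitzenboeck_holds(3, 4, 5))     # True
# ...and with equality (up to rounding) for an equilateral one:
# side 1 gives area sqrt(3)/4, so the right side is exactly 3
lhs = 3 * 1.0**2
rhs = 4 * math.sqrt(3) * heron_area(1, 1, 1)
print(abs(lhs - rhs) < 1e-12)          # True
```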
https://en.wikipedia.org/wiki/Splendid%20China%20Folk%20Village
Splendid China Folk Village (Chinese: 锦绣中华民俗村, pinyin: Jǐnxiù Zhōnghuá Mínsú Cūn) is a theme park including two areas (Splendid China Miniature Park & China Folk Culture Village) located in Shenzhen, Guangdong province, People's Republic of China. The park's theme reflects the history, culture, art, ancient architecture, customs and habits of various nationalities. It is one of the world's largest scenery parks in terms of the number of scenes reproduced. The park is developed and managed by the major travel and tourist corporation, China Travel Service. Location Splendid China is situated by the Shenzhen Bay in a tourist area of Overseas Chinese Town (OCT) in the Shenzhen Special Economic Zone. It is a 35–40 minute train ride from Luohu Station on Line 1 of the Shenzhen Metro or 30 minutes by bus (bus number 101 or mini-bus 23 are two examples). Hours and tickets Time: 9.00 am to 9.00 pm; entry closes at 6.00 pm. 1-day ticket: RMB 220; child ticket: RMB 90 (height 1.2 m to 1.5 m); small children free (below 1.2 m in height). Annual tickets: solo RMB 360 (valid for 1 person only); parent RMB 460 (one parent with 1 child under 1.5 m); family RMB 660 (two parents with 1 child under 1.5 m). Note: annual ticket prices as of February 2011; solo price as of November 2015. About the park Over 100 major tourist attractions have been miniaturized and laid out according to the map of China. Most attractions have been reduced on a scale of 1:15. It is divided into a Scenic Spot Area and a Comprehensive Service Area. The entire park covers 30 hectares. There are cars and trains to transport visitors around the park, making it possible to visit the Great Wall of China, Forbidden City, Temple of Heaven, Summer Palace, Three Gorges Dam, Potala Palace and the Terracotta Army in one day. The park also hosts several shows depicting various events in Chinese history (e.g. a horse riding show depicting a battle led by Genghis Khan), C
https://en.wikipedia.org/wiki/Philadelphus%20%C3%97%20lemoinei
Philadelphus × lemoinei is a shrub in the genus Philadelphus. In 1884, Victor Lemoine crossed Philadelphus microphyllus with Philadelphus coronarius and produced this hybrid plant, which he named P. lemoinei. The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit: Philadelphus 'Manteau d'Hermine' Philadelphus 'Belle Étoile'
https://en.wikipedia.org/wiki/Solidarity
Solidarity or solidarism is an awareness of shared interests, objectives, standards, and sympathies creating a psychological sense of unity of groups or classes. Solidarity does not reject individuals and sees individuals as the basis of society. It refers to the ties in a society that bind people together as one. The term is generally employed in sociology and the other social sciences as well as in philosophy and bioethics. It is a significant concept in Catholic social teaching and in Christian democratic political ideology. What forms the basis of solidarity, and how it is implemented, vary between societies. In Global South societies it may be mainly based on kinship and shared values, while Global North societies accumulate a variety of theories as to what contributes to a sense of solidarity or social cohesion. Solidarity is also one of six principles of the Charter of Fundamental Rights of the European Union, and December 20 of each year is recognized as International Human Solidarity Day, an international observance. Solidarity is not mentioned in the European Convention on Human Rights nor in the United Nations' Universal Declaration of Human Rights and hence has lesser legal significance when compared to basic rights. Concepts of solidarity are mentioned in the Universal Declaration on Bioethics and Human Rights, but not defined clearly. As biotechnology and biomedical enhancement research and production increase, the need for a distinct definition of solidarity within healthcare system frameworks becomes more pressing. Discourse Émile Durkheim According to Émile Durkheim, the types of social solidarity correlate with types of society. Durkheim introduced the terms mechanical and organic solidarity as part of his theory of the development of societies in The Division of Labour in Society (1893). In a society exhibiting mechanical solidarity, its cohesion and integration comes from the homogeneity of individuals—people feel connected through similar work, educational a
https://en.wikipedia.org/wiki/Asymmetric%20simple%20exclusion%20process
In probability theory, the asymmetric simple exclusion process (ASEP) is an interacting particle system introduced in 1970 by Frank Spitzer. Many articles have been published on it in the physics and mathematics literature since then, and it has become a "default stochastic model for transport phenomena". The process with parameters $p, q \geq 0$, $p + q = 1$, is a continuous-time Markov process on $\{0,1\}^{\mathbb{Z}}$, the 1s being thought of as particles and the 0s as empty sites. Each particle waits a random amount of time having the distribution of an exponential random variable with mean one and then attempts a jump, one site to the right with probability $p$ and one site to the left with probability $q$. However, the jump is performed only if there is no particle at the target site. Otherwise, nothing happens and the particle waits another exponential time. All particles are doing this independently of each other. The model is related to the Kardar–Parisi–Zhang equation in the weakly asymmetric limit, i.e. when $p - q$ tends to zero under some particular scaling. Recently, progress has been made to understand the statistics of the current of particles and it appears that the Tracy–Widom distribution plays a key role. Sources
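The dynamics described above are straightforward to simulate on a finite ring, a common finite-size stand-in for $\mathbb{Z}$. The function below is an illustrative sketch under that assumption (unit-rate exponential clocks realized via a Gillespie-style superposition), not code from any ASEP reference implementation:

```python
import random

def simulate_asep(occ, p=0.7, q=0.3, t_max=10.0, seed=0):
    """Continuous-time ASEP on a ring of len(occ) sites (illustrative sketch).

    Each particle carries an independent exponential(1) clock; when a clock
    rings, that particle tries to hop one site right with probability p and
    one site left with probability q (p + q = 1), succeeding only if the
    target site is empty (the exclusion rule).
    """
    rng = random.Random(seed)
    occ = list(occ)
    n = len(occ)
    t = 0.0
    particles = [i for i, s in enumerate(occ) if s == 1]
    while particles:
        # superposition of k unit-rate clocks rings at rate k
        t += rng.expovariate(len(particles))
        if t > t_max:
            break
        i = rng.choice(particles)            # the particle whose clock rang
        step = 1 if rng.random() < p else -1
        j = (i + step) % n                   # target site on the ring
        if occ[j] == 0:                      # exclusion: jump only if empty
            occ[i], occ[j] = 0, 1
            particles = [k for k, s in enumerate(occ) if s == 1]
    return occ
```

Because jumps only move particles, the simulation conserves the particle number, which is a quick sanity check on any run.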
https://en.wikipedia.org/wiki/Intersex
Intersex people are individuals born with any of several sex characteristics including chromosome patterns, gonads, or genitals that, according to the Office of the United Nations High Commissioner for Human Rights, "do not fit typical binary notions of male or female bodies". Sex assignment at birth usually aligns with a child's anatomical sex and phenotype. The number of births with ambiguous genitals is in the range of 1:4500–1:2000 (0.02%–0.05%). Other conditions involve atypical chromosomes, gonads, or hormones. Some persons may be assigned and raised as a girl or boy but then identify with another gender later in life, while most continue to identify with their assigned sex. The number of births where the baby is intersex has been reported differently depending on who reports and which definition of intersex is used. Anne Fausto-Sterling and her book co-authors suggest that the prevalence of "nondimorphic sexual development" might be as high as 1.7%. A study published by Leonard Sax reports that this figure includes conditions such as late onset congenital adrenal hyperplasia and XXY/Klinefelter syndrome which most clinicians do not recognize as intersex; Sax states, "if the term intersex is to retain any meaning, the term should be restricted to those conditions in which chromosomal sex is inconsistent with phenotypic sex, or in which the phenotype is not classifiable as either male or female," stating the prevalence of intersex is about 0.018%. This means that for every 5,500 babies born, one either has sex chromosomes that do not match their appearance, or the appearance is so ambiguous that it is not clear whether the baby is male or female. Terms used to describe intersex people are contested, and change over time and place. Intersex people were previously referred to as "hermaphrodites" or "congenital eunuchs". In the 19th and 20th centuries, some medical experts devised new nomenclature in an attempt to classify the characteristics that they had obs
https://en.wikipedia.org/wiki/Dental%20microwear
Dental microwear analysis is a method to infer diet and behavior in extinct animals, especially in fossil specimens. Typically, the patterns of pits and scratches on the occlusal or buccal surface of the enamel are compared with patterns observed in extant species to infer ecological information. Hard foods in particular can lead to distinctive patterns (although see below). Microwear can also be used for inferring behavior, especially behaviors related to the non-masticatory use of teeth as 'tools'. Other uses include investigating weaning in past populations. Methods used to collect data initially involved a microscope and manually collecting information on individual microwear features, but software to automatically collect data has improved markedly in recent years. Potential issues and ongoing debates The role of phytoliths and environmental grit in creating microwear features is not well understood, and recent research suggests such items may create a surprisingly large amount of the microwear features visible in fossil samples. A recent study suggests hard food, or at least some types of it, may not contribute significantly to microwear textures. Dental microwear is rapidly turned over during life and therefore may only give information about the last few days of an individual's life. In particular, this 'last supper' effect may create a severely biased sample.
https://en.wikipedia.org/wiki/Lovelock%20theory%20of%20gravity
In theoretical physics, Lovelock's theory of gravity (often referred to as Lovelock gravity) is a generalization of Einstein's theory of general relativity introduced by David Lovelock in 1971. It is the most general metric theory of gravity yielding conserved second order equations of motion in an arbitrary number of spacetime dimensions D. In this sense, Lovelock's theory is the natural generalization of Einstein's General Relativity to higher dimensions. In three and four dimensions (D = 3, 4), Lovelock's theory coincides with Einstein's theory, but in higher dimensions the theories are different. In fact, for D > 4 Einstein gravity can be thought of as a particular case of Lovelock gravity since the Einstein–Hilbert action is one of several terms that constitute the Lovelock action. Lagrangian density The Lagrangian of the theory is given by a sum of dimensionally extended Euler densities, and it can be written as follows: $\mathcal{L} = \sqrt{-g}\,\sum_{n=0}^{t}\alpha_n\,\mathcal{R}^{(n)}$ with $\mathcal{R}^{(n)} = \frac{1}{2^n}\,\delta^{\mu_1\nu_1\cdots\mu_n\nu_n}_{\alpha_1\beta_1\cdots\alpha_n\beta_n}\prod_{r=1}^{n}R^{\alpha_r\beta_r}{}_{\mu_r\nu_r}$, where $R^{\mu\nu}{}_{\alpha\beta}$ represents the Riemann tensor, and where the generalized Kronecker delta $\delta$ is defined as the antisymmetric product $\delta^{\mu_1\nu_1\cdots\mu_n\nu_n}_{\alpha_1\beta_1\cdots\alpha_n\beta_n} = \frac{1}{n!}\,\delta^{\mu_1}_{[\alpha_1}\delta^{\nu_1}_{\beta_1}\cdots\delta^{\mu_n}_{\alpha_n}\delta^{\nu_n}_{\beta_n]}$. Each term $\mathcal{R}^{(n)}$ corresponds to the dimensional extension of the Euler density in 2n dimensions, so that these only contribute to the equations of motion for n < D/2. Consequently, without loss of generality, t in the equation above can be taken to be $D/2$ for even dimensions and $(D-1)/2$ for odd dimensions. Coupling constants The coupling constants αn in the Lagrangian have dimensions of [length]2n − D, although it is usual to normalize the Lagrangian density in units of the Planck scale. Expanding the product in $\mathcal{L}$, the Lovelock Lagrangian takes the form $\mathcal{L} = \sqrt{-g}\,\bigl(\alpha_0 + \alpha_1 R + \alpha_2\bigl(R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}\bigr) + \cdots\bigr)$, where one sees that coupling α0 corresponds to the cosmological constant Λ, while αn with n ≥ 2 are coupling constants of additional terms that represent ultraviolet corrections to Einstein theory, involving higher order contractions of the Riemann tensor Rμναβ. In particular, the second order term is precisely the quadratic Gauss–Bonnet term, which is the dimensionally exte
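A short aside on why Lovelock gravity reduces to Einstein gravity in D = 3, 4: in four dimensions the quadratic Gauss–Bonnet density integrates to a topological invariant, the Euler characteristic, so it cannot contribute to the equations of motion. This is the standard Chern–Gauss–Bonnet statement, recorded here for clarity rather than quoted from the article:

```latex
% In D = 4 the Gauss–Bonnet combination is a total derivative; its integral
% over a closed manifold M is the Euler characteristic \chi(M):
\frac{1}{32\pi^2}\int_M d^4x\,\sqrt{-g}\,
  \left(R^2 - 4R_{\mu\nu}R^{\mu\nu}
        + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}\right)
  = \chi(M)
```

For D > 4 the same combination is no longer topological, which is exactly where the Lovelock corrections to Einstein gravity become dynamical.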
https://en.wikipedia.org/wiki/Tandy%203000
The Tandy 3000 is a personal computer introduced by Radio Shack in 1986 based on the 16-bit 8 MHz Intel 80286 microprocessor. Description The Tandy 3000 is functionally a clone of the IBM PC-AT, the first PC by a major manufacturer using the fully 16-bit Intel 286 processor. As such, it departed from Tandy's two previous PC workalikes (the Tandy 2000 in 1983 and the Tandy 1000 in 1985) in that it was built without proprietary technology. The motherboard contains no built-in circuitry for its disk controller or video display. Owners could outfit the computer, and upgrade it, with standard PC components sold by Tandy or available from third-party suppliers. Since the hardware is industry-standard throughout, there were no compatibility issues such as there were with the previous models 2000 and 1000. More accurately, any compatibility troubles that might arise were no fault of the computer itself, but rather of third-party hardware installed in it, or of the AT architecture upon which the computer was engineered. The operating system was an extra-cost item; the purchaser could choose MS-DOS 3.2 or Xenix V. Xenix and the extra memory it demanded were expensive but permitted up to six remote terminals to run programs on a single Tandy 3000 simultaneously. Microsoft's BASIC interpreter, bundled with Tandy's Deskmate productivity suite, was offered at extra cost. Digital Research's CP/M-86 was an option available from other software vendors. Later, other options available for generic AT clones such as the Tandy 3000 included IBM's PC DOS, Digital Research's DR-DOS and GEM, and 16-bit versions of Microsoft's Windows (up to version 3.x). Still later, IBM's graphical multitasking OS/2 was an option for machines equipped with enough memory and capable graphics display hardware. Base memory was 512 KB, expandable to 640 KB on the motherboard. RAM was expandable to a maximum of 12 MB using cards in the expansion slots. The Tandy 3000 has ten expansion slots: seven 16-bit AT compatible, two 8
https://en.wikipedia.org/wiki/Berthold%20Leibinger%20Zukunftspreis
The Berthold Leibinger Zukunftspreis (future prize) is an international award for "excellent research on the application or generation of laser light". Since 2006, it is biennially awarded by the German non-profit foundation Berthold Leibinger Stiftung as part of its Laser Prizes, with an amount of 50,000 euros. Recipients , two Zukunftspreis laureates have also received the Nobel Prize in Physics: Gérard Mourou in 2018, and Anne L'Huillier in 2023. Source: See also Berthold Leibinger Innovationspreis (affiliated innovation prize) Berthold Leibinger (founder of issuing foundation) List of physics awards
https://en.wikipedia.org/wiki/Motor%20imagery
Motor imagery is a mental process by which an individual rehearses or simulates a given action. It is widely used in sport training as mental practice of action, neurological rehabilitation, and has also been employed as a research paradigm in cognitive neuroscience and cognitive psychology to investigate the content and the structure of covert processes (i.e., unconscious) that precede the execution of action. In some medical, musical, and athletic contexts, when paired with physical rehearsal, mental rehearsal can be as effective as pure physical rehearsal (practice) of an action. Definition Motor imagery can be defined as a dynamic state during which an individual mentally simulates a physical action. This type of phenomenal experience implies that the subject feels themselves performing the action. It corresponds to the so-called internal imagery (or first person perspective) of sport psychologists. Mental practice of action Mental practice refers to use of visuo-motor imagery with the purpose of improving motor behavior. Visuo-motor imagery requires the use of one's imagination to simulate an action, without physical movement. It has come to the fore due to the relevance of imagery in enhancing sports and surgical performance. Sports Mental practice, when combined with physical practice, can be beneficial to beginners learning a sport, but even more helpful to professionals looking to enhance their skills. Physical practice generates the physical feedback necessary to improve, while mental practice creates a cognitive process physical practice cannot easily replicate. Medicine When surgeons and other medical practitioners mentally rehearse procedures along with their physical practice, it produces the same results as physical rehearsal, but costs much less. But unlike its use in sports, to improve a skill, mental practice is used in medicine as a form of stress reduction before operations. Music Mental practice is a technique used in music as well. Profe
https://en.wikipedia.org/wiki/Alexander%20Black%20%28photographer%29
Alexander Black (1859–1940) was an American author, photographer, newspaper man, and the inventor of the pre-cinema “Picture Play”, which debuted in 1894. Early life Alexander Black was born in New York City in 1859, the eldest child of Peter Black and Sarah MacCrae, both born in Scotland. After a grammar school education and teaching himself printmaking, he became a reporter at the Brooklyn Eagle. In 1878, at the age of 19, he toured Europe for three months, keeping a detailed sketchbook. Career Black's career began as a newspaper man in Brooklyn and stenographer for the Brooklyn courts, working at the Brooklyn Daily Eagle starting in 1870, as editor of the Brooklyn Times (1885–1905), at the New York World (1905–1910), Frank Seaman, Inc. (1910–1913), and the Newspaper Feature Service (1913–1926), and as art editor for King Features Syndicate (1926–1935), alongside freelance writing and photography. During this time he also became the first president of the department of photography at the Brooklyn Institute of Arts and Sciences in 1886. His first book, published in 1886 and titled Photography Indoors and Out, was intended to be a manual for amateur photographers. On the lyceum circuit, Black presented a magic lantern show of candid photography called "Life through a Detective Camera" (alternately titled "Ourselves as Others See Us") in 1889. Inspired by audience responses to these lectures, as well as emerging work by Eadweard Muybridge capturing the effect of motion in photography, Black began to develop a plan to bring fiction to life through dissolving slides. Over the summer of 1894, he wrote and photographed his first "Picture Play", titled Miss Jerry, at Carbon Studio at 5 West 16th Street, New York. The finished work debuted before a live audience on October 9, 1894 at Carbon Studio, featuring a "slow movie" composed of over one hundred glass slide photographs of posed motion, accompanied by a feature-length script. The Picture Play (See Miss Jerry) “Primarily my purpose was to illustrate
https://en.wikipedia.org/wiki/Ekkehard%20Bautz
Ekkehard Karl Friedrich Bautz is a molecular biologist and chair of the Institute of Molecular Genetics at the University of Heidelberg. Biography He was born September 24, 1933, in Konstanz, Germany. After studying chemistry at Freiburg University and the University of Zürich, at the age of 26, he emigrated to the United States and later became a U.S. citizen. In 1961, he obtained a doctorate in molecular biology from the University of Wisconsin. He did postdoctoral work at the University of Illinois with a fellowship awarded by the Damon Runyon Memorial Fund for Cancer Research and in 1962 became an assistant professor at the Institute of Microbiology at Rutgers. In 1964, he participated in the Evolving Genes and Proteins Symposium, a landmark event in the history of molecular evolution research. In 1966, he was promoted to associate professor at the same institution. In 1970, he was appointed full professor there, but chose to return to Germany in the same year to become chair of the Institute of Molecular Genetics at the University of Heidelberg. Bautz's most important discovery is that of sigma factor, the first known transcription factor. Scientific work He developed methods for the isolation of messenger RNA and continued research on transcription. Later, he focused on novel selection methods, in particular phage display and the generation of recombinant antibodies. In 1981, he founded the Center for Molecular Biology (ZMBH) in Heidelberg, where he served as chair of microbiology and acting director from 1983 to 1985. Professional activities Bautz was on the editorial board of the Journal of Virology from 1966 to 1970, and of Molecular and General Genetics from 1971 to 2000. He was chairman of the German Genetics Society from 1979 to 1981, and a board member of the German Cancer Research Centre from 1978 to 1983. In 1994, he was appointed a board member of the Zentralkommission für Biologische Sicherheit (ZKBS, engl.: Central Commission for Biologica
https://en.wikipedia.org/wiki/Large-format%20projection
Large-format projection (or large-image projection) generally refers to the use of large-format slide projectors or extremely powerful video projectors to produce still or dynamic images on projection surfaces of about 100–10,000 m2 and more. Sometimes slide projectors and video projectors are combined to achieve a kind of "picture in picture" projection, enabling larger projections with a dynamic part inside. This, however, is a speciality of projection artists who have detailed knowledge of the projection parameters. Projectors
https://en.wikipedia.org/wiki/Statistical%20Methods%20for%20Research%20Workers
Statistical Methods for Research Workers is a classic book on statistics, written by the statistician R. A. Fisher. It is considered by some to be one of the 20th century's most influential books on statistical methods, together with his The Design of Experiments (1935). It was originally published in 1925, by Oliver & Boyd (Edinburgh); the final and posthumous 14th edition was published in 1970. Reviews According to Denis Conniffe: Ronald A. Fisher was "interested in application and in the popularization of statistical methods and his early book Statistical Methods for Research Workers, published in 1925, went through many editions and motivated and influenced the practical use of statistics in many fields of study. His Design of Experiments (1935) [promoted] statistical technique and application. In that book he emphasized examples and how to design experiments systematically from a statistical point of view. The mathematical justification of the methods described was not stressed and, indeed, proofs were often barely sketched or omitted altogether ..., a fact which led H. B. Mann to fill the gaps with a rigorous mathematical treatment in his well-known treatise, ." Chapters Prefaces Introduction Diagrams Distributions Tests of Goodness of Fit, Independence and Homogeneity; with table of χ2 Tests of Significance of Means, Difference of Means, and Regression Coefficients The Correlation Coefficient Intraclass Correlations and the Analysis of Variance Further Applications of the Analysis of Variance SOURCES USED FOR DATA AND METHODS INDEX In the second edition of 1928 a chapter 9 was added: The Principles of Statistical Estimation. See also The Design of Experiments Notes Further reading The March 1951 issue of the Journal of the American Statistical Association contains articles celebrating the 25th anniversary of the publication of the first edition. A.W.F. Edwards (2005) "R. A. Fisher, Statistical Methods for Research Workers, 1925," in I. G
https://en.wikipedia.org/wiki/Drosophila%20circadian%20rhythm
Drosophila circadian rhythm is a daily 24-hour cycle of rest and activity in the fruit flies of the genus Drosophila. The biological process was discovered and is best understood in the species Drosophila melanogaster. Other than normal sleep-wake activity, D. melanogaster has two unique daily behaviours, namely regular vibration (locomotor activity) during the process of hatching (called eclosion) from the pupa, and during mating. Locomotor activity is maximum at dawn and dusk, while eclosion is at dawn. Biological rhythms were first studied in Drosophila. The Drosophila circadian rhythm has paved the way for understanding circadian behaviour and diseases related to sleep-wake conditions in other animals, including humans, because the underlying circadian clocks are fundamentally similar. Drosophila circadian rhythm was discovered in 1935 by German zoologists Hans Kalmus and Erwin Bünning. American biologist Colin S. Pittendrigh provided an important experiment in 1954, which established that circadian rhythm is driven by a biological clock. The genetics was first understood in 1971, when Seymour Benzer and Ronald J. Konopka reported that mutations in specific genes change or stop the circadian behaviour. They discovered the gene called period (per), mutations of which alter the circadian rhythm. It was the first gene known to control behaviour. After a decade, Konopka, Jeffrey C. Hall, Michael Rosbash, and Michael W. Young discovered novel genes including timeless (tim), Clock (Clk), cycle (cyc), and cry. These genes and their product proteins play a key role in the circadian clock. The research conducted in Benzer's lab is narrated in Time, Love, Memory by Jonathan Weiner. For their contributions, Hall, Rosbash and Young received the Nobel Prize in Physiology or Medicine in 2017. History During the process of eclosion by which an adult fly emerges from the pupa, Drosophila exhibits regular locomotor activity (by vibration) that occurs during 8–10 hour intervals star
https://en.wikipedia.org/wiki/BLAST%20model%20checker
The Berkeley Lazy Abstraction Software verification Tool (BLAST) is a software model checking tool for C programs. The task addressed by BLAST is the need to check whether software satisfies the behavioral requirements of its associated interfaces. BLAST employs counterexample-driven automatic abstraction refinement to construct an abstract model that is then model-checked for safety properties. The abstraction is constructed on the fly, and only to the requested precision. Achievements BLAST came first in the category DeviceDrivers64 in the 1st Competition on Software Verification (2012) that was held at TACAS 2012 in Tallinn. BLAST came third (category DeviceDrivers64) in the 2nd Competition on Software Verification (2013) that was held at TACAS 2013 in Rome. BLAST came first in the category DeviceDrivers64 in the 3rd Competition on Software Verification (2014) that was held at TACAS 2014 in Grenoble.
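The counterexample-driven refinement described above follows the general CEGAR scheme. The loop below is an illustrative sketch of that scheme, not BLAST's actual code; the five callables are hypothetical hooks standing in for the real analyses (predicate abstraction, abstract reachability check, counterexample feasibility test, and predicate discovery):

```python
def cegar(program, prop, abstract, model_check, is_feasible, refine):
    """Counterexample-guided abstraction refinement (CEGAR), sketched.

    Hypothetical hooks:
      abstract(program, predicates) -> abstract model at current precision
      model_check(model, prop)      -> None if prop holds, else a counterexample
      is_feasible(program, cex)     -> True if cex is a real error path
      refine(program, cex)          -> new predicates ruling out a spurious cex
    """
    predicates = set()
    while True:
        model = abstract(program, predicates)   # abstract to current precision
        cex = model_check(model, prop)          # None => property holds
        if cex is None:
            return "safe"
        if is_feasible(program, cex):           # genuine violation found
            return "unsafe"
        predicates |= refine(program, cex)      # sharpen and retry
```

BLAST's "lazy" twist on this scheme is that the abstraction is refined only along the parts of the state space that the spurious counterexample actually touched, rather than rebuilt globally.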
https://en.wikipedia.org/wiki/Darkdevil
Darkdevil (Reilly Tyne) is a superhero appearing in American comic books published by Marvel Comics. Created by Tom DeFalco and Pat Olliffe, the character first appeared in Spider-Girl #2 (November 1998). Darkdevil primarily appears in the Marvel Comics 2 future of the Marvel Universe. Publication history Darkdevil debuted in Spider-Girl #2 (November 1998), by writer Tom DeFalco and artist Pat Olliffe. He appeared in the 2000 Darkdevil series, his first solo comic book series. He appeared in the 2005 Last Hero Standing series. Fictional character biography Reilly Tyne is the son of Ben Reilly (Spider-Man's clone) and Elizabeth Tyne. Before he reached his teens, his inherited powers began to manifest but brought with them clonal degeneration. Kaine, the degenerated first clone of Peter Parker, found him and placed him within a regeneration tank to slow the process. Kaine's efforts had two goals: to resurrect Daredevil, who had previously died saving Kaine, and to heal Tyne. Kaine summoned the demon Zarathos, which attempted to possess Tyne, but he was saved by the soul of Daredevil, who drove out Zarathos, although both Daredevil's soul and a piece of the demon remained within Tyne, and he was left with a demonic appearance and certain demonic abilities. Through meditation and concentration, Tyne eventually learned to project a human appearance, but he now looked to be in his twenties, almost twice his actual age. Following in both of Daredevil's paths, he studied law and became an attorney, while taking on a costume bearing a resemblance to Daredevil's and using his demonic abilities to fight crime as Darkdevil. He apparently has access to at least some of Daredevil's memories, since he knows Spider-Man's secret identity. Darkdevil has fought alongside Spider-Girl several times, as well as the semi-retired Spider-Man. Neither Spider-Man nor Spider-Girl is aware of his genetic relation to them, but Darkdevil has hinted that he owes his existence to the ori
https://en.wikipedia.org/wiki/BT.1120
BT.1120 is a digital interface standard for HDTV studio signals published by the International Telecommunication Union. , the current version of BT.1120 is BT.1120-8.
https://en.wikipedia.org/wiki/Mammalian%20assemblage%20zone
In Pleistocene palaeontology, a mammalian assemblage zone (MAZ) is a collection of fossil bones of mammals.
https://en.wikipedia.org/wiki/1858%20Bradford%20sweets%20poisoning
The 1858 Bradford sweets poisoning was the arsenic poisoning of more than 200 people in Bradford, England, when sweets accidentally made with arsenic were sold from a market stall. Twenty-one victims died as a result. The event contributed to the passage of the Pharmacy Act 1868 in the United Kingdom and legislation regulating the adulteration of foodstuffs. Background William Hardaker, known to locals as "Humbug Billy", sold sweets from a stall in the Greenmarket in central Bradford (now the site of Bradford's Arndale Centre). Hardaker purchased his supplies from Joseph Neal, who made the sweets (or "lozenges") on Stone Street a few hundred yards to the north. The lozenges in question were peppermint humbugs, made of peppermint oil incorporated into a base of sugar and gum. However, sugar was expensive (6½d per ) and so Neal would substitute powdered gypsum (½d per ) — known as "daff" — for some of the required sugar. The adulteration of foodstuffs with cheaper substances was common at the time and the adulterators used obscure nicknames ("daff", "multum", "flash", "stuff") to hide the practice. Accidental poisoning On the occasion in question, on 30 October 1858, Neal sent James Archer, a lodger who lived at his house, to collect daff for Hardaker's humbugs from druggist Charles Hodgson. Hodgson's pharmacy was away at Baildon Bridge in Shipley. Hodgson was at his pharmacy, but did not serve Archer owing to illness, and so his requests were seen to by his young assistant, William Goddard. Goddard asked Hodgson where the daff was, and was told that it was in a cask in a corner of the attic. However, rather than daff, Goddard sold Archer arsenic trioxide. The mistake remained undetected even during manufacture of the sweets by James Appleton, an "experienced sweetmaker" employed by Neal, though Appleton did observe that the finished product looked different from the usual humbugs. Appleton was suffering symptoms of illness during the sweet-making process and w
https://en.wikipedia.org/wiki/600%20%28number%29
600 (six hundred) is the natural number following 599 and preceding 601. Mathematical properties Six hundred is a composite number, an abundant number, a pronic number and a Harshad number. Credit and cars In the United States, a credit score of 600 or below is considered poor, limiting available credit at a normal interest rate. NASCAR runs 600 advertised miles in the Coca-Cola 600, its longest race. The Fiat 600 is a car, the SEAT 600 its Spanish version. Integers from 601 to 699 600s 601 = prime number, centered pentagonal number 602 = 2 × 7 × 43, nontotient, number of cubes of edge length 1 required to make a hollow cube of edge length 11, area code for Phoenix, AZ along with 480 and 623 603 = 32 × 67, Harshad number, Riordan number, area code for New Hampshire 604 = 22 × 151, nontotient, totient sum for first 44 integers, area code for southwestern British Columbia (Lower Mainland, Fraser Valley, Sunshine Coast and Sea to Sky) 605 = 5 × 112, Harshad number, sum of the nontriangular numbers between the two successive triangular numbers 55 and 66, number of non-isomorphic set-systems of weight 9. 606 = 2 × 3 × 101, sphenic number, sum of six consecutive primes (89 + 97 + 101 + 103 + 107 + 109), admirable number 607 – prime number, sum of three consecutive primes (197 + 199 + 211), Mertens function(607) = 0, balanced prime, strictly non-palindromic number, Mersenne prime exponent 608 = 25 × 19, Mertens function(608) = 0, nontotient, happy number, number of regions formed by drawing the line segments connecting any two of the perimeter points of a 3 times 4 grid of squares. 609 = 3 × 7 × 29, sphenic number, strobogrammatic number 610s 610 = 2 × 5 × 61, sphenic number, nontotient, Fibonacci number, Markov number. Also a kind of telephone wall socket used in Australia. 611 = 13 × 47, sum of the three standard board sizes in Go (92 + 132 + 192), the 611th tribonacci number is prime 612 = 22 × 32 × 17, Harshad number, Zuckerman number , untouchable
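The properties claimed for 600 at the top of the entry can be checked mechanically. The helper functions below are an illustrative sketch (the names are my own, not from the article):

```python
def proper_divisor_sum(n):
    # sum of the divisors of n, excluding n itself
    return sum(d for d in range(1, n) if n % d == 0)

def is_abundant(n):
    # abundant: proper divisors sum to more than n
    return proper_divisor_sum(n) > n

def is_pronic(n):
    # pronic: n = k*(k+1) for some integer k >= 0
    return any(k * (k + 1) == n for k in range(n + 1))

def is_harshad(n):
    # Harshad: divisible by the sum of its decimal digits
    return n % sum(int(d) for d in str(n)) == 0
```

For 600 these give: abundant (proper divisors sum to 1260 > 600), pronic (600 = 24 × 25), and Harshad (digit sum 6 divides 600).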
https://en.wikipedia.org/wiki/Video%20Electronics%20Standards%20Association
VESA (), formally known as Video Electronics Standards Association, is an American technical standards organization for computer display standards. The organization was incorporated in California in July 1989 and has its office in San Jose. It claims a membership of over 300 companies. In November 1988, NEC Home Electronics announced its creation of the association to develop and promote a Super VGA computer display standard as a successor to IBM's proprietary Video Graphics Array (VGA) display standard. Super VGA enabled graphics display resolutions up to 800×600 pixels, compared to VGA's maximum resolution of 640×480 pixels—a 56% increase. The organization has since issued several additional standards related to computer video displays. Widely used VESA standards include DisplayHDR, DisplayPort, and Flat Display Mounting Interface. Standards Feature connector (VFC), obsolete connector that was often present on older videocards, used as an 8-bit video bus to other devices VESA Advanced Feature Connector (VAFC), newer version of the VFC that widens the bus to either a 16-bit or 32-bit bus VESA Local Bus (VLB), once used as a fast video bus (akin to the more recent Accelerated Graphics Port (AGP)) VESA BIOS Extensions (VBE), used for enabling standard support for advanced video modes Display Data Channel (DDC), a data link protocol which allows a host device to control an attached display and communicate EDID, DPMS, MCCS and similar messages Extended Display Identification Data (E-EDID), a data format for display identification data Monitor Control Command Set (MCCS), a message protocol for controlling display parameters such as brightness, contrast, display orientation from the host device DisplayID, display identification data format, which is a replacement for E-EDID VESA Display Power Management Signaling (DPMS), which allows monitors to be queried on the types of power saving modes they support Digital Packet Video Link (DPVL), a display link stand
https://en.wikipedia.org/wiki/Cervical%20canal
The cervical canal is the spindle-shaped, flattened canal of the cervix, the neck of the uterus. Anatomy The cervical canal communicates with the uterine cavity via the internal orifice of the uterus (or internal os) and with the vagina via the external orifice of the uterus (ostium of uterus or external os). The internal orifice of the uterus is an interior narrowing of the uterine cavity. It corresponds to a slight constriction known as the isthmus that can be seen on the surface of the uterus about midway between the apex and base. The external orifice of the uterus is a small, depressed, somewhat circular opening on the rounded extremity of the cervix, opening to the vagina. Through this aperture, the cervical cavity communicates with that of the vagina. The external orifice is bounded by two lips, an anterior and a posterior. The anterior is shorter and thicker, though it projects lower than the posterior because of the slope of the cervix. Normally, both lips are in contact with the posterior vaginal wall. Prior to pregnancy the external orifice has a rounded shape when viewed through the vaginal canal (as through a speculum). Following childbirth, the orifice takes on an appearance more like a transverse slit or is "H-shaped". The wall of the canal presents an anterior and a posterior longitudinal ridge, from each of which proceed a number of small oblique columns, the palmate folds, giving the appearance of branches from the stem of a tree; to this arrangement the name arbor vitae uteri is applied. The folds on the two walls are not exactly opposed, but fit between one another so as to close the cervical canal. Histology The cervical canal is generally lined by "endocervical mucosa" which consists of a single layer of mucinous columnar epithelium. However, after menopause, the functional squamocolumnar junction moves into the cervical canal, and hence the distal part of the cervical canal may be lined by stratified squamous epithelium (conforming to a
https://en.wikipedia.org/wiki/Hepelivirales
Hepelivirales is an order of viruses. Taxonomy The following families are recognized: Alphatetraviridae Benyviridae Hepeviridae Matonaviridae
https://en.wikipedia.org/wiki/Drude%20particle
Drude particles are model oscillators used to simulate the effects of electronic polarizability in the context of a classical molecular mechanics force field. They are inspired by the Drude model of mobile electrons and are used in the computational study of proteins, nucleic acids, and other biomolecules. Classical Drude oscillator Most force fields in current practice represent individual atoms as point particles interacting according to the laws of Newtonian mechanics. To each atom, a single electric charge is assigned that does not change during the course of the simulation. However, such models cannot have induced dipoles or other electronic effects due to a changing local environment. Classical Drude particles are massless virtual sites carrying a partial electric charge, attached to individual atoms via a harmonic spring. The spring constant and relative partial charges on the atom and associated Drude particle determine its response to the local electrostatic field, serving as a proxy for the changing distribution of the electronic charge of the atom or molecule. However, this response is limited to a changing dipole moment. This response is not enough to model interactions in environments with large field gradients, which interact with higher-order moments. Efficiency of simulation The major computational cost of simulating classical Drude oscillators is the calculation of the local electrostatic field and the repositioning of the Drude particle at each step. Traditionally, this repositioning is done self-consistently. This cost can be reduced by assigning a small mass to each Drude particle, applying a Lagrangian transformation and evolving the simulation in the generalised coordinates. This method of simulation has been used to create water models incorporating classical Drude oscillators. Quantum Drude oscillator Since the response of a classical Drude oscillator is limited, it is not enough to model interactions in heterogeneous media with large field gradients.
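The harmonic-spring picture has a simple closed form: at equilibrium the spring force k·d balances the electrostatic force q·E, so the displacement is d = qE/k and the induced dipole is μ = q·d = q²E/k, giving a polarizability α = q²/k. A minimal sketch in arbitrary illustrative units (the q and k values below are assumptions, not parameters of any real force field):

```python
# Linear response of a single classical Drude particle (illustrative units).

def drude_displacement(q, k, E):
    # equilibrium: spring force k*d balances electrostatic force q*E
    return q * E / k

def induced_dipole(q, k, E):
    # mu = q * d = q**2 * E / k, i.e. polarizability alpha = q**2 / k
    return q * drude_displacement(q, k, E)

q, k = -1.0, 500.0        # assumed Drude charge and spring constant
alpha = q ** 2 / k        # 0.002 in these units
E = 10.0
print(induced_dipole(q, k, E), alpha * E)  # equal: the response is linear in E
```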
https://en.wikipedia.org/wiki/The%20Construction%20and%20Principal%20Uses%20of%20Mathematical%20Instruments
The Construction and Principal Uses of Mathematical Instruments is a book by Nicholas Bion, first published in 1709. It was translated into English in 1723 by Edmund Stone. It was described as "the most famous book devoted to instruments" by historian of science David M. Knight. Nicholas Bion Nicholas Bion (1652–1733) was a French instrument maker and author with workshops in Paris. He was king's engineer for mathematical instruments. He died in Paris in 1733 aged 81. Bibliography Bion is the author of the following: L'usage des Globes Célestes et Terrestres et des sphères suivant les differents systèmes du Monde (Amsterdam, 1700) Usage des Astrolabes Traité de la construction et des principaux usages des instrumens de mathématique (Paris, 1709)
https://en.wikipedia.org/wiki/Idiopathic%20CD4%2B%20lymphocytopenia
Idiopathic CD4+ lymphocytopenia (ICL) is a rare medical syndrome in which the body has too few CD4+ T lymphocytes, which are a kind of white blood cell. ICL is sometimes characterized as "HIV-negative AIDS", though, in fact, its clinical presentation differs somewhat from that seen with HIV/AIDS. People with ICL have a weakened immune system and are susceptible to opportunistic infections, although the rate of infections is lower than in people with AIDS. Cause The cause of ICL, like all idiopathic conditions, is unknown. It does not appear to be caused by a transmissible agent, such as a virus. It is widely believed that there is more than one cause. Pathophysiology The loss of CD4+ T cells appears to be through apoptosis. The accelerated death of the T cells is likely driven by crosslinking of T cell receptors. Diagnosis The mandatory criteria for diagnosis of idiopathic CD4+ lymphocytopenia include: Low numbers of CD4+ cells, on two or more measurements over at least six weeks: CD4 cell count less than 300 cells per microliter, or Less than 20% of T lymphocytes are CD4+ Laboratory evidence of lack of HIV infection Absence of any alternative explanation for the CD4 lymphocytopenia A one-time finding of low CD4+ cells is usually associated with a recent infection and resolves on its own. Alternative explanations for the low CD4 counts include conditions such as blood cancers (leukemia), treatment with chemotherapy, immunosuppressive medications or other medications that suppress or kill T cells, infections, and problems with blood production. All criteria must be fulfilled for a diagnosis of ICL. In addition, if these findings are present but combined with other significant findings, such as anemia or thrombocytopenia, then other diagnoses must be considered. Treatment Fludarabine-based hematopoietic stem cell transplantation (HSCT) has been shown to be a feasible treatment for ICL. Prognosis In contrast to the CD4+ cell depletion caused by HIV, in gene
https://en.wikipedia.org/wiki/Joseph%20L.%20Doob
Joseph Leo Doob (February 27, 1910 – June 7, 2004) was an American mathematician, specializing in analysis and probability theory. The theory of martingales was developed by Doob. Early life and education Doob was born in Cincinnati, Ohio, February 27, 1910, the son of a Jewish couple, Leo Doob and Mollie Doerfler Doob. The family moved to New York City before he was three years old. The parents felt that he was underachieving in grade school and placed him in the Ethical Culture School, from which he graduated in 1926. He then went on to Harvard where he received a BA in 1930, an MA in 1931, and a PhD (Boundary Values of Analytic Functions, advisor Joseph L. Walsh) in 1932. After postdoctoral research at Columbia and Princeton, he joined the department of mathematics of the University of Illinois in 1935 and served until his retirement in 1978. He was a member of the Urbana campus's Center for Advanced Study from its beginning in 1959. During the Second World War, he worked in Washington, D.C., and Guam as a civilian consultant to the Navy from 1942 to 1945; he was at the Institute for Advanced Study for the academic year 1941–1942 when Oswald Veblen approached him to work on mine warfare for the Navy. Work Doob's thesis was on boundary values of analytic functions. He published two papers based on this thesis, which appeared in 1932 and 1933 in the Transactions of the American Mathematical Society. Doob returned to this subject many years later when he proved a probabilistic version of Fatou's boundary limit theorem for harmonic functions. The Great Depression of 1929 was still going strong in the thirties and Doob could not find a job. B.O. Koopman at Columbia University suggested that statistician Harold Hotelling might have a grant that would permit Doob to work with him. Hotelling did, so the Depression led Doob to probability. In 1933 Kolmogorov provided the first axiomatic foundation for the theory of probability. Thus a subject that had originated fro
https://en.wikipedia.org/wiki/Triposo
Triposo was a social travel site and mobile app that used algorithms for journey planning. The mobile app showed the user recommendations on where to go based on information they had given to the app, including Facebook details. The app worked without an internet connection; it downloaded information before departure. History Triposo was created in 2011 by ex-Google Dutch brothers Richard Osinga and Douwe Osinga with the help of Jon Tirsen. During development, Triposo received $3.5 million in a Series A round. By 2015, the app had been downloaded 10 million times. Triposo was acquired by Musement in October 2017. The Triposo app was no longer available for new users by 2021. It was shut down completely on March 1, 2023, and all personal data was permanently deleted.
https://en.wikipedia.org/wiki/PEGASE
PEGASE is a design for a space observatory developed by France in the early 2000s. It combined formation flying with infrared telescopes operating as a double-aperture interferometer. Three free-flying satellites would operate together: a beam combiner and two siderostats. The baseline of the interferometer would be adjustable between 50 and 500 meters. The goal of the mission was the study of hot Jupiters ("pegasids"), brown dwarfs, and the interiors of protoplanetary disks. The design was developed by the Centre National d'Études Spatiales and was studied for a launch as early as 2010–2012. However, the Phase-0 part of the study in 2005 suggested it would take 8 or 9 years to develop. See also List of proposed space observatories
https://en.wikipedia.org/wiki/Adler-32
Adler-32 is a checksum algorithm written by Mark Adler in 1995, modifying Fletcher's checksum. Compared to a cyclic redundancy check of the same length, it trades reliability for speed. Adler-32 is more reliable than Fletcher-16, and slightly less reliable than Fletcher-32. History The Adler-32 checksum is part of the widely used zlib compression library, as both were developed by Mark Adler. A "rolling checksum" version of Adler-32 is used in the rsync utility. Calculation An Adler-32 checksum is obtained by calculating two 16-bit checksums A and B and concatenating their bits into a 32-bit integer. A is the sum of all bytes in the stream plus one, and B is the sum of the individual values of A from each step. At the beginning of an Adler-32 run, A is initialized to 1, B to 0. The sums are done modulo 65521 (the largest prime number smaller than 2¹⁶). The bytes are stored in network order (big endian), B occupying the two most significant bytes. The function may be expressed as A = 1 + D1 + D2 + ... + Dn (mod 65521) B = (1 + D1) + (1 + D1 + D2) + ... + (1 + D1 + D2 + ... + Dn) (mod 65521) = n×D1 + (n−1)×D2 + (n−2)×D3 + ... + Dn + n (mod 65521) Adler-32(D) = B × 65536 + A where D is the string of bytes for which the checksum is to be calculated, and n is the length of D. Example The Adler-32 sum of the ASCII string "Wikipedia" would be calculated as follows: A = 920 = 0x398 B = 4582 = 0x11E6 Output = (0x11E6 << 16) + 0x398 = 0x11E60398 = 300286872 The modulo operation had no effect in this example, since none of the values reached 65521. Comparison with the Fletcher checksum The first difference between the two algorithms is that Adler-32 sums are calculated modulo a prime number, whereas Fletcher sums are calculated modulo 2⁴−1, 2⁸−1, or 2¹⁶−1 (depending on the number of bits used), which are all composite numbers. Using a prime number makes it possible for Adler-32 to catch differences in certain combinations of bytes that
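The definition above translates directly into code. A straightforward sketch follows; note that production implementations such as the one in zlib defer the modulo operation across many bytes for speed, which this version does not attempt:

```python
# Adler-32 exactly as defined above: two running 16-bit sums modulo 65521.
MOD = 65521  # largest prime below 2**16

def adler32(data: bytes) -> int:
    a, b = 1, 0                  # A starts at 1, B at 0
    for byte in data:
        a = (a + byte) % MOD     # A = 1 + D1 + D2 + ... (mod 65521)
        b = (b + a) % MOD        # B = sum of successive A values (mod 65521)
    return (b << 16) | a         # B occupies the two most significant bytes

print(hex(adler32(b"Wikipedia")))  # 0x11e60398, matching the worked example
```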
https://en.wikipedia.org/wiki/Cyanobacterial%20morphology
Cyanobacterial morphology refers to the form or shape of cyanobacteria. Cyanobacteria are a large and diverse phylum of bacteria defined by their unique combination of pigments and their ability to perform oxygenic photosynthesis. Cyanobacteria often live in colonial aggregates that can take a multitude of forms. Of particular interest among the many species of cyanobacteria are those that live colonially in elongate hair-like structures, known as trichomes. These filamentous species can contain hundreds to thousands of cells. They often dominate the upper layers of microbial mats found in extreme environments such as hot springs, hypersaline water, deserts and polar regions, as well as being widely distributed in more mundane environments. Many filamentous species are also motile, gliding along their long axis, and displaying photomovement by which a trichome modulates its gliding according to the incident light. The latter has been found to play an important role in guiding the trichomes to optimal lighting conditions, which can either inhibit the cells if the incident light is too weak, or damage the cells if too strong. Overview Cellular functions require a well-organized and coordinated internal structure to operate effectively. Cells need to build, sustain, and sometimes modify their shape, which allows them to rapidly change their behaviour in response to external factors. During different life cycle stages, such as cell growth, cell division or cell differentiation, internal structures must dynamically adapt to the current requirements. In eukaryotes, these manifold tasks are fulfilled by the cytoskeleton: proteinaceous polymers that assemble into stable or dynamic filaments or tubules in vivo and in vitro. The eukaryotic cytoskeleton is historically divided into three classes: the actin filaments (consisting of actin monomers), the microtubules (consisting of tubulin subunits) and the intermediate filaments (IFs), although other cytoskeletal classes hav
https://en.wikipedia.org/wiki/Atomic%20broadcast
In fault-tolerant distributed computing, an atomic broadcast or total order broadcast is a broadcast where all correct processes in a system of multiple processes receive the same set of messages in the same order; that is, the same sequence of messages. The broadcast is termed "atomic" because it either eventually completes correctly at all participants, or all participants abort without side effects. Atomic broadcasts are an important distributed computing primitive. Properties The following properties are usually required from an atomic broadcast protocol: Validity: if a correct participant broadcasts a message, then all correct participants will eventually receive it. Uniform Agreement: if one correct participant receives a message, then all correct participants will eventually receive that message. Uniform Integrity: a message is received by each participant at most once, and only if it was previously broadcast. Uniform Total Order: the messages are totally ordered in the mathematical sense; that is, if any correct participant receives message 1 first and message 2 second, then every other correct participant must receive message 1 before message 2. Rodrigues and Raynal and Schiper et al. define the integrity and validity properties of atomic broadcast slightly differently. Note that total order is not equivalent to FIFO order, which requires that if a process sent message 1 before it sent message 2, then all participants must receive message 1 before receiving message 2. It is also not equivalent to "causal order", where if message 2 "depends on" or "occurs after" message 1 then all participants must receive message 2 after receiving message 1. While a strong and useful condition, total order requires only that all participants receive the messages in the same order, but does not place other constraints on that order. Fault tolerance Designing an algorithm for atomic broadcasts is relatively easy if it can be assumed that computers will not fail.
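One common way to realize total order (by no means the only one) is a fixed sequencer that stamps every message with a sequence number; each process then delivers strictly in stamp order, regardless of arrival order. The toy sketch below assumes the failure-free setting mentioned above and simulates everything in one thread:

```python
# Toy fixed-sequencer total-order broadcast (assumes no process failures).

class Sequencer:
    def __init__(self):
        self.next_seq = 0
    def assign(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, msg

class Process:
    def __init__(self):
        self.delivered = []
        self.pending = {}   # received but not yet deliverable
        self.expected = 0
    def receive(self, seq, msg):
        self.pending[seq] = msg
        # deliver in sequence order, even if messages arrived out of order
        while self.expected in self.pending:
            self.delivered.append(self.pending.pop(self.expected))
            self.expected += 1

seqr = Sequencer()
p1, p2 = Process(), Process()
stamped = [seqr.assign(m) for m in ["a", "b", "c"]]
for s in stamped:              # p1 receives in order
    p1.receive(*s)
for s in reversed(stamped):    # p2 receives out of order
    p2.receive(*s)
print(p1.delivered == p2.delivered)  # True: same sequence at every process
```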
https://en.wikipedia.org/wiki/Hyperbolic%20coordinates
In mathematics, hyperbolic coordinates are a method of locating points in quadrant I of the Cartesian plane, Q = {(x, y) : x > 0, y > 0}. Hyperbolic coordinates take values in the hyperbolic plane defined as HP = {(u, v) : v > 0}. These coordinates in HP are useful for studying logarithmic comparisons of direct proportion in Q and measuring deviations from direct proportion. For (x, y) in Q take u = ln √(x/y) and v = √(xy). The parameter u is the hyperbolic angle to (x, y) and v is the geometric mean of x and y. The inverse mapping is x = v e^u, y = v e^−u. The function from Q to HP is a continuous mapping, but not an analytic function. Alternative quadrant metric Since HP carries the metric space structure of the Poincaré half-plane model of hyperbolic geometry, the bijective correspondence brings this structure to Q. It can be grasped using the notion of hyperbolic motions. Since geodesics in HP are semicircles with centers on the boundary, the geodesics in Q are obtained from the correspondence and turn out to be rays from the origin or petal-shaped curves leaving and re-entering the origin. And the hyperbolic motion of HP given by a left-right shift corresponds to a squeeze mapping applied to Q. Since hyperbolas in Q correspond to lines parallel to the boundary of HP, they are horocycles in the metric geometry of Q. If one only considers the Euclidean topology of the plane and the topology inherited by Q, then the lines bounding Q seem close to Q. Insight from the metric space HP shows that the open set Q has only the origin as boundary when viewed through the correspondence. Indeed, consider rays from the origin in Q, and their images, vertical rays from the boundary R of HP. Any point in HP is an infinite distance from the point p at the foot of the perpendicular to R, but a sequence of points on this perpendicular may tend in the direction of p. The corresponding sequence in Q tends along a ray toward the origin. The old Euclidean boundary of Q is no longer relevant. Applications in physical science Fundamental physical variables are sometimes related
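Under the standard definitions u = ln √(x/y) and v = √(xy), with inverse x = v e^u and y = v e^−u, the change of coordinates is easy to sanity-check numerically. A short sketch:

```python
import math

# Hyperbolic coordinates for a point in quadrant I, and the inverse map.

def to_hyperbolic(x, y):
    u = math.log(math.sqrt(x / y))  # hyperbolic angle
    v = math.sqrt(x * y)            # geometric mean of x and y
    return u, v

def from_hyperbolic(u, v):
    return v * math.exp(u), v * math.exp(-u)

x, y = 8.0, 2.0
u, v = to_hyperbolic(x, y)
print(v)                       # 4.0, the geometric mean of 8 and 2
print(from_hyperbolic(u, v))   # recovers (8.0, 2.0) up to rounding
```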
https://en.wikipedia.org/wiki/Ingress%20router
An ingress router is a label switch router that is a starting point (source) for a given label-switched path (LSP). An ingress router may be an egress router or an intermediate router for any other LSP(s). Hence the role of ingress and egress routers is LSP-specific. Usually, the MPLS label is attached to an IP packet at the ingress router and removed at the egress router, whereas label swapping is performed at the intermediate routers. However, in special cases (such as LSP Hierarchy in RFC 4206, LSP Stitching and MPLS local protection) the ingress router could be pushing a label onto the label stack of an already existing MPLS packet (instead of an IP packet). Note that, although the ingress router is the starting point of an LSP, it may or may not be the source of the underlying IP packets.
https://en.wikipedia.org/wiki/Diskagma
Diskagma ("disc-like fragment") is a genus of problematic fossil from a Paleoproterozoic (2200 million years old) paleosol from South Africa, and significant as the oldest likely eukaryote and earliest evidence for megascopic life on land. Description Diskagma buttonii is a small fossil less than 1 mm in length found within the surface horizon of a vertisol paleosol above the Hekpoort Basalt dated to 2200 million years old. The opacity of the matrix and the size of the fossil meant that its three-dimensional structure required imaging by computer-assisted x-ray tomography using a cyclotron source. The fossils are shaped like an urn with an apical cup, which is filled with filamentous structures whose exact nature is uncertain due to recrystallization of the matrix under greenschist facies metamorphism. The base of these hollow urns is a system of hollow tubes running over the paleosol and connecting the urns into groups. The walls of Diskagma have scattered spiny or tubular extensions. Biological affinities Diskagma buttonii is a problematic fossil that has been named before its biological affinities have been understood. Its size and complexity suggest that it had the degree of cytoskeletal complexity found in eukaryotes, but it predates the other fossil candidate for the oldest eukaryote, Grypania, now known to be 1800 million years old, and at 2200 million years old is much older than molecular clock estimates for eukaryotes of 1600 million years. Another similar fossil is Horodyskia. The size and hollow shape of Diskagma are similar to the living fungus Geosiphon, which is endosymbiotic with the cyanobacterium Nostoc. However, the apical cup and filaments are not seen in modern Geosiphon. Paleoenvironmental significance Diskagma buttonii dates to the Paleoproterozoic Great Oxygenation Event, a time of marked increase in atmospheric oxygenation compared with that of the Archean. If, like the living Geosiphon, the central cavity of Diskagma housed a photosymbion
https://en.wikipedia.org/wiki/Ply%20%28game%20theory%29
In two-or-more-player sequential games, a ply is one turn taken by one of the players. The word is used to clarify what is meant when one might otherwise say "turn". The word "turn" can be a problem since it means different things in different traditions. For example, in standard chess terminology, one move consists of a turn by each player; therefore a ply in chess is a half-move. Thus, after 20 moves in a chess game, 40 plies have been completed: 20 by white and 20 by black. In the game of Go, by contrast, a ply is the normal unit of counting moves; so for example to say that a game is 250 moves long is to imply 250 plies. In poker with n players, the word "street" is used for a full betting round consisting of n plies (each dealt card may sometimes also be called a "street"). For instance, in heads-up Texas hold'em a street consists of 2 plies, with possible plays being check/raise/call/fold: the first by the player at the big blind, and the second by the dealer, who posts the small blind; and there are 4 streets: preflop, flop, turn, and river, the latter 3 corresponding to community cards. The terms "half-street" and "half-street game" are sometimes used to describe, respectively, a single bet in a heads-up game, and a simplified heads-up poker game where only a single player bets. The word "ply" used as a synonym for "layer" goes back to the 15th century. Arthur Samuel first used the term in its game-theoretic sense in his seminal paper on machine learning in checkers in 1959, but with a slightly different meaning: the "ply", in Samuel's terminology, is actually the depth of analysis ("Certain expressions were introduced which we will find useful. These are: Ply, defined as the number of moves ahead, where a ply of two consists of one proposed move by the machine and one anticipated reply by the opponent"). In computing, the concept of a ply is important because one ply corresponds to one level of the game tree. The Deep Blue chess computer which d
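The chess bookkeeping above (one full move equals two plies) and the reason engines measure search depth in plies can be sketched as follows; the branching factor of 30 is an assumed illustrative value, not a measured one:

```python
# Plies vs. chess moves, and game-tree growth per ply.

def plies_from_chess_moves(moves):
    # each full chess move is one turn by white plus one by black
    return 2 * moves

def game_tree_nodes(branching, plies):
    # each added ply multiplies the number of positions to examine by the
    # branching factor, which is why search depth is counted in plies
    return branching ** plies

print(plies_from_chess_moves(20))  # 40, as in the example above
print(game_tree_nodes(30, 4))      # 810000 positions at a depth of 4 plies
```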
https://en.wikipedia.org/wiki/Bulk%20density
In materials science, bulk density, also called apparent density, is a material property defined as the mass of the many particles of the material divided by the bulk volume. Bulk volume is defined as the total volume the particles occupy, including each particle's own volume, inter-particle void volume, and the particles' internal pore volume. Bulk density is useful for materials such as powders, granules, and other "divided" solids, especially used in reference to mineral components (soil, gravel), chemical substances, pharmaceutical ingredients, foodstuff, or any other masses of corpuscular or particulate matter (particles). Bulk density is not the same as the particle density, which is an intrinsic property of the solid and does not include the volume for voids between particles (see: density of non-compact materials). Bulk density is an extrinsic property of a material; it can change depending on how the material is handled. For example, a powder poured into a cylinder will have a particular bulk density; if the cylinder is disturbed, the powder particles will move and usually settle closer together, resulting in a higher bulk density. For this reason, the bulk density of powders is usually reported both as "freely settled" (or "poured") density and "tapped" density (where the tapped density refers to the bulk density of the powder after a specified compaction process, usually involving vibration of the container). Soil The bulk density of soil depends greatly on the mineral make-up of the soil and the degree of compaction. The density of quartz is around 2.65 g/cm³, but the (dry) bulk density of a mineral soil is normally about half that, between 1.0 and 1.6 g/cm³. In contrast, soils rich in soil organic carbon and some friable clays tend to have lower bulk densities due to a combination of the low density of the organic materials themselves and increased porosity. For instance, peat soils have especially low bulk densities. Bulk density of soil is usually determined from a core sampl
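The defining ratio, and the freely-settled versus tapped distinction, amount to the same mass divided by two different bulk volumes. A sketch with illustrative numbers (the 50 g of powder and the two volumes are assumptions):

```python
# Bulk density is mass over bulk volume; tapping the container reduces
# inter-particle void volume, so the same mass occupies less bulk volume.

def bulk_density(mass_g, bulk_volume_cm3):
    return mass_g / bulk_volume_cm3

mass = 50.0                           # g of a hypothetical powder
poured = bulk_density(mass, 100.0)    # freely settled: 0.5 g/cm^3
tapped = bulk_density(mass, 80.0)     # after tapping: 0.625 g/cm^3
print(poured, tapped)                 # tapped density is the higher one
```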
https://en.wikipedia.org/wiki/STED%20microscopy
Stimulated emission depletion (STED) microscopy is one of the techniques that make up super-resolution microscopy. It creates super-resolution images by the selective deactivation of fluorophores, minimizing the area of illumination at the focal point, and thus enhancing the achievable resolution for a given system. It was developed by Stefan W. Hell and Jan Wichmann in 1994, and was first experimentally demonstrated by Hell and Thomas Klar in 1999. Hell was awarded the Nobel Prize in Chemistry in 2014 for its development. In 1986, V.A. Okhonin (Institute of Biophysics, USSR Academy of Sciences, Siberian Branch, Krasnoyarsk) had patented the STED idea. This patent was unknown to Hell and Wichmann in 1994. STED microscopy is one of several types of super-resolution microscopy techniques that have recently been developed to bypass the diffraction limit of light microscopy to increase resolution. STED is a deterministic functional technique that exploits the non-linear response of fluorophores commonly used to label biological samples in order to achieve an improvement in resolution; that is to say, STED allows for images to be taken at resolutions below the diffraction limit. This differs from stochastic functional techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), as these methods use mathematical models to reconstruct a sub-diffraction-limit image from many sets of diffraction-limited images. Background In traditional microscopy, the resolution that can be obtained is limited by the diffraction of light. Ernst Abbe developed an equation to describe this limit: D = λ / (2 NA), where D is the diffraction limit, λ is the wavelength of the light, and NA is the numerical aperture, or the refractive index of the medium multiplied by the sine of the angle of incidence. Written out, NA = n sin α, where n describes the refractive index of the specimen, α measures the solid half-angle from which light is gathered by an objective, λ is
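Abbe's limit in its well-known form, D = λ / (2 NA), gives a feel for the numbers involved. A sketch with assumed typical values (550 nm green light and an NA 1.4 oil-immersion objective; neither value comes from the text above):

```python
# Abbe diffraction limit: D = wavelength / (2 * numerical aperture).

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2.0 * numerical_aperture)

# With 550 nm light and NA = 1.4, conventional optics cannot resolve
# features much closer together than about 196 nm.
print(round(abbe_limit_nm(550, 1.4)))  # 196
```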