https://en.wikipedia.org/wiki/Baikal%20CPU
Baikal CPU was a line of MIPS- and ARM-based microprocessors developed by the fabless design firm Baikal Electronics, a spin-off of the Russian supercomputer company T-Platforms. Design Judging by the information available from online sources, Baikal Electronics selected a different approach from other Russian microprocessor initiatives such as the Elbrus-2SM and Elbrus-8S by MCST and the Multiclet line of chips. The design by Baikal Electronics is based on existing commercial IP cores from Imagination Technologies and ARM Holdings, in contrast to the more innovative approach of Multiclet and of the Elbrus CPU, which has a history dating back to the Elbrus supercomputers of the Soviet Union. Company's history The Baikal Electronics company was established on January 11, 2012, as a subsidiary of T-Platforms and was registered as a public joint-stock company. T-NANO, a future investor in Baikal Electronics, was registered on March 3, 2012, as a joint venture of T-Platforms (50.5%) and the Russian Direct Investment Fund (49.5%). In May 2012 Grigoriy Khrenov joined the company as the CTO. He holds a D.Sc. and was formerly a Deputy Chief Designer at Micron Technology and then an Engineering Director at Cadence Design Systems. In August 2021 TSMC was contracted for chip production, but such production was banned by sanctions adopted in the aftermath of Russia's invasion of Ukraine. In August 2023, Baikal Electronics entered bankruptcy. US sanctions, MIPS architecture chosen On March 8, 2013, the parent company T-Platforms was placed under US sanctions, which suspended the planned licensing agreement with ARM Ltd. Soon after, on March 23, the ownership structure was changed: T-Platforms held 37.17%, and Rusnano, through the Russian Direct Investment Fund, held 62.83%. For the first processor being developed (Baikal-T1 or BE-T1000), the MIPS architecture by Imagination Technologies and a 28 nm process were chosen. On December 31, 2013, US sanctions were lifted and a Technology Licen
https://en.wikipedia.org/wiki/Ocarina%20Networks
Ocarina Networks was a technology company selling a hardware/software solution designed to reduce data footprints with file-aware storage optimization. The company later became a subsidiary of Dell. Its flagship product, the Ocarina Appliance/Reader, released in April 2008, uses patented data compression techniques incorporating such methods as record linkage and context-based lossless data compression. The product includes the hardware-appliance-based compressor, the Ocarina Optimizer (Models 2400, 3400, 4600), and a real-time decompressor, the software-based Ocarina Reader. History Ocarina was founded by Murli Thirumale, formerly a vice-president and general manager at Citrix Systems; Carter George, formerly a vice-president and co-founder of PolyServe (acquired by HP); and Goutham Rao, formerly Chief Technical Officer and Chief Architect for the Advanced Solutions Group of Citrix Systems. Its solution works by identifying redundancy at a global file system level and applying specific algorithms for different data formats, such as algorithms specific to images, text, executables, seismic data, and other "unstructured data". Ocarina's Optimizers work with existing storage systems through standard network protocols such as NFS, or are directly integrated with partner vendors' storage systems. On July 19, 2010, Dell announced its plan to acquire Ocarina Networks. The transaction was completed on July 31, 2010. In late 2010, the original Ocarina Optimizer product family was removed from the market, enabling the Ocarina team to focus on the integration of dedupe and compression into Dell storage products. The most notable examples were the DR family of deduplication appliances, launched in 2012, and the integration of dedupe into Dell's Fluid File System. Technology The company's ECOsystem (Extract, Correlate, Optimize) provided data reduction technology, providing both deduplication and content-aware data compression in a reliable, scalable, policy-based package. ECOsystem consists of 3 pr
https://en.wikipedia.org/wiki/Thermal%20diffusivity
In heat transfer analysis, thermal diffusivity is the thermal conductivity divided by density and specific heat capacity at constant pressure. It is a measure of the rate of heat transfer inside a material. It has units of m²/s. Thermal diffusivity is usually denoted by lowercase alpha (α), but a, h, κ (kappa), K, and D are also used. The formula is: α = k / (ρ c_p), where k is thermal conductivity (W/(m·K)), c_p is specific heat capacity (J/(kg·K)), and ρ is density (kg/m³). Together, ρc_p can be considered the volumetric heat capacity (J/(m³·K)). As seen in the heat equation, one way to view thermal diffusivity is as the ratio of the time derivative of temperature to its curvature, quantifying the rate at which temperature concavity is "smoothed out". Thermal diffusivity is a contrasting measure to thermal effusivity. In a substance with high thermal diffusivity, heat moves rapidly through it because the substance conducts heat quickly relative to its volumetric heat capacity or 'thermal bulk'. Thermal diffusivity is often measured with the flash method. It involves heating a strip or cylindrical sample with a short energy pulse at one end and analyzing the temperature change (reduction in amplitude and phase shift of the pulse) a short distance away. Thermal diffusivity of selected materials and substances See also Heat equation Laser flash analysis Thermophoresis Thermal effusivity Thermal time constant
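The defining ratio α = k / (ρ c_p) is easy to evaluate numerically. A minimal sketch, using illustrative handbook-style property values for liquid water (the numbers are assumptions for the example, not taken from this article):

```python
def thermal_diffusivity(k, rho, cp):
    """Thermal diffusivity alpha = k / (rho * cp), in m^2/s.

    k   -- thermal conductivity, W/(m*K)
    rho -- density, kg/m^3
    cp  -- specific heat capacity at constant pressure, J/(kg*K)
    """
    return k / (rho * cp)

# Illustrative values for liquid water near room temperature (assumed):
alpha_water = thermal_diffusivity(k=0.6, rho=1000.0, cp=4186.0)
# alpha_water is on the order of 1.4e-7 m^2/s
```

Note that the denominator ρ c_p is exactly the volumetric heat capacity mentioned above, so a material conducts its "thermal bulk" away quickly when k is large relative to ρ c_p.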
https://en.wikipedia.org/wiki/Medical%20examiner
The medical examiner is an appointed official in some American jurisdictions, trained in pathology, who investigates deaths that occur under unusual or suspicious circumstances, performs post-mortem examinations, and in some jurisdictions initiates inquests. In the US, there are two death investigation systems: the coroner system, based on English law, and the medical examiner system, which evolved from the coroner system during the latter half of the 19th century. The type of system varies from municipality to municipality and from state to state, with over 2,000 separate jurisdictions for investigating unnatural deaths. In 2002, 22 states had a medical examiner system, 11 states had a coroner system, and 18 states had a mixed system. Since the 1940s, the medical examiner system has gradually replaced the coroner system, and it now serves about 48% of the US population. The coroner is not necessarily a medical doctor; the office may be held by a lawyer or even a layperson. In the 19th century, the public became dissatisfied with lay coroners and demanded that the coroner be replaced by a physician. In 1918, New York City introduced the office of the Chief Medical Examiner and appointed physicians experienced in the field of pathology. In 1959, the medical subspecialty of forensic pathology was formally certified. The types of death reportable to the system are determined by federal, state or local laws. Commonly, these include violent, suspicious, sudden, and unexpected deaths; deaths where no physician or practitioner had recently treated the deceased; deaths of inmates in public institutions or persons in the custody of law enforcement; deaths during or immediately following therapeutic or diagnostic procedures; and deaths due to neglect. Duties A medical examiner's duties vary by location, but typically include: examining the body and organs such as the stomach, liver, and brain; determining cause of death; studying tissue, organs, cells, and bodily fluids; issuing death certificates; maintainin
https://en.wikipedia.org/wiki/Strain%20rate
In mechanics and materials science, strain rate is the time derivative of strain of a material. Strain rate has dimension of inverse time and SI units of inverse second, s−1 (or its multiples). The strain rate at some point within the material measures the rate at which the distances of adjacent parcels of the material change with time in the neighborhood of that point. It comprises both the rate at which the material is expanding or shrinking (expansion rate), and also the rate at which it is being deformed by progressive shearing without changing its volume (shear rate). It is zero if these distances do not change, as happens when all particles in some region are moving with the same velocity (same speed and direction) and/or rotating with the same angular velocity, as if that part of the medium were a rigid body. The strain rate is a concept of materials science and continuum mechanics that plays an essential role in the physics of fluids and deformable solids. In an isotropic Newtonian fluid, in particular, the viscous stress is a linear function of the rate of strain, defined by two coefficients, one relating to the expansion rate (the bulk viscosity coefficient) and one relating to the shear rate (the "ordinary" viscosity coefficient). In solids, higher strain rates can often cause normally ductile materials to fail in a brittle manner. Definition The definition of strain rate was first introduced in 1867 by American metallurgist Jade LeCocq, who defined it as "the rate at which strain occurs. It is the time rate of change of strain." In physics the strain rate is generally defined as the derivative of the strain with respect to time. Its precise definition depends on how strain is measured. The strain is the ratio of two lengths, so it is a dimensionless quantity (a number that does not depend on the choice of measurement units). Thus, strain rate has dimension of inverse time and units of inverse second, s−1 (or its multiples). Simple deformations I
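For the simple case of uniaxial stretching, the engineering strain rate can be estimated from sampled specimen lengths by differencing the strain ε(t) = (L(t) − L₀)/L₀ with respect to time. A minimal sketch (the sample values are invented for illustration):

```python
def strain_rate(times, lengths):
    """Finite-difference estimate of engineering strain rate, in 1/s.

    Strain is measured relative to the initial length lengths[0];
    returns one rate value per interval between consecutive samples.
    """
    L0 = lengths[0]
    strains = [(L - L0) / L0 for L in lengths]
    return [
        (strains[i + 1] - strains[i]) / (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    ]

# A bar stretching uniformly at about 0.001 per second (illustrative data):
times = [0.0, 1.0, 2.0, 3.0]          # s
lengths = [100.0, 100.1, 100.2, 100.3]  # mm
rates = strain_rate(times, lengths)     # each entry is about 1e-3 s^-1
```

Since strain is dimensionless, the result carries units of inverse seconds, matching the s⁻¹ dimension stated above.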
https://en.wikipedia.org/wiki/Egon%20B%C3%B6rger
Egon Börger (born 13 May 1946) is a German-born computer scientist based in Italy. Life and work Börger was born in Bad Laer, Lower Saxony, Germany. Between 1965 and 1971 he studied at the Sorbonne, Paris (France), the Université Catholique de Louvain and the Institut Supérieur de Philosophie de Louvain, and the University of Münster (Germany). Between 1972 and 1976 he was at the Università di Salerno in Italy, where he taught the first courses in the newly founded computer science degree program. Since 1985 he has held a chair in computer science at the University of Pisa, Italy. Since September 2010 he has been an elected member of the Academia Europaea. Egon Börger is a pioneer of applying logical methods in computer science. He is co-founder of the international conference series CSL. He is also one of the founders of the Abstract State Machines (ASM) formal method for accurate and controlled design and analysis of computer-based systems, and co-founder of the series of international ASM workshops, which in 2008 merged with the regular meetings of the B and Z user groups to form the international ABZ conference. Börger contributed to the theoretical foundations of the method and initiated its industrial applications in a variety of fields, in particular programming languages, system architecture, requirements and software (re-)engineering, control systems, protocols, and web services. To date, he is one of the leading scientists in ASM-based modeling and verification technology, which he has crucially shaped through his activities. In 2007 he received the Humboldt Research Award. Festschrifts were produced for Börger's 60th and 75th birthdays. Selected publications Egon Börger and Robert Stärk, Abstract State Machines: A Method for High-Level System Design and Analysis, Springer-Verlag, 2003. () Egon Börger, Computability, Complexity, Logic (North-Holland, Amsterdam 1989; translated from the German original of 1985; Italian translation Bollati Boringhieri 1989) Egon Börge
https://en.wikipedia.org/wiki//dev/full
In Linux, FreeBSD, and NetBSD, /dev/full, or the always-full device, is a special file that always returns the error code ENOSPC (meaning "No space left on device") on writing, and provides an infinite number of zero bytes to any process that reads from it (similar to /dev/zero). This device is usually used when testing the behaviour of a program when it encounters a "disk full" error. $ echo "Hello world" > /dev/full bash: echo: write error: No space left on device History Support for the always-full device in Linux is documented as early as 2007. Native support was added to FreeBSD in the 11.0 release in 2016, which had previously supported it through an optional module called lindev. The full device appeared in NetBSD 8. See also Fault injection in 9front
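The "disk full" behaviour can be exercised programmatically as well as from the shell. A small Python sketch (POSIX-only; on systems without /dev/full the probe simply reports the device as absent):

```python
import errno
import os

def write_probe(path, data=b"hello"):
    """Try to write data to path.

    Returns None on success, the errno on write failure,
    or the string 'absent' if the path does not exist.
    """
    if not os.path.exists(path):
        return "absent"
    fd = os.open(path, os.O_WRONLY)
    try:
        os.write(fd, data)
        return None
    except OSError as exc:
        return exc.errno   # for /dev/full this is errno.ENOSPC
    finally:
        os.close(fd)

result = write_probe("/dev/full")
# On Linux, FreeBSD, or NetBSD: result == errno.ENOSPC
```

By contrast, a write to os.devnull always succeeds, which makes the pair useful for testing both branches of a program's error handling.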
https://en.wikipedia.org/wiki/Radiation%20stress
In fluid dynamics, the radiation stress is the depth-integrated – and thereafter phase-averaged – excess momentum flux caused by the presence of the surface gravity waves, which is exerted on the mean flow. The radiation stresses behave as a second-order tensor. The radiation stress tensor describes the additional forcing due to the presence of the waves, which changes the mean depth-integrated horizontal momentum in the fluid layer. As a result, varying radiation stresses induce changes in the mean surface elevation (wave setup) and the mean flow (wave-induced currents). For the mean energy density in the oscillatory part of the fluid motion, the radiation stress tensor is important for its dynamics, in case of an inhomogeneous mean-flow field. The radiation stress tensor, as well as several of its implications on the physics of surface gravity waves and mean flows, were formulated in a series of papers by Longuet-Higgins and Stewart in 1960–1964. Radiation stress derives its name from the analogous effect of radiation pressure for electromagnetic radiation. Physical significance The radiation stress – mean excess momentum-flux due to the presence of the waves – plays an important role in the explanation and modeling of various coastal processes: Wave setup and setdown – the radiation stress consists in part of a radiation pressure, exerted at the free surface elevation of the mean flow. If the radiation stress varies spatially, as it does in the surf zone where the wave height reduces by wave breaking, this results in changes of the mean surface elevation called wave setup (in case of an increased level) and setdown (for a decreased water level); Wave-driven current, especially a longshore current in the surf zone – for oblique incidence of waves on a beach, the reduction in wave height inside the surf zone (by breaking) introduces a variation of the shear-stress component Sxy of the radiation stress over the width of the surf zone. This provides the forcin
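For linear waves propagating in the x-direction, the components of the radiation stress tensor take a classic form due to Longuet-Higgins and Stewart, stated here as a standard-result sketch rather than derived from this excerpt (the symbols are conventional assumptions: E the mean wave energy density, c_g the group velocity, c the phase velocity, H the wave height, ρ the water density, g gravity):

```latex
S_{xx} = \left( \frac{2 c_g}{c} - \frac{1}{2} \right) E, \qquad
S_{yy} = \left( \frac{c_g}{c} - \frac{1}{2} \right) E, \qquad
S_{xy} = S_{yx} = 0, \qquad
E = \tfrac{1}{8} \rho g H^2 .
```

In deep water c_g/c → 1/2, giving S_xx = E/2 and S_yy = 0; in shallow water c_g/c → 1, giving S_xx = 3E/2 and S_yy = E/2, which is why the spatial decay of H across the surf zone produces the setup and longshore forcing described above.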
https://en.wikipedia.org/wiki/The%20Yogi
The Yogi (German: Der Yoghi) is a 1916 German silent drama film directed by Rochus Gliese and Paul Wegener and starring Wegener and Lyda Salmonova. Wegener plays a double role as an inventor and an Indian mystic. It was shot at the Tempelhof Studios in Berlin. The film's sets were designed by Rochus Gliese. Cast Paul Wegener as The Yogi / The Inventor Lyda Salmonova as Myra Hedwig Gutzeit Fritz Huf as the god Shiva (German: Gott Schiwa)
https://en.wikipedia.org/wiki/BioMarin%20Pharmaceutical
BioMarin Pharmaceutical Inc. is an American biotechnology company headquartered in San Rafael, California. It has offices and facilities in the United States, South America, Asia, and Europe. BioMarin's core business and research is in enzyme replacement therapies (ERTs). BioMarin was the first company to provide therapeutics for mucopolysaccharidosis type I (MPS I), by manufacturing laronidase (Aldurazyme, commercialized by Genzyme Corporation). BioMarin was also the first company to provide therapeutics for phenylketonuria (PKU). Over the years, BioMarin has been criticised for drug pricing and for specific instances of denying access to drugs in clinical trials. History BioMarin was founded in 1997 by Christopher Starr, Ph.D., and Grant W. Denison Jr. with an investment of $1.5 million from Glyko Biomedical, and went public in 1999. Seed investors included, among others, MPM Bioventures, the Grosvenor Fund, and Florian Schönharting. Business development In 2002, BioMarin acquired Glyko Biomedical. In 2009, BioMarin acquired Huxley Pharmaceuticals, Inc. (Huxley), which had rights to a proprietary form of 3,4-diaminopyridine (3,4-DAP), amifampridine phosphate. In 2010, BioMarin was granted marketing approval by the European Commission for 3,4-diaminopyridine (3,4-DAP), amifampridine phosphate, for the treatment of the rare autoimmune disease Lambert–Eaton myasthenic syndrome (LEMS). BioMarin launched the product under the name Firdapse. In 2010, BioMarin acquired LEAD Therapeutics, Inc. (LEAD), a small private drug discovery and early stage development company with key compound LT-673, an orally available poly (ADP-ribose) polymerase (PARP) inhibitor studied for the treatment of patients with rare, genetically defined cancers. This acquisition was followed by the purchase of ZyStor Therapeutics, Inc. (ZyStor), a privately held biotechnology company developing ERTs for the treatment of lysosomal storage disorders, and its lead product candidate, ZC-701, a fusion of
https://en.wikipedia.org/wiki/Frama-C
Frama-C stands for Framework for Modular Analysis of C programs. Frama-C is a set of interoperable program analyzers for C programs. Frama-C has been developed by the French Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA-List) and Inria. It has also received funding from the Core Infrastructure Initiative. Frama-C, as a static analyzer, inspects programs without executing them. Despite its name, the software is not related to the French project Framasoft. Architecture Frama-C has a modular plugin architecture comparable to that of Eclipse or GIMP. Frama-C relies on CIL (C Intermediate Language) to generate an abstract syntax tree. The abstract syntax tree supports annotations written in the ANSI/ISO C Specification Language (ACSL). Several modules can manipulate the abstract syntax tree to add ACSL annotations. Among the frequently used plugins are: Value analysis, which computes a value or a set of possible values for each variable in a program. This plugin uses abstract interpretation techniques, and many other plugins make use of its results. Jessie, which verifies properties in a deductive manner. Jessie relies on the Why or Why3 back-end to enable proof obligations to be sent to automatic theorem provers like Z3, Simplify, and Alt-Ergo, or to interactive theorem provers like Coq or Why. Using Jessie, an implementation of bubble sort or a toy e-voting system can be proved to satisfy their respective specifications. It uses a separation memory model inspired by separation logic. WP (Weakest Precondition), which, similarly to Jessie, verifies properties in a deductive manner. Unlike Jessie, it focuses on parameterization with regard to the memory model. WP is designed to cooperate with other Frama-C plugins such as the value analysis plugin, unlike Jessie, which compiles the C program directly into the Why language. WP can optionally use the Why3 platform to invoke many other automated and interactive provers. Impact analy
https://en.wikipedia.org/wiki/Second%20wind
Second wind is a phenomenon in endurance sports, such as marathons or road running (as well as other sports), whereby an athlete who is out of breath and too tired to continue (known as "hitting the wall") finds the strength to press on at top performance with less exertion. The feeling may be similar to that of a "runner's high", the most obvious difference being that the runner's high occurs after the race is over. In muscle glycogenoses (muscle GSDs), an inborn error of carbohydrate metabolism impairs either the formation or utilization of muscle glycogen. As such, those with muscle glycogenoses do not need to do prolonged exercise to experience "hitting the wall". Instead, signs of exercise intolerance, such as an inappropriately rapid heart rate response to exercise, are experienced from the beginning of an activity, and people with some muscle GSDs can achieve a second wind within about 10 minutes of the beginning of aerobic activity, such as walking. (See below in pathology). In experienced athletes, "hitting the wall" is conventionally believed to be due to the body's glycogen stores being depleted, with "second wind" occurring when fatty acids become the predominant source of energy. The delay between "hitting the wall" and the "second wind" has to do with the slow speed at which fatty acids come to produce sufficient ATP (energy): fatty acids take approximately 10 minutes, whereas muscle glycogen is considerably faster at about 30 seconds. Some scientists believe the second wind to be a result of the body finding the proper balance of oxygen to counteract the buildup of lactic acid in the muscles. Others claim second winds are due to endorphin production. Heavy breathing during exercise also provides cooling for the body. After some time the veins and capillaries dilate and cooling takes place more through the skin, so less heavy breathing is needed. The increase in the temperature of the skin can be felt at the same time as the "second wind" takes place.
https://en.wikipedia.org/wiki/Pan-Slavic%20colors
The pan-Slavic colors—blue, white and red—were defined by the Prague Slavic Congress, 1848, based on the symbolism of the colors of the flag of Russia, which was introduced in the late 17th century. Historically, many Slavic nations and states adopted flags and other national symbols that used some combination of those three colors. Slavic countries that use or have used the colors include: Russia, Yugoslavia, Czechoslovakia, Czech Republic, Montenegro, Slovakia, Croatia, Serbia and Slovenia. On the other hand, Belarus, Bulgaria, North Macedonia, Poland and Ukraine have never adopted all the colors. Yugoslavia, both the Kingdom (Kingdom of Yugoslavia, 1918–1943) and the Republic (SFR Yugoslavia, 1943–1992), was a union of several Slavic nations, and therefore not only sported the pan-Slavic colors but adopted the pan-Slavic flag as its own (later adding a red star). After the initial breakup of Yugoslavia in the early 1990s, two of the remaining Yugoslav republics—Montenegro and Serbia—reconstituted as the Federal Republic of Yugoslavia in 1992 and as the State Union of Serbia and Montenegro in 2003, and continued to use the pan-Slavic flag until its own dissolution when Montenegro proclaimed independence in 2006. Serbia continues to use a flag with all three pan-Slavic colors, along with fellow republics Croatia and Slovenia. Most flags with pan-Slavic colors have been introduced and recognized by Slavic nations following the first Slavic Congress of 1848, although Serbia adopted its red-blue-white tricolor in 1835 and the ethnic flag of the Sorbs (blue-red-white) had already been designed in 1842. Czech Moravians proclaimed their flag (white-red-blue) at the very congress. In 1848, Croatian viceroy Josip Jelačić first designed the flag of Croatia with its modern tricolor (red-white-blue) for the then-proposed Triune Kingdom (it was officially adopted by the Kingdom of Croatia), and a group of Slovenian intellectuals in Vienna, Austria created the flag of Slovenia (white-blue-red)
https://en.wikipedia.org/wiki/PortMedia
PortMedia, formerly PortMusic, is a set of open source computer libraries for dealing with sound and MIDI. Currently the project has two main libraries: PortAudio, for digital audio input and output, and PortMidi, a library for MIDI input and output. A library for dealing with different audio file formats, PortSoundFile, is being planned, although another library, libsndfile, already exists and is licensed under the copyleft GNU Lesser General Public License. A standard MIDI file I/O library, PortSMF, is under construction. PortMusic has become PortMedia and is hosted on SourceForge. See also List of free software for audio External links PortMusic website Audio libraries Computer libraries Free audio software
https://en.wikipedia.org/wiki/Screw%20theory
Screw theory is the algebraic calculation of pairs of vectors, such as forces and moments or angular and linear velocity, that arise in the kinematics and dynamics of rigid bodies. The mathematical framework was developed by Sir Robert Stawell Ball in 1876 for application in kinematics and statics of mechanisms (rigid body mechanics). Screw theory provides a mathematical formulation for the geometry of lines which is central to rigid body dynamics, where lines form the screw axes of spatial movement and the lines of action of forces. The pair of vectors that form the Plücker coordinates of a line define a unit screw, and general screws are obtained by multiplication by a pair of real numbers and addition of vectors. An important result of screw theory is that geometric calculations for points using vectors have parallel geometric calculations for lines obtained by replacing vectors with screws. This is termed the transfer principle. Screw theory has become an important tool in robot mechanics, mechanical design, computational geometry and multibody dynamics. This is in part because of the relationship between screws and dual quaternions which have been used to interpolate rigid-body motions. Based on screw theory, an efficient approach has also been developed for the type synthesis of parallel mechanisms (parallel manipulators or parallel robots). Fundamental theorems include Poinsot's theorem (Louis Poinsot, 1806) and Chasles' theorem (Michel Chasles, 1832). Felix Klein saw screw theory as an application of elliptic geometry and his Erlangen Program. He also worked out elliptic geometry, and a fresh view of Euclidean geometry, with the Cayley–Klein metric. The use of a symmetric matrix for a von Staudt conic and metric, applied to screws, has been described by Harvey Lipkin. Other prominent contributors include Julius Plücker, W. K. Clifford, F. M. Dimentberg, Kenneth H. Hunt, J. R. Phillips. Basic concepts A spatial displacement of a rigid body can be d
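The construction of a screw from Plücker line coordinates mentioned above can be written compactly; the notation here is a standard convention assumed for illustration, not quoted from this excerpt. A line through a point \(\mathbf{p}\) with unit direction \(\mathbf{d}\) has Plücker coordinates \((\mathbf{d},\, \mathbf{p}\times\mathbf{d})\), and a general screw of pitch \(h\) and magnitude \(m\) along that line is

```latex
\mathsf{S} \;=\; m \left( \mathbf{d},\; \mathbf{p} \times \mathbf{d} + h\, \mathbf{d} \right).
```

Setting \(h = 0\) and \(m = 1\) recovers the unit screw of the line itself, matching the statement that general screws are obtained from unit screws by multiplication by real numbers and addition of vectors.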
https://en.wikipedia.org/wiki/F-ATPase
F-ATPase, also known as F-type ATPase, is an ATPase/synthase found in bacterial plasma membranes, in mitochondrial inner membranes (in oxidative phosphorylation, where it is known as Complex V), and in chloroplast thylakoid membranes. It uses a proton gradient to drive ATP synthesis, allowing the passive flux of protons across the membrane down their electrochemical gradient and using the energy released by the transport reaction to release newly formed ATP from the active site of F-ATPase. Together with V-ATPases and A-ATPases, F-ATPases belong to the superfamily of related rotary ATPases. F-ATPase consists of two domains: the Fo domain, which is integral to the membrane and is composed of 3 different types of integral proteins, classified as a, b and c; and the F1 domain, which is peripheral (on the side of the membrane that the protons move into). F1 is composed of 5 polypeptide units, α3β3γδε, that bind to the surface of the Fo domain. F-ATPases usually work as ATP synthases rather than ATPases in cellular environments; that is, they usually make ATP from the proton gradient instead of working in the other direction, as V-ATPases typically do. They do occasionally revert to working as ATPases in bacteria. Structure Fo-F1 particles are mainly formed of polypeptides. The F1 particle contains 5 types of polypeptides, with the composition ratio 3α:3β:1δ:1γ:1ε. The Fo has the 1a:2b:12c composition. Together they form a rotary motor. As protons bind to the subunits of the Fo domain, they cause parts of it to rotate. This rotation is propagated by a 'camshaft' to the F1 domain. ADP and Pi (inorganic phosphate) bind spontaneously to the three β subunits of the F1 domain, so that every time it goes through a 120° rotation ATP is released (rotational catalysis). The Fo domain sits within the membrane, spanning the phospholipid bilayer, while the F1 domain extends into the cytosol of the cell to facilitate the use of newly synthesized ATP. The Bovine Mitochondrial F1-ATPa
https://en.wikipedia.org/wiki/Yr.no
yr.no is a website and a mobile app for weather forecasting and dissemination of other types of meteorological information, hosted by the Norwegian Broadcasting Corporation in collaboration with the Norwegian Meteorological Institute. The website was launched in September 2007. The word yr means drizzle in Norwegian. Weather models and updating frequency Yr.no generates weather forecasts for millions of places around the world. Its 3-day forecast uses two different weather models with a 2.5 km resolution in Scandinavia and the Norwegian islands, and for other places, the ECMWF's IFS model in its high-resolution configuration (HRES), with a 9 km resolution. For the 10-day forecast, yr.no employs the ECMWF-ENS model with an 18 km resolution for Norwegian territories, and for the rest of the world, IFS-HRES with a 9 km resolution. Outside of Scandinavia, the 3-day forecasts are updated every six hours, and the 10-day forecasts every 12 hours. Information sources In addition to data from the Norwegian Meteorological Institute, yr.no uses open data from various collaborators such as the European Organisation for the Exploitation of Meteorological Satellites, GeoNames, and the Norwegian Polar Institute. It also collects information from different types of private weather stations.
https://en.wikipedia.org/wiki/Hypogeal%20germination
Hypogeal germination (from Ancient Greek ὑπόγειος (hupógeios) 'below ground', from ὑπό (hupó) 'below' and γῆ (gê) 'earth, ground') is a botanical term indicating that the germination of a plant takes place below the ground. An example of a plant with hypogeal germination is the pea (Pisum sativum). The opposite of hypogeal is epigeal (above-ground germination). Germination Hypogeal germination implies that the cotyledons stay below the ground. The epicotyl (the part of the stem above the cotyledon) grows, while the hypocotyl (the part of the stem below the cotyledon) remains the same in length. In this way, the epicotyl pushes the plumule above the ground. Normally, the cotyledon is fleshy and contains many nutrients that are used for germination. Because the cotyledon stays below the ground, it is much less vulnerable to, for example, night frost or grazing. The evolutionary strategy is that the plant produces a relatively low number of seeds, but each seed has a bigger chance of surviving. Plants that show hypogeal germination need relatively few external nutrients to grow, and are therefore more frequent on nutrient-poor soils. The plants also need less sunlight, so they can be found more often in the middle of forests, where there is much competition to reach the sunlight. Plants that show hypogeal germination grow relatively slowly, especially in the first phase. In areas that are regularly flooded, they need more time between floodings to develop. On the other hand, they are more resistant when a flooding takes place. After the slower first phase, the plant develops faster than plants that show epigeal germination. It is possible that within the same genus one species shows hypogeal germination while another species shows epigeal germination. Some genera in which this happens are: Phaseolus: the runner bean (Phaseolus coccineus) shows hypogeal germination, whereas the common bean (Phaseolus vulgaris) shows epigeal germination Lilium: see Lily seed germination types Ar
https://en.wikipedia.org/wiki/Pfsync
pfsync is a computer protocol used to synchronise firewall states between machines running Packet Filter (PF) for high availability. It is used along with CARP to make sure a backup firewall has the same information as the main firewall. When the main machine in the firewall cluster dies, the backup machine is able to accept current connections without loss. See also OpenBSD PF (firewall) CARP Linux-HA Linux Virtual Server
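The pairing of pfsync with CARP described above is typically configured at the interface level. A minimal OpenBSD-style sketch, run identically on master and backup (the interface names, address, and CARP password are placeholder assumptions, not values from this article):

```shell
# Dedicated interface carrying pfsync state updates to the peer
# (placeholder sync interface: em1)
ifconfig pfsync0 syncdev em1 up

# Shared virtual IP managed by CARP; the backup firewall takes over
# this address if the master dies (placeholder values throughout)
ifconfig carp0 create
ifconfig carp0 vhid 1 pass examplepass carpdev em0 \
    inet 192.0.2.1 netmask 255.255.255.0
```

Because pfsync has kept the backup's state table current, connections tracked by PF survive the CARP failover without being reset.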
https://en.wikipedia.org/wiki/Hautus%20lemma
In control theory, and in particular when studying the properties of a linear time-invariant system in state-space form, the Hautus lemma (after Malo L. J. Hautus), also commonly known as the Popov–Belevitch–Hautus test or PBH test, can prove to be a powerful tool. A special case of this result appeared first in 1963 in a paper by Elmer G. Gilbert, and was later expanded to the current PBH test with contributions by Vasile M. Popov in 1966, Vitold Belevitch in 1968, and Malo Hautus in 1969, who emphasized its applicability in proving results for linear time-invariant systems. Statement There exist multiple forms of the lemma: Hautus lemma for controllability: given a square n × n matrix A and an n × m matrix B, the following are equivalent: (1) the pair (A, B) is controllable; (2) for all λ ∈ C it holds that rank[λI − A, B] = n; (3) for all λ ∈ C that are eigenvalues of A it holds that rank[λI − A, B] = n. Hautus lemma for stabilizability: given a square n × n matrix A and an n × m matrix B, the following are equivalent: (1) the pair (A, B) is stabilizable; (2) for all λ ∈ C that are eigenvalues of A and for which Re(λ) ≥ 0, it holds that rank[λI − A, B] = n. Hautus lemma for observability: given a square n × n matrix A and a p × n matrix C, the following are equivalent: (1) the pair (A, C) is observable; (2) for all λ ∈ C it holds that rank[λI − A; C] = n (the matrix formed by stacking λI − A above C); (3) for all λ ∈ C that are eigenvalues of A it holds that rank[λI − A; C] = n. Hautus lemma for detectability: given a square n × n matrix A and a p × n matrix C, the following are equivalent: (1) the pair (A, C) is detectable; (2) for all λ ∈ C that are eigenvalues of A and for which Re(λ) ≥ 0, it holds that rank[λI − A; C] = n.
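The eigenvalue form of the PBH rank condition translates directly into a numerical check. A minimal sketch with NumPy (the matrices are illustrative, and the rank tolerance is an assumption of the sketch, not part of the lemma):

```python
import numpy as np

def is_controllable_hautus(A, B, tol=1e-9):
    """PBH rank test: (A, B) is controllable iff
    rank [lambda*I - A, B] = n for every eigenvalue lambda of A."""
    A = np.asarray(A, dtype=complex)
    B = np.asarray(B, dtype=complex)
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False
    return True

# Double integrator: controllable (illustrative values)
A1, B1 = [[0.0, 1.0], [0.0, 0.0]], [[0.0], [1.0]]
# Two identical decoupled modes with the input reaching only one:
# the rank drops at lambda = 1, so the pair is not controllable
A2, B2 = [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]]
```

The same routine tests observability by applying it to the transposed pair (Aᵀ, Cᵀ), which is the usual duality shortcut.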
https://en.wikipedia.org/wiki/Tone-Lok
Tone-Lok Effects are guitar effects pedals from a (now discontinued) product line, introduced by Ibanez in 1999. In contrast with other guitar pedals, they included a "Lok" feature, engaged for each adjustment by pressing down on its corresponding potentiometer's control knob. Pedals Guitar AP7 Analog Phaser AW7 Autowah CF7 Stereo Chorus/Flanger DE7 Delay/Echo DS7 Distortion FZ7 Fuzz LF7 Lo Fi PH7 Phaser PM7 Phase Modulator SH7 Seventh Heaven SM7 Smashbox TC7 Tri Mode Chorus TS7 Tubescreamer WD7 Weeping Demon WD7JR Weeping Demon Junior Bass PD7 Phat-Hed Bass Overdrive SB7 Synthesizer Bass
https://en.wikipedia.org/wiki/Yan%20Liu%20%28geographer%29
Yan Liu is an Australian geographer known for her use of cellular automata and particle systems to model patterns of urban development. Her work has also involved using electronic transit pass data to compare the transportation patterns of residents of Australian and Chinese cities. Liu has a Ph.D. from the University of Queensland, and is an associate professor of human geography and deputy head of school in the University of Queensland School of Earth and Environmental Sciences. She is the author of the book Modelling urban development with geographical information systems and cellular automata (CRC Press, 2009).
https://en.wikipedia.org/wiki/15%20%28number%29
15 (fifteen) is the natural number following 14 and preceding 16. Mathematics 15 is: The eighth composite number and the sixth semiprime and the first odd and fourth discrete semiprime; its proper divisors are 1, 3, and 5, so the first of the form (3·q), where q is a higher prime. a deficient number, a lucky number, a Bell number (i.e., the number of partitions for a set of size 4), a pentatope number, and a repdigit in binary (1111) and quaternary (33). In hexadecimal, and higher bases, it is represented as F. with an aliquot sum of 9; within an aliquot sequence of three composite numbers (15, 9, 4, 3, 1, 0) descending to the prime 3 in the 3-aliquot tree. the second member of the first cluster of two discrete semiprimes (14, 15); the next such cluster is (21, 22). a triangular number, a hexagonal number, and a centered tetrahedral number. the number of partitions of 7. the smallest number that can be factorized using Shor's quantum algorithm. the magic constant of the unique order-3 normal magic square. the number of supersingular primes. the smallest positive number that can be expressed as the difference of two positive squares in more than one way: 4² − 1² or 8² − 7² (see image). Furthermore, 15's prime factors (3 and 5) form the first twin-prime pair. The first 15 superabundant numbers are the same as the first 15 colossally abundant numbers. In decimal, 15 contains the digits 1 and 5 and is the result of adding together the integers from 1 to 5 (1 + 2 + 3 + 4 + 5 = 15). The only other number with this property (in decimal) is 27. There are 15 truncatable primes that are both right-truncatable and left-truncatable: 2, 3, 5, 7, 23, 37, 53, 73, 313, 317, 373, 797, 3137, 3797, 739397 There are 15 perfect matchings of the complete graph K6 and 15 rooted binary trees with four labeled leaves, both of these being among the types of objects counted by double factorials. With only two exceptions, all prime quadruplets enclose a multiple of 15, with 15 itself being enclosed by t
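Several of the arithmetic claims above are quick to verify directly. A short Python check, written for this article rather than taken from it:

```python
def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

# Proper divisors 1, 3, 5; aliquot sum 9
assert proper_divisors(15) == [1, 3, 5]
assert sum(proper_divisors(15)) == 9

# Triangular number: 1 + 2 + 3 + 4 + 5 = 15
assert sum(range(1, 6)) == 15

# Difference of two positive squares in more than one way
ways = [(a, b) for a in range(2, 16) for b in range(1, a)
        if a * a - b * b == 15]
assert ways == [(4, 1), (8, 7)]    # 4^2 - 1^2 and 8^2 - 7^2

# Number of partitions of 7 is 15
def partitions(n, max_part=None):
    """Count partitions of n into parts no larger than max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

assert partitions(7) == 15
print("all checks passed")
```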
https://en.wikipedia.org/wiki/Perpendicular
In elementary geometry, two geometric objects are perpendicular if their intersection forms right angles (angles that are 90 degrees or π/2 radians wide) at the point of intersection called a foot. The condition of perpendicularity may be represented graphically using the perpendicular symbol, ⟂. Perpendicular intersections can happen between two lines (or two line segments), between a line and a plane, and between two planes. Perpendicularity is one particular instance of the more general mathematical concept of orthogonality; perpendicularity is the orthogonality of classical geometric objects. Thus, in advanced mathematics, the word "perpendicular" is sometimes used to describe much more complicated geometric orthogonality conditions, such as that between a surface and its normal vector. A line is said to be perpendicular to another line if the two lines intersect at a right angle. Explicitly, a first line is perpendicular to a second line if (1) the two lines meet; and (2) at the point of intersection the straight angle on one side of the first line is cut by the second line into two congruent angles. Perpendicularity can be shown to be symmetric, meaning if a first line is perpendicular to a second line, then the second line is also perpendicular to the first. For this reason, we may speak of two lines as being perpendicular (to each other) without specifying an order. A familiar example of perpendicularity is given by the cardinal points of a compass: North, East, South and West (NESW). The line N-S is perpendicular to the line W-E, and each of the angles N-E, E-S, S-W and W-N measures 90°. Perpendicularity easily extends to segments and rays. For example, a line segment is perpendicular to a line segment if, when each is extended in both directions to form an infinite line, these two resulting lines are perpendicular in the sense above. In symbols, AB ⟂ CD means line segment AB is perpendicular to line segment CD. A line is said to be perpendicular to a
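In coordinates, perpendicularity of two lines or segments reduces to a zero dot product of their direction vectors. A small illustrative Python check (the compass vectors below are this sketch's own choice):

```python
def is_perpendicular(u, v, tol=1e-12):
    """Two nonzero direction vectors are perpendicular
    iff their dot product is zero (up to tolerance)."""
    return abs(sum(a * b for a, b in zip(u, v))) < tol

# Compass directions as unit vectors: North (0, 1), East (1, 0)
north, east = (0.0, 1.0), (1.0, 0.0)
assert is_perpendicular(north, east)          # N-S line vs W-E line
assert not is_perpendicular((1.0, 1.0), east) # a 45-degree line is not
print("perpendicularity checks passed")
```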
https://en.wikipedia.org/wiki/Domain-specific%20architecture
A domain-specific architecture (DSA) is a programmable computer architecture specifically tailored to operate very efficiently within the confines of a given application domain. The term is often used in contrast to general-purpose architectures, such as CPUs, that are designed to operate on any computer program. History In conjunction with the semiconductor boom that started in the 1960s, computer architects were tasked with finding new ways to exploit the increasingly large number of transistors available. Moore's Law and Dennard Scaling enabled architects to focus on improving the performance of general-purpose microprocessors on general-purpose programs. These efforts yielded several technological innovations, such as multi-level caches, out-of-order execution, deep instruction pipelines, multithreading, and multiprocessing. The impact of these innovations was measured on generalist benchmarks such as SPEC, and architects were not concerned with the internal structure or specific characteristics of these programs. The end of Dennard Scaling pushed computer architects to switch from a single, very fast processor to several processor cores. Performance improvement could no longer be achieved by simply increasing the operating frequency of a single core. The end of Moore's Law shifted the focus away from general-purpose architectures towards more specialized hardware. Although general-purpose CPUs will likely have a place in any computer system, heterogeneous systems composed of general-purpose and domain-specific components are the most recent trend for achieving high performance. While hardware accelerators and ASICs have been used in very specialized application domains since the inception of the semiconductor industry, they generally implement a specific function with very limited flexibility. In contrast, the shift towards domain-specific architectures aims to achieve a better balance of flexibility and specialization. A notable early example of a dom
https://en.wikipedia.org/wiki/Tethering
Tethering or phone-as-modem (PAM) is the sharing of a mobile device's Internet connection with other connected computers. Connection of a mobile device with other devices can be done over wireless LAN (Wi-Fi), over Bluetooth or by physical connection using a cable, for example through USB. If tethering is done over WLAN, the feature may be branded as a personal hotspot or mobile hotspot, which allows the device to serve as a portable router. Mobile hotspots may be protected by a PIN or password. The Internet-connected mobile device can act as a portable wireless access point and router for devices connected to it. Mobile device's OS support Many mobile devices are equipped with software to offer tethered Internet access. Windows Mobile 6.5, Windows Phone 7, Android (starting from version 2.2), and iOS 3.0 (or later) offer tethering over a Bluetooth PAN or a USB connection. Tethering over Wi-Fi, also known as Personal Hotspot, is available on iOS starting with iOS 4.2.5 (or later) on iPhone 4 or iPad (3rd gen), certain Windows Mobile 6.5 devices like the HTC HD2, Windows Phone 7, 8 and 8.1 devices (varies by manufacturer and model), and certain Android phones (varies widely depending on carrier, manufacturer, and software version). For IPv4 networks, the tethering normally works via NAT on the handset's existing data connection, so from the network point of view, there is just one device with a single IPv4 network address, though it is technically possible to attempt to identify multiple machines. On some mobile network operators, this feature is contractually unavailable by default, and may be activated only by paying to add a tethering package to a data plan or choosing a data plan that includes tethering. This is done primarily because with a computer sharing the network connection, there is typically substantially more network traffic. Some network-provided devices have carrier-specific software that may deny the inbuilt tethering ability normally available
https://en.wikipedia.org/wiki/Morphallaxis
Morphallaxis is the regeneration of specific tissue in a variety of organisms due to loss or death of the existing tissue. The word comes from the Greek allazein (αλλάζειν), which means to change. The classical example of morphallaxis is that of the Cnidarian hydra, where when the animal is severed in two (by actively cutting it with, for example, a surgical knife) the remaining severed sections form two fully functional and independent hydra. The notable feature of morphallaxis is that a large majority of regenerated tissue comes from already-present tissue in the organism. That is, the one severed section of the hydra forms into a smaller version of the original hydra, approximately the same size as the severed section. Hence, there is an "exchange" of tissue. Researchers Wilson and Child showed circa 1930 that if the hydra was pulped and the dissociated tissue passed through a sieve, those cells then put into an aqueous solution would shortly reform into the original organism with all differentiated tissue correctly arranged. Morphallaxis is often contrasted with epimorphosis, which is characterized by a much greater relative degree of cellular proliferation. Although cellular differentiation is active in both processes, in morphallaxis the majority of the regeneration comes from reorganization or exchange, while in epimorphosis the majority of the regeneration comes from cellular differentiation. Thus, the two may be distinguished as a measure of degree. Epimorphosis is the regeneration of a part of an organism by proliferation at the cut surface. For example, in Planaria neoblasts help in regeneration. History The word comes from the Greek allazein, which means to exchange. The biological process was first discovered in hydra by Abraham Trembley, who was considered the father of environmental zoology. Abraham Trembley was doing research on a sample of pond water and examined the lifestyle of hydra. He couldn’t decide if they belonged to the animal or
https://en.wikipedia.org/wiki/Dissociative%20recombination
Dissociative recombination is a chemical process in which a positive polyatomic ion recombines with an electron, and as a result, the neutral molecule dissociates. This reaction is important for interstellar and atmospheric chemistry. On Earth, dissociative recombination rarely occurs naturally, as free electrons react with any molecule (even neutral molecules) they encounter. Even in the best laboratory conditions, dissociative recombination is hard to observe, but it is an important reaction in systems which have large populations of ionized molecules such as atmospheric-pressure plasmas. In astrophysics, dissociative recombination is one of the main mechanisms by which molecules are broken down, and other molecules are formed. The existence of dissociative recombination is possible due to the vacuum of the interstellar medium. A typical example of dissociative recombination in astrophysics is: CH3+ + e- -> CH2 + H See also Astrochemistry Ionization
https://en.wikipedia.org/wiki/G-structure%20on%20a%20manifold
In differential geometry, a G-structure on an n-manifold M, for a given structure group G, is a principal G-subbundle of the tangent frame bundle FM (or GL(M)) of M. The notion of G-structures includes various classical structures that can be defined on manifolds, which in some cases are tensor fields. For example, for the orthogonal group, an O(n)-structure defines a Riemannian metric, and for the special linear group an SL(n,R)-structure is the same as a volume form. For the trivial group, an {e}-structure consists of an absolute parallelism of the manifold. Generalising this idea to arbitrary principal bundles on topological spaces, one can ask if a principal -bundle over a group "comes from" a subgroup of . This is called reduction of the structure group (to ). Several structures on manifolds, such as a complex structure, a symplectic structure, or a Kähler structure, are G-structures with an additional integrability condition. Reduction of the structure group One can ask if a principal -bundle over a group "comes from" a subgroup of . This is called reduction of the structure group (to ), and makes sense for any map , which need not be an inclusion map (despite the terminology). Definition In the following, let be a topological space, topological groups and a group homomorphism . In terms of concrete bundles Given a principal -bundle over , a reduction of the structure group (from to ) is a -bundle and an isomorphism of the associated bundle to the original bundle. In terms of classifying spaces Given a map , where is the classifying space for -bundles, a reduction of the structure group is a map and a homotopy . Properties and examples Reductions of the structure group do not always exist. If they exist, they are usually not essentially unique, since the isomorphism is an important part of the data. As a concrete example, every even-dimensional real vector space is isomorphic to the underlying real space of a complex vector space: it
https://en.wikipedia.org/wiki/Nonlinear%20Dirac%20equation
See Ricci calculus and Van der Waerden notation for the notation. In quantum field theory, the nonlinear Dirac equation is a model of self-interacting Dirac fermions. This model is widely considered in quantum physics as a toy model of self-interacting electrons. The nonlinear Dirac equation appears in the Einstein–Cartan–Sciama–Kibble theory of gravity, which extends general relativity to matter with intrinsic angular momentum (spin). This theory removes a constraint of the symmetry of the affine connection and treats its antisymmetric part, the torsion tensor, as a variable in varying the action. In the resulting field equations, the torsion tensor is a homogeneous, linear function of the spin tensor. The minimal coupling between torsion and Dirac spinors thus generates an axial-axial, spin–spin interaction in fermionic matter, which becomes significant only at extremely high densities. Consequently, the Dirac equation becomes nonlinear (cubic) in the spinor field, which causes fermions to be spatially extended and may remove the ultraviolet divergence in quantum field theory. Models Two common examples are the massive Thirring model and the Soler model. Thirring model The Thirring model was originally formulated as a model in (1 + 1) space-time dimensions and is characterized by the Lagrangian density where is the spinor field, is the Dirac adjoint spinor, (Feynman slash notation is used), is the coupling constant, is the mass, and are the two-dimensional gamma matrices, finally is an index. Soler model The Soler model was originally formulated in (3 + 1) space-time dimensions. It is characterized by the Lagrangian density using the same notations above, except is now the four-gradient operator contracted with the four-dimensional Dirac gamma matrices , so therein . Other models Besides the Soler model, extensive work has been done where nonlinear versions of Dirac’s equation are used to describe purely classical, nonlinear particle-like s
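For concreteness, the two Lagrangian densities described above, in one standard convention from the literature (signs and the normalization of the coupling g/2 vary between sources), are:

```latex
% Massive Thirring model in (1+1) dimensions:
% vector-vector self-interaction
\mathcal{L}_{\text{Thirring}}
  = \overline{\psi}\left(i\slashed{\partial} - m\right)\psi
  - \frac{g}{2}\left(\overline{\psi}\gamma^{\mu}\psi\right)
               \left(\overline{\psi}\gamma_{\mu}\psi\right)

% Soler model in (3+1) dimensions:
% scalar self-interaction, cubic nonlinearity in the field equation
\mathcal{L}_{\text{Soler}}
  = \overline{\psi}\left(i\slashed{\partial} - m\right)\psi
  + \frac{g}{2}\left(\overline{\psi}\psi\right)^{2}
```

In both cases, varying the action gives a Dirac equation with a term cubic in the spinor field, which is the nonlinearity the article refers to.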
https://en.wikipedia.org/wiki/The%20Open%20Ecology%20Journal
The Open Ecology Journal is an open-access peer-reviewed scientific journal covering ecology. It publishes original research articles and reviews. Abstracting and indexing The journal is indexed in: Chemical Abstracts EMBASE Scopus
https://en.wikipedia.org/wiki/Poem%20code
The poem code is a simple, and insecure, cryptographic method which was used during World War II by the British Special Operations Executive (SOE) to communicate with their agents in Nazi-occupied Europe. The method works by the sender and receiver pre-arranging a poem to use. The sender chooses a set number of words at random from the poem and gives each letter in the chosen words a number. The numbers are then used as a key for a transposition cipher to conceal the plaintext of the message. The cipher used was often double transposition. To indicate to the receiver which words had been chosen, an indicator group of letters is sent at the start of the message. Description To encrypt a message, the agent would select words from the poem as the key. Every poem code message commenced with an indicator group of five letters, whose position in the alphabet indicated which five words of an agent's poem would be used to encrypt the message. For instance, suppose the poem is the first stanza of Jabberwocky: ’Twas brillig, and the slithy toves      Did gyre and gimble in the wabe: All mimsy were the borogoves,      And the mome raths outgrabe. We could select the five words THE WABE TOVES TWAS MOME, which are at positions 4, 13, 6, 1, and 21 in the poem, and describe them with the corresponding indicator group DMFAU. The five words are written sequentially, and their letters numbered to create a transposition key to encrypt a message. Numbering proceeds by first numbering the A's in the five words starting with 1, then continuing with the B's, then the C's, and so on; any absent letters are simply skipped. In our example of THE WABE TOVES TWAS MOME, the two A's are numbered 1, 2; the B is numbered 3; there are no C's or D's; the four E's are numbered 4, 5, 6, 7; there are no G's; the H is numbered 8; and so on through the alphabet. This results in a transposition key of 15 8 4, 19 1 3 5, 16 11 18 6 13, 17 20 2 14, 9 12 10 7. This defines a permutation which is us
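The alphabetical numbering described above is mechanical and easy to script. A short Python sketch of the key-generation step (function name is this sketch's own), checked against the worked example:

```python
def transposition_key(phrase):
    """Number the letters of the key phrase alphabetically:
    all A's first (1, 2, ...), then B's, then C's, and so on;
    ties are broken left to right."""
    letters = [c for c in phrase.upper() if c.isalpha()]
    # Sort positions by (letter, position); rank each position.
    order = sorted(range(len(letters)), key=lambda i: (letters[i], i))
    key = [0] * len(letters)
    for rank, i in enumerate(order, start=1):
        key[i] = rank
    return key

print(transposition_key("THE WABE TOVES TWAS MOME"))
# [15, 8, 4, 19, 1, 3, 5, 16, 11, 18, 6, 13, 17, 20, 2, 14, 9, 12, 10, 7]
```

The output matches the transposition key given in the article; the resulting permutation would then drive the (often double) columnar transposition of the plaintext.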
https://en.wikipedia.org/wiki/Copeland%E2%80%93Erd%C5%91s%20constant
The Copeland–Erdős constant is the concatenation of "0." with the base 10 representations of the prime numbers in order. Its value, using the modern definition of prime, is approximately 0.235711131719232931374143… The constant is irrational; this can be proven with Dirichlet's theorem on arithmetic progressions or Bertrand's postulate (Hardy and Wright, p. 113) or Ramaré's theorem that every even integer is a sum of at most six primes. It also follows directly from its normality (see below). By a similar argument, any constant created by concatenating "0." with all primes in an arithmetic progression dn + a, where a is coprime to d and to 10, will be irrational; for example, primes of the form 4n + 1 or 8n + 1. By Dirichlet's theorem, the arithmetic progression dn · 10m + a contains primes for all m, and those primes are also in dn + a, so the concatenated primes contain arbitrarily long sequences of the digit zero. In base 10, the constant is a normal number, a fact proven by Arthur Herbert Copeland and Paul Erdős in 1946 (hence the name of the constant). The constant is given by where pn is the nth prime number. Its continued fraction is [0; 4, 4, 8, 16, 18, 5, 1, …] (). Related constants Copeland and Erdős's proof that their constant is normal relies only on the fact that is strictly increasing and , where is the nth prime number. More generally, if is any strictly increasing sequence of natural numbers such that and is any natural number greater than or equal to 2, then the constant obtained by concatenating "0." with the base- representations of the 's is normal in base . For example, the sequence satisfies these conditions, so the constant 0.003712192634435363748597110122136… is normal in base 10, and 0.003101525354661104… (base 7) is normal in base 7. In any given base b the number which can be written in base b as 0.0110101000101000101…b where the nth digit is 1 if and only if n is prime, is irrational. See also Smarandache–Wellin numbers:
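The digits of the constant follow directly from the definition. A naive Python sketch (trial-division primality, adequate for a few hundred digits; the function name is this sketch's own):

```python
def copeland_erdos_digits(n_digits):
    """Concatenate the decimal representations of successive primes
    and return '0.' followed by the first n_digits digits."""
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    digits = ""
    k = 2
    while len(digits) < n_digits:
        if is_prime(k):
            digits += str(k)
        k += 1
    return "0." + digits[:n_digits]

print(copeland_erdos_digits(24))  # 0.235711131719232931374143
```

The first 24 digits reproduce the approximation quoted in the opening sentence.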
https://en.wikipedia.org/wiki/Frank%E2%80%93Kasper%20phases
Topologically close-packed (TCP) phases, also known as Frank–Kasper (FK) phases, are one of the largest groups of intermetallic compounds, known for their complex crystallographic structure and physical properties. Owing to their combination of periodic and aperiodic structure, some TCP phases belong to the class of quasicrystals. Applications of TCP phases as high-temperature structural and superconducting materials have been highlighted; however, they have not yet been sufficiently investigated for details of their physical properties. Also, their complex and often non-stoichiometric structure makes them good subjects for theoretical calculations. History In 1958, Frederick C. Frank and John S. Kasper, in their original work investigating many complex alloy structures, showed that non-icosahedral environments form an open-end network which they called the major skeleton, and is now identified as the disclination locus. They came up with the methodology to pack asymmetric icosahedra into crystals using other polyhedra with larger coordination numbers. These coordination polyhedra were constructed to maintain topological close packing (TCP). Unit-cell geometries classification Based on the tetrahedral units, FK crystallographic structures are classified into low and high polyhedral groups denoted by their coordination numbers (CN), referring to the number of atoms coordinated to the atom at the center of the polyhedron. Some atoms have an icosahedral structure with low coordination, labeled CN12. Some others have higher coordination numbers of 14, 15 and 16, labeled CN14, CN15, and CN16, respectively. These atoms with higher coordination numbers form uninterrupted networks connected along the directions where the five-fold icosahedral symmetry is replaced by six-fold local symmetry. The sites of 12-coordination are called minor sites and those with more than 12-fold coordination are major sites. Classic FK phases The most common members of a FK-phases family are: A15, Laves phases, σ, μ, M, P, and
https://en.wikipedia.org/wiki/Judgment%20%28mathematical%20logic%29
In mathematical logic, a judgment (or judgement) or assertion is a statement or enunciation in a metalanguage. For example, typical judgments in first-order logic would be that a string is a well-formed formula, or that a proposition is true. Similarly, a judgment may assert the occurrence of a free variable in an expression of the object language, or the provability of a proposition. In general, a judgment may be any inductively definable assertion in the metatheory. Judgments are used in formalizing deduction systems: a logical axiom expresses a judgment, premises of a rule of inference are formed as a sequence of judgments, and their conclusion is a judgment as well (thus, hypotheses and conclusions of proofs are judgments). A characteristic feature of the variants of Hilbert-style deduction systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if we are interested only in the derivability of tautologies, not hypothetical judgments, then we can formalize the Hilbert-style deduction system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided—not even if we want to use them just for proving derivability of tautologies. This basic difference among the various calculi explains why the same basic idea (e.g. the deduction theorem) must be proven as a metatheorem in a Hilbert-style deduction system, while it can be declared explicitly as a rule of inference in natural deduction. In type theory, some analogous notions are used as in mathematical logic (giving rise to connections between the two fields, e.g. Curry–Howard correspondence). The abstraction in the notion of judgment in mathematical logic can be expl
https://en.wikipedia.org/wiki/Halofuginone
Halofuginone, sold under the brand name Halocur, is a coccidiostat used in veterinary medicine. It is a synthetic halogenated derivative of febrifugine, a natural quinazolinone alkaloid which can be found in the Chinese herb Dichroa febrifuga (Chang Shan). Collgard Biopharmaceuticals is developing halofuginone for the treatment of scleroderma and it has received orphan drug designation from the U.S. Food and Drug Administration. Halofuginone inhibits the development of T helper 17 cells, immune cells that play an important role in autoimmune disease, but it does not affect other kinds of T cells which are involved in normal immune function. Halofuginone therefore has potential for the treatment of autoimmune disorders. Halofuginone is also an inhibitor of collagen type I gene expression and as a consequence it may inhibit tumor cell growth. Halofuginone exerts its effects by acting as a high affinity inhibitor of the enzyme glutamyl-prolyl tRNA synthetase. Inhibition of prolyl tRNA charging leads to the accumulation of uncharged prolyl tRNAs, which serve as a signal to initiate the amino acid starvation response, which in turn exerts anti-inflammatory and anti-fibrotic effects.
https://en.wikipedia.org/wiki/Leonhard%20Seppala
Leonhard "Sepp" Seppala (; September 14, 1877 – January 28, 1967) was a Norwegian-Finnish-American sled dog breeder, trainer and musher who with his dogs played a pivotal role in the 1925 serum run to Nome, and participated in the 1932 Winter Olympics. Seppala introduced the work dogs used by Native Siberians at the time to the American public; the breed came to be known as the Siberian Husky in the English-speaking world. The Leonhard Seppala Humanitarian Award, which honors excellence in sled dog care, is named in his honour. Background Seppala was born in Lyngen, Troms og Finnmark, Northern Norway. He was the eldest child of Isak Isaksen Seppälä (born in Sweden of Finnish descent) and Anne Henrikke Henriksdatter. His father's family name is of Finnish origin. Leonhard is considered to have been Kven. When Seppala was two years old, his family moved within Troms county to nearby Skjervøy municipality on the island of Skjervøya. While in Skjervøy, his father worked as a blacksmith and fisherman, building up a relatively large estate. Seppala initially followed in his father's footsteps as both a blacksmith and a fisherman. However, in 1900, he emigrated to Alaska during the Nome gold rush. His friend Jafet Lindeberg had returned from Alaska and convinced Seppala to come to work for his mining company in Nome. He became a naturalized citizen in 1906. During his first winter in Alaska, Seppala became a dogsled driver for Lindeberg's company. He enjoyed the task from his first run, which he recalled clearly for the rest of his life. He expressed pleasure in the rhythmic patter of the dogs' feet and the feeling of the sled gliding along the snow. While most drivers considered a long run, Seppala travelled between and most days. This also meant he worked as long as 12 hours a day. He kept his dogs in form during the summer by having them pull a cart on wheels instead of a sled. It was unusual at that time to keep sled dogs working when the snow thawed, or to
https://en.wikipedia.org/wiki/Excisionase
In molecular biology, excisionase is a bacteriophage protein encoded by the Xis gene. It is involved in excisive recombination by regulating the assembly of the excisive intasome and by inhibiting viral integration. It adopts an unusual winged-helix structure in which two alpha helices are packed against two extended strands. Also present in the structure is a two-stranded anti-parallel beta-sheet, whose strands are connected by a four-residue wing. During interaction with DNA, helix alpha2 is thought to insert into the major groove, while the wing contacts the adjacent minor groove or phosphodiester backbone. The C-terminal region of excisionase is involved in interaction with phage-encoded integrase (Int), and a putative C-terminal alpha helix may fold upon interaction with Int and/or DNA.
https://en.wikipedia.org/wiki/Tetrasodium%20pyrophosphate
Tetrasodium pyrophosphate, also called sodium pyrophosphate, tetrasodium phosphate or TSPP, is an inorganic compound with the formula Na4P2O7. As a salt, it is a white, water-soluble solid. It is composed of pyrophosphate anion and sodium ions. Toxicity is approximately twice that of table salt when ingested orally. Also known is the decahydrate Na4P2O7·10H2O. Use Tetrasodium pyrophosphate is used as a buffering agent, an emulsifier, a dispersing agent, and a thickening agent, and is often used as a food additive. Common foods containing tetrasodium pyrophosphate include chicken nuggets, marshmallows, pudding, crab meat, imitation crab, canned tuna, and soy-based meat alternatives and cat foods and cat treats where it is used as a palatability enhancer. In toothpaste and dental floss, tetrasodium pyrophosphate acts as a tartar control agent, serving to remove calcium and magnesium from saliva and thus preventing them from being deposited on teeth. Tetrasodium pyrophosphate is used in commercial dental rinses before brushing to aid in plaque reduction. Tetrasodium pyrophosphate is sometimes used in household detergents to prevent similar deposition on clothing, but due to its phosphate content it causes eutrophication of water, promoting algae growth. Production Tetrasodium pyrophosphate is produced by the reaction of furnace-grade phosphoric acid with sodium carbonate to form disodium phosphate, which is then heated to 450 °C to form tetrasodium pyrophosphate: 2 Na2HPO4 → Na4P2O7 + H2O
https://en.wikipedia.org/wiki/Tropical%20and%20subtropical%20grasslands%2C%20savannas%2C%20and%20shrublands
Tropical and subtropical grasslands, savannas, and shrublands is a terrestrial biome defined by the World Wide Fund for Nature. The biome is dominated by grass and/or shrubs located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes. Tropical grasslands are mainly found between 5 and 20 degrees of latitude both north and south of the Equator. Description Grasslands are dominated by grasses and other herbaceous plants. Savannas are grasslands with scattered trees. Shrublands are dominated by woody or herbaceous shrubs. Large expanses of land in the tropics do not receive enough rainfall to support extensive tree cover. The tropical and subtropical grasslands, savannas, and shrublands are characterized by rainfall levels between per year. Rainfall can be highly seasonal, with the entire year's rainfall sometimes occurring within a couple of weeks. African savannas occur between forest or woodland regions and grassland regions. Flora includes acacia and baobab trees, grass, and low shrubs. Acacia trees lose their leaves in the dry season to conserve moisture, while the baobab stores water in its trunk for the dry season. Many of these savannas are in Africa. Large mammals that have evolved to take advantage of the ample forage typify the biodiversity associated with these habitats. These large mammal faunas are richest in African savannas and grasslands. The most intact assemblages currently occur in East African Acacia savannas and Zambezian savannas consisting of mosaics of miombo, mopane, and other habitats. Large-scale migrations of tropical savanna herbivores, such as wildebeest (Connochaetes taurinus) and zebra (Equus quagga), are continuing to decline through habitat alteration and hunting. They now only occur to any significant degree in East Africa and the central Zambezian region. Much of the extraordinary abundance of Guinean and Sahelian savannas has been eliminated, although the large-scale migrations of Ugandan Kob still o
https://en.wikipedia.org/wiki/Classification%20of%20discontinuities
Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function. The oscillation of a function at a point quantifies these discontinuities as follows: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides); in an essential discontinuity, oscillation measures the failure of a limit to exist. A special case is if the function diverges to infinity or minus infinity, in which case the oscillation is not defined (in the extended real numbers, this is a removable discontinuity). Classification For each of the following, consider a real valued function of a real variable defined in a neighborhood of the point at which it is discontinuous. Removable discontinuity Consider the piecewise function The point is a removable discontinuity. For this kind of discontinuity: The one-sided limit from the negative direction: and the one-sided limit from the positive direction: at both exist, are finite, and are equal to In other words, since the two one-sided limits exist and are equal, the limit of as approaches exists and is equal to this same value. If the actual value of is not equal to then is called a removable discontinuity. This discontinuity can be removed to make continuous at or more precisely, the function is continuous at The term removable discontinuity is sometimes broadened to include a removable singularity, in which the limits in both directions exist and are equal, while the function is undefined at the point This use is an abuse of terminology b
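The piecewise formula did not survive extraction; a minimal assumed example of a removable discontinuity (the specific function below is illustrative, not necessarily the article's original) can be written as:

```latex
f(x) =
\begin{cases}
  x^{2}   & x < 1, \\
  0       & x = 1, \\
  2 - x   & x > 1,
\end{cases}
\qquad
\lim_{x \to 1^{-}} f(x) = \lim_{x \to 1^{+}} f(x) = 1 \neq f(1) = 0.
```

Both one-sided limits at x0 = 1 exist, are finite, and agree, so redefining f(1) := 1 removes the discontinuity and makes f continuous there.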
https://en.wikipedia.org/wiki/Naphthomycin%20A
Naphthomycin A is a type of naphthomycin. It was isolated as a yellow pigment from Streptomyces collinus and shows antibacterial, antifungal, and antitumor activities. Naphthomycins have the longest ansa aliphatic carbon chain of the ansamycin family. Biosynthetic origins of the carbon skeleton from PKS1 were investigated by feeding 13C-labeled precursors and subsequent 13C-NMR product analysis. Naphthomycin gene clusters have been cloned and sequenced to confirm involvement in biosynthesis via deletion of a 7.2 kb region. Thirty-two genes were identified in the 106 kb cluster.
https://en.wikipedia.org/wiki/McIntosh%20MC-2300
The McIntosh MC-2300 is a solid-state power amplifier which was built by the American high-end audio company McIntosh Laboratory between 1971 and 1980. McIntosh produced the MC-2105 (with blue meters) and the MC-2100 (without) between 1969 and 1977. Both 100 watt-per-channel stereo amps (200 watts monophonic) sold. The MC-2300 was succeeded by the more powerful MC-2500 (500 WPC stereo/1000 watts mono), sold from 1980 to 1990; and then the MC-2600 (600 WPC stereo/1200 watts mono), which was available from 1990 to 1995. Design Physically, the MC-2300 is a very large and sturdy amplifier, measuring 10.5 in (26.7 cm) high x 19 in (48.3 cm) wide x 17 in (43.2 cm) deep, and weighing an impressive 128 lbs (58 kg). During its production run, 4545 units were made. Today, the MC-2300 remains a very sought-after amplifier for audiophiles and collectors. The MC-2300 can be utilized either as a 300-watt-per-channel stereo amp, or a 600-watt monoblock, and was rated by its manufacturer as being able to produce this amount of power continuously, with very little (less than 0.25%) distortion. McIntosh's ratings were conservative, however, because like many of their amplifiers, when bench-tested the MC-2300 has frequently been found to produce an even higher level of clean power. As such, it was ideal for use in demanding, professional applications. During the 1970s, the MC-2300 was an expensive piece of audio equipment, with a retail price of $1799 by the time of its discontinuation in 1980. That being said, its outstanding power and sound production quality made it a valued part of many recording studios and although some people prefer the sound of tube amplifiers, the overall greater reliability and freedom from repair of the newer solid-state amps was a major vote in their favor. History The improvisational rock band the Grateful Dead employed 48 McIntosh MC-2300 amps as the main power source for their enormous public-address system, the Wall of Sound. Designed by Owsley
https://en.wikipedia.org/wiki/Aegyptonycteris
Aegyptonycteris ("Egyptian bat") is a genus of extinct bat from the Late Eocene of North Africa. It is currently known from a single specimen (holotype CGM 83740) from the Birket Qarun Formation in the Fayum Depression in western Egypt. Aegyptonycteris is notable both for its large size, comparable to the larger modern bat species, and for its omnivorous diet, as opposed to the mostly insectivorous diets of other Eocene bats (and the majority of modern species). This makes it a remarkable example of early chiropteran speciation, having not only attained a rather large size but also specialised towards a drastically different ecological niche from its contemporaries. Description Aegyptonycteris is currently known only from its holotype. Said specimen is composed of a right maxilla - including the posterior portion of the orbital floor and base of the zygomatic arch - and two molar teeth. The anterior orbital floor is broad and flat and the zygomatic arch is robust and well developed, characteristics seen in a variety of mammal groups including the contemporary primates, eulipotyphlans and metatherians, but the molars have classical chiropteran traits like dilambdodonty, lack of a mesostyle and a narrow protofossa, though it does differ from most other bats in the presence of a bulbous hypocone. Comparisons to other bat species show that the animal was probably similar in size to the modern Vampyrum spectrum, if not larger. Currently, it is unknown whether it had echolocation, though its omnivorous habits might imply the use of other senses, such as smell, as in modern frugivorous and omnivorous bat species. Classification Aegyptonycteris is recovered as a chiropteran on the basis of several dental characteristics (see above). It is considered rather aberrant, however, and is considered to be a fairly basal species. Ecology Based on its tooth morphology, Aegyptonycteris was most likely a generalistic omnivore. Unlike other contemporary bats such as Witwatia, it lac
https://en.wikipedia.org/wiki/Communication%20complexity
In theoretical computer science, communication complexity studies the amount of communication required to solve a problem when the input to the problem is distributed among two or more parties. The study of communication complexity was first introduced by Andrew Yao in 1979, while studying the problem of computation distributed among several machines. The problem is usually stated as follows: two parties (traditionally called Alice and Bob) each receive a (potentially different) -bit string and . The goal is for Alice to compute the value of a certain function, , that depends on both and , with the least amount of communication between them. While Alice and Bob can always succeed by having Bob send his whole -bit string to Alice (who then computes the function ), the idea here is to find clever ways of calculating with fewer than bits of communication. Note that, unlike in computational complexity theory, communication complexity is not concerned with the amount of computation performed by Alice or Bob, or the size of the memory used, as we generally assume nothing about the computational power of either Alice or Bob. This abstract problem with two parties (called two-party communication complexity), and its general form with more than two parties, is relevant in many contexts. In VLSI circuit design, for example, one seeks to minimize energy used by decreasing the amount of electric signals passed between the different components during a distributed computation. The problem is also relevant in the study of data structures and in the optimization of computer networks. For surveys of the field, see the textbooks by and . Formal definition Let where we assume in the typical case that and . Alice holds an -bit string while Bob holds an -bit string . By communicating to each other one bit at a time (adopting some communication protocol which is agreed upon in advance), Alice and Bob wish to compute the value of such that at least one party knows the val
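The setup above can be made concrete with a small sketch (the function and variable names are illustrative, not from the article). For the equality function EQ(x, y) = 1 iff x = y, the trivial protocol costs n + 1 bits: Bob sends his whole n-bit string and Alice announces the answer with one more bit. Protocols are often analyzed through the communication matrix M[x][y] = f(x, y):

```python
from itertools import product

def eq(x: str, y: str) -> int:
    """EQ(x, y) = 1 iff the two n-bit strings are equal."""
    return int(x == y)

def communication_matrix(f, n):
    """2^n x 2^n matrix M[x][y] = f(x, y); any deterministic protocol
    partitions this matrix into monochromatic combinatorial rectangles."""
    inputs = ["".join(bits) for bits in product("01", repeat=n)]
    return [[f(x, y) for y in inputs] for x in inputs]

def trivial_protocol_cost(n: int) -> int:
    """Bob sends his whole string (n bits); Alice replies with f (1 bit)."""
    return n + 1

n = 3
M = communication_matrix(eq, n)
# EQ's matrix is the identity: 2^n ones on the diagonal, zeros elsewhere.
print(sum(M[i][i] for i in range(2 ** n)))   # 8
print(trivial_protocol_cost(n))              # 4
```

The identity-matrix structure is what makes EQ hard for deterministic protocols: each diagonal 1 needs its own monochromatic rectangle, which forces about n bits of communication.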
https://en.wikipedia.org/wiki/Free%20drift
Free drift mode refers to the state of motion of an object in orbit in which a constant attitude is not actively maintained. When attitude control is suspended, the object is said to be in free drift, relying on its own inertia rather than on active control to determine its orientation. This mode is often engaged purposefully, as it can be useful when modifying, upgrading, or repairing an object in space, such as the International Space Station. Additionally, it allows work on areas near the thrusters on the ISS that are generally used to maintain attitude. While in free drift it is not possible to fully use the solar arrays on the ISS. This can cause a drop in power generation, requiring the conservation of energy, which may affect the many systems that otherwise require a lot of energy. The amount of time that an object such as the ISS can remain safely in free drift varies depending on its moment of inertia, perturbation torques, tidal gradients, etc. The ISS itself can generally remain about 45 minutes in this mode. Notes
https://en.wikipedia.org/wiki/WiBro
WiBro (wireless broadband) is a wireless broadband Internet technology developed by the South Korean telecoms industry. WiBro is the South Korean service name for IEEE 802.16e (mobile WiMAX) international standard. By the end of 2012, the Korean Communications Commission intends to increase WiBro broadband connection speeds to 10Mbit/s, around ten times the 2009 speed, which will complement their 1Gbit/s fibre-optic network. The WiBro networks were shut down at the end of 2018. WiBro adopts TDD for duplexing, OFDMA for multiple access and 8.75/10.00 MHz as a channel bandwidth. WiBro was devised to overcome the data rate limitation of mobile phones (for example CDMA 1x) and to add mobility to broadband Internet access (for example ADSL or Wireless LAN). In February 2002, the Korean government allocated 100 MHz of electromagnetic spectrum in the 2.3–2.4 GHz band, and in late 2004 WiBro Phase 1 was standardized by the TTA of Korea and in late 2005 ITU reflected WiBro as IEEE 802.16e (mobile WiMAX). Two South Korean telecom companies (KT, SKT) launched commercial service in June 2006, and the monthly fees were around US$30. WiBro base stations offer an aggregate data throughput of 30 to 50 Mbit/s per carrier and cover a radius of 1–5 km allowing for the use of portable internet usage. In detail, it provides mobility for moving devices up to 120 km/h (74.5 mi/h) compared to Wireless LAN having mobility up to walking speed and mobile phone technologies having mobility up to 250 km/h. From testing during the APEC Summit in Busan in late 2005, the actual range and bandwidth were quite a bit lower than these numbers. The technology will also offer quality of service. The inclusion of QoS allows for WiBro to stream video content and other loss-sensitive data in a reliable manner. These all appear to be (and may be) the stronger advantages over the fixed WiMAX standard (802.16a). Some Telcos in many countries were trying to commercialize this Mobile WiMAX (or WiBro). For e
https://en.wikipedia.org/wiki/Graviscalar
In theoretical physics, the hypothetical particle called the graviscalar or radion emerges as an excitation of general relativity's metric tensor, i.e. gravitational field, but is indistinguishable from a scalar in four dimensions, as shown in Kaluza–Klein theory. The scalar field comes from a component of the metric tensor where the figure 5 labels an additional fifth dimension. The only variations in the scalar field represent variations in the size of the extra dimension. Also, in models with multiple extra dimensions, there exist several such particles. Moreover, in theories with extended supersymmetry, a graviscalar is usually a superpartner of the graviton that behaves as a particle with spin 0. This concept closely relates to the gauged Higgs models. See also Graviphoton (aka gravivector) Dilaton Kaluza–Klein theory Randall–Sundrum models Goldberger–Wise mechanism
https://en.wikipedia.org/wiki/Arithmetic%20hyperbolic%203-manifold
In mathematics, more precisely in group theory and hyperbolic geometry, Arithmetic Kleinian groups are a special class of Kleinian groups constructed using orders in quaternion algebras. They are particular instances of arithmetic groups. An arithmetic hyperbolic three-manifold is the quotient of hyperbolic space by an arithmetic Kleinian group. Definition and examples Quaternion algebras A quaternion algebra over a field is a four-dimensional central simple -algebra. A quaternion algebra has a basis where and . A quaternion algebra is said to be split over if it is isomorphic as an -algebra to the algebra of matrices ; a quaternion algebra over an algebraically closed field is always split. If is an embedding of into a field we shall denote by the algebra obtained by extending scalars from to where we view as a subfield of via . Arithmetic Kleinian groups A subgroup of is said to be derived from a quaternion algebra if it can be obtained through the following construction. Let be a number field which has exactly two embeddings into whose image is not contained in (one conjugate to the other). Let be a quaternion algebra over such that for any embedding the algebra is isomorphic to the Hamilton quaternions. Next we need an order in . Let be the group of elements in of reduced norm 1 and let be its image in via . We then consider the Kleinian group obtained as the image in of . The main fact about these groups is that they are discrete subgroups and they have finite covolume for the Haar measure on . Moreover, the construction above yields a cocompact subgroup if and only if the algebra is not split over . The discreteness is a rather immediate consequence of the fact that is only split at its complex embeddings. The finiteness of covolume is harder to prove. An arithmetic Kleinian group is any subgroup of which is commensurable to a group derived from a quaternion algebra. It follows immediately from this definition that arit
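The basis relations lost in extraction are the standard ones; as a sketch of the usual presentation (with parameters a, b in F^×, which the stripped text presumably named), a quaternion algebra A over a field F has basis 1, i, j, ij with

```latex
A = \left(\frac{a,\,b}{F}\right), \qquad
i^{2} = a, \quad j^{2} = b, \quad ij = -ji,
```

and A is split over F precisely when A is isomorphic to the matrix algebra M_2(F). For example, (-1, -1 / \mathbb{R}) is the Hamilton quaternions, which are not split over \mathbb{R} but split over \mathbb{C}.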
https://en.wikipedia.org/wiki/1942%20%28video%20game%29
1942 is a vertically scrolling shooter by Capcom that was released as an arcade video game in 1984. Designed by Yoshiki Okamoto, it was the first game in the 194X series, and was followed by 1943: The Battle of Midway. 1942 is set in the Pacific Theater of World War II, and is loosely based on the Battle of Midway. Despite the game being created by Japanese developers, the goal is to reach Tokyo and destroy the Japanese air fleet; this was due to being the first Capcom game designed with Western markets in mind. It went on to be a commercial success in arcades, becoming Japan's fifth highest-grossing table arcade game of 1986 and one of top five highest-grossing arcade conversion kits that year in the United States. It was ported to the Nintendo Entertainment System, selling over copies worldwide, along with other home systems. Gameplay The player pilots a Lockheed P-38 Lightning dubbed the "Super Ace". The player has to shoot down enemy planes; to avoid enemy fire, the player can perform a roll or vertical loop. During the game, the player may collect a series of power-ups, one of them allowing the plane to be escorted by two other smaller fighters in a Tip Tow formation. Enemies included: Kawasaki Ki-61s, Mitsubishi A6M Zeros and Kawasaki Ki-48s. The boss plane is a Nakajima G10N. The game has "a special roll button that allows players to avoid dangerous situations by temporarily looping out of" the playfield. In addition to the standard high score, it also has a separate percentage high score, recording the best ratio of enemy fighters to enemies shot down. Development The game was designed by Yoshiki Okamoto. The game's main goal was to be easily accessible for players. This is why they decided to use a World War II theme. 1942 was also the first Capcom game designed with Western markets in mind. That was why they decided to have the player pilot an American P-38 fighter plane, to appeal to the American market. The game is loosely based on the Battle of
https://en.wikipedia.org/wiki/NtrC
NtrC (Nitrogen regulatory protein C) is the name of the protein necessary for the prokaryotic regulation transcription factor sigma N (sigma 54) to form an open complex with RNA polymerase in order to activate glnA transcription. The closed -> open conformational change of the sigma N-RNA polymerase complex around the glutamine synthetase gene promoter requires ATP and involves the formation of a loop between the enhancer and the promoter regions, which may be facilitated by DNA-bending proteins (such as IHF). The NtrC proteins bind at two sites located -160 and -80 upstream from the point of gene transcription.
https://en.wikipedia.org/wiki/Random%20sequence
The concept of a random sequence is essential in probability theory and statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians". Axiomatic probability theory deliberately avoids a definition of a random sequence. Traditional probability theory does not state if a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The Bourbaki school considered the statement "let us consider a random sequence" an abuse of language. Early history Émile Borel was one of the first mathematicians to formally address randomness in 1909. In 1919 Richard von Mises gave the first definition of algorithmic randomness, which was inspired by the law of large numbers, although he used the term collective rather than random sequence. Using the concept of the impossibility of a gambling system, von Mises defined an infinite sequence of zeros and ones as random if it is not biased by having the frequency stability property i.e. the frequency of zeros goes to 1/2 and every sub-sequence we can select from it by a "proper" method of selection is also not biased. The sub-sequence selection criterion imposed by von Mises is important, because although 0101010101... is not biased, by selecting the odd positions, we get 000000... which is not random. Von Mises never totally formalized his definition of a proper selection rule for sub-sequences, but in 1940 Alonzo Church defined it as any recursive function which having read the first N elements of the sequence decides if it wants to select element number N + 1. Church was a pioneer in the field of c
https://en.wikipedia.org/wiki/Hitachi%20Data%20Systems
Hitachi Data Systems (HDS) was a provider of modular mid-range and high-end computer data storage systems, software and services. Its operations are now a part of Hitachi Vantara. It was a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. In 2017 its operations were merged with Pentaho and Hitachi Insight Group to form Hitachi Vantara. In 2010, Hitachi Data Systems sold through direct and indirect channels in more than 170 countries and regions. Its customers included over half of the Fortune 100 companies at the time. History Origin as Itel Itel was an equipment leasing company founded in 1967 by Peter Redfield and Gary Friedman, initially focusing on leasing IBM mainframes. Through creative financial arrangements and investments, Itel was able to lease IBM mainframes to customers at costs below what customers would have paid, making them second to IBM itself in revenues. A joint venture between National Semiconductor and Hitachi formed in 1977 was contracted by Itel to manufacture IBM-compatible mainframes branded as Advanced Systems. After initial success shipping 200 such systems and netting profits of $73 million, Itel increased their investments and personnel to market their Advanced Systems brand. When Itel requested lower prices in order to compete with IBM, Charlie Sporck, CEO of National Semiconductor, persuaded Itel to commit to long-term contracts with National Semiconductor and Hitachi. National Semiconductor takes over Advanced Systems Thereafter, news leaked that IBM was releasing a new technologically superior line of computers, and customers responded by holding back purchases, causing Itel's inventory to build up drastically. Hitachi agreed to Itel's request to cut back on shipment, but National Semiconductor was adamant in persisting with what the industry termed as National's blackmailing of Itel. In 1979, Redfield was forced to resign as CEO, and National Semiconductor took
https://en.wikipedia.org/wiki/%C5%A0id%C3%A1k%20correction
In statistics, the Šidák correction, or Dunn–Šidák correction, is a method used to counteract the problem of multiple comparisons. It is a simple method to control the family-wise error rate. When all null hypotheses are true, the method provides familywise error control that is exact for tests that are stochastically independent, conservative for tests that are positively dependent, and liberal for tests that are negatively dependent. It is credited to a 1967 paper by the statistician and probabilist Zbyněk Šidák. The Šidák method can be used to determine statistical significance and to evaluate adjusted p-values and confidence intervals. Usage Given m different null hypotheses and a familywise alpha level of , each null hypothesis is rejected that has a p-value lower than . This test produces a familywise Type I error rate of exactly when the tests are independent of each other and all null hypotheses are true. It is less stringent than the Bonferroni correction, but only slightly. For example, for = 0.05 and m = 10, the Bonferroni-adjusted level is 0.005 and the Šidák-adjusted level is approximately 0.005116. One can also compute confidence intervals matching the test decision using the Šidák correction by using 100 (1 − α)1/m % confidence intervals. For continuous problems, one can employ Bayesian logic to compute from the prior-to-posterior volume ratio. When there are considerably large numbers of hypotheses, or when the hypotheses are correlated, correction factors like Bonferroni and Šidák give quite conservative results, which leads us to consider other approaches. Proof The Šidák correction is derived by assuming that the individual tests are independent. Let the significance threshold for each test be ; then the probability that at least one of the tests is significant under this threshold is (1 - the probability that none of them are significant). Since it is assumed that they are independent, the probability that all of them are not s
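The worked numbers in the example (0.005 versus roughly 0.005116 for alpha = 0.05 and m = 10) can be reproduced directly; this is a minimal sketch with illustrative function names:

```python
def bonferroni_level(alpha: float, m: int) -> float:
    """Per-test significance threshold under the Bonferroni correction."""
    return alpha / m

def sidak_level(alpha: float, m: int) -> float:
    """Per-test threshold under the Šidák correction: the value t
    solving 1 - (1 - t)^m = alpha, i.e. t = 1 - (1 - alpha)^(1/m)."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

alpha, m = 0.05, 10
print(round(bonferroni_level(alpha, m), 6))  # 0.005
print(round(sidak_level(alpha, m), 6))       # 0.005116 (slightly less stringent)
```

Under independence, testing each hypothesis at the Šidák level gives a familywise error rate of exactly alpha, whereas Bonferroni is always at least as conservative.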
https://en.wikipedia.org/wiki/Programming%20Language%20for%20Business
Programming Language for Business or PL/B is a business-oriented programming language originally called DATABUS and designed by Datapoint in 1972 as an alternative to COBOL because Datapoint's 8-bit computers could not fit COBOL into their limited memory, and because COBOL did not at the time have facilities to deal with Datapoint's built-in keyboard and screen. A version of DATABUS became an ANSI standard, and the name PL/B came about when Datapoint chose not to release its trademark on the DATABUS name. Functionality Much like Java and .NET, PL/B programs are compiled into an intermediate byte-code, which is then interpreted by a runtime library. Because of this, many PL/B programs can run on DOS, Unix, Linux, and Windows operating systems. The PL/B development environments are influenced by Java and Visual Basic, and offer many of the same features found in those languages. PL/B (Databus) is actively used all over the world, and has several forums on the Internet dedicated to supporting software developers. Since its inception, PL/B has been enhanced and adapted to keep it modernized and able to access various data sources. It has a database capability built-in with ISAM and Associative Hashed Indexes, as well as ODBC, SQL, Oracle, sequential, random access, XML and JSON files. All the constructs of modern programming languages have been incrementally added to the language. PL/B also has the ability to access external routines through COM, DLL's and .NET assemblies. Full access to the .NET framework is built into many versions. Several implementations of the language are capable of running as an Application Server like Citrix, and connecting to remote databases through a data manager. Source code example IF (DF_EDIT[ITEM] = "PHYS") STATESAVE MYSTATE IF (C_F07B != 2) DISPLAY *SETSWALL 1:1:1:80: *BGCOLOR=2,*COLOR=15: *P49:1," 7-Find " ELSE
https://en.wikipedia.org/wiki/Exception%20handling%20syntax
Exception handling syntax is the set of keywords and/or structures provided by a computer programming language to allow exception handling, which separates the handling of errors that arise during a program's operation from its ordinary processes. Syntax for exception handling varies between programming languages, partly to cover semantic differences but largely to fit into each language's overall syntactic structure. Some languages do not call the relevant concept "exception handling"; others may not have direct facilities for it, but can still provide means to implement it. Most commonly, error handling uses a try...[catch...][finally...] block, and errors are created via a throw statement, but there is significant variation in naming and syntax. Catalogue of exception handling syntaxes Ada Exception declarations Some_Error : exception; Raising exceptions raise Some_Error; raise Some_Error with "Out of memory"; -- specific diagnostic message Exception handling and propagation with Ada.Exceptions, Ada.Text_IO; procedure Foo is Some_Error : exception; begin Do_Something_Interesting; exception -- Start of exception handlers when Constraint_Error => ... -- Handle constraint error when Storage_Error => -- Propagate Storage_Error as a different exception with a useful message raise Some_Error with "Out of memory"; when Error : others => -- Handle all others Ada.Text_IO.Put("Exception: "); Ada.Text_IO.Put_Line(Ada.Exceptions.Exception_Name(Error)); Ada.Text_IO.Put_Line(Ada.Exceptions.Exception_Message(Error)); end Foo; Assembly language Most assembly languages will have a macro instruction or an interrupt address available for the particular system to intercept events such as illegal op codes, program check, data errors, overflow, divide by zero, and other such. IBM and Univac mainframes had the STXIT macro. Digital Equipment Corporation RT11 systems had trap vectors for program errors, i/o interrupts, and such. DOS h
https://en.wikipedia.org/wiki/Law%20of%20Maximum
The Law of Maximum, also known as the Law of the Maximum, is a principle developed by Arthur Wallace which states that the total growth of a crop or a plant is proportional to about 70 growth factors. Growth will not be greater than the aggregate value of the growth factors. Without correction of the limiting growth factors, nutrients, water and other inputs are not fully or judiciously used, resulting in wasted resources. Applications The factors range from 0 for no growth to 1 for maximum growth. Actual growth is calculated by multiplying together all of the growth factors. For example, if ten factors each had a value of 0.5, the actual growth would be: 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 x 0.5 = 0.001, which is 0.1% of optimum. If each of ten factors had a value of 0.9, the actual growth would be: 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 x 0.9 = 0.349, which is 34.9% of optimum. Hence the need to achieve the maximal value for each factor is critical in order to obtain maximal growth. Demonstrations of "Law of the Maximum" The following demonstrates the Law of the Maximum. For the various crops listed below, one, two or three factors were limiting while all the other factors were 1. When two or three factors were simultaneously limiting, the growth predicted by multiplying the individually measured factors together was similar to the actual growth. Growth Factors A. Adequacy of Nutrients B. Non-nutrient elements and nutrient excesses that cause toxicities (stresses) C. Interactions of the nutrients D. Soil Conditioning requirement and physical processes E. Additional biology F. Weather factors G. Management External links Law of the Maximum, in Handbook of soil science by Malcolm E. Sumner
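The multiplicative calculation above is easy to sketch in code (illustrative function, not from Wallace's work):

```python
from math import prod

def relative_growth(factors):
    """Law of the Maximum: actual growth is the product of all growth
    factors, each scaled between 0 (no growth) and 1 (maximum growth)."""
    if any(not 0.0 <= f <= 1.0 for f in factors):
        raise ValueError("each growth factor must lie in [0, 1]")
    return prod(factors)

print(round(relative_growth([0.5] * 10), 3))  # 0.001 -> 0.1% of optimum
print(round(relative_growth([0.9] * 10), 3))  # 0.349 -> 34.9% of optimum
```

Because the factors multiply rather than add, a single badly limiting factor drags down the whole product, which is the point of the principle.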
https://en.wikipedia.org/wiki/Tadao%20Oda
(born 1940, Kyoto) is a Japanese mathematician working in the field of algebraic geometry, especially toric varieties. The field of toric varieties was developed by Demazure, Mumford, Miyake, Oda and others in the 1970s. He is also known for a book on toric varieties: Convex Bodies and Algebraic Geometry: An Introduction to the Theory of Toric Varieties. In 1958 Oda graduated from Tokai High School in Nagoya, Japan, where Shigefumi Mori and Hisasi Morikawa also graduated from. He earned his bachelor's degree from Kyoto University in 1962, and five years later earned a Ph.D. under David Mumford from Harvard University with thesis Abelian varieties over a perfect field and Dieudonné Modules. After completing his Ph.D., Oda was an associate professor at Nagoya University and became a professor at Tohoku University in 1975. He remained at the university for 28 years. He is an emeritus professor at Tohoku University. Oda wrote "Algebraic Geometry, Sendai, 1985" with Hisasi Morikawa, a former professor at Nagoya University. Tadao has belonged to the Kiwanis Club of Sendai since 1999 and was elected to a three-year term as a trustee of Kiwanis International in 2008. Works Convex bodies and algebraic geometry, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer Verlag 1988 as co-editor with Hisasi Morikawa: Algebraic Geometry, Sendai 1985, North Holland 1987 Lectures on torus embeddings and applications (based on joint work with Katsuya Miyake), Tata Institute of Fundamental Research, Springer Verlag 1985
https://en.wikipedia.org/wiki/Coefficient%20of%20coincidence
In genetics, the coefficient of coincidence (c.o.c.) is a measure of interference in the formation of chromosomal crossovers during meiosis. It is generally the case that, if there is a crossover at one spot on a chromosome, this decreases the likelihood of a crossover in a nearby spot. This is called interference. The coefficient of coincidence is typically calculated from recombination rates between three genes. If there are three genes in the order A B C, then we can determine how closely linked they are by frequency of recombination. Knowing the recombination rate between A and B and the recombination rate between B and C, we would naively expect the double recombination rate to be the product of these two rates. The coefficient of coincidence is calculated by dividing the actual frequency of double recombinants by this expected frequency: c.o.c. = actual double recombinant frequency / expected double recombinant frequency Interference is then defined as follows: interference = 1 − c.o.c. This figure tells us how strongly a crossover in one of the DNA regions (AB or BC) interferes with the formation of a crossover in the other region. Worked example Drosophila females of genotype a+a b+b c+c were crossed with males of genotype aa bb cc. This led to 1000 progeny of the following phenotypes: a+b+c+: 244 (parental genotype, shows no recombination) a+b+c: 81 (recombinant between B and C) a+bc+: 23 (double recombinant) a+bc: 152 (recombinant between A and B) ab+c+: 148 (recombinant between A and B) ab+c: 27 (double recombinant) abc+: 89 (recombinant between B and C) abc: 236 (parental genotype, shows no recombination) From these numbers it is clear that the b+/b locus lies between the a+/a locus and the c+/c locus. There are 23 + 152 + 148 + 27 = 350 progeny showing recombination between genes A and B. And there are 81 + 23 + 27 + 89 = 220 progeny showing recombination between genes B and C. Thus the expected rate of double recombination is (350 / 1000) *
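The worked example's arithmetic, cut off above at "(350 / 1000) *", can be completed numerically (a sketch; the dictionary keys mirror the phenotype classes listed in the example):

```python
progeny = {
    "a+b+c+": 244, "a+b+c": 81, "a+bc+": 23, "a+bc": 152,
    "ab+c+": 148, "ab+c": 27, "abc+": 89, "abc": 236,
}
total = sum(progeny.values())                 # 1000 progeny

# Recombinants between A-B and between B-C (double recombinants count in both).
recomb_ab = progeny["a+bc+"] + progeny["a+bc"] + progeny["ab+c+"] + progeny["ab+c"]
recomb_bc = progeny["a+b+c"] + progeny["a+bc+"] + progeny["ab+c"] + progeny["abc+"]
doubles = progeny["a+bc+"] + progeny["ab+c"]  # 23 + 27 = 50

expected_doubles = (recomb_ab / total) * (recomb_bc / total)  # 0.35 * 0.22 = 0.077
actual_doubles = doubles / total                              # 0.05
coc = actual_doubles / expected_doubles
interference = 1 - coc
print(round(coc, 3), round(interference, 3))  # 0.649 0.351
```

So the coefficient of coincidence is about 0.65 and interference about 0.35: a crossover in one interval suppresses roughly a third of the double crossovers that independence would predict.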
https://en.wikipedia.org/wiki/Varicode
Varicode is a self-synchronizing code for use in PSK31. It supports all ASCII characters, but the characters used most frequently in English have shorter codes. The space between characters is indicated by a 00 sequence, an implementation of Fibonacci coding. Originally created for speeding up real-time keyboard-to-keyboard exchanges over low-bandwidth links, Varicode is freely available. Limitations Varicode provides somewhat weaker compression in languages other than English that use the same characters as English. Varicode table Control characters Printable characters Character lengths Beginning with the single-bit code "1", valid varicode values may be formed by prefixing a "1" or "10" to a shorter code. Thus, the number of codes of length n is equal to the Fibonacci number Fn. Varicode uses the 88 values of lengths up to 9 bits, and 40 of the 55 codes of length 10. As transmitted, the codes are two bits longer due to the trailing delimiter 00.
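The prefixing rule in the "Character lengths" section can be checked by enumeration. The sketch below generates the valid codewords (it reproduces the counts, not the official character assignments of the varicode table):

```python
# Enumerate valid varicode codewords by the rule described above: start
# from "1" and form longer codes by prefixing "1" or "10" to shorter ones.
# Every codeword starts and ends with 1 and contains no "00", so the "00"
# inter-character gap can never appear inside a character.
def varicodes_by_length(max_len):
    codes = {1: ["1"], 2: ["11"]}
    for n in range(3, max_len + 1):
        codes[n] = ["1" + c for c in codes[n - 1]] + ["10" + c for c in codes[n - 2]]
    return codes

codes = varicodes_by_length(10)
counts = [len(codes[n]) for n in range(1, 11)]
print(counts)           # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55] -- the Fibonacci numbers
print(sum(counts[:9]))  # 88 codes of up to 9 bits, as stated above
assert all("00" not in c for n in codes for c in codes[n])
```

The totals match the article: 88 codewords of length at most 9 bits, and 55 codewords of length 10 (of which varicode uses 40).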
https://en.wikipedia.org/wiki/John%20Reif
John H. Reif (born 1951) is an American academic, and Professor of Computer Science at Duke University, who has made contributions to a large number of fields in computer science, ranging from algorithms and computational complexity theory to robotics. He has also published in many other scientific fields including chemistry (in particular, nanoscience), optics (in particular optical computing and design of head-mounted displays), and mathematics (in particular graph theory and game theory). Biography John Reif received a B.S. (magna cum laude) from Tufts University in 1973, an M.S. from Harvard University in 1975 and a Ph.D. from Harvard University in 1977. From 1983 to 1986 he was an associate professor at Harvard University, and since 1986 he has been Professor of Computer Science at Duke University. Currently he holds the Hollis Edens Distinguished Professorship in the Trinity College of Arts and Sciences, Duke University. From 2011 to 2014 he was Distinguished Adjunct Professor, Faculty of Computing and Information Technology (FCIT), King Abdulaziz University (KAU), Jeddah, Saudi Arabia. He has also contributed to bringing together various disjoint research communities working in different areas of the nano-sciences by organizing (as General Chairman) the annual Conferences on "Foundations of Nanoscience: Self-assembled architectures and devices" (FNANO) for the last 20 years. He has been elected a Fellow of the following organizations: American Association for the Advancement of Science, IEEE, ACM, and the Institute of Combinatorics. He is the son of Arnold E. Reif and like him he has dual citizenship in the USA and Austria. Research contributions John Reif has made contributions to a large number of fields in computer science, ranging from algorithms and computational complexity theory to robotics and to game theory. He developed efficient randomized algorithms and parallel algorithms for a wide variety of graph, geometric, numeric, algebraic, and logical problems. His Google Scholar H-
https://en.wikipedia.org/wiki/Evolution%20of%20bacteria
The evolution of bacteria has progressed over billions of years since the Precambrian, with their first major divergence from the archaeal/eukaryotic lineage roughly 3.2–3.5 billion years ago. This was discovered through gene sequencing of bacterial nucleoids to reconstruct their phylogeny. Furthermore, evidence of permineralized microfossils of early prokaryotes was also discovered in the Australian Apex Chert rocks, dating back roughly 3.5 billion years, during the Precambrian. This suggests that an organism of the phylum Thermotogota (formerly Thermotogae) was the most recent common ancestor of modern bacteria. Further chemical and isotopic analysis of ancient rock reveals that by the Siderian period, roughly 2.45 billion years ago, oxygen had appeared. This indicates that oceanic, photosynthetic cyanobacteria evolved during this period, because they were the first microbes to produce oxygen as a byproduct of their metabolic process. Therefore, this phylum is thought to have been predominant roughly 2.3 billion years ago. However, some scientists argue they could have lived as early as 2.7 billion years ago, roughly before the time of the Great Oxygenation Event, meaning oxygen levels had time to increase in the atmosphere before altering the ecosystem during this event. The rise in atmospheric oxygen led to the evolution of Pseudomonadota (formerly Proteobacteria). Today this phylum includes many nitrogen-fixing bacteria, pathogens, and free-living microorganisms. This phylum evolved approximately 1.5 billion years ago during the Paleoproterozoic era. However, there are still many conflicting theories surrounding the origins of bacteria. Even though microfossils of ancient bacteria have been discovered, some scientists argue that the lack of identifiable morphology in these fossils means they cannot be utilised to draw conclusions on an accurate evolutionary timeline of bacteria. Nevertheless, more recent
https://en.wikipedia.org/wiki/Game%20Developers%20Conference
The Game Developers Conference (GDC) is an annual conference for video game developers. The event includes an expo, networking events, and awards shows like the Game Developers Choice Awards and Independent Games Festival, and a variety of tutorials, lectures, and roundtables by industry professionals on game-related topics covering programming, design, audio, production, business and management, and visual arts. History Originally called the Computer Game Developers Conference, the first conference was organized in April 1988 by Chris Crawford in his San Jose, California-area living room. About twenty-seven designers attended, including Don Daglow, Brenda Laurel, Brian Moriarty, Gordon Walton, Tim Brengle, Cliff Johnson, Dave Menconi, and Carol and Ivan Manley. The second conference, held that same year at a Holiday Inn at Milpitas, attracted about 125 developers. Early conference directors included Brenda Laurel, Tim Brengle, Sara Reeder, Dave Menconi, Jeff Johannigman, Stephen Friedman, Chris Crawford, and Stephanie Barrett. Later directors include John Powers, Nicky Robinson, Anne Westfall, Susan Lee-Merrow, and Ernest W. Adams. In the early years the conference changed venue each year to accommodate its increases in size. Attendance in this period grew from 525 to 2,387. By 1994 the CGDC could afford to sponsor the creation of the Computer Game Developers Association with Adams as its founding director. Miller Freeman, Inc. took on the running of the conference in 1996, nearly doubling attendance to 4,000 that year. In 2005, the GDC moved to the new Moscone Center West, in the heart of San Francisco's SOMA district, and reported over 12,000 attendees. The GDC returned to San Jose in 2006, reporting over 12,500 attendees, and moved to San Francisco in 2007 – where the organizers expect it will stay for the foreseeable future. Attendance figures continued to rise in following years, with 18,000 attendees in the 2008 event. The 2009 Game Developers Conference wa
https://en.wikipedia.org/wiki/Uhlenhuth%20test
The Uhlenhuth test, also referred to as the antigen–antibody precipitin test for species, is a test which can determine the species of a blood sample. It was invented by Paul Uhlenhuth in 1901, based on the discovery that the blood of different species had one or more characteristic proteins. The test represented a major breakthrough and came to have tremendous importance in forensic science in the 20th century. The test was further refined for forensic use by the Swiss chemist Maurice Müller in the 1960s.
https://en.wikipedia.org/wiki/Random%20cluster%20model
In statistical mechanics, probability theory, and graph theory, the random cluster model is a random graph model that generalizes and unifies the Ising model, Potts model, and percolation model. It is used to study random combinatorial structures, electrical networks, etc. It is also referred to as the RC model or sometimes the FK representation after its founders Cees Fortuin and Piet Kasteleyn. Definition Let G = (V, E) be a graph, and let ω : E → {0, 1} be a bond configuration on the graph that maps each edge to a value of either 0 or 1. We say that a bond is closed on edge e if ω(e) = 0, and open if ω(e) = 1. If we let A(ω) = {e ∈ E : ω(e) = 1} be the set of open bonds, then an open cluster is any connected component of the graph (V, A(ω)), the open bonds in union with the full set of vertices. Note that an open cluster can be a single vertex (if that vertex is not incident to any open bonds). Suppose each edge is open independently with probability p and closed otherwise; then this is just the standard Bernoulli percolation process. The probability measure of a configuration is given as P(ω) = ∏_{e ∈ E} p^{ω(e)} (1 − p)^{1 − ω(e)}. The RC model is a generalization of percolation, where each cluster is weighted by a factor of q. Given a configuration ω, we let C(ω) be the number of open clusters, or alternatively the number of connected components formed by the open bonds. Then for any q > 0, the probability measure of a configuration is given as P(ω) = (1/Z) q^{C(ω)} ∏_{e ∈ E} p^{ω(e)} (1 − p)^{1 − ω(e)}. Z is the partition function, or the sum over the unnormalized weights of all configurations, Z = Σ_ω q^{C(ω)} ∏_{e ∈ E} p^{ω(e)} (1 − p)^{1 − ω(e)}. The partition function of the RC model is a specialization of the Tutte polynomial, which itself is a specialization of the multivariate Tutte polynomial. Special values of q The parameter q of the random cluster model can take arbitrary complex values. This includes the following special cases: q → 0: linear resistance networks. q < 1: negatively-correlated percolation. q = 1: Bernoulli percolation, with Z = 1. q = 2: the Ising model. q ∈ {2, 3, …}: the q-state Potts model. Edwards–Sokal representation The Edwards–Sokal (ES) representation of the Potts model is named after Robert G. Edwards and Alan D. Sokal. It provides
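The definition can be checked by brute force on a small graph. The sketch below (illustrative, not from the article) sums the unnormalized weights q^C(ω) ∏ p^ω(e)(1 − p)^(1 − ω(e)) over all bond configurations of a triangle; at q = 1 every cluster carries weight 1, so Z collapses to a sum of Bernoulli probabilities and equals 1.

```python
from itertools import product

# Brute-force partition function of the random cluster model on a tiny
# graph: Z = sum over bond configurations of q^{#open clusters} *
# p^{#open bonds} * (1-p)^{#closed bonds}. A sketch, not an efficient method.
def clusters(n_vertices, open_edges):
    """Count connected components of (V, open bonds) via union-find."""
    parent = list(range(n_vertices))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in open_edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n_vertices)})

def partition_function(n_vertices, edges, p, q):
    z = 0.0
    for omega in product([0, 1], repeat=len(edges)):
        open_edges = [e for e, w in zip(edges, omega) if w == 1]
        bernoulli = p ** sum(omega) * (1 - p) ** (len(edges) - sum(omega))
        z += q ** clusters(n_vertices, open_edges) * bernoulli
    return z

triangle = [(0, 1), (1, 2), (0, 2)]
print(partition_function(3, triangle, 0.5, 1))  # 1.0: percolation normalizes itself
print(partition_function(3, triangle, 0.5, 2))  # 3.5: Ising-related cluster weights
```

The q = 1 check works for any p, since the cluster weight drops out and the Bernoulli probabilities sum to 1 over all configurations.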
https://en.wikipedia.org/wiki/Ramaria%20vinosimaculans
Ramaria vinosimaculans, commonly known as the wine-staining coral, is a coral mushroom in the family Gomphaceae. It is found in North America.
https://en.wikipedia.org/wiki/Bureau%20van%20Dijk
Bureau van Dijk is a major publisher of business information, and specialises in private company data combined with software for searching and analysing companies. It is a Moody's Analytics company. Orbis is Bureau van Dijk's flagship company database. On 15 May 2017 it was announced that Moody's entered into a definitive agreement to acquire Bureau van Dijk, which completed in August 2017. Overview Bureau van Dijk's product range combines data from regulatory and other sources, including 160 information providers, with software to allow users to manipulate data for a range of research needs and applications. Unlike other providers of content, Bureau van Dijk reveals their sources and shows the source data – allowing users to create their own analytics and predictive analysis based on underlying primary data and reporting. The majority of staff focus on sales, marketing and customer support in offices in: Amsterdam, Beijing, Bratislava, Brussels, Buenos Aires, Chicago, Copenhagen, Dubai, Frankfurt, Geneva, Hong Kong, Johannesburg, Lisbon, London, Madrid, Manchester, Mexico City, Milan, Moscow, New York, Paris, Rome, San Francisco, São Paulo, Seoul, Shanghai, Singapore, Stockholm, Sydney, Tokyo, Vienna, Washington D.C. and Zurich. Bureau van Dijk's research team collects both global M&A data and intelligence around corporate ownership structures, and is based in Manchester, Brussels and Singapore. It also has a team of journalists writing news stories on deals and market rumours. Product management and software development are based in the Geneva and Brussels offices. The company has over 5,000 clients including banks, insurance companies, financial and consulting organisations, governments and research institutes. Bureau van Dijk claims to offer the most powerful comparable resource of information on private companies in the world. The company brand statement is "The Business of Certainty". The company regularly blogs content and white papers, offering free
https://en.wikipedia.org/wiki/Outline%20of%20immunology
The following outline is provided as an overview of and topical guide to immunology: Immunology – study of all aspects of the immune system in all organisms. It deals with the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (autoimmune diseases, hypersensitivities, immune deficiency, transplant rejection); the physical, chemical and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo. Essence of immunology Immunology Branch of Biomedical science Immune system Immunity Branches of immunology: 1. General Immunology 2. Basic Immunology 3. Advanced Immunology 4. Medical Immunology 5. Pharmaceutical Immunology 6. Clinical Immunology 7. Environmental Immunology 8. Cellular and Molecular Immunology 9. Food and Agricultural Immunology Classical immunology Clinical immunology Computational immunology Diagnostic immunology Evolutionary immunology Systems immunology Immunomics Immunoproteomics Immunophysics Immunochemistry Ecoimmunology Immunopathology Nutritional immunology Psychoneuroimmunology Reproductive immunology Circadian immunology Immunotoxicology Palaeoimmunology Tissue-based immunology Testicular immunology - Testes Immunodermatology - Skin Intravascular immunology - Blood Osteoimmunology - Bone Mucosal immunology - Mucosal surfaces Respiratory tract antimicrobial defense system - Respiratory tract Neuroimmunology - Neuroimmune system in the Central nervous system Ocular immunology - Ocular immune system in the Eye Cancer immunology/Immuno-oncology - Tumors History of immunology History of immunology Timeline of immunology General immunological concepts Immunity: Immunity against: Pathogens Pathogenic bacteria Viruses Fungi Protozoa Parasites Tumors Allergens Self-proteins Autoimmunity Alloimmunity Cross-reactivity Tolerance Central tolerance Peripheral tolerance
https://en.wikipedia.org/wiki/Antimicrobial%20copper-alloy%20touch%20surfaces
Antimicrobial copper-alloy touch surfaces can prevent frequently touched surfaces from serving as reservoirs for the spread of pathogenic microbes. This is especially true in healthcare facilities, where harmful viruses, bacteria, and fungi colonize and persist on doorknobs, push plates, railings, tray tables, tap (faucet) handles, IV poles, HVAC systems, and other equipment. These microbes can sometimes survive on surfaces for more than 30 days. Coppertouch Australia commissioned the Doherty Institute at the University of Melbourne to test its antimicrobial copper adhesive film. Laboratory tests showed a 96% kill rate of Influenza A virus on the film compared with untreated surfaces. The surfaces of copper and its alloys, such as brass and bronze, are antimicrobial. They have an inherent ability to kill a wide range of harmful microbes relatively rapidly – often within two hours or less – and with a high degree of efficiency. These antimicrobial properties have been demonstrated by an extensive body of research. The research also suggests that if touch surfaces are made with copper alloys, the reduced transmission of disease-causing organisms can reduce patient infections in hospital intensive care units (ICU) by as much as 58%. Several companies have developed methods for utilizing the antimicrobial functionality of copper on existing high-touch surfaces. LuminOre and Aereus Technologies both utilize cold-spray antimicrobial copper coating technology to apply antimicrobial coatings to surfaces. Evidence As of 2019 a number of studies have found that copper surfaces may help prevent infection in the healthcare environment. Microorganisms are known to survive on inanimate surfaces for extended periods of time. Hand and surface disinfection practices are a primary measure against the spread of infection. Since approximately 80% of infectious diseases are known to be transmitted by touch, and pathogens found in healthcare facilities can survive on inanima
https://en.wikipedia.org/wiki/Jankov%E2%80%93von%20Neumann%20uniformization%20theorem
In descriptive set theory the Jankov–von Neumann uniformization theorem is a result saying that every measurable relation on a pair of standard Borel spaces (with respect to the sigma-algebra of analytic sets) admits a measurable section. It is named after V. A. Jankov and John von Neumann. While the axiom of choice guarantees that every relation has a section, this is a stronger conclusion in that it asserts that the section is measurable, and thus "definable" in some sense without using the axiom of choice. Statement Let X and Y be standard Borel spaces and A ⊆ X × Y a subset that is measurable with respect to the sigma-algebra generated by the analytic sets. Then there exists a function f : X → Y, measurable with respect to the same sigma-algebra, such that, for all x ∈ X, (x, f(x)) ∈ A if and only if x ∈ proj_X(A), i.e. if and only if there exists some y ∈ Y with (x, y) ∈ A. An application of the theorem is that, given any Borel-measurable surjection g : Y → X, there exists a universally measurable function h : X → Y such that g(h(x)) = x for all x ∈ X.
https://en.wikipedia.org/wiki/Asynchronous%20circuit
An asynchronous circuit (clockless or self-timed circuit) is a sequential digital logic circuit that does not use a global clock circuit or signal generator to synchronize its components. Instead, the components are driven by a handshaking circuit which indicates the completion of a set of instructions. Handshaking works by simple data transfer protocols. Many asynchronous circuits were developed in the early 1950s as part of bigger asynchronous systems (e.g. ORDVAC). Asynchronous circuits and the theory surrounding them are part of several steps in integrated circuit design, a field of digital electronics engineering. Asynchronous circuits are contrasted with synchronous circuits, in which changes to the signal values in the circuit are triggered by repetitive pulses called a clock signal. Most digital devices today use synchronous circuits. However, asynchronous circuits have the potential to be much faster and to have lower power consumption, lower electromagnetic interference, and better modularity in large systems. Asynchronous circuits are an active area of research in digital logic design. It was not until the 1990s that the viability of asynchronous circuits was demonstrated by real-life commercial products. Overview All digital logic circuits can be divided into combinational logic, in which the output signals depend only on the current input signals, and sequential logic, in which the output depends both on current input and on past inputs. In other words, sequential logic is combinational logic with memory. Virtually all practical digital devices require sequential logic. Sequential logic can be divided into two types, synchronous logic and asynchronous logic. Synchronous circuits In synchronous logic circuits, an electronic oscillator generates a repetitive series of equally spaced pulses called the clock signal. The clock signal is supplied to all the components of the IC. Flip-flops only flip when triggered by the edge of the clock pulse, so changes to the logic signals thr
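The handshaking mentioned above can be illustrated with a toy model of the four-phase (return-to-zero) request/acknowledge protocol, one common asynchronous signaling convention. The function and signal names below are illustrative, not taken from any real hardware description language.

```python
# Toy model of a four-phase (return-to-zero) handshake, a simple data
# transfer protocol asynchronous components can use instead of a shared
# clock. One word is transferred per cycle of:
#   req rises -> ack rises -> req falls -> ack falls
def four_phase_transfer(words):
    req = ack = 0
    received = []
    transitions = []                  # ordered log of signal edges
    for data in words:
        req = 1                       # sender: data is valid, assert request
        transitions.append(("req", 1))
        received.append(data)         # receiver: latch data on the request
        ack = 1                       # receiver: assert acknowledge
        transitions.append(("ack", 1))
        req = 0                       # sender: release request
        transitions.append(("req", 0))
        ack = 0                       # receiver: release acknowledge
        transitions.append(("ack", 0))
    return received, transitions

data, log = four_phase_transfer([0xCA, 0xFE])
print(data)      # [202, 254]
print(len(log))  # 8 -- exactly four signal transitions per word
```

In real hardware the two sides run at their own speeds and each arrow in the sequence is a causal dependency rather than a clocked step; the model above only captures the ordering of events, which is what makes the protocol self-timed.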
https://en.wikipedia.org/wiki/Clubhouse%20%28app%29
Clubhouse is a social audio app for iOS and Android where users can communicate in audio chat rooms that accommodate groups of thousands of people. Clubhouse inspired competitor products from Meta, Twitter through Twitter Spaces, and Spotify through a product called Greenroom. History Clubhouse began as a social media startup by founders Paul Davison and Rohan Seth in Fall 2019. Originally designed for podcasts under the name Talkshow, the app was rebranded as "Clubhouse" and officially released for the iOS operating system in March 2020. Clubhouse was valued at $100 million after receiving funding from notable angel investors, including Ryan Hoover (Product Hunt), Balaji Srinivasan (Coinbase), James Beshara (Tilt.com), and several venture capitalists, including a $12 million Series A investment from the venture capital firm, Andreessen Horowitz, in May 2020. The app gained popularity in the early months of the COVID-19 pandemic and had 600,000 registered users by December 2020. In January 2021, CEO Paul Davison announced that the app's active weekly user base consisted of approximately 2 million individuals. The company announced that it would begin working on an Android version of the app. In that month, the app became widely used in Germany when German podcast hosts Philipp Klöckner and Philipp Gloeckler started an invite-chain over a Telegram group, bringing German influencers, journalists, and politicians to the platform. Clubhouse also raised their Series B at a $1 billion valuation. On February 1, 2021, Clubhouse had an estimated 3.5 million downloads globally and grew rapidly to 8.1 million downloads by February 15. This significant growth in popularity occurred after celebrities such as Elon Musk and Mark Zuckerberg made appearances on the app. In that month, Clubhouse hired an Android Software Developer. A year after the app's release, the number of weekly active users was greater than 10 million, but the user base declined 21% during three weeks from
https://en.wikipedia.org/wiki/3-Base%20Periodicity%20Property
The three-base periodicity property in the field of genomics is a property characteristic of protein-coding DNA sequences. The existence of this property can be shown by performing Fourier analysis on signals derived from segments of DNA sequences. Because of its predictive power, it has been used as a preliminary indicator in gene prediction. DNA sequences are inherently signals, as they are functions of an independent variable: position on the sequence. Thus, signal processing methods can be applied to them after the symbolic string is properly mapped to one (or more) numerical sequences. The periodicity is caused by the biased distribution of codon triplets, which is a consequence of genetic code degeneracy; non-coding segments, being closer to uniformly random, produce no significant signal in the frequency space. History This property has been dissected, tested and derived in a chronology of papers from different universities. The initial discovery was made in 1980 by Trifonov and Sussman, who observed periodicity in DNA sequences by applying the autocorrelation function to chromatin DNA. Silverman and Linsker defined the Fourier transform of a sequence of bases, described how to Fourier-analyze it and proposed sample applications of this technique. Tsonis, Elsner and Tsonis did Fourier analysis of coding, non-coding and random sequences and proposed a reason for the 3-periodicity property found in coding sequences. Dodin proposed a method for analyzing the periodicity of DNA sequences based on the correlation function of the symbolic sequence. Tiwari, Ramachandran, Bhattacharya and Ramaswamy examined the signal-to-noise ratio of the period-3 peak within a sliding window over a sequence to identify likely coding regions. Coding vs. Non-Coding DNA DNA stores the information required to assemble, maintain and reproduce every living organism. A protein is a large molecule ("macromolecule") made up of smaller subunits, amino
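The Fourier analysis described above can be sketched in a few lines: map the sequence to four binary indicator signals (one per base), take the discrete Fourier transform of each, and compare the spectral power at frequency 1/3 against the off-peak background. The sequences below are synthetic illustrations, not real genes, and the signal-to-noise measure is a simplified stand-in for the published methods.

```python
import numpy as np

# Sketch of period-3 detection: ratio of DFT power at frequency 1/3
# (index n/3) to the mean off-DC power, summed over the four base
# indicator sequences. Illustrative only.
def period3_power(seq):
    n = len(seq)
    power = np.zeros(n)
    for base in "ACGT":
        x = np.array([1.0 if s == base else 0.0 for s in seq])
        power += np.abs(np.fft.fft(x)) ** 2
    # the period-3 component of an n-point DFT sits at index n/3
    return power[n // 3] / power[1 : n // 2].mean()

# a "coding-like" sequence with extreme codon bias (one codon repeated)
coding = "ATG" * 100
rng = np.random.default_rng(0)
random_seq = "".join(rng.choice(list("ACGT"), 300))  # non-coding-like

print(period3_power(coding) > period3_power(random_seq))  # True
```

Real coding DNA has a much weaker codon bias than this toy sequence, which is why practical gene finders evaluate the peak in a sliding window and compare it to a noise estimate, as in the Tiwari et al. approach mentioned above.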
https://en.wikipedia.org/wiki/History%20of%20compiler%20construction
In computing, a compiler is a computer program that transforms source code written in a programming language or computer language (the source language) into another computer language (the target language, often having a binary form known as object code or machine code). The most common reason for transforming source code is to create an executable program. Any program written in a high-level programming language must be translated to object code before it can be executed, so all programmers using such a language use a compiler or an interpreter. Thus, compilers are very important to programmers. Improvements to a compiler may lead to a large number of improved features in executable programs. The Production Quality Compiler-Compiler, in the late 1970s, introduced the principles of compiler organization that are still widely used today (e.g., a front-end handling syntax and semantics and a back-end generating machine code). First compilers Software for early computers was primarily written in assembly language, and before that directly in machine code. It is usually more productive for a programmer to use a high-level language, and programs written in a high-level language can be reused on different kinds of computers. Even so, it took a while for compilers to become established, because they generated code that did not perform as well as hand-written assembler, they were daunting development projects in their own right, and the very limited memory capacity of early computers created many technical problems for practical compiler implementations. Between 1942 and 1945, Konrad Zuse developed Plankalkül ("plan calculus"), the first high-level language for a computer, for which he envisioned a Planfertigungsgerät ("plan assembly device"), which would automatically translate the mathematical formulation of a program into machine-readable punched film stock. However, the first actual compiler for the language was implemented only decades later. Between 1949 and 1951, Heinz Rutishauser propose
https://en.wikipedia.org/wiki/Semiorthogonal%20decomposition
In mathematics, a semiorthogonal decomposition is a way to divide a triangulated category into simpler pieces. One way to produce a semiorthogonal decomposition is from an exceptional collection, a special sequence of objects in a triangulated category. For an algebraic variety X, it has been fruitful to study semiorthogonal decompositions of the bounded derived category of coherent sheaves, D^b(X). Semiorthogonal decomposition Alexei Bondal and Mikhail Kapranov (1989) defined a semiorthogonal decomposition of a triangulated category 𝒯 to be a sequence 𝒜₁, …, 𝒜ₙ of strictly full triangulated subcategories such that: for all i < j and all objects A_i in 𝒜_i and A_j in 𝒜_j, every morphism from A_j to A_i is zero. That is, there are "no morphisms from right to left". 𝒯 is generated by 𝒜₁, …, 𝒜ₙ. That is, the smallest strictly full triangulated subcategory of 𝒯 containing 𝒜₁, …, 𝒜ₙ is equal to 𝒯. The notation 𝒯 = ⟨𝒜₁, …, 𝒜ₙ⟩ is used for a semiorthogonal decomposition. Having a semiorthogonal decomposition implies that every object of 𝒯 has a canonical "filtration" whose graded pieces are (successively) in the subcategories 𝒜₁, …, 𝒜ₙ. That is, for each object T of 𝒯, there is a sequence of morphisms 0 = T_n → T_{n−1} → ⋯ → T_0 = T in 𝒯 such that the cone of T_i → T_{i−1} is in 𝒜_i, for each i. Moreover, this sequence is unique up to a unique isomorphism. One can also consider "orthogonal" decompositions of a triangulated category, by requiring that there are no morphisms from 𝒜_i to 𝒜_j for any i ≠ j. However, that property is too strong for most purposes. For example, for an (irreducible) smooth projective variety X over a field, the bounded derived category of coherent sheaves D^b(X) never has a nontrivial orthogonal decomposition, whereas it may have a semiorthogonal decomposition, by the examples below. A semiorthogonal decomposition of a triangulated category may be considered as analogous to a finite filtration of an abelian group. Alternatively, one may consider a semiorthogonal decomposition 𝒯 = ⟨𝒜₁, 𝒜₂⟩ as closer to a split exact sequence, because the exact sequence 0 → 𝒜₁ → 𝒯 → 𝒯/𝒜₁ → 0 of triangulated categories is split by the subcategory 𝒜₂, ma
https://en.wikipedia.org/wiki/List%20of%20hypergeometric%20identities
Below is a list of hypergeometric identities. Hypergeometric function lists identities for the Gaussian hypergeometric function Generalized hypergeometric function lists identities for more general hypergeometric functions Bailey's list is a list of the hypergeometric function identities in given by . Wilf–Zeilberger pair is a method for proving hypergeometric identities
https://en.wikipedia.org/wiki/Membrane%20lipid
Membrane lipids are a group of compounds (structurally similar to fats and oils) which form the lipid bilayer of the cell membrane. The three major classes of membrane lipids are phospholipids, glycolipids, and cholesterol. Lipids are amphiphilic: they have one end that is soluble in water ('polar') and one end that is soluble in fat ('nonpolar'). By forming a double layer with the polar ends pointing outwards and the nonpolar ends pointing inwards, membrane lipids form a 'lipid bilayer' which keeps the watery interior of the cell separate from the watery exterior. The arrangements of lipids and various proteins, acting as receptors and channel pores in the membrane, control the entry and exit of other molecules and ions as part of the cell's metabolism. In order to perform physiological functions, membrane proteins rotate and diffuse laterally within the two-dimensional expanse of the lipid bilayer, facilitated by a shell of lipids closely attached to the protein surface, called the annular lipid shell. Biological roles The bilayer formed by membrane lipids serves as a containment unit of a living cell. Membrane lipids also form a matrix in which membrane proteins reside. Historically lipids were thought to serve a merely structural role. In fact, lipids have many functional roles: They serve as regulatory agents in cell growth and adhesion. They participate in the biosynthesis of other biomolecules. They can serve to increase the catalytic activities of enzymes. The non-bilayer-forming lipid monogalactosyl diglyceride (MGDG) predominates among the bulk lipids in thylakoid membranes; when hydrated alone, it forms a reverse hexagonal cylindrical phase. However, in combination with the other lipids and carotenoids/chlorophylls of thylakoid membranes, these lipids too conform together as lipid bilayers. Major classes Phospholipids Phospholipids and glycolipids consist of two long, nonpolar (hydrophobic) hydrocarbon chains linked to a hydrophilic head group. The heads of
https://en.wikipedia.org/wiki/Arnold%20Sch%C3%B6nhage
Arnold Schönhage (born 1 December 1934 in Lockhausen, now Bad Salzuflen) is a German mathematician and computer scientist. Schönhage was professor at the Rheinische Friedrich-Wilhelms-Universität, Bonn, and also in Tübingen and Konstanz. Together with Volker Strassen he developed the Schönhage–Strassen algorithm for the multiplication of large numbers, which has a runtime of O(N log N log log N). For many years, this was the fastest way to multiply large integers, although Schönhage and Strassen conjectured that an algorithm with a runtime of O(N log N) should exist. In 2019, Joris van der Hoeven and David Harvey finally developed an algorithm with this runtime, proving that Schönhage's and Strassen's prediction had been correct. Schönhage designed and implemented, together with Andreas F. W. Grotefeld and Ekkehart Vetter, a multitape Turing machine, called TP, in software. The machine is programmed in TPAL, an assembler language. They implemented numerous numerical algorithms, including the Schönhage–Strassen algorithm, on this machine.
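The principle behind this family of fast multiplication algorithms can be illustrated with a toy FFT-based multiplier: the digit sequences of the two numbers are convolved via the fast Fourier transform, and the carries are then propagated. The actual Schönhage–Strassen algorithm uses exact arithmetic modulo Fermat numbers rather than floating-point FFTs; this sketch only shows the convolution idea and is reliable only for modestly sized inputs, where rounding the inverse transform recovers the exact convolution.

```python
import numpy as np

# Toy multiplication via FFT convolution of base-10 digit sequences.
# Illustrates the convolution principle behind Schönhage-Strassen,
# not the algorithm itself (which works modulo Fermat numbers).
def fft_multiply(a, b):
    da = [int(d) for d in str(a)][::-1]      # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):             # pad to a power of two
        n *= 2
    fa, fb = np.fft.fft(da, n), np.fft.fft(db, n)
    conv = np.rint(np.fft.ifft(fa * fb).real).astype(int)
    result, carry = 0, 0
    for i, c in enumerate(conv):             # propagate carries
        carry += int(c)
        result += (carry % 10) * 10 ** i
        carry //= 10
    return result + carry * 10 ** len(conv)

print(fft_multiply(12345, 6789) == 12345 * 6789)  # True
```

Pointwise multiplication of the two transforms costs O(n) after two FFTs of O(n log n), which is why convolution-based methods beat the O(n²) schoolbook algorithm for large operands.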
https://en.wikipedia.org/wiki/Vibrational%20energy%20relaxation
Vibrational energy relaxation, or vibrational population relaxation, is a process in which the population distribution of molecules over high-energy quantum states, created by an external perturbation, returns to the Maxwell–Boltzmann distribution. In solution, the process proceeds by intra- and intermolecular energy transfer. The excess energy of the excited vibrational mode is transferred to the kinetic modes of the same molecule or to the surrounding molecules. Through this process, the initially excited vibrational mode relaxes to a vibrational state of lower energy. This relaxation is called longitudinal relaxation, and its time constant is called the longitudinal relaxation time, or T1. Vibrational energy relaxation has been studied with time-resolved spectroscopy. Excitation by the pump pulse creates a population in the vibrationally excited state through infrared absorption or a Raman process while the molecule is in the electronic ground state. In addition, through an electronic transition, the molecule often moves to a vibrationally excited level of the electronic excited state. The energy relaxation from these vibrationally excited states can be observed with the probe pulse, which is delayed relative to the pump pulse.
https://en.wikipedia.org/wiki/Continuing%20medical%20education
Continuing medical education (CME) is continuing education (CE) that helps those in the medical field maintain competence and learn about new and developing areas of their field. These activities may take place as live events, written publications, online programs, audio, video, or other electronic media. Content for these programs is developed, reviewed, and delivered by faculty who are experts in their individual clinical areas. Similar to the process used in academic journals, any potentially conflicting financial relationships for faculty members must be both disclosed and resolved in a meaningful way. However, critics complain that drug and device manufacturers often use their financial sponsorship to bias CMEs towards marketing their own products. Historical context Continuing medical education is not a new concept. From essentially the beginning of institutionalized medical instruction (medical instruction affiliated with medical colleges and teaching hospitals), health practitioners continued their learning by meeting with their peers. Grand rounds, case discussions, and meetings to discuss published medical papers constituted the continuing learning experience. In the 1950s through to the 1980s, CME was increasingly funded by the pharmaceutical industry. Concerns regarding informational bias (both intentional and unintentional) led to increasing scrutiny of the CME funding sources. This led to the establishment of certifying agencies such as the Society for Academic Continuing Medical Education which is an umbrella organization representing medical associations and bodies of academic medicine from the United States, Canada, Great Britain and Europe. The pharmaceutical industry has also developed guidelines regarding drug detailing and industry sponsorship of CME, such as the Pharmaceutical Advertising Advisory Board (PAAB) and Canada's Research-Based Pharmaceutical Companies (Rx&D). Requirements In the United States, many states require CME for medical p
https://en.wikipedia.org/wiki/Sungazing
Sungazing is the unsafe practice of looking directly at the Sun. It is sometimes done as part of a spiritual or religious practice, most often near dawn or dusk. The human eye is very sensitive, and exposure to direct sunlight can lead to solar retinopathy, pterygium, cataracts, and often blindness. Studies have shown that even when viewing a solar eclipse the eye can still be exposed to harmful levels of ultraviolet radiation. Movements Referred to as sunning by William Horatio Bates as one of a series of exercises included in his Bates method, it became a popular form of alternative therapy in the early 20th century. His methods were widely debated at the time but ultimately discredited for lack of scientific rigor. The British Medical Journal reported in 1967 that "Bates (1920) advocated prolonged sun-gazing as the treatment of myopia, with disastrous results". See also Inedia (breatharianism) Joseph Plateau Scientific skepticism
https://en.wikipedia.org/wiki/Ecological%20speciation
Ecological speciation is a form of speciation arising from reproductive isolation that occurs due to an ecological factor that reduces or eliminates gene flow between two populations of a species. Ecological factors can include changes in the environmental conditions that a species experiences, such as behavioral changes involving predation, predator avoidance, pollinator attraction, and foraging; as well as changes in mate choice due to sexual selection or communication systems. Ecologically driven reproductive isolation under divergent natural selection leads to the formation of new species. This has been documented in many cases in nature and has been a major focus of research on speciation for the past few decades. Ecological speciation has been defined in various ways to identify it as distinct from nonecological forms of speciation. The evolutionary biologist Dolph Schluter defines it as "the evolution of reproductive isolation between populations or subsets of a single population by adaptation to different environments or ecological niches", while others believe natural selection is the driving force. The key difference between ecological speciation and other kinds of speciation is that it is triggered by divergent natural selection among different habitats, as opposed to other kinds of speciation processes like random genetic drift, the fixation of incompatible mutations in populations experiencing similar selective pressures, or various forms of sexual selection not involving selection on ecologically relevant traits. Ecological speciation can occur either in allopatry, sympatry, or parapatry—the only requirement being that speciation occurs as a result of adaptation to different ecological or micro-ecological conditions. Ecological speciation can occur pre-zygotically (barriers to reproduction that occur before the formation of a zygote) or post-zygotically (barriers to reproduction that occur after the formation of a zygote). Examples of pre-zygotic
https://en.wikipedia.org/wiki/Polarization-division%20multiple%20access
Polarization-division multiple access (PDMA) is a channel access method used in some cellular networks and broadcast satellite services. Separate antennas are used in this type, each with different polarization and followed by separate receivers, allowing simultaneous regional access of satellites. Each corresponding ground station antenna needs to be polarized in the same way as its counterpart in the satellite. This is generally accomplished by providing each participating ground station with an antenna that has dual polarization. The frequency band allocated to each antenna beam can be identical because the uplink signals are orthogonal in polarization. This technique allows frequency reuse. See also Frequency-division multiple access Code-division multiple access Time-division multiple access Channel access methods Polarization (waves)
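As an idealized sketch (ignoring depolarization, antenna cross-polar leakage, and Faraday rotation, all of which limit isolation in practice), the field amplitude a linearly polarized receive antenna captures from a linearly polarized wave is the cosine of the angle between the two polarizations, which is why two co-frequency signals on orthogonal polarizations can in principle be separated completely:

```python
import math

def received_amplitude(tx_angle_deg, rx_angle_deg):
    """Fraction of the transmitted field amplitude captured by a linearly
    polarized receive antenna: cos(angle between the two polarizations)."""
    return math.cos(math.radians(tx_angle_deg - rx_angle_deg))

# Vertical (90 deg) and horizontal (0 deg) signals share one frequency:
# a vertically polarized receiver sees the vertical signal at full
# amplitude and, in this ideal model, nothing of the horizontal one.
```

A 45-degree misalignment, by contrast, couples about 71 percent of the field amplitude (half the power) into the wrong receiver, which is why ground-station antennas must match the satellite's polarization.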
https://en.wikipedia.org/wiki/Orbit%20%28control%20theory%29
The notion of orbit of a control system used in mathematical control theory is a particular case of the notion of orbit in group theory. Definition Let $\dot q = f(q, u)$ be a control system, where $q$ belongs to a finite-dimensional manifold $M$ and $u$ belongs to a control set $U$. Consider the family $\mathcal{F} = \{ f(\cdot, u) \mid u \in U \}$ and assume that every vector field in $\mathcal{F}$ is complete. For every $f \in \mathcal{F}$ and every real $t$, denote by $e^{tf}$ the flow of $f$ at time $t$. The orbit of the control system through a point $q_0 \in M$ is the subset $\mathcal{O}_{q_0}$ of $M$ defined by $\mathcal{O}_{q_0} = \{ e^{t_k f_k} \circ \cdots \circ e^{t_1 f_1}(q_0) \mid k \in \mathbb{N},\ t_1, \dots, t_k \in \mathbb{R},\ f_1, \dots, f_k \in \mathcal{F} \}$. Remarks The difference between orbits and attainable sets is that, whereas for attainable sets only forward-in-time motions are allowed, both forward and backward motions are permitted for orbits. In particular, if the family $\mathcal{F}$ is symmetric (i.e., $f \in \mathcal{F}$ if and only if $-f \in \mathcal{F}$), then orbits and attainable sets coincide. The hypothesis that every vector field of $\mathcal{F}$ is complete simplifies the notations but can be dropped. In this case one has to replace flows of vector fields by local versions of them. Orbit theorem (Nagano–Sussmann) Each orbit $\mathcal{O}_{q_0}$ is an immersed submanifold of $M$. The tangent space to the orbit at a point $q$ is the linear subspace of $T_q M$ spanned by the vectors $P_* f(q)$, where $P_*$ denotes the pushforward of $P$, $f$ belongs to $\mathcal{F}$ and $P$ is a diffeomorphism of $M$ of the form $e^{t_k f_k} \circ \cdots \circ e^{t_1 f_1}$ with $k \in \mathbb{N}$, $t_1, \dots, t_k \in \mathbb{R}$ and $f_1, \dots, f_k \in \mathcal{F}$. If all the vector fields of the family $\mathcal{F}$ are analytic, then $T_q \mathcal{O}_{q_0} = \mathrm{Lie}_q\, \mathcal{F}$, where $\mathrm{Lie}_q\, \mathcal{F}$ is the evaluation at $q$ of the Lie algebra generated by $\mathcal{F}$ with respect to the Lie bracket of vector fields. Otherwise, the inclusion $\mathrm{Lie}_q\, \mathcal{F} \subseteq T_q \mathcal{O}_{q_0}$ holds true. Corollary (Rashevsky–Chow theorem) If $\mathrm{Lie}_q\, \mathcal{F} = T_q M$ for every $q \in M$ and if $M$ is connected, then each orbit is equal to the whole manifold $M$. See also Frobenius theorem (differential topology)
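A numeric illustration (not from the article) of why allowing backward motions enlarges orbits: for the standard "nonholonomic integrator" fields f(x, y, z) = (1, 0, -y/2) and g(x, y, z) = (0, 1, x/2) on R^3, neither field alone moves a point at the origin in the z direction, yet a square of forward and backward flows advances z, along the Lie bracket direction [f, g] = (0, 0, 1):

```python
def flow_f(p, t):
    """Exact time-t flow of the vector field f(x, y, z) = (1, 0, -y/2)."""
    x, y, z = p
    return (x + t, y, z - y * t / 2)

def flow_g(p, t):
    """Exact time-t flow of the vector field g(x, y, z) = (0, 1, x/2)."""
    x, y, z = p
    return (x, y + t, z + x * t / 2)

def bracket_loop(p, t):
    """Concatenation of flows: time t along f, t along g, then backward
    along f and backward along g. The point returns in x and y but
    advances by t**2 in z, a direction generated by the Lie bracket."""
    return flow_g(flow_f(flow_g(flow_f(p, t), t), -t), -t)
```

Since f, g, and [f, g] span R^3 at every point, the Rashevsky–Chow theorem implies every orbit of this two-field family is all of R^3, even though each individual flow is confined to simple curves.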
https://en.wikipedia.org/wiki/144th%20meridian%20west
The meridian 144° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole. The 144th meridian west forms a great circle with the 36th meridian east. The 144th meridian west is the western edge of the grid indexing scheme for Canada's National Topographic System. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 144th meridian west passes through: the Arctic Ocean; the Beaufort Sea; Alaska (Arey Island and the mainland); the Pacific Ocean (passing just west of Makemo atoll and just east of Hiti atoll); the Southern Ocean; and Antarctica (unclaimed territory). See also 143rd meridian west 145th meridian west
https://en.wikipedia.org/wiki/Incidence%20and%20Symmetry%20in%20Design%20and%20Architecture
Incidence and Symmetry in Design and Architecture is a book on symmetry, graph theory, and their applications in architecture, aimed at architecture students. It was written by Jenny Baglivo and Jack E. Graver and published in 1983 by Cambridge University Press in their Cambridge Urban and Architectural Studies book series. It won an Alpha Sigma Nu Book Award in 1983, and has been recommended for undergraduate mathematics libraries by the Basic Library List Committee of the Mathematical Association of America. Topics Incidence and Symmetry in Design and Architecture is divided into two parts of roughly equal length, each divided into four chapters. The first part, "Incidence", is primarily on graph theory. Its topics include the basic definitions of directed graphs and undirected graphs, homeomorphisms of graphs, Dijkstra's algorithm for the shortest path problem, planar graphs, polyhedral graphs, and Euler's polyhedral formula. This theory is applied to the grid bracing problem in structural rigidity, where the authors derive a novel equivalence between stabilizing a square grid by cross bracing and the strong connectivity augmentation of directed bipartite graphs. Other applications include optimal route design for facilities such as roads and power lines, the connectivity of floor plans of buildings, and the arrangement of building corridors to optimize average distance. This part of the book concludes with a treatment of the classification of two-dimensional topological surfaces. The second part of the book is "Symmetry". Its first chapter includes the basic definitions of group theory and of a Euclidean plane isometry, and the classification of isometries into translations, rotations, reflections, and glide reflections. The second of its chapters concerns the discrete isometry groups in the plane including the frieze groups and wallpaper groups, and the classification of two-dimensional patterns by their symmetries. Another chapter provides some partial gener
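The bracing result the book builds on can be made concrete. In the classical two-way (rigid diagonal) setting, a cross-braced rows-by-columns grid of squares is rigid exactly when the bipartite graph with one vertex per row, one vertex per column, and one edge per braced cell is connected; the book's novel contribution concerns the directed, strong-connectivity analogue for one-directional braces. A sketch of the classical connectivity test (function and variable names are illustrative):

```python
from collections import deque

def grid_rigid(rows, cols, braces):
    """Classical criterion: a cross-braced rows x cols grid of squares is
    rigid iff the bipartite row/column graph, with one edge per braced
    cell (i, j), is connected. Checked here by breadth-first search."""
    adj = {("r", i): [] for i in range(rows)}
    adj.update({("c", j): [] for j in range(cols)})
    for i, j in braces:
        adj[("r", i)].append(("c", j))
        adj[("c", j)].append(("r", i))
    start = ("r", 0)
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == rows + cols
```

For example, bracing only the two diagonal cells of a 2 x 2 grid leaves the row/column graph in two components, so the grid can still shear.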
https://en.wikipedia.org/wiki/Red%20Sulmona%20Garlic
Red Sulmona garlic (Italian: aglio rosso di Sulmona) is an Abruzzese variety of garlic; it is listed as a traditional Italian food product (P.A.T.) by the Ministry of Agricultural, Food and Forestry Policies. Red Sulmona garlic is grown on the Conca di Sulmona plateau, in the Valle Peligna area, in the province of L'Aquila. Regarded as an Abruzzo speciality, it is planted in the autumn months, between November and December, and harvested during the summer, between June and July.
https://en.wikipedia.org/wiki/Planarity%20testing
In graph theory, the planarity testing problem is the algorithmic problem of testing whether a given graph is a planar graph (that is, whether it can be drawn in the plane without edge intersections). This is a well-studied problem in computer science for which many practical algorithms have emerged, many taking advantage of novel data structures. Most of these methods operate in O(n) time (linear time), where n is the number of edges (or vertices) in the graph, which is asymptotically optimal. Rather than just being a single Boolean value, the output of a planarity testing algorithm may be a planar graph embedding, if the graph is planar, or an obstacle to planarity such as a Kuratowski subgraph if it is not. Planarity criteria Planarity testing algorithms typically take advantage of theorems in graph theory that characterize the set of planar graphs in terms that are independent of graph drawings. These include Kuratowski's theorem that a graph is planar if and only if it does not contain a subgraph that is a subdivision of K5 (the complete graph on five vertices) or K3,3 (the utility graph, a complete bipartite graph on six vertices, three of which connect to each of the other three). Wagner's theorem that a graph is planar if and only if it does not contain a minor (subgraph of a contraction) that is isomorphic to K5 or K3,3. The Fraysseix–Rosenstiehl planarity criterion, characterizing planar graphs in terms of a left-right ordering of the edges in a depth-first search tree. The Fraysseix–Rosenstiehl planarity criterion can be used directly as part of algorithms for planarity testing, while Kuratowski's and Wagner's theorems have indirect applications: if an algorithm can find a copy of K5 or K3,3 within a given graph, it can be sure that the input graph is not planar and return without additional computation. Other planarity criteria, that characterize planar graphs mathematically but are less central to planarity testing algorithms, include: Whitney's pla
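Before running a full linear-time planarity test, a constant-time necessary condition from Euler's formula is often checked first: a simple planar graph on v >= 3 vertices has at most 3v - 6 edges. A sketch (illustrative, not one of the published algorithms):

```python
def exceeds_planar_edge_bound(num_vertices, num_edges):
    """True if the graph provably cannot be planar because it violates
    the Euler-formula bound e <= 3v - 6 (simple graphs, v >= 3).
    False is inconclusive: the graph may or may not be planar."""
    return num_vertices >= 3 and num_edges > 3 * num_vertices - 6
```

K5 (5 vertices, 10 edges) fails the bound since 10 > 9, but K3,3 (6 vertices, 9 edges) passes it despite being non-planar, which is why complete tests rely on criteria such as Kuratowski's theorem or the Fraysseix–Rosenstiehl left-right characterization rather than edge counting.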
https://en.wikipedia.org/wiki/Causal%20perturbation%20theory
Causal perturbation theory is a mathematically rigorous approach to renormalization theory, which makes it possible to put the theoretical setup of perturbative quantum field theory on a sound mathematical basis. It goes back to a seminal work by Henri Epstein and Vladimir Jurko Glaser. Overview When developing quantum electrodynamics in the 1940s, Shin'ichiro Tomonaga, Julian Schwinger, Richard Feynman, and Freeman Dyson discovered that, in perturbative calculations, problems with divergent integrals abounded. The divergences appeared in calculations involving Feynman diagrams with closed loops of virtual particles. It is an important observation that in perturbative quantum field theory, time-ordered products of distributions arise in a natural way and may lead to ultraviolet divergences in the corresponding calculations. From the generalized functions point of view, the problem of divergences is rooted in the fact that the theory of distributions is a purely linear theory, in the sense that the product of two distributions cannot consistently be defined (in general), as was proved by Laurent Schwartz in the 1950s. Epstein and Glaser solved this problem for a special class of distributions that fulfill a causality condition, which itself is a basic requirement in axiomatic quantum field theory. In their original work, Epstein and Glaser studied only theories involving scalar (spinless) particles. Since then, the causal approach has been applied also to a wide range of gauge theories, which represent the most important quantum field theories in modern physics.
https://en.wikipedia.org/wiki/Personal%20Handy-phone%20System
The Personal Handy-phone System (PHS), also marketed as the Personal Communication Telephone (PCT) in Thailand, and the Personal Access System (PAS) and commercially branded as Xiaolingtong in Mainland China, was a mobile network system operating in the 1880–1930 MHz frequency band, used mainly in Japan, China, Taiwan, and some other Asian countries and regions. Outline Technology PHS is essentially a cordless telephone like DECT, with the capability to hand over from one cell to another. PHS cells are small, with base-station transmission power of at most 500 mW and range typically measured in tens or at most hundreds of metres (some can reach up to about 2 kilometres in line-of-sight), in contrast to the multi-kilometre ranges of CDMA and GSM. This makes PHS suitable for dense urban areas, but impractical for rural areas, and the small cell size also makes it difficult if not impossible to make calls from rapidly moving vehicles. PHS uses TDMA/TDD for its radio channel access method, and 32 kbit/s ADPCM for its voice codec. Modern PHS phones can also support many value-added services such as high-speed wireless data/Internet connection (64 kbit/s and higher), WWW access, e-mailing, and text messaging. PHS technology is also a popular option for providing a wireless local loop, where it is used for bridging the "last mile" gap between the POTS network and the subscriber's home. It was developed under the concept of providing a wireless front-end to an ISDN network. Thus a PHS base station is compatible with ISDN and is often connected directly to ISDN telephone exchange equipment, e.g. a digital switch. Thanks to its low-cost base stations, micro-cellular system and "Dynamic Cell Assignment" system, PHS offers higher frequency-use efficiency at lower cost (on a throughput-per-area basis) than typical 3G cellular telephone systems. It enables flat-rate wireless service such as AIR-EDGE, throughout Japan. The speed of an AIR-EDGE
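The gap between PHS microcells and multi-kilometre cellular cells can be illustrated with the free-space Friis path-loss formula (a deliberate simplification: real urban propagation adds substantial extra loss on top of the free-space figure):

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss in dB for distance in km and frequency in MHz:
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# In the PHS band (~1900 MHz), growing a 100 m cell to a 2 km cell costs
# an extra 20*log10(20), roughly 26 dB, of link budget, which the 500 mW
# PHS base station does not have to spare.
extra_db = (free_space_path_loss_db(2.0, 1900.0)
            - free_space_path_loss_db(0.1, 1900.0))
```

This is one way to see why low-power PHS base stations suit dense urban deployments while conventional cellular systems, with much higher transmit power, cover rural areas.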
https://en.wikipedia.org/wiki/Charles%20Colbourn
Charles Joseph Colbourn (born October 24, 1953) is a Canadian computer scientist and mathematician, whose research concerns graph algorithms, combinatorial designs, and their applications. From 1996 to 2001 he was the Dorothean Professor of Computer Science at the University of Vermont; since then he has been a professor of Computer Science and Engineering at Arizona State University. Colbourn was born on October 24, 1953, in Toronto, Ontario; despite working in the United States since 1996 he retains his Canadian citizenship. He did his undergraduate studies at the University of Toronto, graduating in 1976; after a master's degree at the University of Waterloo, he returned to Toronto for a Ph.D., which he received in 1980 under the supervision of Derek Corneil. He has held faculty positions at the University of Saskatchewan, University of Waterloo, University of Vermont, and Arizona State University, as well as visiting positions at several other universities. He has been one of three editors-in-chief of the Journal of Combinatorial Designs since 1992. In 2004, the Institute of Combinatorics and its Applications named Colbourn as that year's winner of their Euler Medal for lifetime achievements in combinatorics.
https://en.wikipedia.org/wiki/Automatic%20label%20placement
Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels. The typical features depicted on a geographic map are line features (e.g. roads), area features (countries, parcels, forests, lakes, etc.), and point features (villages, cities, etc.). In addition to depicting the map's features in a geographically accurate manner, it is of critical importance to place the names that identify these features, in a way that the reader knows instantly which name describes which feature. Automatic text placement is one of the most difficult, complex, and time-consuming problems in mapmaking and GIS (Geographic Information System). Other kinds of computer-generated graphics – like charts, graphs etc. – require good placement of labels as well, not to mention engineering drawings, and professional programs which produce these drawings and charts, like spreadsheets (e.g. Microsoft Excel) or computational software programs (e.g. Mathematica). Naively placed labels overlap excessively, resulting in a map that is difficult or even impossible to read. Therefore, a GIS must allow a few possible placements of each label, and often also an option of resizing, rotating, or even removing (suppressing) the label. Then, it selects a set of placements that results in the least overlap, and has other desirable properties. For all but the most trivial setups, the problem is NP-hard. Rule-based algorithms Rule-based algorithms try to emulate an experienced human cartographer. Over centuries, cartographers have developed the art of mapmaking and label placement. For example, an experienced cartographer repeats road names several times for long roads, instead of placing them once, or in the case of Ocean City depicted by a point very close to the shore, the cartographer would place the label "Ocean City" over the land to emphasize that it
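The selection step described above can be sketched as a greedy pass: each feature offers candidate label rectangles in preference order, and the first candidate that avoids every label already placed is kept, with the label suppressed when no candidate fits. (Production GIS engines use far more elaborate optimization, since the general problem is NP-hard; all names here are illustrative.)

```python
def overlaps(a, b):
    """True if axis-aligned rectangles (x0, y0, x1, y1) share interior area."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def greedy_place(candidates_per_feature):
    """For each feature, keep the first candidate rectangle that does not
    overlap any label placed so far; None marks a suppressed label."""
    placed, result = [], []
    for options in candidates_per_feature:
        choice = next((r for r in options
                       if all(not overlaps(r, p) for p in placed)), None)
        result.append(choice)
        if choice is not None:
            placed.append(choice)
    return result
```

Greedy placement is fast but order-dependent; metaheuristics such as simulated annealing or integer programming are used when higher-quality global solutions are needed.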
https://en.wikipedia.org/wiki/Adequate%20pointclass
In the mathematical field of descriptive set theory, a pointclass can be called adequate if it contains all recursive pointsets and is closed under recursive substitution, bounded universal and existential quantification and preimages by recursive functions.
https://en.wikipedia.org/wiki/Rip%20current
A rip current (also rip) is a specific type of water current that can occur near beaches where waves break. A rip is a strong, localized, and narrow current of water that moves directly away from the shore by cutting through the lines of breaking waves, like a river flowing out to sea. The force of the current in a rip is strongest and fastest next to the surface of the water. Rip currents can be hazardous to people in the water. Swimmers who are caught in a rip current and who do not understand what is happening, or who may not have the necessary water skills, may panic, or they may exhaust themselves by trying to swim directly against the flow of water. Because of these factors, rip currents are the leading cause of rescues by lifeguards at beaches. In the United States they cause an average of 71 deaths by drowning per year. A rip current is not the same thing as undertow, although some people use the term incorrectly when they are talking about a rip current. Contrary to popular belief, neither rip nor undertow can pull a person down and hold them under the water. A rip simply carries floating objects, including people, out to just beyond the zone of the breaking waves, at which point the current dissipates and releases everything it is carrying. Causes and occurrence A rip current forms because wind and breaking waves push surface water towards the land. This causes a slight rise in the water level along the shore. This excess water will tend to flow back to the open water via the route of least resistance. When there is a local area which is slightly deeper, such as a break in an offshore sand bar or reef, this can allow water to flow offshore more easily, and this will initiate a rip current through that gap. Water that has been pushed up near the beach flows along the shore towards the outgoing rip as "feeder currents". The excess water flows out at a right angle to the beach, in a tight current called the "neck" of the rip. The "neck" is where the flow
https://en.wikipedia.org/wiki/Angel%20problem
The angel problem is a question in combinatorial game theory proposed by John Horton Conway. The game is commonly referred to as the angels and devils game. The game is played by two players called the angel and the devil. It is played on an infinite chessboard (or equivalently the points of a 2D lattice). The angel has a power k (a natural number 1 or higher), specified before the game starts. The board starts empty with the angel in one square. On each turn, the angel jumps to a different empty square which could be reached by at most k moves of a chess king, i.e. the distance from the starting square is at most k in the infinity norm. The devil, on its turn, may add a block on any single square not containing the angel. The angel may leap over blocked squares, but cannot land on them. The devil wins if the angel is unable to move. The angel wins by surviving indefinitely. The angel problem is: can an angel with high enough power win? There must exist a winning strategy for one of the players. If the devil can force a win then it can do so in a finite number of moves. If the devil cannot force a win then there is always an action that the angel can take to avoid losing and a winning strategy for it is always to pick such a move. More abstractly, the "pay-off set" (i.e., the set of all plays in which the angel wins) is a closed set (in the natural topology on the set of all plays), and it is known that such games are determined. Of course, for any infinite game, if player 2 doesn't have a winning strategy, player 1 can always pick a move that leads to a position where player 2 doesn't have a winning strategy, but in some games, simply playing forever doesn't confer a win to player 1, so undetermined games may exist. Conway offered a reward for a general solution to this problem ($100 for a winning strategy for an angel of sufficiently high power, and $1000 for a proof that the devil can win irrespective of the angel's power). Progress was made first in higher
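The move rule is easy to state in code. A small sketch (illustrative names) enumerating the legal destinations of a power-k angel, namely the empty squares within Chebyshev distance k, together with the devil's winning condition:

```python
def angel_moves(position, power, blocked):
    """Squares an angel of the given power may jump to: every square other
    than its own within Chebyshev (infinity-norm) distance `power` that is
    not blocked. Blocked squares may be leapt over but not landed on."""
    x, y = position
    return {(x + dx, y + dy)
            for dx in range(-power, power + 1)
            for dy in range(-power, power + 1)
            if (dx, dy) != (0, 0) and (x + dx, y + dy) not in blocked}

def devil_wins(position, power, blocked):
    """The devil wins exactly when the angel has no legal move."""
    return not angel_moves(position, power, blocked)
```

A power-k angel has (2k + 1)**2 - 1 destinations on an empty board, so the devil must wall in an ever larger neighbourhood as k grows, which is what makes the high-power case hard for the devil.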
https://en.wikipedia.org/wiki/Informed%20assent
The term informed assent describes the process whereby minors may agree to participate in clinical trials. It is similar to the process of informed consent in adults, however there remains some overlap between the terms. Background In adult medical research, the term informed consent is used to describe a state whereby a competent individual, having been fully informed about the nature, benefits and risks of a clinical trial, agrees to their own participation. National authorities define certain populations as vulnerable and therefore unable to provide informed consent, such as those without the necessary cognitive, psychological, or social maturity to understand these benefits and risks. The oft-reported belief that minors (for the purposes of this discussion, read minors as persons under the age of 18 years) are considered a vulnerable population and therefore may not autonomously provide informed consent, is actually an oversimplification that does not always hold true. In fact, the requirements for children participating in clinical trials are somewhat indistinct, with freedom to vary both between countries and within countries. For this reason, two terms have sprung into existence: pediatric consent and pediatric assent. Geographic variation In United States William G. Bartholome, MD, drafted the first statement for pediatric participation presented to the original American Academy of Pediatrics (AAP) Committee on Bioethics in 1985. The U.S. Food and Drug Administration encourages clinical trials in children in order to ensure the development of safe and effective pediatric medicines. According to the relevant Code of Federal Regulations (45 CFR 46, Subpart d), investigators wishing to conduct clinical trials in children in the United States are required to seek the permission of both parents and patients. This regulation defines informed assent as "a child's affirmative agreement to participate in research" and stipulates that mere failure to object can
https://en.wikipedia.org/wiki/Effect%20of%20radiation%20on%20perceived%20temperature
The "radiation effect" results from radiation heat exchange between human bodies and surrounding surfaces, such as walls and ceilings. It may lead to phenomena such as houses feeling cooler in the winter and warmer in the summer at the same air temperature. For example, in a room in which the air temperature is maintained at 22 °C at all times, but in which the inner surfaces of the house are estimated to be at an average temperature of 10 °C in the winter or 25 °C in the summer, heat transfer between the surfaces and the individual will occur, resulting in a difference in the perceived temperature. We can observe and compare the rate of radiation heat transfer between a person and the surrounding surfaces if we first make a few simplifying assumptions: The heat exchange in the environment is in a "steady state", meaning that there is a constant flow of heat either into or out of the house. The person is completely surrounded by the interior surfaces of the room. Heat transfer by convection is not considered. The walls, ceiling, and floor are all at the same temperature. For an average person, the outer surface area is 1.4 m2, the surface temperature is 30 °C, and the emissivity (ε) is 0.95. Emissivity is the ability of a surface to emit radiative energy compared to that of a black body at the same temperature. We will be using the following equation to find out how much heat is lost by a person standing in the same room in summertime as compared to the winter, at exactly the same thermostat reading: $\dot{Q}_{\text{rad}} = \varepsilon \sigma A_s \left( T_s^4 - T_{\text{surr}}^4 \right)$, where $\dot{Q}_{\text{rad}}$ is the rate of heat loss (W), $\varepsilon$ is the emissivity (the ability of an object's surface to emit energy by radiation) of a person, $\sigma$ is the Stefan–Boltzmann constant ($5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$), $A_s$ is the surface area of a person, $T_s$ is the surface temperature of a person (K), and $T_{\text{surr}}$ is the surface temperature of the walls, ceiling, and floor (K). This equation is only valid for an object standing in a completely enclosed room, box, etc. In the winter, the amount of heat loss from a pers
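The winter/summer comparison can be checked numerically with the article's own figures (emissivity 0.95, area 1.4 m², skin at 30 °C, walls at 10 °C in winter versus 25 °C in summer), under the same simplifying assumptions listed above:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_loss_watts(emissivity, area_m2, skin_c, surr_c):
    """Net radiative heat loss eps * sigma * A * (Ts^4 - Tsurr^4),
    with temperatures converted from Celsius to kelvin."""
    t_skin = skin_c + 273.15
    t_surr = surr_c + 273.15
    return emissivity * SIGMA * area_m2 * (t_skin**4 - t_surr**4)

# Person in a room held at 22 C, but with different wall temperatures:
winter = radiative_loss_watts(0.95, 1.4, 30.0, 10.0)  # walls at 10 C
summer = radiative_loss_watts(0.95, 1.4, 30.0, 25.0)  # walls at 25 C
```

With these inputs the person radiates roughly 150 W to the cold winter walls but only about 40 W to the warm summer walls, which is why the same thermostat reading feels colder in winter.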
https://en.wikipedia.org/wiki/Maternal%20effect%20dominant%20embryonic%20arrest
Maternal effect dominant embryonic arrest (Medea) is a selfish gene composed of a toxin and an antidote. A mother carrying Medea will express the toxin in her germline, killing her progeny. If the children also carry Medea, they produce copies of the antidote, saving their lives. Therefore, if a mother has one Medea allele and one non-Medea allele, half of her children will inherit Medea and survive while the other half will inherit the non-Medea allele and die (unless they receive Medea from their father). Medea's selfish behavior gives it a selective advantage over normal genes. If introduced into a population at sufficiently high levels, the Medea gene will spread, replacing entire populations of normal beetles with beetles carrying Medea. Because of this, Medea has been proposed as a way of genetically modifying insect populations. By linking the Medea construct to a gene of interest - for instance, a gene conferring resistance to malaria - Medea's unique dynamics could be exploited to drive both genes into a population. These findings have dramatic implications for the control of insect-borne diseases such as malaria and dengue fever. Construction of Medea Medea, which has been found in nature only in flour beetles, is an example of a selfish gene that has been simulated in the lab and tested in the fruit fly Drosophila melanogaster. The toxin was a microRNA that blocked the expression of myd88, a gene vital for embryonic development in insects. The antidote was an extra copy of myd88. The offspring receiving the extra copy of myd88 survived and hatched, while those without the extra copy died. In lab trials where 25% of the original members were homozygous for Medea, the gene spread to the entire population within 10 to 12 generations. Etymology Medea was named for the Greek mythological figure of Medea, who killed her children when her husband left her for another woman. See also Intragenomic conflict Green-beard effect Medea gene Toxi
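The spread dynamics can be sketched with a deliberately simplified deterministic model (assumptions, all mine rather than from the published experiments: an infinite randomly mating population, equal genotype frequencies in both sexes, no fitness cost of carrying Medea, and a maternal toxin that kills exactly those offspring of a carrier mother that inherit no M allele):

```python
def next_generation(p_MM, p_Mm, p_mm):
    """One generation of random mating with a Medea allele M (deterministic,
    infinite population). Offspring of a Medea-carrying mother survive only
    if they inherit at least one M allele, from either parent."""
    trans = {"MM": 1.0, "Mm": 0.5, "mm": 0.0}   # P(parent transmits M)
    freqs = {"MM": p_MM, "Mm": p_Mm, "mm": p_mm}
    out = {"MM": 0.0, "Mm": 0.0, "mm": 0.0}
    for mom, f_mom in freqs.items():
        for dad, f_dad in freqs.items():
            pm, pd = trans[mom], trans[dad]
            off = {"MM": pm * pd,
                   "Mm": pm * (1 - pd) + (1 - pm) * pd,
                   "mm": (1 - pm) * (1 - pd)}
            if mom != "mm":            # toxin-expressing mother:
                off["mm"] = 0.0        # offspring without M die
            for g, p in off.items():
                out[g] += f_mom * f_dad * p
    total = sum(out.values())          # renormalize among survivors
    return (out["MM"] / total, out["Mm"] / total, out["mm"] / total)
```

Iterating this recursion from 25% Medea homozygotes, the frequency of the M allele climbs toward fixation, the qualitative behavior reported in the lab trials; the exact trajectory depends on the simplifications above.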
https://en.wikipedia.org/wiki/Provisional%20Low%20Temperature%20Scale%20of%202000
The Provisional Low Temperature Scale of 2000 (PLTS-2000) is an equipment calibration standard for making measurements of very low temperatures, in the range of 0.9 mK (millikelvin) to 1 K, adopted by the International Committee for Weights and Measures in October 2000. It is based on the melting pressure of solidified helium-3. At these low temperatures, the melting pressure of helium-3 varies from about 2.9 MPa to nearly 4.0 MPa. At the temperature of approximately 315 mK, a minimum of pressure (2.9 MPa) occurs. Although this gives a disadvantage of non-monotonicity, in that two different temperatures can give the same pressure, the scale is otherwise robust since the melting pressure of helium-3 is insensitive to many experimental factors. See also International Temperature Scale of 1990 (ITS-90) — the calibration standard used for all temperatures above 0.6 K Leiden scale
https://en.wikipedia.org/wiki/Orbot
Orbot is a free proxy app that provides anonymity on the Internet for users of the Android and iOS operating systems. It allows traffic from apps such as web browsers, email clients, map programs, and others to be routed via the Tor network. This tool is used to keep the communications of users anonymous and hidden from governments and third parties that might be monitoring their internet traffic. Reception In 2014 Orbot was discussed in detail in an article on "reporting securely from an Android device". In January 2016, Lisa Vaas of NakedSecurity by Sophos described plans to use Tor, including with Orbot on Android, to connect to Facebook. In July 2021, Tech Radar named Orbot one of 8 "Best privacy apps for Android in 2021" but warned of slower speeds. In July 2021 Android Authority discussed Tor Browser and Orbot in brief reviews of "15 best Android browsers". In November 2021, John Leyden of The Daily Swig described collaboration between the Tor Project and the Guardian Project to develop Orbot for censorship circumvention for any application on a device, but warned Orbot does not remove identifying information from app traffic. In February 2022, Andrew Orr of the Mac Observer wrote about using Orbot on iOS. In April 2022, Shubham Agarwal of Laptop magazine, in a detailed review of Tor, recommended installing Orbot on Android phones to use Tor. In July 2022, Laiba Mohsin of PhoneWorld.com described Orbot as a simple way to access the Dark Web on mobile. In October 2022, Damir Mujezinovic of MakeUseOf described Orbot as a "flagship" product for both iOS and Android to use the Tor network, and said it "will not make you completely anonymous, but it can certainly help bypass certain geographical restrictions," In November 2022, Mujezinovic wrote a detailed guide to using Orbot on iOS or Android. In January 2023, Ramces Red of MakeTechEasier.com wrote instructions for using the Tor network with Orbot for a mobile Monero wallet.