https://en.wikipedia.org/wiki/Big%20design%20up%20front
Big design up front (BDUF) is a software development approach in which the program's design is to be completed and perfected before that program's implementation is started. It is often associated with the waterfall model of software development. Synonyms for big design up front (BDUF) are big modeling up front (BMUF) and big requirements up front (BRUF). These are viewed as anti-patterns within agile software development. Arguments for Proponents of the waterfall model argue that time spent in designing is a worthwhile investment, in the hope that a bug found in the early stages of a software product's lifecycle will take less time and effort to fix than the same bug found later. That is, it is much easier to fix a requirements bug in the requirements phase than in the implementation phase, because fixing a requirements bug in the implementation phase requires scrapping at least some of the implementation and design work that has already been completed. Joel Spolsky, a popular online commentator on software development, has argued strongly in favor of big design up front: "Many times, thinking things out in advance saved us serious development headaches later on. ... [on making a particular specification change] ... Making this change in the spec took an hour or two. If we had made this change in code, it would have added weeks to the schedule. I can’t tell you how strongly I believe in Big Design Up Front, which the proponents of Extreme Programming consider anathema. I have consistently saved time and made better products by using BDUF and I’m proud to use it, no matter what the XP fanatics claim. They’re just wrong on this point and I can’t be any clearer than that." However, several commentators have argued that what Joel has called big design up front doesn't resemble the BDUF criticized by advocates of XP and other agile software development methodologies, because he himself says his example was neither recognizably big nor designed entirely up front.
https://en.wikipedia.org/wiki/Soft-bodied%20organism
Soft-bodied organisms are animals that lack skeletons. The group roughly corresponds to the group Vermes as proposed by Carl von Linné. All animals have muscles but, since muscles can only pull, never push, a number of animals have developed hard parts that the muscles can pull on, commonly called skeletons. Such skeletons may be internal, as in vertebrates, or external, as in arthropods. However, many animal groups do very well without hard parts. These include animals such as earthworms, jellyfish, tapeworms, squids and an enormous variety of animals from almost every part of the kingdom Animalia. Commonality Most soft-bodied animals are small, but they do make up the majority of the animal biomass. If we were to weigh up all animals on Earth with hard parts against soft-bodied ones, estimates indicate that the biomass of soft-bodied animals would be at least twice that of animals with hard parts, quite possibly much larger. The roundworms in particular are extremely numerous. The nematodologist Nathan Cobb described the ubiquitous presence of nematodes on Earth as follows: "In short, if all the matter in the universe except the nematodes were swept away, our world would still be dimly recognizable, and if, as disembodied spirits, we could then investigate it, we should find its mountains, hills, vales, rivers, lakes, and oceans represented by a film of nematodes. The location of towns would be decipherable, since for every massing of human beings there would be a corresponding massing of certain nematodes. Trees would still stand in ghostly rows representing our streets and highways. The location of the various plants and animals would still be decipherable, and, had we sufficient knowledge, in many cases even their species could be determined by an examination of their erstwhile nematode parasites." Anatomy Not being a true phylogenetic group, soft-bodied organisms vary enormously in anatomy. Cnidarians and flatworms have a single opening to the gut and a d
https://en.wikipedia.org/wiki/CJ%20Affiliate
CJ Affiliate (formerly Commission Junction) is an online advertising company owned by Publicis Groupe, operating worldwide in the affiliate marketing industry. The corporate headquarters is in Santa Barbara, California, and there are offices in Atlanta, GA; Chicago, IL; New York, NY; San Francisco, CA; Westlake Village, CA; and Westborough, MA in the US, and in the UK, Germany, France, Spain, Sweden, India, and South Africa. beFree, Inc. / ValueClick, Inc. Former Commission Junction competitor beFree, Inc. was acquired by ValueClick, Inc. in 2002, before Commission Junction. beFree was gradually phased out in favor of Commission Junction. On February 3, 2014 ValueClick, Inc. announced it had changed its name to Conversant, Inc., bringing former ValueClick, Inc. companies Commission Junction, Dotomi, Greystripe, Mediaplex, and ValueClick Media under one name. Conversant was bought by Alliance Data in 2014. Commission Junction continues to be known as CJ Affiliate. See also Affiliate marketing Affiliate programs directories Affiliate networks
https://en.wikipedia.org/wiki/Blytt%E2%80%93Sernander%20system
The Blytt–Sernander classification, or sequence, is a series of north European climatic periods or phases based on the study of Danish peat bogs by Axel Blytt (1876) and Rutger Sernander (1908). The classification was incorporated into a sequence of pollen zones later defined by Lennart von Post, one of the founders of palynology. Description Layers in peat were first noticed by Heinrich Dau in 1829. A prize was offered by the Royal Danish Academy of Sciences and Letters to anyone who could explain them. Blytt hypothesized that the darker layers were deposited in drier times; the lighter, in moister times, applying his terms Atlantic (warm, moist) and Boreal (cool, dry). In 1926 C. A. Weber noticed the sharp boundary horizons, or Grenzhorizonte, in German peat, which matched Blytt's classification. Sernander defined subboreal and subatlantic periods, as well as the late glacial periods. Other scientists have since added other information. The classification was devised before the development of more accurate dating methods, such as C-14 dating and oxygen isotope ratio cycles. Geologists working in different regions are studying sea levels, peat bogs and ice core samples by a variety of methods, with a view toward further verifying and refining the Blytt–Sernander sequence. They find a general correspondence across Eurasia and North America. The fluctuations of climatic change are more complex than Blytt–Sernander periodizations can identify. For example, recent peat core samples at Roskilde Fjord and at Lake Kornerup in Denmark identified 40 and 62 distinguishable layers of pollen, respectively. However, no universally accepted replacement model has been proposed. Problems Dating and calibration Today the Blytt–Sernander sequence has been substantiated by a wide variety of scientific dating methods, mainly radiocarbon dates obtained from peat. Earlier radiocarbon dates were often left uncalibrated; that is, they were derived by assuming a constant concentrati
https://en.wikipedia.org/wiki/Image%20persistence
Image persistence, or image retention, is the LCD and plasma display equivalent of screen burn-in. Unlike screen burn-in, the effects are usually temporary and often not visible without close inspection. Plasma displays experiencing severe image persistence can develop screen burn-in instead. Image persistence can occur after something as simple as a web page or document remains unchanged in the same location on the screen for as little as 10 minutes. Minor cases of image persistence are generally only visible when looking at darker areas on the screen, and are usually invisible to the eye during ordinary computer use. Cause Liquid crystals have a natural relaxed state. When a voltage is applied they rearrange themselves to block certain light waves. If left with the same voltage for an extended period of time (e.g. displaying a pointer or the Taskbar in one place, or showing a static picture for extended periods of time), the liquid crystals can develop a tendency to stay in one position. This ever-so-slight tendency to stay arranged in one position can throw the requested color off by a slight degree, which causes the image to look like the traditional "burn-in" on phosphor-based displays. In fact, the root cause of LCD image retention is different from phosphor aging, but the phenomenon is the same, namely uneven use of display pixels. Slight LCD image retention can be recovered. When severe image retention occurs, the liquid crystal molecules have been polarized and can no longer rotate in the electric field, so they cannot be recovered. The cause of this tendency is unclear. It might be due to various factors, including accumulation of ionic impurities inside the LCD, impurities introduced during the fabrication of the LCD, imperfect driver settings, electric charge building up near the electrodes, parasitic capacitance, or a DC voltage component that occurs unavoidably in some display pixels owing to anisotropy in the dielectric constant of the liquid crystal.
https://en.wikipedia.org/wiki/Electronic%20Communications%20Act%202000
The Electronic Communications Act 2000 (c.7) is an Act of the Parliament of the United Kingdom that: Had provisions to regulate the provision of cryptographic services in the UK (ss.1-6); and Confirms the legal status of electronic signatures (ss.7-10). The United Kingdom government had come to the conclusion that encryption, encryption services and electronic signatures would be important to e-commerce in the UK. By 1999, however, only the security services still hankered after key escrow. So a "sunset clause" was put in the bill. The Electronic Communications Act 2000 gave the Home Office the power to create a registration regime for encryption services. This was given a five-year period before it would automatically lapse, which eventually happened in May 2006.
https://en.wikipedia.org/wiki/Gun-type%20fission%20weapon
Gun-type fission weapons are fission-based nuclear weapons whose design assembles their fissile material into a supercritical mass by the use of the "gun" method: shooting one piece of sub-critical material into another. Although this is sometimes pictured as two sub-critical hemispheres driven together to make a supercritical sphere, typically a hollow projectile is shot onto a spike, which fills the hole in its center. The name is a reference to the fact that the material is shot through an artillery barrel as if it were a projectile. Since the gun method is a relatively slow means of assembly, plutonium cannot be used unless it is purely the 239 isotope. Production of impurity-free plutonium is very difficult and impractical. The required amount of uranium is relatively large, and thus the overall efficiency is relatively low. The main reason for this is that the uranium metal does not undergo compression (and the resulting density increase) as it does in the implosion design. Instead, gun-type bombs assemble the supercritical mass by amassing such a large quantity of uranium that the overall distance through which daughter neutrons must travel spans so many mean free paths that it becomes very probable most neutrons will find uranium nuclei to collide with before escaping the supercritical mass. The first time gun-type fission weapons were discussed was as part of the British Tube Alloys nuclear bomb development program, the world's first nuclear bomb development program. The British MAUD Report of 1941 laid out how "an effective uranium bomb which, containing some 25 lb of active material, would be equivalent as regards destructive effect to 1,800 tons of T.N.T". The bomb would use the gun-type design "to bring the two halves together at high velocity and it is proposed to do this by firing them together with charges of ordinary explosive in a form of double gun". The method was applied in four known US programs. First, the "Little Boy" weapon which was detonated over Hiroshima
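Returning to the mean-free-path argument above: a small illustrative Python sketch (mine, not from the article; the path lengths are arbitrary) of how the chance that a neutron interacts before escaping grows with the number of mean free paths it must cross:

import math

# P(interact before escape) = 1 - exp(-L/lambda), with the distance L
# expressed in units of the mean free path lambda. Illustrative only.
def interaction_probability(mean_free_paths):
    return 1.0 - math.exp(-mean_free_paths)

for n in (0.5, 1, 2, 5):
    print(f"{n:>4} mean free paths -> P(interact) = {interaction_probability(n):.2f}")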
https://en.wikipedia.org/wiki/Ion%20plating
Ion plating (IP) is a physical vapor deposition (PVD) process that is sometimes called ion assisted deposition (IAD) or ion vapor deposition (IVD) and is a modified version of vacuum deposition. Ion plating uses concurrent or periodic bombardment of the substrate, and deposits film by atomic-sized energetic particles called ions. Bombardment prior to deposition is used to sputter-clean the substrate surface. During deposition the bombardment is used to modify and control the properties of the depositing film. It is important that the bombardment be continuous between the cleaning and the deposition portions of the process to maintain an atomically clean interface; if the interface is not properly cleaned, the result can be a weaker coating or poor adhesion. There are many different vacuum-deposition coating processes, used for various applications such as resistance to corrosion and wear. Process In ion plating, the energy, flux and mass of the bombarding species, along with the ratio of bombarding particles to depositing particles, are important processing variables. The depositing material may be vaporized either by evaporation, sputtering (bias sputtering), arc vaporization or by decomposition of a chemical vapor precursor (chemical vapor deposition, CVD). The energetic particles used for bombardment are usually ions of an inert or reactive gas, or, in some cases, ions of the condensing film material ("film ions"). Ion plating can be done in a plasma environment, where ions for bombardment are extracted from the plasma, or in a vacuum environment, where ions for bombardment are formed in a separate ion gun. The latter ion plating configuration is often called Ion Beam Assisted Deposition (IBAD). By using a reactive gas or vapor in the plasma, films of compound materials can be deposited. Ion plating is used to deposit hard coatings of compound materials on tools, adherent metal coatings, optical coatings with hig
https://en.wikipedia.org/wiki/Wire%20rope
Wire rope is composed of as few as two solid, metal wires twisted into a helix that forms a composite rope, in a pattern known as laid rope. Larger diameter wire rope consists of multiple strands of such laid rope in a pattern known as cable laid. Manufactured using an industrial machine known as a strander, the wires are fed through a series of barrels and spun into their final composite orientation. In stricter senses, the term wire rope refers to a diameter larger than , with smaller gauges designated cable or cords. Initially wrought iron wires were used, but today steel is the main material used for wire ropes. Historically, wire rope evolved from wrought iron chains, which had a record of mechanical failure. While flaws in chain links or solid steel bars can lead to catastrophic failure, flaws in the wires making up a steel cable are less critical as the other wires easily take up the load. While friction between the individual wires and strands causes wear over the life of the rope, it also helps to compensate for minor failures in the short run. Wire ropes were developed starting with mining hoist applications in the 1830s. Wire ropes are used dynamically for lifting and hoisting in cranes and elevators, and for transmission of mechanical power. Wire rope is also used to transmit force in mechanisms, such as a Bowden cable or the control surfaces of an airplane connected to levers and pedals in the cockpit. Only aircraft cables have WSC (wire strand core). Also, aircraft cables are available in smaller diameters than wire rope. For example, aircraft cables are available in diameter while most wire ropes begin at a diameter. Static wire ropes are used to support structures such as suspension bridges or as guy wires to support towers. An aerial tramway relies on wire rope to support and move cargo overhead. History Modern wire rope was invented by the German mining engineer Wilhelm Albert in the years between 1831 and 1834 for use in mining in the Harz M
https://en.wikipedia.org/wiki/Obturator%20foramen
The obturator foramen is the large, bilaterally paired opening of the bony pelvis. It is formed by the pubis and ischium. It is mostly closed by the obturator membrane except for a small opening - the obturator canal - through which the obturator nerve and vessels pass. Structure The obturator foramen is situated inferior and somewhat anterior to the acetabulum. It is bounded by the pubis bone and the ischium: superiorly by the (grooved obturator surface) of the superior ramus of pubis, inferiorly by the ramus of ischium, and laterally by (the anterior edge of) the body of ischium (including by the margin of the acetabulum). The margin of the foramen is thin and uneven, and gives attachment to the obturator membrane. Superiorly, it presents a deep groove - the obturator groove - which passes obliquely inferomedially from the pelvis. The foramen is largely closed by the obturator membrane save for a small opening at the superolateral end of the obturator foramen - the obturator canal - which establishes a communication between the pelvic cavity and the thigh. This canal gives passage to the obturator nerve, artery, and veins. The free edge of the obturator membrane that bounds the obturator canal attaches at two tubercles (which may be indistinct): The anterior obturator tubercle - situated on the obturator crest at the anterior extremity of the inferior border of the superior ramus of pubis. The posterior obturator tubercle - situated at the anterior border of the acetabular notch (and thus on the medial border of the ischium). Variation In accordance with the overall sex dimorphism of the pelvis, the obturator foramina are oval in the male, and wider and rather triangular in the female. Unilateral pelvic hypoplasia can cause differences in size between the obturator foramina. Rarely, the obturator foramen may be doubled on one side. See also Obturator internus muscle Obturator externus muscle Additional images
https://en.wikipedia.org/wiki/Super%20PI
Super PI is a computer program that calculates pi to a specified number of digits after the decimal point—up to a maximum of 32 million. It uses the Gauss–Legendre algorithm and is a Windows port of the program used by Yasumasa Kanada in 1995 to compute pi to 2^32 digits. Significance Super PI is popular in the overclocking community, both as a benchmark to test the performance of these systems and as a stress test to check that they are still functioning correctly. Credibility concerns The competitive nature of achieving the best Super PI calculation times led to fraudulent Super PI results, reporting calculation times faster than normal. Attempts to counter the fraudulent results led to a modified version of Super PI, with a checksum to validate the results. However, other methods exist of producing inaccurate or fake time results, raising questions about the program's future as an overclocking benchmark. Super PI uses x87 floating-point instructions, which are supported on all x86 and x86-64 processors; current processors also support the lower-precision Streaming SIMD Extensions vector instructions. The future Super PI is single-threaded, so its relevance as a measure of performance in the current era of multi-core processors is diminishing quickly. wPrime has therefore been developed to run multiple threaded calculations at the same time, so one can test stability on multi-core machines. Other multithreaded programs include Hyper PI, IntelBurnTest, Prime95, Montecarlo superPI, OCCT and y-cruncher. Last but not least, SuperPI is unable to calculate more than 32 million digits, is much slower than these other programs, and uses inferior algorithms to them; using y-cruncher, Alexander J. Yee and Shigeru Kondo set a record of 10 trillion and 50 digits of pi on October 16, 2011, on a machine with two Intel Xeon X5680s at 3.33 GHz (12 physical cores, 24 hyperthreaded).
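To illustrate the Gauss–Legendre algorithm mentioned above, here is a minimal Python sketch of the textbook iteration (illustrative only, not Super PI's actual code); the number of correct digits roughly doubles with each pass:

from decimal import Decimal, getcontext

def gauss_legendre_pi(digits):
    # work at slightly higher precision than requested (guard digits)
    getcontext().prec = digits + 10
    a, b = Decimal(1), Decimal(1) / Decimal(2).sqrt()
    t, p = Decimal("0.25"), Decimal(1)
    for _ in range(max(digits, 2).bit_length()):  # ~log2(digits) passes suffice
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(str(gauss_legendre_pi(50))[:52])  # 3. followed by 50 correct digits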
https://en.wikipedia.org/wiki/Vasomotor
Vasomotor refers to actions upon a blood vessel which alter its diameter. More specifically, it can refer to vasodilator action and vasoconstrictor action. Control Sympathetic innervation Sympathetic nerve fibers travel around the tunica media of the artery and secrete neurotransmitters such as norepinephrine from the terminal knob of the axon into the extracellular fluid surrounding the smooth muscle (tunica media). The smooth muscle cell membranes have α- and β-adrenergic receptors for these neurotransmitters. Activation of α-adrenergic receptors promotes vasoconstriction, while activation of β-adrenergic receptors mediates the relaxation of muscle cells, resulting in vasodilation. Normally, α-adrenergic receptors predominate in the smooth muscle of resistance vessels. Endothelium-derived chemicals Endothelin and angiotensin are vasoconstrictors of smooth muscle, while nitric oxide and prostacyclin are vasodilators. Pathology Some vasoactive chemicals, such as the vasodilator acetylcholine, are known to alter blood flow in tumours through vasomotor changes. Inadequate blood supply to tumour cells can make the cells radio-resistant and reduce their accessibility to chemotherapeutic agents. Injuries to nerves of the lower trunk of the brachial plexus (Klumpke's paralysis) and compression of the median nerve at the flexor retinaculum of the hand (carpal tunnel syndrome) can cause vasomotor changes in the areas innervated by those nerves. The affected skin becomes warmer because of vasodilation (loss of vasoconstriction). Depression of the vasomotor center of the brain can cause loss of vasomotor tone of blood vessels, resulting in massive dilatation of veins, a condition called neurogenic shock. See also Vasoconstriction Vasodilation
https://en.wikipedia.org/wiki/Solid-state%20laser
A solid-state laser is a laser that uses a gain medium that is a solid, rather than a liquid as in dye lasers or a gas as in gas lasers. Semiconductor-based lasers are also in the solid state, but are generally considered as a separate class from solid-state lasers, called laser diodes. Solid-state media Generally, the active medium of a solid-state laser consists of a glass or crystalline "host" material, to which is added a "dopant" such as neodymium, chromium, erbium, thulium or ytterbium. Many of the common dopants are rare-earth elements, because the excited states of such ions are not strongly coupled with the thermal vibrations of their crystal lattices (phonons), and their operational thresholds can be reached at relatively low intensities of laser pumping. There are many hundreds of solid-state media in which laser action has been achieved, but relatively few types are in widespread use. Of these, probably the most common is neodymium-doped yttrium aluminum garnet (Nd:YAG). Neodymium-doped glass (Nd:glass) and ytterbium-doped glasses or ceramics are used at very high power levels (terawatts) and high energies (megajoules), for multiple-beam inertial confinement fusion. The first material used for lasers was synthetic ruby crystals. Ruby lasers are still used for a few applications, but they are not common because of their low power efficiencies. At room temperature, ruby lasers emit only short pulses of light, but at cryogenic temperatures they can be made to emit a continuous train of pulses. Some solid-state lasers can also be tunable using several intracavity techniques, which employ etalons, prisms, and gratings, or a combination of these. Titanium-doped sapphire is widely used for its broad tuning range, 660 to 1080 nanometers. Alexandrite lasers are tunable from 700 to 820 nm and yield higher-energy pulses than titanium-sapphire lasers because of the gain medium's longer energy storage time and higher damage threshold. Pumping Solid state lasin
https://en.wikipedia.org/wiki/PointCast
PointCast was a dot-com company founded in 1992 by Christopher R. Hassett in Sunnyvale, California. PointCast Network The company's initial product amounted to a screensaver that displayed news and other information, delivered live over the Internet. The PointCast Network used push technology, which was a hot concept at the time, and received enormous press coverage when it launched in beta form on February 13, 1996. The product did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, leading it to be banned in many workplaces. It demanded more bandwidth than the home dial-up Internet connections of the day could provide, and people objected to the large number of advertisements that were pushed over the service as well. PointCast offered corporations a proxy server that would dramatically reduce the bandwidth used, but even this didn't help save PointCast. A more likely reason than bandwidth was the increasing popularity of "portal websites". When PointCast first started, Yahoo offered little more than a hierarchical directory of the internet (broken down by subject, much like DMOZ), but it soon introduced its portal, which was customizable and offered a much more convenient way to read the news. News Corporation purchase offer and change of CEO At its height in January 1997, News Corporation made an offer of $450 million to purchase the company. However, the offer was withdrawn in March. While there were rumors that it was withdrawn due to issues with the price and revenue projections, James Murdoch said it was due to PointCast's inaction. Shortly after the purchase offer fell through, the board of directors decided to replace Christopher Hassett as CEO. Some reasons included turning down the recent purchase offer, software performance problems (using too much corporate bandwidth) and declining market share (lost to the then-emerging Web portal sites). After five months, David Dorman was
https://en.wikipedia.org/wiki/Ikeda%20map
In physics and mathematics, the Ikeda map is a discrete-time dynamical system given by the complex map

z_{n+1} = A + B z_n exp[i(|z_n|^2 + C)]

The original map was proposed first by Kensuke Ikeda as a model of light going around across a nonlinear optical resonator (ring cavity containing a nonlinear dielectric medium) in a more general form, and was reduced to the above simplified "normal" form by Ikeda, Daido and Akimoto. Here z_n stands for the electric field inside the resonator at the n-th step of rotation in the resonator, and A and C are parameters which indicate laser light applied from the outside, and linear phase across the resonator, respectively. In particular the parameter B <= 1 is called the dissipation parameter characterizing the loss of the resonator, and in the limit B = 1 the Ikeda map becomes a conservative map. The original Ikeda map is often used in another modified form in order to take the saturation effect of the nonlinear dielectric medium into account:

z_{n+1} = A + B z_n exp[i K/(|z_n|^2 + 1) + i C]

A 2D real example of the above form is:

x_{n+1} = 1 + u (x_n cos t_n - y_n sin t_n)
y_{n+1} = u (x_n sin t_n + y_n cos t_n)

where u is a parameter and

t_n = 0.4 - 6/(1 + x_n^2 + y_n^2)

For u >= 0.6, this system has a chaotic attractor. Attractor This shows how the attractor of the system changes as the parameter u is varied from 0.0 to 1.0 in steps of 0.01. The Ikeda dynamical system is simulated for 500 steps, starting from 20000 randomly placed starting points. The last 20 points of each trajectory are plotted to depict the attractor. Note the bifurcation of attractor points as u is increased. Point trajectories The plots below show trajectories of 200 random points for various values of u. The inset plot on the left shows an estimate of the attractor while the inset on the right shows a zoomed-in view of the main trajectory plot. Octave/MATLAB code for point trajectories The Octave/MATLAB code to generate these plots is given below (the source is cut off after the first few lines; the function body here is a minimal reconstruction of the iteration defined above):

% u = ikeda parameter
% option = what to plot
%  'trajectory' - plot trajectory of random starting points
%  'limit' - plot the last few iterations of random starting points
function ikeda(u, option)
  P = 200;  % how many starting points
  N = 100;  % how many iterations
  x = randn(P, 1); y = randn(P, 1);   % random starting points
  X = zeros(P, N); Y = zeros(P, N);
  for n = 1:N                          % iterate the 2D Ikeda map above
    t = 0.4 - 6 ./ (1 + x.^2 + y.^2);
    [x, y] = deal(1 + u*(x.*cos(t) - y.*sin(t)), u*(x.*sin(t) + y.*cos(t)));
    X(:, n) = x; Y(:, n) = y;
  end
  if strcmp(option, 'limit')           % keep only the last few iterations
    X = X(:, end-19:end); Y = Y(:, end-19:end);
  end
  plot(X', Y', '.k', 'markersize', 1);
end
https://en.wikipedia.org/wiki/Nuclear%20reactor%20physics
Nuclear reactor physics is the field of physics that studies the application of chain reactions to induce a controlled rate of fission in a nuclear reactor for the production of energy. Most nuclear reactors use a chain reaction to induce a controlled rate of nuclear fission in fissile material, releasing both energy and free neutrons. A reactor consists of an assembly of nuclear fuel (a reactor core), usually surrounded by a neutron moderator such as regular water, heavy water, graphite, or zirconium hydride, and fitted with mechanisms such as control rods which control the rate of the reaction. The physics of nuclear fission has several quirks that affect the design and behavior of nuclear reactors. This article presents a general overview of the physics of nuclear reactors and their behavior. Criticality In a nuclear reactor, the neutron population at any instant is a function of the rate of neutron production (due to fission processes) and the rate of neutron losses (due to non-fission absorption mechanisms and leakage from the system). When a reactor's neutron population remains steady from one generation to the next (creating as many new neutrons as are lost), the fission chain reaction is self-sustaining and the reactor's condition is referred to as "critical". When the reactor's neutron production exceeds losses, characterized by increasing power level, it is considered "supercritical", and when losses dominate, it is considered "subcritical" and exhibits decreasing power. The "Six-factor formula" is the neutron life-cycle balance equation, which includes six separate factors whose product is equal to the ratio of the number of neutrons in any generation to that of the previous one; this parameter is called the effective multiplication factor k, also denoted by Keff: k = ε Lf ρ Lth f η, where ε = "fast-fission factor", Lf = "fast non-leakage factor", ρ = "resonance escape probability", Lth = "thermal non-leakage factor", f = "thermal utilization factor", and η = "neutron reproduction factor".
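As a small numeric illustration of the six-factor formula above (a Python sketch; all factor values are made-up examples, not data from the article):

def k_effective(epsilon, L_f, rho, L_th, f, eta):
    # product of the six factors: neutrons in one generation / previous generation
    return epsilon * L_f * rho * L_th * f * eta

# assumed, illustrative values only
k = k_effective(epsilon=1.03, L_f=0.97, rho=0.75, L_th=0.96, f=0.70, eta=2.02)
if k > 1:
    state = "supercritical (power increasing)"
elif k < 1:
    state = "subcritical (power decreasing)"
else:
    state = "critical (self-sustaining)"
print(f"k = {k:.3f}: {state}")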
https://en.wikipedia.org/wiki/Stathmin
Stathmin, also known as metablastin and oncoprotein 18, is a protein that in humans is encoded by the STMN1 gene. Stathmin is a highly conserved 17 kDa protein that is crucial for the regulation of the cell cytoskeleton. Changes in the cytoskeleton are important because the cytoskeleton is a scaffold required for many cellular processes, such as cytoplasmic organization, cell division and cell motility. More specifically, stathmin is crucial in regulating the cell cycle. It is found solely in eukaryotes. Its function as an important regulatory protein of microtubule dynamics has been well characterized. Eukaryotic microtubules are one of three major components of the cell's cytoskeleton. They are highly dynamic structures that continuously alternate between assembly and disassembly. Stathmin performs an important function in regulating rapid microtubule remodeling of the cytoskeleton in response to the cell's needs. Microtubules are cylindrical polymers of α,β-tubulin. Their assembly is in part determined by the concentration of free tubulin in the cytoplasm. At low concentrations of free tubulin, growth at the microtubule ends slows and the rate of depolymerization (disassembly) increases. Structure Stathmin, and the related proteins SCG10 and XB3, contain an N-terminal domain (XB3 contains an additional N-terminal hydrophobic region), a 78-amino-acid coiled-coil region, and a short C-terminal domain. Function The function of stathmin is to regulate the cytoskeleton of the cell. The cytoskeleton includes long hollow cylinders named microtubules, which are made up of alpha- and beta-tubulin heterodimers. The changes in the cytoskeleton are known as microtubule dynamics: the addition of tubulin subunits leads to polymerisation, and their loss to depolymerisation. Stathmin regulates these dynamics by promoting depolymerization of microtubules or preventing polymerization of tubulin heterodimers. Additionally, Stathmin is thought
https://en.wikipedia.org/wiki/Bulb%20of%20vestibule
In female anatomy, the vestibular bulbs, bulbs of the vestibule or clitoral bulbs are two elongated masses of erectile tissue typically described as being situated on either side of the vaginal opening. They are united to each other in front by a narrow median band. Some research indicates that they do not surround the vaginal opening, and are more closely related to the clitoris than to the vestibule. Structure Research indicates that the vestibular bulbs are more closely related to the clitoris than to the vestibule because of the similarity of the trabecular and erectile tissue within the clitoris and bulbs, and the absence of trabecular tissue in other genital organs, with the erectile tissue's trabecular nature allowing engorgement and expansion during sexual arousal. Ginger et al. state that although a number of texts report that they surround the vaginal opening, this does not appear to be the case and tunica albuginea does not envelop the erectile tissue of the bulb. The vestibular bulbs are homologous to the bulb of penis and the adjoining part of the corpus spongiosum of the male and consist of two elongated masses of erectile tissue. Their posterior ends are expanded and are in contact with the greater vestibular glands; their anterior ends are tapered and joined to one another by the pars intermedia; their deep surfaces are in contact with the inferior fascia of the urogenital diaphragm; superficially, they are covered by the bulbospongiosus. Physiology During the response to sexual arousal the bulbs fill with blood, which then becomes trapped, causing erection. As the clitoral bulbs fill with blood, they tightly cuff the vaginal opening, causing the vulva to expand outward. This puts pressure on nearby structures that include the corpus cavernosum of clitoris and crus of clitoris, inducing pleasure. The blood inside the bulb's erectile tissue is released to the circulatory system by the spasms of orgasm, but if orgasm does not occur, the blood will
https://en.wikipedia.org/wiki/Smilax%20rotundifolia
Smilax rotundifolia, also known as roundleaf greenbrier or common greenbrier, is a woody vine native to the southeastern and eastern United States and eastern Canada. It is a common and conspicuous part of the natural forest ecosystems in much of its native range. The leaves are glossy green, petioled, alternate, and circular to heart-shaped. They are generally 5–13 cm long. Common greenbrier climbs other plants using green tendrils growing out of the petioles. The stems are rounded and green and are armed with sharp thorns. The flowers are greenish white, and are produced from April to August. The fruit is a bluish black berry that ripens in September. Cultivation and uses The young shoots of common greenbrier are reported to be excellent when cooked like asparagus. The young leaves and tendrils can be prepared like spinach or added directly to salads. Familiarity with eating Smilax is common in the South Carolina Lowcountry, where it is often called 'chaineybriar.' The roots contain a natural gelling agent that can be extracted and used as a thickening agent. Description As its common names suggest, Smilax rotundifolia is a green vine with thorns. It is a crawling vine that can tangle itself within other plants and climb with small tendrils. The plant can grow up to 20 feet long by climbing objects and vegetation. If there is nothing for it to climb upon it will grow along the ground. It has woody stems that are pale green in color and glabrous, the youngest of which are often square-shaped. As the vine dies, the stem turns from green to a dark brown color. Along the stem there are often black-tipped thorns that are about 1/3 inch long. Some stems of common greenbrier do not have thorns. The upper surfaces of the leaves are darker than the undersides. The rounded alternate leaves are about 2 to 5 inches long. The leaves are glabrous and never glaucous. There are 3 to 5 primary veins per leaf. Along the lower surfaces of the primary ve
https://en.wikipedia.org/wiki/Human%E2%80%93robot%20interaction
Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, and psychology. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems. Origins Human–robot interaction has been a topic of both science fiction and academic speculation even before any robots existed. Because much of active HRI development depends on natural language processing, many aspects of HRI are continuations of human communications, a field of research which is much older than robotics. The origin of HRI as a discrete problem was stated by the 20th-century author Isaac Asimov in 1941, in the short story "Runaround", later collected in I, Robot. Asimov coined the Three Laws of Robotics, namely: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. These three laws provide an overview of the goals engineers and researchers hold for safety in the HRI field, although the fields of robot ethics and machine ethics are more complex than these three principles. However, generally human–robot interaction prioritizes the safety of humans that interact with potentially dangerous robotics equipment. Solutions to this problem range from the philosophical approach of treating robots as ethical agents (individuals with moral agency), to the practical approach of creating safety zones. These safety zones use technologies such as lidar to detect human presence or physical barriers to protect humans by preventing any contact between machine and operator. Although initially robots in the human–robot
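As a toy illustration of the safety zones described above (a hypothetical Python sketch, not any real robot's API):

def safety_stop_required(lidar_ranges_m, protective_radius_m=1.5):
    # halt if any lidar return falls inside the protective zone around the robot
    return any(r < protective_radius_m for r in lidar_ranges_m)

scan = [4.2, 3.8, 1.2, 5.0]   # made-up range readings in metres
if safety_stop_required(scan):
    print("object inside safety zone: halting motion")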
https://en.wikipedia.org/wiki/Max%20Mason
Charles Max Mason (1877–1961), better known as Max Mason, was an American mathematician. Mason was president of the University of Chicago (1925–1928) and the third president of the Rockefeller Foundation (1929–1936). Mason's mathematical research interests included differential equations, the calculus of variations, and electromagnetic theory. Education B.Litt., 1898, University of Wisconsin-Madison Ph.D., Mathematics, University of Göttingen, 1903. Dissertation: "Randwertaufgaben bei gewöhnlichen Differentialgleichungen" (Boundary value problems for ordinary differential equations) Advisor: David Hilbert Career Massachusetts Institute of Technology (MIT), 1903–1904, Instructor of Mathematics. Yale University, 1904–1908, Assistant Professor of Mathematics. University of Wisconsin–Madison, 1908–1909, Associate Professor of Mathematics. University of Wisconsin–Madison, 1909–1925, Professor of Physics. National Research Council, 1917–1919, Submarine Committee. (Invented a submarine detection device, which was the basis for sonar detectors used in World War II.) University of Chicago, 1925–1928, President. Rockefeller Foundation, 1928–1929, Director, Natural Sciences Division. Rockefeller Foundation, 1929–1936, President. Palomar Observatory (California), 1936–1949, Chairman of the team directing the construction of the observatory. On , he appeared on Edgar Bergen's radio show to chat about the new observatory and trade jokes with Charlie McCarthy. In 1948, he, along with Lee A. DuBridge, William A. Fowler, Linus Pauling, and Bruce H. Sage, was awarded the Medal for Merit by President Harry S. Truman. Notes and references External links Archival collections Max Mason papers, 1898-1961, Niels Bohr Library & Archives Max Mason papers, 1750-1815, Royal Observatory Edinburgh Charles Mason papers, 1750-1815, American Philosophical Society
https://en.wikipedia.org/wiki/List%20of%20podcast%20clients
A podcast client, or podcatcher, is a computer program used to stream or download podcasts, usually via an RSS or XML feed. While podcast clients are best known for streaming and downloading podcasts, many are also capable of downloading video, newsfeeds, text, and pictures. Some of these podcast clients can also automate the transfer of received audio files to a portable media player. Although many include a directory of high-profile podcasts, they generally allow users to manually subscribe directly to a feed by providing the URL. The core concepts had been developing since 2000; the first commercial podcast client software was developed in 2001. Podcasts were made popular when Apple added podcatching to its iTunes software and iPod portable media player in June 2005. Apple Podcasts is currently included in all Apple devices, such as iPhone, iPad and Mac computers. Podcast clients See also Comparison of audio player software
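To illustrate the feed-based mechanism described above, here is a toy Python podcatcher sketch (mine, not any listed client; the feed URL is a placeholder):

import urllib.request
import xml.etree.ElementTree as ET

def list_episodes(feed_url):
    # fetch the RSS feed and print each episode's title and audio enclosure URL
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        enclosure = item.find("enclosure")
        audio = enclosure.get("url") if enclosure is not None else "(no enclosure)"
        print(title, "->", audio)

# list_episodes("https://example.com/feed.xml")  # placeholder feed URL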
https://en.wikipedia.org/wiki/Detritus
In biology, detritus is dead particulate organic material, as distinguished from dissolved organic material. Detritus typically includes the bodies or fragments of bodies of dead organisms, and fecal material. Detritus typically hosts communities of microorganisms that colonize and decompose (i.e. remineralize) it. In terrestrial ecosystems it is present as leaf litter and other organic matter that is intermixed with soil, which is denominated "soil organic matter". The detritus of aquatic ecosystems is organic material that is suspended in the water and accumulates in depositions on the floor of the body of water; when this floor is a seabed, such a deposition is denominated "marine snow". Theory The corpses of dead plants or animals, material derived from animal tissues (e.g. molted skin), and fecal matter gradually lose their form due to physical processes and the action of decomposers, including grazers, bacteria, and fungi. Decomposition, the process by which organic matter is decomposed, occurs in several phases. Micro- and macro-organisms that feed on it rapidly consume and absorb materials such as proteins, lipids, and sugars that are low in molecular weight, while other compounds such as complex carbohydrates are decomposed more slowly. The decomposing microorganisms degrade the organic materials so as to gain the resources they require for their survival and reproduction. Accordingly, simultaneous to microorganisms' decomposition of the materials of dead plants and animals is their assimilation of decomposed compounds to construct more of their biomass (i.e. to grow their own bodies). When microorganisms die, fine organic particles are produced, and if small animals that feed on microorganisms eat these particles, the particles collect inside the intestines of the consumers and are reshaped into large pellets of dung. As a result of this process, most of the materials of dead organisms disappear and are not visible and recognizable in any form, but are pres
https://en.wikipedia.org/wiki/Constance%20Kamii
Constance Kamii was a Swiss-Japanese-American mathematics education scholar and psychologist. She was a professor in the Early Childhood Education Program, Department of Curriculum and Instruction, at the University of Alabama in Birmingham, Alabama. Overview Constance Kamii was born in Geneva, Switzerland, and attended elementary schools there and in Japan. She finished high school in Los Angeles, attended Pomona College, and received her Ph.D. in education and psychology from the University of Michigan. She was a professor of early childhood education at the University of Alabama in Birmingham. A major concern of hers since her work on the Perry Preschool Project in the 1960s was the conceptualization of goals and objectives for early childhood education on the basis of a scientific theory explaining children's sociological and intellectual development. Convinced that the only theory in existence that explains this development from the first day of life to adolescence was that of Jean Piaget, she studied under him on and off for 15 years. When she was not studying under Piaget in Geneva, she worked closely with teachers in the United States to develop practical ways of using his theory in classrooms. The outcome of this classroom research can be seen in Physical Knowledge in Preschool Education and Group Games in Early Education, which she wrote with Rheta DeVries. Since 1980, she had been extending this curriculum research to the primary grades and wrote Young Children Reinvent Arithmetic (about first grade), Young Children Continue to Reinvent Arithmetic, 2nd Grade, and Young Children Continue to Reinvent Arithmetic, 3rd Grade. In all these books, she emphasized the long-range, over-all aim of education envisioned by Piaget, which is children's development of sociological and intellectual autonomy. Kamii studied under Jean Piaget to develop an early childhood curriculum based on his theory. This work can be seen in Physical Knowledge in Preschool Education (19
https://en.wikipedia.org/wiki/Radioisotope%20piezoelectric%20generator
A radioisotope piezoelectric generator (RPG) is a type of radioisotope generator that converts energy stored in radioactive materials into motion, which is used to generate electricity through the repeated deformation of a piezoelectric material. This approach creates a high-impedance source and, unlike chemical batteries, the devices will work over a very wide range of temperatures. Description A piezoelectric cantilever is mounted directly above a base of the radioactive isotope nickel-63. All of the radiation emitted as the millicurie-level nickel-63 thin film decays is in the form of beta radiation, which consists of electrons. As the cantilever accumulates the emitted electrons, it builds up a negative charge at the same time that the isotope film becomes positively charged. The beta particles essentially transfer electronic charge from the thin film to the cantilever. The opposite charges cause the cantilever to bend toward the isotope film. Just as the cantilever touches the thin-film isotope, the charge jumps the gap. That permits current to flow back onto the isotope, equalizing the charge and resetting the cantilever. As long as the isotope is decaying - a process that can last for decades - the tiny cantilever will continue its up-and-down motion. As the cantilever directly generates electricity when deformed, a charge pulse is released each time the cantilever cycles. Radioactive isotopes can continue to release energy over periods ranging from weeks to decades. The half-life of nickel-63, for example, is over 100 years, so a battery using this isotope might continue to supply useful energy for at least half that time. Researchers have demonstrated devices with about 7% efficiency, with self-reciprocating actuators ranging from high frequency (120 Hz) to very low frequency (one cycle every three hours). History In 2002 researchers at Cornell University published and patented the first design. See also Atomic battery Thermionic converter Betavoltaics Optoelectric nuclear bat
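A back-of-the-envelope Python sketch of the charge-discharge cycle described above (every numeric value is an assumed, illustrative parameter; the article gives none):

CURIE = 3.7e10                 # decays per second per curie
E_CHARGE = 1.602e-19           # coulombs per collected beta electron

activity_ci = 1e-3             # assumed 1 mCi Ni-63 film
collection_eff = 0.1           # assumed fraction of betas reaching the cantilever
current = activity_ci * CURIE * collection_eff * E_CHARGE   # amperes

capacitance = 1e-12            # assumed 1 pF cantilever-film gap
contact_voltage = 100.0        # assumed voltage at which the gap closes

charge_per_cycle = capacitance * contact_voltage
period_s = charge_per_cycle / current   # time to accumulate one cycle's charge
print(f"beta current ~ {current:.2e} A, cycle period ~ {period_s:.0f} s")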
https://en.wikipedia.org/wiki/Radiant%20exitance
In radiometry, radiant exitance or radiant emittance is the radiant flux emitted by a surface per unit area, whereas spectral exitance or spectral emittance is the radiant exitance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. This is the emitted component of radiosity. The SI unit of radiant exitance is the watt per square metre (W/m2), while that of spectral exitance in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral exitance in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (W·m−2·nm−1). The CGS unit erg per square centimeter per second is often used in astronomy. Radiant exitance is often called "intensity" in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity. Mathematical definitions Radiant exitance Radiant exitance of a surface, denoted Me ("e" for "energetic", to avoid confusion with photometric quantities), is defined as

Me = ∂Φe/∂A,

where ∂ is the partial derivative symbol, Φe is the radiant flux emitted, and A is the surface area. If we want to talk about the radiant flux received by a surface, we speak of irradiance. The radiant exitance of a black surface, according to the Stefan–Boltzmann law, is equal to:

Me° = σT^4,

where σ is the Stefan–Boltzmann constant, and T is the temperature of that surface. For a real surface, the radiant exitance is equal to:

Me = ε Me° = ε σ T^4,

where ε is the emissivity of that surface. Spectral exitance Spectral exitance in frequency of a surface, denoted Me,ν, is defined as

Me,ν = ∂Me/∂ν,

where ν is the frequency. Spectral exitance in wavelength of a surface, denoted Me,λ, is defined as

Me,λ = ∂Me/∂λ,

where λ is the wavelength. The spectral exitance of a black surface around a given frequency or wavelength, according to the Lambert's cosine law and the Planck's law, is equal to:

Me,ν° = (2πhν^3/c^2) / (exp(hν/(kT)) − 1),
Me,λ° = (2πhc^2/λ^5) / (exp(hc/(λkT)) − 1),

where h is the Planck constant, ν is the frequency, λ is the wavelength, k is the Boltzmann constant, c is the speed of light in the medium, and T is the temperature of the surface.
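A quick numeric Python sketch of the formulas above (CODATA constants; the example temperature and wavelength are arbitrary):

import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m s^-1
K = 1.380649e-23         # Boltzmann constant, J K^-1

def radiant_exitance(T, emissivity=1.0):
    # M_e = eps * sigma * T^4  (W m^-2)
    return emissivity * SIGMA * T**4

def spectral_exitance_wavelength(lam, T):
    # M_e,lambda = 2 pi h c^2 / lam^5 / (exp(h c / (lam k T)) - 1)  (W m^-3)
    return 2 * math.pi * H * C**2 / lam**5 / math.expm1(H * C / (lam * K * T))

print(radiant_exitance(300.0))                     # ~459 W m^-2 for a black surface at 300 K
print(spectral_exitance_wavelength(10e-6, 300.0))  # W m^-3 at 10 um, 300 K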
https://en.wikipedia.org/wiki/Gobe%20Software
Gobe Software, Inc was a software company founded in 1997 by members of the ClarisWorks development team that developed and published an integrated desktop software suite for BeOS. In later years, it was the distributor of BeOS itself. History Gobe was founded in 1997 by members of the ClarisWorks development team and some of the authors of the original StyleWare application for the Apple II. After leaving StyleWare and creating the product later known as ClarisWorks and AppleWorks, Bob Hearn and Scott Holdaway joined Tom Hoke, Scott Lindsey, Bruce Q. Hammond, and Carl Grice, who had also worked at Apple Computer's Claris subsidiary, and formed Gobe Software, Inc with the notion of creating a next-generation integrated office suite similar to ClarisWorks, but for the BeOS platform. It released Gobe Productive in 1998. When Be Inc. outsourced publication of BeOS in 2000, Gobe became the publisher of BeOS in North America, Australia, and sections of Asia. Only weeks after signing up other publishers around the globe, Be, Inc. halted development for the BeOS platform and publicly announced that all of its corporate focus would be on "Internet Appliances", making public announcements that hampered forward momentum of the BeOS platform. In addition, the publishers in general and Gobe in particular did not have source code access to the BeOS and were not able to continue its development or add drivers that the platform needed to be a viable alternative to Windows or Linux. Gobe also published Hicom Entertainment/Next Generation Entertainment's "Corum III" role-playing game for BeOS during this period. The failure of Be, Inc and BeOS meant ports had to be undertaken, and Windows and Linux variants were developed. Although the company shipped a Windows version of its software in December 2001, it was unable to obtain sufficient operating capital after the 2000 stock market crash and suspended operations in 2002. In 2008 Gobe management began to work with distribution and developm
https://en.wikipedia.org/wiki/Great%20vessels
Great vessels are the large vessels that bring blood to and from the heart. These are: Superior vena cava Inferior vena cava Pulmonary arteries Pulmonary veins Aorta Transposition of the great vessels is a group of congenital heart defects involving an abnormal spatial arrangement of any of the great vessels.
https://en.wikipedia.org/wiki/Comparison%20sort
A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation (often a "less than or equal to" operator or a three-way comparison) that determines which of two elements should occur first in the final sorted list. The only requirement is that the operator forms a total preorder over the data, with: if a ≤ b and b ≤ c then a ≤ c (transitivity) for all a and b, a ≤ b or b ≤ a (connexity). It is possible that both a ≤ b and b ≤ a; in this case either may come first in the sorted list. In a stable sort, the input order determines the sorted order in this case. A metaphor for thinking about comparison sorts is that someone has a set of unlabelled weights and a balance scale. Their goal is to line up the weights in order by their weight without any information except that obtained by placing two weights on the scale and seeing which one is heavier (or if they weigh the same). Examples Some of the most well-known comparison sorts include: Quicksort Heapsort Shellsort Merge sort Introsort Insertion sort Selection sort Bubble sort Odd–even sort Cocktail shaker sort Cycle sort Merge-insertion sort Smoothsort Timsort Block sort Performance limits and advantages of different sorting techniques There are fundamental limits on the performance of comparison sorts. A comparison sort must have an average-case lower bound of Ω(n log n) comparison operations, which is known as linearithmic time. This is a consequence of the limited information available through comparisons alone — or, to put it differently, of the vague algebraic structure of totally ordered sets. In this sense, mergesort, heapsort, and introsort are asymptotically optimal in terms of the number of comparisons they must perform, although this metric neglects other operations. Non-comparison sorts (such as the examples discussed below) can achieve O(n) performance by using operations other than comparisons, allowing them to sidestep this lower bound.
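To make the comparison abstraction concrete, here is a minimal Python sketch (mine, not from the article): a stable merge sort that touches elements only through an abstract "less than or equal" comparison, with a counter to set against the ceil(log2(n!)) lower bound on worst-case comparisons:

import math

def merge_sort(items, leq):
    # sorts using only the abstract comparison leq(a, b), meaning "a <= b"
    comparisons = 0
    def sort(xs):
        nonlocal comparisons
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comparisons += 1
            if leq(left[i], right[j]):   # ties go left: equal elements keep input order (stable)
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]
    return sort(list(items)), comparisons

data = [5, 3, 8, 3, 1, 9, 2]
result, used = merge_sort(data, lambda a, b: a <= b)
bound = math.ceil(math.log2(math.factorial(len(data))))  # ceil(log2(n!)) worst-case bound
print(result, used, bound)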
https://en.wikipedia.org/wiki/Elliptic%20complex
In mathematics, in particular in partial differential equations and differential geometry, an elliptic complex generalizes the notion of an elliptic operator to sequences. Elliptic complexes isolate those features common to the de Rham complex and the Dolbeault complex which are essential for performing Hodge theory. They also arise in connection with the Atiyah-Singer index theorem and Atiyah-Bott fixed point theorem. Definition If E0, E1, ..., Ek are vector bundles on a smooth manifold M (usually taken to be compact), then a differential complex is a sequence of differential operators

Pi : Γ(Ei−1) → Γ(Ei)

between the sheaves of sections of the Ei such that Pi+1 ∘ Pi = 0. A differential complex with first order operators is elliptic if the sequence of symbols

0 → π*E0 → π*E1 → ... → π*Ek → 0

is exact outside of the zero section. Here π is the projection of the cotangent bundle T*M to M, and π* is the pullback of a vector bundle. See also Chain complex
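As a concrete illustration (a sketch of mine, not text from the article), the de Rham complex mentioned above can be written in LaTeX as:

\[
0 \longrightarrow \Omega^0(M) \xrightarrow{\ d\ } \Omega^1(M) \xrightarrow{\ d\ } \cdots \xrightarrow{\ d\ } \Omega^n(M) \longrightarrow 0,
\qquad d \circ d = 0.
\]

At a nonzero covector ξ, the symbol of the exterior derivative d is (up to a factor of i) exterior multiplication by ξ, and the resulting sequence of symbols is exact, so the de Rham complex is an elliptic complex.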
https://en.wikipedia.org/wiki/Collinearity
In geometry, collinearity of a set of points is the property of their lying on a single line. A set of points with this property is said to be collinear (sometimes spelled as colinear). In greater generality, the term has been used for aligned objects, that is, things being "in a line" or "in a row". Points on a line In any geometry, the set of points on a line are said to be collinear. In Euclidean geometry this relation is intuitively visualized by points lying in a row on a "straight line". However, in most geometries (including Euclidean) a line is typically a primitive (undefined) object type, so such visualizations will not necessarily be appropriate. A model for the geometry offers an interpretation of how the points, lines and other object types relate to one another and a notion such as collinearity must be interpreted within the context of that model. For instance, in spherical geometry, where lines are represented in the standard model by great circles of a sphere, sets of collinear points lie on the same great circle. Such points do not lie on a "straight line" in the Euclidean sense, and are not thought of as being in a row. A mapping of a geometry to itself which sends lines to lines is called a collineation; it preserves the collinearity property. The linear maps (or linear functions) of vector spaces, viewed as geometric maps, map lines to lines; that is, they map collinear point sets to collinear point sets and so, are collineations. In projective geometry these linear mappings are called homographies and are just one type of collineation. Examples in Euclidean geometry Triangles In any triangle the following sets of points are collinear: The orthocenter, the circumcenter, the centroid, the Exeter point, the de Longchamps point, and the center of the nine-point circle are collinear, all falling on a line called the Euler line. The de Longchamps point also has other collinearities. Any vertex, the tangency of the opposite side with an excircl
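For the basic notion at the top of this entry, a small Python sketch (mine, not from the article) of the standard determinant test for collinearity of points in the plane:

def collinear(points, tol=1e-9):
    # True if all 2D points lie on one line; assumes the first two points are distinct.
    # Three points are collinear iff the 2x2 determinant of the difference vectors
    # vanishes, i.e. the triangle they span has zero (signed) area.
    if len(points) <= 2:
        return True
    (x0, y0), (x1, y1) = points[0], points[1]
    return all(abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) <= tol
               for x, y in points[2:])

print(collinear([(0, 0), (1, 1), (2, 2), (3, 3)]))  # True
print(collinear([(0, 0), (1, 1), (2, 2.5)]))        # False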
https://en.wikipedia.org/wiki/Set%20%28music%29
A set (pitch set, pitch-class set, set class, set form, set genus, pitch collection) in music theory, as in mathematics and general parlance, is a collection of objects. In musical contexts the term is traditionally applied most often to collections of pitches or pitch-classes, but theorists have extended its use to other types of musical entities, so that one may speak of sets of durations or timbres, for example. A set by itself does not necessarily possess any additional structure, such as an ordering or permutation. Nevertheless, it is often musically important to consider sets that are equipped with an order relation (called segments); in such contexts, bare sets are often referred to as "unordered", for the sake of emphasis. Two-element sets are called dyads, three-element sets trichords (occasionally "triads", though this is easily confused with the traditional meaning of the word triad). Sets of higher cardinalities are called tetrachords (or tetrads), pentachords (or pentads), hexachords (or hexads), heptachords (heptads or, sometimes, mixing Latin and Greek roots, "septachords"), octachords (octads), nonachords (nonads), decachords (decads), undecachords, and, finally, the dodecachord. A time-point set is a duration set where the distance in time units between attack points, or time-points, is the distance in semitones between pitch classes. Serial In the theory of serial music, however, some authors (notably Milton Babbitt) use the term "set" where others would use "row" or "series", namely to denote an ordered collection (such as a twelve-tone row) used to structure a work. These authors speak of "twelve tone sets", "time-point sets", "derived sets", etc. (See below.) This is a different usage of the term "set" from that described above (and referred to in the term "set theory"). For these authors, a set form (or row form) is a particular arrangement of such an ordered set: the prime form (original order), inverse (upside down), retrograde (backwards), and retrograde inversion (backwards and upside down).
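The basic operations on pitch-class sets are easy to express in code. The sketch below (illustrative; the conventions Tn(x) = (x + n) mod 12 and In(x) = (n − x) mod 12 are the standard ones from musical set theory, not quoted from this article) transposes and inverts an unordered set:

```python
# Pitch classes are integers 0-11 (C = 0, C# = 1, ..., B = 11).

def transpose(pcs, n):
    """T_n: raise every pitch class by n semitones, mod 12."""
    return sorted((pc + n) % 12 for pc in pcs)

def invert(pcs, n=0):
    """I_n: invert each pitch class about 0, then transpose by n."""
    return sorted((n - pc) % 12 for pc in pcs)

c_major_triad = [0, 4, 7]            # C, E, G
print(transpose(c_major_triad, 2))   # [2, 6, 9]  -> D, F#, A (D major)
print(invert(c_major_triad))         # [0, 5, 8]  -> C, F, Ab (an F minor triad)
```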
https://en.wikipedia.org/wiki/Multiple%20Registration%20Protocol
Multiple Registration Protocol (MRP), which replaced Generic Attribute Registration Protocol (GARP), is a generic registration framework defined by the IEEE 802.1ak amendment to the IEEE 802.1Q standard. MRP allows bridges, switches or other similar devices to register and de-register attribute values, such as VLAN identifiers and multicast group membership, across a large local area network. MRP operates at the data link layer. History GARP was defined by the IEEE 802.1 working group to provide a generic framework allowing bridges (or other devices like switches) to register and de-register attribute values such as VLAN identifiers and multicast group membership. GARP defines the architecture, rules of operation, state machines and variables for the registration and de-registration of attribute values. GARP was used by two applications: the GARP VLAN Registration Protocol (GVRP), for registering VLAN trunking between multilayer switches, and the GARP Multicast Registration Protocol (GMRP). Both were mostly enhancements for VLAN-aware switches as defined in IEEE 802.1Q. Multiple Registration Protocol (MRP) was introduced in order to replace GARP, with the IEEE 802.1ak amendment in 2007. The two GARP applications were also modified to use MRP: GMRP was replaced by the Multiple MAC Registration Protocol (MMRP) and GVRP was replaced by the Multiple VLAN Registration Protocol (MVRP). This change essentially moved the definitions of GARP, GVRP, and GMRP into an 802.1Q-based environment, implying they were already VLAN-aware. This also allowed for significant streamlining of the underlying protocol without much change to the interface of the applications themselves. The new protocol and applications fixed a problem with the old GARP-based GVRP, where a simple registration or a failover could take an extremely long time to converge on a large network, incurring significant bandwidth degradation. It is expected GARP will be removed from
https://en.wikipedia.org/wiki/Projected%20set
In music, a projected set is a technique where a collection of pitches or pitch classes is extended in a texture through the emphasized simultaneous statement of a set followed or preceded by a successive emphasized statement of each of its members. For example, a set may be stated as a simultaneity and then a series of phrases may end on notes which are the members of the set, as in the downbeat of m. 19 through measures 46 of Béla Bartók's Second String Quartet. (Wilson 1992, p. 23) Pattern completion is "the use of a projected set to organize a work over a long span of time" (ibid, p. 210n5). Sources Wilson, Paul (1992). The Music of Béla Bartók.
https://en.wikipedia.org/wiki/Megakaryoblast
A megakaryoblast is a precursor cell to a promegakaryocyte, which in turn becomes a megakaryocyte during haematopoiesis. It is the beginning of the thrombocytic series. Development The megakaryoblast derives from a CFU-Meg colony unit of pluripotential hemopoietic stem cells. (Some sources use the term "CFU-Meg" to identify the CFU.) The CFU-Meg derives from the CFU-GEMM (common myeloid progenitor). Structure These cells range from 8 μm to 30 μm in size, varying considerably between individual megakaryoblasts. The nucleus is three to five times the size of the cytoplasm, and is generally round or oval in shape. Several nucleoli are visible, while the chromatin varies from cell to cell, ranging from fine to heavy and dense. The cytoplasm is generally basophilic and stains blue. In smaller cells, it contains no granules, but larger megakaryoblasts may contain fine granules.
https://en.wikipedia.org/wiki/Myeloid%20tissue
Myeloid tissue, in the bone marrow sense of the word myeloid (myelo- + -oid), is tissue of bone marrow, of bone marrow cell lineage, or resembling bone marrow, and myelogenous tissue (myelo- + -genous) is any tissue of, or arising from, bone marrow; in these senses the terms are usually used synonymously, as for example with chronic myeloid/myelogenous leukemia. In hematopoiesis, myeloid cells, or myelogenous cells are blood cells that arise from a progenitor cell for granulocytes, monocytes, erythrocytes, or platelets (the common myeloid progenitor, that is, CMP or CFU-GEMM), or in a narrower sense also often used, specifically from the lineage of the myeloblast (the myelocytes, monocytes, and their daughter types). Thus, although all blood cells, even lymphocytes, are normally born in the bone marrow in adults, myeloid cells in the narrowest sense of the term can be distinguished from lymphoid cells, that is, lymphocytes, which come from common lymphoid progenitor cells that give rise to B cells and T cells. Those cells' differentiation (that is, lymphopoiesis) is not complete until they migrate to lymphatic organs such as the spleen and thymus for programming by antigen challenge. Thus, among leukocytes, the term myeloid is associated with the innate immune system, in contrast to lymphoid, which is associated with the adaptive immune system. Similarly, myelogenous usually refers to nonlymphocytic white blood cells, and erythroid can often be used to distinguish "erythrocyte-related" from that sense of myeloid and from lymphoid. The word myelopoiesis has several senses in a way that parallels those of myeloid, and myelopoiesis in the narrower sense is the regulated formation specifically of myeloid leukocytes (myelocytes), allowing that sense of myelopoiesis to be contradistinguished from erythropoiesis and lymphopoiesis (even though all blood cells are normally produced in the marrow in adults). Myeloid neoplasms always concern bone marrow cell lineage and ar
https://en.wikipedia.org/wiki/EXpressDSP
eXpressDSP is a software package produced by Texas Instruments (TI): a suite of tools used to develop applications for Texas Instruments' digital signal processor line of chips. It consists of: An integrated development environment called Code Composer Studio IDE. DSP/BIOS Real-Time OS kernel Standards for application interoperability and reuse Code examples for common applications, called the eXpressDSP Reference Frameworks A number of third-party products from TI's DSP Third Party Program eXpressDSP Algorithm Interface Standard TI publishes an eXpressDSP Algorithm Interface Standard (XDAIS), an Application Programming Interface (API) designed to enable interoperability of real-time DSP algorithms.
https://en.wikipedia.org/wiki/Spatial%20analysis
Spatial analysis is any of the formal techniques which study entities using their topological, geometric, or geographic properties. Spatial analysis includes a variety of techniques using different analytic approaches, especially spatial statistics. It may be applied in fields as diverse as astronomy, with its studies of the placement of galaxies in the cosmos, and chip fabrication engineering, with its use of "place and route" algorithms to build complex wiring structures. In a more restricted sense, spatial analysis is geospatial analysis, the technique applied to structures at the human scale, most notably in the analysis of geographic data. It may also be applied to genomics, as in transcriptomics data. Complex issues arise in spatial analysis, many of which are neither clearly defined nor completely resolved, but form the basis for current research. The most fundamental of these is the problem of defining the spatial location of the entities being studied. Classification of the techniques of spatial analysis is difficult because of the large number of different fields of research involved, the different fundamental approaches which can be chosen, and the many forms the data can take. History Spatial analysis began with early attempts at cartography and surveying. Land surveying goes back to at least 1400 B.C. in Egypt: the dimensions of taxable land plots were measured with measuring ropes and plumb bobs. Many fields have contributed to its rise in modern form. Biology contributed through botanical studies of global plant distributions and local plant locations, ethological studies of animal movement, landscape ecological studies of vegetation blocks, ecological studies of spatial population dynamics, and the study of biogeography. Epidemiology contributed with early work on disease mapping, notably John Snow's work of mapping an outbreak of cholera, with research on mapping the spread of disease and with location studies for health care delivery. Statis
https://en.wikipedia.org/wiki/Silencer%20%28genetics%29
In genetics, a silencer is a DNA sequence capable of binding transcription regulation factors, called repressors. DNA contains genes and provides the template to produce messenger RNA (mRNA). That mRNA is then translated into proteins. When a repressor protein binds to the silencer region of DNA, RNA polymerase is prevented from transcribing the DNA sequence into RNA. With transcription blocked, the translation of RNA into proteins is impossible. Thus, silencers prevent genes from being expressed as proteins. RNA polymerase, a DNA-dependent enzyme, reads the template DNA strand, a sequence of nucleotides, in the 3' to 5' direction, while the complementary RNA is synthesized in the 5' to 3' direction. RNA is similar to DNA, except that RNA contains uracil, instead of thymine, which forms a base pair with adenine. An important region for the activity of gene repression and expression found in RNA is the 3' untranslated region. This is a region on the 3' terminus of RNA that will not be translated to protein but includes many regulatory regions. Not much is yet known about silencers, but scientists continue to study them in hopes of classifying more types, locations in the genome, and diseases associated with silencers. Functionality Locations within the genome A silencer is a sequence-specific element that induces a negative effect on the transcription of its particular gene. There are many positions in which a silencer element can be located in DNA. The most common position is upstream of the target gene, where it can help repress the transcription of the gene. This distance can vary greatly, from approximately −20 bp to −2000 bp upstream of a gene. Certain silencers can be found downstream of a promoter, located within the intron or exon of the gene itself. Silencers have also been found within the 3' untranslated region (3' UTR) of mRNA. Types Currently, there are two main types of silencers in DNA, which are the classical silencer element and the non-classical
https://en.wikipedia.org/wiki/Endophysics
Endophysics literally means "physics from within". It is the study of how observations are affected and limited by the observer being within the universe. This is in contrast with the common exophysics assumption of a system observed from the "outside". The term endophysics was coined by David Finkelstein in a letter to the founder of the field, Otto E. Rössler. See also Physics Internal measurement (This notion is very similar to endophysics.)
https://en.wikipedia.org/wiki/Tip%20growth
Tip growth is an extreme form of polarised growth of living cells that results in an elongated cylindrical cell morphology with a rounded tip at which the growth activity takes place. Tip growth occurs in algae (e.g., Acetabularia acetabulum), fungi (hyphae) and plants (e.g. root hairs and pollen tubes). Tip growth is a process that has many similarities in diverse walled cells such as pollen tubes, root hairs, and hyphae. Fungal tip growth and hyphal tropisms Fungal hyphae extend continuously at their extreme tips, where enzymes are released into the environment and where new wall materials are synthesised. The rate of tip extension can be extremely rapid - up to 40 micrometres per minute. It is supported by the continuous movement of materials into the tip from older regions of the hyphae. So, in effect, a fungal hypha is a continuously moving mass of protoplasm in a continuously extending tube. This unique mode of growth - apical growth - is the hallmark of fungi, and it accounts for much of their environmental and economic significance.
https://en.wikipedia.org/wiki/Intraflagellar%20transport
Intraflagellar transport (IFT) is a bidirectional motility along axoneme microtubules that is essential for the formation (ciliogenesis) and maintenance of most eukaryotic cilia and flagella. It is thought to be required to build all cilia that assemble within a membrane projection from the cell surface. Plasmodium falciparum cilia and the sperm flagella of Drosophila are examples of cilia that assemble in the cytoplasm and do not require IFT. The process of IFT involves movement of large protein complexes called IFT particles or trains from the cell body to the ciliary tip, followed by their return to the cell body. The outward or anterograde movement is powered by kinesin-2 while the inward or retrograde movement is powered by cytoplasmic dynein 2/1b. The IFT particles are composed of about 20 proteins organized in two subcomplexes called complex A and B. IFT was first reported in 1993 by graduate student Keith Kozminski while working in the lab of Dr. Joel Rosenbaum at Yale University. The process of IFT has been best characterized in the biflagellate alga Chlamydomonas reinhardtii as well as the sensory cilia of the nematode Caenorhabditis elegans. It has been suggested based on localization studies that IFT proteins also function outside of cilia. Biochemistry Intraflagellar transport (IFT) describes the bi-directional movement of non-membrane-bound particles along the doublet microtubules of the flagellar and motile ciliary axoneme, between the axoneme and the plasma membrane. Studies have shown that the movement of IFT particles along the microtubule is carried out by two different microtubule motors; the anterograde (towards the flagellar tip) motor is heterotrimeric kinesin-2, and the retrograde (towards the cell body) motor is cytoplasmic dynein 1b. IFT particles carry axonemal subunits to the site of assembly at the tip of the axoneme; thus, IFT is necessary for axonemal growth. Therefore, since the axoneme needs a continually fresh supply of proteins, a flagellum with defective IFT machinery will slowly shrink in the absence of replacement protein subunits.
https://en.wikipedia.org/wiki/RDM%20%28lighting%29
Remote Device Management (RDM) is a protocol enhancement to USITT DMX512 that allows bi-directional communication between a lighting or system controller and attached RDM-compliant devices over a standard DMX line. This protocol allows configuration, status monitoring, and management of these devices in a way that does not disturb the normal operation of standard DMX512 devices that do not recognize the RDM protocol. The standard was originally developed by the Entertainment Services and Technology Association - Technical Standards (ESTA) and is officially known as "ANSI E1.20, Remote Device Management Over DMX512 Networks". Technical Details RDM Physical layer The RDM protocol and the RDM physical layer were designed to be compatible with legacy equipment. All compliant legacy DMX512 receivers should be usable in mixed systems with an RDM controller (console) and RDM responders (receivers). DMX receivers and RDM responders can be used with a legacy DMX console to form a DMX512-only system. From a user's point of view the system layout is very similar to a DMX system. The controller is placed at one end of the main cable segment. The cable is run receiver to receiver in a daisy-chain fashion. RDM-enabled splitters are used the same way DMX splitters would be. The far end (the non-console or splitter end) of a cable segment should be terminated. RDM requires two significant topology changes compared to DMX. However, these changes are generally internal to equipment and therefore not seen by the user. First, a controller's (console's) output is terminated. Second, this termination must provide a bias to keep the line in the 'marking state' when no driver is enabled. The reason for the additional termination is that a network segment will be driven at many points along its length. Hence, either end of the segment, if unterminated, will cause reflections. A DMX console's output drivers are always enabled. The RDM protocol is designed so that except
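To give a flavor of the wire format, the sketch below rests on two generally known features of ANSI E1.20 (RDM packets begin with the reserved start code 0xCC and end with a 16-bit additive checksum over all preceding slots); everything else here, including the field values, is illustrative filler rather than a valid request.

```python
RDM_START_CODE = 0xCC  # start code reserved for RDM traffic on a DMX512 line

def rdm_checksum(slots):
    """16-bit additive checksum over all slots preceding the checksum field.

    Sketch of the E1.20 checksum rule only; a real packet also carries
    sub-start code, message length, destination/source UIDs, command
    class, parameter ID, etc., which are stubbed out below.
    """
    return sum(slots) & 0xFFFF

frame = bytes([RDM_START_CODE, 0x01, 0x18]) + b"\x00" * 21  # placeholder body
csum = rdm_checksum(frame)
wire = frame + bytes([csum >> 8, csum & 0xFF])  # checksum sent high byte first
print(f"checksum = 0x{csum:04X}, frame length = {len(wire)} slots")
```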
https://en.wikipedia.org/wiki/Abdomen
The abdomen (colloquially called the belly, tummy, midriff, tucky or stomach) is the part of the body between the thorax (chest) and pelvis, in humans and in other vertebrates. The abdomen is the front part of the abdominal segment of the torso. The area occupied by the abdomen is called the abdominal cavity. In arthropods it is the posterior tagma of the body; it follows the thorax or cephalothorax. In humans, the abdomen stretches from the thorax at the thoracic diaphragm to the pelvis at the pelvic brim. The pelvic brim stretches from the lumbosacral joint (the intervertebral disc between L5 and S1) to the pubic symphysis and is the edge of the pelvic inlet. The space above this inlet and under the thoracic diaphragm is termed the abdominal cavity. The boundary of the abdominal cavity is the abdominal wall in the front and the peritoneal surface at the rear. In vertebrates, the abdomen is a large body cavity enclosed by the abdominal muscles, at the front and to the sides, and by part of the vertebral column at the back. Lower ribs can also enclose ventral and lateral walls. The abdominal cavity is continuous with, and above, the pelvic cavity. It is attached to the thoracic cavity by the diaphragm. Structures such as the aorta, inferior vena cava and esophagus pass through the diaphragm. Both the abdominal and pelvic cavities are lined by a serous membrane known as the parietal peritoneum. This membrane is continuous with the visceral peritoneum lining the organs. The abdomen in vertebrates contains a number of organs belonging to, for instance, the digestive system, urinary system, and muscular system. Contents The abdominal cavity contains most organs of the digestive system, including the stomach, the small intestine, and the colon with its attached appendix. Other digestive organs are known as the accessory digestive organs and include the liver, its attached gallbladder, and the pancreas, and these communicate with the rest of the system via various ducts.
https://en.wikipedia.org/wiki/Bring%20radical
In algebra, the Bring radical or ultraradical of a real number a is the unique real root of the polynomial x⁵ + x + a. The Bring radical of a complex number a is either any of the five roots of the above polynomial (it is thus multi-valued), or a specific root, which is usually chosen such that the Bring radical is real-valued for real a and is an analytic function in a neighborhood of the real line. Because of the existence of four branch points, the Bring radical cannot be defined as a function that is continuous over the whole complex plane, and its domain of continuity must exclude four branch cuts. George Jerrard showed that some quintic equations can be solved in closed form using radicals and Bring radicals, which had been introduced by Erland Bring. In this article, the Bring radical of a is denoted BR(a). For real argument, it is odd, monotonically decreasing, and unbounded, with asymptotic behavior BR(a) ~ −a^(1/5) for large a. Normal forms The quintic equation is rather difficult to obtain solutions for directly, with five independent coefficients in its most general form: x⁵ + a₄x⁴ + a₃x³ + a₂x² + a₁x + a₀ = 0. The various methods for solving the quintic that have been developed generally attempt to simplify the quintic using Tschirnhaus transformations to reduce the number of independent coefficients. Principal quintic form The general quintic may be reduced into what is known as the principal quintic form, with the quartic and cubic terms removed: y⁵ + c₂y² + c₁y + c₀ = 0. If the roots of a general quintic and a principal quintic are related by a quadratic Tschirnhaus transformation yₖ = xₖ² + αxₖ + β, the coefficients α and β may be determined by using the resultant, or by means of the power sums of the roots and Newton's identities. This leads to a system of equations in α and β consisting of a quadratic and a linear equation, and either of the two sets of solutions may be used to obtain the corresponding three coefficients of the principal quintic form. This form is used by Felix Klein's solution to the quintic. Bring–Jerrard normal form It is possible to
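Numerically, BR(a) is simply the real root of x⁵ + x + a, which can be found by bisection since x⁵ + x is strictly increasing. A small illustrative sketch (not from the article):

```python
def bring_radical(a, tol=1e-12):
    """Real root of x**5 + x + a, found by bisection.

    x**5 + x is strictly increasing, so the real root is unique;
    as a function of a, BR is odd and monotonically decreasing.
    """
    f = lambda x: x**5 + x + a
    lo, hi = -1.0, 1.0
    while f(lo) > 0:          # expand the bracket until f(lo) <= 0 <= f(hi)
        lo *= 2
    while f(hi) < 0:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(bring_radical(1.0))    # ~ -0.75488, the real root of x^5 + x + 1
print(bring_radical(-1.0))   # ~ +0.75488, illustrating that BR is odd
print(bring_radical(1e10))   # ~ -100.0, matching BR(a) ~ -a^(1/5) for large a
```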
https://en.wikipedia.org/wiki/Apportionment%20%28politics%29
Apportionment is the process by which seats in a legislative body are distributed among administrative divisions, such as states or parties, entitled to representation. This page presents the general principles and issues related to apportionment. The page Apportionment by country describes specific practices used around the world. The page Mathematics of apportionment describes mathematical formulations and properties of apportionment rules. The simplest and most universal principle is that elections should give each voter's intentions equal weight. This is both intuitive and stated in laws such as the Fourteenth Amendment to the United States Constitution (the Equal Protection Clause). However, there are a variety of historical and technical reasons why this principle is not followed absolutely or, in some cases, as a first priority. Common problems Fundamentally, representing a population numbering in the thousands or millions with a governing body of a reasonable, and thus accountable, size involves arithmetic that will not be exact. Although weighting a representative's votes (on proposed laws and measures etc.) according to the number of their constituents could make representation more exact, giving each representative exactly one vote avoids complexity in governance. Over time, populations migrate and change in number. Governing bodies, however, usually exist for a defined term of office. While parliamentary systems provide for dissolution of the body in reaction to political events, no system tries to make real-time adjustments (during one term of office) to reflect demographic changes. Instead, any redistricting takes effect at the next scheduled election or next scheduled census. Apportionment by district In some representative assemblies, each member represents a geographic district. Equal representation requires that districts comprise the same number of residents or voters. But this is not universal, for reasons including the following: In federations li
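The inexact arithmetic is easy to see in code. The sketch below (illustrative; the text above does not single out any particular rule) applies the largest-remainder method: each division first receives the integer part of its exact quota, and the leftover seats go to the largest fractional remainders.

```python
def largest_remainder(populations, seats):
    """Hamilton/largest-remainder apportionment (one common rule among many)."""
    total = sum(populations)
    quotas = [p * seats / total for p in populations]   # exact fair shares
    alloc = [int(q) for q in quotas]                    # floor of each quota
    leftovers = seats - sum(alloc)                      # seats still unassigned
    # hand out remaining seats to the largest fractional remainders
    by_remainder = sorted(range(len(populations)),
                          key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in by_remainder[:leftovers]:
        alloc[i] += 1
    return alloc

pops = [21878, 9713, 4167, 3252, 1065]       # hypothetical district populations
print(largest_remainder(pops, 40))           # -> [22, 10, 4, 3, 1], summing to 40
```

Note that the exact quotas here (21.84, 9.69, 4.16, 3.25, 1.06) cannot all be honored at once; which divisions gain or lose a fraction of a seat is exactly the judgment the various apportionment rules make differently.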
https://en.wikipedia.org/wiki/Radiophobia
Radiophobia is a fear of ionizing radiation. Examples include patients refusing X-rays because they believe the radiation will kill them, such as Steve Jobs and Bob Marley, who both died after refusing radiation treatment for their cancer. Given that overdoses of radiation are harmful, even deadly (i.e. radiation-induced cancer, and acute radiation syndrome), it is reasonable to fear high doses of radiation. The term is also used to describe the opposition to the use of nuclear technology (i.e. nuclear power) arising from concerns disproportionately greater than actual risks would merit. Early use The term was used in a paper entitled "Radio-phobia and radio-mania" presented by Dr Albert Soiland of Los Angeles in 1903. In the 1920s, the term was used to describe people who were afraid of radio broadcasting and receiving technology. In 1931, radiophobia was referred to in The Salt Lake Tribune as a "fear of loudspeakers", an affliction that Joan Crawford was reported as suffering. The term "radiophobia" was also printed in Australian newspapers in the 1930s and 1940s, assuming a similar meaning. The 1949 poem by Margaret Mercia Baker entitled "Radiophobia" laments the intrusion of advertising into radio broadcasts. The term remained in use with its original association with radios and radio broadcasting during the 1940s and 1950s. During the 1950s and 1960s, the Science Service associated the term with fear of gamma radiation and the medical use of x-rays. A Science Service article published in several American newspapers proposed that "radiophobia" could be attributed to the publication of information regarding the "genetic hazards" of exposure to ionising radiation by the National Academy of Sciences in 1956. In a newspaper column published in 1970, Dr Harold Pettit MD wrote: "A healthy respect for the hazards of radiation is desirable. When atomic testing began in the early 1950s, these hazards were grossly exaggerated, producing a new psychological disorder."
https://en.wikipedia.org/wiki/Z-factor
The Z-factor is a measure of statistical effect size. It has been proposed for use in high-throughput screening (where it is also known as Z-prime), and commonly written as Z' to judge whether the response in a particular assay is large enough to warrant further attention. Background In high-throughput screens, experimenters often compare a large number (hundreds of thousands to tens of millions) of single measurements of unknown samples to positive and negative control samples. The particular choice of experimental conditions and measurements is called an assay. Large screens are expensive in time and resources. Therefore, prior to starting a large screen, smaller test (or pilot) screens are used to assess the quality of an assay, in an attempt to predict if it would be useful in a high-throughput setting. The Z-factor is an attempt to quantify the suitability of a particular assay for use in a full-scale, high-throughput screen. Definition The Z-factor is defined in terms of four parameters: the means (μ) and standard deviations (σ) of both the positive (p) and negative (n) controls (μp, σp, and μn, σn). Given these values, the Z-factor is defined as: Z-factor = 1 − 3(σp + σn) / |μp − μn|. In practice, the Z-factor is estimated from the sample means and sample standard deviations. Interpretation The following interpretations for the Z-factor are taken from the originating publication (Zhang et al., 1999): a Z-factor of 1.0 is ideal; values between 0.5 and 1.0 indicate an excellent assay; values between 0 and 0.5 indicate a marginal assay; and values of 0 or below indicate too much overlap between positive and negative controls for the assay to be useful for screening. Note that by the standards of many types of experiments, a zero Z-factor would suggest a large effect size, rather than a borderline useless result as suggested above. For example, if σp=σn=1, then μp=6 and μn=0 gives a zero Z-factor. But for normally-distributed data with these parameters, the probability that the positive control value would be less than the negative control value is less than 1 in 10⁵. Extreme conservatism is used in high throughput screening due to the large number of tests performed. Limitations The constant factor 3 in the definition of the Z-factor is motivated by the normal distribution, for which more than 99% of values lie within three standard deviations of the mean.
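The definition translates directly into code (a minimal sketch; the function name and the control readings are illustrative):

```python
import statistics

def z_factor(positive, negative):
    """Z' = 1 - 3*(sd_p + sd_n) / |mean_p - mean_n| for two sets of controls."""
    mu_p, mu_n = statistics.mean(positive), statistics.mean(negative)
    sd_p, sd_n = statistics.stdev(positive), statistics.stdev(negative)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

pos = [98, 102, 101, 99, 100, 103]   # illustrative positive-control signals
neg = [10, 12, 9, 11, 10, 13]        # illustrative negative-control signals
print(round(z_factor(pos, neg), 2))  # ~0.89: an excellent assay by the usual cutoffs
```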
https://en.wikipedia.org/wiki/Microhematuria
Microhematuria, also called microscopic hematuria (both usually abbreviated as MH), is a medical condition in which urine contains small amounts of blood; the blood quantity is too low to change the color of the urine (otherwise, it is known as gross hematuria). While not dangerous in itself, it may be a symptom of kidney disease, such as IgA nephropathy, or of sickle cell trait, which should be monitored by a doctor. The American Urological Association (AUA) recommends a definition of microscopic hematuria as three or more red blood cells per high-power microscopic field in urinary sediment from two of three properly collected urinalysis specimens. Microhematuria is usually asymptomatic, and as of 2001 there were medical guidelines on how to handle asymptomatic microhematuria (AMH) so as to avoid problems such as overtreatment or misdiagnosis. In 2020 the American Urological Association guidelines were updated. See also Proteinuria Hematuria Myoglobinuria Hemoglobinuria
https://en.wikipedia.org/wiki/Health%20assessment
A health assessment is a plan of care that identifies the specific needs of a person and how those needs will be addressed by the healthcare system or skilled nursing facility. Health assessment is the evaluation of the health status by performing a physical exam after taking a health history. It is done to detect diseases early in people who may look and feel well. Evidence does not support routine health assessments in otherwise healthy people. Health assessment is the evaluation of the health status of an individual along the health continuum. The purpose of the assessment is to establish where on the health continuum the individual is, because this guides how to approach and treat the individual. The health care approaches range from preventive, to treatment, to palliative care in relation to the individual's status on the health continuum. An assessment is not itself a treatment or treatment plan; the plan based on its findings is a care plan, prefixed by the specialty involved, such as a medical, physical therapy, or nursing care plan. History Health assessment has been separated by authors from physical assessment to include the focus on health occurring on a continuum as a fundamental teaching. In the healthcare industry it is understood that health occurs on a continuum, so the term used is simply "assessment", though it may be prefixed by the specialty's focus, such as nursing or physical therapy. In healthcare, the assessment's focus is biopsychosocial, but the intensity of focus may vary by the type of healthcare practitioner. For example, in the emergency room the focus is the chief complaint and how to help that person with the perceived problem. If the problem is a heart attack, then the intensity of focus is initially on the biological/physical problem. See also Nursing assessment
https://en.wikipedia.org/wiki/Pearson%E2%80%93Anson%20effect
The Pearson–Anson effect, discovered in 1922 by Stephen Oswald Pearson and Horatio Saint George Anson, is the phenomenon of an oscillating electric voltage produced by a neon bulb connected across a capacitor, when a direct current is applied through a resistor. This circuit, now called the Pearson–Anson oscillator, neon lamp oscillator, or sawtooth oscillator, is one of the simplest types of relaxation oscillator. It generates a sawtooth output waveform. It has been used in low frequency applications such as blinking warning lights, stroboscopes, tone generators in electronic organs and other electronic music circuits, and in time bases and deflection circuits of early cathode-ray tube oscilloscopes. Since the development of microelectronics, these simple negative resistance oscillators have been superseded in many applications by more flexible semiconductor relaxation oscillators such as the 555 timer IC. Neon bulb as a switching device A neon bulb, often used as an indicator lamp in appliances, consists of a glass bulb containing two electrodes, separated by an inert gas such as neon at low pressure. Its nonlinear current-voltage characteristics (diagram below) allow it to function as a switching device. When a voltage is applied across the electrodes, the gas conducts almost no electric current until a threshold voltage is reached (point b), called the firing or breakdown voltage, Vb. At this voltage electrons in the gas are accelerated to a high enough speed to knock other electrons off gas atoms, which go on to knock off more electrons in a chain reaction. The gas in the bulb ionizes, starting a glow discharge, and its resistance drops to a low value. In its conducting state the current through the bulb is limited only by the external circuit. The voltage across the bulb drops to a lower voltage called the maintaining voltage Vm. The bulb will continue to conduct current until the applied voltage drops below the extinction voltage Ve (point d), when the discharge extinguishes and the bulb returns to its nonconducting state.
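For the classic circuit (supply Vs charging capacitor C through resistor R, with the bulb firing at Vb and extinguishing at Ve), the exponential RC charging law gives an oscillation period of roughly T = RC·ln((Vs − Ve)/(Vs − Vb)). A sketch of the arithmetic (component values and bulb thresholds are illustrative, not taken from the article):

```python
import math

def sawtooth_period(R, C, Vs, Vb, Ve):
    """Approximate period of a neon-lamp relaxation oscillator.

    Derived from exponential RC charging of the capacitor between the
    extinction voltage Ve and the firing voltage Vb; the brief discharge
    time through the lamp is neglected.
    """
    return R * C * math.log((Vs - Ve) / (Vs - Vb))

# Illustrative values: 1 Mohm, 100 nF, 120 V supply, NE-2-like thresholds
T = sawtooth_period(R=1e6, C=100e-9, Vs=120.0, Vb=90.0, Ve=60.0)
print(f"period ~ {T * 1000:.1f} ms, frequency ~ {1 / T:.1f} Hz")  # ~69 ms, ~14 Hz
```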
https://en.wikipedia.org/wiki/Eosinophilic%20fasciitis
Eosinophilic fasciitis, also known as Shulman's syndrome, is an inflammatory disease that affects the fascia, other connective tissues, surrounding muscles, blood vessels and nerves. Unlike other forms of fasciitis, eosinophilic fasciitis is typically self-limited and confined to the arms and legs, although it can require treatment with corticosteroids, and some cases are associated with aplastic anemia. The condition was first characterized in 1974, but it is not yet known whether it is actually a distinct condition or merely a variant presentation of another syndrome. The presentation is similar to that of scleroderma or systemic sclerosis. However, unlike scleroderma, eosinophilic fasciitis affects the deeper fascial layers, rather than the dermis; the characteristic and severe effects of scleroderma and systemic sclerosis, such as Raynaud's syndrome, involvement of the extremities, prominent small blood vessels (telangiectasias), and visceral changes such as swallowing problems, are absent. Nevertheless, the term remains used for diagnostic purposes. Signs and symptoms Because the disease is rare and clinical presentations vary, a clear set of symptoms is difficult to define. Severe pain and swelling are often reported, and skin can resemble orange peel in appearance. Less common features include joint pain and carpal tunnel syndrome. Cause Most cases are idiopathic, but strenuous exercise, initiation of hemodialysis, infection with Borrelia burgdorferi, and certain medications, such as statins, phenytoin, ramipril, and subcutaneous heparin, may trigger the condition. Diagnosis The key to diagnosis is the observation of skin changes in combination with eosinophilia, but the most accurate test is a biopsy of skin, fascia, and muscle. Treatment Common treatments include corticosteroids such as prednisone, although medications such as hydroxychloroquine have also been used. Early initiation of treatment usually portends a good prognosis if there is no visceral involvement.
https://en.wikipedia.org/wiki/Sense%20Plan%20Act
Sense-Plan-Act (SPA) was the predominant robot control methodology through 1985. Sense - gather information using the sensors. Plan - create a world model using all the information, and plan the next move. Act - execute the plan. SPA is used in iterations: after the acting phase, the sensing phase, and thus the entire cycle, is repeated, as in the sketch below. See also: OODA loop, PDCA, Continual improvement process
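A skeletal control loop in this style might look like the following (a minimal Python sketch; the sensor, planner, and actuator interfaces are invented for illustration):

```python
import time

class Robot:
    """Minimal Sense-Plan-Act loop; all three phases are stubs for illustration."""

    def sense(self):
        # Sense: gather information using the sensors (stubbed range reading).
        return {"front_distance_m": 1.2}

    def plan(self, world_model):
        # Plan: use the world model to decide the next move.
        return "forward" if world_model["front_distance_m"] > 0.5 else "turn_left"

    def act(self, action):
        # Act: execute the planned action on the actuators (stubbed as a print).
        print(f"executing: {action}")

    def run(self, cycles=3, period_s=0.1):
        for _ in range(cycles):          # SPA is used in iterations
            model = self.sense()
            action = self.plan(model)
            self.act(action)
            time.sleep(period_s)

Robot().run()
```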
https://en.wikipedia.org/wiki/Exercise-induced%20bronchoconstriction
Exercise-induced asthma (EIA) occurs when the airways narrow as a result of exercise. The preferred term for this condition is exercise-induced bronchoconstriction (EIB). While exercise does not cause asthma, it is frequently an asthma trigger. It might be expected that people with EIB would present with shortness of breath, and/or an elevated respiratory rate and wheezing, consistent with an asthma attack. However, many will present with decreased stamina, or difficulty in recovering from exertion compared to team members, or paroxysmal coughing from an irritable airway. Similarly, examination may reveal wheezing and prolonged expiratory phase, or may be quite normal. Consequently, a potential for under-diagnosis exists. Measurement of airflow, such as peak expiratory flow rates, which can be done inexpensively on the track or sideline, may prove helpful. In athletes, symptoms of bronchospasm such as chest discomfort, breathlessness, and fatigue are often falsely attributed to the individual being "out of shape", having asthma, or possessing a hyperreactive airway, rather than to EIB. Cause While the potential triggering events for EIB are well recognized, the underlying pathogenesis is poorly understood. It usually occurs after at least several minutes of vigorous, aerobic activity, which increases oxygen demand to the point where breathing through the nose (nasal breathing) must be supplemented by mouth breathing. The resultant inhalation of air that has not been warmed and humidified by the nasal passages seems to generate increased blood flow to the linings of the bronchial tree, resulting in edema. Constriction of these small airways then follows, worsening the degree of obstruction to airflow. There is increasing evidence that the smooth muscle that lines the airways becomes progressively more sensitive to changes that occur as a result of injury to the airways from dehydration. The chemical mediators that provoke the muscle spasm appear to arise fr
https://en.wikipedia.org/wiki/Thomas%20G.%20Barnes
Thomas G. Barnes (August 14, 1911 – October 23, 2001) was an American creationist, who argued in support of his religious belief in a young earth by making the scientific claim that the Earth's magnetic field is steadily decaying. Biography Barnes obtained three degrees in Physics: an AB from Hardin-Simmons University in 1933, an MS from Brown University under Robert Bruce Lindsay in 1936, and an honorary Sc.D. again from Hardin-Simmons University in 1950. His detractors have questioned his credentials based on the fact that his doctorate was honorary. At the time that Barnes joined the Creation Research Society (CRS) in the early 1960s, he was the head of the Schellenger Research Laboratories at Texas Western College (now University of Texas at El Paso), where he was completing a textbook on electricity and magnetism, and on whose faculty he served from 1938 until he retired in 1981. Barnes headed one of the first projects of the CRS, to create a creationist high school biology text. Barnes served as the president of the CRS in the mid-1970s. Earth's magnetic field decay Barnes claimed to calculate the half-life of the earth's magnetic field as approximately 1,400 years, based on 130 years of empirical data. Some creationists have used Barnes' argument as evidence for a young earth, less than 10,000 years old, as suggested by the Bible. His critics have challenged this concept, claiming that Barnes failed to take experimental uncertainties into account and used an obsolete model of the interior of the earth. Works Books Thomas G. Barnes, Science and Biblical Faith: A Science Documentary, Creation Research Society Books, 191pp. (1993) (). Thomas G. Barnes, Space Medium: The Key to Unified Physics, Geo/Space Research Foundation, 170pp. (1986) (). Thomas G. Barnes, Physics of the Future: A Classical Unification of Physics, Master Books, 208pp., (1983) (). Thomas G. Barnes, Origin and Destiny of the Earth's Magnetic Field, Institute for Creation Research, (19
https://en.wikipedia.org/wiki/Pumping%20lemma%20for%20context-free%20languages
In computer science, in particular in formal language theory, the pumping lemma for context-free languages, also known as the Bar-Hillel lemma, is a lemma that gives a property shared by all context-free languages and generalizes the pumping lemma for regular languages. The pumping lemma can be used to construct a proof by contradiction that a specific language is not context-free. Conversely, the pumping lemma does not suffice to guarantee that a language is context-free; there are other necessary conditions, such as Ogden's lemma, or the Interchange lemma. Formal statement If a language L is context-free, then there exists some integer p ≥ 1 (called a "pumping length") such that every string s in L that has a length of p or more symbols (i.e. with |s| ≥ p) can be written as s = uvwxy with substrings u, v, w, x and y, such that 1. |vwx| ≤ p, 2. |vx| ≥ 1, and 3. uvⁿwxⁿy ∈ L for all n ≥ 0. Below is a formal expression of the Pumping Lemma: (∀ context-free L)(∃p ≥ 1)(∀s ∈ L with |s| ≥ p)(∃u, v, w, x, y with s = uvwxy, |vwx| ≤ p, |vx| ≥ 1)(∀n ≥ 0)(uvⁿwxⁿy ∈ L). Informal statement and explanation The pumping lemma for context-free languages (called just "the pumping lemma" for the rest of this article) describes a property that all context-free languages are guaranteed to have. The property is a property of all strings in the language that are of length at least p, where p is a constant—called the pumping length—that varies between context-free languages. Say s is a string of length at least p that is in the language. The pumping lemma states that s can be split into five substrings, s = uvwxy, where vx is non-empty and the length of vwx is at most p, such that repeating v and x the same number of times (n) in s produces a string that is still in the language. It is often useful to repeat zero times, which removes v and x from the string. This process of "pumping up" s with additional copies of v and x is what gives the pumping lemma its name. Finite languages (which are regular and hence context-free) obey the pumping lemma trivially by having p equal to the maximum string length in L plus one. As there are no strings of this length the pumping lemma is not violated.
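To make the five-way split concrete, the sketch below (illustrative Python, not from the article) pumps a string of the context-free language {aⁿbⁿ : n ≥ 0}, using the decomposition u = aᵖ⁻¹, v = a, w = ε, x = b, y = bᵖ⁻¹, which keeps every pumped string in the language:

```python
def in_anbn(s):
    """Membership test for the context-free language {a^n b^n : n >= 0}."""
    half = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * half + "b" * half

p = 4                                   # a pumping length for this language
s = "a" * p + "b" * p                   # a string in L with |s| >= p
u, v, w, x, y = "a" * (p - 1), "a", "", "b", "b" * (p - 1)
assert s == u + v + w + x + y and len(v + w + x) <= p and len(v + x) >= 1

for n in range(4):                      # pump v and x the same number of times
    pumped = u + v * n + w + x * n + y
    print(n, pumped, in_anbn(pumped))   # every pumped string stays in L
```

For a language like {aⁿbⁿcⁿ}, no such decomposition exists for sufficiently long strings, which is exactly how the lemma is used in proofs by contradiction that the language is not context-free.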
https://en.wikipedia.org/wiki/Multiple%20line%20segment%20intersection
In computational geometry, the multiple line segment intersection problem supplies a list of line segments in the Euclidean plane and asks whether any two of them intersect (cross). Simple algorithms examine each pair of segments. However, if a large number of possibly intersecting segments are to be checked, this becomes increasingly inefficient since most pairs of segments are not close to one another in a typical input sequence. The most common, and more efficient, way to solve this problem for a high number of segments is to use a sweep line algorithm, where we imagine a line sliding across the line segments and we track which line segments it intersects at each point in time using a dynamic data structure based on binary search trees. The Shamos–Hoey algorithm applies this principle to solve the line segment intersection detection problem, as stated above, of determining whether or not a set of line segments has an intersection; the Bentley–Ottmann algorithm works by the same principle to list all intersections in logarithmic time per intersection. See also Bentley–Ottmann algorithm
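The simple pairwise algorithm reduces to a constant-time test for a single pair of segments, usually written with signed-area orientation tests. A sketch (illustrative Python assuming general-position input; collinear overlaps would need extra case handling):

```python
from itertools import combinations

def orient(p, q, r):
    """Sign of the signed area of triangle pqr: >0 left turn, <0 right turn."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(s, t):
    """True if segments s and t properly cross (general position assumed)."""
    (a, b), (c, d) = s, t
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def any_intersection(segments):
    """Naive O(n^2) check over all pairs; a sweep line achieves O(n log n)."""
    return any(segments_cross(s, t) for s, t in combinations(segments, 2))

segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((5, 5), (6, 5))]
print(any_intersection(segs))  # True: the first two segments cross at (2, 2)
```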
https://en.wikipedia.org/wiki/Charcot%E2%80%93Leyden%20crystals
Charcot–Leyden crystals are microscopic crystals composed of the eosinophil protein galectin-10, found in people who have allergic diseases such as asthma or parasitic infections such as parasitic pneumonia or ascariasis. Appearance Charcot–Leyden crystals are composed of an eosinophilic lysophospholipase-binding protein called galectin-10. They vary in size and may be as large as 50 µm in length. Charcot–Leyden crystals are slender and pointed at both ends, consisting of a pair of hexagonal pyramids joined at their bases. Normally colorless, they are stained purplish-red by trichrome. Clinical significance They are indicative of a disease involving eosinophilic inflammation or proliferation, such as is found in allergic reactions (asthma, bronchitis, allergic rhinitis and rhinosinusitis) and parasitic infections such as Entamoeba histolytica, Necator americanus, and Ancylostoma duodenale. Charcot–Leyden crystals are often seen pathologically in patients with bronchial asthma. History Friedrich Albert von Zenker was the first to notice these crystals, doing so in 1851, after which they were described jointly by Jean-Martin Charcot and Charles-Philippe Robin in 1853, then in 1872 by Ernst Viktor von Leyden. See also Curschmann's spirals
https://en.wikipedia.org/wiki/Callendar%E2%80%93Van%20Dusen%20equation
The Callendar–Van Dusen equation is an equation that describes the relationship between resistance (R) and temperature (T) of platinum resistance thermometers (RTD). As commonly used for commercial applications of RTD thermometers, the relationship between resistance and temperature is given by the following equations. The relationship above 0 °C (up to the melting point of aluminum, ~660 °C) is a simplification of the equation that holds over a broader range down to −200 °C. The longer form was published in 1925 (see below) by M.S. Van Dusen and is given as: R(T) = R₀[1 + AT + BT² + C(T − 100 °C)T³] (valid for −200 °C ≤ T < 0 °C). The simpler form was published earlier by Callendar; it is generally valid only over the range between 0 °C and 660 °C and is given as: R(T) = R₀(1 + AT + BT²), where the constants A, B, and C are derived from experimentally determined parameters α, β, and δ using resistance measurements made at 0 °C, 100 °C and 260 °C. Together, these relations are A = α + αδ/100 °C, B = −αδ/(100 °C)², and C = −αβ/(100 °C)⁴. It is important to note that these equations are listed as the basis for the temperature/resistance tables for idealized platinum resistance thermometers and are not intended to be used for the calibration of an individual thermometer, which would require the experimentally determined parameters to be found. These equations are cited in International Standards for platinum RTDs' resistance versus temperature functions DIN/IEC 60751 (also called IEC 751), also adopted as BS-1904, and with some modification, JIS C1604. The equation was found by British physicist Hugh Longbourne Callendar, and refined for measurements at lower temperatures by M. S. Van Dusen, a chemist at the U.S. National Bureau of Standards (now known as the National Institute of Standards and Technology) in work published in 1925 in the Journal of the American Chemical Society. Starting in 1968, the Callendar–Van Dusen equation was replaced by an interpolating formula given by a 20th order polynomial first published in The International Practical Temperature Scale of 1968 by the Comité International des Poids et Mesures. Starting
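In code, with the standard IEC 60751 coefficients for an idealized Pt100 sensor (R₀ = 100 Ω, A ≈ 3.9083×10⁻³, B ≈ −5.775×10⁻⁷, C ≈ −4.183×10⁻¹²; these values come from the standard, not from the text above), the two forms read:

```python
R0 = 100.0          # Pt100 resistance at 0 degC (ohms)
A = 3.9083e-3       # IEC 60751 coefficients for idealized platinum RTDs
B = -5.775e-7
C = -4.183e-12      # C is used only below 0 degC

def rtd_resistance(t):
    """Callendar-Van Dusen resistance of a Pt100 at temperature t (degC)."""
    if t >= 0.0:    # Callendar's simpler form, valid from 0 degC upward
        return R0 * (1 + A * t + B * t * t)
    # Van Dusen's longer form extends the range down to -200 degC
    return R0 * (1 + A * t + B * t * t + C * (t - 100.0) * t ** 3)

for t in (-100.0, 0.0, 100.0):
    print(f"{t:7.1f} degC -> {rtd_resistance(t):8.3f} ohm")
# -100.0 degC ->  60.256 ohm;  0.0 degC -> 100.000 ohm;  100.0 degC -> 138.506 ohm
```

The printed values match the published Pt100 reference tables to within rounding, which is the intended use of these idealized coefficients; calibrating an individual thermometer still requires fitting α, β, and δ experimentally, as noted above.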
https://en.wikipedia.org/wiki/Older%20Dryas
The Older Dryas was a stadial (cold) period between the Bølling and Allerød interstadials (warmer phases), about 14,000 years Before Present, towards the end of the Pleistocene. Its date is not well defined, with estimates varying by 400 years, but its duration is agreed to have been around 200 years. The gradual warming since the Last Glacial Maximum (27,000 to 24,000 years BP) was interrupted by two cold spells: the Older Dryas and the Younger Dryas (c. 12,900–11,650 BP). In northern Scotland, the glaciers were thicker and deeper during the Older Dryas than the succeeding Younger Dryas, and there is no evidence of human occupation of Britain. In Northwestern Europe there was also an earlier Oldest Dryas (18.5–17 ka BP, 15–14 ka BP). The Dryas are named after an indicator genus, the Arctic and Alpine plant Dryas, the remains of which are found in higher concentrations in deposits from colder periods. The Older Dryas was a variable cold, dry Blytt–Sernander period, observed in climatological evidence in only some regions, dependent on latitude. In regions in which it is not observed, the Bølling–Allerød is considered a single interstadial period. Evidence of the Older Dryas is strongest in northern Eurasia, particularly part of Northern Europe, roughly equivalent to Pollen zone Ic. Dates In the Greenland oxygen isotope record, the Older Dryas appears as a downward peak establishing a small, low-intensity gap between the Bølling and the Allerød. That configuration presents a difficulty in estimating its time, as it is more of a point than a segment. The segment is small enough to escape the resolution of most carbon-14 series, as the points are not close enough together to find the segment. One approach to the problem assigns a point and then picks an arbitrary segment. The Older Dryas is sometimes considered to be "centered" near 14,100 BP or to be 100 to 150 years long "at" 14,250 BP. A second approach finds carbon-14 or other dates as close to the end of
https://en.wikipedia.org/wiki/KEMA
KEMA (Keuring van Elektrotechnische Materialen te Arnhem) NV, established in 1927, is a global energy consultancy company headquartered in Arnhem, Netherlands. It offers management consulting, technology consulting and services to the energy value chain, including business and technical consultancy, operational support, measurements and inspection, and testing and certification services. On 22 December 2011, DNV acquired 74.3% of KEMA's shares, creating a global consulting and certification company with 2300 experts located in over 20 countries. On 12 September 2013, DNV and GL merged into DNV GL, becoming the world's leading ship and offshore classification society. As DNV GL the company continued to issue KEMA certificates from their laboratories. On 30 December 2019, the property of KEMA B.V. was transferred from DNV GL to CESI S.p.A. The acquisition included all the high voltage testing, inspection and certification activities carried out at the KEMA-owned laboratories in Arnhem (The Netherlands) and Prague (Czech Republic). The transaction was completed on March 2, 2020 with the acquisition of the Chalfont laboratory (USA). The KEMA testing and inspection facilities include the world's largest high-power laboratory, with the highest short circuit power of 10,000 MVA, and the world's first laboratory capable of testing ultra-high voltage components for super grids, as well as the Flex Power Grid Laboratory, for advanced testing of smart grid components. History KEMA was founded in 1927 as the Dutch electricity industry's Arnhem-based test house, providing electrical safety testing and certification activities. Over the span of eighty years, the company grew into a risk management company actively providing independent applied research and consultancy services via an international network of subsidiaries and agencies. On 22 December 2011, DNV acquired 74.3% of KEMA's shares, creating a global consulting and certification company within the cleaner energy, sustai
https://en.wikipedia.org/wiki/Fatou%E2%80%93Lebesgue%20theorem
In mathematics, the Fatou–Lebesgue theorem establishes a chain of inequalities relating the integrals (in the sense of Lebesgue) of the limit inferior and the limit superior of a sequence of functions to the limit inferior and the limit superior of integrals of these functions. The theorem is named after Pierre Fatou and Henri Léon Lebesgue. If the sequence of functions converges pointwise, the inequalities turn into equalities and the theorem reduces to Lebesgue's dominated convergence theorem. Statement of the theorem Let f1, f2, ... denote a sequence of real-valued measurable functions defined on a measure space (S,Σ,μ). If there exists a Lebesgue-integrable function g on S which dominates the sequence in absolute value, meaning that |fn| ≤ g for all natural numbers n, then all fn as well as the limit inferior and the limit superior of the fn are integrable and ∫S (liminf fn) dμ ≤ liminf ∫S fn dμ ≤ limsup ∫S fn dμ ≤ ∫S (limsup fn) dμ, where all limits are taken as n → ∞. Here the limit inferior and the limit superior of the fn are taken pointwise. The integral of the absolute value of these limiting functions is bounded above by the integral of g. Since the middle inequality (for sequences of real numbers) is always true, the directions of the other inequalities are easy to remember. Proof All fn as well as the limit inferior and the limit superior of the fn are measurable and dominated in absolute value by g, hence integrable. The first inequality follows by applying Fatou's lemma to the non-negative functions fn + g and using the linearity of the Lebesgue integral. The last inequality is the reverse Fatou lemma. Since g also dominates the limit superior of the |fn|, we have |∫S (liminf fn) dμ| ≤ ∫S (limsup |fn|) dμ ≤ ∫S g dμ by the monotonicity of the Lebesgue integral. The same estimates hold for the limit superior of the fn.
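A standard example (added here for illustration, not taken from the article) shows that the outer inequalities can be strict: on S = [0,1] with Lebesgue measure, alternate between indicators of the two halves of the interval.

```latex
% Example where the Fatou-Lebesgue chain is strict (g = 1 dominates; requires amsmath).
\[
f_n = \begin{cases} \mathbf{1}_{[0,1/2]}, & n \text{ even},\\[2pt]
                    \mathbf{1}_{[1/2,1]}, & n \text{ odd}, \end{cases}
\qquad |f_n| \le g \equiv 1 .
\]
% Pointwise, liminf f_n = 0 almost everywhere and limsup f_n = 1,
% while every integral equals 1/2, so the chain reads 0 < 1/2 = 1/2 < 1:
\[
\int_0^1 \liminf_{n\to\infty} f_n \, d\mu = 0
\;<\; \liminf_{n\to\infty} \int_0^1 f_n \, d\mu = \tfrac12
\;=\; \limsup_{n\to\infty} \int_0^1 f_n \, d\mu
\;<\; \int_0^1 \limsup_{n\to\infty} f_n \, d\mu = 1 .
\]
```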
https://en.wikipedia.org/wiki/Nearly%20free%20electron%20model
In solid-state physics, the nearly free electron model (or NFE model and quasi-free electron model) is a quantum mechanical model of physical properties of electrons that can move almost freely through the crystal lattice of a solid. The model is closely related to the more conceptual empty lattice approximation. The model enables understanding and calculation of the electronic band structures, especially of metals. This model is an immediate improvement of the free electron model, in which the metal was considered as a non-interacting electron gas and the ions were neglected completely. Mathematical formulation The nearly free electron model is a modification of the free-electron gas model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. This model, like the free-electron model, does not take into account electron–electron interactions; that is, the independent electron approximation is still in effect. As shown by Bloch's theorem, introducing a periodic potential into the Schrödinger equation results in a wave function of the form ψ_k(r) = e^(ik·r) u_k(r), where the function u_k(r) has the same periodicity as the lattice: u_k(r) = u_k(r + T) (where T is a lattice translation vector). Because it is a nearly free electron approximation we can assume that u_k(r) ≈ 1/√Ω_r, where Ω_r denotes the volume of states of fixed radius r (as described in Gibbs paradox). A solution of this form can be plugged into the Schrödinger equation, resulting in the central equation: (λ_k − ε) C_k + Σ_G U_G C_(k−G) = 0, where the kinetic energy λ_k is given by λ_k ψ_k(r) = −(ħ²/2m) ∇² ψ_k(r) = −(ħ²/2m) ∇² (e^(ik·r) u_k(r)), which, after dividing by ψ_k(r), reduces to λ_k = ħ²k²/2m if we assume that u_k(r) is almost constant and ∇² u_k(r) ≪ k² u_k(r). The reciprocal parameters C_k and U_G are the Fourier coefficients of the wave function ψ(r) and the screened potential energy U(r), respectively: ψ(r) = Σ_k C_k e^(ik·r), U(r) = Σ_G U_G e^(iG·r). The vectors G are the reciprocal lattice vectors, and the discrete values of k are determined by the boundary conditions of the lattice under consideration. In any perturbation analysis, one must consider the base case to which the perturbation is applied.
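Near a Bragg plane the central equation, truncated to the two dominant coefficients C_k and C_(k−G), yields the familiar two-band result E±(k) = (λ_k + λ_(k−G))/2 ± sqrt(((λ_k − λ_(k−G))/2)² + |U_G|²), which opens a gap of 2|U_G| at the zone boundary. A 1D numerical sketch of this standard reduction (units with ħ²/2m = 1; the value of U_G is illustrative, not from the text):

```python
import math

# 1D nearly free electron model, two-plane-wave truncation (hbar^2/2m = 1).
G = 2 * math.pi      # reciprocal lattice vector for lattice constant a = 1
U = 0.5              # magnitude of the Fourier component U_G (illustrative)

def bands(k):
    """Lower/upper band energies from the 2x2 central-equation determinant."""
    lam1, lam2 = k ** 2, (k - G) ** 2       # free-electron kinetic energies
    mean, half = (lam1 + lam2) / 2, (lam1 - lam2) / 2
    split = math.sqrt(half ** 2 + U ** 2)
    return mean - split, mean + split

kb = G / 2           # the zone boundary, where the two plane waves degenerate
lo, hi = bands(kb)
print(f"gap at zone boundary = {hi - lo:.3f} (expected 2|U_G| = {2 * U})")
```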
https://en.wikipedia.org/wiki/Internet%20Mail%20Consortium
The Internet Mail Consortium (IMC) was an organization between 1996 and 2002 that claimed to be the only international organization focused on cooperatively managing and promoting the rapidly expanding world of electronic mail on the Internet. Purpose The goals of the IMC included greatly expanding the role of mail on the Internet into areas such as commerce and entertainment, advancing new Internet mail technologies, and making it easier for all Internet users, particularly novices, to get the most out of the growing communications medium. It did this by providing information about all the Internet mail standards and technologies. They also prepared reports that supplemented the Internet Engineering Task Force's RFCs. Headquartered in Santa Cruz, California, the IMC was founded by Paul E. Hoffman about 1996 and ceased activity in 2002. See also Versit Consortium
https://en.wikipedia.org/wiki/Start%20codon
The start codon is the first codon of a messenger RNA (mRNA) transcript translated by a ribosome. The start codon always codes for methionine in eukaryotes and archaea and N-formylmethionine (fMet) in bacteria, mitochondria and plastids. The start codon is often preceded by a 5' untranslated region (5' UTR). In prokaryotes this includes the ribosome binding site. Alternative start codons Alternative start codons are different from the standard AUG codon and are found in both prokaryotes (bacteria and archaea) and eukaryotes. Alternate start codons are still translated as Met when they are at the start of a protein (even if the codon encodes a different amino acid otherwise). This is because a separate transfer RNA (tRNA) is used for initiation. Eukaryotes Alternate start codons (non-AUG) are very rare in eukaryotic genomes. However, naturally occurring non-AUG start codons have been reported for some cellular mRNAs. Seven out of the nine possible single-nucleotide substitutions at the AUG start codon of dihydrofolate reductase are functional as translation start sites in mammalian cells. In addition to the canonical Met-tRNA Met and AUG codon pathway, mammalian cells can initiate translation with leucine using a specific leucyl-tRNA that decodes the codon CUG. Candida albicans uses a CAG start codon. Prokaryotes Prokaryotes use alternate start codons significantly, mainly GUG and UUG. E. coli uses 83% AUG (3542/4284), 14% (612) GUG, 3% (103) UUG and one or two others (e.g., an AUU and possibly a CUG). Well-known coding regions that do not have AUG initiation codons are those of lacI (GUG) and lacA (UUG) in the E. coli lac operon. Two more recent studies have independently shown that 17 or more non-AUG start codons may initiate translation in E. coli. Mitochondria Mitochondrial genomes use alternate start codons more significantly (AUA and AUG in humans). Many such examples, with codons, systematic range, and citations, are given in the NCBI list of translation tables.
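As a toy illustration of scanning a transcript for canonical and alternative initiation sites (a sketch; real start-codon selection also depends on context such as the ribosome binding site or Kozak sequence, which this ignores):

```python
START_CODONS = {"AUG", "GUG", "UUG"}  # canonical plus the main bacterial alternatives

def candidate_starts(mrna):
    """Yield (position, codon) for every potential start codon, in any frame."""
    mrna = mrna.upper().replace("T", "U")   # accept DNA-style input too
    for i in range(len(mrna) - 2):
        codon = mrna[i:i + 3]
        if codon in START_CODONS:
            yield i, codon

transcript = "ccggAUGgcaGUGuaa"
for pos, codon in candidate_starts(transcript):
    print(f"{codon} at position {pos}")
# AUG at position 4, GUG at position 10 -- either, if actually used for
# initiation, would be read as (formyl)methionine via the initiator tRNA.
```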
https://en.wikipedia.org/wiki/Tributyltin%20oxide
Tributyltin oxide (TBTO) is an organotin compound chiefly used as a biocide (fungicide and molluscicide), especially a wood preservative. Its chemical formula is [(C4H9)3Sn]2O. It is a colorless viscous liquid. It is poorly soluble in water (20 ppm) but highly soluble in organic solvents. It is a potent skin irritant. Historically, tributyltin oxide's biggest application was as a marine anti-biofouling agent. Concerns over toxicity of these compounds have led to a worldwide ban by the International Maritime Organization. It is now considered a severe marine pollutant and a Substance of Very High Concern by the EU. Today, it is mainly used in wood preservation
https://en.wikipedia.org/wiki/Niobium%E2%80%93tin
Niobium–tin is an intermetallic compound of niobium (Nb) and tin (Sn), used industrially as a type-II superconductor. This intermetallic compound has a simple structure: A3B. It is more expensive than niobium–titanium (NbTi), but remains superconducting up to a magnetic flux density of about 30 T, compared to a limit of roughly 15 T for NbTi. Nb3Sn was discovered to be a superconductor in 1954. The material's ability to support high currents and magnetic fields was discovered in 1961 and started the era of large-scale applications of superconductivity. The critical temperature is 18.3 K. Application temperatures are commonly around 4.2 K, the boiling point of liquid helium at atmospheric pressure. In April 2008 a record non-copper current density of 2,643 A/mm² at 12 T and 4.2 K was claimed. History Nb3Sn was discovered to be a superconductor in 1954, one year after the discovery of V3Si, the first example of an A3B superconductor. In 1961 it was discovered that niobium–tin still exhibits superconductivity at large currents and strong magnetic fields, thus becoming the first known material to support the high currents and fields necessary for making useful high-power magnets and electric power machinery. Notable uses The central solenoid and toroidal field superconducting magnets for the planned experimental ITER fusion reactor use niobium–tin as a superconductor. The central solenoid coil will produce a field of 13.5 T. The toroidal field coils will operate at a maximum field of 11.8 T. Estimated use is roughly 600 metric tonnes of Nb3Sn strands and 250 metric tonnes of NbTi strands. At the Large Hadron Collider at CERN, extra-strong quadrupole magnets (for focussing beams) made with niobium–tin are being installed in key points of the accelerator between late 2018 and early 2020. Niobium tin had been proposed in 1986 as an alternative to niobium–titanium, since it allowed coolants less complex than superfluid helium, but this was not pursued in order to avoid delays while competing with the then-planned US
https://en.wikipedia.org/wiki/Linear%20epitope
In immunology, a linear epitope (also sequential epitope) is an epitope—a binding site on an antigen—that is recognized by antibodies by its linear sequence of amino acids (i.e. primary structure). In contrast, most antibodies recognize a conformational epitope that has a specific three-dimensional shape (tertiary structure). An antigen is any substance that the immune system can recognize as being foreign and which provokes an immune response. Since antigens are usually proteins that are too large to bind as a whole to any receptor, only specific segments that form the antigen bind with a specific antibody. Such segments are called epitopes. Likewise, it is only the paratope of the antibody that comes in contact with the epitope. Proteins are composed of repeating nitrogen-containing subunits called amino acids. The linear sequence of amino acids that compose a protein is called its primary structure, which typically does not present as a simple straight line of sequential amino acids in the folded protein (much like a knot, rather than a straight string). But, when an antigen is broken down in a lysosome, it yields small peptides, which can be recognized through the amino acids that lie continuously in a line, and hence are called linear epitopes. Significance While performing molecular assays involving use of antibodies such as in the Western blot, immunohistochemistry, and ELISA, one should carefully choose antibodies that recognize linear or conformational epitopes. For instance, if a protein sample is boiled, treated with beta-mercaptoethanol, and run in SDS-PAGE for the Western blot, the proteins are essentially denatured and therefore cannot assume their natural three-dimensional conformations. Therefore, antibodies that recognize linear epitopes instead of conformational epitopes are chosen for immunodetection. In contrast, in immunohistochemistry where protein structure is preserved, antibodies that recognize conformational epitopes are preferred. See also Conformational epitope Pol
https://en.wikipedia.org/wiki/School%20Mathematics%20Project
The School Mathematics Project arose in the United Kingdom as part of the new mathematics educational movement of the 1960s. It is a developer of mathematics textbooks for secondary schools, formerly based in Southampton in the UK. Now generally known as SMP, it began as a research project inspired by a 1961 conference chaired by Bryan Thwaites at the University of Southampton, which itself was precipitated by calls to reform mathematics teaching in the wake of the Sputnik launch by the Soviet Union, the same circumstances that prompted the wider New Math movement. It maintained close ties with the former Collaborative Group for Research in Mathematics Education at the university. Instead of dwelling on 'traditional' areas such as arithmetic and geometry, SMP dwelt on subjects such as set theory, graph theory and logic, non-cartesian co-ordinate systems, matrix mathematics, affine transforms, Euclidean vectors, and non-decimal number systems. Course books SMP, Book 1 This was published in 1965. It was aimed at entry level pupils at secondary school, and was the first book in a series of 4 preparing pupils for Elementary Mathematics Examination at 'O' level. SMP, Book 3 The computer paper tape motif on early educational material reads "THE SCHOOL MATHEMATICS PROJECT DIRECTED BY BRYAN THWAITES". (In the original, the motif is reproduced as two rows of punched-tape hole patterns encoding that sentence.) The code for this tape is introduced in Book 3 as part of the notional computer system now described. Simpol programming language The Simpol language was devised by The School Mathematics Project in the 1960s
https://en.wikipedia.org/wiki/Rhus%20typhina
Rhus typhina, the staghorn sumac, is a species of flowering plant in the family Anacardiaceae, native to eastern North America. It is primarily found in southeastern Canada, the northeastern and midwestern United States, and the Appalachian Mountains, but it is widely cultivated as an ornamental throughout the temperate world. It is an invasive species in some parts of the world. Etymology The Latin specific epithet typhina is explained in Carl Linnaeus and Ericus Torner's description of the plant with the phrase "Ramis hirtis uti typhi cervini", meaning "the branches are rough like antlers in velvet". Description Rhus typhina is a dioecious, deciduous shrub or small tree growing up to 5 m (16 ft) tall by 6 m (20 ft) broad. It has alternate, pinnately compound leaves 25–55 cm (10–22 in) long, each with 9–31 serrate leaflets 6–11 cm (2.4–4.3 in) long. Leaf petioles and stems are densely covered in rust-colored hairs. The velvety texture and the forking pattern of the branches, reminiscent of antlers, have led to the common name "stag's horn sumac". Staghorn sumac grows as female or male clones. Small, greenish-white through yellowish flowers occur in dense terminal panicles, and small, green through reddish drupes occur in dense infructescences. Flowers occur from May through July and fruit ripens from June through September in this species' native range. Infructescences are 10–20 cm (4–8 in) long and 4–6 cm (1.6–2.4 in) broad at their bases. Fall foliage is brilliant shades of red, orange and yellow. Fruit can remain on plants from late summer through spring. It is eaten by many birds in winter. Staghorn sumac spreads by seeds and rhizomes and forms clones often with the older shoots in the center and younger shoots around central older ones. Large clones can grow from ortets in several years. Within Anacardiaceae, staghorn sumac is not closely related to poison sumac (Toxicodendron vernix), even though they share the name "sumac". In late summer some shoots have galls on leaf undersides, caused by the sumac leaf gall aphid, Melaphis rhois. The galls are not harmful to the tree.
https://en.wikipedia.org/wiki/Neuron%20doctrine
The neuron doctrine is the concept that the nervous system is made up of discrete individual cells, a discovery due to the decisive neuro-anatomical work of Santiago Ramón y Cajal and later presented by, among others, H. Waldeyer-Hartz. The term neuron (spelled neurone in British English) was itself coined by Waldeyer as a way of identifying the cells in question. The neuron doctrine, as it became known, served to position neurons as special cases under the broader cell theory developed some decades earlier. Waldeyer appropriated the concept not from his own research but from the disparate observations in the histological work of Albert von Kölliker, Camillo Golgi, Franz Nissl, Santiago Ramón y Cajal, Auguste Forel and others. Historical context Theodor Schwann proposed in 1839 that the tissues of all organisms are composed of cells. Schwann was expanding on the proposal of his good friend Matthias Jakob Schleiden the previous year that all plant tissues were composed of cells. The nervous system stood as an exception. Although nerve cells had been described in tissue by numerous investigators including Jan Purkinje, Gabriel Valentin, and Robert Remak, the relationship between the nerve cells and other features such as dendrites and axons was not clear. The connections between the large cell bodies and smaller features could not be observed, and it was possible that neurofibrils would stand as an exception to cell theory as non-cellular components of living tissue. Technical limitations of microscopy and tissue preparation were largely responsible. Chromatic aberration, spherical aberration and the dependence on natural light all played a role in limiting microscope performance in the early 19th century. Tissue was typically lightly mashed in water and pressed between a glass slide and cover slip. There was also a limited number of dyes and fixatives available prior to the middle of the 19th century. A landmark development came from Camillo Golgi, who invented a silver staining method in 1873 (the "black reaction").
https://en.wikipedia.org/wiki/Hostility
Hostility is seen as a form of emotionally charged aggressive behavior. In everyday speech, it is more commonly used as a synonym for anger and aggression. It appears in several psychological theories. For instance, it is a facet of neuroticism in the NEO PI, and forms part of personal construct psychology, developed by George Kelly. Hostility/hospitality For hunter-gatherers, every stranger from outside the small tribal group was a potential source of hostility. Similarly, in archaic Greece, every community was in a state of hostility, latent or overt, with every other community - something only gradually tempered by the rights and duties of hospitality. Tensions between the two poles of hostility and hospitality remain a potent force in the 21st-century world. Us/them Robert Sapolsky argues that the tendency to form in-groups and out-groups of Us and Them, and to direct hostility at the latter, is inherent in humans. He also explores the possibility raised by Samuel Bowles that intra-group hostility is reduced when greater hostility is directed at Thems, something exploited by insecure leaders when they mobilise external conflicts so as to reduce in-group hostility towards themselves. Non-verbal indicators Automatic mental functioning suggests that among universal human indicators of hostility are the grinding or gnashing of teeth, the clenching and shaking of fists, and grimacing. Desmond Morris would add stamping and thumping. The Haka represents a ritualised set of such non-verbal signs of hostility. Kelly's model In psychological terms, George Kelly considered hostility as the attempt to extort validating evidence from the environment to confirm types of social prediction, constructs, that have failed. Instead of reconstructing their constructs to meet disconfirmations with better predictions, the hostile person attempts to force or coerce the world to fit their view, even if this is a forlorn hope, and even if it entails emotional expenditure and/or ha
https://en.wikipedia.org/wiki/Stolp%20radio%20transmitter
Stolp radio transmitter was a broadcasting station close to Rathsdamnitz, Germany (since 1945 Dębnica Kaszubska, Poland) southeast of Stolp, Germany (since 1945 Słupsk, Poland). The facility, which went into service on December 1, 1938, was designed to explore whether a reduction of fading effects could be achieved via an extended surface antenna, as well as an increase in directionality by changing the phase position of individual radiators. A group of 10 antennas arranged on a circle with 1 km diameter around a central antenna was planned. A model test consisting of a 50 m high, free-standing wooden tower supporting a vertical wire which worked as the antenna was completed. Until July 1939 six further towers of the same type were built on a circle with 150 m diameter around the central antenna tower. All these towers were the tallest wooden lattice towers with a triangular cross-section ever built in Germany. The antennas were fed through an underground cable, which ran 180 m from the transmitter building to the central tower, where a distributor for the transmission power was installed. From this distributor, overhead single-wire lines mounted on 4 m high wooden poles ran to the antenna towers on the circle, feeding their antennas with the transmission power. In 1940 south of the transmission building a 50 m tall guyed mast radiator, which was manufactured by Jucho, was erected. The facility survived World War II and was used shortly afterwards to broadcast the program of the Russian military broadcaster "Radio Volga". However, in 1955 the facility was completely demolished after the removal of all technical equipment.
https://en.wikipedia.org/wiki/Eagle%20Computer
Eagle Computer, Inc., was an early American computer company based in Los Gatos, California. Spun off from Audio-Visual Laboratories (AVL), it first sold a line of popular CP/M computers which were highly praised in the computer magazines of the day. After the IBM PC was launched, Eagle produced the Eagle 1600 series, which ran MS-DOS but were not true clones. When it became evident that the buying public wanted actual clones of the IBM PC, even if a non-clone had better features, Eagle responded with a line of clones, including a portable. The Eagle PCs were always rated highly in computer magazines. CP/M models Multi-image models The AVL Eagle I and II had audio-visual connectors on the back. As a separate company, Eagle sold the Eagle I, II, III, IV, and V computer models and external SCSI/SASI hard-disk boxes called the File 10 and the File 40. The first Eagle computers were produced by Audio Visual Labs (AVL), a company founded by Gary Kappenman in New Jersey in the early 1970s to produce proprietary large-format multi-image equipment. Kappenman introduced the world's first microprocessor-controlled multi-image programming computers, the ShowPro III and V, which were dedicated controllers. In 1980, AVL introduced the first non-dedicated controller, the Eagle. This first Eagle computer used a 16 kHz processor and had a 5-inch disk drive for online storage. The Eagle ran PROCALL (PROgrammable Computer Audio-visual Language Library) software for writing cues to control up to 30 Ektagraphic projectors, five 16 mm film projectors and 20 auxiliary control points. Digital control data was sourced via an RCA or XLR-type audio connector at the rear of the unit. AVL's proprietary "ClockTrak" (a biphase digital timecode similar to, but incompatible with SMPTE timecode) was sourced from the control channel of a multitrack analog audio tape deck. The timed list of events in the Eagle was synchronized to the ClockTrak. Later versions of PROCALL included the option of
https://en.wikipedia.org/wiki/Adiabatic%20flame%20temperature
In the study of combustion, the adiabatic flame temperature is the temperature reached by a flame under ideal conditions. It is an upper bound of the temperature that is reached in actual processes. There are two types of adiabatic flame temperature: constant volume and constant pressure, depending on how the process is completed. The constant volume adiabatic flame temperature is the temperature that results from a complete combustion process that occurs without any work, heat transfer or changes in kinetic or potential energy. Its temperature is higher than in the constant pressure process because no energy is utilized to change the volume of the system (i.e., generate work). Common flames In daily life, the vast majority of flames one encounters are those caused by rapid oxidation of hydrocarbons in materials such as wood, wax, fat, plastics, propane, and gasoline. The constant-pressure adiabatic flame temperature of such substances in air is in a relatively narrow range around 1950 °C. This is mostly because the heat of combustion of these compounds is roughly proportional to the amount of oxygen consumed, which proportionally increases the amount of air that has to be heated, so the effect of a larger heat of combustion on the flame temperature is offset. Incomplete reaction at higher temperature further curtails the effect of a larger heat of combustion. Because most combustion processes that happen naturally occur in the open air, there is nothing that confines the gas to a particular volume like the cylinder in an engine. As a result, these substances will burn at a constant pressure, which allows the gas to expand during the process. Common flame temperatures Assuming initial atmospheric conditions (1 bar and 20 °C), the following table lists the flame temperature for various fuels under constant pressure conditions. The temperatures mentioned here are for a stoichiometric fuel-oxidizer mixture (i.e. equivalence ratio φ = 1). Note that these are theoretical values.
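A worked statement of the underlying energy balance may help (standard first-law form, not quoted from this excerpt). For the constant-pressure case, the adiabatic flame temperature $T_{ad}$ is defined implicitly by equating reactant and product enthalpies:

$$\sum_{\text{react}} n_j \,\bar{h}_j(T_0, p) \;=\; \sum_{\text{prod}} n_i \,\bar{h}_i(T_{ad}, p)$$

For the constant-volume case the internal energies are balanced instead, $\sum_{\text{react}} n_j \,\bar{u}_j(T_0) = \sum_{\text{prod}} n_i \,\bar{u}_i(T_{ad})$; since no enthalpy is spent on expansion work, the resulting temperature is higher, as noted above.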
https://en.wikipedia.org/wiki/Runge%27s%20theorem
In complex analysis, Runge's theorem (also known as Runge's approximation theorem) is named after the German mathematician Carl Runge who first proved it in the year 1885. It states the following: Denoting by C the set of complex numbers, let K be a compact subset of C and let f be a function which is holomorphic on an open set containing K. If A is a set containing at least one complex number from every bounded connected component of C\K then there exists a sequence $(r_n)$ of rational functions which converges uniformly to f on K and such that all the poles of the functions $r_n$ are in A. Note that not every complex number in A needs to be a pole of every rational function of the sequence $(r_n)$. We merely know that for all members of $(r_n)$ that do have poles, those poles lie in A. One aspect that makes this theorem so powerful is that one can choose the set A arbitrarily. In other words, one can choose any complex numbers from the bounded connected components of C\K and the theorem guarantees the existence of a sequence of rational functions with poles only amongst those chosen numbers. For the special case in which C\K is a connected set (in particular when K is simply-connected), the set A in the theorem will clearly be empty. Since rational functions with no poles are simply polynomials, we get the following corollary: If K is a compact subset of C such that C\K is a connected set, and f is a holomorphic function on an open set containing K, then there exists a sequence of polynomials that approaches f uniformly on K (the assumptions can be relaxed, see Mergelyan's theorem). Runge's theorem generalises as follows: one can take A to be a subset of the Riemann sphere C∪{∞} and require that A intersect also the unbounded connected component of C\K (which now contains ∞). That is, in the formulation given above, the rational functions may turn out to have a pole at infinity, while in the more general formulation the pole can be chosen instead anywhere in the unbounded connected component of C\K.
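In symbols (a standard restatement, not verbatim from the article): given a compact $K \subset \mathbb{C}$, a function $f$ holomorphic on an open set containing $K$, and a set $A$ meeting every bounded connected component of $\mathbb{C}\setminus K$, for every $\varepsilon > 0$ there is a rational function $r$ with all poles in $A$ such that

$$\max_{z \in K} \, |f(z) - r(z)| \;<\; \varepsilon .$$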
https://en.wikipedia.org/wiki/Co-receptor
A co-receptor is a cell surface receptor that binds a signalling molecule in addition to a primary receptor in order to facilitate ligand recognition and initiate biological processes, such as entry of a pathogen into a host cell. Properties The term co-receptor is prominent in literature regarding signal transduction, the process by which external stimuli regulate internal cellular functioning. The key to optimal cellular functioning is maintained by possessing specific machinery that can carry out tasks efficiently and effectively. Specifically, the process through which intermolecular reactions forward and amplify extracellular signals across the cell surface has developed to occur by two mechanisms. First, cell surface receptors can directly transduce signals by possessing both serine and threonine or simply serine in the cytoplasmic domain. They can also transmit signals through adaptor molecules through their cytoplasmic domain which bind to signalling motifs. Secondly, certain surface receptors lacking a cytoplasmic domain can transduce signals through ligand binding. Once the surface receptor binds the ligand it forms a complex with a corresponding surface receptor to regulate signalling. These categories of cell surface receptors are prominently referred to as co-receptors. Co-receptors are also referred to as accessory receptors, especially in the fields of biomedical research and immunology. Co-receptors are proteins that maintain a three-dimensional structure. The large extracellular domains make up approximately 76–100% of the receptor. The motifs that make up the large extracellular domains participate in ligand binding and complex formation. The motifs can include glycosaminoglycans, EGF repeats, cysteine residues or ZP-1 domains. The variety of motifs leads to co-receptors being able to interact with two to nine different ligands, which themselves can also interact with a number of different co-receptors. Most co-receptors lack a cytoplasmic domain
https://en.wikipedia.org/wiki/Karloff%E2%80%93Zwick%20algorithm
The Karloff–Zwick algorithm, in computational complexity theory, is a randomised approximation algorithm taking an instance of the MAX-3SAT Boolean satisfiability problem as input. If the instance is satisfiable, then the expected weight of the assignment found is at least 7/8 of optimal. There is strong evidence (but not a mathematical proof) that the algorithm achieves 7/8 of optimal even on unsatisfiable MAX-3SAT instances. Howard Karloff and Uri Zwick presented the algorithm in 1997. The algorithm is based on semidefinite programming. It can be derandomized to yield a deterministic polynomial-time algorithm with the same approximation guarantees. Comparison to random assignment For the related MAX-E3SAT problem, in which all clauses in the input 3SAT formula are guaranteed to have exactly three literals, the simple randomized approximation algorithm which assigns a truth value to each variable independently and uniformly at random satisfies 7/8 of all clauses in expectation, irrespective of whether the original formula is satisfiable. Further, this simple algorithm can also be easily derandomized using the method of conditional expectations. The Karloff–Zwick algorithm, however, does not require the restriction that the input formula should have three literals in every clause. Optimality Building upon previous work on the PCP theorem, Johan Håstad showed that, assuming P ≠ NP, no polynomial-time algorithm for MAX 3SAT can achieve a performance ratio exceeding 7/8, even when restricted to satisfiable instances of the problem in which each clause contains exactly three literals. Both the Karloff–Zwick algorithm and the above simple algorithm are therefore optimal in this sense.
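The simple randomized algorithm mentioned above is easy to make concrete (the three-clause instance below is invented for illustration): each variable is set true with probability 1/2, so a clause with exactly three literals is satisfied with probability 1 - (1/2)^3 = 7/8 in expectation.

import random

# Sketch of the simple randomized 7/8-approximation for MAX-E3SAT.
# A clause is a tuple of nonzero ints: +v means variable v, -v its negation.
def random_assignment_score(clauses, n_vars):
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    satisfied = sum(
        any(assign[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return assign, satisfied

# Hypothetical 3-variable instance; each clause has exactly three literals.
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
print(random_assignment_score(clauses, 3))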
https://en.wikipedia.org/wiki/Blind%20deconvolution
In electrical engineering and applied mathematics, blind deconvolution is deconvolution without explicit knowledge of the impulse response function used in the convolution. This is usually achieved by making appropriate assumptions of the input to estimate the impulse response by analyzing the output. Blind deconvolution is not solvable without making assumptions on input and impulse response. Most of the algorithms to solve this problem are based on the assumption that both input and impulse response live in respective known subspaces. However, blind deconvolution remains a very challenging non-convex optimization problem even with this assumption. In image processing In image processing, blind deconvolution is a deconvolution technique that permits recovery of the target scene from a single or set of "blurred" images in the presence of a poorly determined or unknown point spread function (PSF). Regular linear and non-linear deconvolution techniques utilize a known PSF. For blind deconvolution, the PSF is estimated from the image or image set, allowing the deconvolution to be performed. Researchers have been studying blind deconvolution methods for several decades, and have approached the problem from different directions. Most of the work on blind deconvolution started in the early 1970s. Blind deconvolution is used in astronomical imaging and medical imaging. Blind deconvolution can be performed iteratively, whereby each iteration improves the estimation of the PSF and the scene, or non-iteratively, where one application of the algorithm, based on exterior information, extracts the PSF. Iterative methods include maximum a posteriori estimation and expectation-maximization algorithms. A good estimate of the PSF is helpful for quicker convergence but not necessary. Examples of non-iterative techniques include SeDDaRA, the cepstrum transform and APEX. The cepstrum transform and APEX methods assume that the PSF has a specific shape, and one must estimate the width of the PSF.
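As a toy illustration of the iterative idea (a minimal sketch, not one of the named algorithms; the regularization constant and flat initial PSF are arbitrary choices, and without priors the decomposition is not unique), one can alternate least-squares updates of the scene and the PSF in the Fourier domain:

import numpy as np

# Toy alternating blind deconvolution in 1D: y = x * h (convolution).
# This only illustrates the alternate-and-refine structure shared by
# iterative methods; real algorithms add priors/constraints.
def blind_deconv(y, n_iter=100, eps=1e-3):
    Y = np.fft.rfft(y)
    H = np.ones_like(Y)                               # flat initial PSF estimate
    for _ in range(n_iter):
        X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # update scene given PSF
        H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)   # update PSF given scene
    return np.fft.irfft(X, len(y)), np.fft.irfft(H, len(y))

y = np.convolve([0, 0, 1, 0, 0, 0, 2, 0], [0.25, 0.5, 0.25], mode="same")
x_est, h_est = blind_deconv(y)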
https://en.wikipedia.org/wiki/Foramen%20spinosum
The foramen spinosum is a small opening in the greater wing of the sphenoid bone that gives passage to the middle meningeal artery and vein, and the meningeal branch of the mandibular nerve (sometimes it passes through the foramen ovale instead). The foramen spinosum is often used as a landmark in neurosurgery due to its close relations with other cranial foramina. It was first described by Jakob Benignus Winslow in the 18th century. Structure The foramen spinosum is a small foramen in the greater wing of the sphenoid bone of the skull. It connects the middle cranial fossa (superiorly), and infratemporal fossa (inferiorly). Contents The foramen transmits the middle meningeal artery and vein, and sometimes the meningeal branch of the mandibular nerve (it may pass through the foramen ovale instead). Relations The foramen is situated just anterior to the sphenopetrosal suture. It is located posterolateral to the foramen ovale, and anterior to the sphenoidal spine. A groove for the middle meningeal artery and vein extends anterolaterally from the foramen. Variation The foramen spinosum varies in size and location. The foramen is rarely absent, usually unilaterally, in which case the middle meningeal artery enters the cranial cavity through the foramen ovale. It may be incomplete, which may occur in almost half of the population. Conversely, in a minority of cases (less than 1%), it may also be duplicated, particularly when the middle meningeal artery is also duplicated. The foramen may pass through the sphenoid bone at the apex of the spinous process, or along its medial surface. Development In the newborn, the foramen spinosum is about 2.25 mm long and in adults about 2.56 mm. The width of the foramen varies from 1.05 mm to about 2.1 mm in adults. The average diameter of the foramen spinosum is 2.63 mm in adults. It is usually between 3 and 4 mm away from the foramen ovale in adults. The earliest perfect ring-shaped formation of the foramen spinosum w
https://en.wikipedia.org/wiki/Maxillary%20nerve
In neuroanatomy, the maxillary nerve (V2) is one of the three branches or divisions of the trigeminal nerve, the fifth (CN V) cranial nerve. It comprises the principal functions of sensation from the maxilla, nasal cavity, sinuses, the palate and subsequently that of the mid-face, and is intermediate, both in position and size, between the ophthalmic nerve and the mandibular nerve. Structure It begins at the middle of the trigeminal ganglion as a flattened plexiform band, then passes through the lateral wall of the cavernous sinus. It leaves the skull through the foramen rotundum, where it becomes more cylindrical in form, and firmer in texture. After leaving the foramen rotundum it gives two branches to the pterygopalatine ganglion. It then crosses the pterygopalatine fossa, inclines lateralward on the back of the maxilla, and enters the orbit through the inferior orbital fissure. It then runs forward on the floor of the orbit, at first in the infraorbital groove and then in the infraorbital canal remaining outside the periosteum of the orbit. It then emerges on the face through the infraorbital foramen and terminates by dividing into inferior palpebral, lateral nasal and superior labial branches. The nerve is accompanied by the infraorbital branch of (the third part of) the maxillary artery and the accompanying vein. Branches Its branches may be divided into four groups, depending upon where they branch off: in the cranium, in the pterygopalatine fossa, in the infraorbital canal, or on the face. In the cranium Middle meningeal nerve in the meninges From the pterygopalatine fossa Zygomatic nerve (zygomaticotemporal nerve, zygomaticofacial nerve), through the Inferior orbital fissure Nasopalatine nerve, through the sphenopalatine foramen Posterior superior alveolar nerve Greater and lesser palatine nerves Pharyngeal nerve In the infraorbital canal Middle superior alveolar nerve Anterior superior alveolar nerve Infraorbital nerve On the face Inferior palpebral nerve Lateral nasal nerve Superior labial nerve
https://en.wikipedia.org/wiki/Ophthalmic%20nerve
The ophthalmic nerve (CN V1) is a sensory nerve of the head. It is one of three divisions of the trigeminal nerve (CN V), a cranial nerve. It has three major branches which provide sensory innervation to the eye, and the skin of the upper face and anterior scalp, as well as other structures of the head. Structure It measures about 2.5 cm in length. Origin The ophthalmic nerve is the first branch of the trigeminal nerve (CN V), the first and smallest of its three divisions. It arises from the superior part of the trigeminal ganglion. Course It passes anteriorly along the lateral wall of the cavernous sinus inferior to the oculomotor nerve (CN III) and trochlear nerve (CN IV). It divides into its three main branches as it approaches the superior orbital fissure. Branches Within the skull, the ophthalmic nerve produces: meningeal branch (tentorial nerve) The ophthalmic nerve divides into three major branches which pass through the superior orbital fissure: frontal nerve supraorbital nerve supratrochlear nerve lacrimal nerve nasociliary nerve posterior ethmoidal nerve anterior ethmoidal nerve external nasal nerve long ciliary nerves infratrochlear nerve communicating branch to ciliary ganglion Distribution The ophthalmic nerve provides sensory innervation to the cornea, ciliary body, and iris; to the lacrimal gland and conjunctiva; to part of the mucous membrane of the nasal cavity; and to the skin of the eyelids, eyebrow, forehead and nose. It carries sensory branches from the eyes, conjunctiva, lacrimal gland, nasal cavity, frontal sinus, ethmoidal cells, falx cerebri, dura mater in the anterior cranial fossa, superior parts of the tentorium cerebelli, upper eyelid, dorsum of the nose, and anterior part of the scalp. Roughly speaking, the ophthalmic nerve supplies general somatic afferents to the upper face, head, and eye: Face: Upper eyelid and associated conjunctiva. Eyebrow, forehead, scalp all the way to the lambdoid suture. Skull: Roof o
https://en.wikipedia.org/wiki/Rpath
In computing, rpath designates the run-time search path hard-coded in an executable file or library. Dynamic linking loaders use the rpath to find required libraries. Specifically, it encodes a path to shared libraries into the header of an executable (or another shared library). This RPATH header value (so named in the Executable and Linkable Format header standards) may either override or supplement the system default dynamic linking search paths. The rpath of an executable or shared library is an optional entry in the .dynamic section of the ELF executable or shared libraries, with the type DT_RPATH, called the DT_RPATH attribute. It can be stored there at link time by the linker. Tools such as chrpath and patchelf can create or modify the entry later. Use of the DT_RPATH entry by the dynamic linker The different dynamic linkers for ELF implement the use of the DT_RPATH attribute in different ways. GNU ld.so The dynamic linker of the GNU C Library searches for shared libraries in the following locations in order: The (colon-separated) paths in the DT_RPATH dynamic section attribute of the binary if present and the DT_RUNPATH attribute does not exist. The (colon-separated) paths in the environment variable LD_LIBRARY_PATH, unless the executable is a setuid/setgid binary, in which case it is ignored. LD_LIBRARY_PATH can be overridden by calling the dynamic linker with the option --library-path (e.g. /lib/ld-linux.so.2 --library-path $HOME/mylibs myprogram). The (colon-separated) paths in the DT_RUNPATH dynamic section attribute of the binary if present. Lookup based on the ldconfig cache file (often located at /etc/ld.so.cache) which contains a compiled list of candidate libraries previously found in the augmented library path (set by /etc/ld.so.conf). If, however, the binary was linked with the -z nodefaultlib linker option, libraries in the default library paths are skipped. In the trusted default path /lib, and then /usr/lib. If the binary was linked with the -z nodefaultlib linker option, this step is skipped.
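A quick way to inspect these entries is to read the .dynamic section directly; the sketch below assumes the third-party pyelftools package, and the binary path is a placeholder:

from elftools.elf.elffile import ELFFile

# Print DT_RPATH / DT_RUNPATH entries from an ELF binary's .dynamic section.
def print_rpaths(path):
    with open(path, "rb") as f:
        elf = ELFFile(f)
        dyn = elf.get_section_by_name(".dynamic")
        if dyn is None:
            return                      # statically linked or no dynamic info
        for tag in dyn.iter_tags():
            if tag.entry.d_tag == "DT_RPATH":
                print("RPATH:", tag.rpath)
            elif tag.entry.d_tag == "DT_RUNPATH":
                print("RUNPATH:", tag.runpath)

print_rpaths("/usr/bin/python3")        # placeholder path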
https://en.wikipedia.org/wiki/Nasociliary%20nerve
The nasociliary nerve is a branch of the ophthalmic nerve (CN V1) (which is in turn a branch of the trigeminal nerve (CN V)). It is intermediate in size between the other two branches of the ophthalmic nerve, the frontal nerve and lacrimal nerve. Structure Course The nasociliary nerve enters the orbit via the superior orbital fissure, through the common tendinous ring, and between the two heads of the lateral rectus muscle and between the superior and inferior rami of the oculomotor nerve. It passes across the optic nerve (CN II) along with the ophthalmic artery. It then runs obliquely beneath (inferior to) the superior rectus muscle and superior oblique muscle to the medial wall of the orbital cavity whereupon it emits the posterior ethmoidal nerve, and the anterior ethmoidal nerve. Branches Branches of the nasociliary nerve include: posterior ethmoidal nerve anterior ethmoidal nerve long ciliary nerves infratrochlear nerve communicating branch to ciliary ganglion Function The branches of the nasociliary nerve provide sensory innervation to structures surrounding the eye such as the cornea, eyelids, conjunctiva, ethmoid air cells and mucosa of the nasal cavity. Clinical significance Clinical assessment Since both the short and long ciliary nerves carry the afferent limb of the corneal reflex, one can test the integrity of the nasociliary nerve (and, ultimately, the trigeminal nerve) by examining this reflex in the patient. Normally both eyes should blink when either cornea (not the conjunctiva, which is supplied by the adjacent cutaneous nerves) is irritated. If neither eye blinks, then either the ipsilateral nasociliary nerve is damaged, or the facial nerve (CN VII, which carries the efferent limb of this reflex) is bilaterally damaged. If only the contralateral eye blinks, then the ipsilateral facial nerve is damaged. If only the ipsilateral eye blinks, then the contralateral facial nerve is damaged. Additional images
https://en.wikipedia.org/wiki/Demographic%20trap
According to the Encyclopedia of International Development, the term demographic trap is used by demographers "to describe the combination of high fertility (birth rates) and declining mortality (death rates) in developing countries, resulting in a period of high population growth rate (PGR)." High fertility combined with declining mortality happens when a developing country moves through the demographic transition of becoming developed. During "stage 2" of the demographic transition, quality of health care improves and death rates fall, but birth rates still remain high, resulting in a period of high population growth. The term "demographic trap" is used by some demographers to describe a situation where stage 2 persists because "falling living standards reinforce the prevailing high fertility, which in turn reinforces the decline in living standards." This results in more poverty, where people rely on more children to provide them with economic security. Social scientist John Avery explains that this results because the high birth rates and low death rates "lead to population growth so rapid that the development that could have slowed population is impossible." Results One of the significant outcomes of the "demographic trap" is explosive population growth. This is currently seen throughout Asia, Africa and Latin America, where death rates have dropped during the last half of the 20th century due to advanced health care. However, in subsequent decades most of those countries were unable to keep improving economic development to match their population's growth, by filling the education needs for more school age children, creating more jobs for the expanding workforce, and providing basic infrastructure and services, such as sewage, roads, bridges, water supplies, electricity, and stable food supplies. A possible result of a country remaining trapped in stage 2 is its government may reach a state of "demographic fatigue," writes Donald Kaufman. In this condit
https://en.wikipedia.org/wiki/DNA-binding%20domain
A DNA-binding domain (DBD) is an independently folded protein domain that contains at least one structural motif that recognizes double- or single-stranded DNA. A DBD can recognize a specific DNA sequence (a recognition sequence) or have a general affinity to DNA. Some DNA-binding domains may also include nucleic acids in their folded structure. Function One or more DNA-binding domains are often part of a larger protein consisting of further protein domains with differing function. The extra domains often regulate the activity of the DNA-binding domain. The function of DNA binding is either structural or involves transcription regulation, with the two roles sometimes overlapping. DNA-binding domains with functions involving DNA structure have biological roles in DNA replication, repair, storage, and modification, such as methylation. Many proteins involved in the regulation of gene expression contain DNA-binding domains. For example, proteins that regulate transcription by binding DNA are called transcription factors. The final output of most cellular signaling cascades is gene regulation. The DBD interacts with the nucleotides of DNA in a DNA sequence-specific or non-sequence-specific manner, but even non-sequence-specific recognition involves some sort of molecular complementarity between protein and DNA. DNA recognition by the DBD can occur at the major or minor groove of DNA, or at the sugar-phosphate DNA backbone (see the structure of DNA). Each specific type of DNA recognition is tailored to the protein's function. For example, the DNA-cutting enzyme DNAse I cuts DNA almost randomly and so must bind to DNA in a non-sequence-specific manner. But, even so, DNAse I recognizes a certain 3-D DNA structure, yielding a somewhat specific DNA cleavage pattern that can be useful for studying DNA recognition by a technique called DNA footprinting. Many DNA-binding domains must recognize specific DNA sequences, such as DBDs of transcription factors that activate specific genes.
https://en.wikipedia.org/wiki/Fresnel%20%28unit%29
A fresnel is a unit of frequency equal to 10¹² s⁻¹. It was occasionally used in the field of spectroscopy, but its use has been superseded by the terahertz (with the identical value of 10¹² hertz). It is named for Augustin-Jean Fresnel, the physicist whose expertise in optics led to the creation of Fresnel lenses.
https://en.wikipedia.org/wiki/Donor%20number
In chemistry a donor number (DN) is a quantitative measure of Lewis basicity. A donor number is defined as the negative enthalpy value for the 1:1 adduct formation between a Lewis base and the standard Lewis acid SbCl5 (antimony pentachloride), in dilute solution in the noncoordinating solvent 1,2-dichloroethane with a zero DN. The units are kilocalories per mole for historical reasons. The donor number is a measure of the ability of a solvent to solvate cations and Lewis acids. The method was developed by V. Gutmann in 1976. Likewise Lewis acids are characterized by acceptor numbers (AN, see Gutmann–Beckett method). Typical solvent values are: acetonitrile 14.1 kcal/mol (59.0 kJ/mol) acetone 17 kcal/mol (71 kJ/mol) methanol 19 kcal/mol (79 kJ/mol) tetrahydrofuran 20 kcal/mol (84 kJ/mol) dimethylformamide (DMF) 26.6 kcal/mol (111 kJ/mol) dimethyl sulfoxide (DMSO) 29.8 kcal/mol (125 kJ/mol) ethanol 31.5 kcal/mol (132 kJ/mol) pyridine 33.1 kcal/mol (138 kJ/mol) triethylamine 61 kcal/mol (255 kJ/mol) The donor number of a solvent can be measured via calorimetry, although it is frequently measured with nuclear magnetic resonance (NMR) spectroscopy using assumptions on complexation. A critical review of the donor number concept has pointed out the serious limitations of this affinity scale. Furthermore, it has been shown that to define the order of Lewis base strength (or Lewis acid strength) at least two properties must be considered. For Pearson qualitative HSAB theory, the two properties are hardness and strength, while for Drago's quantitative ECW model, the two properties are electrostatic and covalent.
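The kcal/mol figures above convert to kJ/mol with the factor 1 kcal = 4.184 kJ; a one-line check (values taken from the list above):

# Check the kcal/mol -> kJ/mol conversions quoted above (1 kcal = 4.184 kJ).
for name, kcal in [("acetonitrile", 14.1), ("DMSO", 29.8), ("pyridine", 33.1)]:
    print(f"{name}: {kcal} kcal/mol = {kcal * 4.184:.1f} kJ/mol")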
https://en.wikipedia.org/wiki/Lichenin
Lichenin, also known as lichenan or moss starch, is a complex glucan occurring in certain species of lichens. It can be extracted from Cetraria islandica (Iceland moss). It has been studied since about 1957. Structure Chemically, lichenin is a mixed-linkage glucan, consisting of repeating glucose units linked by β-1,3 and β-1,4 glycosidic bonds. Uses It is an important carbohydrate for reindeer and northern flying squirrels, which eat the lichen Bryoria fremontii. It can be extracted by digesting Iceland moss in a cold, weak solution of carbonate of soda for some time, and then boiling. By this process the lichenin is dissolved and on cooling separates as a colorless jelly. Iodine imparts no color to it. Other uses of the name In his 1960 novel Trouble with Lichen, John Wyndham gives the name Lichenin to a biochemical extract of lichen used to extend life expectancy beyond 300 years.
https://en.wikipedia.org/wiki/Etioplast
Etioplasts are an intermediate type of plastid that develop from proplastids that have not been exposed to light, and convert into chloroplasts upon exposure to light. They are usually found in stem and leaf tissue of flowering plants (Angiosperms) grown either in complete darkness, or in extremely low-light conditions. Etymology The word "etiolated" (from French word étioler — "straw") was first coined by Erasmus Darwin in 1791 to describe the white and straw-like appearance of dark-grown plants. However, the term "etioplast" did not exist until 1967 when it was invented by John T. O. Kirk and Richard A. E. Tilney-Bassett to distinguish etioplasts from proplastids, their precursors. Structure Etioplasts are characterized by the absence of chlorophyll and the presence of a complicated structure called a prolamellar body (PLB). Usually, a single one is present in each etioplast. PLB is composed of symmetrically arranged, tetrahedrally-branched tubules and may contain ribosomes and plastoglobules inside. The latter are rich with carotenoids, especially lutein and violaxanthin, which may help in transition to chloroplasts. Due to the higher presence of carotenoids than protochlorophyllide, etiolated leaves appear pale yellow instead of just white. Transition to chloroplast Every PLB contains protochlorophyllide which is rapidly converted into chlorophyllide by the enzyme protochlorophyllide reductase upon exposure to light. Following this, chlorophyllide is converted to chlorophyll through enzymatic processes. This is stimulated by plant growth hormones: cytokinins and gibberellins. The structure of PLB itself is almost immediately disrupted, and thylakoid and grana development is started in reaction to light: photosystem I activates within 15 minutes, photosystem II within 2 hours, and after approximately 3 hours an etioplast is completely converted into a functional chloroplast. The transitional stage between an etioplast and a chloroplast which still contains
https://en.wikipedia.org/wiki/Experts%20Exchange
Experts Exchange (EE) is a website for people in information technology (IT) related jobs to ask each other for tech help, receive instant help via chat, hire freelancers, and browse tech jobs. Controversy has surrounded their policy of providing answers only via paid subscription. History Experts Exchange went live in October 1996. The first question asked was for a "Case sensitive Win31 HTML Editor". Experts Exchange went bankrupt in 2001 after venture capitalists moved the company to San Mateo, CA, and was brought back largely through the efforts of unpaid volunteers. Later, Austin Miller and Randy Redberg took ownership of Experts Exchange, and the company was made profitable again. Experts Exchange claims to have more than 3 million solutions. Its users are mainly young to middle-aged males in the IT field. Paywall In the past, the site employed HTTP cookie and HTTP referer inspection to display content selectively. The page shown employed JavaScript to display answers to humans after some content showing how to become a member. Subsequently, when an internal link was clicked by the user, they were blocked from viewing the answer information until either becoming a paid member or spoofing their browser's User Agent string to that of a search engine crawler such as GoogleBot. In response to these obfuscation techniques, which prevented anonymous users from seeing answer content, a few members of the community wrote articles about how to bypass the obfuscation by spoofing one's web browser referrer using an addon like Smart Referrer and setting the referer as being from Google. Stack Overflow founder Jeff Atwood cited Experts-Exchange's poor reputation and paywall as a motivation for creating Stack Overflow. See also Bulletin board system Chat room Internet forum Virtual community
https://en.wikipedia.org/wiki/Electron%20scattering
Electron scattering occurs when electrons are displaced from their original trajectory. This is due to the electrostatic (Coulomb) forces within matter or, if an external magnetic field is present, to deflection by the Lorentz force. This scattering typically happens with solids such as metals, semiconductors and insulators; and is a limiting factor in integrated circuits and transistors. Electron scattering can also be used as a high-resolution probe of hadronic systems, which allows the measurement of the distribution of charges of nucleons and of nuclear structure. The scattering of electrons has allowed us to understand that protons and neutrons are made up of the smaller elementary subatomic particles called quarks. Electrons may be scattered through a solid in several ways: Not at all: no electron scattering occurs at all and the beam passes straight through. Single scattering: when an electron is scattered just once. Plural scattering: when electron(s) scatter several times. Multiple scattering: when electron(s) scatter many times over. The likelihood of an electron scattering and the degree of the scattering is a probability function of the specimen thickness to the mean free path. History The existence of the electron was first theorised in the period 1838–1851 by the natural philosopher Richard Laming, who speculated the existence of sub-atomic, unit-charged particles; he also pictured the atom as being an 'electrosphere' of concentric shells of electrical particles surrounding a material core. It is generally accepted that J. J. Thomson first discovered the electron in 1897, although other notable members in the development of charged particle theory are George Johnstone Stoney (who coined the term "electron"), Emil Wiechert (who was first to publish his independent discovery of the electron), Walter Kaufmann, Pieter Zeeman and Hendrik Lorentz. Compton scattering was first observed at Washington University in 1923 by Arthur Compton.
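The thickness dependence mentioned above is often modelled by treating the number of scattering events as Poisson-distributed (a standard textbook model, not stated in this excerpt; t/λ is the thickness in units of the mean free path, and the values below are hypothetical):

import math

# P(n scattering events) for specimen thickness t and mean free path lam,
# under the common Poisson model: P(n) = (t/lam)^n * exp(-t/lam) / n!
def p_scatter(n, t, lam):
    m = t / lam
    return m ** n * math.exp(-m) / math.factorial(n)

t, lam = 50.0, 100.0                                     # hypothetical, in nm
print(p_scatter(0, t, lam))                              # no scattering
print(p_scatter(1, t, lam))                              # single scattering
print(1 - sum(p_scatter(n, t, lam) for n in range(2)))   # plural/multiple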
https://en.wikipedia.org/wiki/Attack%20model
In cryptanalysis, attack models or attack types are a classification of cryptographic attacks specifying the kind of access a cryptanalyst has to a system under attack when attempting to "break" an encrypted message (also known as ciphertext) generated by the system. The greater the access the cryptanalyst has to the system, the more useful information they can get to utilize for breaking the cipher. In cryptography, a sending party uses a cipher to encrypt (transform) a secret plaintext into a ciphertext, which is sent over an insecure communication channel to the receiving party. The receiving party uses an inverse cipher to decrypt the ciphertext to obtain the plaintext. A secret knowledge is required to apply the inverse cipher to the ciphertext. This secret knowledge is usually a short number or string called a key. In a cryptographic attack a third party cryptanalyst analyzes the ciphertext to try to "break" the cipher, to read the plaintext and obtain the key so that future enciphered messages can be read. It is usually assumed that the encryption and decryption algorithms themselves are public knowledge and available to the cryptographer, as this is the case for modern ciphers which are published openly. This assumption is called Kerckhoffs's principle. Models Some common attack models are: Ciphertext-only attack (COA) - in this type of attack it is assumed that the cryptanalyst has access only to the ciphertext, and has no access to the plaintext. This type of attack is the most likely case encountered in real life cryptanalysis, but is the weakest attack because of the cryptanalyst's lack of information. Modern ciphers are required to be very resistant to this type of attack. In fact, a successful cryptanalysis in the COA model usually requires that the cryptanalyst must have some information on the plaintext, such as its distribution, the language in which the plaintexts are written, standard protocol data or framing which is part of the plaintext.
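A toy illustration of a ciphertext-only attack (entirely invented example: a Caesar shift broken using only knowledge of English letter statistics, exactly the kind of plaintext information mentioned above):

# Toy COA on a Caesar cipher: try all 26 keys, score each candidate
# plaintext against (rounded) English letter frequencies, keep the best.
FREQ = {"e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "i": 7.0, "n": 6.7,
        "s": 6.3, "h": 6.1, "r": 6.0, "d": 4.3, "l": 4.0, "u": 2.8}

def decrypt(text, k):
    return "".join(chr((ord(c) - 97 - k) % 26 + 97) if c.isalpha() else c
                   for c in text.lower())

def crack(ciphertext):
    return max((decrypt(ciphertext, k) for k in range(26)),
               key=lambda cand: sum(FREQ.get(c, 0.0) for c in cand))

# Invented example: 'attack the castle gate at dawn and hold the bridge', key 7.
print(crack("haahjr aol jhzasl nhal ha khdu huk ovsk aol iypknl"))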
https://en.wikipedia.org/wiki/Transcritical%20bifurcation
In bifurcation theory, a field within mathematics, a transcritical bifurcation is a particular kind of local bifurcation, meaning that it is characterized by an equilibrium having an eigenvalue whose real part passes through zero. A transcritical bifurcation is one in which a fixed point exists for all values of a parameter and is never destroyed. However, such a fixed point interchanges its stability with another fixed point as the parameter is varied. In other words, both before and after the bifurcation, there is one unstable and one stable fixed point. However, their stability is exchanged when they collide. So the unstable fixed point becomes stable and vice versa. The normal form of a transcritical bifurcation is $\frac{dx}{dt} = rx - x^2$. This equation is similar to the logistic equation, but in this case we allow $r$ and $x$ to be positive or negative (while in the logistic equation $r$ and $x$ must be non-negative). The two fixed points are at $x = 0$ and $x = r$. When the parameter $r$ is negative, the fixed point at $x = 0$ is stable and the fixed point $x = r$ is unstable. But for $r > 0$, the point at $x = 0$ is unstable and the point at $x = r$ is stable. So the bifurcation occurs at $r = 0$. A typical example (in real life) could be the consumer-producer problem where the consumption is proportional to the (quantity of) resource. For example: $\frac{dx}{dt} = rx(1 - x) - px$, where $rx(1 - x)$ is the logistic equation of resource growth, and $px$ is the consumption, proportional to the resource $x$.
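A quick numerical check of the stability exchange (a minimal sketch; the two parameter values are arbitrary): a fixed point $x^*$ of $dx/dt = f(x)$ is stable when $f'(x^*) < 0$, and for $f(x) = rx - x^2$ we have $f'(x) = r - 2x$.

# Verify the exchange of stability for dx/dt = r*x - x**2 at the
# fixed points x = 0 and x = r, using the sign of f'(x) = r - 2x.
def fprime(x, r):
    return r - 2 * x

for r in (-1.0, 1.0):                  # one value on each side of r = 0
    for x_star in (0.0, r):            # the two fixed points
        kind = "stable" if fprime(x_star, r) < 0 else "unstable"
        print(f"r = {r:+.0f}: x* = {x_star:+.0f} is {kind}")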
https://en.wikipedia.org/wiki/Saddle-node%20bifurcation
In the mathematical area of bifurcation theory a saddle-node bifurcation, tangential bifurcation or fold bifurcation is a local bifurcation in which two fixed points (or equilibria) of a dynamical system collide and annihilate each other. The term 'saddle-node bifurcation' is most often used in reference to continuous dynamical systems. In discrete dynamical systems, the same bifurcation is often instead called a fold bifurcation. Another name is blue sky bifurcation in reference to the sudden creation of two fixed points. If the phase space is one-dimensional, one of the equilibrium points is unstable (the saddle), while the other is stable (the node). Saddle-node bifurcations may be associated with hysteresis loops and catastrophes. Normal form A typical example of a differential equation with a saddle-node bifurcation is: $\frac{dx}{dt} = r + x^2$. Here $x$ is the state variable and $r$ is the bifurcation parameter. If $r < 0$ there are two equilibrium points, a stable equilibrium point at $x = -\sqrt{-r}$ and an unstable one at $x = +\sqrt{-r}$. At $r = 0$ (the bifurcation point) there is exactly one equilibrium point. At this point the fixed point is no longer hyperbolic. In this case the fixed point is called a saddle-node fixed point. If $r > 0$ there are no equilibrium points. In fact, this is a normal form of a saddle-node bifurcation. A scalar differential equation $\frac{dx}{dt} = f(x, r)$ which has a fixed point at $x = 0$ for $r = 0$ with $\frac{\partial f}{\partial x}(0, 0) = 0$ is locally topologically equivalent to $\frac{dx}{dt} = r \pm x^2$, provided it satisfies $\frac{\partial^2 f}{\partial x^2}(0, 0) \neq 0$ and $\frac{\partial f}{\partial r}(0, 0) \neq 0$. The first condition is the nondegeneracy condition and the second condition is the transversality condition. Example in two dimensions An example of a saddle-node bifurcation in two dimensions occurs in the two-dimensional dynamical system: $\frac{dx}{dt} = \alpha - x^2$, $\frac{dy}{dt} = -y$. As can be seen by the animation obtained by plotting phase portraits by varying the parameter $\alpha$: When $\alpha$ is negative, there are no equilibrium points. When $\alpha = 0$, there is a saddle-node point. When $\alpha$ is positive, there are two equilibrium points: that is, one saddle point and one node (either an attractor or a repellor). Other exa
https://en.wikipedia.org/wiki/John%20Kingman
Sir John Frank Charles Kingman (born 28 August 1939) is a British mathematician. He served as N. M. Rothschild and Sons Professor of Mathematical Sciences and Director of the Isaac Newton Institute at the University of Cambridge from 2001 until 2006, when he was succeeded by David Wallace. He is known for developing the mathematics of the Coalescent theory, a theoretical model of inheritance, which is fundamental to modern population genetics. Education and early life The grandson of a coal miner and son of a government scientist with a PhD in chemistry, Kingman was born in Beckenham, Kent, and grew up in the outskirts of London, where he attended Christ's College, Finchley, which was then a state grammar school. He was awarded a scholarship to read mathematics at Pembroke College, Cambridge, in 1956. On graduating in 1960, he began work on his PhD under the supervision of Peter Whittle, studying queueing theory, Markov chains and regenerative phenomena. Career and research Whittle left Cambridge for the University of Manchester, and, rather than follow him there, Kingman moved instead to the University of Oxford, where he resumed his work under David Kendall. After another year, Kendall was appointed a professor at Cambridge and so Kingman returned to Cambridge. He returned, however, as a member of the teaching staff (and a Fellow of Pembroke College) and never completed his PhD. He married Valerie Crompton, a historian at the University of Sussex in 1964, and in 1965 he took up the post of Reader at the newly built University of Sussex where she was teaching, and was elected Professor of Mathematics and Statistics after only a year. He said of this post: Sussex in the 1960s was a very exciting place, alive with ideas and opportunities. My wife was teaching history there, and we made many friends across the whole range of subjects. He held this post until 1969, when he moved, figuratively, but not physically, to Oxford as Wallis Professor of Mathematics, a
https://en.wikipedia.org/wiki/Spin%20density%20wave
Spin-density wave (SDW) and charge-density wave (CDW) are names for two similar low-energy ordered states of solids. Both these states occur at low temperature in anisotropic, low-dimensional materials or in metals that have high densities of states at the Fermi level $E_F$. Other low-temperature ground states that occur in such materials are superconductivity, ferromagnetism and antiferromagnetism. The transition to the ordered states is driven by the condensation energy, which is approximately $\Delta^2 / E_F$, where $\Delta$ is the magnitude of the energy gap opened by the transition. Fundamentally, SDWs and CDWs involve the development of a superstructure in the form of a periodic modulation in the density of the electronic spins and charges with a characteristic spatial frequency $\vec{Q}$ that does not transform according to the symmetry group that describes the ionic positions. The new periodicity associated with CDWs can easily be observed using scanning tunneling microscopy or electron diffraction, while the more elusive SDWs are typically observed via neutron diffraction or susceptibility measurements. If the new periodicity is a rational fraction or multiple of the lattice constant, the density wave is said to be commensurate; otherwise the density wave is termed incommensurate. Some solids with a high density of states at the Fermi level $N(E_F)$ form density waves, while others choose a superconducting or magnetic ground state at low temperatures, because of the existence of nesting vectors in the materials' Fermi surfaces. The concept of a nesting vector is illustrated in the figure for the famous case of chromium, which transitions from a paramagnetic to an SDW state at a Néel temperature of 311 K. Cr is a body-centered cubic metal whose Fermi surface features many parallel boundaries between electron pockets centered at $\Gamma$ and hole pockets at $H$. These large parallel regions can be spanned by the nesting wavevector $\vec{Q}$ shown in red. The real-space periodicity of the resulting spin-density wave is given by $2\pi / Q$. The formation of an SDW with
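As a small illustration of the commensurate/incommensurate distinction above, the following sketch (illustrative values, not measured data for chromium; the helper name classify is made up) recovers the real-space period $2\pi/Q$ from a wavevector and tests whether it is close to a small rational multiple of the lattice constant.

```python
# Illustrative commensurability check for a density-wave wavevector Q.
# The lattice constant and wavevectors below are made-up numbers.
from fractions import Fraction
import math

def classify(Q, a, max_den=20, tol=1e-6):
    """Return the real-space period 2*pi/Q and a commensurability verdict."""
    wavelength = 2 * math.pi / Q                  # real-space periodicity
    ratio = wavelength / a
    approx = Fraction(ratio).limit_denominator(max_den)
    if abs(float(approx) - ratio) < tol:          # small rational multiple of a
        return wavelength, f"commensurate ({approx} * a)"
    return wavelength, "incommensurate"

a = 1.0                                           # lattice constant (arbitrary units)
for Q in (math.pi / a,                            # period 2a -> commensurate
          2 * math.pi / (a * 1.037)):             # slightly detuned -> incommensurate
    print(classify(Q, a))
```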
https://en.wikipedia.org/wiki/MINTO
MINTO (Mixed Integer Optimizer) is a software system that solves mixed-integer programming problems by a branch-and-bound algorithm with linear programming relaxations. It also provides automatic constraint classification, preprocessing, primal heuristics and constraint generation. It has built-in cut generation and can create knapsack cuts, GUB cuts, clique cuts, implication cuts, flow cuts, mixed-integer-rounding cuts and Gomory cuts. Moreover, the user can enrich the basic algorithm with a variety of specialized application routines that customize MINTO to achieve higher efficiency for a problem class. MINTO does not have a linear programming (LP) solver of its own; it can use most LP solvers, such as CLP, CPLEX and XPRESS, through the OSI interface of COIN-OR. MINTO can read files in MPS format and can also be called as a solver from AMPL. It runs on both Linux and Windows operating systems. MINTO is a non-commercial solver, and its executables are available for free download from its home page at COR@L. See also COIN-OR (Computational Infrastructure for Operations Research)
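The source of MINTO itself is not shown here, but the core procedure it implements, branch and bound over LP relaxations, can be sketched compactly. The toy below is an assumption-laden illustration and not MINTO's API: it uses scipy's linprog as the LP engine in place of an OSI-backed solver, and it omits the preprocessing, heuristics and cutting planes listed above.

```python
# A minimal branch-and-bound sketch for a pure integer program, solving
# LP relaxations at each node. Hypothetical helper; not MINTO's interface.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Maximize c@x subject to A_ub@x <= b_ub, x integer within bounds."""
    best_val, best_x = -math.inf, None
    stack = [bounds]                       # each node = a list of (lo, hi) bounds
    while stack:
        node_bounds = stack.pop()
        # linprog minimizes, so negate the objective to maximize.
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub,
                      bounds=node_bounds)
        if not res.success:                # infeasible node: prune
            continue
        val = -res.fun                     # LP bound for this node
        if val <= best_val:                # cannot beat the incumbent: prune
            continue
        # Pick a variable whose LP value is fractional.
        frac = next((i for i, xi in enumerate(res.x)
                     if abs(xi - round(xi)) > 1e-6), None)
        if frac is None:                   # integral solution: new incumbent
            best_val, best_x = val, [round(xi) for xi in res.x]
            continue
        lo, hi = node_bounds[frac]
        xf = res.x[frac]
        # Branch: x_frac <= floor(xf) in one child, x_frac >= ceil(xf) in the other.
        left = list(node_bounds);  left[frac] = (lo, math.floor(xf))
        right = list(node_bounds); right[frac] = (math.ceil(xf), hi)
        stack += [left, right]
    return best_val, best_x

# Toy instance (made up for illustration):
# maximize 5x0 + 4x1  s.t.  6x0 + 4x1 <= 24,  x0 + 2x1 <= 6,  x integer >= 0
val, x = branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6],
                          [(0, None), (0, None)])
print(val, x)  # optimum 20 at x = [4, 0]
```

Each node tightens one variable's bounds to exclude a fractional LP value, and nodes whose LP bound cannot beat the incumbent are pruned; that pruning is what makes the search practical.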
https://en.wikipedia.org/wiki/Man%20After%20Man
Man After Man: An Anthropology of the Future is a 1990 speculative evolution and science fiction book written by Scottish geologist and palaeontologist Dougal Dixon and illustrated by Philip Hood. The book also features a foreword by Brian Aldiss. Man After Man explores a hypothetical future path of human evolution, set from 200 years in the future to 5 million years in the future, with several future human species evolving through genetic engineering and natural means over the course of the book. Man After Man is Dixon's third work on speculative evolution, following After Man (1981) and The New Dinosaurs (1988). Unlike the previous two books, which were written much like field guides, the focus of Man After Man lies largely on the individual perspectives of future human individuals of various species. Man After Man, like its predecessors, uses its fictional setting to explore and explain real natural processes, in this case climate change, through the eyes of the various human descendants in the book, who have been engineered specifically to adapt to it. Reviews of Man After Man were generally positive but more mixed than those of the previous books, and criticised its scientific basis to a greater extent than that of its predecessors. Dixon himself is not fond of the book, having referred to it as a "disaster of a project". During writing, the book changed considerably from its initial concept, which Dixon instead repurposed for his later book Greenworld (2010). Summary Man After Man explores an imaginary future evolutionary path of humanity, from 200 years in the future to five million years in the future. It contains several technological, social and biological concepts, most prominently genetic engineering but also parasitism, slavery, and elective surgery. As a result of mankind's technological prowess, evolution is accelerated, producing several species with varying intraspecific relations, many of them unrecognizable as humans. Instead of the field guide-like
https://en.wikipedia.org/wiki/Orion%20Electric
Orion Electric was a Japanese consumer electronics company that was established in 1958 in Osaka, Japan. Its devices were branded as "Orion". The company was known as Orion Electric until Brain and Capital Holdings, Inc., a Japanese company, acquired it in 2019. From 1984 until the acquisition, its headquarters were based in Echizen, Fukui, Japan. Products manufactured and sold under the Orion brand included transistor radios, radio/cassette recorders, car stereos, and home stereo systems. Before the acquisition, Orion was one of the world's largest OEM television and video equipment manufacturers, primarily supplying major-brand OEM customers, with Toshiba being its major customer in the 2000s. Orion produced around six million televisions and twelve million DVD player and TV combo units each year until 2019. Most of its products were manufactured in Thailand. The Orion Group employed in excess of 9,000 workers. It had factories and offices in Japan, Thailand, Poland, the United Kingdom, and the United States. Orion's flagship factories in Thailand were among Thailand's top exporters and were recognized with an award from the Thai government for their contribution. Orion manufactured products primarily for Memorex, Otake, Hitachi, JVC, Emerson, and Sansui. In the North American market, Orion manufactured many televisions and VCRs for Emerson Radio during the 1980s and 1990s, but when Emerson Radio went bankrupt in 2000, the Emerson brand and its assets were bought by Orion's primary competitor, Funai. During the 1990s, Orion and another of its brand names, World, were sold exclusively by Wal-Mart; the products sold consisted of discounted televisions, TV/VCR combos, and VHS players. In 2001, at its peak, Orion partnered with Toshiba to manufacture smaller CRT and LCD televisions, combo televisions, and DVD/VCR combos for the North American market; the partnership lasted until 2009. After Toshiba exited, Orion's production numbers dropped by more than 90% and ran in