https://en.wikipedia.org/wiki/A-not-B%20error
The A-not-B error reflects an incomplete or absent schema of object permanence, normally observed during the sensorimotor stage of Jean Piaget's theory of cognitive development. A typical A-not-B task goes like this: an experimenter hides an attractive toy under box "A" within the baby's reach. The baby searches for the toy, looks under box "A", and finds the toy. This activity is usually repeated several times (always with the researcher hiding the toy under box "A"), which shows that the baby can pass the object permanence test. Then, in the critical trial, the experimenter moves the toy under box "B", also within easy reach of the baby. Babies of 10 months or younger typically make the perseveration error, meaning they look under box "A" even though they saw the researcher move the toy under box "B", and box "B" is just as easy to reach. Piaget called this phenomenon the A-not-B error. It demonstrates a missing or incomplete schema of object permanence and shows that the infant's grasp of the object's existence at this stage still depends on the actions the infant directs toward the object. Children of 12 months or older (in the preoperational stage of Piaget's theory of cognitive development) typically do not make this error.

Competing explanations
Traditionally, this phenomenon has been explained as the child seeing an image and remembering where it was, rather than where it is. Other accounts deal with the development of planning, reaching, and deciding. There are also behaviorist accounts that explain the behavior in terms of reinforcement: the repeated trials in which the toy is hidden in box "A" reinforce that specific behavior, so the child keeps reaching for box "A" because that action has been reinforced before. However, this account does not explain the shift in behavior that occurs around 12 months. Smith and Thelen used a dynamic systems approach to the A-not-B task. They found that various components of the activity (streng
https://en.wikipedia.org/wiki/Comparison%20of%20file%20managers
The following tables compare general and technical information for a number of notable file managers.

General information

Operating system support
This table shows the operating systems that the file managers can run on, without emulation.

Cross-platform file managers
Mac-only file managers: Finder, ForkLift, Path Finder, Xfile, Commander One
*nix-only file managers: emelFM2, Gentoo file manager, Konqueror, Krusader, nnn, Nautilus, Nemo, PCMan File Manager, Ranger, ROX-Filer, Thunar, SpaceFM, Worker
Windows-only file managers: Altap Salamander, Directory Opus, Explorer++, File Manager, Nomad.NET, SE-Explorer, STDU Explorer, Total Commander, File Explorer, xplorer², XYplorer, ZTreeWin
iOS-only file managers: Files (Apple)
Android-only file managers: Files by Google, Ghost Commander

Manager views
Information about what common file manager views are implemented natively (without third-party add-ons). Note that the "Column View" does not refer to the Miller Columns browsing/visualization technique that can be applied to tree structures/folders. Twin-panel file managers have obligatorily connected panels, where an action in one panel results in a reaction in the second. Konqueror supports multiple panels divided horizontally, vertically, or both, but these panels do not act as twin panels by default (the user has to mark the panels they want to act as twin panels).

Network protocols
Information on what networking protocols the file managers support. Note that many of these protocols might be supported, in part or in whole, by software layers below the file manager, rather than by the file manager itself; for example, the macOS Finder doesn't implement those protocols, and the Windows Explorer doesn't implement most of them; they just make ordinary file system calls to access remote files, and Konqueror uses either ordinary file system calls or KIO slave calls to access remote files. Some functions, such as browsing for servers or shares, might be implemented in the file manager even if mo
https://en.wikipedia.org/wiki/Kazuya%20Mishima
Kazuya Mishima is a character in Bandai Namco's Tekken fighting game series, first featured as the protagonist of the original 1994 game and later one of the major antagonists and antiheroes of the series. The son of Heihachi Mishima, CEO of the worldwide conglomerate Mishima Zaibatsu, Kazuya seeks revenge against his father for throwing him off a cliff years earlier. Kazuya becomes corrupted in later games, seeking to obtain more power, and eventually comes into conflict with his son Jin Kazama. Kazuya Mishima possesses the Devil Gene, a demonic mutation inherited from his late mother, Kazumi Mishima, which can transform him into a demonic version of himself known as Devil Kazuya. Devil Kazuya has often appeared as a separate character in previous installments (excluding Tekken (1994)) prior to becoming part of Kazuya's moveset in Tekken Tag Tournament 2 and later games. Kazuya Mishima is also present in related series media and other games. The character was based on writer Yukio Mishima, with whom he shares a last name. A number of staff members have considered him one of the franchise's strongest characters, which has led to debates about reducing the damage of some of his moves or removing them altogether. Kazuya Mishima's devil form was created to bring unrealistic fighters into the series, but the incarnation has made few appearances. Several voice actors have portrayed Kazuya Mishima in video games and films related to Tekken. In addition to appearances in spin-offs in the Tekken series, Kazuya also appears as a playable character in Namco × Capcom, Project X Zone 2, Street Fighter X Tekken, The King of Fighters All Star and Super Smash Bros. Ultimate. Kazuya Mishima has been positively received by critics. A number of websites have listed him as one of the best Tekken characters and one of the best characters in fighting games. Journalists have praised Kazuya Mishima's moves and dark characterization, which rivals that of his father. In contrast, critical reception of Kazuya M
https://en.wikipedia.org/wiki/CD4
In molecular biology, CD4 (cluster of differentiation 4) is a glycoprotein that serves as a co-receptor for the T-cell receptor (TCR). CD4 is found on the surface of immune cells such as T helper cells, monocytes, macrophages, and dendritic cells. It was discovered in the late 1970s and was originally known as leu-3 and T4 (after the OKT4 monoclonal antibody that reacted with it) before being named CD4 in 1984. In humans, the CD4 protein is encoded by the CD4 gene. CD4+ T helper cells are white blood cells that are an essential part of the human immune system. They are often referred to as CD4 cells, T-helper cells or T4 cells. They are called helper cells because one of their main roles is to send signals to other types of immune cells, including CD8 killer cells, which then destroy the infectious particle. If CD4 cells become depleted, for example in untreated HIV infection, or following immune suppression prior to a transplant, the body is left vulnerable to a wide range of infections that it would otherwise have been able to fight.

Structure
Like many cell surface receptors/markers, CD4 is a member of the immunoglobulin superfamily. It has four immunoglobulin domains (D1 to D4) that are exposed on the extracellular surface of the cell: D1 and D3 resemble immunoglobulin variable (IgV) domains, while D2 and D4 resemble immunoglobulin constant (IgC) domains. The immunoglobulin variable (IgV) domain of D1 adopts an immunoglobulin-like β-sandwich fold with seven β-strands in two β-sheets, in a Greek key topology. CD4 interacts with the β2-domain of MHC class II molecules through its D1 domain. T cells displaying CD4 molecules (and not CD8) on their surface, therefore, are specific for antigens presented by MHC class II and not by MHC class I (they are MHC class II-restricted). MHC class I contains beta-2 microglobulin. The short cytoplasmic/intracellular tail (C) of CD4 contains a special sequence of amino acids that allows it to recruit and interact with the tyrosine ki
https://en.wikipedia.org/wiki/Cortex%20%28botany%29
In botany, a cortex is an outer layer of a stem or root in a vascular plant, lying below the epidermis but outside of the vascular bundles. The cortex is composed mostly of large thin-walled parenchyma cells of the ground tissue system and shows little to no structural differentiation. The outer cortical cells often acquire irregularly thickened cell walls and are called collenchyma cells.

Plants

Stems and branches
In the three-dimensional structure of herbaceous stems, the epidermis, cortex and vascular cambium form concentric cylinders around the inner cylindrical core of pith. Some of the outer cortical cells may contain chloroplasts, giving them a green color. They can therefore produce simple carbohydrates through photosynthesis. In woody plants, the cortex is located between the periderm (bark) and the vascular tissue (phloem, in particular). It is responsible for the transportation of materials into the central cylinder of the root through diffusion and may also be used for storage of food in the form of starch.

Roots
In the roots of vascular plants, the cortex occupies a larger portion of the organ's volume than in herbaceous stems. The loosely packed cells of the root cortex allow movement of water and oxygen in the intercellular spaces. One of the main functions of the root cortex is to serve as a storage area for reserve foods. The innermost layer of the cortex in the roots of vascular plants is the endodermis. The endodermis is responsible for storing starch as well as regulating the transport of water, ions and plant hormones.

Lichen
On a lichen, the cortex is also the surface layer or "skin" of the nonfruiting part of the body of some lichens. It is the "skin", or outer layer of tissue, that covers the undifferentiated cells of the medulla. Fruticose lichens have one cortex encircling the branches, even in flattened, leaf-like forms. Foliose lichens have different upper and lower cortices. Crustose, placodioid, and squamulose lichens have an upper cor
https://en.wikipedia.org/wiki/Cortex%20%28anatomy%29
In anatomy and zoology, the cortex (pl.: cortices) is the outermost (or superficial) layer of an organ. Organs with well-defined cortical layers include kidneys, adrenal glands, ovaries, the thymus, and portions of the brain, including the cerebral cortex, the best-known of all cortices.

Etymology
The word is of Latin origin and means bark, rind, shell or husk.

Notable examples
The renal cortex, between the renal capsule and the renal medulla; assists in ultrafiltration.
The adrenal cortex, situated along the perimeter of the adrenal gland; mediates the stress response through the production of various hormones.
The thymic cortex, mainly composed of lymphocytes; functions as a site for somatic recombination of T cell receptors, and positive selection.
The cerebral cortex, the outer layer of the cerebrum; plays a key role in memory, attention, perceptual awareness, thought, language, and consciousness.
Cortical bone, the hard outer layer of bone; distinct from the spongy, inner cancellous bone tissue.
The ovarian cortex, the outer layer of the ovary; contains the follicles.
The lymph node cortex, the outer layer of the lymph node.

Cerebral cortex
The cerebral cortex is typically described as comprising three parts: the sensory, motor, and association areas. The sensory areas receive and process information from the senses. The senses of vision, audition, and touch are served by the primary visual cortex, the primary auditory cortex, and the primary somatosensory cortex. The cerebellar cortex is the thin gray surface layer of the cerebellum, consisting of an outer molecular layer or stratum moleculare, a single layer of Purkinje cells (the ganglionic layer), and an inner granular layer or stratum granulosum. The cortex is the outer surface of the cerebrum and is composed of gray matter. The motor areas are located in both hemispheres of the cerebral cortex. Two areas of the cortex are commonly referred to as motor: the primary motor cortex, which executes v
https://en.wikipedia.org/wiki/Service%20control%20point
A service control point (SCP) is a standard component of the Intelligent Network (IN) telephone system which is used to control the service. Standard SCPs in the telecom industry today are deployed using SS7, SIGTRAN or SIP technologies. The SCP queries the service data point (SDP), which holds the actual database and directory. Using the database from the SDP, the SCP identifies the geographical number to which the call is to be routed. This is the same mechanism that is used to route 800 numbers. The SCP may also communicate with an intelligent peripheral (IP) to play voice messages or prompt for information from the user, as with prepaid long distance using account codes. This is done by implementing telephone feature codes like "#", which can be used to terminate the input for a user name or password, or for call forwarding. These functions are realized using the Intelligent Network Application Part (INAP), which sits above the Transaction Capabilities Application Part (TCAP) on the SS7 protocol stack. TCAP is part of the top, or 7th, layer of the OSI model. SCPs are connected with either SSPs or STPs, depending on the network architecture that the network service provider wants; the most common implementation uses STPs. Splitting the SCP and SDP is becoming common industry practice, generally known as split architecture. The reason is that operators want to decouple the two functions to facilitate upgrades and possibly rely on different vendors.

External links
See Telcordia GR-1299-CORE for Service Control Point/Adjunct Interface generic requirements.
https://en.wikipedia.org/wiki/RNA%20polymerase%20II
RNA polymerase II (RNAP II and Pol II) is a multiprotein complex that transcribes DNA into precursors of messenger RNA (mRNA) and most small nuclear RNA (snRNA) and microRNA. It is one of the three RNAP enzymes found in the nucleus of eukaryotic cells. A 550 kDa complex of 12 subunits, RNAP II is the most studied type of RNA polymerase. A wide range of transcription factors are required for it to bind to upstream gene promoters and begin transcription.

Discovery
Early studies suggested a minimum of two RNAPs: one which synthesized rRNA in the nucleolus, and one which synthesized other RNA in the nucleoplasm, the part of the nucleus outside the nucleolus. In 1969, biochemists Robert G. Roeder and William Rutter discovered that there are three distinct nuclear RNA polymerases, including an additional RNAP responsible for transcription of some kinds of RNA in the nucleoplasm. The finding was obtained by the use of ion-exchange chromatography via DEAE-coated Sephadex beads. The technique separated the enzymes into successive elutions, I, II, III, by increasing the concentration of ammonium sulfate, and the enzymes were named according to the order of their elution: RNAP I, RNAP II, RNAP III. This discovery demonstrated that an additional enzyme was present in the nucleoplasm, which allowed for the differentiation between RNAP II and RNAP III. RNA polymerase II (RNAP2) undergoes regulated transcriptional pausing during early elongation. Various studies have shown that disruption of transcription elongation is implicated in cancer, neurodegeneration, and HIV latency, among other conditions.

Subunits
The eukaryotic core RNA polymerase II was first purified using transcription assays. The purified enzyme typically has 10–12 subunits (12 in humans and yeast) and is incapable of specific promoter recognition. Many subunit-subunit interactions are known. DNA-directed RNA polymerase II subunit RPB1 – an enzyme that in humans is encoded by the POLR2A gene and in yeast is encoded
https://en.wikipedia.org/wiki/Microwave%20spectroscopy
Microwave spectroscopy is the spectroscopy method that employs microwaves, i.e. electromagnetic radiation at GHz frequencies, for the study of matter.

History
The ammonia molecule NH3 is shaped like a pyramid 0.38 Å in height, with an equilateral triangle of hydrogens forming the base. The nitrogen situated on the axis has two equivalent equilibrium positions above and below the triangle of hydrogens, and this raises the possibility of the nitrogen tunneling up and down, through the plane of the H-atoms. In 1932 Dennison et al. ... analyzed the vibrational energy of this molecule and concluded that the vibrational energy would be split into pairs by the presence of these two equilibrium positions. The next year Wright and Randall observed ... a splitting of 0.67 cm−1 in far infrared lines, corresponding to a frequency of 20 GHz, the value predicted by theory. In 1934 Cleeton and Williams ... constructed a grating echelle spectrometer in order to measure this splitting directly, thereby beginning the field of microwave spectroscopy. They observed a somewhat asymmetric absorption line with a maximum at 24 GHz and a full width at half height of 12 GHz.

In molecular physics
In the field of molecular physics, microwave spectroscopy is commonly used to probe the rotation of molecules.

In condensed matter physics
In the field of condensed matter physics, microwave spectroscopy is used to detect dynamic phenomena of either charges or spins at GHz frequencies (corresponding to nanosecond time scales) and energy scales in the µeV regime. Matching these energy scales, microwave spectroscopy on solids is often performed as a function of temperature (down to cryogenic regimes of a few K or even lower) and/or magnetic field (with fields up to several T). Spectroscopy traditionally considers the frequency-dependent response of materials, and in the study of dielectrics microwave spectroscopy often covers a large frequency range. In contrast, for conductive samples as well as
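The correspondence quoted above between the 0.67 cm−1 splitting and a 20 GHz frequency follows directly from ν = c·σ, where σ is the wavenumber. A minimal check in Python (the function name is illustrative):

# Convert a spectroscopic wavenumber (cm^-1) to frequency (GHz): nu = c * wavenumber.
C_CM_PER_S = 2.998e10  # speed of light in cm/s

def wavenumber_to_ghz(wavenumber_per_cm: float) -> float:
    """Frequency in GHz for a wavenumber given in cm^-1."""
    return C_CM_PER_S * wavenumber_per_cm / 1e9

# The 0.67 cm^-1 inversion splitting observed by Wright and Randall:
print(f"{wavenumber_to_ghz(0.67):.1f} GHz")  # ~20.1 GHz, matching the ~20 GHz in the text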
https://en.wikipedia.org/wiki/Mental%20space
The mental space is a theoretical construct proposed by Gilles Fauconnier corresponding to possible worlds in truth-conditional semantics. The main difference between a mental space and a possible world is that a mental space does not contain a faithful representation of reality, but an idealized cognitive model. The building of mental spaces and the establishment of mappings between those mental spaces are the two main processes involved in the construction of meaning. It is one of the basic components in Gilles Fauconnier and Mark Turner's blending theory, a theory within cognitive semantics.

Base space and built space
Base space, also known as reality space, represents the interlocutors' shared knowledge of the real world. Space builders are elements within a sentence that establish spaces distinct from, yet related to, the base space. Space builders can be expressions like prepositional phrases, adverbs, connectives, and subject-verb combinations that are followed by an embedded sentence. They require hearers to establish scenarios beyond the present point of time. A built space depicts a situation that only holds true for that space itself, but may or may not be true in reality. The base space and built spaces are occupied by elements that map onto each other. These elements include categories that may refer to specific entities in those categories. According to Fauconnier's Access Principle, a specific entity of a category in a space can be described by its counterpart category in another space even if it differs from the specific entity in the other space. An example of a built space can be seen in the sentence "Mary wants to buy a book". In this case, the built space is not that of reality, but Mary's desire space. Though the book in reality space refers to any book in general, it can still be used to describe the book in Mary's desire space, which may or may not be a specific book.

Foundation and expansion space
'if A then B' sentences create another t
https://en.wikipedia.org/wiki/Pospiviroidae
The Pospiviroidae are an incertae sedis family of ssRNA viroids with 5 genera and 39 species, including the first viroid to be discovered, PSTVd, which is part of the genus Pospiviroid. Their secondary structure is key to their biological activity. The classification of this family is based on differences in the conserved central region sequence. Pospiviroidae replication occurs in an asymmetric fashion via host cell RNA polymerase, RNase, and RNA ligase. Its hosts are plants, specifically dicotyledons and some monocotyledons.

Genome
Members of the family Pospiviroidae have circular ssRNA of 246–375 nt. They assume rod-like or quasi-rod-like conformations containing a central conserved region (CCR) and a terminal conserved hairpin (TCH) or a terminal conserved region (TCR). The genome of viroids does not encode any proteins.

Replication
Replication is nuclear and mediated by DNA-dependent RNA polymerase II, which is redirected to use RNA templates through an asymmetric RNA–RNA rolling-circle mechanism. (+) polarity circRNA molecules (by convention the most abundant strand in vivo) are repeatedly transcribed into oligomeric complementary (−) RNAs. Such intermediates serve as templates for generating oligomeric (+) RNAs that are cleaved by a host enzyme of the RNase III class. The termini of the resulting linear monomers are ligated by the host DNA ligase 1 to generate the mature circular viroid RNA.

Taxonomy
Genus Apscaviroid: Apple dimple fruit viroid, Apple scar skin viroid, Apscaviroid aclsvd, Apscaviroid cvd-VII, Apscaviroid dvd, Apscaviroid glvd, Apscaviroid lvd, Apscaviroid plvd-I, Apscaviroid pvd, Apscaviroid pvd-2, Australian grapevine viroid, Citrus bent leaf viroid, Citrus dwarfing viroid, Citrus viroid V, Citrus viroid VI, Grapevine yellow speckle viroid 1, Grapevine yellow speckle viroid 2, Pear blister canker viroid
Genus Cocadviroid: Citrus bark cracking viroid, Coconut cadang-cadang viroid, Coconut tinangaja viroid, Hop latent viroid
Genus Coleviroid: Coleus blu
https://en.wikipedia.org/wiki/Apophysis%20%28software%29
Apophysis is an open source fractal flame editor and renderer for Microsoft Windows and Macintosh. Apophysis has many features for creating and editing fractal flames, including an editor which allows one to directly edit the transforms by manipulating triangles; a mutations window, which applies random edits to the triangles; and an adjust window, which allows the adjustment of coloring and location of the image. It also provides a scripting language with direct access to most of the components of the fractal, which allows for effects such as the animations seen in Electric Sheep, which are also fractal flames. Users can export fractal flames to other fractal flame rendering programs, such as FLAM3. There is a separate version of Apophysis that supports 3D. There are numerous clones, ports, and forks of it.

History
Scott Draves invented fractal flames and published an open source implementation written in C in the early 1990s. In 2001, Ronald Hordijk translated his code into Delphi and created a non-animated screensaver. In 2003 or 2004, Mark Townsend took Hordijk's code and added a graphical user interface to create Apophysis. It has since been improved and updated by Peter Sdobnov, Piotr Borys, and Ronald Hordijk. Since 2009, there has been a version of Apophysis called Apophysis 7X. Originally, it aimed to provide support for modern Microsoft Windows operating systems like Windows Vista and 7. Strong feedback from Apophysis users encouraged the developer Georg Kiehne to provide updates, which made 7X the most popular and advanced version of Apophysis so far.

Technical details
The user specifies a set of mathematical functions. Each function is a composition of an affine map and usually some non-linear map. This set of functions is called an iterated function system (IFS). Apophysis then generates the attractor of this set of functions by means of Monte Carlo simulation. In fact, Apophysis generates a probability measure, which is then colored accor
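As a rough illustration of how such a renderer samples an IFS attractor by Monte Carlo iteration, here is a minimal "chaos game" sketch in Python. The three affine maps are the classic Sierpinski-triangle system, chosen only for illustration; Apophysis additionally composes each affine map with nonlinear variations and accumulates per-pixel color and density, which this sketch omits.

import random

# Chaos game: iterate a randomly chosen map of the IFS from an arbitrary
# starting point; the visited points converge onto the attractor.
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def chaos_game(n_points=50_000, burn_in=20):
    x, y = random.random(), random.random()
    points = []
    for i in range(n_points):
        f = random.choice(MAPS)   # pick one function of the IFS at random
        x, y = f(x, y)
        if i >= burn_in:          # discard early iterates before convergence
            points.append((x, y))
    return points

pts = chaos_game()
print(len(pts), "attractor samples")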
https://en.wikipedia.org/wiki/Theory%20of%20Colours
Theory of Colours (German: Zur Farbenlehre) is a book by Johann Wolfgang von Goethe about the poet's views on the nature of colours and how they are perceived by humans. It was published in German in 1810 and in English in 1840. The book contains detailed descriptions of phenomena such as coloured shadows, refraction, and chromatic aberration. The book is a successor to two short essays titled "Contributions to Optics" (German: "Beiträge zur Optik"). The work originated in Goethe's occupation with painting and primarily had its influence in the arts, with painters such as Philipp Otto Runge, J. M. W. Turner, the Pre-Raphaelites, Hilma af Klint, and Wassily Kandinsky. Although Goethe's work was rejected by some physicists, a number of philosophers and physicists have concerned themselves with it, including Thomas Johann Seebeck, Arthur Schopenhauer (see: On Vision and Colors), Hermann von Helmholtz, Ludwig Wittgenstein, Werner Heisenberg, Kurt Gödel, and Mitchell Feigenbaum. Goethe's book provides a catalogue of how colour is perceived in a wide variety of circumstances, and considers Isaac Newton's observations to be special cases. Unlike Newton, Goethe's concern was not so much with the analytic treatment of colour as with the qualities of how phenomena are perceived. Philosophers have come to understand the distinction between the optical spectrum, as observed by Newton, and the phenomenon of human colour perception as presented by Goethe—a subject analyzed at length by Wittgenstein in his comments on Goethe's theory in Remarks on Colour.

Historical background
In Goethe's time, it was generally acknowledged that, as Isaac Newton had shown in his Opticks in 1704, colourless (white) light is split up into its component colours when directed through a prism. Goethe's starting point was the supposed discovery of how Newton erred in the prismatic experiment, and by 1793 Goethe had formulated his arguments against Newton in the essay "Über Newtons Hypothese der diversen Refrangibilität" ("On Newton's hypoth
https://en.wikipedia.org/wiki/Entropy%20unit
The entropy unit is a non-SI unit of thermodynamic entropy, usually denoted "e.u." or "eU" and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole. Entropy units are primarily used in chemistry to describe entropy changes.
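A minimal conversion helper in Python based on the factor stated above (the function name is illustrative):

# 1 e.u. = 1 cal/(K*mol) = 4.184 J/(K*mol)
J_PER_CAL = 4.184

def eu_to_si(entropy_eu: float) -> float:
    """Entropy in J/(K*mol) from a value in entropy units (cal/(K*mol))."""
    return entropy_eu * J_PER_CAL

print(eu_to_si(1.0))  # 4.184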
https://en.wikipedia.org/wiki/Electronic%20component
An electronic component is any basic discrete electronic device or physical entity forming part of an electronic system, used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in singular form, and are not to be confused with electrical elements, which are conceptual abstractions representing idealized electronic components and elements. A datasheet for an electronic component is a technical document that provides detailed information about the component's specifications, characteristics, and performance. Electronic components have a number of electrical terminals or leads. These leads connect to other electrical components, often over wire, to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Basic electronic components may be packaged discretely, as arrays or networks of like components, or integrated inside of packages such as semiconductor integrated circuits, hybrid integrated circuits, or thick film devices. The following list of electronic components focuses on the discrete version of these components, treating such packages as components in their own right.

Classification
Components can be classified as passive, active, or electromechanical. The strict physics definition treats passive components as ones that cannot supply energy themselves, whereas a battery would be seen as an active component since it truly acts as a source of energy. However, electronic engineers who perform circuit analysis use a more restrictive definition of passivity. When only concerned with the energy of signals, it is convenient to ignore the so-called DC circuit and pretend that the power supplying components such as transistors or integrated circuits is absent (as if each such component had its own battery built in), though it may in reality be supplied by the DC circuit. Then, the analysis only concerns the AC circuit, an abstraction that ignores DC voltages a
https://en.wikipedia.org/wiki/Neutron%20capture
Neutron capture is a nuclear reaction in which an atomic nucleus and one or more neutrons collide and merge to form a heavier nucleus. Since neutrons have no electric charge, they can enter a nucleus more easily than positively charged protons, which are repelled electrostatically. Neutron capture plays a significant role in the cosmic nucleosynthesis of heavy elements. In stars it can proceed in two ways: as a rapid process (r-process) or a slow process (s-process). Nuclei of masses greater than 56 cannot be formed by thermonuclear reactions (i.e., by nuclear fusion) but can be formed by neutron capture. Neutron capture on protons yields a line at 2.223 MeV predicted and commonly observed in solar flares.

Neutron capture at small neutron flux
At small neutron flux, as in a nuclear reactor, a single neutron is captured by a nucleus. For example, when natural gold (197Au) is irradiated by neutrons (n), the isotope 198Au is formed in a highly excited state, and quickly decays to the ground state of 198Au by the emission of gamma rays (γ). In this process, the mass number increases by one. This is written as a formula in the form 197Au + n → 198Au + γ, or in short form 197Au(n,γ)198Au. If thermal neutrons are used, the process is called thermal capture. The isotope 198Au is a beta emitter that decays into the mercury isotope 198Hg. In this process, the atomic number rises by one.

Neutron capture at high neutron flux
The r-process happens inside stars if the neutron flux density is so high that the atomic nucleus has no time to decay via beta emission between neutron captures. The mass number therefore rises by a large amount while the atomic number (i.e., the element) stays the same. When further neutron capture is no longer possible, the highly unstable nuclei decay via many β− decays to beta-stable isotopes of higher-numbered elements.

Capture cross section
The absorption neutron cross section of an isotope of a chemical element is the effective cross-sectional area that an atom of that isotope
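The 2.223 MeV line mentioned above is the energy released when a neutron and a proton fuse into a deuteron (n + p → d + γ), which can be checked from rest-mass energies. A small sketch in Python, using approximate CODATA mass values that are assumed here rather than taken from the article:

# Rest-mass energies in MeV (approximate CODATA values).
M_NEUTRON = 939.565
M_PROTON = 938.272
M_DEUTERON = 1875.613

# Energy released by n + p -> d + gamma is the deuteron binding energy.
gamma_energy = M_NEUTRON + M_PROTON - M_DEUTERON
print(f"{gamma_energy:.3f} MeV")  # ~2.224 MeV; the observed line is 2.223 MeV,
                                  # the tiny difference going into deuteron recoil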
https://en.wikipedia.org/wiki/Mankind%20Quarterly
Mankind Quarterly is a peer-reviewed journal that has been described as a "cornerstone of the scientific racism establishment", a "white supremacist journal", and "a pseudo-scholarly outlet for promoting racial inequality". It covers physical and cultural anthropology, including human evolution, intelligence, ethnography, linguistics, mythology, archaeology, and biology. It is published by the Ulster Institute for Social Research, which was presided over by Richard Lynn.

History
The journal was established in 1960 with funding from segregationists, who designed it to serve as a mouthpiece for their views. The costs of initially launching the journal were paid by the Pioneer Fund's Wickliffe Draper. The founders were Robert Gayre, Henry Garrett, Roger Pearson, Corrado Gini, Luigi Gedda (Honorary Advisory Board), Otmar von Verschuer and Reginald Ruggles Gates. Another early editor was Herbert Charles Sanborn, formerly the chair of the department of Philosophy and Psychology at Vanderbilt University from 1921 to 1942. It was originally published in Edinburgh, Scotland, by the International Association for the Advancement of Ethnology and Eugenics, an organization founded by Draper to promote eugenics and scientific racism. Its foundation was a response to the declaration by UNESCO, which dismissed the validity of race as a biological concept, and to attempts to end racial segregation in the American South. In 1961, physical anthropologist Juan Comas published a series of scathing critiques of the journal, arguing that it was reproducing discredited racial ideologies, such as Nordicism and anti-Semitism, under the guise of science. In 1963, after the journal's first issue, contributors U. R. Ehrenfels, T. N. Madan, and Juan Comas said that the journal's editorial practice was biased and misleading. In response, the journal published a series of rebuttals and attacks on Comas. Comas argued in Current Anthropology that the journal's publication of A. James Gr
https://en.wikipedia.org/wiki/CAD/CAM%20in%20the%20footwear%20industry
CAD/CAM in the footwear industry is the use of computers and graphics software for designing and grading shoe upper patterns and for manufacturing cutting dies, shoe lasts and sole moulds. CAD/CAM software is a PC-based system made up of program modules. Today, there are 2D and 3D versions of CAD/CAM systems in the shoe industry. Computer aided design was introduced in the shoe industry in the 1970s. Initially, it was used primarily for pattern grading. It enabled manufacturers to perform complex grading relatively easily and quickly. CAD systems today have been developed with a much wider range of functions. Logos, textures, and other decorations can be incorporated into product designs of both the uppers and soles to help reinforce branding on all areas of the model. It automates routine procedures, increasing speed and consistency, whilst reducing the possibility of mistakes. CAD data can now be used effectively for a wide variety of activities across a footwear manufacturing business. CAD/CAM generates data at the design stage, which can be used right through the planning and manufacturing stages. The latest improvements in CAD/CAM technology are:
Graphic capabilities and interconnectivity have improved enormously.
Software developments have progressively made systems more intuitive and easier to use.
With 2D sketch and paint modules, a serviceable sketch can be produced and then colour and texture can be added.
3D systems enable the last and design to be viewed from any perspective and from several angles, even simultaneously.
With CAD/CAM software, footwear manufacturers can cut their time to market dramatically and so increase market share and profitability. In addition, the power and flexibility of the software can overcome restrictions to the designer's creativity imposed by traditional methods.

Sole design
CAD/CAM software can be used to generate machining data for shoe sole models and moulds. Shoe sole mould makers are able to strengthen t
https://en.wikipedia.org/wiki/AARON
AARON is the collective name for a series of computer programs written by artist Harold Cohen that create original artistic images. Proceeding from Cohen's initial question "What are the minimum conditions under which a set of marks functions as an image?", AARON was in development between 1972 and the 2010s. As the software is not open source, its development effectively ended with Cohen's death in 2016. The name "AARON" does not seem to be an acronym; rather, it was a name chosen to start with the letter "A" so that the names of successive programs could follow it alphabetically. However, Cohen did not create any other major programs. Initial versions of AARON created abstract drawings that grew more complex through the 1970s. More representational imagery was added in the 1980s: first rocks, then plants, then people. In the 1990s more representational figures set in interior scenes were added, along with color. AARON returned to more abstract imagery, this time in color, in the early 2000s. Cohen used machines that allowed AARON to produce physical artwork. The first machines drew in black and white using a succession of custom-built "turtle" and flatbed plotter devices. Cohen would sometimes color these images by hand in fabric dye (Procion), or scale them up to make larger paintings and murals. In the 1990s Cohen built a series of digital painting machines to output AARON's images in ink and fabric dye. His later work used a large-scale inkjet printer on canvas. Development of AARON began in the C programming language, then switched to Lisp in the early 1990s. Cohen credited Lisp with helping him solve the challenges he faced in adding color capabilities to AARON. An article about Cohen appeared in Computer Answers that describes AARON and shows two line drawings that were exhibited at the Tate gallery. The article goes on to describe the workings of AARON, then running on a DEC VAX 750 minicomputer. Raymond Kurzweil's company has produced a downloadable sc
https://en.wikipedia.org/wiki/Stein%27s%20example
In decision theory and estimation theory, Stein's example (also known as Stein's phenomenon or Stein's paradox) is the observation that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955. An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.

Formal statement
The following is the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Let θ = (θ1, ..., θn) be a vector consisting of n ≥ 3 unknown parameters. To estimate these parameters, a single measurement Xi is performed for each parameter θi, resulting in a vector X of length n. Suppose the measurements are known to be independent, Gaussian random variables, with mean θ and variance 1, i.e., Xi ~ N(θi, 1). Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as θ̂ = X, which is the maximum likelihood estimator (MLE). The quality of such an estimator is measured by its risk function. A commonly used risk function is the mean squared error, defined as E[‖θ − θ̂‖²]. Surprisingly, it turns out that the "ordinary" decision rule is suboptimal (inadmissible) in terms of mean
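The phenomenon is easy to verify numerically. The sketch below uses the James–Stein shrinkage estimator, the standard combined estimator for this setting (the excerpt above states the phenomenon but not this particular formula), and compares its empirical risk with that of the ordinary rule θ̂ = X:

import numpy as np

# Monte-Carlo check of Stein's phenomenon: for n >= 3, James-Stein shrinkage
# beats the "ordinary" rule theta_hat = X in total mean squared error.
rng = np.random.default_rng(0)
n, trials = 10, 100_000
theta = rng.normal(size=n)                    # arbitrary true parameters

X = theta + rng.normal(size=(trials, n))      # X_i ~ N(theta_i, 1)

mse_mle = np.mean(np.sum((X - theta) ** 2, axis=1))

# James-Stein: shrink X toward the origin by a data-dependent factor.
shrink = 1.0 - (n - 2) / np.sum(X ** 2, axis=1, keepdims=True)
js = shrink * X
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))

print(f"MLE risk ~ {mse_mle:.2f} (theory: n = {n})")
print(f"James-Stein risk ~ {mse_js:.2f} (strictly smaller)")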
https://en.wikipedia.org/wiki/Visual%20phototransduction
Visual phototransduction is the sensory transduction process of the visual system by which light is detected to yield nerve impulses in the rod cells and cone cells in the retina of the eye in humans and other vertebrates. It relies on the visual cycle, a sequence of biochemical reactions in which a molecule of retinal bound to opsin undergoes photoisomerization, initiates a cascade that signals detection of the photon, and is indirectly restored to its photosensitive isomer for reuse. Phototransduction in some invertebrates such as fruit flies relies on similar processes.

Photoreceptors
The photoreceptor cells involved in vertebrate vision are the rods, the cones, and the photosensitive ganglion cells (ipRGCs). These cells contain a chromophore (11-cis-retinal, the aldehyde of vitamin A1 and the light-absorbing portion) that is bound to a cell membrane protein, opsin. Rods are responsible for vision at low light levels and for contrast detection. Because they all have the same response across frequencies, no color information can be deduced from the rods alone, as in low light conditions for example. Cones, on the other hand, are of different kinds with different frequency responses, such that color can be perceived through comparison of the outputs of the different kinds of cones. Each cone type responds best to certain wavelengths, or colors, of light because each type has a slightly different opsin. The three types of cones are L-cones, M-cones and S-cones, which respond optimally to long wavelengths (reddish color), medium wavelengths (greenish color), and short wavelengths (bluish color) respectively. Humans have trichromatic photopic vision consisting of three opponent process channels that enable color vision.

Visual cycle
The visual cycle occurs via G-protein coupled receptors called retinylidene proteins, which consist of a visual opsin and a chromophore, 11-cis-retinal. The 11-cis-retinal is covalently linked to the opsin receptor via a Schiff base. When it absorbs a photon, 11
https://en.wikipedia.org/wiki/Pureland%20origami
Pureland origami is a style of origami invented by the British paper folder John Smith that is limited to using only mountain and valley folds. The aim of Pureland origami is to make origami easier for inexperienced folders and those who have impaired motor skills. This means that many, but not all, of the more complicated processes common in regular origami are impossible, and so alternative manipulations have been developed to create similar effects.

See also
Origami
Origami techniques

External links
Some Thoughts on Minimal Folding by John Smith in Bits of Smith.
FOLDS.NET - Some diagrams of pureland origami.
The Origami Interest Group - Three pureland diagrams. (archived 2011)
https://en.wikipedia.org/wiki/Large%20Plasma%20Device
The Large Plasma Device (often stylized as LArge Plasma Device or LAPD) is an experimental physics device located at UCLA. It is designed as a general-purpose laboratory for experimental plasma physics research. The device began operation in 1991 and was upgraded in 2001 to its current version. The modern LAPD is operated as the primary device for a national collaborative research facility, the Basic Plasma Science Facility (or BaPSF), which is supported by the US Department of Energy, Fusion Energy Sciences and the National Science Foundation. Half of the operation time of the device is available to scientists at other institutions and facilities, who can compete for time through a yearly solicitation.

History
The first version of the LAPD was a 10 meter long device constructed by a team led by Walter Gekelman in 1991. The construction took 3.5 years to complete and was funded by the Office of Naval Research (ONR). A major upgrade to a 20 meter version was funded by ONR and an NSF Major Research Instrumentation award in 1999. Following the completion of that major upgrade, the award of a $4.8 million grant by the US Department of Energy and the National Science Foundation in 2001 enabled the creation of the Basic Plasma Science Facility and the operation of the LAPD as part of this national user facility. Gekelman was director of the facility until 2016, when Troy Carter became BaPSF director.

Machine overview
The LAPD is a linear pulsed-discharge device operated at a high (1 Hz) repetition rate, producing a strongly magnetized background plasma which is physically large enough to support Alfvén waves. Plasma is produced from a barium oxide (BaO) cathode-anode discharge at one end of a 20-meter long, 1 meter diameter cylindrical vacuum vessel. The resulting plasma column is roughly 16.5 meters long and 60 cm in diameter. The background magnetic field, produced by a series of large electromagnets surrounding the chamber, can be varied from 400 gau
https://en.wikipedia.org/wiki/HP%20Time-Shared%20BASIC
HP Time-Shared BASIC (HP TSB) is a BASIC programming language interpreter for Hewlett-Packard's HP 2000 line of minicomputer-based time-sharing computer systems. TSB is historically notable as the platform that released the first public versions of the game Star Trek. The system implements a dialect of BASIC as well as a rudimentary user account and program library system that allows multiple people to use the system at once. The systems were a major force in the early-to-mid 1970s and generated a large number of programs. HP maintained a database of contributed programs, and customers could order them on punched tape for a nominal fee. Most BASICs of the 1970s trace their history to the original Dartmouth BASIC of the 1960s, but early versions of Dartmouth did not handle string variables or offer string manipulation features. Vendors added their own solutions; HP used a system similar to Fortran and other languages with array slicing, while DEC later introduced the MID/LEFT/RIGHT functions. As microcomputers began to enter the market in the mid-1970s, many new BASICs appeared that based their parsers on DEC's or HP's syntax. Altair BASIC, the original version of what became Microsoft BASIC, was patterned on DEC's BASIC-PLUS. Others, including Apple's Integer BASIC, Atari BASIC and North Star BASIC, were patterned on the HP style. This made conversions between these platforms somewhat difficult if string handling was encountered.

Nomenclature
The software was also known by its versioned name, tied to the hardware version on which it ran, such as HP 2000C Time-Shared BASIC, and the operating system came in different varieties: 2000A, 2000B, 2000C, High-Speed 2000C, 2000E, and 2000F. HP also referred to the language as "Access BASIC" in some publications. This matched the naming of the machines on which it ran, known as the "2000/Access" in some publications. This terminology appears to have been used only briefly when the platform was first launched.

Platform details
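As an aside on the string-handling split described above, the following Python sketch mimics the two conventions; the 1-based, inclusive indexing assumed for the HP-style slice reflects how that dialect family is usually described, and the helper names are illustrative rather than taken from either BASIC:

s = "HELLO WORLD"

# HP style: a slice like A$(2,5) denotes characters 2 through 5 of A$.
def hp_slice(a: str, start: int, end: int) -> str:
    return a[start - 1:end]

# DEC style: MID$(A$, 2, 4) denotes 4 characters starting at position 2.
def mid(a: str, start: int, count: int) -> str:
    return a[start - 1:start - 1 + count]

print(hp_slice(s, 2, 5))  # "ELLO"
print(mid(s, 2, 4))       # "ELLO" - same substring, expressed two different ways,
                          # which is why mechanical conversion was awkward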
https://en.wikipedia.org/wiki/Capoid%20race
Capoid race is a grouping formerly used for the Khoikhoi and San peoples in the context of a now-outdated model of dividing humanity into different races. The term was introduced by Carleton S. Coon in 1962 and named for the Cape of Good Hope. Coon proposed that the term "Negroid" should be abandoned, and the sub-Saharan African populations of West African stock (including the Bantu) should be termed "Congoid" instead. The observation of a significant difference between the Khoisan and the populations of West African stock was not original to Coon. It had been noted as early as 1684 by François Bernier, the early modern author who originally introduced the French word race to refer to the large divisions of mankind. Bernier, outside of five large divisions described in more detail, proposed the possible addition of more categories, primarily for "the Blacks of the Cape of Good Hope" (les Noirs du Cap de bonne Esperance), which seemed to him to be of significantly different build from most other populations below the Sahara.
https://en.wikipedia.org/wiki/GlobeXplorer
GlobeXplorer was an online spatial data company that compiled and distributed aerial photos, satellite imagery, and map data from their online spatial archives. GlobeXplorer has been credited as the first company to establish a business around compiling and distributing online aerial and satellite imagery. In 2007, the company was acquired by DigitalGlobe. GlobeXplorer's imagery and property data was licensed to many online information websites. GlobeXplorer obtained its content through online distribution relationships with about 30 of the world's top acquirers of aerial, satellite, and property data. GlobeXplorer's primary products were the ImageAtlas ecommerce storefront and the ImageBuilder web developer toolkit. It also provided ImageConnect extensions and web services for GIS and computer-aided design. GlobeXplorer's defensible core competence was its ability to meter custom profiles of content for consumers and pay royalties to providers based on 512×512 "standard image units" (SIUs). This was accomplished through content hosting, delivery via APIs and application plugins, and a custom billing system modeled on telco call rating and inter-bank settlement accounting. Beyond imagery, the billing system also supported metering of vector data and usage of floating license tokens for programs such as Arc/INFO, PCI and ERDAS.

History
GlobeXplorer was founded in 1999 by Rob Shanks, Michael Fisher, Chris Nicholas, and Paul Smith (former executives at HJW GeoSpatial, Inc.) through partnerships with Sun Microsystems, NASA, and Oracle Corporation. It grew from a NASA EOCAP project of HJW's to place ready-made orthophotography online, and an internal project of Sun Microsystems and Oracle to counter the Microsoft "Terraserver" technology demonstration. The EOCAP project concluded with the creation of an 'earth imagery' searchable website in 1998. HJW was acquired by Harrods of London, who provided financial backing to spin out the online effort into GlobeXplor
https://en.wikipedia.org/wiki/Stephen%20Oppenheimer
Stephen Oppenheimer (born 1947) is a British paediatrician, geneticist, and writer. He is a graduate of Balliol College, Oxford and an honorary fellow of the Liverpool School of Tropical Medicine. In addition to his work in medicine and tropical diseases, he has published popular works in the fields of genetics and human prehistory. This latter work has been the subject of a number of television and film projects.

Career
Oppenheimer trained in medicine at Oxford and London universities, qualifying in 1971. From 1972 he worked as a clinical paediatrician, mainly in Malaysia, Nepal and Papua New Guinea. He carried out and published clinical research in the areas of nutrition, infectious disease (including malaria), and genetics, focussing on the interactions between nutrition, genetics and infection, in particular iron nutrition, thalassaemia and malaria. From 1979 he moved into medical research and teaching, with positions at the Liverpool School of Tropical Medicine, Oxford University, a research centre in Kilifi, Kenya, and the Universiti Sains Malaysia in Penang. He spent three years undertaking fieldwork in Papua New Guinea, studying the effects of iron supplementation on susceptibility to infection. His fieldwork, published in the late 1980s, identified the role of genetic mutation in malarious areas as a result of natural selection due to its protective effect against malaria, and showed that different genotypes for alpha-thalassaemia traced different migrations out to the Pacific. Following that work, he concentrated on researching the use of unique genetic mutations as markers of ancient migrations. From 1990 to 1994 Oppenheimer served as chairman and chief of clinical service in the Department of Paediatrics at the Chinese University of Hong Kong. He worked as senior specialist paediatrician in Brunei from 1994 to 1996. He returned to England in 1997, writing the book Eden in the East: the drowned continent of Southeast Asia, published in 1998. The bo
https://en.wikipedia.org/wiki/Apache%20Axis
Apache Axis (Apache eXtensible Interaction System) is an open-source, XML-based Web service framework. It consists of a Java and a C++ implementation of the SOAP server, and various utilities and APIs for generating and deploying Web service applications. Using Apache Axis, developers can create interoperable, distributed computing applications. Axis development takes place under the auspices of the Apache Software Foundation.

Axis for Java
When using the Java version of Axis, there are two ways to expose Java code as a Web service. The easiest is to use Axis's native JWS (Java Web Service) files. The other is to use custom deployment, which enables you to customize the resources that should be exposed as Web services. See also Apache Axis2.

JWS Web service creation
JWS files contain Java class source code that should be exposed as a Web service. The main difference between an ordinary java file and a jws file is the file extension. Another difference is that jws files are deployed as source code and not compiled class files. The following example will expose the methods add and subtract of the class Calculator.

public class Calculator {
    // Each public method becomes a Web service operation once the file is deployed.
    public int add(int i1, int i2) {
        return i1 + i2;
    }

    public int subtract(int i1, int i2) {
        return i1 - i2;
    }
}

JWS Web service deployment
Once the Axis servlet is deployed, you need only copy the jws file to the Axis directory on the server. This will work if you are using an Apache Tomcat container. If you are using another web container, custom WAR archive creation will be required.

JWS Web service access
The JWS Web service is accessible using the URL http://localhost:8080/axis/Calculator.jws. If you are running a custom configuration of Apache Tomcat or a different container, the URL might be different.

Custom deployed Web service
Custom Web service deployment requires a specific deployment descriptor syntax called WSDD (Web Service Deployment Descriptor). It can be used to sp
https://en.wikipedia.org/wiki/Solid%20compression
In computing, solid compression is a method for data compression of multiple files, wherein all the uncompressed files are concatenated and treated as a single data block. Such an archive is called a solid archive. It is used natively in the 7z and RAR formats, as well as indirectly in tar-based formats such as .tar.gz and .tar.bz2. By contrast, the ZIP format is not solid because it stores separately compressed files (though solid compression can be emulated for small archives by combining the files into an uncompressed archive file and then compressing that archive file inside a second compressed ZIP file).

Explanation
Compressed file formats often feature both compression (storing the data in a small space) and archiving (storing multiple files and metadata in a single file). One can combine these in two natural ways: compress the individual files and then archive into a single file; or archive into a single data block and then compress. The order matters (these operations do not commute), and the latter is solid compression. In Unix, compression and archiving are traditionally separate operations, which makes the distinction easy to see:
Compressing individual files and then archiving them yields a tar archive of gzip-compressed files; this is very uncommon.
Archiving various uncompressed files via tar and then compressing yields a compressed archive, a .tar.gz file; this is solid compression.

A rough graphical representation
In this example, three files each have a common part with the same information, a unique part with information not in the other files, and an "air" part with low-entropy and accordingly well-compressible information. [Diagram: original files A, B and C shown packed into a non-solid archive versus a solid archive.]

Rationale

Benefits
Solid compression allows for much better compression rates when all the files are similar, which is often the case if they are of the same file format. It can also be efficient when archiving a large number of small files.
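The benefit for many similar files can be demonstrated with Python's standard library, emulating a solid .tar.gz and a non-solid .zip in memory (the file names and contents are made up for the demonstration):

import gzip
import io
import tarfile
import zipfile

# Many small files sharing a common part: solid compression (tar, then gzip)
# lets the compressor exploit redundancy across files, while ZIP compresses
# each file separately.
files = {f"file{i}.txt": (b"common header " * 50 + bytes([65 + i]) * 10)
         for i in range(20)}

# Solid: archive first (tar), then compress the whole block (gzip).
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
solid_size = len(gzip.compress(tar_buf.getvalue()))

# Non-solid: compress each file separately inside the archive (ZIP).
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, data in files.items():
        zf.writestr(name, data)
zip_size = len(zip_buf.getvalue())

print(f"solid .tar.gz: {solid_size} bytes, non-solid .zip: {zip_size} bytes")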
https://en.wikipedia.org/wiki/Au%20jus
Au jus is a French culinary term meaning "with juice". It refers to meat dishes prepared or served together with a light broth or gravy made from the fluids secreted by the meat as it is cooked. In French cuisine, cooking au jus is a natural way to enhance the flavour of dishes, mainly chicken, veal, and lamb. In American cuisine, the term is sometimes used to refer to a light sauce for beef recipes, which may be served with the food or placed on the side for dipping.

Ingredients and preparation
To prepare a natural jus, the cook may skim off the fat from the juices left after cooking and bring the remaining meat stock and water to a boil. Jus can be frozen for six months or longer, but the flavour may suffer after this time. Au jus recipes in the United States often use soy sauce, Worcestershire sauce, salt, pepper, white or brown sugar, garlic, beets, carrots, onions, or other ingredients to make something more like a gravy. The American jus is sometimes prepared separately, rather than being produced naturally by the food being cooked. An example could be a beef jus made by reducing beef stock to a concentrated form (also known as glace de viande) to accompany a meat dish. It is typically served with the French dip sandwich. Jus can also be made by extracting the juice from the original meat and combining it with another liquid, e.g. red wine (thus forming a red wine jus). A powdered product described as jus is also sold, and is rubbed into the meat before cooking or added afterwards. Powdered forms generally use a combination of salt, dried onion, and sometimes sugar as primary flavoring agents.

Use as noun
In the United States, the phrase au jus is often used as a noun, having been adapted in culinary references into the noun form: rather than a "sandwich au jus", the menu may read "sandwich with au jus".

See also
List of dips
Gravy, essentially a thickened jus
https://en.wikipedia.org/wiki/Central%20force
In classical mechanics, a central force on an object is a force that is directed towards or away from a point called the center of force: $\mathbf{F} = F(\lVert\mathbf{r}\rVert)\,\hat{\mathbf{r}}$, where $\mathbf{F}$ is the (vector-valued) force, $F$ is a scalar-valued force function, $\mathbf{r}$ is the position vector, $\lVert\mathbf{r}\rVert$ is its length, and $\hat{\mathbf{r}} = \mathbf{r}/\lVert\mathbf{r}\rVert$ is the corresponding unit vector. Not all central force fields are conservative or spherically symmetric. However, a central force is conservative if and only if it is spherically symmetric or rotationally invariant. Properties Central forces that are conservative can always be expressed as the negative gradient of a potential energy: $\mathbf{F}(\mathbf{r}) = -\nabla V(\mathbf{r})$, where $V(r) = \int_r^{+\infty} F(r')\,\mathrm{d}r'$ (the upper bound of integration is arbitrary, as the potential is defined up to an additive constant). In a conservative field, the total mechanical energy (kinetic and potential) is conserved: $E = \tfrac{1}{2}m\dot{r}^2 + \tfrac{1}{2}I\omega^2 + V(r) = \text{constant}$ (where $\dot{r}$ denotes the derivative of $r$ with respect to time, that is the velocity, $I$ denotes the moment of inertia of that body and $\omega$ denotes its angular velocity), and in a central force field, so is the angular momentum: $\mathbf{L} = \mathbf{r} \times m\mathbf{v} = \text{constant}$, because the torque exerted by the force is zero. As a consequence, the body moves on the plane perpendicular to the angular momentum vector and containing the origin, and obeys Kepler's second law. (If the angular momentum is zero, the body moves along the line joining it with the origin.) It can also be shown that an object that moves under the influence of any central force obeys Kepler's second law. However, the first and third laws depend on the inverse-square nature of Newton's law of universal gravitation and do not hold in general for other central forces. As a consequence of being conservative, these specific central force fields are irrotational, that is, their curl is zero, except at the origin: $\nabla \times \mathbf{F} = 0$. Examples Gravitational force and Coulomb force are two familiar examples, with $F(r)$ being proportional to $1/r^2$ only. An object in such a force field with negative $F$ (corresponding to an attractive force) obeys Kepler's laws of planetary motion. The f
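As a rough numerical check of the conservation claims above, the Python sketch below (my own illustration, with unit mass and force constant) integrates motion under an attractive inverse-square central force and verifies that the angular momentum r × mv stays essentially constant:

```python
# Integrate motion under F(r) = -k/r^2 (a central force) with a
# semi-implicit Euler step and check conservation of angular momentum.
import numpy as np

m, k = 1.0, 1.0            # mass and force constant (illustrative values)
r = np.array([1.0, 0.0])   # initial position
v = np.array([0.0, 1.2])   # initial velocity
dt = 1e-4

def accel(r):
    d = np.linalg.norm(r)
    return -k * r / d**3   # directed along -r_hat with magnitude k/d^2

def ang_mom(r, v):
    return m * (r[0] * v[1] - r[1] * v[0])  # z-component of r x mv

L0 = ang_mom(r, v)
for _ in range(100_000):
    v = v + accel(r) * dt
    r = r + v * dt

print(f"L before: {L0:.6f}, L after: {ang_mom(r, v):.6f}")  # nearly equal
```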
https://en.wikipedia.org/wiki/Character%20displacement
Character displacement is the phenomenon where differences among similar species whose distributions overlap geographically are accentuated in regions where the species co-occur, but are minimized or lost where the species' distributions do not overlap. This pattern results from evolutionary change driven by biological competition among species for a limited resource (e.g. food). The rationale for character displacement stems from the competitive exclusion principle, also called Gause's Law, which contends that to coexist in a stable environment two competing species must differ in their respective ecological niche; without differentiation, one species will eliminate or exclude the other through competition. Character displacement was first explicitly explained by William L. Brown Jr. and E. O. Wilson in 1956: "Two closely related species have overlapping ranges. In the parts of the ranges where one species occurs alone, the populations of that species are similar to the other species and may even be very difficult to distinguish from it. In the area of overlap, where the two species occur together, the populations are more divergent and easily distinguished, i.e., they 'displace' one another in one or more characters. The characters involved can be morphological, ecological, behavioral, or physiological; they are assumed to be genetically based." Brown and Wilson used the term character displacement to refer to instances of both reproductive character displacement, or reinforcement of reproductive barriers, and ecological character displacement driven by competition. As the term character displacement is commonly used, it generally refers to morphological differences due to competition. Brown and Wilson viewed character displacement as a phenomenon involved in speciation, stating, "we believe that it is a common aspect of geographical speciation, arising most often as a product of the genetic and ecological interaction of two (or more) newly evolved, cognate spec
https://en.wikipedia.org/wiki/TRON%20%28encoding%29
TRON Code is a multi-byte character encoding used in the TRON project. It is similar to Unicode but does not use Unicode's Han unification process: each character from each CJK character set is encoded separately, including archaic and historical equivalents of modern characters. This means that Chinese, Japanese, and Korean text can be mixed without any ambiguity as to the exact form of the characters; however, it also means that many characters with equivalent semantics will be encoded more than once, complicating some operations. TRON has room for 150 million code points. Separate code points for Chinese, Korean, and Japanese variants of the 70,000+ Han characters in Unicode 4.1 (if that were deemed necessary) would require more than 200,000 code points in TRON. TRON includes the non-Han characters from Unicode 2.0, but it has not been keeping up to date with recent additions to Unicode as Unicode expands beyond the Basic Multilingual Plane and adds characters to existing scripts. The TRON encoding has, however, been updated to include other recent code page updates like JIS X 0213. Fonts for the TRON encoding are available, but they have restrictions for commercial use. Structure Each character in TRON Code is two bytes. Similarly to ISO/IEC 2022, the TRON character encoding handles characters in multiple character sets within a single character encoding by using escape sequences, referred to as language specifier codes, to switch between planes of 48,400 code points. Character sets incorporated into TRON Code include existing character sets such as JIS X 0208 and GB 2312, as well as other character sources such as the Dai Kan-Wa Jiten, and some scripts not included in other encodings such as Dongba symbols. Owing to the incorporation of entire character sets into TRON Code, many characters with equivalent semantics are encoded multiple times; for example, all of the kanji characters in the GT Typeface receive their own codepoints, despite many of them overlapping wit
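The plane-switching idea can be pictured with a toy decoder. The sketch below is purely schematic: the escape byte value, the plane numbering, and the byte layout are hypothetical placeholders standing in for TRON's actual language specifier codes, which are defined in the TRON specification.

```python
# Schematic decoder for a TRON-style stateful multi-plane encoding.
# LANG_ESC and the plane numbering here are HYPOTHETICAL placeholders;
# consult the TRON specification for the real language specifier codes.
LANG_ESC = 0xFE  # hypothetical "language specifier" lead byte

def decode(data: bytes):
    """Yield (plane, codepoint) pairs from a stream of 2-byte characters
    interleaved with plane-switching escape sequences."""
    plane = 1          # assume some default plane
    i = 0
    while i < len(data):
        if data[i] == LANG_ESC:   # escape: the next byte selects a plane
            plane = data[i + 1]
            i += 2
        else:                     # otherwise: one 2-byte character
            cp = (data[i] << 8) | data[i + 1]
            yield (plane, cp)
            i += 2

# Example: two characters in plane 1, then a switch to plane 2.
stream = bytes([0xFE, 0x01, 0x30, 0x21, 0x30, 0x22, 0xFE, 0x02, 0x21, 0x21])
print(list(decode(stream)))
# [(1, 0x3021), (1, 0x3022), (2, 0x2121)]
```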
https://en.wikipedia.org/wiki/Interactive%20whiteboard
An interactive whiteboard (IWB), also known as interactive board or smart board, is a large interactive display board in the form factor of a whiteboard. It can either be a standalone touchscreen computer used independently to perform tasks and operations, or a connectable apparatus used as a touchpad to control computers from a projector. They are used in a variety of settings, including classrooms at all levels of education, in corporate board rooms and work groups, in training rooms for professional sports coaching, in broadcasting studios, and others. The first interactive whiteboards were designed and manufactured for use in the office. They were developed by PARC around 1990. This board was used in small group meetings and round-tables. The interactive whiteboard industry was expected to reach sales of US$1 billion worldwide by 2008; one of every seven classrooms in the world was expected to feature an interactive whiteboard by 2011 according to market research by Futuresource Consulting. In 2004, 26% of British primary classrooms had interactive whiteboards. The Becta Harnessing Technology Schools Survey 2007 indicated that 98% of secondary and 100% of primary schools had IWBs. By 2008, the average number of interactive whiteboards rose in both primary schools (18 compared with just over six in 2005, and eight in the 2007 survey) and secondary schools (38, compared with 18 in 2005 and 22 in 2007). General operation and use An interactive whiteboard (IWB) device can either be a standalone computer or a large, functioning touchpad for computers to use. A device driver is usually installed on the attached computer so that the interactive whiteboard can act as a Human Input Device (HID), like a mouse. The computer's video output is connected to a digital projector so that images may be projected on the interactive whiteboard surface, although interactive whiteboards with LCD displays also exist. The user then calibrates the whiteboard image by matching the
https://en.wikipedia.org/wiki/Simple%20function
In the mathematical field of real analysis, a simple function is a real (or complex)-valued function over a subset of the real line, similar to a step function. Simple functions are sufficiently "nice" that using them makes mathematical reasoning, theory, and proof easier. For example, simple functions attain only a finite number of values. Some authors also require simple functions to be measurable; as used in practice, they invariably are. A basic example of a simple function is the floor function over the half-open interval [1, 9), whose only values are {1, 2, 3, 4, 5, 6, 7, 8}. A more advanced example is the Dirichlet function over the real line, which takes the value 1 if x is rational and 0 otherwise. (Thus the "simple" of "simple function" has a technical meaning somewhat at odds with common language.) All step functions are simple. Simple functions are used as a first stage in the development of theories of integration, such as the Lebesgue integral, because it is easy to define integration for a simple function and also it is straightforward to approximate more general functions by sequences of simple functions. Definition Formally, a simple function is a finite linear combination of indicator functions of measurable sets. More precisely, let (X, Σ) be a measurable space. Let A1, ..., An ∈ Σ be a sequence of disjoint measurable sets, and let a1, ..., an be a sequence of real or complex numbers. A simple function is a function of the form $f(x) = \sum_{k=1}^{n} a_k \mathbf{1}_{A_k}(x)$, where $\mathbf{1}_A$ is the indicator function of the set $A$. Properties of simple functions The sum, difference, and product of two simple functions are again simple functions, and multiplication by a constant keeps a simple function simple; hence it follows that the collection of all simple functions on a given measurable space forms a commutative algebra over $\mathbb{C}$. Integration of simple functions If a measure μ is defined on the space (X, Σ), the integral of f with respect to μ is $\int_X f\,\mathrm{d}\mu = \sum_{k=1}^{n} a_k\,\mu(A_k)$, if all the summands are finite. Relation to Lebesgue i
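The role simple functions play in integration theory can be sketched numerically. The Python fragment below is a grid-based illustration (not a measure-theoretic implementation): it builds the standard increasing sequence of simple functions sₙ(x) = min(⌊2ⁿ f(x)⌋/2ⁿ, n), whose integrals climb toward the integral of f.

```python
# Approximate f(x) = x^2 on [0, 1] from below by simple functions and
# watch the (grid-approximated) integrals converge to 1/3.
import numpy as np

def simple_approx(f, n, xs):
    """The n-th standard simple approximation: floor(2^n f)/2^n, capped at n."""
    return np.minimum(np.floor(2.0**n * f(xs)) / 2.0**n, n)

xs = np.linspace(0.0, 1.0, 100_001)
f = lambda x: x**2

for n in (1, 2, 4, 8):
    s_n = simple_approx(f, n, xs)
    print(n, s_n.mean())   # mean over the uniform grid approximates the integral
```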
https://en.wikipedia.org/wiki/Action%20spectrum
An action spectrum is a graph of the rate of biological effectiveness plotted against wavelength of light. It is related to the absorption spectrum in many systems. Mathematically, it describes the inverse quantity of light required to evoke a constant response. It is very rare for an action spectrum to describe the level of biological activity, since biological responses are often nonlinear with intensity. Action spectra are typically written as unit-less responses with a peak response of one, and it is also important to distinguish whether an action spectrum refers to quanta at each wavelength (mol or log-photons) or to spectral power (W). It shows which wavelength of light is most effectively used in a specific chemical reaction. Some reactants are able to use specific wavelengths of light more effectively to complete their reactions. For example, chlorophyll is much more efficient at using the red and blue regions than the green region of the light spectrum to carry out photosynthesis. Therefore, the action spectrum graph would show peaks at the wavelengths representing the colours red and blue. The first action spectrum was made by T. W. Engelmann, who split light into its components with a prism and then illuminated Cladophora placed in a suspension of aerobic bacteria. He found that the bacteria accumulated in the regions of blue and red light of the split spectrum. He thus discovered the effect of the different wavelengths of light on photosynthesis and plotted the first action spectrum of photosynthesis. Further examples include the suppression of melatonin by wavelength and a variety of hazard functions related to tissue damage from visible and near-visible light. See also Photosynthetically active radiation Photosynthesis Absorption spectrum Chlorophyll a
https://en.wikipedia.org/wiki/Butterfly%20net
A butterfly net (sometimes called an aerial insect net) is one of several kinds of nets used to collect insects. The entire bag of the net is generally constructed from a lightweight mesh to minimize damage to delicate butterfly wings. Other types of nets used in insect collecting include beat nets, aquatic nets, and sweep nets. Nets for catching different insects have different mesh sizes: aquatic nets usually have a bigger, more 'open' mesh, whereas catching small aquatic creatures usually requires an insect net, whose smaller mesh can capture more.
https://en.wikipedia.org/wiki/Amelogenin
Amelogenins are a group of protein isoforms produced by alternative splicing or proteolysis from the AMELX gene, on the X chromosome, and also the AMELY gene in males, on the Y chromosome. They are involved in amelogenesis, the development of enamel. Amelogenins are a type of extracellular matrix protein, which, together with ameloblastins, enamelins and tuftelins, direct the mineralization of enamel to form a highly organized matrix of rods, interrod crystal and proteins. Although the precise role of amelogenin(s) in regulating the mineralization process is unknown, it is known that amelogenins are abundant during amelogenesis. Developing human enamel contains about 70% protein, 90% of which are amelogenins. Function Amelogenins are believed to be involved in the organizing of enamel rods during tooth development. The latest research indicates that these proteins regulate the initiation and growth of hydroxyapatite crystals during the mineralization of enamel. In addition, amelogenins appear to aid in the development of cementum by directing cementoblasts to the tooth's root surface. Variants The amelogenin gene has been most widely studied in humans, where it is a single copy gene, located on the X and Y chromosomes at Xp22.1–Xp22.3 and Yp11.2. The amelogenin gene's location on sex chromosomes has implications for variability both between the X chromosome form (AMELX) and the Y chromosome form (AMELY), and between alleles of AMELY among different populations. This is because AMELY exists in the non-recombining region of chromosome Y, effectively isolating it from normal selection pressures. Other sources of amelogenin variation arise from the various isoforms of AMELX obtained from alternative splicing of mRNA transcripts. Specific roles for isoforms have yet to be established. Among other organisms, amelogenin is well conserved among eutherians, and has homologs in monotremes, reptiles and amphibians. Application in sex determination Differences between
https://en.wikipedia.org/wiki/Enamelin
Enamelin is an enamel matrix protein (EMP) that in humans is encoded by the ENAM gene. It is part of the non-amelogenins, which comprise 10% of the total enamel matrix proteins. It is one of the key proteins thought to be involved in amelogenesis (enamel development). The formation of enamel's intricate architecture is thought to be rigorously controlled in ameloblasts through interactions of various organic matrix protein molecules that include: enamelin, amelogenin, ameloblastin, tuftelin, dentine sialophosphoprotein, and a variety of enzymes. Enamelin is the largest protein (~168 kDa) in the enamel matrix of developing teeth and is the least abundant, comprising approximately 1–5% of total enamel matrix proteins. It is present predominantly at the growing enamel surface. Structure Enamelin is thought to be the oldest member of the enamel matrix protein (EMP) family, with animal studies showing remarkable conservation of the gene phylogenetically. All other EMPs, such as amelogenin, are derived from enamelin. EMPs belong to a larger family of proteins termed 'secretory calcium-binding phosphoproteins' (SCPP). Similar to other enamel matrix proteins, enamelin undergoes extensive post-translational modification (mainly phosphorylation) and, following secretion, processing by proteases. Enamelin has three putative phosphoserines (Ser54, Ser191, and Ser216 in humans) phosphorylated by a Golgi-associated secretory pathway kinase (FAM20C) based on their distinctive Ser-x-Glu (S-x-E) motifs. The major secretory product of the ENAM gene has 1103 amino acids (post-secretion), and has an acidic isoelectric point ranging from 4.5–6.5 (depending on the fragment). At the secretory stage, the enzyme matrix metalloproteinase-20 (MMP20) proteolytically cleaves the secreted enamelin protein immediately upon release into several smaller polypeptides, each having its own function. However, the whole protein (~168 kDa) and its largest derivative fragment (~89 kDa) are undetectable i
https://en.wikipedia.org/wiki/Cleavage%20%28crystal%29
Cleavage, in mineralogy and materials science, is the tendency of crystalline materials to split along definite crystallographic structural planes. These planes of relative weakness are a result of the regular locations of atoms and ions in the crystal, which create smooth repeating surfaces that are visible both in the microscope and to the naked eye. If bonds in certain directions are weaker than others, the crystal will tend to split along the weakly bonded planes. These flat breaks are termed "cleavage". The classic example of cleavage is mica, which cleaves in a single direction along the basal pinacoid, making the layers seem like pages in a book. In fact, mineralogists often refer to "books of mica". Diamond and graphite provide examples of cleavage. Each is composed solely of a single element, carbon. In diamond, each carbon atom is bonded to four others in a tetrahedral pattern with short covalent bonds. The planes of weakness (cleavage planes) in a diamond are in four directions, following the faces of the octahedron. In graphite, carbon atoms are contained in layers in a hexagonal pattern where the covalent bonds are shorter (and thus even stronger) than those of diamond. However, each layer is connected to the other with a longer and much weaker van der Waals bond. This gives graphite a single direction of cleavage, parallel to the basal pinacoid. So weak is this bond that it is broken with little force, giving graphite a slippery feel as layers shear apart. As a result, graphite makes an excellent dry lubricant. While all single crystals will show some tendency to split along atomic planes in their crystal structure, if the differences between one direction or another are not large enough, the mineral will not display cleavage. Corundum, for example, displays no cleavage. Types of cleavage Cleavage forms parallel to crystallographic planes: Basal, pinacoidal, or planar cleavage occurs when there is only one cleavage plane. Talc has basal cleavage.
https://en.wikipedia.org/wiki/Usenet%20II
Usenet II was a proposed alternative to the classic Usenet hierarchy, started in 1998. Unlike the original Usenet, it was peered only between "sound sites" and employed a system of rules to keep out spam. Usenet II was backed by influential Usenetters like Russ Allbery. Sometime between 2010 and 2011, the web page for Usenet II went offline. The newsgroup hierarchy in Usenet II revived the old naming system used by Usenet before the Great Renaming. All groups had names starting "net.", which served to distinguish them from the "Big 8" (misc.*, sci.*, news.*, rec.*, soc.*, talk.*, comp.*, humanities.*). A separate checkgroup system, using the same technical mechanism as the one produced by David C. Lawrence for the Big 8, enforced the Usenet II hierarchy and prevented the creation of unauthorized newsgroups within it. The basic principles of operation were controlled by a Steering Committee, which appointed "hierarchy czars" who were responsible for the content of specific portions of the namespace, or hierarchies. Usenet II had strictly enforced rules. Messages in Usenet II had to be fully compliant with the RFC 1036 (Usenet) standard plus some additional format compliance rules that were specific to Usenet II. A message header had to contain a valid email address in the From field. It was required to have an NNTP-Posting-Host header field containing a sound site. The distribution field was to be set to "4gh" (a reference to The Shockwave Rider by John Brunner). If the Subject field started with "Re:", indicating a follow-up, there had to be a valid References field that contained the Message-ID of a previous message. Crossposts to groups outside the net.* hierarchy were cancelled automatically. No message was allowed to spawn a discussion in more than three newsgroups; this applied both to the Newsgroups field and the Followup-To field. It was permissible to post the same message three times. Posting the same message every day or every we
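Several of these rules are mechanically checkable. The sketch below is my own illustration of such a validator, not Usenet II's actual enforcement software; the header names follow the rules listed above, and the email-address pattern is deliberately simplified.

```python
# A rough sketch of validating a message header against the rules above.
import re

def check_usenet2_headers(h: dict[str, str]) -> list[str]:
    """Return a list of rule violations for a parsed header dict."""
    errors = []
    # The From field had to contain a valid email address (pattern simplified).
    if not re.search(r"[^@\s]+@[^@\s]+\.[^@\s]+", h.get("From", "")):
        errors.append("From: no valid email address")
    # An NNTP-Posting-Host header naming a sound site was required.
    if "NNTP-Posting-Host" not in h:
        errors.append("missing NNTP-Posting-Host")
    # The distribution had to be set to "4gh".
    if h.get("Distribution") != "4gh":
        errors.append('Distribution must be "4gh"')
    # A follow-up ("Re:" subject) needed a References field with a Message-ID.
    if h.get("Subject", "").startswith("Re:") and "References" not in h:
        errors.append("follow-up without References")
    # Discussions could span at most three newsgroups.
    for field in ("Newsgroups", "Followup-To"):
        groups = [g for g in h.get(field, "").split(",") if g.strip()]
        if len(groups) > 3:
            errors.append(f"{field}: more than three newsgroups")
    return errors

print(check_usenet2_headers({"From": "user@example.net",
                             "NNTP-Posting-Host": "news.example.net",
                             "Distribution": "4gh",
                             "Subject": "Re: hello",
                             "Newsgroups": "net.test"}))
# ['follow-up without References']
```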
https://en.wikipedia.org/wiki/Spatiotemporal%20gene%20expression
Spatiotemporal gene expression is the activation of genes within specific tissues of an organism at specific times during development. Gene activation patterns vary widely in complexity. Some are straightforward and static, such as the pattern of tubulin, which is expressed in all cells at all times in life. Some, on the other hand, are extraordinarily intricate and difficult to predict and model, with expression fluctuating wildly from minute to minute or from cell to cell. Spatiotemporal variation plays a key role in generating the diversity of cell types found in developed organisms; since the identity of a cell is specified by the collection of genes actively expressed within that cell, if gene expression were uniform spatially and temporally, there could be at most one kind of cell. Consider the gene wingless, a member of the wnt family of genes. In the early embryonic development of the model organism Drosophila melanogaster, or fruit fly, wingless is expressed across almost the entire embryo in alternating stripes separated by three cells. This pattern is lost by the time the organism develops into a larva, but wingless is still expressed in a variety of tissues such as the wing imaginal discs, patches of tissue that will develop into the adult wings. The spatiotemporal pattern of wingless gene expression is determined by a network of regulatory interactions consisting of the effects of many different genes such as even-skipped and Krüppel. What causes spatial and temporal differences in the expression of a single gene? Because current expression patterns depend strictly on previous expression patterns, there is a regress problem of explaining what caused the first differences in gene expression. The process by which uniform gene expression becomes spatially and temporally differential is known as symmetry breaking. For example, in the case of embryonic Drosophila development, the genes nanos and bicoid are asymmetrically expressed in the oocyte because mate
https://en.wikipedia.org/wiki/Alien%20Planet
Alien Planet is a 2005 docufiction TV special created for the Discovery Channel. Based on the 1990 book Expedition by the artist and writer Wayne Barlowe, Alien Planet explores the imagined extraterrestrial life of the fictional planet Darwin IV in the style of a nature documentary. Although closely following Barlowe's depiction of Darwin IV, Alien Planet features a team of scientists and science fiction figures discussing Darwin IV as if it had actually been discovered. Among the people featured are Michio Kaku, Stephen Hawking, Jack Horner, James B. Garvin and George Lucas. Alien Planet garnered positive reviews as a thought-provoking programme, though some criticism was raised concerning the strange creatures featured, which some reviewers saw as bordering on implausible. Plot Alien Planet starts out with an interstellar spacecraft named Von Braun leaving Earth's orbit. Traveling at 20% of the speed of light (37,000 miles/s), it reaches Darwin IV, a planet 6.5 light-years away, in 42 years. Upon reaching orbit, it deploys the Darwin Reconnaissance Orbiter, which looks for potential landing sites for the probes. The first probe, Balboa, explodes along with its lifting-body transport during entry, because one of its wings failed to unfold. Two backup probes, Leonardo da Vinci (nicknamed Leo) and Isaac Newton (nicknamed Ike), successfully land on the planet and learn much about its bizarre indigenous lifeforms, including an apparently sapient species. The robotic probes sent out to do research on Darwin IV are called Horus Probes. Each Horus probe consists of a tall, long, inflatable, hydrogen-filled balloon, which is covered with solar receptors, a computer 'brain', a 'head' covered with sensors, and several smaller robots that can be sent to places too dangerous for the probes themselves. The probes have a limited degree of artificial intelligence, very similar to the 'processing power' of a 4-year-old. All the real thinking is done by a supercomputer in the orb
https://en.wikipedia.org/wiki/Ernst%20Mally
Ernst Mally (11 October 1879 – 8 March 1944) was an Austrian analytic philosopher, initially affiliated with Alexius Meinong's Graz School of object theory. Mally was one of the founders of deontic logic and is mainly known for his contributions in that field of research. In metaphysics, he is known for introducing a distinction between two kinds of predication, better known as the dual predication approach. Life Mally was born in the town of Kranj in the Duchy of Carniola, Austria-Hungary (now in Slovenia). His father was of Slovene origin, but identified himself with Austrian German culture (he also Germanized the orthography of his surname, originally spelled Mali, a common Slovene surname of Upper Carniola). After his father's death, the family moved to the Carniolan capital of Ljubljana. There, Ernst attended the prestigious Ljubljana German-language Gymnasium. Already at a young age, Mally became a fervent supporter of the Pan-German nationalist movement of Georg von Schönerer. At the same time, he developed an interest in philosophy. In 1898, he enrolled in the University of Graz, where he studied philosophy under the supervision of Alexius Meinong, as well as physics and mathematics, specializing in formal logic. He graduated in 1903 with a doctoral thesis entitled Untersuchungen zur Gegenstandstheorie des Messens (Investigations in the Object Theory of Measurement). In 1906 he started teaching at a high school in Graz, at the same time collaborating with Adalbert Meingast and working as Meinong's assistant at the university. He also maintained close contacts with the Graz Psychological Institute, founded by Meinong. In 1912, he wrote his habilitation thesis entitled Gegenstandstheoretische Grundlagen der Logik und Logistik (Object-theoretic Foundations for Logics and Logistics) at Graz with Meinong as supervisor. From 1915 to 1918 he served as an officer in the Austro-Hungarian Army. After the end of World War I, Mally joined the Greater German People'
https://en.wikipedia.org/wiki/Great%20American%20Interchange
The Great American Biotic Interchange (commonly abbreviated as GABI), also known as the Great American Interchange and the Great American Faunal Interchange, was an important late Cenozoic paleozoogeographic biotic interchange event in which land and freshwater fauna migrated from North America via Central America to South America and vice versa, as the volcanic Isthmus of Panama rose up from the sea floor and bridged the formerly separated continents. Although earlier dispersals had occurred, probably over water, the migration accelerated dramatically about 2.7 million years (Ma) ago during the Piacenzian age. It resulted in the definitive joining of the Neotropic (roughly South American) and Nearctic (roughly North American) biogeographic realms to form the Americas. The interchange is visible from observation of both biostratigraphy and nature (neontology). Its most dramatic effect is on the zoogeography of mammals, but it also gave an opportunity for reptiles, amphibians, arthropods, weak-flying or flightless birds, and even freshwater fish to migrate. Coastal and marine biota, however, was affected in the opposite manner; the formation of the Central American Isthmus caused what has been termed the Great American Schism, with significant diversification and extinction occurring as a result of the isolation of the Caribbean from the Pacific. The occurrence of the interchange was first discussed in 1876 by the "father of biogeography", Alfred Russel Wallace. Wallace had spent five years exploring and collecting specimens in the Amazon basin. Others who made significant contributions to understanding the event in the century that followed include Florentino Ameghino, W. D. Matthew, W. B. Scott, Bryan Patterson, George Gaylord Simpson and S. David Webb. The Pliocene timing of the formation of the connection between North and South America was discussed in 1910 by Henry Fairfield Osborn. Analogous interchanges occurred earlier in the Cenozoic, when the formerly
https://en.wikipedia.org/wiki/MHC%20class%20I
MHC class I molecules are one of two primary classes of major histocompatibility complex (MHC) molecules (the other being MHC class II) and are found on the cell surface of all nucleated cells in the bodies of vertebrates. They also occur on platelets, but not on red blood cells. Their function is to display peptide fragments of proteins from within the cell to cytotoxic T cells; this will trigger an immediate response from the immune system against a particular non-self antigen displayed with the help of an MHC class I protein. Because MHC class I molecules present peptides derived from cytosolic proteins, the pathway of MHC class I presentation is often called the cytosolic or endogenous pathway. In humans, the HLAs corresponding to MHC class I are HLA-A, HLA-B, and HLA-C. Function Class I MHC molecules bind peptides generated mainly from degradation of cytosolic proteins by the proteasome. The MHC I:peptide complex is then inserted via the endoplasmic reticulum into the external plasma membrane of the cell. The epitope peptide is bound on extracellular parts of the class I MHC molecule. Thus, the function of the class I MHC is to display intracellular proteins to cytotoxic T cells (CTLs). However, class I MHC can also present peptides generated from exogenous proteins, in a process known as cross-presentation. A normal cell will display peptides from normal cellular protein turnover on its class I MHC, and CTLs will not be activated in response to them due to central and peripheral tolerance mechanisms. When a cell expresses foreign proteins, such as after viral infection, a fraction of the class I MHC will display these peptides on the cell surface. Consequently, CTLs specific for the MHC:peptide complex will recognize and kill presenting cells. Alternatively, class I MHC itself can serve as an inhibitory ligand for natural killer cells (NKs). Reduction in the normal levels of surface class I MHC, a mechanism employed by some viruses and certain tumors to evade CTL
https://en.wikipedia.org/wiki/Behaviour%20therapy
Behaviour therapy or behavioural psychotherapy is a broad term referring to clinical psychotherapy that uses techniques derived from behaviourism and/or cognitive psychology. It looks at specific, learned behaviours and how the environment, or other people's mental states, influences those behaviours, and consists of techniques based on behaviourism's theory of learning: respondent or operant conditioning. Behaviourists who practice these techniques are either behaviour analysts or cognitive-behavioural therapists. They tend to look for treatment outcomes that are objectively measurable. Behaviour therapy does not involve one specific method, but it has a wide range of techniques that can be used to treat a person's psychological problems. Behavioural psychotherapy is sometimes juxtaposed with cognitive psychotherapy, while cognitive behavioural therapy integrates aspects of both approaches, such as cognitive restructuring, positive reinforcement, habituation (or desensitisation), counterconditioning, and modelling. Applied behaviour analysis (ABA) is the application of behaviour analysis that focuses on functionally assessing how behaviour is influenced by the observable learning environment and how to change such behaviour through contingency management or exposure therapies, which are used throughout clinical behaviour analysis therapies or other interventions based on the same learning principles. Cognitive-behavioural therapy views cognition and emotions as preceding overt behaviour and implements treatment plans in psychotherapy to lessen the issue by managing competing thoughts and emotions, often in conjunction with behavioural learning principles. A 2013 Cochrane review comparing behaviour therapies to psychological therapies found them to be equally effective, although at the time the evidence base that evaluates the benefits and harms of behaviour therapies was weak. History Precursors of certain fundamental aspects of behaviour therapy have been ide
https://en.wikipedia.org/wiki/Barrett%E2%80%93Crane%20model
The Barrett–Crane model is a model in quantum gravity, first published in 1998, which was defined using the Plebanski action. The field $B^{IJ}$ in the action is supposed to be a $\mathfrak{so}(3,1)$-valued 2-form, i.e. taking values in the Lie algebra of a special orthogonal group. The term $B^{IJ} \wedge F_{IJ}$ in the action has the same symmetries as needed to provide the Einstein–Hilbert action, but the form of $B^{IJ}$ is not unique and can be posed by different forms: $B^{IJ} = \pm\, e^I \wedge e^J$ or $B^{IJ} = \pm\tfrac{1}{2}\,\varepsilon^{IJ}{}_{KL}\, e^K \wedge e^L$, where $e^I$ is the tetrad and $\varepsilon^{IJKL}$ is the antisymmetric symbol of the $\mathfrak{so}(3,1)$-valued 2-form fields. The Plebanski action can be constrained to produce the BF model, which is a theory with no local degrees of freedom. John W. Barrett and Louis Crane modeled the analogous constraint on the summation over spin foam. The Barrett–Crane model on spin foam quantizes the Plebanski action, but its path integral amplitude corresponds to the degenerate $B$ field and not the specific definition $B^{IJ} = e^I \wedge e^J$, which formally satisfies Einstein's field equation of general relativity. However, if analysed with the tools of loop quantum gravity, the Barrett–Crane model gives an incorrect long-distance limit, and so the model is not identical to loop quantum gravity.
https://en.wikipedia.org/wiki/IBM%20OfficeVision
OfficeVision was an IBM proprietary office support application. History PROFS, DISOSS and Office/36 OfficeVision started as a product for the VM operating system named PROFS (for PRofessional OFfice System) and was initially made available in 1981. Before that it was just a PRPQ (Programming Request for Price Quotation), an IBM administrative term for non-standard software offerings with unique features, support and pricing. The first release of PROFS was developed by IBM in Poughkeepsie, NY, in conjunction with Amoco, from a prototype developed years earlier by Paul Gardner and others. Subsequent development took place in Dallas. The editor XEDIT was the basis of the word processing function in PROFS. PROFS itself had descended from OFS (Office System), also developed at the same laboratory and first installed in October 1974. OFS was a primitive solution for office automation created between 1970 and 1972 as a replacement for an in-house system. Compared to Poughkeepsie's original in-house system, the distinctive new features added by OFS were a centralised database virtual machine (data base manager or DBM) for shared permanent storage of documents, instead of storing all documents in users' personal virtual machines; and a centralised virtual machine (mailman master machine or distribution virtual machine) to manage mail transfer between individuals, instead of relying on direct communication between the personal virtual machines of individual users. By 1981, IBM's Poughkeepsie site had over 500 PROFS users. In 1983, IBM introduced release 2 of PROFS, along with auxiliary software to enable document interchange between PROFS, DISOSS, Displaywriter, IBM 8100 and IBM 5520 systems. PROFS and its e-mail component, known colloquially as PROFS Notes, featured prominently in the investigation of the Iran-Contra scandal. Oliver North believed he had deleted his correspondence, but the system archived it anyway. Congress subsequently examined the e-mail
https://en.wikipedia.org/wiki/Leber%27s%20hereditary%20optic%20neuropathy
Leber's hereditary optic neuropathy (LHON) is a mitochondrially inherited (transmitted from mother to offspring) degeneration of retinal ganglion cells (RGCs) and their axons that leads to an acute or subacute loss of central vision; it predominantly affects young adult males. LHON is transmitted only through the mother, as it is primarily due to mutations in the mitochondrial (not nuclear) genome, and only the egg contributes mitochondria to the embryo. Men cannot pass on the disease to their offspring. LHON is usually due to one of three pathogenic mitochondrial DNA (mtDNA) point mutations. These mutations are at nucleotide positions 11778 G to A, 3460 G to A and 14484 T to C, respectively, in the ND4, ND1 and ND6 subunit genes of complex I of the oxidative phosphorylation chain in mitochondria. Signs and symptoms Clinically, there is an acute onset of visual loss, first in one eye, and then a few weeks to months later in the other. Onset is usually in young adulthood, but ages at onset ranging from 7 to 75 have been reported. The age of onset is slightly higher in females (range 19–55 years: mean 31.3 years) than males (range 15–53 years: mean 24.3). The male-to-female ratio varies between mutations: 3:1 for 3460 G>A, 6:1 for 11778 G>A and 8:1 for 14484 T>C. This typically evolves to very severe optic atrophy and a permanent decrease of visual acuity. Both eyes become affected either simultaneously (25% of cases) or sequentially (75% of cases) with a median inter-eye delay of 8 weeks. Rarely, only one eye is affected. In the acute stage, lasting a few weeks, the affected eye demonstrates an oedematous appearance of the nerve fiber layer, especially in the arcuate bundles, and enlarged or telangiectatic and tortuous peripapillary vessels (microangiopathy). The main features are seen on fundus examination, just before or after the onset of visual loss. A pupillary defect may be visible in the acute stage as well. Examination reveals decreased visual acuity, loss of color vision a
https://en.wikipedia.org/wiki/Pharyngeal%20slit
Pharyngeal slits are filter-feeding organs found among deuterostomes. Pharyngeal slits are repeated openings that appear along the pharynx caudal to the mouth. With this position, they allow for the movement of water into the mouth and out through the pharyngeal slits. It is postulated that this is how pharyngeal slits first assisted in filter-feeding, and later, with the addition of gills along their walls, aided in respiration of aquatic chordates. These repeated segments are controlled by similar developmental mechanisms. Some hemichordate species can have as many as 200 gill slits. Pharyngeal clefts resembling gill slits are transiently present during the embryonic stages of tetrapod development. The presence of pharyngeal arches and clefts in the neck of the developing human embryo famously led Ernst Haeckel to postulate that "ontogeny recapitulates phylogeny"; this hypothesis, while false, contains elements of truth, as explored by Stephen Jay Gould in Ontogeny and Phylogeny. However, it is now accepted that it is the vertebrate pharyngeal pouches and not the neck slits that are homologous to the pharyngeal slits of invertebrate chordates. Pharyngeal arches, pouches, and clefts are, at some stage of life, found in all chordates. One theory of their origin is the fusion of nephridia which opened both to the outside and to the gut, creating openings between the gut and the environment. Pharyngeal arches in Vertebrates In vertebrates, the pharyngeal arches are derived from all three germ layers. Neural crest cells enter these arches where they contribute to craniofacial features such as bone and cartilage. However, the existence of pharyngeal structures before neural crest cells evolved is indicated by the existence of neural crest-independent mechanisms of pharyngeal arch development. The first, most anterior pharyngeal arch gives rise to the oral jaw. The second arch becomes the hyoid and jaw support. In fish, the other posterior arches contribute to the branchial ske
https://en.wikipedia.org/wiki/Plebanski%20action
General relativity and supergravity in all dimensions meet each other at a common assumption: any configuration space can be coordinatized by gauge fields $A^i_a$, where the index $i$ is a Lie algebra index and $a$ is a spatial manifold index. Using these assumptions one can construct an effective field theory at low energies for both. In this form the action of general relativity can be written in the form of the Plebanski action, which can be constructed using the Palatini action to derive Einstein's field equations of general relativity. The form of the action introduced by Plebanski is $S_{\text{Plebanski}} = \int_{\Sigma \times \mathbb{R}} \varepsilon_{IJKL}\, B^{IJ} \wedge F^{KL}(A) + \phi_{IJKL}\, B^{IJ} \wedge B^{KL}$, where $I, J, K, L$ are internal indices, $F^{IJ}$ is a curvature on the orthogonal group $SO(3,1)$, and the connection variables (the gauge fields) are denoted by $A^{IJ}$. The symbol $\phi_{IJKL}$ is the Lagrange multiplier and $\varepsilon_{IJKL}$ is the antisymmetric symbol valued over $SO(3,1)$. The specific definition $B^{IJ} = e^I \wedge e^J$ formally satisfies Einstein's field equation of general relativity. Application is to the Barrett–Crane model. See also Tetradic Palatini action Barrett–Crane model BF model
https://en.wikipedia.org/wiki/Cleavage%20%28embryo%29
In embryology, cleavage is the division of cells in the early development of the embryo, following fertilization. The zygotes of many species undergo rapid cell cycles with no significant overall growth, producing a cluster of cells the same size as the original zygote. The different cells derived from cleavage are called blastomeres and form a compact mass called the morula. Cleavage ends with the formation of the blastula, or of the blastocyst in mammals. Depending mostly on the concentration of yolk in the egg, the cleavage can be holoblastic (total or entire cleavage) or meroblastic (partial cleavage). The pole of the egg with the highest concentration of yolk is referred to as the vegetal pole while the opposite is referred to as the animal pole. Cleavage differs from other forms of cell division in that it increases the number of cells and nuclear mass without increasing the cytoplasmic mass. This means that with each successive subdivision, each daughter cell contains roughly half as much cytoplasm as the cell that divided, and thus the ratio of nuclear to cytoplasmic material increases. Mechanism The rapid cell cycles are facilitated by maintaining high levels of proteins that control cell cycle progression such as the cyclins and their associated cyclin-dependent kinases (CDKs). The complex cyclin B/CDK1, also known as MPF (maturation promoting factor), promotes entry into mitosis. The processes of karyokinesis (mitosis) and cytokinesis work together to result in cleavage. The mitotic apparatus is made up of a central spindle and polar asters made up of polymers of tubulin protein called microtubules. The asters are nucleated by centrosomes and the centrosomes are organized by centrioles brought into the egg by the sperm as basal bodies. Cytokinesis is mediated by the contractile ring made up of polymers of actin protein called microfilaments. Karyokinesis and cytokinesis are independent but spatially and temporally coordinated processes. While mitosis
https://en.wikipedia.org/wiki/Spermatogonium
A spermatogonium (plural: spermatogonia) is an undifferentiated male germ cell. Spermatogonia undergo spermatogenesis to form mature spermatozoa in the seminiferous tubules of the testis. There are three subtypes of spermatogonia in humans: Type A (dark) cells, with dark nuclei. These cells are reserve spermatogonial stem cells which do not usually undergo active mitosis. Type A (pale) cells, with pale nuclei. These are the spermatogonial stem cells that undergo active mitosis. These cells divide to produce Type B cells. Type B cells, which undergo growth and become primary spermatocytes. Anticancer drugs Anticancer drugs such as doxorubicin and vincristine can adversely affect male fertility by damaging the DNA of proliferative spermatogonial stem cells. Experimental exposure of rat undifferentiated spermatogonia to doxorubicin and vincristine indicated that these cells are able to respond to DNA damage by increasing their expression of DNA repair genes, and that this response likely partially prevents DNA break accumulation. In addition to a DNA repair response, exposure of spermatogonia to doxorubicin can also induce programmed cell death (apoptosis). Additional images See also List of distinct cell types in the adult human body
https://en.wikipedia.org/wiki/Tally%20light
In a television studio, a tally light (or on air indicator) is a small signal lamp on a professional video camera or monitor. It is usually located just above the lens or on the electronic viewfinder (EVF) and communicates, for the benefit of those in front of the camera as well as the camera operator, that the camera is “live,” i.e. its signal is being used for the “main program” at that moment. Many non-studio (i.e. intended for offline recording) video cameras—and even digital photo cameras capable of filming video—also feature some sort of recording indication. For television productions with more than one camera in a multiple-camera setup, the tally lights are generally illuminated automatically by a vision mixer trigger that is fed to a tally breakout board and then to a special video router designed for tally signals. The video switcher keeps track of which video sources are selected by the technical director and output to the main program bus. A switch automatically closes the appropriate electrical contacts to create a circuit, which activates the tally unit located in the camera control units (CCU). If more than one camera is on-air simultaneously (as in the case of a dissolve transition), the tally lights of both cameras will remain lit for the duration of the transition, until it completes. This is also the case when multiple cameras are placed in “boxes” on screen, sometimes referred to as “picture-in-picture” (PiP) mode. Colours & usage In the active (“on air”) mode, tally lights are typically red. Some cameras and video switchers are capable of additionally showing a “preview” tally signal (typically green) for when the camera is about to be switched to and become the main source of video signal. Once the switch happens, green changes to red. This feature allows the presenter to be aware of the upcoming transition, and, for example, change their posture. In addition to the tally lights, an additional light called ISO is sometimes
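The bus-to-lamp logic described above can be summarized in a few lines. The sketch below is an illustration of the described behaviour, not any real switcher's control protocol; the camera numbers and colour names are arbitrary.

```python
# Illustrative tally logic: cameras on the program bus get a red tally,
# cameras only on the preview bus get a green one, and during a dissolve
# both sources on the program side stay red until the transition completes.
def tally_states(program: set[int], preview: set[int]) -> dict[int, str]:
    """Map camera numbers to 'red' or 'green'."""
    states = {}
    for cam in program | preview:
        if cam in program:      # on air (including both cameras mid-mix)
            states[cam] = "red"
        else:                   # queued as the next source
            states[cam] = "green"
    return states

# Camera 1 is on air, camera 2 is previewed:
print(tally_states(program={1}, preview={2}))     # {1: 'red', 2: 'green'}
# Mid-dissolve from 1 to 2, both are on the program output:
print(tally_states(program={1, 2}, preview={2}))  # both red
```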
https://en.wikipedia.org/wiki/Descending%20aorta
In human anatomy, the descending aorta is part of the aorta, the largest artery in the body. The descending aorta begins at the aortic arch and runs down through the chest and abdomen. The descending aorta anatomically consists of two portions or segments, the thoracic and the abdominal aorta, in correspondence with the two great cavities of the trunk in which it is situated. Within the abdomen, the descending aorta branches into the two common iliac arteries which serve the pelvis and eventually legs. The ductus arteriosus connects to the junction between the pulmonary artery and the descending aorta in foetal life. This artery later regresses as the ligamentum arteriosum. See also Abbott artery
https://en.wikipedia.org/wiki/Azygos
Azygos (impar), from the Greek άζυξ, refers to an anatomical structure that is unpaired. This is relatively unusual, as most elements of anatomy reflect bilateral symmetry. Azygos may refer to: Azygos anterior cerebral artery Azygos artery of vagina Azygos lobe Azygos vein Ganglion impar
https://en.wikipedia.org/wiki/Sorus
A sorus (plural: sori) is a cluster of sporangia (structures producing and containing spores) in ferns and fungi. A coenosorus (plural: coenosori) is a compound sorus composed of multiple, fused sori. Etymology This Neo-Latin word is from Ancient Greek σωρός (sōrós 'stack, pile, heap'). Structure In lichens and other fungi, the sorus is surrounded by an external layer. In some red algae, it may take the form of a depression in the thallus. In ferns, the sori form a yellowish or brownish mass on the edge or underside of a fertile frond. In some species, they are protected during development by a scale or film of tissue called the indusium, which forms an umbrella-like cover. Life cycle significance Sori occur on the sporophyte generation, the sporangia within producing haploid meiospores. As the sporangia mature, the indusium shrivels so that spore release is unimpeded. The sporangia then burst and release the spores. As an aid to identification The shape, arrangement, and location of the sori are often valuable clues in the identification of fern taxa. Sori may be circular or linear. They may be arranged in rows, either parallel or oblique to the costa, or randomly. Their location may be marginal or set away from the margin on the frond lamina. The presence or absence of indusium is also used to identify fern taxa. See also Sorocarp
https://en.wikipedia.org/wiki/Chris%20Stringer
Christopher Brian Stringer is a British physical anthropologist noted for his work on human evolution. Biography Growing up in a working-class family in the East End of London, Stringer first took an interest in anthropology during primary school, when he undertook a project on Neanderthals. Stringer studied anthropology at University College London and holds a PhD and a DSc in Anatomical Science, both from Bristol University. Stringer joined the permanent staff of the Natural History Museum in 1973. He is currently Research Leader in Human Origins. Research Stringer is one of the leading proponents of the recent African origin hypothesis or ″Out of Africa″ theory, which hypothesizes that modern humans originated in Africa over 100,000 years ago and replaced, in some way, the world's archaic humans, such as Homo floresiensis and Neanderthals, after migrating within and then out of Africa to the non-African world within the last 50,000 to 100,000 years. He always considered that some interbreeding between the different groups could have occurred, but thought this would have been trivial in the big picture. However, recent genetic data show that the replacement process did include some interbreeding. In the last decade he has proposed a more complex version of events within Africa, which he has termed ″multiregional African origin″. He also directed the Ancient Human Occupation of Britain project which ran for about 10 years from 2001. This consortium reconstructed and studied the episodic pattern of human colonisation of Britain during the Pleistocene. He is co-director of the follow-up project "Pathways to Ancient Britain". Honours He is a Fellow of the Royal Society and Honorary Fellow of the Society of Antiquaries. He won the Frink Medal of the Zoological Society of London in 2008 and the Rivers Memorial Medal of the Royal Anthropological Institute in 2004. He was elected a Member of the American Philosophical Society in 2019. Stringer
https://en.wikipedia.org/wiki/Yaw%20drive
The yaw drive is an important component of the horizontal axis wind turbines' yaw system. To ensure the wind turbine is producing the maximal amount of electric energy at all times, the yaw drive is used to keep the rotor facing into the wind as the wind direction changes. This only applies for wind turbines with a horizontal axis rotor. The wind turbine is said to have a yaw error if the rotor is not aligned to the wind. A yaw error implies that a lower share of the energy in the wind will be running through the rotor area. (The generated energy will be approximately proportional to the cosine of the yaw error.) History When the windmills of the 18th century included the feature of rotor orientation via the rotation of the nacelle, an actuation mechanism able to provide that turning moment was necessary. Initially the windmills used ropes or chains extending from the nacelle to the ground in order to allow the rotation of the nacelle by means of human or animal power. Another historical innovation was the fantail. This device was actually an auxiliary rotor equipped with a plurality of blades and located downwind of the main rotor, behind the nacelle, in an approximately 90° orientation to the main rotor sweep plane. In the event of a change in wind direction, the fantail would rotate, thus transmitting its mechanical power through a gearbox (and via a gear-rim-to-pinion mesh) to the tower of the windmill. The effect of the aforementioned transmission was the rotation of the nacelle towards the direction of the wind, where the fantail would no longer face the wind and would thus stop turning (i.e. the nacelle would stop at its new position). The modern yaw drives, even though electronically controlled and equipped with large electric motors and planetary gearboxes, have great similarities to the old windmill concept. Types The main categories of yaw drives are: The Electric Yaw Drives: Commonly used in almost all modern turbines. The Hydraulic Yaw Drive: Hardly ever used anymore
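The cosine relation quoted above gives a quick feel for the cost of misalignment; the following small Python calculation (my own illustration of that stated approximation) tabulates the relative energy capture for a few yaw errors.

```python
# Relative power capture versus yaw error, using the approximation
# stated in the text: P proportional to cos(yaw error).
import math

def relative_power(yaw_error_deg: float) -> float:
    """Energy through the rotor relative to perfect alignment."""
    return math.cos(math.radians(yaw_error_deg))

for err in (0, 5, 10, 20, 30):
    print(f"{err:>2} deg yaw error -> {relative_power(err):.3f} of aligned power")
# 10 degrees of misalignment costs about 1.5%; 30 degrees costs about 13%.
```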
https://en.wikipedia.org/wiki/NABTS
NABTS, the North American Broadcast Teletext Specification, is a protocol used for encoding NAPLPS-encoded teletext pages, as well as other types of digital data, within the vertical blanking interval (VBI) of an analog video signal. It is standardized under standard EIA-516, and has a rate of 15.6 kbit/s per line of video (with error correction). It was adopted into the international standard CCIR 653 (now ITU-R BT.653) of 1986 as CCIR Teletext System C. History NABTS was originally developed as a protocol by the Canadian Department of Communications, with their industry partner Norpak, for the Telidon system. Similar systems had been developed by the BBC in Europe for their Ceefax system, and were later standardized as the World System Teletext (WST, aka CCIR Teletext System B), but differences in European and North American television standards and the greater flexibility of the Telidon standard led to the creation of a new delivery mechanism that was tuned for speed. NABTS was the standard used for both CBS's ExtraVision and NBC's very short-lived NBC Teletext services in the mid-1980s. The short-lived Time Teletext service, operated by the Time Video Information Services division of Time, Inc., and several experimental services launched by Boston's PBS station WGBH also used NABTS. Because teletext in general never really caught on in North America, NABTS later found a new use in the datacasting features of WebTV for Windows under Windows 98, as well as in the now-defunct Intercast system. Canadian company Norpak sold and manufactured encoders and decoders for NABTS until the end of analog broadcasting in North America in the early 2010s; it was acquired by Ross Video in 2010. NABTS is still used in legacy analog video systems for private closed-circuit data delivery over a television broadcast or video signal. Description In a normal NTSC video signal there are 525 "lines" of video signal. These are split into two half-images, known as "fields", s
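Using the 15.6 kbit/s-per-line figure quoted above, aggregate capacity scales with however many VBI lines a broadcaster dedicates to NABTS data; the short calculation below (my own arithmetic, with illustrative line counts) tabulates this.

```python
# Back-of-the-envelope NABTS throughput, from the per-line rate quoted
# in the text (15.6 kbit/s per video line, with error correction).
PER_LINE_KBPS = 15.6

for lines in (1, 2, 5, 10):   # illustrative numbers of VBI lines used
    print(f"{lines:>2} VBI line(s): {lines * PER_LINE_KBPS:.1f} kbit/s")
#  1 VBI line(s): 15.6 kbit/s
# 10 VBI line(s): 156.0 kbit/s
```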
https://en.wikipedia.org/wiki/Logical%20shift
In computer science, a logical shift is a bitwise operation that shifts all the bits of its operand. The two base variants are the logical left shift and the logical right shift. This is further modulated by the number of bit positions a given value shall be shifted, such as "shift left by 1" or "shift right by n". Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its significand (mantissa); every bit in the operand is simply moved a given number of bit positions, and the vacant bit positions are filled, usually with zeros, and possibly ones (contrast with a circular shift). A logical shift is often used when its operand is being treated as a sequence of bits instead of as a number. Logical shifts can be useful as efficient ways to perform multiplication or division of unsigned integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2^n. Shifting right by n bits on an unsigned binary number has the effect of dividing it by 2^n (rounding towards 0). Logical right shift differs from arithmetic right shift, and thus many languages have different operators for them. For example, in Java and JavaScript, the logical right shift operator is >>>, but the arithmetic right shift operator is >>. (Java has only one left shift operator, <<, because logical and arithmetic left shifts have the same effect.) The programming languages C, C++, and Go, however, have only one right shift operator, >>. Most C and C++ implementations, and Go, choose which right shift to perform depending on the type of integer being shifted: signed integers are shifted using the arithmetic shift, and unsigned integers are shifted using the logical shift. All currently relevant C standards (ISO/IEC 9899:1999 to 2011) leave a definition gap for cases where the number of shifts is equal to or bigger than the number of bits in the operands, so that the result is undefined. Th
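Python is a convenient place to see the distinction, because its arbitrary-precision integers provide only an arithmetic right shift; a fixed-width logical shift has to be emulated by masking first. The sketch below is an illustration mirroring what Java's >>> does for 32-bit values.

```python
# Emulating a 32-bit logical right shift in Python, which only has an
# arithmetic right shift (>>) on its arbitrary-precision integers.
def logical_right_shift_32(x: int, n: int) -> int:
    """Shift a 32-bit signed integer right by n bits, filling with zeros."""
    return (x & 0xFFFFFFFF) >> n   # mask to the unsigned 32-bit value first

x = -8                                # ...11111000 in two's complement
print(x >> 1)                         # -4: arithmetic shift preserves the sign
print(logical_right_shift_32(x, 1))   # 2147483644: zero-filled, like >>> in Java
```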
https://en.wikipedia.org/wiki/Diamond%20cubic
In crystallography, the diamond cubic crystal structure is a repeating pattern of 8 atoms that certain materials may adopt as they solidify. While the first known example was diamond, other elements in group 14 also adopt this structure, including α-tin, the semiconductors silicon and germanium, and silicon–germanium alloys in any proportion. There are also crystals, such as the high-temperature form of cristobalite, which have a similar structure, with one kind of atom (such as silicon in cristobalite) at the positions of carbon atoms in diamond but with another kind of atom (such as oxygen) halfway between those (see :Category:Minerals in space group 227). Although often called the diamond lattice, this structure is not a lattice in the technical sense of this word used in mathematics. Crystallographic structure Diamond's cubic structure is in the Fd3m space group (space group 227), which follows the face-centered cubic Bravais lattice. The lattice describes the repeat pattern; for diamond cubic crystals this lattice is "decorated" with a motif of two tetrahedrally bonded atoms in each primitive cell, separated by 1/4 of the width of the unit cell in each dimension. The diamond lattice can be viewed as a pair of intersecting face-centered cubic lattices, each displaced from the other by 1/4 of the width of the unit cell in each dimension. Many compound semiconductors such as gallium arsenide, β-silicon carbide, and indium antimonide adopt the analogous zincblende structure, where each atom has nearest neighbors of an unlike element. Zincblende's space group is F43m, but many of its structural properties are quite similar to the diamond structure. The atomic packing factor of the diamond cubic structure (the proportion of space that would be filled by spheres that are centered on the vertices of the structure and are as large as possible without overlapping) is significantly smaller (indicating a less dense structure) than the packing factors for the face-centered and body-centered cubic structures.
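A quick numerical check of that packing-factor claim (a sketch using standard textbook geometry, not text from the article): in diamond cubic the nearest-neighbor distance is √3·a/4, so the sphere radius is √3·a/8 and there are 8 atoms per conventional cell.

    import math

    # Sketch: atomic packing factor (APF) of diamond cubic vs. FCC and BCC,
    # computed from standard geometry with lattice constant a = 1.
    a = 1.0
    r_dc = math.sqrt(3) * a / 8    # diamond: spheres touch along 1/4 body diagonal
    apf_dc = 8 * (4 / 3) * math.pi * r_dc**3 / a**3

    r_fcc = math.sqrt(2) * a / 4   # FCC: spheres touch along the face diagonal
    apf_fcc = 4 * (4 / 3) * math.pi * r_fcc**3 / a**3

    r_bcc = math.sqrt(3) * a / 4   # BCC: spheres touch along the body diagonal
    apf_bcc = 2 * (4 / 3) * math.pi * r_bcc**3 / a**3

    print(round(apf_dc, 4), round(apf_fcc, 4), round(apf_bcc, 4))
    # -> 0.3401 0.7405 0.6802  (diamond cubic is markedly less dense)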
https://en.wikipedia.org/wiki/Clock%20network
A clock network or clock system is a set of synchronized clocks designed to always show exactly the same time by communicating with each other. Clock networks usually consist of a central master clock kept in sync with an official time source, and one or more slave clocks which receive and display the time from the master. Synchronization sources The master clock in a clock network can receive accurate time in a number of ways: through the United States GPS satellite constellation, a Network Time Protocol server, the CDMA cellular phone network, a modem connection to a time source, by listening to radio transmissions from WWV or WWVH, or via a special signal from an upstream broadcast network. Some master clocks do not determine the time automatically; instead, they rely on an operator to set them manually. Clock networks in critical applications often include a backup source for receiving the time, or provisions to allow the master clock to maintain the time even if it loses access to its primary time source. For example, many master clocks can fall back on the reliable frequency of the alternating-current line they are connected to. Slave clocks Slave clocks come in many shapes and sizes. They can connect to the master clock through either a cable or a short-range wireless signal. In the 19th century Paris used a series of pneumatic tubes to transmit the signal. Some slave clocks will run independently if they lose the master signal, often with a warning light lit. Others will freeze until the connection is restored. Clock synchronization Many master clocks include the capability to synchronize devices like computers to the master clock signal. Common features include transmission of the time via RS-232, the Network Time Protocol, or a pulse-per-second (PPS) contact. Others provide SMPTE timecode outputs, which are often used in television settings to synchronize video from multiple sources. Master clocks often come equipped with programmable relay outputs.
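As a minimal sketch of how a device might pull time from an NTP source (an SNTP-style one-shot query; the server name is an illustrative choice and error handling is omitted):

    import socket, struct, time

    # Minimal SNTP query (a sketch, not a full NTP implementation):
    # send a 48-byte request, read the server's transmit timestamp.
    NTP_SERVER = "pool.ntp.org"     # illustrative public server
    NTP_EPOCH_OFFSET = 2208988800   # seconds between 1900 (NTP) and 1970 (Unix)

    packet = b'\x1b' + 47 * b'\0'   # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (NTP_SERVER, 123))
        data, _ = s.recvfrom(48)

    # Transmit timestamp: big-endian seconds field at bytes 40-43.
    seconds = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
    print(time.ctime(seconds))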
https://en.wikipedia.org/wiki/Joseph%20Goguen
Joseph Amadee Goguen (June 28, 1941 – July 3, 2006) was an American computer scientist. He was professor of Computer Science at the University of California and the University of Oxford, and held research positions at IBM and SRI International. In the 1960s, along with Lotfi Zadeh, Goguen was one of the earliest researchers in fuzzy logic and made profound contributions to fuzzy set theory. In the 1970s Goguen's work was one of the earliest approaches to the algebraic characterisation of abstract data types and he originated and helped develop the OBJ family of programming languages. He was author of A Categorical Manifesto and founder and Editor-in-Chief of the Journal of Consciousness Studies. His development of institution theory impacted the field of universal logic. Standard implication in product fuzzy logic is often called "Goguen implication". Goguen categories are named after him. He was married to Ryoko Amadee Goguen, who is a composer, pianist, and vocalist. Education and academic career Goguen received his bachelor's degree in mathematics from Harvard University in 1963, and his PhD in mathematics from the University of California, Berkeley in 1968, where he was a student of the founder of fuzzy set theory, Lotfi Zadeh. He taught at UC Berkeley, the University of Chicago and the University of California, Los Angeles, where he was a full professor of computer science. He held a Research Fellowship in the Mathematical Sciences at the IBM Watson Research Center, where he organised the "ADJ" group. He also visited the University of Edinburgh in Scotland on three Senior Visiting Fellowships. From 1979 to 1988, Goguen worked at SRI International in Menlo Park, California. From 1988 to 1996, he was a professor at the Oxford University Computing Laboratory (now the Department of Computer Science, University of Oxford) in England and a Fellow at St Anne's College, Oxford. In 1996 he became professor of Computer Science at the University of California, San Die
https://en.wikipedia.org/wiki/UML%20tool
A UML tool is a software application that supports some or all of the notation and semantics associated with the Unified Modeling Language (UML), which is the industry standard general-purpose modeling language for software engineering. UML tool is used broadly here to include application programs which are not exclusively focused on UML, but which support some functions of the Unified Modeling Language, either as an add-on, as a component or as a part of their overall functionality. Kinds of Functionality UML tools support the following kinds of functionality: Diagramming Diagramming in this context means creating and editing UML diagrams; that is, diagrams that follow the graphical notation of the Unified Modeling Language. The use of UML diagrams as a means to draw diagrams of – mostly – object-oriented software is generally agreed upon by software developers. When developers draw diagrams of object-oriented software, they usually follow the UML notation. On the other hand, it is often debated whether those diagrams are needed at all, during what stages of the software development process they should be used, and how (if at all) they should be kept up to date. The primacy of software code often leads to the diagrams being deprecated. Round-trip engineering Round-trip engineering refers to the ability of a UML tool to perform code generation from models, and model generation from code (a.k.a. reverse engineering), while keeping both the model and the code semantically consistent with each other. Code generation and reverse engineering are explained in more detail below. Code generation Code generation in this context means that the user creates UML diagrams, which have some connected model data, and the UML tool derives from the diagrams part or all of the source code for the software system. In some tools the user can provide a skeleton of the program source code, in the form of a source code template, where predefined tokens are then replaced with program
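A toy illustration of the code-generation idea described above (a sketch only; the model format, class name, and function name are invented for the example and do not correspond to any particular UML tool):

    # Sketch: deriving a class skeleton from a tiny UML-like model.
    # The dict stands in for the "connected model data" of a class diagram.
    model = {
        "name": "Account",
        "attributes": [("owner", "str"), ("balance", "float")],
        "operations": ["deposit", "withdraw"],
    }

    def generate_class(m: dict) -> str:
        """Emit Python source for one class in the model (illustrative only)."""
        lines = [f"class {m['name']}:"]
        args = ", ".join(f"{n}: {t}" for n, t in m["attributes"])
        lines.append(f"    def __init__(self, {args}):")
        lines += [f"        self.{n} = {n}" for n, _ in m["attributes"]]
        lines += [f"    def {op}(self): ...  # body left to the developer"
                  for op in m["operations"]]
        return "\n".join(lines)

    print(generate_class(model))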
https://en.wikipedia.org/wiki/Brain%20implant
Brain implants, often referred to as neural implants, are technological devices that connect directly to a biological subject's brain – usually placed on the surface of the brain, or attached to the brain's cortex. A common purpose of modern brain implants and the focus of much current research is establishing a biomedical prosthesis circumventing areas in the brain that have become dysfunctional after a stroke or other head injuries. This includes sensory substitution, e.g., in vision. Other brain implants are used in animal experiments simply to record brain activity for scientific reasons. Some brain implants involve creating interfaces between neural systems and computer chips. This work is part of a wider research field called brain–computer interfaces. (Brain–computer interface research also includes technology such as EEG arrays that allow interface between mind and machine but do not require direct implantation of a device.) Neural implants such as deep brain stimulation and vagus nerve stimulation are increasingly becoming routine for patients with Parkinson's disease and clinical depression, respectively. Purpose Brain implants electrically stimulate, block or record (or both record and stimulate simultaneously) signals from single neurons or groups of neurons (biological neural networks) in the brain. This can only be done where the functional associations of these neurons are approximately known. Because of the complexity of neural processing and the lack of access to action-potential-related signals using neuroimaging techniques, the application of brain implants has been seriously limited until recent advances in neurophysiology and computer processing power. Much research is also being done on the surface chemistry of neural implants in an effort to design products which minimize all negative effects that an active implant can have on the brain, and that the body can have on the function of the implant. Researchers are also exploring a range of delive
https://en.wikipedia.org/wiki/Dielectric%20spectroscopy
Dielectric spectroscopy (a subcategory of impedance spectroscopy) measures the dielectric properties of a medium as a function of frequency. It is based on the interaction of an external field with the electric dipole moment of the sample, often expressed by permittivity. It is also an experimental method of characterizing electrochemical systems. This technique measures the impedance of a system over a range of frequencies, and therefore the frequency response of the system, including the energy storage and dissipation properties, is revealed. Often, data obtained by electrochemical impedance spectroscopy (EIS) is expressed graphically in a Bode plot or a Nyquist plot. Impedance is the opposition to the flow of alternating current (AC) in a complex system. A passive complex electrical system comprises both energy-dissipating (resistor) and energy-storing (capacitor) elements. If the system is purely resistive, then the opposition to AC or direct current (DC) is simply resistance. Materials or systems exhibiting multiple phases (such as composites or heterogeneous materials) commonly show a universal dielectric response, whereby dielectric spectroscopy reveals a power-law relationship between the impedance (or the inverse term, admittance) and the frequency, ω, of the applied AC field. Almost any physico-chemical system, such as electrochemical cells, mass-beam oscillators, and even biological tissue, possesses energy storage and dissipation properties, and EIS examines them. The technique is now widely employed in scientific fields such as fuel cell testing, biomolecular interaction, and microstructural characterization. Often, EIS reveals information about the reaction mechanism of an electrochemical process: different reaction steps will dominate at certain frequencies, and the frequency response shown by EIS can help identify the rate-limiting step. Dielectric me
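To make the frequency-response idea concrete, here is a small sketch computing the complex impedance of a parallel RC element, a standard stand-in for a single relaxation process (component values are arbitrary illustrative choices); sweeping ω traces the familiar semicircle of a Nyquist plot:

    import math

    # Sketch: impedance of a resistor R in parallel with a capacitor C,
    # Z(w) = R / (1 + j*w*R*C), evaluated over a frequency sweep.
    R, C = 1000.0, 1e-6            # illustrative values (ohms, farads)

    for f in (1, 10, 100, 1000):   # hertz
        w = 2 * math.pi * f
        Z = R / (1 + 1j * w * R * C)
        print(f"{f:>5} Hz   Re(Z) = {Z.real:8.1f}   -Im(Z) = {-Z.imag:8.1f}")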
https://en.wikipedia.org/wiki/Heat%20capacities%20of%20the%20elements%20%28data%20page%29
Specific heat capacity Notes All values refer to 25 °C and to the thermodynamically stable standard state at that temperature unless noted. Values from CRC refer to "100 kPa (1 bar or 0.987 standard atmospheres)". Lange indirectly defines the values to be at a standard state pressure of "1 atm (101325 Pa)", although citing the same NBS and JANAF sources among others. It is assumed that this loosely refers to ambient pressure.
https://en.wikipedia.org/wiki/Eidophor
An Eidophor was a video projector used to create theater-sized images from an analog video signal. The name Eidophor is derived from the Greek word-roots eido and phor meaning 'image' and 'bearer' (carrier). Its basic technology was the use of electrostatic charges to deform an oil surface. Origins and use The idea for the original Eidophor was conceived in 1939 in Zurich by Swiss physicist Fritz Fischer, professor at the Labor für technische Physik of the Swiss Federal Institute of Technology, with the first prototype being unveiled in 1943. A basic patent was filed on November 8, 1939, in Switzerland and granted by the United States Patent and Trademark Office (patent no. 2,391,451) to Friederich Ernst Fischer for the Process and appliance for projecting television pictures on 25 December 1945. During the Second World War, Edgar Gretener worked together with Fischer at the Institute of Technical Physics to develop a prototype. When Gretener launched his own company Dr. Edgar Gretener AG in 1941 to develop enciphering equipment for the Swiss army, he stopped working on Eidophor. Hugo Thiemann took over this responsibility at the ETH. After six years of work on this project at the ETH, Thiemann moved together with the project to the company Dr. Edgar Gretener AG, which was licensed by the ETH to further develop Eidophor, following Fischer's death in 1947. An August 1952 magazine article in Radio and Television News credits the development of the Eidophor to Edgar Gretener. Following the Second World War, a first demonstration of an Eidophor system as a cinema video projector was organized at the Cinema Theater REX in Zurich, successfully showing a TV broadcast in April 1958. An even more promising perspective was the interest of Paramount Pictures and 20th Century Fox, which experimented with the concept of "theatre television", where television images would be broadcast onto cinema screens. Over 100 cinemas were set up for the project, which fa
https://en.wikipedia.org/wiki/H.E.R.O.%20%28video%20game%29
H.E.R.O. (standing for Helicopter Emergency Rescue Operation) is a video game written by John Van Ryzin and published by Activision for the Atari 2600 in March 1984. It was ported to the Apple II, Atari 5200, Atari 8-bit family, ColecoVision, Commodore 64, MSX, and ZX Spectrum. The player uses a helicopter backpack and other tools to rescue victims trapped deep in a mine. The mine is made up of multiple screens using a flip screen style. Sega released a version of the game for its SG-1000 console in Japan in 1985. While the gameplay was identical, Sega changed the backpack from a helicopter to a jetpack. Gameplay The player assumes control of Roderick Hero (sometimes styled as "R. Hero"), a one-man rescue team. Miners working in Mount Leone are trapped, and it's up to Roderick to reach them. The player is equipped with a backpack-mounted helicopter unit, which allows him to hover and fly, along with a helmet-mounted laser and a limited supply of dynamite. Each level consists of a maze of mine shafts that Roderick must safely navigate in order to reach the miner trapped at the bottom. The backpack has a limited amount of power, so the player must reach the miner before the power supply is exhausted; if the power runs out, the player restarts the level from the beginning. The player only needs enough power to reach the trapped miner - not to return with him as well. Mine shafts may be blocked by cave-ins or magma, which require dynamite to clear. The helmet laser can also destroy cave-ins, but far more slowly than dynamite. Unlike a cave-in, magma is lethal when touched. Later levels include walls of magma with openings that alternate between open and closed, requiring skillful navigation. The mine shafts are populated by spiders, bats and other unknown creatures that are deadly to the touch; these creatures can be destroyed using the laser or dynamite. Some deep mines are flooded, forcing players to hover safely above the water. In later levels, monsters st
https://en.wikipedia.org/wiki/Opportunistic%20infection
An opportunistic infection is an infection caused by pathogens (bacteria, fungi, parasites or viruses) that take advantage of an opportunity not normally available. These opportunities can stem from a variety of sources, such as a weakened immune system (as can occur in acquired immunodeficiency syndrome or when being treated with immunosuppressive drugs, as in cancer treatment), an altered microbiome (such as a disruption in gut microbiota), or breached integumentary barriers (as in penetrating trauma). Many of these pathogens do not necessarily cause disease in a healthy host that has a non-compromised immune system, and can, in some cases, act as commensals until the balance of the immune system is disrupted. Opportunistic infections can also be attributed to pathogens which cause mild illness in healthy individuals but lead to more serious illness when given the opportunity to take advantage of an immunocompromised host. Types of opportunistic infections A wide variety of pathogens are involved in opportunistic infection and can cause a similarly wide range of pathologies. A partial list of opportunistic pathogens and their associated presentations includes: Bacteria Clostridioides difficile (formerly known as Clostridium difficile) is a species of bacteria that is known to cause gastrointestinal infection and is typically associated with the hospital setting. Legionella pneumophila is a bacterium that causes Legionnaires' disease, a respiratory infection. Mycobacterium avium complex (MAC) is a group of two bacteria, M. avium and M. intracellulare, that typically co-infect, leading to a lung infection called mycobacterium avium-intracellulare infection. Mycobacterium tuberculosis is a species of bacteria that causes tuberculosis, a respiratory infection. Pseudomonas aeruginosa is a bacterium that can cause respiratory infections. It is frequently associated with cystic fibrosis and hospital-acquired infections. Salmonella is a genus of bacteria, known
https://en.wikipedia.org/wiki/Riemann%20series%20theorem
In mathematics, the Riemann series theorem, also called the Riemann rearrangement theorem, named after 19th-century German mathematician Bernhard Riemann, says that if an infinite series of real numbers is conditionally convergent, then its terms can be arranged in a permutation so that the new series converges to an arbitrary real number, or diverges. This implies that a series of real numbers is absolutely convergent if and only if it is unconditionally convergent. As an example, the series 1 − 1 + 1/2 − 1/2 + 1/3 − 1/3 + ⋯ converges to 0 (for a sufficiently large number of terms, the partial sum gets arbitrarily near to 0); but replacing all terms with their absolute values gives 1 + 1 + 1/2 + 1/2 + 1/3 + 1/3 + ⋯, which sums to infinity. Thus the original series is conditionally convergent, and can be rearranged (by taking the first two positive terms followed by the first negative term, followed by the next two positive terms and then the next negative term, etc.) to give a series that converges to a different sum: 1 + 1/2 − 1 + 1/3 + 1/4 − 1/2 + ⋯ = ln 2. More generally, using this procedure with p positives followed by q negatives gives the sum ln(p/q). Other rearrangements give other finite sums or do not converge to any sum. History It is a basic result that the sum of finitely many numbers does not depend on the order in which they are added. For example, 2 + 6 + 7 = 7 + 2 + 6 = 15. The observation that the sum of an infinite sequence of numbers can depend on the ordering of the summands is commonly attributed to Augustin-Louis Cauchy in 1833. He analyzed the alternating harmonic series, showing that certain rearrangements of its summands result in different limits. Around the same time, Peter Gustav Lejeune Dirichlet highlighted that such phenomena are ruled out in the context of absolute convergence, and gave further examples of Cauchy's phenomenon for some other series which fail to be absolutely convergent. In the course of his analysis of Fourier series and the theory of Riema
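A quick numerical illustration of the rearrangement described above (a sketch; the function name and block count are arbitrary choices): the series 1 − 1 + 1/2 − 1/2 + ⋯ is regrouped as two positive terms followed by one negative term, and the partial sums approach ln 2.

    import math

    # Sketch: partial sums of 1 + 1/2 - 1 + 1/3 + 1/4 - 1/2 + ...
    # Taking p = 2 positive terms then q = 1 negative term per block
    # should approach ln(p/q) = ln 2.
    def rearranged_partial_sum(blocks: int, p: int = 2, q: int = 1) -> float:
        total, pos, neg = 0.0, 1, 1
        for _ in range(blocks):
            for _ in range(p):
                total += 1 / pos
                pos += 1
            for _ in range(q):
                total -= 1 / neg
                neg += 1
        return total

    print(rearranged_partial_sum(100000))  # ~0.6931...
    print(math.log(2))                     # 0.693147...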
https://en.wikipedia.org/wiki/National%20personification
A national personification is an anthropomorphic personification of a state or the people(s) it inhabits. It may appear in political cartoons and propaganda. Personifications in the Western world often took the Latin name of the ancient Roman province. Examples of this type include Britannia, Germania, Hibernia, Hispania, Helvetia and Polonia. Examples of personifications of the Goddess of Liberty include Marianne, the Statue of Liberty (Liberty Enlightening the World), and many examples of United States coinage. Another ancient model was Roma, a female deity who personified the city of Rome and its dominion over the territories of the Roman Empire. Examples of representations of the everyman or citizenry in addition to the nation itself are Deutscher Michel, John Bull and Uncle Sam. Gallery Personifications by country or territory See also Afghanis-tan, a manga originally published as a webcomic about Central Asia with personified countries. Polandball, a contemporary form of national personification in which countries are drawn by Internet users as stereotypic balls and shared as comics on online communities. Hetalia: Axis Powers, an anime about personified countries interacting, mostly taking place within the World Wars. Mural crown National animal, often personifies a nation in cartoons. National emblem, for other metaphors for nations. National god, a deity that embodies a nation. National patron saint, a Saint that is regarded as the heavenly advocate of a nation.
https://en.wikipedia.org/wiki/DSCAM
DSCAM and Dscam are both abbreviations for Down syndrome cell adhesion molecule. In humans, DSCAM refers to a gene that encodes one of several protein isoforms. Down syndrome (DS), caused by trisomy 21, is the most common birth defect associated with intellectual disability. DSCAM is implicated in the neurodevelopmental features of DS: it is expressed in the developing nervous system, with the highest level of expression occurring in the fetal brain, and its over-expression in the developing fetal central nervous system is thought to contribute to the neurological characteristics of Down syndrome. A homologue of the Dscam protein in Drosophila melanogaster has 38,016 isoforms arising from four variable exon clusters (12, 48, 33 and 2 alternatives, respectively). By comparison, the entire Drosophila melanogaster genome has only 15,016 genes. The diversity of isoforms from alternative splicing of the Dscam1 gene in D. melanogaster allows every neuron in the fly to display a unique set of Dscam proteins on its cell surface. Dscam interaction stimulates neuronal self-avoidance mechanisms that are essential for normal neural circuit development. History/discovery The DSCAM protein structure is conserved, with more than 20% amino acid identity across the deuterostomes and protostomes, which, assuming an ancestral homologous gene, places the origin of the DSCAM gene at >600 million years ago. Since then, the DSCAM gene has been duplicated at least once in vertebrates and insects. DSCAM was first identified in an effort to characterize proteins located within human chromosome band 21q22, a region known to play a critical role in Down syndrome. The name Down syndrome cell adhesion molecule was chosen for a combination of reasons including: 1) chromosomal location, 2) its appropriate (normal) expression in developing neural tissue, and 3) its structure as an Ig receptor related to other cell adhesion molecules (CAMs). Gene The DSCAM gene has been identified in the DS critical region. Dscam is predicted to be a transmembrane protein
https://en.wikipedia.org/wiki/List%20of%20stochastic%20processes%20topics
In the mathematics of probability, a stochastic process is a random function. In practical applications, the domain over which the function is defined is a time interval (time series) or a region of space (random field). Familiar examples of time series include stock market and exchange rate fluctuations, signals such as speech, audio and video; medical data such as a patient's EKG, EEG, blood pressure or temperature; and random movement such as Brownian motion or random walks. Examples of random fields include static images, random topographies (landscapes), or composition variations of an inhomogeneous material. Stochastic processes topics This list is currently incomplete. See also :Category:Stochastic processes Basic affine jump diffusion Bernoulli process: discrete-time processes with two possible states. Bernoulli schemes: discrete-time processes with N possible states; every stationary process in N outcomes is a Bernoulli scheme, and vice versa. Bessel process Birth–death process Branching process Branching random walk Brownian bridge Brownian motion Chinese restaurant process CIR process Continuous stochastic process Cox process Dirichlet processes Finite-dimensional distribution First passage time Galton–Watson process Gamma process Gaussian process – a process where all linear combinations of coordinates are normally distributed random variables. Gauss–Markov process (cf. below) GenI process Girsanov's theorem Hawkes process Homogeneous processes: processes where the domain has some symmetry and the finite-dimensional probability distributions also have that symmetry. Special cases include stationary processes, also called time-homogeneous. Karhunen–Loève theorem Lévy process Local time (mathematics) Loop-erased random walk Markov processes are those in which the future is conditionally independent of the past given the present. Markov chain Markov chain central limit theorem Conti
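Since the introduction above cites random walks among the familiar examples, here is a minimal simulation sketch (the step distribution, walk length, and function name are illustrative choices, not part of the list):

    import random

    # Sketch: a simple symmetric random walk, one of the basic
    # discrete-time stochastic processes named above.
    def random_walk(steps: int, seed: int = 0) -> list[int]:
        rng = random.Random(seed)
        path, position = [0], 0
        for _ in range(steps):
            position += rng.choice((-1, 1))
            path.append(position)
        return path

    print(random_walk(10))  # e.g. [0, 1, 0, -1, ...] depending on the seed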
https://en.wikipedia.org/wiki/Holonomic%20brain%20theory
Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry. This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm building on the initial theories of holograms originally formulated by Dennis Gabor. It describes human cognition by modeling the brain as a holographic storage network. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. In a hologram, any part of the hologram with sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. This model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain cluster of neurons). Origins and development In 1946 Dennis Gabor invented the hologram mathematically,
https://en.wikipedia.org/wiki/Canntaireachd
Canntaireachd is the ancient method of teaching, learning and memorizing Piobaireachd (also spelt Pibroch), a type of music primarily played on the Great Highland bagpipe. In the canntaireachd method of instruction, the teacher sings or hums the tune to the pupil, sometimes using specific syllables which signify the sounds to be produced by the bagpipe. History It appears that written staff notation began to come into use for bagpiping in the late 1700s or early 1800s. Seumas MacNeill, founder of The College of Piping, puts the date at 1803; The Piobaireachd Society holds that this occurred earlier, in the latter half of the eighteenth century. Prior to that time, instructors had to use other methods for teaching bagpipe tunes to students: by singing in canntaireachd, by playing the pipes for the student, or most likely a combination of both methods. The Campbell (Nether Lorn) canntaireachd Efforts were made to translate the vocal tradition into written form. The earliest known written collection dates to the early 1790s. It was written by Colin Mòr Campbell of Nether Lorn parish in Argyll. While Campbell's system had its origins in chanted notation, his Campbell Canntaireachd is now viewed as written documentation, to be read rather than sung. Author William Donaldson noted: "Although Campbell's work was almost immediately superseded by a form of staff notation adapted specifically for the pipe, and remained unpublished and unrecognised until well into the 20th Century, it remains an important achievement and gives valuable insight into the musical organisation" of piobaireachd music. Other systems Neil McLeod of Gesto also published a system of canntaireachd. It was reputedly based on the singing of John MacCrimmon, one of the last practicing members of that well-known piping family. The MacArthur family of pipers are reported to have had their own oral form of canntaireachd, but it was not documented. A further variety of Canntaireachd and distinct collection of pibroc
https://en.wikipedia.org/wiki/KisMAC
KisMAC is a wireless network discovery tool for Mac OS X. It has a wide range of features, similar to those of Kismet (its Linux/BSD namesake). The program is geared toward network security professionals, and is not as novice-friendly as similar applications. Distributed under the GNU General Public License, KisMAC is free software. KisMAC scans for networks passively on supported cards - including Apple's AirPort and AirPort Extreme and many third-party cards - and actively on any card supported by Mac OS X itself. Cracking of WEP and WPA keys is supported, both by brute force and by exploiting flaws such as weak key scheduling and badly generated keys, when a card capable of monitor mode is used, and packet reinjection can be done with a supported card (Prism2 and some Ralink cards). GPS mapping can be performed when an NMEA-compatible GPS receiver is attached. Kismac2 is a fork of the original software with a new GUI and new features that works on OS X 10.7 - 10.10, 64-bit only. It is no longer maintained. Data can also be saved in pcap format and loaded into programs such as Wireshark. KisMAC Features Reveals hidden / cloaked / closed SSIDs Shows logged-in clients (with MAC addresses, IP addresses and signal strengths) Mapping and GPS support Can draw area maps of network coverage PCAP import and export Support for 802.11b/g Different attacks against encrypted networks Deauthentication attacks AppleScript-able Kismet drone support (capture from a Kismet drone) KisMAC and Germany The project was created and led by Michael Rossberg until July 27, 2007, when he removed himself from the project due to changes in German law (specifically, StGB Section 202c) that "prohibits the production and distribution of security software". On this date, project lead was passed on to Geoffrey Kruse, maintainer of KisMAC since 2003, and active developer since 2001. KisMAC is no longer actively developed. Primary development, and the relocated KisMA
https://en.wikipedia.org/wiki/List%20of%20circle%20topics
This list of circle topics includes things related to the geometric shape, either abstractly, as in idealizations studied by geometers, or concretely in physical space. It does not include metaphors like "inner circle" or "circular reasoning" in which the word does not refer literally to the geometric shape. Geometry and other areas of mathematics Circle Circle anatomy Annulus (mathematics) Area of a disk Bipolar coordinates Central angle Circular sector Circular segment Circumference Concentric Concyclic Degree (angle) Diameter Disk (mathematics) Horn angle Measurement of a Circle List of topics related to π Pole and polar Power of a point Radical axis Radius Radius of convergence Radius of curvature Sphere Tangent lines to circles Versor Specific circles Apollonian circles Circles of Apollonius Archimedean circle Archimedes' circles – the twin circles doubtfully attributed to Archimedes Archimedes' quadruplets Circle of antisimilitude Bankoff circle Brocard circle Carlyle circle Circumscribed circle (circumcircle) Midpoint-stretching polygon Coaxal circles Director circle Fermat–Apollonius circle Ford circle Fuhrmann circle Generalised circle GEOS circle Great circle Great-circle distance Circle of a sphere Horocycle Incircle and excircles of a triangle Inscribed circle Johnson circles Magic circle (mathematics) Malfatti circles Nine-point circle Orthocentroidal circle Osculating circle Riemannian circle Schinzel circle Schoch circles Spieker circle Tangent circles Twin circles Unit circle Van Lamoen circle Villarceau circles Woo circles Circle-derived entities Apollonian gasket Arbelos Bicentric polygon Bicentric quadrilateral Coxeter's loxodromic sequence of tangent circles Cyclic quadrilateral Cycloid Ex-tangential quadrilateral Hawaiian earring Inscribed angle Inscribed angle theorem Inversive distance Inversive geometry Irrational rotation Lens (geometry) Lune Lune of Hippocrates
https://en.wikipedia.org/wiki/Osculating%20circle
An osculating circle is a circle that best approximates the curvature of a curve at a specific point. It is tangent to the curve at that point and has the same curvature as the curve at that point. The osculating circle provides a way to understand the local behavior of a curve and is commonly used in differential geometry and calculus. More formally, in differential geometry of curves, the osculating circle of a sufficiently smooth plane curve at a given point p on the curve has been traditionally defined as the circle passing through p and a pair of additional points on the curve infinitesimally close to p. Its center lies on the inner normal line, and its curvature defines the curvature of the given curve at that point. This circle, which is the one among all tangent circles at the given point that approaches the curve most tightly, was named circulus osculans (Latin for "kissing circle") by Leibniz. The center and radius of the osculating circle at a given point are called center of curvature and radius of curvature of the curve at that point. A geometric construction was described by Isaac Newton in his Principia: Nontechnical description Imagine a car moving along a curved road on a vast flat plane. Suddenly, at one point along the road, the steering wheel locks in its present position. Thereafter, the car moves in a circle that "kisses" the road at the point of locking. The curvature of the circle is equal to that of the road at that point. That circle is the osculating circle of the road curve at that point. Mathematical description Let γ(s) be a regular parametric plane curve, where s is the arc length (the natural parameter). This determines the unit tangent vector T(s) = γ′(s), the unit normal vector N(s), the signed curvature k(s), defined by T′(s) = k(s)N(s), and the radius of curvature R(s) = 1/|k(s)| at each point for which k(s) ≠ 0. Suppose that P is a point on γ where k ≠ 0. The corresponding center of curvature is the point Q at distance R along N, in the same direction if k is positive and in the opposite direction if it is negative.
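A small numerical sketch of the construction (using the standard curvature formula for a parametric curve, k = (x′y″ − y′x″)/(x′² + y′²)^(3/2); the parabola and function name are illustrative choices): at the vertex of y = x², the osculating circle has center (0, 1/2) and radius 1/2.

    # Sketch: osculating circle of the parabola y = x^2 at parameter t.
    def osculating_circle(t: float):
        x, y = t, t * t              # gamma(t) = (t, t^2)
        dx, dy = 1.0, 2 * t          # first derivatives
        ddx, ddy = 0.0, 2.0          # second derivatives
        speed = (dx * dx + dy * dy) ** 0.5
        k = (dx * ddy - dy * ddx) / speed**3   # signed curvature
        R = 1 / abs(k)                         # radius of curvature
        # unit normal (tangent rotated +90 degrees); center = point + N/k
        nx, ny = -dy / speed, dx / speed
        return (x + nx / k, y + ny / k), R

    print(osculating_circle(0.0))  # ((0.0, 0.5), 0.5)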
https://en.wikipedia.org/wiki/Area%20of%20a%20circle
In geometry, the area enclosed by a circle of radius r is πr². Here the Greek letter π represents the constant ratio of the circumference of any circle to its diameter, approximately equal to 3.14159. One method of deriving this formula, which originated with Archimedes, involves viewing the circle as the limit of a sequence of regular polygons with an increasing number of sides. The area of a regular polygon is half its perimeter multiplied by the distance from its center to its sides, and because the sequence tends to a circle, the corresponding formula – that the area is half the circumference times the radius, namely A = (1/2) × 2πr × r = πr² – holds for a circle. Terminology Although often referred to as the area of a circle in informal contexts, strictly speaking the term disk refers to the interior region of the circle, while circle is reserved for the boundary only, which is a curve and covers no area itself. Therefore, the area of a disk is the more precise phrase for the area enclosed by a circle. History Modern mathematics can obtain the area using the methods of integral calculus or its more sophisticated offspring, real analysis. However, the area of a disk was studied by the Ancient Greeks. Eudoxus of Cnidus in the fifth century B.C. had found that the area of a disk is proportional to its radius squared. Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book Measurement of a Circle. The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk. Prior to Archimedes, Hippocrates of Chios was the first to show that the area of a disk is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Historical arguments A variety of arguments have been advanced
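The modern calculus derivation alluded to above can be stated in one line (a standard "onion" integration over concentric rings of circumference 2πt, added here for illustration):

    A = \int_0^r 2\pi t \, dt = 2\pi \cdot \frac{r^2}{2} = \pi r^2 .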
https://en.wikipedia.org/wiki/Coating
A coating is a covering that is applied to the surface of an object, usually referred to as the substrate. The purpose of applying the coating may be decorative, functional, or both. Coatings may be applied as liquids, gases or solids, e.g. powder coatings. Paints and lacquers are coatings that mostly serve a dual purpose, protecting the substrate while also being decorative, although some artists' paints are only for decoration, and the paint on large industrial pipes is for preventing corrosion and for identification, e.g. blue for process water, red for fire-fighting control. Functional coatings may be applied to change the surface properties of the substrate, such as adhesion, wettability, corrosion resistance, or wear resistance. In other cases, e.g. semiconductor device fabrication (where the substrate is a wafer), the coating adds a completely new property, such as a magnetic response or electrical conductivity, and forms an essential part of the finished product. A major consideration for most coating processes is that the coating is to be applied at a controlled thickness, and a number of different processes are in use to achieve this control, ranging from a simple brush for painting a wall, to some very expensive machinery applying coatings in the electronics industry. A further consideration for 'non-all-over' coatings is that control is needed as to where the coating is to be applied. A number of these non-all-over coating processes are printing processes. Many industrial coating processes involve the application of a thin film of functional material to a substrate, such as paper, fabric, film, foil, or sheet stock. If the substrate starts and ends the process wound up in a roll, the process may be termed "roll-to-roll" or "web-based" coating. A roll of substrate, when wound through the coating machine, is typically called a web. Applications Coating applications are diverse and serve many purposes. Coatings can be both decorative and have other functions.
https://en.wikipedia.org/wiki/Networked%20Readiness%20Index
The Networked Readiness Index is an index published annually by the World Economic Forum in collaboration with INSEAD, as part of their annual Global Information Technology Report. It aims to measure the degree of readiness of countries to exploit opportunities offered by information and communications technology. The Networked Readiness Index was first conceived of and constructed by Geoffrey Kirkman, Jeffrey Sachs and Carlos Osorio in 2002 at Harvard University. The 2016 edition covers 139 nations.
https://en.wikipedia.org/wiki/Nuclear-free%20zone
A nuclear-free zone is an area in which nuclear weapons (see nuclear-weapon-free zone) and nuclear power plants are banned. The specific ramifications of these depend on the locale in question. Nuclear-free zones usually neither address nor prohibit radiopharmaceuticals used in nuclear medicine even though many of them are produced in nuclear reactors. They typically do not prohibit other nuclear technologies such as cyclotrons used in particle physics. Several sub-national authorities worldwide have declared themselves "nuclear-free". However, the label is often symbolic, as nuclear policy is usually determined and regulated at higher levels of government: nuclear weapons and components may traverse nuclear-free zones via military transport without the knowledge or consent of the local authorities that declared them. Palau became the first nuclear-free nation in 1980. New Zealand was the first Western-allied nation to legislate towards a national nuclear-free zone by effectively renouncing the nuclear deterrent. Nuclear-free zone by geographical areas Antarctica The Antarctic Treaty System banned military activity on the continent, effective in 1961, and suspended territorial claims. A nuclear reactor provided electricity for McMurdo Station, operated by the United States in the New Zealand Antarctic Territory from 1962 to 1972. Australia Many local government areas of Australia have passed anti-nuclear weaponry legislation; notable among these are Brisbane, capital of Queensland, which has been nuclear weapon free since 1983, and the South and North Sydney councils. Fremantle in Western Australia was the first council to declare itself a nuclear-free zone, in 1980. The continuing presence of nuclear-armed and nuclear-powered warships in the city's port led to many protests during the 1980s and 1990s. However, the passage of such legislation is generally considered just a symbolic measure. The majority of councils which have passed anti-nuc
https://en.wikipedia.org/wiki/Cetology%20of%20Moby-Dick
The cetology in Herman Melville's 1851 novel, Moby-Dick, is a running theme that appears most importantly in Ishmael's zoological classification of whales, in Chapter 32, "Cetology". The purpose of that chapter, the narrator says, is "to attend to a matter almost indispensable to a thorough appreciative understanding of the more special leviathanic revelations and allusions of all sorts which are to follow." Further descriptions of whales and their anatomy occur in seventeen other chapters, including "The Sperm Whale's Head -- Contrasted View" (Chapter 74) and "The Right Whale's Head -- Contrasted View" (Chapter 75). Although writing a work of fiction, Melville included extensive material that presents the properties of whales in a seemingly scientific form. Many of the observations are taken from Melville's reading in whaling sources in addition to his own experiences in whaling in the 1840s. They include descriptions of a range of species in the order of Cetacea. The detailed descriptions are a digression from the story-line, but critics argue that their objectivity and encyclopedic form balance the spiritual elements of the novel and ground its cosmic speculations. These chapters, however, are the most likely to be omitted in abridged versions. Description Ishmael's observations are not a complete scientific study, even by standards of the day. The cetological chapters do add variety and give readers information that helps them understand the story, but Melville also has thematic and aesthetic purposes. Critics justify and even praise the sections for keeping the metaphysical and spiritual meanings in the novel anchored to matter-of-fact reality and balance the extraordinary with the ordinary. The extensive descriptions show that the starting point for the “cosmic and spiritual is earthly and physical” and give the novel what one critic calls the “illusion of objectivity and the effect of a wide view of life.” Ishmael asserts in the novel that the whale is a
https://en.wikipedia.org/wiki/Elliott%E2%80%93Halberstam%20conjecture
In number theory, the Elliott–Halberstam conjecture is a conjecture about the distribution of prime numbers in arithmetic progressions. It has many applications in sieve theory. It is named for Peter D. T. A. Elliott and Heini Halberstam, who stated the conjecture in 1968. Stating the conjecture requires some notation. Let π(x), the prime-counting function, denote the number of primes less than or equal to x. If q is a positive integer and a is coprime to q, we let π(x; q, a) denote the number of primes less than or equal to x which are congruent to a modulo q. Dirichlet's theorem on primes in arithmetic progressions then tells us that π(x; q, a) is approximately π(x)/φ(q), where φ is Euler's totient function. If we then define the error function E(x; q) = max |π(x; q, a) − π(x)/φ(q)|, where the max is taken over all a coprime to q, then the Elliott–Halberstam conjecture is the assertion that for every θ < 1 and A > 0 there exists a constant C > 0 such that the sum of E(x; q) over all q ≤ x^θ is at most C·x/(log x)^A for all x > 2. This conjecture was proven for all θ < 1/2 by Enrico Bombieri and A. I. Vinogradov (the Bombieri–Vinogradov theorem, sometimes known simply as "Bombieri's theorem"); this result is already quite useful, being an averaged form of the generalized Riemann hypothesis. It is known that the conjecture fails at the endpoint θ = 1. The Elliott–Halberstam conjecture has several consequences. One striking one is the result announced by Dan Goldston, János Pintz, and Cem Yıldırım, which shows (assuming this conjecture) that there are infinitely many pairs of primes which differ by at most 16. In November 2013, James Maynard showed that subject to the Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 12. In August 2014, the Polymath group showed that subject to the generalized Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 6. Without assuming any form of the conjecture, the lowest proven bound is 246. See also Barban–Davenport–Halberstam theorem Sexy prime Siegel–Walfisz theorem
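Restating the conjecture in display form for clarity (standard notation, reconstructed from the definitions above):

    E(x; q) = \max_{\gcd(a,q)=1} \left| \pi(x; q, a) - \frac{\pi(x)}{\varphi(q)} \right|,
    \qquad
    \sum_{1 \le q \le x^{\theta}} E(x; q) \le \frac{C\, x}{(\log x)^{A}} \quad \text{for all } x > 2.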
https://en.wikipedia.org/wiki/Opaque%20predicate
In computer programming, an opaque predicate is a predicate, an expression that evaluates to either "true" or "false", for which the outcome is known by the programmer a priori, but which, for a variety of reasons, still needs to be evaluated at run time. Opaque predicates have been used as watermarks, as they will be identifiable in a program's executable. They can also be used to prevent an overzealous optimizer from optimizing away a portion of a program. Another use is in obfuscating the control flow or dataflow of a program to make reverse engineering harder. External links "A Method for Watermarking Java Programs via Opaque Predicates"
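A tiny sketch of an opaquely true predicate (invented for illustration): for any integer x, x² + x = x(x + 1) is a product of consecutive integers and hence always even, so the branch below always takes the first arm, yet a disassembler sees an apparently data-dependent test.

    # Sketch: an opaquely true predicate. The programmer knows the branch
    # outcome a priori, but it is still evaluated at run time.
    def obfuscated_step(x: int) -> int:
        if (x * x + x) % 2 == 0:   # opaquely true for every integer x
            return x + 1           # the real computation
        else:
            return x - 1           # dead code that confuses static analysis

    assert all(obfuscated_step(x) == x + 1 for x in range(-1000, 1000))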
https://en.wikipedia.org/wiki/4x4%20Evo
4x4 Evo (also re-released as 4x4 Evolution) is a video game developed by Terminal Reality for the Windows, Macintosh, Sega Dreamcast, and PlayStation 2 platforms. It is one of the first console games to have cross-platform online play, in which the Dreamcast, Macintosh, and Windows versions of the game can appear online at the same time. The game can use user-created maps, which can be downloaded onto a hard drive as well as a Dreamcast VMU. All versions of the game are similar in quality and gameplay, although the online versions feature a mode to customize the player's own truck and use it online. The game is still online-capable on all systems except for PlayStation 2. This was Terminal Reality's only video game to be released for the Dreamcast. Gameplay Gameplay features off-road racing with over 70 licensed trucks and SUVs. Modes featured in the game were Career mode, Online mode, a map editor, and a versus mode. Career mode is the most important part of the game, offering a way to buy better trucks in a progression similar to the Gran Turismo series. The Career mode also gives the player six purpose-built race vehicles: Chevrolet TrailBlazer Race SUV 2WD, Dodge Dakota Race Truck 4WD, Ford F-150 Race Truck 2WD, Mitsubishi Pajero Rally 4WD, Nissan Xterra Race SUV 4WD, and the Toyota Tundra Race Truck 2WD. They cost anywhere from $350,000 up to $850,000. These are the fastest vehicles in the game. Recently, KC Vale acquired permission from Terminal Reality, Incorporated to upload the game to his Web server, but the original vehicles have been removed due to an expired license. Multiplayer Although this game was released many years ago, the online community still exists with a fair number of players and some moderators who manage chat rooms. Dedicated servers are long gone, but it is possible to host games over the Internet and join other player-hosted games. The game has been brought back online thanks to the Dreamcast community as one of the more than 20 games so far to be brought back online.
https://en.wikipedia.org/wiki/Singlet%20oxygen
Singlet oxygen, systematically named dioxygen(singlet) and dioxidene, is a gaseous inorganic chemical with the formula O=O (also written as 1O2), which is in a quantum state where all electrons are spin paired. It is kinetically unstable at ambient temperature, but the rate of decay is slow. The lowest excited state of the diatomic oxygen molecule is a singlet state. It is a gas with physical properties differing only subtly from those of the more prevalent triplet ground state of O2. In terms of its chemical reactivity, however, singlet oxygen is far more reactive toward organic compounds. It is responsible for the photodegradation of many materials but can be put to constructive use in preparative organic chemistry and photodynamic therapy. Trace amounts of singlet oxygen are found in the upper atmosphere and in polluted urban atmospheres, where it contributes to the formation of lung-damaging nitrogen dioxide. It often coexists with ozone in environments that generate both, such as pine forests, where turpentine undergoes photodegradation. The terms 'singlet oxygen' and 'triplet oxygen' derive from each form's number of electron spins. The singlet has only one possible arrangement of electron spins with a total quantum spin of 0, while the triplet has three possible arrangements of electron spins with a total quantum spin of 1, corresponding to three degenerate states. In spectroscopic notation, the lowest singlet and triplet forms of O2 are labeled 1Δg and 3Σg−, respectively. Electronic structure Singlet oxygen refers to one of two singlet electronic excited states. The two singlet states are denoted 1Σg+ and 1Δg (the preceding superscript "1" indicates a singlet state). The singlet states of oxygen are 158 and 95 kilojoules per mole higher in energy than the triplet ground state of oxygen. Under most common laboratory conditions, the higher energy 1Σg+ singlet state rapidly converts to the more stable, lower energy 1Δg singlet state.
https://en.wikipedia.org/wiki/Lettering
Lettering is an umbrella term that covers the art of drawing letters, instead of simply writing them. Lettering is considered an art form, where each letter in a phrase or quote acts as an illustration. Each letter is created with attention to detail and has a unique role within a composition. Lettering is created as an image, with letters that are meant to be used in a unique configuration. Lettering words do not always translate into alphabets that can later be used in a typeface, since they are created with a specific word in mind. Examples Lettering includes that used for purposes of blueprints and comic books, as well as decorative lettering such as sign painting and custom graphics. For instance, on posters, for a letterhead or business wordmark, lettering in stone, lettering for advertisements, tire lettering, fileteado, graffiti, or on chalkboards. Lettering may be drawn, incised, applied using stencils, created digitally with a stylus, or built in a vector program. Lettering that was not created using digital tools is commonly referred to as hand-lettering. In the past, almost all decorative lettering other than that on paper was created as custom or hand-painted lettering. The use of fonts in place of lettering has increased due to new printing methods, phototypesetting, and digital typesetting, which allow fonts to be printed at any desired size. Lettering has been particularly important in Islamic art, due to the Islamic practice of avoiding depictions of sentient beings generally and of Muhammad in particular, and instead using representations in the form of Islamic calligraphy, including hilyes, or artforms based on written descriptions of Muhammad. More recently, there has been an influx of aspiring artists attempting hand-lettering with brush pens and digital mediums. Some popular styles are sans serif, serif, cursive/script, vintage, blackletter ("gothic") calligraphy, graffiti, and creative lettering. Related artforms Lettering can be confused
https://en.wikipedia.org/wiki/Ethogram
An ethogram is a catalogue or inventory of behaviours or actions exhibited by an animal used in ethology. The behaviours in an ethogram are usually defined to be mutually exclusive and objective, avoiding subjectivity and functional inference as to their possible purpose. For example, a species may use a putative threat display, which in the ethogram is given a descriptive name such as "head forward" or "chest-beating display", and not "head forward threat" or "chest-beating threat". This degree of objectivity is required because what looks like "courtship" might have a completely different function, and in addition, the same motor patterns in different species can have very different functions (e.g. tail wagging in cats and dogs). Objectivity and clarity in the definitions of behaviours also improve inter-observer reliability. Often, ethograms are hierarchical in presentation. The defined behaviours are recorded under broader categories of behaviour which may allow functional inference such that "head forward" is recorded under "Aggression". In ethograms of social behaviour, the ethogram may also indicate the "Giver" and "Receiver" of activities. Sometimes, the definition of a behaviour in an ethogram may have arbitrary components. For example, "Stereotyped licking" might be defined as "licking the bars of the cage more than 5 times in 30 seconds". The definition may be arguable, but if it is stated clearly, it fulfils the requirements of scientific repeatability and clarity of reporting and data recording. Some ethograms are given in pictorial form and not only catalogue the behaviours but indicate the frequency of their occurrence and the probability that one behaviour follows another. This probability can be indicated numerically or by the thickness of an arrow connecting the two behaviours. Sometimes the proportion of time that each behaviour occupies can be represented in a pie chart or bar chart. Animal welfare science Ethograms are used extensively
https://en.wikipedia.org/wiki/Remote%20Database%20Access
Remote database access (RDA) is a protocol standard for database access produced in 1993 by the International Organization for Standardization (ISO). Despite early efforts to develop proof-of-concept implementations of RDA for major commercial remote database management systems (RDBMSs) (including Oracle, Rdb, NonStop SQL and Teradata), this standard has not found commercial support from database vendors. The standard has since been withdrawn, and replaced by ISO/IEC 9579:1999 - Information technology -- Remote Database Access for SQL, which has also been withdrawn, and replaced by ISO/IEC 9579:2000 Information technology -- Remote database access for SQL with security enhancement. Purpose The purpose of RDA is to describe the connection of a database client to a database server. It includes features for: communicating database operations and parameters from the client to the server, transporting result data from the server to the client in return, database transaction management, and exchange of information. RDA is an application-level protocol, inasmuch as it builds on an existing network connection between client and server. In the case of TCP/IP connections, RFC 1006 is used for implementing RDA. History RDA was published in 1993 as a combined standard of ANSI, ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission). The standards definition comprises two parts: ANSI/ISO/IEC 9579-1:1993 - Remote Database Access -- Part 1: Generic Model, Service and Protocol ANSI/ISO/IEC 9579-2:1993
https://en.wikipedia.org/wiki/Boltzmann%20relation
In a plasma, the Boltzmann relation describes the number density of an isothermal charged particle fluid when the thermal and the electrostatic forces acting on the fluid have reached equilibrium. In many situations, the electron density of a plasma is assumed to behave according to the Boltzmann relation, due to the electrons' small mass and high mobility.

Equation
If the local electrostatic potentials at two nearby locations are φ1 and φ2, the Boltzmann relation for the electrons takes the form:

ne(φ2) = ne(φ1) exp[e(φ2 − φ1)/(kB Te)]

where ne is the electron number density, Te is the temperature of the plasma, and kB is the Boltzmann constant.

Derivation
A simple derivation of the Boltzmann relation for the electrons can be obtained using the momentum fluid equation of the two-fluid model of plasma physics in the absence of a magnetic field. When the electrons reach dynamic equilibrium, the inertial and the collisional terms of the momentum equations are zero, and the only terms left in the equation are the pressure and electric terms. For an isothermal fluid, the pressure force takes the form −∇p = −kB Te ∇ne, while the electric term is e ne ∇φ. Integration leads to the expression given above.

In many problems of plasma physics, it is not useful to calculate the electric potential on the basis of the Poisson equation because the electron and ion densities are not known a priori, and if they were, because of quasineutrality the net charge density is the small difference of two large quantities, the electron and ion charge densities. If the electron density is known and the assumptions hold sufficiently well, the electric potential can be calculated simply from the Boltzmann relation.

Inaccurate situations
Discrepancies with the Boltzmann relation can occur, for example, when oscillations occur so fast that the electrons cannot find a new equilibrium (see e.g. plasma oscillations) or when the electrons are prevented from moving by a magnetic field (see e.g. lower hybrid oscillations).

See also
List of plasma (physics) articles
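As a quick numerical check of the relation above, the following sketch evaluates the density ratio ne(φ2)/ne(φ1); the potentials and electron temperature used are illustrative values, not from the source:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K

def boltzmann_density_ratio(phi1, phi2, te_kelvin):
    """Ratio n_e(phi2)/n_e(phi1) for an isothermal electron fluid."""
    return math.exp(E_CHARGE * (phi2 - phi1) / (K_B * te_kelvin))

# Illustrative numbers: a 1 V potential rise in a 1 eV (~11605 K) plasma
# raises the local electron density by a factor of e ≈ 2.718.
print(boltzmann_density_ratio(0.0, 1.0, 11604.5))
```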
https://en.wikipedia.org/wiki/Furazolidone
Furazolidone is a nitrofuran antibacterial agent and monoamine oxidase inhibitor (MAOI). It is marketed by Roberts Laboratories under the brand name Furoxone and by GlaxoSmithKline as Dependal-M.

Medical uses
Furazolidone has been used in human and veterinary medicine. It has a broad spectrum of activity, being active against:
Gram-positive bacteria
Clostridium perfringens
Corynebacterium pyogenes
Streptococci
Staphylococci
Gram-negative bacteria
Escherichia coli
Salmonella dublin
Salmonella typhimurium
Shigella
Protozoa
Giardia lamblia
Eimeria species
Histomonas meleagridis

Use in humans
In humans it has been used to treat diarrhoea and enteritis caused by bacterial or protozoan infections, including traveller's diarrhoea, cholera and bacteremic salmonellosis. Since the early 1970s it has been used in China to treat peptic ulcers, where the mechanism is treatment of the causative Helicobacter pylori infection. In 2002, a journal article suggested its use in the treatment of Helicobacter pylori infections in children. Furazolidone has also been used for giardiasis (due to Giardia lamblia), amoebiasis and shigellosis, though it is not a first-line treatment.

Use in animals
As a veterinary medicine, furazolidone has been used with some success to treat salmonids for Myxobolus cerebralis infections. It has also been used in aquaculture. Since furazolidone is a nitrofuran antibiotic, its use in food animals is currently prohibited by the FDA under the Animal Medicinal Drug Use Clarification Act, 1994. Furazolidone is no longer available in the US.

Use in laboratory
It is used to differentiate micrococci and staphylococci.

Mechanism of action
It is believed to work by crosslinking of DNA.

Side effects
Though an effective antibiotic when all others fail against extremely drug-resistant infections, it has many side effects, including inhibition of monoamine oxidase, and, as with other nitrofurans generally, minimum inhibitory concentrations also produce systemic toxic
https://en.wikipedia.org/wiki/Benomyl
Benomyl (also marketed as Benlate) is a fungicide introduced in 1968 by DuPont. It is a systemic benzimidazole fungicide that is selectively toxic to microorganisms and invertebrates, especially earthworms, but nontoxic toward mammals. Due to the prevalence of resistance of parasitic fungi to benomyl, it and similar pesticides are of diminished effectiveness. Nonetheless it is widely used.

Toxicity
Benomyl is of low toxicity to mammals. It has a reported LD50 of "greater than 10,000 mg/kg/day for rats". Skin irritation may occur through industrial exposure, and florists, mushroom pickers and floriculturists have reported allergic reactions to benomyl. In a laboratory study, dogs fed benomyl in their diets for three months developed no major toxic effects, but did show evidence of altered liver function at the highest dose (150 mg/kg). With longer exposure, more severe liver damage occurred, including cirrhosis. The US Environmental Protection Agency classified benomyl as a possible carcinogen. Carcinogenicity studies have produced conflicting results. A two-year experimental study on mice showed it "probably" causes an increase in liver tumours. The British Ministry of Agriculture, Fisheries and Food took the view that this was brought about by the hepatotoxic effect of benomyl. With regard to occupational exposure to benomyl, the Occupational Safety and Health Administration has set a permissible exposure limit of 15 mg/m3 for total exposure over an eight-hour time-weighted average, and 5 mg/m3 for respiratory exposure.

Birth defects
In 1996, a Miami jury awarded US$4 million to a child whose mother was exposed to Benlate during pregnancy. The child was born without eyes (anophthalmia). The mother had been exposed to an unusually high dose of Benlate from a nearby farm during pregnancy. An important issue in the case was the timing and magnitude of exposure. In October 2008, DuPont paid confidential settlements to two New Zealand families who
https://en.wikipedia.org/wiki/String-to-string%20correction%20problem
In computer science, the string-to-string correction problem refers to determining the minimum-cost sequence of edit operations necessary to change one string into another (i.e., computing the shortest edit distance). Each type of edit operation has its own cost value. A single edit operation may be changing a single symbol of the string into another (cost WC), deleting a symbol (cost WD), or inserting a new symbol (cost WI). If all edit operations have the same unit cost (WC = WD = WI = 1) the problem is the same as computing the Levenshtein distance of two strings.

Several algorithms exist to determine string distance efficiently and to specify the minimum sequence of transformation operations required. Such algorithms are particularly useful for delta creation, where something is stored as a set of differences relative to a base version. This allows several versions of a single object to be stored much more efficiently than storing them separately; the same holds for single versions of several objects that do not differ greatly, or anything in between. Notably, such difference algorithms are used in molecular biology to provide a measure of kinship between different kinds of organisms based on the similarities of their macromolecules (such as proteins or DNA).

Extension
The extended variant of the problem includes a new type of edit operation: swapping any two adjacent symbols, with a cost of WS. This version can be solved in polynomial time under certain restrictions on edit operation costs. Robert A. Wagner (1975) showed that the general problem is NP-complete. In particular, he proved that when WI < WC = WD = ∞ and 0 < WS < ∞ (or equivalently, when changing and deletion are not permitted), the problem is NP-complete.
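For the basic problem without swaps, the standard dynamic-programming solution runs in O(mn) time for strings of lengths m and n. A minimal sketch with the three costs WC, WD and WI as parameters; the function name and default costs are illustrative:

```python
def string_correction_cost(a, b, wc=1, wd=1, wi=1):
    """Minimum total cost to turn string a into string b using
    change (wc), delete (wd) and insert (wi) operations."""
    m, n = len(a), len(b)
    # d[i][j] = cheapest way to turn a[:i] into b[:j].
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * wd              # delete everything
    for j in range(1, n + 1):
        d[0][j] = j * wi              # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            change = 0 if a[i - 1] == b[j - 1] else wc
            d[i][j] = min(d[i - 1][j - 1] + change,  # change (or keep)
                          d[i - 1][j] + wd,          # delete a[i-1]
                          d[i][j - 1] + wi)          # insert b[j-1]
    return d[m][n]

# With unit costs this is the Levenshtein distance: prints 3.
print(string_correction_cost("kitten", "sitting"))
```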
https://en.wikipedia.org/wiki/Finings
Finings are substances that are usually added at or near the completion of the processing of making wine, beer, and various nonalcoholic juice beverages. They are used to remove organic compounds, either to improve clarity or to adjust flavor or aroma. The removed compounds may be sulfides, proteins, polyphenols, benzenoids, or copper ions. Unless they form a stable sediment in the final container, the spent finings are usually discarded from the beverage along with the target compounds that they capture.

Substances used as finings include egg whites, blood, milk, isinglass, and Irish moss. These are still used by some producers, but more modern substances have also been introduced and are more widely used, including bentonite, gelatin, casein, carrageenan, alginate, diatomaceous earth, pectinase, pectolyase, PVPP, kieselsol (colloidal silica), copper sulfate, dried albumen (egg whites), hydrated yeast, and activated carbon.

Actions
Finings’ actions may be broadly categorized as electrostatic, adsorbent, ionic, or enzymatic. The electrostatic types comprise the vast majority, including all but activated carbon, fining yeast, PVPP, copper sulfate, pectinase and pectolyase. Their purpose is to selectively remove proteins, tannins (polyphenolics) and coloring particles (melanoidins). They must be used as a batch technique, as opposed to flow-through processing methods such as filters. Their particles each carry an electric charge that attracts the oppositely charged particles of the colloidal dispersion they are breaking. The result is that the two substances become bound as a stable complex, their net charge becoming neutral. An agglomerated semi-solid thus forms, which may be separated from the beverage either as a floating or a settled mass. The only adsorbent types of finings in use are activated carbon and specialized fining yeasts. Although activated carbon may be implemented as a flow-through filter, it is also commonly utilized as a ba
https://en.wikipedia.org/wiki/Robinson%E2%80%93Schensted%20correspondence
In mathematics, the Robinson–Schensted correspondence is a bijective correspondence between permutations and pairs of standard Young tableaux of the same shape. It has various descriptions, all of which are of algorithmic nature, it has many remarkable properties, and it has applications in combinatorics and other areas such as representation theory. The correspondence has been generalized in numerous ways, notably by Knuth to what is known as the Robinson–Schensted–Knuth correspondence, and a further generalization to pictures by Zelevinsky.

The simplest description of the correspondence is using the Schensted algorithm, a procedure that constructs one tableau by successively inserting the values of the permutation according to a specific rule, while the other tableau records the evolution of the shape during construction. The correspondence had been described, in a rather different form, much earlier by Robinson, in an attempt to prove the Littlewood–Richardson rule. The correspondence is often referred to as the Robinson–Schensted algorithm, although the procedure used by Robinson is radically different from the Schensted algorithm, and almost entirely forgotten. Other methods of defining the correspondence include a nondeterministic algorithm in terms of jeu de taquin.

The bijective nature of the correspondence relates it to the enumerative identity

\sum_{\lambda \in \mathcal{P}_n} (t_\lambda)^2 = n!

where \mathcal{P}_n denotes the set of partitions of n (or of Young diagrams with n squares), and t_\lambda denotes the number of standard Young tableaux of shape \lambda.

The Schensted algorithm
The Schensted algorithm starts from the permutation \sigma written in two-line notation

\sigma = \begin{pmatrix} 1 & 2 & \cdots & n \\ \sigma_1 & \sigma_2 & \cdots & \sigma_n \end{pmatrix}

where \sigma_i = \sigma(i), and proceeds by constructing sequentially a sequence of (intermediate) ordered pairs of Young tableaux of the same shape:

(P_0, Q_0), (P_1, Q_1), \ldots, (P_n, Q_n),

where P_0 = Q_0 = \emptyset are empty tableaux. The output tableaux are P = P_n and Q = Q_n. Once (P_{i-1}, Q_{i-1}) is constructed, one forms P_i by inserting \sigma_i into P_{i-1}, and then Q_i by adding an entry i to Q_{i-1} in the square added to the shape by the insertion (so that P_i and Q_i have equal shapes for all i).
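A minimal Python sketch of the Schensted row-insertion procedure just described; names are illustrative, and the permutation is taken in one-line notation (σ1, …, σn):

```python
import bisect

def robinson_schensted(perm):
    """Return the pair (P, Q) of standard Young tableaux associated
    with a permutation given in one-line notation."""
    p_tableau, q_tableau = [], []
    for step, value in enumerate(perm, start=1):
        row = 0
        while True:
            if row == len(p_tableau):          # fell off the bottom:
                p_tableau.append([value])      # start a new row in P,
                q_tableau.append([step])       # record the step in Q
                break
            # Index of the first entry in this row greater than value
            # (entries are distinct, so bisect_left finds it).
            pos = bisect.bisect_left(p_tableau[row], value)
            if pos == len(p_tableau[row]):     # value goes at the end
                p_tableau[row].append(value)
                q_tableau[row].append(step)
                break
            # Otherwise bump the displaced entry into the next row.
            p_tableau[row][pos], value = value, p_tableau[row][pos]
            row += 1
    return p_tableau, q_tableau

P, Q = robinson_schensted([3, 1, 2])
print(P)  # [[1, 2], [3]]
print(Q)  # [[1, 3], [2]]
```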
https://en.wikipedia.org/wiki/CIDNP
CIDNP (chemically induced dynamic nuclear polarization), often pronounced like "kidnip", is a nuclear magnetic resonance (NMR) technique used to study chemical reactions that involve radicals. It detects the non-Boltzmann (non-thermal) nuclear spin state distribution produced in these reactions as enhanced absorption or emission signals.

CIDNP was discovered in 1967 by Bargon and Fischer, and, independently, by Ward and Lawler. Early theories were based on dynamic nuclear polarisation (hence the name) using the Overhauser effect. Subsequent experiments, however, found that in many cases DNP fails to explain the phase of the CIDNP polarization. In 1969 an alternative explanation was proposed, which relies on the nuclear spins affecting the probability of a radical pair recombining or separating. It is related to chemically induced dynamic electron polarization (CIDEP) insofar as the radical-pair mechanism explains both phenomena.

Concept and experimental set-up
The effect is detected by NMR spectroscopy, usually in the 1H NMR spectrum, as enhanced absorption or emission signals ("negative peaks"). The effect arises when unpaired electrons (radicals) are generated during a chemical reaction involving heat or light within the NMR tube. The magnetic field in the spectrometer interacts with the magnetic fields caused by the spins of the protons. The two spin states of the protons produce two slightly different energy levels. Under normal conditions, slightly more nuclei, about 10 parts per million, are found in the lower energy level. In contrast, CIDNP produces greatly imbalanced populations, with far greater numbers of spins in the upper energy level in some products of the reaction and greater numbers in the lower energy level in others. The spectrometer uses radio frequencies to detect these differences.

Radical pair mechanism
The radical pair mechanism is currently accepted as the most common cause of CIDNP. This theory was proposed by Closs, and, independently, by Kapte
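The quoted equilibrium excess of roughly 10 parts per million follows from the Boltzmann distribution: the fractional population difference is tanh(ΔE/2kBT) with ΔE = hν. A small sketch of that arithmetic, assuming an illustrative 100 MHz proton frequency and room temperature (values not from the source):

```python
import math

H_PLANCK = 6.62607015e-34   # Planck constant, J·s
K_B = 1.380649e-23          # Boltzmann constant, J/K

def thermal_polarization(larmor_hz, temp_k=298.0):
    """Fractional excess population of the lower proton spin level,
    (N_lower - N_upper) / N_total, at thermal equilibrium."""
    delta_e = H_PLANCK * larmor_hz          # energy gap between levels
    return math.tanh(delta_e / (2 * K_B * temp_k))

# At 100 MHz and 298 K the excess is ~8 parts per million,
# of the order of the figure quoted above.
print(f"{thermal_polarization(100e6) * 1e6:.1f} ppm")
```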