source | text |
|---|---|
https://en.wikipedia.org/wiki/Rod%20Holt | Frederick Rodney Holt (born 1934) is an American computer engineer and political activist. He is Apple employee #5, and developed the unique power supply for the 1977 Apple II. Actor Ron Eldard portrayed him in the 2013 film, Jobs.
Background
Holt was born in 1934 to a psychiatry resident father and artist and teacher mother. He became interested in electronics by the age of 14 and taught ham radio courses for Wellesley High School by the age of 16.
In 1952, after graduating from high school, Holt married his high school girlfriend Joanne and enrolled at Ohio State University as a math major. He and Joanne had two children, Christine and Cheryl, during this period. Holt later stated that while at OSU, he also "became entranced with motorcycles and opened up my own motorcycle shop. That adventure failed within a year, however, and I then worked in the electronics industry to support my family. I continued to race bikes intermittently for the next twenty years." By 1958, when he was a grad student at OSU, he had also become a political activist. He later became involved in OSU's Free Speech Movement, served as editor of the Free Speech Press, and came to identify as a socialist.
After graduate school, he became an electrical engineer with the Hickok Electrical Instrument Company in Cleveland, Ohio, and later joined Atari as an analog engineer.
Apple Computer
During the early development of the Apple II, Apple's co-founder Steve Jobs asked his former boss, Atari's Al Alcorn, for help with the power supply. Alcorn redirected Jobs to Holt, who saw himself as "a second-string quarterback" at Atari. Holt was initially "skeptical of Jobs and of Apple" (Swaine and Freiberger note that he "had trouble understanding the West Coast culture that shaped Apple's Founders"), telling Jobs that his rate was $200 per day. Jobs, however, replied that "we can afford you," and Holt joined the Apple II team, in part responding to Alcorn's request to "help the kids out." Ho |
https://en.wikipedia.org/wiki/VEGF%20receptor | VEGF receptors (VEGFRs) are receptors for vascular endothelial growth factor (VEGF). There are three main subtypes of VEGFR, numbered 1, 2 and 3. Depending on alternative splicing, they may be membrane-bound (mbVEGFR) or soluble (sVEGFR).
Inhibitors of VEGFR are used in the treatment of cancer.
VEGF
Vascular endothelial growth factor (VEGF) is an important signaling protein involved in both vasculogenesis (the formation of the circulatory system) and angiogenesis (the growth of blood vessels from pre-existing vasculature). As its name implies, VEGF activity is restricted mainly to cells of the vascular endothelium, although it does have effects on a limited number of other cell types (e.g. stimulation of monocyte/macrophage migration). In vitro, VEGF has been shown to stimulate endothelial cell mitogenesis and cell migration. VEGF also enhances microvascular permeability and is sometimes referred to as vascular permeability factor.
Receptor biology
All members of the VEGF family stimulate cellular responses by binding to tyrosine kinase receptors (the VEGFRs) on the cell surface, causing them to dimerize and become activated through transphosphorylation. The VEGF receptors have an extracellular portion consisting of 7 immunoglobulin-like domains, a single transmembrane spanning region and an intracellular portion containing a split tyrosine-kinase domain.
VEGF-A binds to VEGFR-1 (Flt-1) and VEGFR-2 (KDR/Flk-1). VEGFR-2 appears to mediate almost all of the known cellular responses to VEGF. The function of VEGFR-1 is less well defined, although it is thought to modulate VEGFR-2 signaling. Another function of VEGFR-1 is to act as a dummy/decoy receptor, sequestering VEGF from VEGFR-2 binding (this appears to be particularly important during vasculogenesis in the embryo). In fact, an alternatively spliced form of VEGFR-1 (sFlt1) is not a membrane bound protein but is secreted and functions primarily as a decoy. A third receptor has been discovered (VEGFR-3), however |
https://en.wikipedia.org/wiki/Prospect%20theory | Prospect theory is a theory of behavioral economics, judgment and decision making that was developed by Daniel Kahneman and Amos Tversky in 1979. The theory was cited in the decision to award Kahneman the 2002 Nobel Memorial Prize in Economics.
Based on results from controlled studies, it describes how individuals assess their loss and gain perspectives in an asymmetric manner (see loss aversion). For example, for some individuals, the pain from losing $1,000 could only be compensated by the pleasure of earning $2,000. Thus, contrary to the expected utility theory (which models the decision that perfectly rational agents would make), prospect theory aims to describe the actual behavior of people.
In the original formulation of the theory, the term prospect referred to the predictable results of a lottery. However, prospect theory can also be applied to the prediction of other forms of behaviors and decisions.
Prospect theory challenges the expected utility theory developed by John von Neumann and Oskar Morgenstern in 1944 and constitutes one of the first economic theories built using experimental methods.
Overview
Prospect theory stems from loss aversion: the observation that agents feel losses more strongly than equivalent gains. It centers on the idea that people evaluate their utility in terms of "gains" and "losses" relative to a reference point, which differs from person to person and depends on their individual situation. Thus, rather than making decisions like a rational agent (i.e. using expected utility theory and choosing the maximum value), people make decisions in relative terms, not absolutes.
Consider two scenarios:
100% chance to gain $450 or 50% chance to gain $1000
100% chance to lose $500 or 50% chance to lose $1100
Prospect theory suggests that:
When faced with a risky choice leading to gains, agents are risk averse, preferring the certain outcome with a lower expected utility (concave va |
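The asymmetry in these two scenarios can be made concrete with a short numerical sketch. The value and weighting functions below use the median parameter estimates reported by Tversky and Kahneman in their 1992 cumulative prospect theory paper; the functional forms and parameter values are standard in the literature but are not given in the excerpt above.

```python
# Evaluating the two scenarios with a prospect-theory value function and
# probability weighting, using Tversky & Kahneman's (1992) median estimates:
# alpha = beta = 0.88, lambda = 2.25, gamma = 0.61 (gains), delta = 0.69 (losses).

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, c):
    """Inverse-S probability weighting: overweights small p, underweights moderate p."""
    return p ** c / (p ** c + (1 - p) ** c) ** (1 / c)

# Scenario 1 (gains): 100% chance of $450 vs. 50% chance of $1000
certain_gain = value(450)
risky_gain = weight(0.5, 0.61) * value(1000)

# Scenario 2 (losses): 100% chance of -$500 vs. 50% chance of -$1100
certain_loss = value(-500)
risky_loss = weight(0.5, 0.69) * value(-1100)

print(certain_gain > risky_gain)  # True: risk aversion over gains
print(risky_loss > certain_loss)  # True: risk seeking over losses
```

With these parameters the certain $450 outranks the gamble for gains, while the gamble outranks the certain $500 loss, matching the risk-averse/risk-seeking pattern the theory predicts.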
https://en.wikipedia.org/wiki/Journal%20of%20Physics%20G | Journal of Physics G: Nuclear and Particle Physics is a peer-reviewed journal that publishes theoretical and experimental research into nuclear physics, particle physics and particle astrophysics, including all interface areas between these fields.
The editor-in-chief is Jacek Dobaczewski, University of York, UK.
Scope
The journal publishes research articles on:
theoretical and experimental topics in the physics of elementary particles and fields;
intermediate-energy physics and nuclear physics;
experimental and theoretical research in particle, neutrino, and nuclear astrophysics;
research arising from all interface areas among these fields.
Research is published in the following formats:
Research Papers: Reports of original and high-quality research work;
Research Notes: Contributions from individuals (or small groups) within large collaborations, containing early results of analyses, detector development, simulations, etc. which might not otherwise be published in the wider literature;
Topical Reviews: Specially commissioned review articles on areas of current interest;
LabTalk: Article summaries written by the researchers themselves which introduce the findings, techniques, and possible applications of their research.
Abstracting and indexing information
The journal is indexed in INSPEC Information Services, ISI (Science Citation Index, SciSearch, ISI Alerting Services, Current Contents/Physical, Chemical and Earth Sciences), Article@INIST, and Chemical Abstracts. |
https://en.wikipedia.org/wiki/Trivial%20File%20Transfer%20Protocol | Trivial File Transfer Protocol (TFTP) is a simple lockstep File Transfer Protocol which allows a client to get a file from or put a file onto a remote host. One of its primary uses is in the early stages of nodes booting from a local area network. TFTP has been used for this application because it is very simple to implement.
TFTP was first standardized in 1981, and the current specification for the protocol can be found in RFC 1350.
Overview
Due to its simple design, TFTP can be easily implemented by code with a small memory footprint. It is therefore the protocol of choice for the initial stages of any network booting strategy, such as BOOTP, PXE, or BSDP, on targets ranging from highly resourced computers to very low-resourced single-board computers (SBCs) and systems on a chip (SoCs). It is also used to transfer firmware images and configuration files to network appliances such as routers, firewalls, and IP phones. Today, TFTP is virtually unused for Internet transfers.
TFTP's design was influenced by the earlier protocol EFTP, which was part of the PARC Universal Packet protocol suite. TFTP was first defined in 1980 in IEN 133.
In June 1981, The TFTP Protocol (Revision 2) was published as RFC 783; it was updated in July 1992 by RFC 1350, which fixed, among other things, the Sorcerer's Apprentice Syndrome. In March 1995, the TFTP Option Extension (RFC 1782, later updated in May 1998 by RFC 2347) defined the option negotiation mechanism, which establishes a framework for file transfer options to be negotiated prior to the transfer, in a manner consistent with TFTP's original specification.
TFTP is a simple protocol for transferring files, implemented on top of the UDP/IP protocols using well-known port number 69. TFTP was designed to be small and easy to implement, and therefore it lacks most of the advanced features offered by more robust file transfer protocols. TFTP only reads and writes files from or to a remote server. It cannot list, delete, or rename files |
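To illustrate the protocol's simplicity, here is a minimal sketch of building and parsing a TFTP read request (RRQ) as specified in RFC 1350. The filename is hypothetical, and a real client would send the packet over UDP to port 69.

```python
import struct

OP_RRQ, OP_WRQ, OP_DATA, OP_ACK, OP_ERROR = 1, 2, 3, 4, 5  # RFC 1350 opcodes

def build_rrq(filename, mode="octet"):
    """RRQ packet: 2-byte big-endian opcode, then NUL-terminated filename and mode."""
    return (struct.pack("!H", OP_RRQ)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

def parse_rrq(packet):
    """Recover (opcode, filename, mode) from an RRQ/WRQ packet."""
    (opcode,) = struct.unpack("!H", packet[:2])
    filename, mode, _ = packet[2:].split(b"\x00")
    return opcode, filename.decode("ascii"), mode.decode("ascii")

pkt = build_rrq("pxelinux.0")  # hypothetical boot-loader filename
print(parse_rrq(pkt))          # (1, 'pxelinux.0', 'octet')
```

The entire request fits in a handful of bytes, which is why TFTP servers and clients can live in boot ROMs with tiny memory budgets.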
https://en.wikipedia.org/wiki/Radio%20object%20with%20continuous%20optical%20spectrum | Radio Objects with Continuous Optical Spectra (abbr. ROCOS; also referred to as ROCOSes) is a group of about 80 astrophysical objects characterized by optical spectra anomalously devoid of emission or absorption features, which makes it impossible to determine their distances and locations in relation to our galaxy. They are considered to be a subclass of blazars, and are similar in their spectral characteristics to DC-dwarfs and single stellar-mass black holes.
Discovery and study
Radio Objects with Continuous Optical Spectra, or ROCOSes, were discovered in the 1970s. Among the discoverers was a group of Soviet astrophysicists, who studied them at the Crimean Astrophysical Observatory and the Special Astrophysical Observatory of the Russian Academy of Science, using the former's 2.6-meter optical telescope and the latter's 6-meter optical telescope (BTA-6), along with a 1000-channel photon counter and photometers. The group published their findings in a series of articles in the Russian scientific journals Astronomy Letters and Astronomy Reports.
Criteria
An astronomical radio object is classified as a ROCOS if it possesses (a) an optical image with stellar appearance, which is identified with a radio source, and (b) no emission or absorption features in its optical spectrum, except for those due to the galactic interstellar medium, with a signal-to-noise ratio at the level of those observable for quasar candidates. About 8% of the known astronomical radio objects satisfy these two criteria and are considered ROCOSes.
Properties
The absence of distinct emission or absorption lines in the ROCOSes' spectra makes them very similar in this regard to highly polarized quasars (HPQ), BL Lac objects, and single stellar-mass black holes. The absence of optical spectral features also makes it impossible to use red shift for determining their distances or even ascertaining if they are located within or outside our galaxy. |
https://en.wikipedia.org/wiki/Webster%20Wells | Webster Wells (1851–1916) was an American mathematician known primarily for his authorship of mathematical textbooks.
Early life and career
Webster Wells was born at Roxbury, Massachusetts, on September 4, 1851. His parents, Thomas Foster Wells (1822–1903) and Sarah Morrill Wells (1828–1897), initially named him Thomas Wells but, presumably after the death of the statesman Daniel Webster in 1852, renamed him Daniel Webster Wells, and from at least 1860 he was known as Webster Wells. Samuel Adams, the Boston brewer and patriot, was a great-great-grandfather, and the poets Thomas Wells (1790–1861) and Anna Maria (Foster) Wells (1795–1868) were grandparents. The architect Joseph Morrill Wells was his brother.
Beginning in 1863, Webster Wells studied at the West Newton English and Classical School (aka The Allen School), West Newton, Massachusetts, and then attended the Massachusetts Institute of Technology from which he graduated in 1873 with a Bachelor of Science degree. Wells taught mathematics at MIT, where he was successively instructor (1873–1880), assistant professor (1883), associate professor (1885), and full professor (1893–1911).
Personal life
Webster Wells married Emily Walker Langdon at Boston on June 21, 1876.
Wells died at Arlington, Massachusetts, on May 23, 1916, from the complications of Huntington's Chorea. He was buried in Oak Grove Cemetery, Medford, Massachusetts.
Textbooks
Wells' textbooks were used in many schools and colleges in the United States. Among the many titles were:
Webster Wells. Elementary Treatise on Logarithms (Boston, MA: Robert S. Davies Co., 1878).
Webster Wells. University Algebra (Boston MA: Leach, Shewell & Sanborn, 1880), one of "Greenleaf's Mathematical Series."
Webster Wells. Practical Textbook on Plane and Spherical Trigonometry (Boston, MA: Leach, Shewell & Sanborn, 1883).
Webster Wells. A Complete Course in Algebra for Academies and High Schools (Boston, MA: Leach, Shewell & Sanborn, 1885).
Webster Wells. The |
https://en.wikipedia.org/wiki/Schedules%20Direct | Schedules Direct is a non-profit organization that provides a low-cost television program listing service for open source and freeware digital video recorders.
Developers from several different projects including MythTV, XMLTV, and GB-PVR founded Schedules Direct in response to Tribune Media Services's (TMS's) decision to shut down its free Data Direct program listing service as of September 1, 2007. MythTV and other such software use the data to display an on-screen electronic program guide (EPG), and to schedule upcoming recordings. Schedules Direct contracts with TMS to purchase a license to redistribute its data—which TiVo and other commercial digital video recorders also use—to Schedules Direct members. Individuals may become Schedules Direct members for $35 a year, and members may use the listings service with approved open source and freeware applications. |
https://en.wikipedia.org/wiki/Institute%20of%20Theoretical%20Physics%2C%20Saclay | The Institute of Theoretical Physics ("Institut de physique théorique") (IPhT) is a research institute of the Direction of Fundamental Research (DRF) of the French Alternative Energies and Atomic Energy Commission (CEA). The Institute is also a joint research unit of the Institute of Physics (INP), a subsidiary of the French National Center for Scientific Research (CNRS). It is associated with Paris-Saclay University. IPhT is situated on the Saclay Plateau, south of Paris.
History
The IPhT was created in 1963 as the "Service de Physique Théorique" (SPhT), succeeding the "Service de Physique Mathématique" (SPM) of CEA. It became an institute (and took the name IPhT) in 2008. It was initially devoted to nuclear physics and superconductivity. Particle physics quickly became an important theme. After its move in 1968 from the main CEA-Saclay site to the present site of Orme des Merisiers, quantum field theory became a major research topic, together with statistical physics. Subsequently, new topics such as conformal theories and matrix models, cosmology and string theory, condensed matter physics and out-of-equilibrium statistical physics, and quantum information found their place there. IPhT is usually considered one of the top theoretical physics research institutes in Europe.
Present research themes
Research at IPhT covers most areas of theoretical physics:
Cosmology and astroparticle physics
Particle physics: quantum chromodynamics, hadron physics, collider physics, scattering amplitudes, physics beyond the Standard Model
Quantum gravity, string theory
Mathematical physics: quantum field theory, conformal field theory, integrable systems, topological recursion, combinatorics, random geometries
Condensed matter physics
Statistical physics: out-of-equilibrium systems, complex systems, network theory, biophysics
Quantum information science
IPhT organizes each spring the "Itzykson Conference", an international meeting centered on a theme which is diffe |
https://en.wikipedia.org/wiki/Instituto%20de%20Astrof%C3%ADsica%20de%20Canarias | The Instituto de Astrofísica de Canarias (IAC) is an astrophysical research institute located in the Canary Islands, Spain. It was founded in 1975 at the University of La Laguna. It operates two astronomical observatories in the Canary Islands: Roque de los Muchachos Observatory on La Palma, and Teide Observatory on Tenerife.
The current director of the IAC is Rafael Rebolo López. In 2016, English scientist Stephen Hawking was appointed Honorary Professor of the IAC, the first such appointment made by the institute.
See also
Instituto de Astrofísica de Andalucía |
https://en.wikipedia.org/wiki/TIME-ITEM | TIME-ITEM is an ontology of Topics that describes the content of undergraduate medical education. TIME is an acronym for "Topics for Indexing Medical Education"; ITEM is an acronym for "Index de thèmes pour l’éducation médicale." Version 1.0 of the taxonomy has been released and the web application that allows users to work with it is still under development. Its developers are seeking more collaborators to expand and validate the taxonomy and to guide future development of the web application.
History
The development of TIME-ITEM began at the University of Ottawa in 2006. It was initially developed to act as a content index for a curriculum map being constructed there. After its initial presentation at the 2006 conference of the Canadian Association for Medical Education, early collaborators included the University of British Columbia, McMaster University and Queen's University.
Features
The TIME-ITEM ontology is unique in that it is designed specifically for undergraduate medical education. As such, it includes fewer strictly biomedical entries than other common medical vocabularies (such as MeSH or SNOMED CT) but more entries relating to the medico-social concepts of communication, collaboration, professionalism, etc.
Topics within TIME-ITEM are arranged poly-hierarchically, meaning any Topic can have more than one parent. Relationships are established based on the logic that learning about a Topic contributes to the learning of all its parent Topics.
In addition to housing the ontology of Topics, the TIME-ITEM web application can house multiple Outcome frameworks. All Outcomes, whether private Outcomes entered by single institutions or publicly available medical education Outcomes (such as CanMeds 2005) are hierarchically linked to one or more Topics in the ontology. In this way, the contribution of each Topic to multiple Outcomes is made explicit.
The structure of the XML documents exported from TIME-ITEM (which contain the hierarchy of Outco |
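The poly-hierarchy described above, in which a Topic may have several parents and learning it contributes to every ancestor Topic, can be sketched in a few lines. The topic names and structure here are invented for illustration and are not taken from TIME-ITEM.

```python
# Toy poly-hierarchy: each topic maps to a list of parent topics (a DAG,
# not a tree, since a topic may have more than one parent).
parents = {
    "breaking bad news": ["communication", "professionalism"],
    "communication": [],
    "professionalism": [],
}

def ancestors(topic):
    """All topics whose learning is supported, transitively, by `topic`."""
    seen = []
    stack = list(parents.get(topic, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.append(t)
            stack.extend(parents.get(t, []))
    return seen

print(sorted(ancestors("breaking bad news")))  # ['communication', 'professionalism']
```

This is the logic the article describes: crediting one Topic propagates upward to every parent Topic, making each Topic's contribution to multiple Outcomes explicit.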
https://en.wikipedia.org/wiki/Gambler%27s%20ruin | In statistics, gambler's ruin is the fact that a gambler playing a game with negative expected value will eventually go broke, regardless of their betting system.
The concept was initially stated: A persistent gambler who raises their bet to a fixed fraction of the gambler's bankroll after a win, but does not reduce it after a loss, will eventually and inevitably go broke, even if each bet has a positive expected value.
Another statement of the concept is that a persistent gambler with finite wealth, playing a fair game (that is, each bet has an expected value of zero to both sides), will eventually and inevitably go broke against an opponent with infinite wealth. Such a situation can be modeled by a random walk on the real number line. In that context, the gambler will, with probability one, return to their point of origin (which means going broke), and is ruined an infinite number of times if the random walk continues forever. This is a corollary of a general theorem by Christiaan Huygens, which is also known as gambler's ruin. That theorem shows how to compute the probability of each player winning a series of bets that continues until one's entire initial stake is lost, given the initial stakes of the two players and the constant probability of winning. This is the oldest mathematical idea that goes by the name gambler's ruin, but not the first idea to which the name was applied. The term's common usage today is another corollary to Huygens's result.
The concept has specific relevance for gamblers. However it also leads to mathematical theorems with wide application and many related results in probability and statistics. Huygens's result in particular led to important advances in the mathematical theory of probability.
History
The earliest known mention of the gambler's ruin problem is a letter from Blaise Pascal to Pierre Fermat in 1656 (two years after the more famous correspondence on the problem of points). Pascal's version was summarize |
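Huygens's result mentioned above has a simple closed form. The formula below is the standard textbook statement (not quoted from this article): the probability that a gambler starting with i units goes broke before reaching n units, winning each one-unit bet with probability p.

```python
def ruin_probability(i, n, p):
    """Probability of hitting 0 before reaching n units, starting from i units,
    with win probability p per one-unit bet (classic gambler's-ruin formula)."""
    if p == 0.5:                      # fair game: linear in the stakes
        return (n - i) / n
    r = (1 - p) / p                   # odds ratio against the gambler
    return (r ** n - r ** i) / (r ** n - 1)

# Fair game: ruin probability is the opponent's share of the total stake.
print(ruin_probability(10, 100, 0.5))   # 0.9
# A slight house edge pushes ruin toward certainty as the target grows.
print(ruin_probability(10, 100, 0.49))  # ~0.99
```

Letting n grow with p at or below 1/2 drives the ruin probability to 1, which is the "opponent with infinite wealth" statement in the text.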
https://en.wikipedia.org/wiki/Estimated%20date%20of%20delivery | The estimated date of delivery (EDD), also known as expected date of confinement, and estimated due date or simply due date, is a term describing the estimated delivery date for a pregnant person. Normal pregnancies last between 38 and 42 weeks. Children are delivered on their expected due date about 4% of the time.
Origins of the term
Confinement is a traditional term referring to the period of pregnancy when an upper-class, noble, or royal woman would withdraw from society in medieval and Tudor times and be confined to their rooms with midwives, ladies-in-waiting and female family members only to attend them. This was believed to calm the mother and reduce the risk of premature delivery. "Lying-in" or bedrest is no longer a standard part of antenatal care.
Estimation methods
Due date estimation follows two steps:
Determination of which time point is to be used as the origin for gestational age. This starting point is the person's last normal menstrual period (LMP) or the corresponding time as estimated by a more accurate method if available. Such methods include adding 14 days to a known duration since fertilization (as is possible in in vitro fertilization) or by obstetric ultrasonography.
Adding the estimated gestational age at childbirth to the above time point. Childbirth on average occurs at a gestational age of 280 days (40 weeks), which is therefore often used as a standard estimation for individual pregnancies. However, alternative durations as well as more individualized methods have also been suggested.
Estimation of gestational age
According to the American College of Obstetricians and Gynecologists, the main methods to calculate gestational age are:
Directly calculating the days since the beginning of the last menstrual period
Early obstetric ultrasound, comparing the size of an embryo or fetus to that of a reference group of pregnancies of known gestational age (such as calculated from last menstrual periods), and using the mean gestational |
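In the simplest LMP-based case, the two-step estimation described above reduces to adding 280 days to the first day of the last menstrual period. A minimal sketch (the example date is hypothetical):

```python
from datetime import date, timedelta

def estimated_due_date(lmp):
    """EDD = first day of last menstrual period + 280 days (40 weeks)."""
    return lmp + timedelta(days=280)

def gestational_age(lmp, on):
    """Gestational age as (completed weeks, days) on a given date."""
    return divmod((on - lmp).days, 7)

lmp = date(2023, 1, 1)  # hypothetical LMP
print(estimated_due_date(lmp))                   # 2023-10-08
print(gestational_age(lmp, date(2023, 3, 15)))   # (10, 3)
```

A more accurate origin (e.g. fertilization date plus 14 days, or an ultrasound estimate) can be substituted for the LMP without changing the arithmetic.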
https://en.wikipedia.org/wiki/Comparison%20of%20version-control%20software | In software development, version control is a class of systems responsible for managing changes to computer programs or other collections of information such that revisions have a logical and consistent organization. The following tables include general and technical information on notable version control and software configuration management (SCM) software. For SCM software not suitable for source code, see Comparison of open-source configuration management software.
General information
Table explanation
Repository model describes the relationship between various copies of the source code repository. In a client–server model, users access a master repository via a client; typically, their local machines hold only a working copy of a project tree. Changes in one working copy must be committed to the master repository before they are propagated to other users. In a distributed model, repositories act as peers, and users typically have a local repository with version history available, in addition to their working copies.
Concurrency model describes how changes to the working copy are managed to prevent simultaneous edits from causing nonsensical data in the repository. In a lock model, changes are disallowed until the user requests and receives an exclusive lock on the file from the master repository. In a merge model, users may freely edit files, but are informed of possible conflicts upon checking their changes into the repository, whereupon the version control system may merge changes on both sides, or let the user decide when conflicts arise. Distributed version control systems usually use a merge concurrency model.
Technical information
Table explanation
Software: The name of the application that is described.
Programming language: The coding language in which the application is developed.
Storage Method: Describes the form in which files are stored in the repository. A snapshot indicates that a committed file(s) is stored in its entirety—usually |
https://en.wikipedia.org/wiki/Annie%20Selden | Annie Laurer Alexander Selden is an expert in mathematics education. She is a professor emeritus at Tennessee Technological University, and an adjunct professor at New Mexico State University. She was one of the original founders of the Association for Women in Mathematics in 1971.
Education
Born as Annie Louise Laurer, she graduated from Oberlin College in 1959, learned to program computers in a summer job at IBM in Endicott, New York, and traveled to the University of Göttingen to study mathematics as a Fulbright scholar. With the support of the Woodrow Wilson Foundation, she earned a master's degree from Yale University in 1962. Delayed by marriage and two children, she completed her Ph.D. at Clarkson University in 1974. She published her dissertation, Bisimple ω-semigroups in the locally compact setting, under the name Annie Laurer Alexander. It was supervised by John Selden Jr., whom she later married as her second husband.
Career
Although Selden originally intended to be a research mathematician, the job market at the time of her graduation led her to teach abroad, and the experience of teaching mathematics to non-native English speakers led her to become interested in mathematics education.
She taught at the State University of New York at Potsdam, Hampden–Sydney College, Boğaziçi University in Turkey, and Bayero University Kano in Nigeria, before joining Tennessee Technological University in 1985. She retired and moved to New Mexico in 2003.
Awards and honors
In 2002, Selden won the Louise Hay Award of the Association for Women in Mathematics and was the AWM/MAA Falconer Lecturer.
She was elected as a fellow of the American Association for the Advancement of Science in 2003.
The Annie and John Selden Prize of the Mathematical Association of America is named after Selden and her husband. |
https://en.wikipedia.org/wiki/MountainsMap | Mountains is an image analysis and surface metrology software platform published by the company Digital Surf. Its core is micro-topography, the science of studying surface texture and form in 3D at the microscopic scale. The software is dedicated to profilometers, 3D light microscopes ("MountainsMap"), scanning electron microscopes ("MountainsSEM") and scanning probe microscopes ("MountainsSPIP").
Integration by instrument manufacturers
The publisher's main distribution channel is OEM, through the integration of MountainsMap by most profiler and microscope manufacturers, usually under their own brands; it is sold, for instance, as:
Hitachi map 3D on Hitachi's scanning electron microscopes,
TopoMAPS on Thermo Fisher Scientific (FEI division) scanning electron microscopes,
TalyMap, TalyProfile, or TalyMap Contour on Taylor-Hobson's profilometers,
PicoImage on Keysight's AFM's,
HommelMap on Jenoptik's profilometers (Hommel-Etamic line of products),
MountainsMap - X on Nikon's microscopes,
Apex 2D or Apex 3D on KLA-Tencor's profilometers,
Leica Map on Leica's microscopes,
ConfoMap on Carl Zeiss' microscopes,
MCubeMap on Mitutoyo's profilometers,
Vision 64 Map on Bruker's optical profilometers,
AttoMap on cathodoluminescence-analysis-dedicated scanning electron microscopes from AttoLight,
SmileView Map on JEOL's scanning electron microscopes,
SensoMap on Sensofar's optical profilometers.
Compatibility
Mountains' native file format is the SURF format (.SUR extension).
Mountains is compatible with most instruments on the market capable of supplying images or topography.
Mountains complies with the ISO 25178 standard on 3D surface texture evaluation and offers the profile and areal filters defined in ISO 16610.
The metrology reports are generated in proprietary format but can also be exported to PDF and RTF formats.
Mountains is available in English, Brazilian Portuguese, simplified Chinese, French, German, Italian, Japanese, Korean, Polish, Russian and Spani |
https://en.wikipedia.org/wiki/Dots%20%28game%29 | Dots is an abstract strategy game, played by two or more people on a sheet of squared paper. The game is somewhat similar to Go, in that the goal is to "capture" enemy dots by surrounding them with a continuous line of one's own dots. Once an area containing enemy dots is surrounded, that area ceases to be playable.
The game has some similarities to the simpler and smaller Dots and Boxes game.
Rules
Field
Dots is played on a grid of any finite size, traditionally 39x32, the size of the grid that is often encountered on a page of squared copybook in Russia. Players take turns to place a dot of their own color (players usually being red and blue) on an empty intersection of the grid.
Capture rule
If a newly placed dot completes a closed chain of dots of the same color which encloses at least one enemy dot, then all the area inside it is surrounded. To form a chain, dots must be adjacent to each other vertically, horizontally, or diagonally. Surrounded enemy dots are added to the score of the player who surrounded them (but the player's own dots are not counted). All enclosed dots and empty intersections are excluded from further play and cannot be used to make new surrounds. To mark a newly surrounded area, the surrounding player must draw a boundary line through all of their dots that are part of the enclosing chain; to make it more visual, the area can also be shaded in the player's color.
Players cannot surround areas that do not have enemy dots inside. As a consequence, the enemy can use empty intersections inside an enclosed area to complete their own enclosing chain. However, if one places a dot into an empty area surrounded by the opponent and cannot immediately use it to surround, then this dot can be captured by the opponent (this kind of suicide move is never beneficial but is not prohibited).
Often there is more than one way of choosing an enclosing chain of dots. When played with pens and paper, players are free to choose one however th |
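The capture rule can be sketched as a flood fill: because an enclosing chain is connected diagonally as well as orthogonally, any escape route from its interior must pass through an orthogonal step, so cells unreachable from the board edge by orthogonal moves that avoid the surrounding player's dots are enclosed. This simplified sketch uses a board encoding and function name of my own invention, not taken from the game's literature.

```python
from collections import deque

def captured(board, player):
    """Return enemy dots cut off from the board edge by `player`'s dots.

    Board is a list of equal-length strings: '.' for empty, 'R'/'B' for dots.
    Flood-fills (4-adjacency) from the border through every cell NOT occupied
    by `player`; enemy dots the fill never reaches are enclosed.
    """
    rows, cols = len(board), len(board[0])
    seen = set((r, c) for r in range(rows) for c in range(cols)
               if (r in (0, rows - 1) or c in (0, cols - 1))
               and board[r][c] != player)
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and board[nr][nc] != player):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return [(r, c) for r in range(rows) for c in range(cols)
            if board[r][c] not in (player, '.') and (r, c) not in seen]

board = [".....",
         ".RRR.",
         ".RBR.",
         ".RRR.",
         "....."]
print(captured(board, "R"))  # [(2, 2)]: the blue dot is surrounded
```

The ring of red dots encloses the blue dot at (2, 2); removing one dot from the ring opens an orthogonal path from the edge and nothing is captured.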
https://en.wikipedia.org/wiki/Electron%20configurations%20of%20the%20elements%20%28data%20page%29 | This page shows the electron configurations of the neutral gaseous atoms in their ground states. For each atom the subshells are given first in concise form, then with all subshells written out, followed by the number of electrons per shell. Electron configurations of elements beyond hassium (element 108) have never been measured; predictions are used below.
As an approximate rule, electron configurations are given by the Aufbau principle and the Madelung rule. However there are numerous exceptions; for example the lightest exception is chromium, which would be predicted to have the configuration 1s² 2s² 2p⁶ 3s² 3p⁶ 3d⁴ 4s², written as [Ar] 3d⁴ 4s², but whose actual configuration given in the table below is [Ar] 3d⁵ 4s¹.
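The Madelung rule itself is mechanical: subshells fill in order of increasing n + l, with ties broken by smaller n, each subshell holding up to 2(2l + 1) electrons. A minimal sketch (the function names are illustrative):

```python
# Generate the Madelung ("n + l") filling order, then fill subshells in
# that order until z electrons are placed. Capacities 2(2l + 1) and the
# s, p, d, f, ... letters are standard.
def madelung_order(max_n=8):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def aufbau(z):
    letters = "spdfghik"
    parts = []
    for n, l in madelung_order():
        if z <= 0:
            break
        e = min(2 * (2 * l + 1), z)  # fill the subshell up to capacity
        parts.append(f"{n}{letters[l]}{e}")
        z -= e
    return " ".join(parts)
```

For chromium (Z = 24) this yields the predicted 1s2 2s2 2p6 3s2 3p6 4s2 3d4 rather than the observed configuration, illustrating the kind of exception discussed above.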
Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies and the irregularities shown below do not necessarily have a clear relation to chemical behaviour. For the undiscovered eighth-row elements, mixing of configurations is expected to be very important, and sometimes the result can no longer be well-described by a single configuration.
See also
Extended periodic table#Electron configurations – Predictions for undiscovered elements 119–173 and 184 |
https://en.wikipedia.org/wiki/Rapid%20thermal%20processing | Rapid thermal processing (RTP) is a semiconductor manufacturing process which heats silicon wafers to temperatures exceeding 1,000°C for not more than a few seconds. During cooling wafer temperatures must be brought down slowly to prevent dislocations and wafer breakage due to thermal shock. Such rapid heating rates are often attained by high intensity lamps or lasers. These processes are used for a wide variety of applications in semiconductor manufacturing including dopant activation, thermal oxidation, metal reflow and chemical vapor deposition.
Temperature control
One of the key challenges in rapid thermal processing is accurate measurement and control of the wafer temperature. Monitoring the chamber ambient with a thermocouple is not sufficient, because the high temperature ramp rates prevent the wafer from coming to thermal equilibrium with the process chamber. One temperature control strategy involves in situ pyrometry to effect real-time control.
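Closed-loop control of a lamp-heated wafer can be illustrated with a toy lumped-thermal model and a proportional controller fed by a simulated pyrometer reading. All constants here are invented for the example and are not real RTP parameters:

```python
# Toy model: lamp power (clamped to 0..100) heats the wafer; a loss term
# pulls it back toward ambient. A proportional controller tracks setpoint.
def simulate(setpoint=1050.0, steps=400, dt=0.05, gain=5.0, loss=0.02):
    temp = 25.0                                      # wafer starts at ambient
    for _ in range(steps):
        error = setpoint - temp
        power = min(100.0, max(0.0, gain * error))   # clamped lamp drive
        temp += dt * (power - loss * (temp - 25.0))  # heating minus losses
    return temp
```

With these illustrative constants the wafer settles a few degrees below the setpoint, showing the steady-state offset characteristic of purely proportional control.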
Rapid thermal anneal
Rapid thermal anneal (RTA) in rapid thermal processing is a process used in semiconductor device fabrication which involves heating a single wafer at a time in order to affect its electrical properties. Unique heat treatments are designed for different effects. Wafers can be heated in order to activate dopants, change film-to-film or film-to-wafer substrate interfaces, densify deposited films, change states of grown films, repair damage from ion implantation, move dopants or drive dopants from one film into another or from a film into the wafer substrate.
Rapid thermal anneals are performed by equipment that heats a single wafer at a time using either lamp based heating, a hot chuck, or a hot plate that a wafer is brought near. Unlike furnace anneals they are of short duration, processing each wafer in several minutes.
To achieve short annealing times and quick throughput, sacrifices are made in temperature and process uniformity, temp |
https://en.wikipedia.org/wiki/Franz%20Edelman%20Award%20for%20Achievement%20in%20Operations%20Research%20and%20the%20Management%20Sciences | The Franz Edelman Award for Achievement in Operations Research and the Management Sciences recognizes excellence in the execution of operations research on the organizational level.
About
The award is presented annually by the Institute for Operations Research and the Management Sciences (INFORMS).
The international competition pits six finalist teams from industry, government, healthcare, and the non-profit sectors against one another. The competition takes place at the INFORMS Business Analytics Conference and concludes with the announcement of the winner at a gala dinner at the conference. It began in 1971 as the TIMS Prize and was named for Franz Edelman, the operations research director of RCA, shortly after his death in 1982. It carries a cash award of $10,000. Since its inception, nearly $250 billion in benefits have been tabulated among Franz Edelman Award finalist teams. Following the competition, INFORMS publishes papers by the Edelman finalists in the January issue of the INFORMS journal Interfaces.
List of recent winners
2022: Chile: Achievement in Advanced Analytics, Operations Research and Management Science for its use of operations research (O.R.) to improve response strategies to the COVID-19 pandemic.
2021: United Nations World Food Programme: Towards Zero Hunger with Analytics
2020: Intel: Intel Realizes $25 Billion by Applying Advanced Analytics from Product Architecture Design Through Supply Chain Planning
2019: Louisville Metropolitan Sewer District (MSD): Analytics and Optimization Reduce Sewage Overflows to Protect Community Waterways in Kentucky
2018: FCC: Unlocking the Beachfront: Using O.R. to Repurpose Wireless Spectrum
2017: Holiday Retirement and Prorize: Revenue Management Provides Double-digit Revenue Lift for Holiday Retirement
2016: United Parcel Service (UPS): UPS On Road Integrated Optimization and Navigation (Orion) Project
2015: Syngenta: Advanced Analytics for Agricultural Product Development
2014: U.S. Centers for Disease Control and Preve |
https://en.wikipedia.org/wiki/List%20of%20structural%20engineering%20software | This is a list of notable software packages that implement the engineering analysis of structures under applied loads, using structural engineering theory.
https://en.wikipedia.org/wiki/The%20Audience%20Engine | The Audience Engine is an announced open-source, customizable suite of fundraising tools for public radio being developed by the Congera Corporation, a subsidiary of WFMU Radio. It was conceived by and is being developed under the supervision of WFMU management, but as of November 2020 no product had been announced, demoed or released, effectively rendering the project vaporware.
The platform is based on WFMU's own model of fundraising and listener-community relations, a project that began development in 1998 and WFMU claims helps raise 70% of its annual $2.5 million operating budget via its website. The developers explain that "by pairing online content, real-time playlist information, social media, and community interaction tools directly with crowdfunding campaigns, WFMU has not only built a positive and intelligent online community, but also a sustainable model that can be adopted by other organizations." Besides radio, Audience Engine has potential usage for online television and journalism. The goal is to "enable organizations ... to build audiences and become self sufficient."
A large part of Audience Engine's potential appeal is its tightly integrated fundraising capabilities. "Audience Engine comes with a set of tools that integrates crowdfunding-inspired donation tools throughout a publisher's site, with on and off-site widgets for donations as well as gift reward management, and a full suite of analytics underlying it all for that publisher to gain insight on what is and isn't raising money," noted Flanagan. Freedman observed that "Kickstarter did a great job of borrowing or stealing the concept of the pledge drive, and vastly improved it as well. Public media hasn't borrowed it back yet! That's what we're trying to do."
Although aimed primarily towards small and mid-sized radio stations, larger public radio stations such as WBUR and WNYC have considered harnessing the platform's possible uses in their operations.
A draft of the platform was publi |
https://en.wikipedia.org/wiki/Pressure%E2%80%93volume%20diagram | A pressure–volume diagram (or PV diagram, or volume–pressure loop) is used to describe corresponding changes in volume and pressure in a system. They are commonly used in thermodynamics, cardiovascular physiology, and respiratory physiology.
PV diagrams, originally called indicator diagrams, were developed in the 18th century as tools for understanding the efficiency of steam engines.
Description
A PV diagram plots the change in pressure P with respect to volume V for some process or processes. Typically in thermodynamics, the set of processes forms a cycle, so that upon completion of the cycle there has been no net change in state of the system; i.e. the device returns to the starting pressure and volume.
The figure shows the features of an idealized PV diagram. It shows a series of numbered states (1 through 4). The path between each state consists of some process (A through D) which alters the pressure or volume of the system (or both).
A key feature of the diagram is that the amount of energy expended or received by the system as work can be measured because the net work is represented by the area enclosed by the four lines.
In the figure, the processes 1-2-3 produce a work output, while the processes 3-4-1 require a smaller energy input to return to the starting state; the net work is the difference between the two.
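The area computation described above can be sketched numerically. A minimal illustration, assuming the processes are straight lines in the PV plane (as in the idealized figure), with states given as (V, P) pairs:

```python
# Numerical sketch of W = ∮ P dV for a closed cycle of straight-line
# processes, using the trapezoid rule along each edge. Clockwise traversal
# in the PV plane gives positive net work output.
def net_work(cycle):
    W = 0.0
    for (v1, p1), (v2, p2) in zip(cycle, cycle[1:] + cycle[:1]):
        W += 0.5 * (p1 + p2) * (v2 - v1)  # average pressure times volume change
    return W
```

For a clockwise unit square in the PV plane the net work equals the enclosed area, 1; traversing the same square counterclockwise gives -1, i.e. net work done on the system.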
This figure is highly idealized, in so far as all the lines are straight and the corners are right angles. A diagram showing the changes in pressure and volume in a real device will show a more complex shape enclosing the work cycle.
History
The PV diagram, then called an indicator diagram, was developed in 1796 by James Watt and his employee John Southern. Volume was traced by a plate moving with the piston, while pressure was traced by a pressure gauge whose indicator moved at right angles to the piston. A pencil was used to draw the diagram. Watt used the diagram to make radical improvements to steam engine performance.
|
https://en.wikipedia.org/wiki/Color%20science | Color science is the scientific study of color including lighting and optics; measurement of light and color; the physiology, psychophysics, and modeling of color vision; and color reproduction.
Organizations
International Commission on Illumination (CIE)
Illuminating Engineering Society (IES)
Inter-Society Color Council (ISCC)
Society for Imaging Science and Technology (IS&T)
International Colour Association (AIC)
Optica, formerly the Optical Society of America (OSA)
The Colour Group
Society of Dyers and Colourists (SDC)
American Association of Textile Chemists and Colorists (AATCC)
Association for Research in Vision and Ophthalmology (ARVO)
ACM SIGGRAPH
Vision Sciences Society (VSS)
Council for Optical Radiation Measurements (CORM)
Journals
The preeminent scholarly journal publishing research papers in color science is Color Research and Application, started in 1975 by founding editor-in-chief Fred Billmeyer, along with Gunter Wyszecki, Michael Pointer and Rolf Kuehni, as a successor to the Journal of Colour (1964–1974). Previously most color science work had been split between journals with broader or partially overlapping focus such as the Journal of the Optical Society of America (JOSA), Photographic Science and Engineering (1957–1984), and the Journal of the Society of Dyers and Colourists (renamed Coloration Technology in 2001).
Other journals where color science papers are published include the Journal of Imaging Science & Technology, the Journal of Perceptual Imaging, the Journal of the International Colour Association (JAIC), the Journal of the Color Science Association of Japan, Applied Optics, and the Journal of Vision.
Conferences
Congress of the International Color Association
IS&T Color and Imaging Conference (CIC)
SIGGRAPH
International Symposium for Color Science and Art
https://en.wikipedia.org/wiki/Feng%20Office%20Community%20Edition | Feng Office Community Edition (formerly OpenGoo) is an open-source collaboration platform developed and supported by Feng Office and the OpenGoo community. It is a fully featured online office suite with a similar set of features as other online office suites, like Google Workspace, Microsoft 365, Zimbra, LibreOffice Online and Zoho Office Suite. The application can be downloaded and installed on a server.
Feng Office could also be categorized as collaborative software and as personal information manager software.
Features
Feng Office Community Edition main features include project management, document management, contact management, e-mail and time management. Text documents and presentations can be created and edited online. Files can be uploaded, organized and shared, independent of file formats.
Organization of the information in Feng Office Community Edition is done using workspaces and tags.
The application presents the information stored using different interfaces such as lists, dashboards and calendar views.
Licensing
Feng Office Community Edition is distributed under the GNU Affero General Public License, version 3 only.
Technology used
Feng Office uses PHP, JavaScript, AJAX (ExtJS) and MySQL technology.
Several open source projects served as a basis for development. ActiveCollab's last open sourced release was used as the initial code base. It includes CKEditor for online document editing.
System requirements
The server can run on any operating system. The system requires the following packages:
Apache HTTP Server 2.0+
PHP 5.0+
MySQL 4.1+ (InnoDB support recommended)
On the client side, the user is only required to use a modern Web browser.
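The version floors listed above can be checked mechanically. A sketch, not part of Feng Office's documentation; the dictionary and function names are ours, and versions are compared as integer tuples (so "5" and "5.0" compare as different lengths):

```python
# Compare dotted version strings against the minimum requirements above.
REQUIRED = {"Apache": "2.0", "PHP": "5.0", "MySQL": "4.1"}

def meets(found, minimum):
    # Lexicographic comparison of integer tuples, e.g. (5, 6, 40) >= (5, 0).
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return parse(found) >= parse(minimum)
```

For example, PHP 5.6.40 satisfies the PHP 5.0+ floor, while MySQL 4.0.30 falls short of 4.1.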
History
OpenGoo started as a degree project at the faculty of Engineering of the University of the Republic, Uruguay. The project was presented and championed by Software Engineer Conrado Viña. Software Engineers Marcos Saiz and Ignacio de Soto developed the first prototype as their thesis. Professors |
https://en.wikipedia.org/wiki/Leycesteria%20formosa | Leycesteria formosa, the pheasant berry, is a deciduous shrub in the family Caprifoliaceae, native to the Himalayas and southwestern China. It is considered a noxious invasive species in Australia, New Zealand, the neighbouring islands of Micronesia, and some other places.
In its native Himalaya the shrub is frequently used in the traditional medicine of the various countries and peoples encompassed within the region.
Names
The genus name Leycesteria was coined by Nathaniel Wallich (one time director of Royal Botanic Garden, Calcutta) in honour of his friend William Leycester, Chief justice and noted amateur horticulturist, in Bengal in about 1820; while the Latin specific name formosa (feminine form of formosus) signifies 'beautiful' or 'handsome' (literally: 'shapely') - in reference to the curious, pendent inflorescences with their richly wine-coloured bracts. There is a popular misconception, however, that the specific name derives from the place name 'Formosa', which is an abbreviation of the original Portuguese name for the island of Taiwan: Ilha Formosa "beautiful island". Portuguese is a romance language (i.e. derived from Latin) and the adjective formosa has passed into it unchanged in spelling and meaning from the original Latin. Leycesteria formosa is so named in recognition of its beauty, not in acknowledgment of an origin on the island now known as Taiwan. The Latin specific names of certain plants, given to indicate that they were native to Taiwan at a time when it was known as Formosa take such forms as formosae, formosana and formosensis, not the Latin adjective/Portuguese adjective-used-as-a-proper-noun formosa.
Other common names include Himalayan honeysuckle, pheasant-eye, Elisha's tears, flowering nutmeg, spiderwort, Cape fuchsia, whistle stick, Himalaya nutmeg, granny's curls, partridge berry, chocolate berry, shrimp plant/flower and treacle tree/berry. It is also recorded as Symphoricarpos rivularis Suksdorf.
Contrary to the impression gi |
https://en.wikipedia.org/wiki/Lauren%20Williams%20%28mathematician%29 | Lauren Kiyomi Williams (born 1978) is an American mathematician known for her work on cluster algebras, tropical geometry, algebraic combinatorics, amplituhedra, and the positive Grassmannian. She is Dwight Parker Robinson Professor of Mathematics at Harvard University.
Education
Williams's father is an engineer; her mother is third-generation Japanese American. She grew up in Los Angeles, where her interest in mathematics was sparked by winning a fourth-grade mathematics contest. She was the valedictorian of Palos Verdes Peninsula High School in 1996, and while there participated in summer research at the Massachusetts Institute of Technology with Satomi Okazaki, a student of her eventual advisor, Richard P. Stanley. She graduated magna cum laude from Harvard University in 2000 with an A.B. in mathematics, and received her PhD in 2005 at the Massachusetts Institute of Technology under the supervision of Stanley. Her dissertation was titled Combinatorial Aspects of Total Positivity.
Work
After postdoctoral positions at the University of California, Berkeley and Harvard, Williams rejoined the Berkeley mathematics department as an assistant professor in 2009, and was promoted to associate professor in 2013 and then full professor in 2016.
Starting in the fall of 2018, she rejoined the Harvard mathematics department as a full professor, making her the second ever tenured female math professor at Harvard. The first, Sophie Morel, left Harvard in 2012.
Along with colleagues O. Mandelshtam (her former student, now an assistant professor at the University of Waterloo) and S. Corteel, in 2018 Williams developed a new characterization of both symmetric and nonsymmetric Macdonald polynomials using the combinatorics of the exclusion process.
Awards
In 2012, she became one of the inaugural fellows of the American Mathematical Society. She is the 2016 winner of the Association for Women in Mathematics and Microsoft Research Prize in Algebra and Number Theory. In 2022 she was awarded a |
https://en.wikipedia.org/wiki/Subcategory | In mathematics, specifically category theory, a subcategory of a category C is a category S whose objects are objects in C and whose morphisms are morphisms in C with the same identities and composition of morphisms. Intuitively, a subcategory of C is a category obtained from C by "removing" some of its objects and arrows.
Formal definition
Let C be a category. A subcategory S of C is given by
a subcollection of objects of C, denoted ob(S),
a subcollection of morphisms of C, denoted hom(S).
such that
for every X in ob(S), the identity morphism idX is in hom(S),
for every morphism f : X → Y in hom(S), both the source X and the target Y are in ob(S),
for every pair of morphisms f and g in hom(S) the composite f ∘ g is in hom(S) whenever it is defined.
These conditions ensure that S is a category in its own right: its collection of objects is ob(S), its collection of morphisms is hom(S), and its identities and composition are as in C. There is an obvious faithful functor I : S → C, called the inclusion functor which takes objects and morphisms to themselves.
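The three conditions can be checked mechanically on a finite toy model. A sketch under stated assumptions: morphisms are stored as name -> (source, target), composition as a partial table keyed by composable pairs (g, f) meaning "g after f", identities are named id_X; this encoding and these names are ours, not standard notation.

```python
# Check the subcategory conditions for a candidate (S_obs, S_homs) inside
# a finite category given by its morphisms C_homs and composition C_comp.
def is_subcategory(C_homs, C_comp, S_obs, S_homs):
    # 1. identity morphisms: id_X must be in hom(S) for every X in ob(S)
    if any(f"id_{X}" not in S_homs for X in S_obs):
        return False
    # 2. sources and targets of S-morphisms must lie in ob(S)
    if any(src not in S_obs or tgt not in S_obs
           for name, (src, tgt) in C_homs.items() if name in S_homs):
        return False
    # 3. closure under composition, whenever the composite is defined in C
    return all(comp in S_homs
               for (g, f), comp in C_comp.items()
               if g in S_homs and f in S_homs)

# C: two objects X, Y with a single non-identity arrow f : X -> Y
C_homs = {"id_X": ("X", "X"), "id_Y": ("Y", "Y"), "f": ("X", "Y")}
C_comp = {("f", "id_X"): "f", ("id_Y", "f"): "f",
          ("id_X", "id_X"): "id_X", ("id_Y", "id_Y"): "id_Y"}
```

Here the single object X with only its identity forms a subcategory, as does all of C (the full subcategory on both objects), while dropping id_Y but keeping f violates condition 1.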
Let S be a subcategory of a category C. We say that S is a full subcategory of C if for each pair of objects X and Y of S, Hom_S(X, Y) = Hom_C(X, Y).
A full subcategory is one that includes all morphisms in C between objects of S. For any collection of objects A in C, there is a unique full subcategory of C whose objects are those in A.
Examples
The category of finite sets forms a full subcategory of the category of sets.
The category whose objects are sets and whose morphisms are bijections forms a non-full subcategory of the category of sets.
The category of abelian groups forms a full subcategory of the category of groups.
The category of rings (whose morphisms are unit-preserving ring homomorphisms) forms a non-full subcategory of the category of rngs.
For a field K, the category of K-vector spaces forms a full subcategory of the category of (left or right) K-modules.
Embeddings
Given a subcategory S of C, the inclusion fun |
https://en.wikipedia.org/wiki/Multiplicity-one%20theorem | In the mathematical theory of automorphic representations, a multiplicity-one theorem is a result about the representation theory of an adelic reductive algebraic group. The multiplicity in question is the number of times a given abstract group representation is realised in a certain space, of square-integrable functions, given in a concrete way.
A multiplicity one theorem may also refer to a result about the restriction of a representation of a group G to a subgroup H. In that context, the pair (G, H) is called a strong Gelfand pair.
Definition
Let G be a reductive algebraic group over a number field K and let A denote the adeles of K. Let Z denote the centre of G and let ω be a continuous unitary character of Z(K)\Z(A) into C×. Let L²₀(G(K)\G(A), ω) denote the space of cusp forms with central character ω on G(A). This space decomposes into a direct sum of Hilbert spaces
L²₀(G(K)\G(A), ω) = ⊕_π m_π π, where the sum is over the irreducible subrepresentations π and the multiplicities m_π are non-negative integers.
The group of adelic points of G, G(A), is said to satisfy the multiplicity-one property if any smooth irreducible admissible representation of G(A) occurs with multiplicity at most one in the space of cusp forms of central character ω, i.e. m_π is 0 or 1 for all such π.
Results
The fact that the general linear group GL(n) has the multiplicity-one property was proved for n = 2 and, independently, for n > 2 using the uniqueness of the Whittaker model. Multiplicity one also holds for SL(2), but not for SL(n) for n > 2.
Strong multiplicity one theorem
The strong multiplicity one theorem states that two cuspidal automorphic representations of the general linear group are isomorphic if their local components are isomorphic at all but a finite number of places.
See also
Gan-Gross-Prasad conjecture |
https://en.wikipedia.org/wiki/James%20Dance | James Cyril Aubrey George Dance (5 May 1907 – 16 March 1971) was a British Conservative Party politician. He was educated at Eton College and was in the 2nd Dragoon Guards (Queen's Bays) during World War II. He was an insurance underwriter for Lloyd's of London.
Dance was elected as Member of Parliament for Bromsgrove at the 1955 general election. He was the parliamentary private secretary to George Ward during Ward's time as Parliamentary and Financial Secretary to the Admiralty and Secretary of State for Air. Dance remained an MP until he died in office on 16 March 1971, at the age of 63. The resulting by-election was won by the Labour Party's Terry Davis.
Dance was married to Charlotte Strutt until her death; they had one child. He then remarried, to Anne Walker, and they had three children. |
https://en.wikipedia.org/wiki/Biopanning | Biopanning is an affinity selection technique which selects for peptides that bind to a given target. All peptide sequences obtained from biopanning using combinatorial peptide libraries have been stored in a special freely available database named BDB. This technique is often used for the selection of antibodies too.
Biopanning involves four major steps for peptide selection. The first step is to prepare a phage display library. This involves inserting the desired foreign gene segments into a region of the bacteriophage genome, so that the peptide product is displayed on the surface of the bacteriophage virion. The genes most often used are pIII and pVIII of bacteriophage M13.
The next step is the capturing step, in which the phage library is exposed to the desired target; this procedure is termed panning. It exploits binding interactions so that only phages displaying peptides specific for the target are retained, for example antibody-displaying phages bound to antigen-coated microtiter plates.
The washing step follows the capturing step: unbound phages are washed away from the solid surface, so that only phages bound with strong affinity are kept. In the final elution step, the bound phages are released by changing the pH or other environmental conditions.
The end result is a set of phage-displayed peptides specific to the target. The recovered filamentous phages can infect gram-negative bacteria once again to produce a new phage library. The cycle can be repeated many times, yielding peptides that bind the target with high affinity.
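The enrichment effect of repeated rounds can be illustrated with a toy simulation. This is a deliberately crude model, not real phage biology: each clone is reduced to a single binding affinity in [0, 1], a clone survives capture and washing with probability equal to its affinity, and the survivors are amplified back to the original pool size.

```python
import random

# One round of toy affinity selection: keep each clone with probability
# equal to its affinity, then resample the survivors up to the pool size
# (a stand-in for bacterial amplification).
def panning_round(pool, rng):
    bound = [a for a in pool if rng.random() < a]
    return [rng.choice(bound) for _ in range(len(pool))]

rng = random.Random(0)                      # fixed seed for reproducibility
pool = [rng.random() for _ in range(1000)]
start_mean = sum(pool) / len(pool)
for _ in range(3):
    pool = panning_round(pool, rng)
final_mean = sum(pool) / len(pool)          # climbs toward 1 with each round
```

After a few rounds the mean affinity of the pool rises markedly, mirroring how repeated panning cycles enrich strong binders.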
https://en.wikipedia.org/wiki/Retirement%20spend-down | At retirement, individuals stop working and no longer receive employment earnings, and enter a phase of their lives in which they rely on the assets they have accumulated to supply money for their spending needs for the rest of their lives. Retirement spend-down, or withdrawal rate, is the strategy a retiree follows to spend, decumulate or withdraw assets during retirement.
Retirement planning aims to prepare individuals for retirement spend-down, because the different spend-down approaches available to retirees depend on the decisions they make during their working years. Actuaries and financial planners are experts on this topic.
Importance
More than 10,000 post-World War II baby boomers will reach age 65 in the United States every day between 2014 and 2027. This represents the majority of the more than 78 million Americans born between 1946 and 1964. As of 2014, 74% of these people are expected to be alive in 2030, which highlights that most of them will live for many years beyond retirement. By the year 2000, 1 in every 14 people was age 65 or older. By the year 2050, more than 1 in 6 people are projected to be at least 65 years old. The following statistics emphasize the importance of a well-planned retirement spend-down strategy for these people:
87% of workers do not feel very confident about having enough money to retire comfortably.
80% of retirees do not feel very confident about maintaining financial security throughout their remaining lifetime.
49% of workers over age 55 have less than $50,000 of savings.
25% of workers have not saved at all for retirement.
35% of workers are not currently saving for retirement.
56% of workers have not tried to calculate their income needs in retirement.
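The basic spend-down arithmetic behind these concerns can be sketched with a toy calculation. All rates here are invented for the example; the function name and defaults are ours:

```python
# How many years a portfolio lasts when withdrawals grow at `inflation`
# and the remaining balance grows at `growth`, capped at `max_years`.
def years_lasting(balance, annual_spend, growth=1.05, inflation=1.03, max_years=60):
    years = 0
    while balance > 0 and years < max_years:
        balance = (balance - annual_spend) * growth  # withdraw, then grow
        annual_spend *= inflation                    # next year's withdrawal
        years += 1
    return years
```

Under these assumptions a $1,000,000 portfolio with $200,000 of initial annual spending is exhausted in six years, while $10,000 of spending outlasts the 60-year horizon, illustrating how sharply the withdrawal rate drives how long assets last.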
Longevity risk
Individuals each have their own retirement aspirations, but all retirees face longevity risk – the risk of outliving their assets. This can spell financial disaster. Avoiding this risk is therefore a baseline goal that any successful retirement |
https://en.wikipedia.org/wiki/Basic%20Number%20Theory | Basic Number Theory is an influential book by André Weil, an exposition of algebraic number theory and class field theory with particular emphasis on valuation-theoretic methods. Based in part on a course taught at Princeton University in 1961-2, it appeared as Volume 144 in Springer's Grundlehren der mathematischen Wissenschaften series. The approach handles all 'A-fields' or global fields, meaning finite algebraic extensions of the field of rational numbers and of the field of rational functions of one variable with a finite field of constants. The theory is developed in a uniform way, starting with topological fields, properties of Haar measure on locally compact fields, the main theorems of adelic and idelic number theory, and class field theory via the theory of simple algebras over local and global fields. The word 'basic' in the title is closer in meaning to 'foundational' than to 'elementary', and is perhaps best interpreted as meaning that the material developed is foundational for the development of the theories of automorphic forms, representation theory of algebraic groups, and more advanced topics in algebraic number theory. The style is austere, with a narrow concentration on a logically coherent development of the theory required, and essentially no examples.
Mathematical context and purpose
In the foreword, the author explains that instead of the “futile and impossible task” of improving on Hecke’s classical treatment of algebraic number theory, he “rather tried to draw the conclusions from the developments of the last thirty years, whereby locally compact groups, measure and integration have been seen to play an increasingly important role in classical number theory”. Weil goes on to explain a viewpoint that grew from work of Hensel, Hasse, Chevalley, Artin, Iwasawa, Tate, and Tamagawa in which the real numbers may be seen as but one of infinitely many different completions of the rationals, with no logical reason to favour it over the various |
https://en.wikipedia.org/wiki/Ship%20identifier | A ship identifier refers to one of several types of identifiers used for maritime vessels. An identifier may be a proper noun (La Niña); a proper noun combined with a standardized prefix based on the type of ship (e.g. ); a serial code; a unique, alphanumeric ID (e.g. A123B456C7); or an alphanumeric ID displayed in international signal flags (e.g. , representing U6CH). Some identifiers are permanent for a ship while others may be changed at the owners' discretion although regulatory agencies will need to approve the change. Modern ships will usually have several identifiers.
In addition to proper nouns, types of ship identifiers include:
Code letters – an identifier for a ship that is displayed on vessels by ICS flags representing the letters of the alphabet and numbers 0–9, e.g. the flags (from top to bottom) represented the identifier "USMW"
Hull number or Hull Identification Number (HIN) – a number used as an identifier for civilian and naval vessels, national/regional subtypes include:
Craft Identification Number – a permanent unique fourteen-digit alphanumeric identifier issued to all marine vessels in Europe
ENI number (European Number of Identification or European Vessel Identification Number) – a unique, eight-digit identifier for ships capable of navigating on inland European waters that is attached to a hull for its entire lifetime, independent of the vessel's current name or flag
Naval Registry Identification Number – United States until 1920s, replaced by hull classification symbol system
IMO number – a unique identifier issued by the International Maritime Organization (IMO) for ships
Maritime call sign – an identifier used during radio transmissions, used mainly during verbal transmissions and sometimes incorporating a vessel's MMSI
Maritime Mobile Service Identity (MMSI) – a unique, nine-digit identifier used over radio frequencies to identify a vessel, used mainly for automated, non-verbal transmissions
Official number – a ship identifier number as |
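Of the identifiers above, the IMO number carries a published check-digit scheme: the first six digits are multiplied by the weights 7, 6, 5, 4, 3, 2 and summed, and the last digit of the sum must equal the seventh digit. A sketch of that check (the function name is ours):

```python
# Validate a 7-digit IMO number by its check digit: weight the first six
# digits by 7..2, sum, and compare the sum's last digit with digit seven.
def imo_check(number):
    digits = [int(d) for d in str(number)]
    if len(digits) != 7:
        return False
    total = sum(d * w for d, w in zip(digits, range(7, 1, -1)))
    return total % 10 == digits[-1]
```

For example, for IMO 9074729 the weighted sum is 9·7 + 0·6 + 7·5 + 4·4 + 7·3 + 2·2 = 139, whose last digit 9 matches the seventh digit.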
https://en.wikipedia.org/wiki/Merkur%20%28toy%29 | Merkur refers to a metal construction set built in Czechoslovakia (later the Czech Republic). It was also referred to as Constructo or Build-O in English-speaking countries and Tecc in the Netherlands.
Unlike Erector/Meccano, which was based on Imperial/customary measurements, Merkur uses metric units: building parts carry a 1×1 cm grid of connection holes and are joined by M3.5 screws.
The brand was launched in 1920 and ran until 1940, when World War II put a halt to production. It was resumed in 1947. The private company was closed down and its assets nationalised by the Communist Czechoslovak state in 1953. Merkur toys were made throughout the communist period and were exported all over Europe. The company was privatized by some of the former employees after 1989, but went into insolvency in 1993. Later, Jaromír Kříž bought the company and over the course of three years restored production, saving this renowned Czech toy.
In 1961, Otto Wichterle used a Merkur-based apparatus for the experimental production of the first soft contact lenses.
The factory and Merkur museum are located in Police nad Metují, Czech Republic.
Merkur also produces a wide range of toys, including metal 0 scale model trains and steam engines. |
https://en.wikipedia.org/wiki/H%C3%B3rreo | An hórreo is a typical granary from the northwest of the Iberian Peninsula (Asturias, Galicia, where it might be called a Galician granary, and Northern Portugal), built in wood or stone, raised from the ground (to keep rodents out) by pillars ( in Asturian, pegoyos in Cantabrian, in Galician, in Portuguese, in Basque) ending in flat staddle stones (vira-ratos in Galician, mueles or tornarratos in Asturian, or zubiluzea in Basque) to prevent access by rodents. Ventilation is allowed by the slits in its walls.
Names
In some areas, hórreos are known as horriu, (Asturian), (Leonese), (Cantabrian), hórreo, paneira, canastro, piorno, cabazo (Galician), , , , (Portuguese), , , (Basque).
Distribution
Hórreos are mainly found in the Northwest of Spain (Galicia and Asturias) and Northern Portugal. There are two main types of hórreo, rectangular-shaped, the more extended, usually found in Galicia and coastal areas of Asturias; and square-shaped hórreos from Asturias, León, western Cantabria and eastern Galicia.
Origins
The oldest document containing an image of an hórreo is the Cantigas de Santa Maria by Alfonso X "El Sabio" (song CLXXXVII) from the 13th century. In this depiction, three rectangular hórreos of gothic style are illustrated.
Types
There are several types of Asturian hórreo, according to the characteristics of the roof (thatched, tiled, slate, pitched or double pitched), the materials used for the pillars or the decoration. The oldest still standing date from the 15th century, and even nowadays they are built ex novo. There are an estimated 18,000 hórreos and paneras in Asturias, some are poorly preserved but there is a growing awareness from owners and authorities to maintain them in good shape.
The longest hórreo in Galicia is located in Carnota, A Coruña, and is long.
Other similar granary structures include Asturian paneras (basically, big hórreos with more than four pillars), cabaceiras (Galician round basketwork hórreo), trojes or in |
https://en.wikipedia.org/wiki/List%20of%20genetic%20codes | While there is much commonality, different parts of the tree of life use slightly different genetic codes. When translating from genome to protein, the use of the correct genetic code is essential. The mitochondrial codes are relatively well-known examples of this variation. The translation table list below follows the numbering and designation by NCBI.
The standard code
The vertebrate mitochondrial code
The yeast mitochondrial code
The mold, protozoan, and coelenterate mitochondrial code and the mycoplasma/spiroplasma code
The invertebrate mitochondrial code
The ciliate, dasycladacean and hexamita nuclear code
The kinetoplast code; cf. table 4.
cf. table 1.
The echinoderm and flatworm mitochondrial code
The euplotid nuclear code
The bacterial, archaeal and plant plastid code
The alternative yeast nuclear code
The ascidian mitochondrial code
The alternative flatworm mitochondrial code
The Blepharisma nuclear code
The chlorophycean mitochondrial code
The trematode mitochondrial code
The Scenedesmus obliquus mitochondrial code
The Thraustochytrium mitochondrial code
The Pterobranchia mitochondrial code
The candidate division SR1 and gracilibacteria code
The Pachysolen tannophilus nuclear code
The karyorelict nuclear code
The Condylostoma nuclear code
The Mesodinium nuclear code
The peritrich nuclear code
The Blastocrithidia nuclear code
The Balanophoraceae plastid code
The Cephalodiscidae mitochondrial code
The alternative translation tables (2 to 33) involve codon reassignments that are recapitulated in the list of all known alternative codons.
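For illustration, the vertebrate mitochondrial code (table 2) differs from the standard code (table 1) at exactly four codons. The following sketch records those well-known reassignments as plain dictionaries; the variable names are illustrative, not NCBI identifiers.

```python
# The four codons at which NCBI translation table 2 (vertebrate
# mitochondrial) differs from table 1 (standard). Amino acids are given
# by three-letter code; "Stop" marks a termination codon.
standard = {"AGA": "Arg", "AGG": "Arg", "AUA": "Ile", "UGA": "Stop"}
vertebrate_mito = {"AGA": "Stop", "AGG": "Stop", "AUA": "Met", "UGA": "Trp"}

for codon in sorted(standard):
    print(codon, standard[codon], "->", vertebrate_mito[codon])
```

Translating a mitochondrial sequence with the standard table would therefore misread UGA as a stop rather than tryptophan, which is why selecting the correct table matters.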
Table summary
Comparison of alternative translation tables for all codons (using IUPAC amino acid codes):
Notes
Three translation tables have a peculiar status:
Table 7 is now merged into translation table 4.
Table 8 is merged into table 1; all plant chloroplast differences are due to RNA editing.
Table 15 is deleted in the source but included here for completeness.
Other mechanisms also play a par |
https://en.wikipedia.org/wiki/Sound%20intensity | Sound intensity, also known as acoustic intensity, is defined as the power carried by sound waves per unit area in a direction perpendicular to that area. The SI unit of intensity, which includes sound intensity, is the watt per square meter (W/m2). One application is the noise measurement of sound intensity in the air at a listener's location as a sound energy quantity.
Sound intensity is not the same physical quantity as sound pressure. Human hearing is sensitive to sound pressure which is related to sound intensity. In consumer audio electronics, the level differences are called "intensity" differences, but sound intensity is a specifically defined quantity and cannot be sensed by a simple microphone.
Sound intensity level is a logarithmic expression of sound intensity relative to a reference intensity.
Mathematical definition
Sound intensity, denoted I, is defined by
I = p v
where
p is the sound pressure;
v is the particle velocity.
Both I and v are vectors, which means that both have a direction as well as a magnitude. The direction of sound intensity is the average direction in which energy is flowing.
The average sound intensity during time T is given by
⟨I⟩ = (1/T) ∫₀ᵀ p(t) v(t) dt
For a plane wave,
I = 2π² ν² δ² ρ c,
where
ν is the frequency of sound,
δ is the amplitude of the sound wave particle displacement,
ρ is the density of the medium in which the sound is traveling, and
c is the speed of sound.
Inverse-square law
For a spherical sound wave, the intensity in the radial direction as a function of distance r from the centre of the sphere is given by
I(r) = P / A(r),
where
P is the sound power;
A(r) is the surface area of a sphere of radius r.
Thus sound intensity decreases as 1/r² from the centre of the sphere:
I(r) = P / (4πr²) ∝ 1/r².
This relationship is an inverse-square law.
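A small numerical sketch of the inverse-square relation, together with the corresponding drop in intensity level in decibels. The 1 W source power is an illustrative value; the reference intensity I0 = 1 pW/m² is the conventional value for sound in air.

```python
import math

def intensity(P, r):
    """Intensity at distance r from a point source of power P: I = P / (4*pi*r^2)."""
    return P / (4 * math.pi * r ** 2)

def intensity_level_db(I, I0=1e-12):
    """Sound intensity level L_I = 10*log10(I / I0), in dB (I0 = 1 pW/m^2)."""
    return 10 * math.log10(I / I0)

I1 = intensity(1.0, 1.0)   # 1 W source, 1 m away
I2 = intensity(1.0, 2.0)   # same source, 2 m away
print(I1 / I2)             # 4.0: doubling the distance quarters the intensity
print(round(intensity_level_db(I1) - intensity_level_db(I2), 2))  # 6.02 dB drop
```

The ~6 dB decrease per doubling of distance is the usual rule-of-thumb consequence of the inverse-square law.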
Sound intensity level
Sound intensity level (SIL) or acoustic intensity level is the level (a logarithmic quantity) of the intensity of a sound relative to a reference value.
It is denoted LI, expressed in nepers, bels, or decibels, and defined by
LI = (1/2) ln(I/I₀) Np = 10 log₁₀(I/I₀) dB,
where
I is the sound |
https://en.wikipedia.org/wiki/Steatosis | Steatosis, also called fatty change, is abnormal retention of fat (lipids) within a cell or organ. Steatosis most often affects the liver – the primary organ of lipid metabolism – where the condition is commonly referred to as fatty liver disease. Steatosis can also occur in other organs, including the kidneys, heart, and muscle. When the term is not further specified (as, for example, in 'cardiac steatosis'), it is assumed to refer to the liver.
Risk factors associated with steatosis are varied, and may include diabetes mellitus, protein malnutrition, hypertension, cell toxins, obesity, anoxia, and sleep apnea.
Steatosis reflects an impairment of the normal processes of synthesis and elimination of triglyceride fat. Excess lipid accumulates in vesicles that displace the cytoplasm. When the vesicles are large enough to distort the nucleus, the condition is known as macrovesicular steatosis; otherwise, the condition is known as microvesicular steatosis. While not particularly detrimental to the cell in mild cases, large accumulations can disrupt cell constituents, and in severe cases the cell may even burst.
Pathogenesis
No single mechanism leading to steatosis exists; rather, a varied multitude of pathologies disrupt normal lipid movement through the cell and cause accumulation. These mechanisms can be separated based on whether they ultimately cause an oversupply of lipid which can not be removed quickly enough (i.e., too much in), or whether they cause a failure in lipid breakdown (i.e., not enough used).
Failure of lipid metabolism can also lead to the mechanisms which would normally utilise or remove lipids becoming impaired, resulting in the accumulation of unused lipids in the cell. Certain toxins, such as alcohols, carbon tetrachloride, aspirin, and diphtheria toxin, interfere with cellular machinery involved in lipid metabolism. In those with Gaucher's disease, the lysosomes fail to degrade lipids and steatosis arises from the accumulation of glycoli |
https://en.wikipedia.org/wiki/Drinker%20paradox | The drinker paradox (also known as the drinker's theorem, the drinker's principle, or the drinking principle) is a theorem of classical predicate logic that can be stated as "There is someone in the pub such that, if he or she is drinking, then everyone in the pub is drinking." It was popularised by the mathematical logician Raymond Smullyan, who called it the "drinking principle" in his 1978 book What Is the Name of this Book?
The apparently paradoxical nature of the statement comes from the way it is usually stated in natural language. It seems counterintuitive either that there could be a person who is causing the others to drink, or that there could be a person such that, all through the night, that one person was always the last to drink. The first objection comes from confusing formal "if then" statements with causation (see Correlation does not imply causation or Relevance logic for logics that demand relevant relationships between premise and consequent, unlike the classical logic assumed here). The formal statement of the theorem is timeless, eliminating the second objection, because the person for whom the statement holds at one instant is not necessarily the same person for whom it holds at any other instant.
The formal statement of the theorem is
where D is an arbitrary predicate and P is an arbitrary nonempty set.
Proofs
The proof begins by recognizing it is true that either everyone in the pub is drinking, or at least one person in the pub is not drinking. Consequently, there are two cases to consider:
Suppose everyone is drinking. For any particular person, it cannot be wrong to say that if that particular person is drinking, then everyone in the pub is drinking, because everyone is, in fact, drinking. In this case the conditional holds with a true antecedent and a true consequent, and the chosen person is among those drinking.
Otherwise at least one person is not drinking. For any nondrinking person, the statement if that particular person is dr |
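Because the pub is a finite nonempty set, the theorem can be checked exhaustively for small cases. The brute-force sketch below (function and variable names are illustrative) tries every possible drinking predicate on pubs of up to four patrons.

```python
from itertools import product

def drinker_holds(drinking):
    """There exists a patron x such that: if x is drinking, everyone is drinking.
    The conditional D(x) -> all-drink is (not D(x)) or all-drink."""
    everyone = all(drinking)
    return any((not d) or everyone for d in drinking)

# Exhaustively check every predicate on every nonempty pub of size 1..4.
for n in range(1, 5):
    for drinking in product([False, True], repeat=n):
        assert drinker_holds(list(drinking))
print("theorem verified for all pubs of size 1-4")
```

The check mirrors the two-case proof: if someone is not drinking, that person witnesses the statement vacuously; if everyone drinks, anyone does.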
https://en.wikipedia.org/wiki/Charles%20D.%20Hansen | Charles "Chuck" D. Hansen is an American computer scientist at the University of Utah who works on scientific visualization. He is a Distinguished Professor, a Fellow of the IEEE and a founding faculty member of the Scientific Computing and Imaging Institute. He was an associate editor-in-chief of IEEE Transactions on Visualization and Graphics.
Biography
Hansen received a BS in computer science from Memphis State University in 1981 and a PhD in computer science from the University of Utah in 1987. From 1989 to 1997, he was a Technical Staff Member in the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory, where he formed and directed the visualization efforts. He was a Bourse de Chateaubriand PostDoc Fellow at INRIA in Rocquencourt, France in 1987 and 1988. Since 1998, he has been a full professor in Computer Science at the University of Utah. In 2019, he was named a Distinguished Professor of Computing at the University of Utah. He was a visiting scientist at INRIA-Rhône-Alpes in the GRAVIR group in 2004-2005 and a visiting professor at the Joseph Fourier University in Grenoble in 2011-2012. In 2005, he won the IEEE Visualization Technical Achievement Award for his "seminal work on tools for understanding large-scale scientific data sets". In 2017, he was awarded the IEEE Technical Committee on Visualization and Graphics "Career Award" in recognition for his contributions to large scale data visualization, including advances in parallel and volume rendering, novel interaction techniques, and techniques for exploiting hardware; for his leadership in the community as an educator, program chair, and editor; and for providing vision for the development and support of the field. He was associate editor-in-chief of IEEE Transactions on Visualization and Graphics from 2003 to 2007, and again from 2014 to 2018. He was elected an IEEE Fellow in 2012.
Books |
https://en.wikipedia.org/wiki/Costas%20array | In mathematics, a Costas array can be regarded geometrically as a set of n points, each at the center of a square in an n×n square tiling such that each row or column contains only one point, and all of the n(n − 1)/2 displacement vectors between each pair of dots are distinct. This results in an ideal "thumbtack" auto-ambiguity function, making the arrays useful in applications such as sonar and radar. Costas arrays can be regarded as two-dimensional cousins of the one-dimensional Golomb ruler construction, and, as well as being of mathematical interest, have similar applications in experimental design and phased array radar engineering.
Costas arrays are named after John P. Costas, who first wrote about them in a 1965 technical report. Independently, Edgar Gilbert also wrote about them in the same year, publishing what is now known as the logarithmic Welch method of constructing Costas arrays.
The general enumeration of Costas arrays is an open problem in computer science; no polynomial-time algorithm for it is known.
Numerical representation
A Costas array may be represented numerically as an n×n array of numbers, where each entry is either 1, for a point, or 0, for the absence of a point. Interpreted as binary matrices, these arrays have exactly one 1 in each row and each column, so they are also permutation matrices. Thus, the Costas arrays for any given n are a subset of the permutation matrices of order n.
Arrays are usually described as a series of indices specifying the column for any row. Since it is given that any column has only one point, it is possible to represent an array one-dimensionally. For instance, the following is a valid Costas array of order N = 4 (rows 1 through 4, top to bottom, with a 1 in the column of each dot):
0 1 0 0
1 0 0 0
0 0 1 0
0 0 0 1
or simply the index sequence 2, 1, 3, 4. There are dots at coordinates: (1,2), (2,1), (3,3), (4,4)
Since the x-coordinate increases linearly, we can write this in shorthand as the set of al |
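The defining distinct-differences property can be verified directly from the one-dimensional representation. The sketch below (function names are illustrative) uses 1-based rows, matching the coordinates listed above.

```python
from itertools import combinations

def is_costas(perm):
    """True if all n(n-1)/2 displacement vectors between dots are distinct.
    perm[i] is the column of the dot in row i+1 (1-based rows)."""
    dots = list(enumerate(perm, start=1))            # (row, column) pairs
    vectors = [(r2 - r1, c2 - c1)
               for (r1, c1), (r2, c2) in combinations(dots, 2)]
    return len(vectors) == len(set(vectors))

print(is_costas([2, 1, 3, 4]))   # True: the order-4 example from the text
print(is_costas([1, 2, 3, 4]))   # False: the identity repeats vector (1, 1)
```

A full enumeration for order n would apply this test to all n! permutations, which is exactly why the general enumeration problem is computationally hard.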
https://en.wikipedia.org/wiki/Macromolecular%20assembly | The term macromolecular assembly (MA) refers to massive chemical structures such as viruses and non-biologic nanoparticles, cellular organelles and membranes and ribosomes, etc. that are complex mixtures of polypeptide, polynucleotide, polysaccharide or other polymeric macromolecules. They are generally of more than one of these types, and the mixtures are defined spatially (i.e., with regard to their chemical shape), and with regard to their underlying chemical composition and structure. Macromolecules are found in living and nonliving things, and are composed of many hundreds or thousands of atoms held together by covalent bonds; they are often characterized by repeating units (i.e., they are polymers). Assemblies of these can likewise be biologic or non-biologic, though the MA term is more commonly applied in biology, and the term supramolecular assembly is more often applied in non-biologic contexts (e.g., in supramolecular chemistry and nanotechnology). MAs of macromolecules are held in their defined forms by non-covalent intermolecular interactions (rather than covalent bonds), and can be in either non-repeating structures (e.g., as in the ribosome (image) and cell membrane architectures), or in repeating linear, circular, spiral, or other patterns (e.g., as in actin filaments and the flagellar motor, image). The process by which MAs are formed has been termed molecular self-assembly, a term especially applied in non-biologic contexts. A wide variety of physical/biophysical, chemical/biochemical, and computational methods exist for the study of MA; given the scale (molecular dimensions) of MAs, efforts to elaborate their composition and structure and discern mechanisms underlying their functions are at the forefront of modern structure science.
Biomolecular complex
A biomolecular complex, also called a biomacromolecular complex, is any biological complex made of more than one biopolymer (protein, RNA, DNA,
carbohydrate) or large non-polymeric biomolecules |
https://en.wikipedia.org/wiki/Ecotype | In evolutionary ecology, an ecotype, sometimes called ecospecies, describes a genetically distinct geographic variety, population, or race within a species, which is genotypically adapted to specific environmental conditions.
Typically, though ecotypes exhibit phenotypic differences (such as in morphology or physiology) stemming from environmental heterogeneity, they are capable of interbreeding with other geographically adjacent ecotypes without loss of fertility or vigor.
Definition
An ecotype is a variant in which the phenotypic differences are too few or too subtle to warrant being classified as a subspecies. These different variants can occur in the same geographic region where distinct habitats such as meadow, forest, swamp, and sand dunes provide ecological niches. Where similar ecological conditions occur in widely separated places, it is possible for a similar ecotype to occur in the separated locations. An ecotype is different from a subspecies, which may exist across a number of different habitats. In animals, ecotypes owe their differing characteristics to the effects of a very local environment. Therefore, ecotypes have no taxonomic rank.
Terminology
Ecotypes are closely related to morphs. In the context of evolutionary biology, genetic polymorphism is the occurrence in the equilibrium of two or more distinctly different phenotypes within a population of a species, in other words, the occurrence of more than one form or morph. The frequency of these discontinuous forms (even that of the rarest) is too high to be explained by mutation. In order to be classified as such, morphs must occupy the same habitat at the same time and belong to a panmictic population (whose members can all potentially interbreed). Polymorphism is actively and steadily maintained in populations of species by natural selection (most famously sexual dimorphism in humans) in contrast to transient polymorphisms where conditions in a habitat change in such a way that a "form" is be |
https://en.wikipedia.org/wiki/Hygrophorus%20camarophyllus | Hygrophorus camarophyllus is a species of edible fungus in the genus Hygrophorus. |
https://en.wikipedia.org/wiki/Balsa%20wood%20bridge | The building of balsa-wood bridges is often used as an educational technology. It may be accompanied by a larger project involving varying areas of study.
Classes that include a balsa-wood bridge project typically cover physics, engineering, static equilibrium, or the building trades, although the project may be done independently of any of these subjects. Building a balsa-wood bridge can follow a section or unit covering a related topic, or the design-and-build process itself can be used to guide students to a better understanding of the desired subject area.
Requirements
Although there is great variety between different balsa wood bridge projects, students are in general trying to build a bridge that can withstand the greatest weight before it fails. Other restrictions are often applied, but these vary widely from one contest to another.
Sample requirements include:
restricting the maximum mass of the bridge
requiring a minimum span
requiring a minimum height of the roadway
restricting the physical dimensions of the bridge
restricting the size of individual pieces of balsa wood
limiting the amount of glue or balsa wood that can be used
requiring a driveable roadway that allows passage of a vehicle of specified size
restricting the way pieces are placed on the bridge (for example no parallel joining pieces)
Testing
Bridges are usually tested by applying a downward force on the bridge. How and where the force is applied varies from one contest to the next. There are two common methods of applying the test force to the bridge:
By hanging a container (such as a trash can) from the bridge and loading known weights into the container until the bridge breaks. The tester could also slowly add water or sand to the container until the bridge breaks and then weigh the container, providing a more accurate way to find the breaking force.
By using a mechanical or pneumatic testing device that pushes down on the bridge with increasing force until the |
https://en.wikipedia.org/wiki/Grover%27s%20algorithm | In quantum computing, Grover's algorithm, also known as the quantum search algorithm, is a quantum algorithm for unstructured search that finds with high probability the unique input to a black box function that produces a particular output value, using just O(√N) evaluations of the function, where N is the size of the function's domain. It was devised by Lov Grover in 1996.
The analogous problem in classical computation cannot be solved in fewer than O(N) evaluations (because, on average, one has to check half of the domain to get a 50% chance of finding the right input). Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani proved that any quantum solution to the problem needs to evaluate the function Ω(√N) times, so Grover's algorithm is asymptotically optimal. Since classical algorithms for NP-complete problems require exponentially many steps, and Grover's algorithm provides at most a quadratic speedup over the classical solution for unstructured search, this suggests that Grover's algorithm by itself will not provide polynomial-time solutions for NP-complete problems (as the square root of an exponential function is an exponential, not polynomial, function).
Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable when N is large, and Grover's algorithm can be applied to speed up broad classes of algorithms. Grover's algorithm could brute-force a 128-bit symmetric cryptographic key in roughly 2^64 iterations, or a 256-bit key in roughly 2^128 iterations. Even so, Grover's algorithm may not pose a significantly increased risk to encryption compared with existing classical attacks.
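The amplitude-amplification behaviour behind the quadratic speedup can be simulated classically on a small state vector. The sketch below is a minimal illustration, not a quantum-library implementation; the function and parameter names are assumptions for this example.

```python
import math

def grover(n_qubits, marked, iterations=None):
    """Classical simulation of Grover iterations on N = 2**n_qubits amplitudes."""
    N = 2 ** n_qubits
    if iterations is None:
        # The optimal iteration count is about (pi/4) * sqrt(N).
        iterations = round(math.pi / 4 * math.sqrt(N))
    amp = [1 / math.sqrt(N)] * N              # uniform superposition
    for _ in range(iterations):
        amp[marked] *= -1                      # oracle: phase-flip the marked item
        mean = sum(amp) / N                    # diffusion: inversion about the mean
        amp = [2 * mean - a for a in amp]
    return amp

amps = grover(n_qubits=8, marked=42)           # N = 256, ~13 iterations
best = max(range(256), key=lambda i: amps[i] ** 2)
print(best)                                    # 42: the marked item dominates
```

After roughly √N iterations the marked amplitude carries nearly all the probability, illustrating why O(√N) oracle calls suffice where a classical search needs O(N).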
Applications and limitations
Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms. In particular, algorithms for NP-complete problems generally con |
https://en.wikipedia.org/wiki/Nodamuvirales | Nodamuvirales is an order of positive-strand RNA viruses which infect eukaryotes. The name of the group is a contraction of "Nodamura virus" and -virales which is the suffix for a virus order.
Taxonomy
The following families are recognized:
Nodaviridae
Sinhaliviridae |
https://en.wikipedia.org/wiki/Behavioral%20cusp | A behavioral cusp is any behavior change that brings an organism's behavior into contact with new contingencies that have far-reaching consequences. A behavioral cusp is a special type of behavior change because it provides the learner with opportunities to access new reinforcers, new contingencies, new environments, new related behaviors (generativeness) and competition with archaic or problem behaviors. It affects the people around the learner, and these people agree to the behavior change and support its development after the intervention is removed.
The concept has far reaching implications for every individual, and for the field of developmental psychology, because it provides a behavioral alternative to the concept of maturation and change due to the simple passage of time, such as developmental milestones. The cusp is a behavior change that presents special features when compared to other behavior changes.
History
The concept was first proposed by Sidney W. Bijou, an American developmental psychologist. The idea of the cusp was to link behavioral principles to rapid spurts in development (see Behavior analysis of child development).
A behavioral cusp as conceptualized by Jesus Rosales-Ruiz & Donald Baer in 1997 is an important behavior change that affects future behavior changes. The behavioral cusp, like the reinforcer, is apprehended by its effects. Whereas a reinforcer acts on a single response or a group of related responses, the effects of a behavioral cusp regulate a large number of responses in a more distant future.
The concept has been compared to a developmental milestone; however, not all cusps are milestones. For example, learning to play soccer is not a milestone, but it was life-changing for Pelé. As a result of learning to kick grapefruits (the initial important change or cusp), Pelé accessed (1) new environments, (2) new reinforcers, (3) new soccer moves, (4) dropped competing behaviors (smoking), and (5) gained international acclaim for |
https://en.wikipedia.org/wiki/Matti%20%C3%84yr%C3%A4p%C3%A4%C3%A4%20Prize | The Matti Äyräpää Prize () is a Finnish prize in medicine awarded by The Finnish Medical Society Duodecim since 1969. It is named after the dentist Matti Äyräpää, who was Duodecim's first chairman.
In 2016, the prize money was €20,000.
Recipients
1969 – Eino Kulonen
1970 – Kauko Vainio
1971 – Esko Nikkilä
1972 – Olli Mäkelä
1973 – Olavi Eränkö
1974 – Kari Penttinen
1975 – Lauri Saxén
1976 – Erkki Klemola
1977 – Kari Kivirikko
1978 – Kari Cantell
1979 – Bror-Axel Lamberg
1980 – Pirjo Mäkelä
1981 – Markku Seppälä
1982 – Reijo Vihko
1983 – Eero Saksela
1984 – Tatu Miettinen
1985 – Antti Vaheri
1986 – Olli Jänne
1987 – Mikko Hallman
1988 – Pekka Häyry
1989 – Pekka Halonen
1990 – Albert de la Chapelle
1991 – Leevi Kääriäinen
1992 – Tapani Luukkainen
1993 – Mårten Wikström
1994 – Juhani Jänne
1995 – Jouni Uitto
1996 – Leena Palotie
1997 – Carl G. Gahmberg
1998 – Kari Alitalo
1999 – Ilpo Huhtaniemi
2000 – Pekka Saikku
2001 – Riitta Hari
2002 – Matti Haltia
2003 – Kai Simons
2004 – Petri Kovanen
2005 – Kimmo Kontula
2006 – Lauri A. Aaltonen
2007 – Markku Laakso
2008 – Sirpa Jalkanen
2009 – Seppo Ylä-Herttuala
2010 – Jorma Viikari
2011 – Juha Kere
2012 – Heikki Joensuu
2013 – Taina Pihlajaniemi
2014 – Leif Groop
2015 – Heikki Huikuri
2016 – Erika Isolauri
...
2021 – Anu Wartiovaara
See also
List of medicine awards
Footnotes |
https://en.wikipedia.org/wiki/Simplicial%20sphere | In geometry and combinatorics, a simplicial (or combinatorial) d-sphere is a simplicial complex homeomorphic to the d-dimensional sphere. Some simplicial spheres arise as the boundaries of convex polytopes, however, in higher dimensions most simplicial spheres cannot be obtained in this way.
One important open problem in the field was the g-conjecture, formulated by Peter McMullen, which asks about possible numbers of faces of different dimensions of a simplicial sphere. In December 2018, the g-conjecture was proven by Karim Adiprasito in the more general context of rational homology spheres.
Examples
For any n ≥ 3, the simple n-cycle Cn is a simplicial circle, i.e. a simplicial sphere of dimension 1. This construction produces all simplicial circles.
The boundary of a convex polyhedron in R3 with triangular faces, such as an octahedron or icosahedron, is a simplicial 2-sphere.
More generally, the boundary of any (d+1)-dimensional compact (or bounded) simplicial convex polytope in the Euclidean space is a simplicial d-sphere.
Properties
It follows from Euler's formula that any simplicial 2-sphere with n vertices has 3n − 6 edges and 2n − 4 faces. The case of n = 4 is realized by the tetrahedron. By repeatedly performing the barycentric subdivision, it is easy to construct a simplicial sphere for any n ≥ 4. Moreover, Ernst Steinitz gave a characterization of 1-skeleta (or edge graphs) of convex polytopes in R3 implying that any simplicial 2-sphere is a boundary of a convex polytope.
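The edge and face counts implied by Euler's formula can be confirmed on a concrete example. The sketch below checks the octahedron (a simplicial 2-sphere with n = 6 vertices); the vertex labels are an assumed convention for this illustration.

```python
from itertools import combinations

# Octahedron: poles 0 and 1, equatorial cycle 2, 4, 3, 5.
faces = [(0, 2, 4), (0, 4, 3), (0, 3, 5), (0, 5, 2),
         (1, 2, 4), (1, 4, 3), (1, 3, 5), (1, 5, 2)]

vertices = {v for f in faces for v in f}
edges = {frozenset(e) for f in faces for e in combinations(f, 2)}
V, E, F = len(vertices), len(edges), len(faces)

assert V - E + F == 2            # Euler's formula for the 2-sphere
assert E == 3 * V - 6 and F == 2 * V - 4
print(V, E, F)                   # 6 12 8
```

The same identities E = 3n − 6 and F = 2n − 4 hold for every simplicial 2-sphere, since each face has three edges and each edge lies on exactly two faces.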
Branko Grünbaum constructed an example of a non-polytopal simplicial sphere (that is, a simplicial sphere that is not the boundary of a polytope). Gil Kalai proved that, in fact, "most" simplicial spheres are non-polytopal. The smallest example is of dimension d = 4 and has f0 = 8 vertices.
The upper bound theorem gives upper bounds for the numbers fi of i-faces of any simplicial d-sphere with f0 = n vertices. This conjecture was proved for simplicial convex polytopes by Peter M |
https://en.wikipedia.org/wiki/Newton%27s%20inequalities | In mathematics, the Newton inequalities are named after Isaac Newton. Suppose a1, a2, ..., an are real numbers and let ek denote the kth elementary symmetric polynomial in a1, a2, ..., an. Then the elementary symmetric means, given by
Sk = ek / C(n, k), where C(n, k) is the binomial coefficient,
satisfy the inequality
Sk−1 Sk+1 ≤ Sk².
If all the numbers ai are non-zero, then equality holds if and only if all the numbers ai are equal.
It can be seen that S1 is the arithmetic mean, and Sn is the n-th power of the geometric mean.
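A numerical spot-check of the inequality Sk−1 Sk+1 ≤ Sk² for a sample list of reals; the numbers and function name below are illustrative.

```python
from itertools import combinations
from math import comb, prod

def symmetric_means(a):
    """Elementary symmetric means S_0..S_n of the list a: S_k = e_k / C(n, k)."""
    n = len(a)
    e = [sum(prod(c) for c in combinations(a, k)) for k in range(n + 1)]
    return [e[k] / comb(n, k) for k in range(n + 1)]

a = [1.0, 2.0, 3.0, 5.0]
S = symmetric_means(a)
for k in range(1, len(a)):
    # Newton's inequality, with a tiny tolerance for floating point
    assert S[k - 1] * S[k + 1] <= S[k] ** 2 + 1e-12
print(S[1])  # 2.75, the arithmetic mean of a
```

Here S[1] is the arithmetic mean and S[n] is the product of the entries, i.e. the nth power of the geometric mean, matching the remark above.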
See also
Maclaurin's inequality |
https://en.wikipedia.org/wiki/GeneTalk | GeneTalk is a web-based platform, tool, and database for filtering, reducing, and prioritizing human sequence variants from next-generation sequencing (NGS) data. GeneTalk allows editing annotations about sequence variants and building up a crowd-sourced database with clinically relevant information for the diagnosis of genetic disorders. GeneTalk allows searching for information about specific sequence variants and connects users to experts on variants that are potentially disease-relevant.
Application to diagnostics
Users can upload NGS data in Variant Call Format (VCF) onto the GeneTalk server into their accounts. All entries of the file are preprocessed and shown in the integrated VCF viewer. Filtering tools are set by the user to reduce the number of clinically non-relevant variants. After filtering and prioritization, users can interpret relevant variants by retrieving information (annotations) about variants from the GeneTalk database. The communication platform allows users to contact experts about specific variants, genes, or genetic disorders, to exchange knowledge and expertise.
Analysis procedure
Steps required to analyze VCF files
Upload VCF file
Edit pedigree and phenotype information for segregation filtering
Filter VCF file by editing the filtering options
View results and annotations
Add annotations
Filtering tools
The following filtering options may be used to reduce the non-relevant sequence variants in VCF files.
Functional – filter out variants that have effects on protein level
Linkage – filter out variants that are on specified chromosomes
Gene panel – filter variants by genes or gene panels, subscribe to publicly available gene panels or create own ones
Frequency – show only variants with a genotype frequency lower than specified
Inheritance – filter out variants by presumed mode of inheritance
Annotation – show only variants with a score for medical relevance and scientific evidence
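As a hypothetical illustration of the frequency filter described above, the sketch below keeps only variants below a genotype-frequency cutoff. The record fields and function name are assumptions for this example, not GeneTalk's actual code or schema.

```python
def frequency_filter(variants, max_frequency):
    """Keep only variants whose genotype frequency is below the cutoff."""
    return [v for v in variants if v["genotype_frequency"] < max_frequency]

variants = [
    {"chrom": "1", "pos": 12345, "genotype_frequency": 0.40},   # common variant
    {"chrom": "7", "pos": 67890, "genotype_frequency": 0.001},  # rare variant
]
rare = frequency_filter(variants, max_frequency=0.01)
print([v["pos"] for v in rare])  # [67890]
```

In a diagnostic workflow, filters like this one are chained (functional, linkage, gene panel, frequency, inheritance) to whittle millions of variants down to a reviewable shortlist.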
Communication platform & expert network
Us |
https://en.wikipedia.org/wiki/Hallia | Hallia is a taxonomic synonym that may refer to:
Hallia = Alysicarpus
Hallia = Psoralea |
https://en.wikipedia.org/wiki/Dichlorophen | Dichlorophen is an anticestodal agent, fungicide, germicide, and antimicrobial agent. It is used in combination with toluene for the removal of parasites such as ascarids, hookworms, and tapeworms from dogs and cats.
Safety and regulation
LD50 (oral, mouse) is 3300 mg/kg. |
https://en.wikipedia.org/wiki/Risk%20of%20Rain | Risk of Rain is a 2013 platform game developed by Hopoo Games and published by Chucklefish. The game, initially made by a two-student team from the University of Washington using the GameMaker engine, was funded through Kickstarter before being released on Microsoft Windows in November 2013. Ports for OS X and Linux were released a year later, with console versions released in the latter half of the 2010s.
In Risk of Rain, players control the survivor of a space freighter crash on a strange planet. Players attempt to survive by killing monsters and collecting items that can boost their offensive and defensive abilities. The game features a difficulty scale that increases with time, requiring the player to choose between spending time building experience and completing levels quickly before the monsters become more difficult. The game supports up to ten cooperative players in online play and up to two players in local play. A sequel, Risk of Rain 2, was released in August 2020, while a remastered version of the game is scheduled for a November 2023 release on Windows and Nintendo Switch under the title Risk of Rain Returns.
Gameplay
At the start of the game, the player selects one of twelve characters. Initially, one character is available, the Commando. As the player completes various in-game objectives, more characters become available. Each character has various statistics and a set of unique moves; for example, the sniper has the ability to hit creatures from a long distance for large, piercing damage but their firing rate is slow, while the commando can do rapid, moderate damage at close range.
Throughout the game, the goal is to locate a teleporter, always placed in a random location on the level. As the players hunt for it, they will encounter monsters; upon death, the monsters will drop in-game money and will provide the players experience. As the players gain experience they will level up, gaining more hit points and damage. Money can be u |
https://en.wikipedia.org/wiki/Wien%20approximation | Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896. The equation does accurately describe the short-wavelength (high-frequency) spectrum of thermal emission from objects, but it fails to accurately fit the experimental data for long-wavelength (low-frequency) emission.
Details
Wien derived his law from thermodynamic arguments, several years before Planck introduced the quantization of radiation.
Wien's original paper did not contain the Planck constant. In this paper, Wien took the wavelength of black-body radiation and combined it with the Maxwell–Boltzmann energy distribution for atoms. The exponential curve was created by the use of Euler's number e raised to the power of the temperature multiplied by a constant. Fundamental constants were later introduced by Max Planck.
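Wien's argument can be checked numerically: his approximation nearly coincides with the later Planck law at high frequencies but underestimates it badly at low frequencies. A sketch using modern constants (the comparison is standard; the specific temperature and frequencies below are illustrative choices, not from Wien's paper):

```python
import math

H = 6.62607015e-34    # Planck constant (J*s)
K_B = 1.380649e-23    # Boltzmann constant (J/K)
C = 2.99792458e8      # speed of light (m/s)

def planck(nu, T):
    """Planck's law: spectral radiance at frequency nu and temperature T."""
    return (2 * H * nu**3 / C**2) / (math.exp(H * nu / (K_B * T)) - 1)

def wien(nu, T):
    """Wien's approximation: Planck's law without the -1 in the denominator."""
    return (2 * H * nu**3 / C**2) * math.exp(-H * nu / (K_B * T))

T = 5000.0                  # temperature in kelvin (illustrative)
nu_hi = 10 * K_B * T / H    # high frequency: h*nu = 10 * k_B*T
nu_lo = 0.1 * K_B * T / H   # low frequency:  h*nu = 0.1 * k_B*T

# The ratio Wien/Planck equals 1 - exp(-h*nu/(k_B*T)):
print(wien(nu_hi, T) / planck(nu_hi, T))   # ~0.99995: excellent at high frequency
print(wien(nu_lo, T) / planck(nu_lo, T))   # ~0.095: far off at low frequency
```

The ratio depends only on the dimensionless quantity hν/(k_B T), which is why the approximation fails in the same way for every temperature.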
The law may be written as
$$ I(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} \exp\!\left(-\frac{h\nu}{k_{\mathrm{B}} T}\right) $$
(note the simple exponential frequency dependence of this approximation) or, by introducing natural Planck units ($h = c = k_{\mathrm{B}} = 1$),
$$ I(\nu, T) = 2 \nu^{3} e^{-\nu/T}, $$
where $I(\nu, T)$ is the spectral radiance, $\nu$ the frequency, $T$ the temperature, $h$ the Planck constant, $k_{\mathrm{B}}$ the Boltzmann constant and $c$ the speed of light.
This equation may also be written as
$$ I(\lambda, T) = \frac{2 h c^{2}}{\lambda^{5}} \exp\!\left(-\frac{h c}{\lambda k_{\mathrm{B}} T}\right), $$
where $I(\lambda, T)$ is the amount of energy per unit surface area per unit time per unit solid angle per unit wavelength emitted at a wavelength λ.
The peak value of this curve, as determined by setting the derivative of the equation equal to zero and solving, occurs at
a wavelength $\lambda_{\max} = \frac{hc}{5 k_{\mathrm{B}} T} \approx \frac{2.88 \times 10^{-3}\ \mathrm{m\,K}}{T}$
and frequency $\nu_{\max} = \frac{3 k_{\mathrm{B}} T}{h} \approx \left(6.25 \times 10^{10}\ \mathrm{Hz/K}\right) T$.
Relation to Planck's law
The Wien approximation was originally proposed as a description of the complete spectrum of thermal radiation, although it failed to accurately describe long-wavelength (low-frequency) emission. However, it was soon superseded by Planck's law, which accurately describes the full spectrum, derived by treating the radiation as a photon gas and accordingly applying Bose–Einstein in place of Maxwell–Boltzmann statistics. Planck's law may be given as
$$ I(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} \frac{1}{e^{h\nu/k_{\mathrm{B}} T} - 1}. $$
The Wien approximation may be derived from Planck' |
https://en.wikipedia.org/wiki/Steinhaus%E2%80%93Johnson%E2%80%93Trotter%20algorithm | The Steinhaus–Johnson–Trotter algorithm or Johnson–Trotter algorithm, also called plain changes, is an algorithm named after Hugo Steinhaus, Selmer M. Johnson and Hale F. Trotter that generates all of the permutations of elements. Each permutation in the sequence that it generates differs from the previous permutation by swapping two adjacent elements of the sequence. Equivalently, this algorithm finds a Hamiltonian cycle in the permutohedron.
This method was known already to 17th-century English change ringers, and it has been called "perhaps the most prominent permutation enumeration algorithm". A version of the algorithm can be implemented in such a way that the average time per permutation is constant. As well as being simple and computationally efficient, this algorithm has the advantage that subsequent computations on the permutations it generates may be sped up because of the similarity between consecutive permutations.
Algorithm
The sequence of permutations generated by the Steinhaus–Johnson–Trotter algorithm has a natural recursive structure that can be generated by a recursive algorithm. However, the actual Steinhaus–Johnson–Trotter algorithm does not use recursion, instead computing the same sequence of permutations by a simple iterative method. A later improvement allows it to run in constant average time per permutation.
Recursive structure
The sequence of permutations for a given number n can be formed from the sequence of permutations for n − 1 by placing the number n into each possible position in each of the shorter permutations. The Steinhaus–Johnson–Trotter algorithm follows this structure: the sequence of permutations it generates consists of (n − 1)! blocks of n permutations each, so that within each block the permutations agree on the ordering of the numbers from 1 to n − 1 and differ only in the position of n. The blocks themselves are ordered recursively, according to the Steinhaus–Johnson–Trotter algorithm for one less element.
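The same sequence can be produced by the standard iterative "mobile element" formulation, in which each element carries a direction and the largest element pointing at a smaller neighbor is swapped. A sketch (the function name is mine):

```python
def sjt_permutations(n):
    """Generate all permutations of 1..n, each differing from the
    previous one by a swap of two adjacent elements."""
    perm = list(range(1, n + 1))
    direction = [-1] * n          # -1 = looking left, +1 = looking right
    yield tuple(perm)
    while True:
        # Find the largest "mobile" element: one whose direction points
        # at a smaller adjacent element.
        mobile, mobile_idx = -1, -1
        for i in range(n):
            j = i + direction[i]
            if 0 <= j < n and perm[i] > perm[j] and perm[i] > mobile:
                mobile, mobile_idx = perm[i], i
        if mobile == -1:
            return                # no mobile element: all permutations done
        # Swap the mobile element with the neighbor it points to.
        j = mobile_idx + direction[mobile_idx]
        perm[mobile_idx], perm[j] = perm[j], perm[mobile_idx]
        direction[mobile_idx], direction[j] = direction[j], direction[mobile_idx]
        # Reverse the direction of every element larger than the mobile one.
        for i in range(n):
            if perm[i] > mobile:
                direction[i] = -direction[i]
        yield tuple(perm)
```

For n = 3 this yields the plain-changes order 123, 132, 312, 321, 231, 213, with each step swapping one adjacent pair.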
Within each block, the |
https://en.wikipedia.org/wiki/Morchella%20deqinensis | Morchella deqinensis is a species of fungus in the family Morchellaceae found in China. It grows in coniferous and mixed forests at an elevation of .
Taxonomy
The species was reported as new to science in 2006 by Shu-Hong Li and colleagues from the Yunnan Agricultural University. The specific epithet deqinensis refers to Dêqên County in Yunnan, where the type specimen was collected.
Description
Fruit bodies are tall; of this, the egg-shaped to broadly conical cap is wide by tall. Its surface features sparse, vertically arranged ridges that are dark greenish-brown in colour. The intersections of vertical and horizontal ridges form irregularly shaped, cream-coloured pits that are generally two or three times as long as they are wide. The flesh is firm and thin, and lacks any distinct odour or taste. The cap is completely attached to the hollow, conical stipe, which measures long by wide. Initially whitish, it becomes rust-coloured in maturity or after drying. White granules are present at the top of the stipe and in the grooves at the base.
The hymenium (fertile, spore-bearing surface) is 6–10 µm thick. Asci (spore-bearing cells) are cylindrical, eight-spored, hyaline (translucent) and measure 9.9–10.5 µm thick by 105–150 µm long. In deposit, the spores are cream coloured. Spores are ellipsoid to egg-shaped, smooth, hyaline, and have dimensions of 6.4–8.1 by 9.2–9.6 µm. They are thin walled and lack oil droplets. Paraphyses are club shaped with a septum at the base, and measure 3.6–4.5 by 43 µm.
Similar species
Morchella umbrina is somewhat similar in appearance to M. deqinensis, but the former species can be distinguished by its deeper colour, the smaller pits on the cap surface, and the absence of granules on the stipe.
Habitat and distribution
Morchella deqinensis is known only from China, where it fruits on the ground in coniferous and mixed forests at an elevation of . The type collection was made in April. |
https://en.wikipedia.org/wiki/Conley%20index%20theory | In dynamical systems theory, Conley index theory, named after Charles Conley, analyzes topological structure of invariant sets of diffeomorphisms and of smooth flows. It is a far-reaching generalization of the Hopf index theorem that predicts existence of fixed points of a flow inside a planar region in terms of information about its behavior on the boundary. Conley's theory is related to Morse theory, which describes the topological structure of a closed manifold by means of a nondegenerate gradient vector field. It has an enormous range of applications to the study of dynamics, including existence of periodic orbits in Hamiltonian systems and travelling wave solutions for partial differential equations, structure of global attractors for reaction–diffusion equations and delay differential equations, proof of chaotic behavior in dynamical systems, and bifurcation theory. Conley index theory formed the basis for development of Floer homology.
Short description
A key role in the theory is played by the notions of an isolating neighborhood N and an isolated invariant set S. The Conley index is the homotopy type of a space built from a certain pair of compact sets called an index pair for S. Charles Conley showed that index pairs exist and that the index of S is independent of the choice of the index pair. In the special case of the negative gradient flow of a smooth function, the Conley index of a nondegenerate (Morse) critical point of Morse index k is the pointed homotopy type of the k-sphere S^k.
A deep theorem due to Conley asserts continuation invariance: the Conley index is invariant under certain deformations of the dynamical system. Computation of the index can, therefore, be reduced to the case of a diffeomorphism or a vector field whose invariant sets are well understood.
If the index is nontrivial then the invariant set S is nonempty. This principle can be amplified to establish existence of fixed points and periodic orbits inside N.
Construction
We build the Conley I |
https://en.wikipedia.org/wiki/Lotfi%20A.%20Zadeh | Lotfi Aliasker Zadeh (; ; ; 4 February 1921 – 6 September 2017) was a mathematician, computer scientist, electrical engineer, artificial intelligence researcher, and professor of computer science at the University of California, Berkeley.
Zadeh is best known for proposing fuzzy mathematics, consisting of several fuzzy-related concepts: fuzzy sets, fuzzy logic, fuzzy algorithms, fuzzy semantics, fuzzy languages, fuzzy control, fuzzy systems, fuzzy probabilities, fuzzy events, and fuzzy information.
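To illustrate the first of these ideas: a fuzzy set assigns each element a degree of membership in [0, 1], and Zadeh's original operators take complement, intersection and union pointwise via 1 − x, min and max. A sketch (the membership values below are illustrative, not taken from Zadeh's work):

```python
# Degrees of membership over a small domain of heights/weights (illustrative).
tall = {160: 0.1, 170: 0.4, 180: 0.8, 190: 1.0}
heavy = {160: 0.3, 170: 0.5, 180: 0.6, 190: 0.9}

def f_not(a):
    """Fuzzy complement: membership 1 - mu(x)."""
    return {x: 1 - m for x, m in a.items()}

def f_and(a, b):
    """Fuzzy intersection: pointwise minimum (assumes equal domains)."""
    return {x: min(a[x], b[x]) for x in a}

def f_or(a, b):
    """Fuzzy union: pointwise maximum (assumes equal domains)."""
    return {x: max(a[x], b[x]) for x in a}

print(f_and(tall, heavy)[180])  # 0.6 — "tall and heavy" at 180
print(f_or(tall, heavy)[160])   # 0.3 — "tall or heavy" at 160
```

Unlike classical sets, an element here can belong to both a set and its complement to a nonzero degree, which is the departure point for fuzzy logic.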
Zadeh was a founding member of the Eurasian Academy.
Early life and career
Azerbaijan
Zadeh was born in Baku, Azerbaijan SSR, as Lotfi Aliaskerzadeh. His father was Rahim Aleskerzade, an Iranian Muslim Azerbaijani journalist from Ardabil on assignment from Iran, and his mother was Fanya (Feyga) Korenman, a Jewish pediatrician from Odesa, Ukraine, who was an Iranian citizen. The Soviet government at this time courted foreign correspondents, and the family lived well while in Baku. Zadeh attended elementary school for three years there, which he said "had a significant and long-lasting influence on my thinking and my way of looking at things."
Iran
In 1931, when Stalin began agricultural collectivization, and Zadeh was ten, his father moved his family back to Tehran, Iran. Zadeh was enrolled in Alborz High School, a missionary school, where he was educated for the next eight years, and where he met his future wife, Fay (Faina) Zadeh, who said that he was "deeply influenced" by the "extremely decent, fine, honest and helpful" Presbyterian missionaries from the United States who ran the college. "To me they represented the best that you could find in the United States – people from the Midwest with strong roots. They were really 'Good Samaritans' – willing to give of themselves for the benefit of others. So this kind of attitude influenced me deeply. It also instilled in me a deep desire to live in the United States." During this time, Zadeh was awarded several patents.
|
https://en.wikipedia.org/wiki/Selamectin | Selamectin (trade names Selehold manufactured by KRKA, Selarid manufactured by Norbrook Laboratories Limited, Revolution and Stronghold manufactured by Zoetis, Revolt manufactured by Aurora Pharmaceuticals, Senergy manufactured by Virbac, among others) is a topical parasiticide and anthelminthic used on dogs and cats. It treats and prevents infections of heartworms, fleas, ear mites, sarcoptic mange (scabies), and certain types of ticks in dogs, and prevents heartworms, fleas, ear mites, hookworms, and roundworms in cats. It is structurally related to ivermectin and milbemycin. Selamectin is not approved for human use.
Usage
The drug is applied topically. It is isopropyl alcohol based, packaged according to its varying dosage sizes and applied once monthly. It is not miscible with water.
Mode of action
Selamectin disables parasites by activating glutamate-gated chloride channels at muscle synapses. Selamectin activates the chloride channel without desensitization, allowing chloride ions to enter the nerve cells and causing neuromuscular paralysis, impaired muscular contraction, and eventual death.
The substance fights both internal and surface parasitic infection. Absorbed into the body through the skin and hair follicles, it travels through the bloodstream, intestines, and sebaceous glands; parasites ingest the drug when they feed on the animal's blood or secretions.
Side effects
Selamectin has been found to be safe and effective in a 2003 review.
Selamectin has high safety ratings, with less than 1% of pets displaying side effects. In cases where side-effects do occur, they most often include passing irritation or hair loss at the application site. Symptoms beyond these (such as drooling, rapid breathing, lack of coordination, vomiting, or diarrhea) could be due to shock as a result of selamectin killing heartworms or other vulnerable parasites present at high levels in the bloodstreams of dogs. This would be a reaction due to undetected or underestimated i |
https://en.wikipedia.org/wiki/Sir%20Isaac%20Newton%20Sixth%20Form | Sir Isaac Newton Sixth Form is a specialist maths and science sixth form with free school status located in Norwich, owned by the Inspiration Trust. It has the capacity for 480 students aged 16–19. It specialises in mathematics and science.
History
Before becoming a sixth form college, the building functioned as a fire station serving the central Norwich area until it closed in August 2011. Two years later the sixth form was created within the empty building, with various additions made to the existing structure. The sixth form was ranked the 7th best state sixth form in England by The Times in 2022.
Curriculum
At Sir Isaac Newton Sixth Form, students can study a choice of either Maths, Further Maths, Core Maths, Biology, Chemistry, Physics, Computer Science, Environmental Science or Psychology. Additionally, students can also study any of the subjects on offer at the partner free school Jane Austen College, also located in Norwich and specialising in humanities, Arts and English. |
https://en.wikipedia.org/wiki/Mariam%20Nabatanzi | Mariam Nabatanzi Babirye (born ) also known as Maama Uganda or Mother Uganda, is a Ugandan woman known for birthing 44 children. As of April 2023, her eldest children were twenty-eight years old, and the youngest were six years old. She is a single mother, who was abandoned by her husband in 2015. He reportedly feared the responsibility of supporting so many children.
Born around 1980, Babirye first gave birth when she was 13 years old, having been forced into marriage the year prior. By the age of 36, she had given birth to a total of 44 children, including three sets of quadruplets, four sets of triplets, and six sets of twins, for a total of fifteen births. The number of multiple births was caused by a rare genetic condition causing hyperovulation as a result of enlarged ovaries. In 2019, when Babirye was aged 40, she underwent a medical procedure to prevent any further pregnancies. She lives in the village of Kasawo, located in the Mukono district of Central Uganda.
Life and background
In 1993, Babirye was sold into child marriage at the age of twelve to a violent 40-year-old man. A year later, she first became a mother in 1994 with a set of twins, followed by triplets in 1996. She then gave birth to a set of quadruplets 19 months later. She never found the rate at which she was procreating unusual due to her family history; she had been quoted as saying: "My father gave birth to forty-five children with different women, and these all came in quintuplets, quadruples, twins and triplets."
In Uganda, there are some communities that practice early child marriages, where a young girl is given off to an older man in exchange for a dowry that most frequently consists of cows. Babirye's marriage was an example of this. At the age of twenty-three, she had given birth to twenty-five children, but was advised to continue giving birth, as it would help reduce further fertility. Those affected with Babirye's condition are often advised that abstinence from pregnancy ca |
https://en.wikipedia.org/wiki/Enterprise%20Volume%20Management%20System | Enterprise Volume Management System (EVMS) was a flexible, integrated volume management software used to manage storage systems under Linux.
Its features include:
Handle EVMS, Linux LVM and LVM2 volumes
Handle many kinds of disk partitioning schemes
Handle many different file systems (Ext2, Ext3, FAT, JFS, NTFS, OCFS2, OpenGFS, ReiserFS, Swap, XFS etc.)
Multi-disk (MD) management
Software RAID: level 0, 1, 4 and 5 (no support for level 6 and 10)
Drive linking (device concatenation)
Multipath I/O support
Manage shared cluster storage
Expand and shrink volumes and file systems online or offline (depending on the file system's capabilities)
Snapshots (frozen images of volumes), optionally writable
Conversion between different volume types
Move partitions
Make, check and repair file systems
Bad block relocation
Three types of user interface: GUI, text mode interface and CLI
Backup and restore the EVMS metadata
EVMS is licensed under the GNU General Public License version 2 or later. As of 2008, EVMS was supported in several Linux distributions, including SUSE, Debian and Gentoo.
LVM vs EVMS
For a while, both LVM and EVMS were competing for inclusion in the mainline kernel. EVMS had more features and better userland tools, but the internals of LVM were more attractive to kernel developers, so in the end LVM won the battle for inclusion. In response, the EVMS team decided to concentrate on porting the EVMS userland tools to work with the LVM kernelspace.
Sometime after the release of version 2.5.5 on February 26, 2006, IBM discontinued development of the project. There have been no further releases. In 2008 Novell announced that the company would be moving from EVMS to LVM in future editions of their SUSE products, while continuing to fully support customers using EVMS. |
https://en.wikipedia.org/wiki/MED26 | Mediator of RNA polymerase II transcription subunit 26 is an enzyme that in humans is encoded by the MED26 gene. It forms part of the Mediator complex.
The activation of gene transcription is a multistep process that is triggered by factors that recognize transcriptional enhancer sites in DNA. These factors work with co-activators to direct transcriptional initiation by the RNA polymerase II apparatus. The protein encoded by this gene is a subunit of the CRSP (cofactor required for SP1 activation) complex, which, along with TFIID, is required for efficient activation by SP1. This protein is also a component of other multisubunit complexes e.g. thyroid hormone receptor-(TR-) associated proteins which interact with TR and facilitate TR function on DNA templates in conjunction with initiation factors and cofactors.
Activity
MED26 is a transcription elongation factor that increases the overall transcription rate of RNA polymerase II by reactivating transcription elongation complexes that have arrested transcription. It does this through recruiting ELL/EAF- and P-TEFb-containing complexes to promoters via a direct interaction with the N-terminal domain (NTD). The MED26 NTD also binds TFIID, and TFIID and elongation complexes interact with MED26 through overlapping binding sites. MED26 NTD may function as a molecular switch contributing to the transition of Pol II into productive elongation.
The three structural domains of TFIIS are conserved from yeast to human. The 80 or so N-terminal residues form a protein interaction domain containing a conserved motif, which has been called the LW motif because of the invariant leucine and tryptophan residues it contains. Although the N-terminal domain is not needed for transcriptional activity, a similar sequence has been identified in other transcription factors and proteins that are predominantly nuclear localized. Specific examples are listed below:
MED26 (also known as CRSP70 and ARC70), a subunit of the Mediator complex, |
https://en.wikipedia.org/wiki/Thiocarlide | Thiocarlide (or tiocarlide or isoxyl) is a thiourea drug used in the treatment of tuberculosis, inhibiting synthesis of oleic acid and tuberculostearic acid.
Thiocarlide has considerable antimycobacterial activity in vitro and is effective against multi-drug resistant strains of Mycobacterium tuberculosis. Isoxyl inhibits M. bovis within six hours of exposure, which is similar to isoniazid and ethionamide, two other prominent anti-TB drugs. Unlike these two drugs, however, isoxyl also partially inhibits the synthesis of fatty acids.
Thiocarlide was developed by a Belgian company, Continental Pharma S.A. Belgo-Canadienne in Brussels, Belgium. The head researcher was Professor N. P. Buu-Hoi, head of Continental Pharma's Research Division. |
https://en.wikipedia.org/wiki/WiperSoft | WiperSoft is an anti-spyware program developed by Wiper Software. It is designed to help users protect their computers from such threats as adware, browser hijackers, worms, potentially unwanted programs (PUPs), trojans, and viruses. It is currently available only for Microsoft Windows.
History
WiperSoft was launched in 2015 and was available as a free program for home users. Users were able to use the scan and removal functions without having to buy a subscription.
In 2016, it was re-released with a new design, improved detection and removal functionalities and a more user-friendly interface. That same year, WiperSoft also became a paid program.
WiperSoft saw a big increase in downloads and sales in 2017, and is reportedly used by 1 million users from 120 different countries.
It was tested by Softpedia in 2017 and was rated 100% Clean.
Product
WiperSoft is primarily an anti-spyware program, and comes in two versions. The free version lets users scan their computers for malware. The paid version adds malware detection and removal, help desk services and custom fixes.
According to Wiper Software, the program can detect and remove threats like potentially unwanted programs, adware, browser hijackers, questionable toolbars, browser add-ons, viruses, trojans and more. Detected potential threats are not automatically deleted, and users have the option of keeping them installed. The program will also undo the changes made by detected threats, such as change of homepage or default search engine.
Availability
The program is currently only available for Microsoft Windows users. All popular browsers, such as Google Chrome, Mozilla Firefox, Internet Explorer and Opera are supported. The program is available in 10 languages. |
https://en.wikipedia.org/wiki/Blackett%20effect | The Blackett effect, also called gravitational magnetism, is the hypothetical generation of a magnetic field by an uncharged, rotating body. This effect has never been observed.
History
Gravitational magnetism was proposed by the German-British physicist Arthur Schuster as an explanation for the magnetic field of the Earth, but was found nonexistent in a 1923 experiment by H. A. Wilson. The hypothesis was revived by the British physicist P. M. S. Blackett in 1947 when he proposed that a rotating body should generate a magnetic field proportional to its angular momentum. This was never generally accepted, and by the 1950s even Blackett felt it had been refuted.
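The proportionality Blackett proposed is usually stated in later accounts in the following form (a reconstruction; the normalization is not given in this article):

```latex
% Schuster--Blackett conjecture: magnetic moment P proportional to
% angular momentum U, with G the gravitational constant, c the speed
% of light, and beta a dimensionless constant of order unity.
P = \beta \, \frac{\sqrt{G}}{2c} \, U
```

For laboratory-sized rotating bodies this predicts fields far too small to detect easily, which is why the hypothesis was tested against large rotating masses such as the Earth.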
The Blackett effect was used by the science fiction writer James Blish in his series Cities in Flight (1955–1962) as the basis for his fictional stardrive, the spindizzy. |
https://en.wikipedia.org/wiki/Predispositioning%20theory | Predispositioning theory, in the field of decision theory and systems theory, is a theory focusing on the stages between a complete order and a complete disorder.
Predispositioning theory was founded by Aron Katsenelinboigen (1927–2005), a professor in the Wharton School who dealt with indeterministic systems such as chess, business, economics, and other fields of knowledge and also made an essential step forward in elaboration of styles and methods of decision-making.
Predispositioning theory
Predispositioning theory is focused on the intermediate stage between a complete order and a complete disorder. According to Katsenelinboigen, the system develops gradually, going through several stages, starting with incomplete and inconsistent linkages between its elements and ending with complete and consistent ones.
"Mess. The zero phase can be called a mess because it contains no linkages between the system's elements. Such a definition of mess as ‘a disorderly, un-tidy, or dirty state of things’ we find in Webster's New World Dictionary. (...)
Chaos. Mess should not be confused with the next phase, chaos, as this term is understood today. Arguably, chaos is the first phase of indeterminism that displays sufficient order to talk of the general problem of system development.
The chaos phase is characterized by some ordering of accumulated statistical data and the emergence of the basic rules of interactions of inputs and outputs (not counting boundary conditions). Even such a seemingly limited ordering makes it possible to fix systemic regularities of the sort shown by Feigenbaum numbers and strange attractors.
(...) Different types of orderings in the chaos phase may be brought together under the notion of directing, for they point to a possible general direction of system development and even its extreme states.
But even if a general path is known, enormous difficulties remain in linking algorithmically the present state with the final one and in operationalizing t |
https://en.wikipedia.org/wiki/Parasympatholytic | A parasympatholytic agent is a substance or activity that reduces the activity of the parasympathetic nervous system.
The term parasympatholytic typically refers to the effect of a drug, although some poisons act to block the parasympathetic nervous system as well. Most drugs with parasympatholytic properties are anticholinergics.
Parasympatholytic agents and sympathomimetic agents have similar effects to each other, although some differences between the two groups can be observed. For example, both cause mydriasis, but parasympatholytics reduce accommodation (cycloplegia), whereas sympathomimetics do not.
Clinical significance
Parasympatholytic drugs are sometimes used to treat slow heart rhythms (bradycardias or bradydysrhythmias) caused by myocardial infarctions or other pathologies, as well as to treat conditions that cause bronchioles in the lung to constrict, such as asthma. By blocking the parasympathetic nervous system, parasympatholytic drugs can increase heart rate in patients with bradycardic heart rhythms, and open up airways and reduce mucous production in patients with asthma.
External links
Overview at salisbury.edu
https://en.wikipedia.org/wiki/Check%20mark | A check or check mark (American English), checkmark (Philippine English), tickmark (Indian English) or tick (Australian, New Zealand and British English) is a mark (✓, ✔, etc.) used, primarily in the English-speaking world, to indicate the concept "yes" (e.g. "yes; this has been verified", "yes; that is the correct answer", "yes; this has been completed", or "yes; this [item or option] applies"). The x mark is also sometimes used for this purpose (most notably on election ballot papers, e.g. in the United Kingdom), but otherwise usually indicates "no", incorrectness, or failure. One of the earliest usages of a check mark as an indication of completion is on ancient Babylonian tablets "where small indentations were sometimes made with a stylus, usually placed at the left of a worker's name, presumably to indicate whether the listed ration has been issued."
As a verb, to check (off) or tick (off) means to add such a mark. Printed forms, printed documents, and computer software (see checkbox) commonly include squares in which to place check marks.
International differences
The check mark is a predominant affirmative symbol of convenience in the English-speaking world because of its instant and simple composition. In other language communities, there are different conventions.
It is common in Swedish schools for a ✓ to indicate that an answer is incorrect, while "R", from the Swedish rätt, i.e., "correct", is used to indicate that an answer is correct.
In Finnish, ✓ stands for väärin, i.e., "wrong", due to its similarity to a slanted v. The opposite, "correct", is marked with a slanted vertical line emphasized with two dots (see also commercial minus sign).
In Japan, the O mark is used instead of the check mark, and the X and ✓ marks are commonly used for wrong.
In the Netherlands a 'V' is used to show that things are missing while the flourish of approval (or krul) is used for approving a section or sum.
Unicode
Unicode provides various check marks:
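The code-point table did not survive extraction; a few of the commonly cited marks (a non-exhaustive sample) can be confirmed programmatically:

```python
import unicodedata

# A few widely used check-mark code points (not the full set Unicode offers).
marks = {
    "\u2713": "CHECK MARK",              # ✓
    "\u2714": "HEAVY CHECK MARK",        # ✔
    "\u2705": "WHITE HEAVY CHECK MARK",  # ✅
    "\u2611": "BALLOT BOX WITH CHECK",   # ☑
}

for ch, expected in marks.items():
    # unicodedata.name returns the official Unicode character name.
    assert unicodedata.name(ch) == expected
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
```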
See also
Bracket
O |
https://en.wikipedia.org/wiki/Ferredoxin-thioredoxin%20reductase | Ferredoxin-thioredoxin reductase , systematic name ferredoxin:thioredoxin disulfide oxidoreductase, is a [4Fe-4S] protein that plays an important role in the ferredoxin/thioredoxin regulatory chain. It catalyzes the following reaction:
2 reduced ferredoxin + thioredoxin disulfide + 2 H+ ⇌ 2 oxidized ferredoxin + thioredoxin thiols
Ferredoxin-Thioredoxin reductase (FTR) converts an electron signal (photoreduced ferredoxin) to a thiol signal (reduced thioredoxin), regulating enzymes by reduction of specific disulfide groups. It catalyses the light-dependent activation of several photosynthesis enzymes and constitutes the first historical example of a thiol/disulfide exchange cascade for enzyme regulation. It is a heterodimer of subunit alpha and subunit beta. Subunit alpha is the variable subunit, and beta is the catalytic chain. The structure of the beta subunit has been determined and found to fold around the FeS cluster.
Biological Function
Major groups of oxygen-producing, photosynthetic organisms such as cyanobacteria, algae, C4, C3, and crassulacean acid metabolism (CAM) plants use Ferredoxin-thioredoxin reductase for carbon fixation regulation. FTR, as part of a greater Ferredoxin-Thioredoxin system, allows plants to change their metabolism based on light intensity. Specifically, the Ferredoxin-Thioredoxin system controls enzymes in the Calvin Cycle and Pentose phosphate pathway - allowing plants to balance carbohydrate synthesis and degradation based on the availability of light. In the light, photosynthesis harnesses light energy and reduces Ferredoxin. Using FTR, reduced Ferredoxin then reduces Thioredoxin. Thioredoxin, through thiol/disulfide exchange, then activates carbohydrate synthesis enzymes such as chloroplast fructose-1,6-bisphosphatase, Sedoheptulose-bisphosphatase, and phosphoribulokinase. As a result, light uses FTR to activate carbohydrate biosynthesis. In the dark, Ferredoxin remains oxidized. This leaves Thioredoxin inactive and allow |
https://en.wikipedia.org/wiki/Integral%20closure%20of%20an%20ideal | In algebra, the integral closure of an ideal I of a commutative ring R, denoted by $\overline{I}$, is the set of all elements r in R that are integral over I: there exist $a_i \in I^i$ such that
$$ r^{n} + a_{1} r^{n-1} + a_{2} r^{n-2} + \cdots + a_{n-1} r + a_{n} = 0. $$
It is similar to the integral closure of a subring. For example, if R is a domain, an element r in R belongs to $\overline{I}$ if and only if there is a finitely generated R-module M, annihilated only by zero, such that $rM \subseteq IM$. It follows that $\overline{I}$ is an ideal of R (in fact, the integral closure of an ideal is always an ideal; see below). I is said to be integrally closed if $I = \overline{I}$.
The integral closure of an ideal appears in a theorem of Rees that characterizes an analytically unramified ring.
Examples
In $R = k[x, y]$, the element $xy$ is integral over $I = (x^{2}, y^{2})$. It satisfies the equation $(xy)^{2} - x^{2}y^{2} = 0$, where $-x^{2}y^{2}$ is in the ideal $I^{2}$.
Radical ideals (e.g., prime ideals) are integrally closed. The intersection of integrally closed ideals is integrally closed.
In a normal ring, for any non-zerodivisor x and any ideal I, $\overline{xI} = x\overline{I}$. In particular, in a normal ring, a principal ideal generated by a non-zerodivisor is integrally closed.
Let $R = k[x_1, \dots, x_n]$ be a polynomial ring over a field k. An ideal I in R is called monomial if it is generated by monomials, i.e., by elements of the form $x_1^{a_1} \cdots x_n^{a_n}$. The integral closure of a monomial ideal is monomial.
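A worked example in the monomial case (standard, though not stated in this excerpt): in $k[x, y]$, the monomial $x^{2}y$ lies in the integral closure of $I = (x^{3}, y^{3})$, because

```latex
(x^{2}y)^{3} = x^{6}y^{3} = (x^{3})^{2}(y^{3}) \in I^{3},
```

so $r = x^{2}y$ satisfies the integral dependence equation $r^{3} - x^{6}y^{3} = 0$ with $x^{6}y^{3} \in I^{3}$; the exponent vector (2, 1) lies on the segment joining (3, 0) and (0, 3), illustrating how the integral closure of a monomial ideal is read off from the convex hull of the exponents.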
Structure results
Let R be a ring. The Rees algebra $R[It] = \bigoplus_{n \ge 0} I^{n} t^{n}$ can be used to compute the integral closure of an ideal. The structure result is the following: the integral closure of $R[It]$ in $R[t]$, which is graded, is $\bigoplus_{n \ge 0} \overline{I^{n}}\, t^{n}$. In particular, $\overline{I}$ is an ideal and $\overline{\overline{I}} = \overline{I}$; i.e., the integral closure of an ideal is integrally closed. It also follows that the integral closure of a homogeneous ideal is homogeneous.
The following type of results is called the Briançon–Skoda theorem: let R be a regular ring and I an ideal generated by n elements. Then the integral closure of I^{n+l} is contained in I^{l+1} for any l ≥ 0.
A theorem of Rees states: let (R, m) be a noetherian local ring. Assume it is formally equidimensional (i.e., the completion is equidimensional). Then two m-primary ideals have the same integral closure if and only if they have the same multiplicity.
See also
Dedekind–Kummer |
https://en.wikipedia.org/wiki/Divisibility%20sequence | In mathematics, a divisibility sequence is an integer sequence (a_n) indexed by positive integers n such that

if m | n then a_m | a_n

for all m, n. That is, whenever one index is a multiple of another one, then the corresponding term also is a multiple of the other term. The concept can be generalized to sequences with values in any ring where the concept of divisibility is defined.
A strong divisibility sequence is an integer sequence (a_n) such that for all positive integers m, n,

gcd(a_m, a_n) = a_{gcd(m, n)}.

Every strong divisibility sequence is a divisibility sequence: m | n if and only if gcd(m, n) = m. Therefore, by the strong divisibility property, gcd(a_m, a_n) = a_m, and therefore a_m | a_n.
Examples
Any constant sequence is a strong divisibility sequence.
Every sequence of the form a_n = k·n, for some nonzero integer k, is a divisibility sequence.
The numbers of the form 2^n − 1 (Mersenne numbers) form a strong divisibility sequence.
The repunit numbers in any base form a strong divisibility sequence.
More generally, any sequence of the form a_n = A^n − B^n for integers A > B > 0 is a divisibility sequence. In fact, if A and B are coprime, then this is a strong divisibility sequence.
The Fibonacci numbers form a strong divisibility sequence.
More generally, any Lucas sequence of the first kind U_n(P, Q) is a divisibility sequence. Moreover, it is a strong divisibility sequence when gcd(P, Q) = 1.
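The Mersenne and Fibonacci claims above can be spot-checked numerically (a small Python sketch; the index bound 40 is an arbitrary choice):

```python
from math import gcd

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

F = [fib(i) for i in range(40)]
M = [2**i - 1 for i in range(40)]  # Mersenne numbers

for m in range(1, 40):
    for n in range(1, 40):
        # strong divisibility: gcd of the terms equals the term at the gcd
        assert gcd(F[m], F[n]) == F[gcd(m, n)]
        assert gcd(M[m], M[n]) == M[gcd(m, n)]
print("strong divisibility holds for all indices below 40")
```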
Elliptic divisibility sequences are another class of such sequences. |
https://en.wikipedia.org/wiki/Gridwars | Gridwars (aka GRID WARS) was a programming contest announced in November 2002 by Engineered Intelligence (EI). The competition was devised to promote EI's product called CxC (a parallel programming language) introduced the same day. Gridwars was also announced in selected forums and through personal invitations.
Four contests were held in total: in February 2003, in June 2003 (Gridwars II), in November 2003 (Gridwars Interactive), and in April 2004 (Gridwars III).
EI was founded by Matt Oberdorfer; in late 2005 EI discontinued CxC and announced a new product called "I/O accelerator". In early 2006 EI changed its name to Gear6 and replaced the Gridwars front page with an announcement of the discontinuation. Shortly afterwards, the website www.gridwars.com was shut down.
Game concept and core rules
The game is played on a board, aka "battlefield": an orthogonal grid of a given size drawn on a torus (thus opposite edges of the field are in contact).
Each cell of the battlefield can be either empty or owned by one of several codes competing for the cells of the battlefield. The code which manages to take over the battlefield or owns most cells after a specified number of cycles is the winner.
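Because the battlefield wraps around, neighbor lookups reduce to modular arithmetic. A minimal sketch (a hypothetical helper for illustration, not part of CxC):

```python
def torus_neighbors(x, y, width, height):
    """The eight immediate neighbors of cell (x, y), wrapping around both edges."""
    return [((x + dx) % width, (y + dy) % height)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

# A corner cell still has eight neighbors, some on the opposite edges:
print(torus_neighbors(0, 0, 5, 5))
```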
The original terminology used by EI was peculiar in that it referred to the competing codes as "the warriors" and to the cells as "processors" of a virtual computing grid (hence "the battle for processors") capable, however, of "firing bullets" at each other.
The game proceeds in turns (cycles). At the beginning of the game, each code owns one cell. Every cycle, codes are executed for the cells they own. During execution, the framework program supplies the codes with some data: the warrior numbers of the cell's eight immediate neighbors (0 for a free cell) and the cell's own warrior number. Based on this data, a warrior can "fire" up to three "bullets" at one, two, or three of its eight neighbors. Gridwars II introduced a principal extension of the original rules: warriors could now return a 32-bit word, called communication var
https://en.wikipedia.org/wiki/Humeroradial%20joint | The humeroradial joint is the joint between the head of the radius and the capitulum of the humerus. It is structurally a limited ball-and-socket joint, but forms part of the hinge-type synovial elbow joint.
Structure
The annular ligament binds the head of the radius to the radial notch of the ulna, preventing any separation of the two bones laterally. Therefore, the humeroradial joint is not functionally a ball and socket joint, although the joint surface in itself allows movement in all directions.
The annular ligament secures the head of the radius from dislocation, which would otherwise tend to occur, from the shallowness of the cup-like surface on the head of the radius. Without this ligament, the tendon of the biceps brachii would be liable to pull the head of the radius out of the joint.
The head of the radius is not in complete contact with the capitulum of the humerus in all positions of the joint.
The capitulum occupies only the anterior and inferior surfaces of the lower end of the humerus, so that in complete extension a part of the radial head can be plainly felt projecting at the back of the joint.
In full flexion the movement of the radial head is hampered by the compression of the surrounding soft parts, so that the freest rotatory movement of the radius on the humerus (pronation and supination) takes place in semiflexion, in which position the two articular surfaces are in most intimate contact.
Flexion and extension of the elbow-joint are limited by the tension of the structures on the front and back of the joint; the limitation of flexion is also aided by the soft structures of the arm and forearm coming into contact.
Clinical significance
Subluxation
A subluxation of the humeroradial joint is called a "nursemaid's elbow", also known as radial head subluxation. It is generally caused by a sudden pull on the extended pronated forearm, such as by an adult tugging on an uncooperative child or by swinging the child by the arms during play.
In radial head subluxation, there is littl |
https://en.wikipedia.org/wiki/Near%20polygon | In mathematics, a near polygon is an incidence geometry introduced by Ernest E. Shult and Arthur Yanushka in 1980. Shult and Yanushka showed the connection between the so-called tetrahedrally closed line-systems in Euclidean spaces and a class of point-line geometries which they called near polygons. These structures generalise the notion of generalized polygon, as every generalized 2n-gon is a near 2n-gon of a particular kind. Near polygons were extensively studied, and the connection between them and dual polar spaces was shown in the 1980s and early 1990s. Some sporadic simple groups, for example the Hall–Janko group and the Mathieu groups, act as automorphism groups of near polygons.
Definition
A near 2d-gon is an incidence structure (P, L, I), where P is the set of points, L is the set of lines and I is the incidence relation, such that:
The maximum distance between two points (the so-called diameter) is d.
For every point p and every line l there exists a unique point on l which is nearest to p.
Note that the distances are measured in the collinearity graph of points, i.e., the graph formed by taking the points as vertices and joining a pair of vertices if they are incident with a common line.
We can also give an alternate graph-theoretic definition: a near 2d-gon is a connected graph of finite diameter d with the property that for every vertex x and every maximal clique M there exists a unique vertex x′ in M nearest to x.
The maximal cliques of such a graph correspond to the lines in the incidence structure definition.
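The graph-theoretic definition can be checked mechanically with breadth-first search. The sketch below (illustrative Python, not from the source) verifies that even cycles, which are collinearity graphs of ordinary polygons, satisfy the unique-nearest-vertex condition while odd cycles fail it; it assumes a triangle-free graph, so that the maximal cliques are exactly the edges:

```python
from collections import deque

def bfs_distances(adj, source):
    """Graph distances from source by breadth-first search."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_near_polygon(adj, lines):
    """Every vertex must have a unique nearest point on every line (maximal clique)."""
    for x in adj:
        dist = bfs_distances(adj, x)
        for line in lines:
            best = min(dist[p] for p in line)
            if sum(1 for p in line if dist[p] == best) != 1:
                return False
    return True

def cycle(n):
    """Collinearity graph of an ordinary n-gon; for n >= 4 (triangle-free),
    the maximal cliques are exactly the edges."""
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    return adj, [{i, (i + 1) % n} for i in range(n)]

print(is_near_polygon(*cycle(6)), is_near_polygon(*cycle(5)))  # True False
```

An odd cycle fails because a vertex opposite an edge is equidistant from both of its endpoints.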
A near 0-gon (d = 0) is a single point while a near 2-gon (d = 1) is just a single line, i.e., a complete graph. A near quadrangle (d = 2) is the same as a (possibly degenerate) generalized quadrangle. In fact, it can be shown that every generalized 2d-gon is a near 2d-gon that satisfies the following two additional conditions:
Every point is incident with at least two lines.
For every two points x, y at distance i < d, there exists a unique neighbour of y at dis |
https://en.wikipedia.org/wiki/Electroplasticity | Electroplasticity describes the enhanced plastic behavior of a solid material under the application of an electric field. This electric field can be internal, resulting in current flow in conducting materials, or external. The effect of an electric field on mechanical properties ranges from simply enhancing existing plasticity, such as reducing the flow stress in already ductile metals, to promoting plasticity in otherwise brittle ceramics. The exact mechanisms that control electroplasticity vary based on the material and the exact conditions (e.g., temperature, strain rate, grain size, etc.). Enhancing the plasticity of materials is of great practical interest, as plastic deformation provides an efficient way of transforming raw materials into final products. The use of electroplasticity to improve the processing of materials is known as electrically assisted manufacturing.
History
Electroplasticity was first discovered by Eugene S. Machlin, who reported in 1959 that applying an electric field made NaCl weaker and more ductile. Since then, the effect of electric fields on plasticity has been studied in many materials systems including metal, ceramics, and semiconductors. Various mechanisms have been posited to explain electroplastic effects and their dependence on materials properties and external conditions. For most materials the electroplastic effect arises from a combination of multiple mechanisms. This should not be all that surprising given that the electric fields directly affect electrons which dictate the bonding in materials and therefore all higher level phenomena such as dislocation motion, flow stress, vacancy diffusion, etc.
Electroplasticity in Metals
The application of DC electric fields is known to reduce the flow stress of metals and metal alloys while increasing the fracture strain. Several mechanisms have been put forth to explain this effect including Joule heating, electron wind force, dissolution of metallic bonds, and unpinning of dislocatio |
https://en.wikipedia.org/wiki/Luria%E2%80%93Delbr%C3%BCck%20experiment | The Luria–Delbrück experiment (1943) (also called the Fluctuation Test) demonstrated that in bacteria, genetic mutations arise in the absence of selective pressure rather than being a response to it. Thus, it supported the view that Darwin's theory of natural selection acting on random mutations applies to bacteria as well as to more complex organisms. Max Delbrück and Salvador Luria won the 1969 Nobel Prize in Physiology or Medicine in part for this work.
History
By the 1940s the ideas of inheritance and mutation were generally accepted, though the role of DNA as the hereditary material had not yet been established. It was thought that bacteria were somehow different and could develop heritable genetic mutations depending on the circumstances in which they found themselves: in short, was mutation in bacteria pre-adaptive (pre-existent) or post-adaptive (directed adaptation)?
In their experiment, Luria and Delbrück inoculated a small number of bacteria (Escherichia coli) into separate culture tubes. After a period of growth, they plated equal volumes of these separate cultures onto agar containing the T1 phage (virus). If resistance to the virus in bacteria were caused by an induced activation in bacteria i.e. if resistance were not due to heritable genetic components, then each plate should contain roughly the same number of resistant colonies.
Assuming a constant rate of mutation, Luria hypothesized that if mutations occurred after and in response to exposure to the selective agent, the number of survivors would be distributed according to a Poisson distribution with the mean equal to the variance. This was not what Delbrück and Luria found: Instead the number of resistant colonies on each plate varied drastically: the variance was considerably greater than the mean.
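The contrast between the two hypotheses can be illustrated with a toy simulation (hypothetical parameters, not the 1943 protocol): in the acquired-mutation model, early "jackpot" mutations inflate the variance far beyond the mean, while the induced-mutation null model stays near Poisson, with variance roughly equal to the mean:

```python
import random
from statistics import mean, pvariance

def grow_culture(generations, mu, rng):
    """Acquired-mutation model: resistance arises at random during growth and is inherited."""
    sensitive, resistant = 1, 0
    for _ in range(generations):
        sensitive *= 2                      # every cell divides
        resistant *= 2                      # resistant lineages keep growing
        mutants = sum(1 for _ in range(sensitive) if rng.random() < mu)
        sensitive -= mutants
        resistant += mutants
    return resistant

rng = random.Random(42)                     # arbitrary seed; rates are illustrative
generations, mu, cultures = 12, 5e-4, 200
acquired = [grow_culture(generations, mu, rng) for _ in range(cultures)]

# Induced-mutation null model: each plated cell independently becomes resistant
# on exposure, so counts are binomial (approximately Poisson, variance close to mean).
final_pop = 2 ** generations
p = mean(acquired) / final_pop              # match the means of the two models
induced = [sum(1 for _ in range(final_pop) if rng.random() < p)
           for _ in range(cultures)]

def ratio(xs):
    """Variance-to-mean ratio across parallel cultures."""
    return pvariance(xs) / mean(xs)

print(ratio(acquired), ratio(induced))
```

The acquired-mutation cultures show a variance-to-mean ratio far above 1, which is the fluctuation Luria and Delbrück observed.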
Luria and Delbrück proposed that these results could be explained by the occurrence of a constant rate of random mutations in each generation of bacteria growing in the initial culture tubes. Based on these assumptio |
https://en.wikipedia.org/wiki/Vibrational%20circular%20dichroism | Vibrational circular dichroism (VCD) is a spectroscopic technique which detects differences in attenuation of left and right circularly polarized light passing through a sample. It is the extension of circular dichroism spectroscopy into the infrared and near infrared ranges.
Because VCD is sensitive to the mutual orientation of distinct groups in a molecule, it provides three-dimensional structural information. Thus, it is a powerful technique as VCD spectra of enantiomers can be simulated using ab initio calculations, thereby allowing the identification of absolute configurations of small molecules in solution from VCD spectra. Among such quantum computations of VCD spectra resulting from the chiral properties of small organic molecules are those based on density functional theory (DFT) and gauge-including atomic orbitals (GIAO). A simple example of the experimental results obtained by VCD is the spectral data obtained within the carbon-hydrogen (C-H) stretching region of 21 amino acids in heavy water solutions. Measurements of vibrational optical activity (VOA) have thus numerous applications, not only for small molecules, but also for large and complex biopolymers such as muscle proteins (myosin, for example) and DNA.
Vibrational modes
Theory
While the fundamental quantity associated with the infrared absorption is the dipole strength, the differential absorption is also proportional to the rotational strength, a quantity which depends on both the electric and magnetic dipole transition moments. Sensitivity of the handedness of a molecule toward circularly polarized light results from the form of the rotational strength. A rigorous theoretical development of VCD was developed concurrently by the late Professor P.J. Stephens, FRS, at the University of Southern California, and the group of Professor A.D. Buckingham, FRS, at Cambridge University in the UK, and first implemented analytically in the Cambridge Analytical Derivative Package (CADPAC) by |
https://en.wikipedia.org/wiki/Histamine%20N-methyltransferase | Histamine N-methyltransferase (HNMT, HMT) is an enzyme involved in the metabolism of histamine. It is one of two enzymes involved in the metabolism of histamine in mammals, the other being diamine oxidase (DAO). HNMT catalyzes the methylation of histamine in the presence of S-adenosylmethionine (SAM-e) forming N-methylhistamine. The HNMT enzyme is present in most body tissues but is not present in serum. Histamine N-methyltransferase is encoded by a single gene, HNMT, which in humans has been mapped to chromosome 2.
Function
The function of the HNMT enzyme is histamine metabolism by way of Nτ-methylation using SAM-e as the methyl donor, producing N-methylhistamine, which, unless excreted, can be further processed by monoamine oxidase B (MAOB) or by DAO. Methylated histamine metabolites are excreted with urine.
In mammals, histamine is metabolized by two major pathways: oxidative deamination via DAO, encoded by the AOC1 gene, and Nτ-methylation via HNMT, encoded by the HNMT gene. In the brain of mammals histamine neurotransmitter activity is controlled by Nτ-methylation since DAO is not present in the central nervous system.
Across biological species, the HNMT enzyme is found in vertebrates, including birds, reptiles and amphibians, but not in invertebrates or plants.
The HNMT enzyme resides in the cytosol, the intracellular fluid. DAO metabolizes extracellular free histamine, whether exogenous (from food) or, mostly, endogenous histamine released from the granules of mast cells and basophils during allergic reactions, and is mainly expressed in the cells of the intestinal epithelium. HNMT, by contrast, is involved in the metabolism of the persistently present intracellular, primarily endogenous histamine, mainly in the kidneys and liver, but also in the bronchi, large intestine, ovary, prostate, spinal cord, spleen, trachea and peripheral tissues. When HNMT activity is flawed, the organs most affected are the brain, the liver and the mucous membrane of the bronchus. Consequent
https://en.wikipedia.org/wiki/Palatoglossal%20arch | The palatoglossal arch (glossopalatine arch, anterior pillar of fauces) on either side runs downward, lateral (to the side), and forward to the side of the base of the tongue, and is formed by the projection of the glossopalatine muscle with its covering mucous membrane. It is the anterior border of the isthmus of the fauces and marks the border between the mouth and the palatopharyngeal arch. The latter marks the beginning of the pharynx. |
https://en.wikipedia.org/wiki/Cognitive%20ecology | Cognitive ecology is the study of cognitive phenomena within social and natural contexts. It is an integrative perspective drawing from aspects of ecological psychology, cognitive science, evolutionary ecology and anthropology. Notions of domain-specific modules in the brain and the cognitive biases they create are central to understanding the enacted nature of cognition within a cognitive ecological framework. This means that cognitive mechanisms not only shape the characteristics of thought, but they dictate the success of culturally transmitted ideas. Because culturally transmitted concepts can often inform ecological decision-making behaviors, group-level trends in cognition (i.e., culturally salient concepts) are hypothesized to address ecologically relevant challenges.
Theoretical basis
Cognitive ecology explores organism-environment interactions and their impact on cognitive phenomena. Human cognition in this framework is multimodal and viewed similarly to enactivist perspectives on cognitive processing. For cultural concepts, this emphasizes cognitive distribution across an ecosystem, which is predicated on models of the extended mind thesis.
Ecological psychology
While the multi-faceted nature of cognitive ecology is a consequence of its interdisciplinary history, it primarily derives from early work in ecological psychology. Paradigm shifts from behaviorist orientations of psychology to cognition, or the "cognitive revolution", gave rise to the ecological psychology approach, which distanced itself from mainstream cognitivist views by breaking down the common mind-environment dichotomy of psychological theory.
One particularly influential progenitor of this work was ecological psychologist James Gibson, whose legacy is marked by his ideas on ecological and social affordances. These are the opportunistic features of environmental objects that can be exploited for human use, and are therefore particularly perceptible ( |
https://en.wikipedia.org/wiki/High-dimensional%20statistics | In statistical theory, the field of high-dimensional statistics studies data whose dimension is larger than typically considered in classical multivariate analysis. The area arose owing to the emergence of many modern data sets in which the dimension of the data vectors may be comparable to, or even larger than, the sample size, so that justification for the use of traditional techniques, often based on asymptotic arguments with the dimension held fixed as the sample size increased, was lacking.
Examples
Parameter estimation in linear models
The most basic statistical model for the relationship between a covariate vector x ∈ R^p and a response variable y ∈ R is the linear model

y = x^T β + ε,

where β ∈ R^p is an unknown parameter vector, and ε is random noise with mean zero and variance σ^2. Given independent responses Y_1, ..., Y_n, with corresponding covariates x_1, ..., x_n, from this model, we can form the response vector Y = (Y_1, ..., Y_n)^T and design matrix X = (x_1, ..., x_n)^T. When n ≥ p and the design matrix has full column rank (i.e. its columns are linearly independent), the ordinary least squares estimator of β is

β̂ = (X^T X)^{-1} X^T Y.
When ε ~ N(0, σ^2), it is known that β̂ ~ N_p(β, σ^2 (X^T X)^{-1}). Thus, β̂ is an unbiased estimator of β, and the Gauss–Markov theorem tells us that it is the Best Linear Unbiased Estimator.
However, overfitting is a concern when p is of comparable magnitude to n: the matrix X^T X in the definition of β̂ may become ill-conditioned, with a small minimum eigenvalue. In such circumstances σ^2 tr((X^T X)^{-1}), the sum of the variances of the components of β̂, will be large (since the trace of a matrix is the sum of its eigenvalues, and the eigenvalues of (X^T X)^{-1} are the reciprocals of the eigenvalues of X^T X). Even worse, when p > n, the matrix X^T X is singular. (See Section 1.2 and Exercise 1.2 in .)
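The ill-conditioning of X^T X is easy to exhibit numerically. The sketch below (synthetic data and plain Python; the two-covariate setup and all numbers are illustrative assumptions) compares tr((X^T X)^{-1}) for a well-conditioned design against one whose second column nearly copies the first:

```python
import random

def gram(rows):
    """Entries of the 2x2 Gram matrix X^T X for an n-by-2 design matrix."""
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    c = sum(r[1] * r[1] for r in rows)
    return a, b, c

def trace_inverse(a, b, c):
    """trace((X^T X)^{-1}) via the closed-form inverse of [[a, b], [b, c]]."""
    return (a + c) / (a * c - b * b)

rng = random.Random(0)                       # arbitrary seed, synthetic data
well = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(100)]
# Second covariate nearly a copy of the first: X^T X is almost singular.
ill = [(x1, x1 + 1e-3 * rng.gauss(0, 1)) for x1, _ in well]

print(trace_inverse(*gram(well)), trace_inverse(*gram(ill)))
```

The nearly collinear design inflates the trace, and with it the variance of the OLS estimator, by several orders of magnitude.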
It is important to note that the deterioration in estimation performance in high dimensions observed in the previous paragraph is not limited to the ordinary least squares estimator. In fact, statistical inference in high dimensions is intrinsically hard, a phenomenon known as the curse of dimensionality, and it can be shown that no estimator can do better in a worst-case sense without additional information (see Example 15.10). Nevertheless, the s |
https://en.wikipedia.org/wiki/Johnson%27s%20algorithm | Johnson's algorithm is a way to find the shortest paths between all pairs of vertices in an edge-weighted directed graph. It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist. It works by using the Bellman–Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph. It is named after Donald B. Johnson, who first published the technique in 1977.
A similar reweighting technique is also used in Suurballe's algorithm for finding two disjoint paths of minimum total length between the same two vertices in a graph with non-negative edge weights.
Algorithm description
Johnson's algorithm consists of the following steps:
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u, v), is given the new length w(u, v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node to every other vertex in the reweighted graph. The distance in the original graph is then computed for each distance D(u, v), by adding h(v) − h(u) to the distance returned by Dijkstra's algorithm.
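The steps above can be sketched as follows (a minimal Python illustration; the adjacency-dict representation and the example graph are assumptions, not from the article):

```python
import heapq

def bellman_ford(graph, source):
    """Single-source shortest paths; tolerates negative edges, rejects negative cycles."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    for u in graph:
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:
                raise ValueError("negative-weight cycle")
    return dist

def dijkstra(graph, source):
    """Single-source shortest paths; requires non-negative edge weights."""
    dist, heap = {source: 0}, [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def johnson(graph):
    q = object()                                   # step 1: new node q
    aug = {**graph, q: {v: 0 for v in graph}}      # zero-weight edges to all nodes
    h = bellman_ford(aug, q)                       # step 2: potentials h(v)
    rew = {u: {v: w + h[u] - h[v] for v, w in graph[u].items()}
           for u in graph}                         # step 3: non-negative reweighting
    return {u: {v: d - h[u] + h[v]                 # step 4: undo the reweighting
                for v, d in dijkstra(rew, u).items()}
            for u in graph}

# Hypothetical example graph with negative edges but no negative cycle:
example = {0: {1: 3, 2: 8, 4: -4}, 1: {3: 1, 4: 7},
           2: {1: 4}, 3: {0: 2, 2: -5}, 4: {3: 6}}
dist = johnson(example)
print({v: dist[0][v] for v in sorted(dist[0])})  # {0: 0, 1: 1, 2: -3, 3: 2, 4: -4}
```

Bellman–Ford runs once from the artificial source, and every subsequent Dijkstra run works entirely with non-negative reweighted lengths.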
Example
The first three stages of Johnson's algorithm are depicted in the illustration below.
The graph on the left of the illustration has two negative edges, but no negative cycles. The center graph shows the new vertex q, a shortest path tree as computed by the Bellman–Ford algorithm with q as the starting vertex, and the values h(v) computed at each other node v as the length of the shortest path from q to that node. Note that these values are all non-positive, because
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Borwein%20constant | The Erdős–Borwein constant is the sum of the reciprocals of the Mersenne numbers. It is named after Paul Erdős and Peter Borwein.
By definition it is:

E = ∑_{n=1}^∞ 1/(2^n − 1) ≈ 1.6066951524...
Equivalent forms
It can be proven that the following forms all sum to the same constant:

E = ∑_{n=1}^∞ 1/(2^n − 1) = ∑_{n=1}^∞ σ0(n)/2^n = 1 + ∑_{n=1}^∞ 1/(2^n (2^n − 1)),

where σ0(n) = d(n) is the divisor function, a multiplicative function that equals the number of positive divisors of the number n. To prove the equivalence of these sums, note that they all take the form of Lambert series and can thus be resummed as such.
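The equivalence of the defining sum and the divisor-function form can be spot-checked with exact rational arithmetic (a sketch; truncating at 60 terms is an arbitrary choice that leaves both tails below double precision):

```python
from fractions import Fraction

# Direct definition: reciprocals of the Mersenne numbers 2^n - 1.
direct = sum(Fraction(1, 2**n - 1) for n in range(1, 60))

def d(n):
    """Number of positive divisors of n (naive count)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

# Lambert-series form: sum of d(n) / 2^n.
lambert = sum(Fraction(d(n), 2**n) for n in range(1, 60))

print(float(direct))            # ≈ 1.6066951524...
print(float(direct - lambert))  # tiny: only the truncated tails differ
```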
Irrationality
Erdős in 1948 showed that the constant E is an irrational number. Later, Borwein provided an alternative proof.
Despite its irrationality, the binary representation of the Erdős–Borwein constant may be calculated efficiently.
Applications
The Erdős–Borwein constant comes up in the average case analysis of the heapsort algorithm, where it controls the constant factor in the running time for converting an unsorted array of items into a heap. |
https://en.wikipedia.org/wiki/Pallor%20mortis |
Pallor mortis (Latin: pallor "paleness", mortis "of death"), the first stage of death, is an after-death paleness that occurs in those with light/white skin. An opto-electronical colour measurement device is used to measure pallor mortis on bodies.
Timing and applicability
Pallor mortis occurs almost immediately, generally within 15–25 minutes, after death. Paleness develops so rapidly after death that it has little to no use in determining the time of death, beyond indicating that death occurred either less or more than roughly 30 minutes earlier, which could help if the body were found very soon after death.
Cause
Pallor mortis results from the collapse of capillary circulation throughout the body. Gravity then causes the blood to sink down into the lower parts of the body, creating livor mortis.
Similar paleness in living persons
A living person can look deathly pale, with such paleness often likened to death in figurative speech and in fiction. This can happen when blood is drawn away from the surface of the skin, as in deep shock. Heart failure (insufficientia cordis) can also make the face appear pale; the person then might have blue lips. Skin can also become pale as a result of vasoconstriction as part of the body's homeostatic systems in cold conditions, or if the skin is deficient in vitamin D, as seen in people who spend most of the time indoors, away from sunlight.
https://en.wikipedia.org/wiki/Q-type%20calcium%20channel | The Q-type calcium channel is a type of voltage-dependent calcium channel. Like the others of this class, the α1 subunit is the one that determines most of the channel's properties.
They are poorly understood, but like R-type calcium channels, they appear to be present in cerebellar granule cells. They have a high threshold of activation and relatively slow kinetics.
External links
Ion channels
Electrophysiology
Membrane biology
Integral membrane proteins
Calcium channels |
https://en.wikipedia.org/wiki/Access%20Linux%20Platform | The Access Linux Platform (ALP) is a discontinued open-source software based operating system, once referred to as a "next-generation version of the Palm OS," for mobile devices developed and marketed by Access Co., of Tokyo, Japan. The platform included execution environments for Java, classic Palm OS, and GTK+-based native Linux applications. ALP was demonstrated in devices at a variety of conferences, including 3GSM, LinuxWorld, GUADEC, and Open Source in Mobile.
The ALP was first announced in February 2006. The initial versions of the platform and software development kits were officially released in February 2007. There was a coordinated effort by Access, Esteemo, NEC, NTT DoCoMo, and Panasonic to use the platform as a basis for a shared platform implementing a revised version of the i.mode Mobile Oriented Applications Platform (MOAP) (L) application programming interfaces (APIs), conforming to the specifications of the LiMo Foundation. The first smartphone to use the ALP was to be the Edelweiss by Emblaze Mobile that was scheduled for mid-2009. However, it was shelved before release. The First Else (renamed from Monolith) smartphone, that was being developed by Sharp Corporation in cooperation with Emblaze Mobile and seven other partners, was scheduled for 2009, but was never released and officially cancelled in June 2010. The platform is no longer referenced on Access's website, but Panasonic and NEC released a number of ALP phones for the Japanese market between 2010 and 2013.
Look and feel
The user interface was designed with similar general goals to earlier Palm OS releases, with an aim of preserving the Zen of Palm, a design philosophy centered on making the applications as simple as possible. Other aspects of the interface included a task-based orientation rather than a file/document orientation as is commonly found on desktop systems.
The appearance of the platform was intended to be highly customizable to provide differentiation for specific device |
https://en.wikipedia.org/wiki/Spectral%20flux | Spectral flux is a measure of how quickly the power spectrum of a signal is changing, calculated by comparing the power spectrum for one frame against the power spectrum from the previous frame.
More precisely, it is usually calculated as the L2-norm (also known as the Euclidean distance) between the two normalised spectra. Calculated this way, the spectral flux is not dependent upon overall power (since the spectra are normalised), nor on phase considerations (since only the magnitudes are compared).
The spectral flux can be used to determine the timbre of an audio signal, or in onset detection, among other things.
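The L2-norm definition can be sketched in a few lines (illustrative Python with a naive DFT; the frame length and test tones are assumptions):

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum (O(n^2)); fine for short illustrative frames."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

def normalise(spectrum):
    norm = math.sqrt(sum(m * m for m in spectrum))
    return [m / norm for m in spectrum] if norm else spectrum

def spectral_flux(prev_frame, frame):
    """L2 distance between the normalised magnitude spectra of two frames."""
    p = normalise(dft_magnitudes(prev_frame))
    c = normalise(dft_magnitudes(frame))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, c)))

n = 64
tone_a = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]   # energy in bin 4
tone_b = [math.sin(2 * math.pi * 12 * t / n) for t in range(n)]  # energy in bin 12
flux_same = spectral_flux(tone_a, tone_a)
flux_diff = spectral_flux(tone_a, tone_b)
print(flux_same, flux_diff)
```

Identical frames give zero flux; tones in disjoint bins give a flux near √2, the distance between orthogonal unit spectra.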
Variations
Some implementations use the L1-norm rather than the L2-norm (i.e. the sum of absolute differences rather than the sum of squared differences).
Some implementations do not normalise the spectra.
For onset detection, increases in energy are important (not decreases), so some algorithms only include values calculated from bins in which the energy is increasing. |
https://en.wikipedia.org/wiki/Mount%20Rainier%20%28packet%20writing%29 | Mount Rainier (MRW) is a format for writable optical discs which provides packet writing and defect management. Its goal is the replacement of the floppy disk. It is named after Mount Rainier, a volcano near Seattle, Washington, United States.
Mount Rainier can be used only with drives that explicitly support it (a part of SCSI/MMC and can work over ATAPI), but works with standard CD-R, CD-RW, DVD+/-R and DVD+/-RW media.
The physical format of MRW on the disk is managed by the drive's firmware, which remaps physical drive blocks into a virtual, defect-free space. Thus, the host computer does not see the physical format of the disk, only a sequence of data blocks capable of holding any filesystem.
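The remapping idea can be pictured with a toy model (purely illustrative; the real MRW on-disc tables and firmware behavior are more involved):

```python
class DefectRemapper:
    """Toy model of firmware defect management: the host addresses a contiguous
    logical space; defective physical blocks are silently replaced by spares."""

    def __init__(self, data_blocks, spare_blocks, defective):
        spares = list(range(data_blocks, data_blocks + spare_blocks))
        # defective physical block -> replacement block from the spare area
        self.table = {bad: spares.pop(0) for bad in sorted(defective)}

    def physical(self, logical):
        """Resolve a host-visible block number to the physical block actually used."""
        return self.table.get(logical, logical)

disc = DefectRemapper(data_blocks=100, spare_blocks=4, defective={7, 42})
print(disc.physical(8), disc.physical(7), disc.physical(42))  # 8 100 101
```

The host never sees the defective blocks; any filesystem can sit on top of the defect-free logical space, which mirrors the article's point.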
Design
The time needed for disk formatting is shortened to about one minute by the background formatting capabilities of the drive. Formatting allocates some sectors at the end of the disk for defect management. Defective sectors are recorded in a table in the lead-in (an administrative area) and in a copy of the table in the lead-out.
From the host computer's perspective, an MRW disc provides a defect-free block-accessible device, upon which any host supported filesystem may be written. Such filesystems may be FAT32, NTFS, etc., but the preferred format is usually UDF 1.02, as this file format is widely supported. An MRW-formatted CD-RW with a UDF filesystem gives approximately 500 MB free space.
Mt. Rainier allows write access to a disc within seconds after insertion and spin-up, even while a background formatting sequence is taking place. Before this technology, a user would have to wait for the formatting to complete before writing any data to a new disc. It is even possible to read (but not write) MRW discs without an MRW-compatible drive; a "remapper" device driver is needed, an example of which is EasyWrite Reader for Windows.
An alternative to MRW is to physically format a disc in UDF 1.5 or higher using the spared build. This is achieved by the use |
https://en.wikipedia.org/wiki/Intrastructural%20help | Intrastructural help (ISH) is a form of cooperation between T cells and B cells that can help or suppress an immune response. ISH has proven effective for the treatment of influenza, rabies-related lyssaviruses, hepatitis B, and HIV. This process was used in 1979 to observe that T cells specific to the influenza virus could promote the stimulation of hemagglutinin-specific B cells and elicit an effective humoral immune response. It was later applied to the lyssavirus and was shown to protect raccoons from lethal challenge. The ISH principle is especially beneficial because relatively invariable structural antigens can be used for the priming of T cells to induce a humoral immune response against variable surface antigens. Thus, the approach has also transferred well to the treatment of hepatitis B and HIV.
Background
One of the approaches for a protective HIV-1 vaccine is broadly neutralizing antibodies. These antibodies are found in 10-25 % of HIV-1 infected patients. Few of those (worldwide 0.8% of HIV-1 positive individuals) are able to suppress viremia up to a level that is below the detection levels and are so-called "elite controllers" or "long term non-progressors". Most of the conducted vaccine trials were unable to induce protective neutralizing antibodies; even though some protective effects of poly-functional antibodies were observed. These Fc-dependent effects seem to play an important role in disease control as shown by the non-human primate (NHP) experiment. In contrast, the results of the adenoviral-based STEP trial suggested a higher susceptibility due to high levels of non-neutralizing poly-functional antibodies and helper T cell proliferation induced by vaccination. In mouse models antibodies from the IgG1 subclass, which were mostly induced by vaccination, were seen to possess a relatively low functionality. Therefore, one objective is to increase the quality of the immune response by the induction of poly-functional antibody sub classes, e.g. IgG2A. However, |
https://en.wikipedia.org/wiki/Nonuniform%20sampling | Nonuniform sampling is a branch of sampling theory involving results related to the Nyquist–Shannon sampling theorem. Nonuniform sampling is based on Lagrange interpolation and its relationship to the (uniform) sampling theorem; it is a generalisation of the Whittaker–Shannon–Kotelnikov (WSK) sampling theorem.
The sampling theory of Shannon can be generalized to the case of nonuniform samples, that is, samples not taken equally spaced in time. The Shannon sampling theory for nonuniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition. Therefore, although uniformly spaced samples may allow simpler reconstruction algorithms, uniform spacing is not a necessary condition for perfect reconstruction.
The general theory for non-baseband and nonuniform samples was developed in 1967 by Henry Landau. He proved that the average sampling rate (uniform or otherwise) must be at least twice the occupied bandwidth of the signal, assuming the occupied portion of the spectrum is known a priori.
In the late 1990s, this work was partially extended to cover signals for which the amount of occupied bandwidth was known but the actual occupied portion of the spectrum was unknown. In the 2000s, a complete theory was developed
(see the section Beyond Nyquist below) using compressed sensing. In particular, the theory is described in signal-processing language in a 2009 paper, which shows, among other things, that if the frequency locations are unknown, it is necessary to sample at least at twice the Nyquist criterion; in other words, one must pay a factor of at least two for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee numerical stability.
Lagrange (polynomial) interpolation
For a given function, it is possible to construct a polynomial of degree n which has the same value as the fun |
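The Lagrange form can be sketched directly. The sample function and nodes below are arbitrary illustrative choices; the point is that the unique degree-3 polynomial through four distinct nodes reproduces a cubic exactly.

```python
# Sketch: Lagrange interpolation through n+1 distinct sample points.
# Function and node locations are illustrative assumptions.

def lagrange_interpolate(xs, ys, x):
    """Evaluate at x the unique degree-<=n polynomial through (xs[i], ys[i])."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Lagrange basis polynomial L_i: equals 1 at xs[i], 0 at other nodes.
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

f = lambda x: x**3 - 2 * x + 1      # degree 3, so 4 nodes suffice for exactness
xs = [0.0, 1.0, 2.0, 3.0]
ys = [f(x) for x in xs]
print(lagrange_interpolate(xs, ys, 1.5))  # agrees with f(1.5) up to rounding
```

For a general (non-polynomial) function, the same construction gives only an approximation whose error depends on the node placement, which is what connects this form to the reconstruction formulas of sampling theory.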
https://en.wikipedia.org/wiki/Ocean%20dynamical%20thermostat | Ocean dynamical thermostat is a physical mechanism through which changes in the mean radiative forcing influence the gradients of sea surface temperatures in the Pacific Ocean and the strength of the Walker circulation. Increased radiative forcing (warming) is more effective in the western Pacific than in the eastern where the upwelling of cold water masses damps the temperature change. This increases the east-west temperature gradient and strengthens the Walker circulation. Decreased radiative forcing (cooling) has the opposite effect.
The process has been invoked to explain variations in the Pacific Ocean temperature gradients that correlate to insolation and climate variations. It may also be responsible for the hypothesized correlation between El Niño events and volcanic eruptions, and for changes in the temperature gradients that occurred during the 20th century. Whether the ocean dynamical thermostat controls the response of the Pacific Ocean to anthropogenic global warming is unclear, as there are competing processes at play; potentially, it could drive a La Niña-like climate tendency during initial warming before it is overridden by other processes.
Background
The equatorial Pacific is a key region of Earth in terms of its relative influence on the worldwide atmospheric circulation. A characteristic east-west temperature gradient is coupled to an atmospheric circulation, the Walker circulation, and further controlled by atmospheric and oceanic dynamics. The western Pacific features the so-called "warm pool", where the warmest sea surface temperatures (SSTs) of Earth are found. In the eastern Pacific conversely an area called the "cold tongue" is always colder than the warm pool even though they lie at the same latitude, as cold water is upwelled there. The temperature gradient between the two in turn induces an atmospheric circulation, the Walker circulation, which responds strongly to the SST gradient.
One important component of the climate is the El N |
https://en.wikipedia.org/wiki/Walking%20in%20the%20Rain%20%28The%20Ronettes%20song%29 | "Walking in the Rain" is a song written by Barry Mann, Phil Spector, and Cynthia Weil. It was originally recorded by the girl group the Ronettes, whose 1964 version was a charting hit. Jay and the Americans released a charting cover of the song in 1969. The song has since been recorded by many other artists over the years, including the Walker Brothers.
The Ronettes version
The Ronettes were the first to release "Walking in the Rain". Their single reached number 23 on the Billboard Hot 100 chart in 1964. The song also reached number three on the R&B Singles Chart in 1965. The single contains sound effects of thunder and lightning, which earned audio engineer Larry Levine a Grammy nomination. Phil Spector produced the record.
In 2004, the Ronettes' version was ranked No. 266 on Rolling Stone's list of the 500 Greatest Songs of All Time; it moved down to No. 269 in the 2010 update and was not included in the 2021 list.
Jay and the Americans version
The pop group Jay and the Americans released a cover of "Walkin' in the Rain" in 1969 on their album Wax Museum, Vol. 1. Their version of the song reached number 19 on the U.S. Billboard Hot 100 and peaked at number 14 on Cash Box. It also hit number 8 on the Adult Contemporary chart; it was the last top-40 hit for the group.
Chart history
The Ronettes
The Walker Brothers
Jay & the Americans
The Partridge Family starring David Cassidy
Cheetah
Other versions
1967 – The Walker Brothers, single backed with the original b-side "Baby Make It Last the Time". This version alters the gender of the lyrics for a heterosexual male perspective. It reached number 26 in the UK Singles Chart. It was the group's final UK single before their first split.
1973 – featured on the television show The Partridge Family; single released in Canada, England, and other parts of Europe backed with "Together We're Better"; also with gender-altered lyrics, it reached number 10 on the UK Singles Chart.
1978 – non-album single by |
https://en.wikipedia.org/wiki/SnRNP | snRNPs (pronounced "snurps"), or small nuclear ribonucleoproteins, are RNA-protein complexes that combine with unmodified pre-mRNA and various other proteins to form a spliceosome, a large RNA-protein molecular complex upon which splicing of pre-mRNA occurs. The action of snRNPs is essential to the removal of introns from pre-mRNA, a critical aspect of post-transcriptional modification of RNA, occurring only in the nucleus of eukaryotic cells.
Additionally, U7 snRNP is not involved in splicing at all; rather, it is responsible for processing the 3′ stem-loop of histone pre-mRNA.
The two essential components of snRNPs are protein molecules and RNA. The RNA found within each snRNP particle is known as small nuclear RNA, or snRNA, and is usually about 150 nucleotides in length. The snRNA component of the snRNP gives specificity to individual introns by "recognizing" the sequences of critical splicing signals at the 5' and 3' ends and branch site of introns. The snRNA in snRNPs is similar to ribosomal RNA in that it plays both an enzymatic and a structural role.
SnRNPs were discovered by Michael R. Lerner and Joan A. Steitz.
Thomas R. Cech and Sidney Altman also played a role in the field, winning the Nobel Prize in Chemistry in 1989 for their independent discoveries that RNA can act as a catalyst.
Types
At least five different kinds of snRNPs join the spliceosome to participate in splicing. They can be visualized by gel electrophoresis and are known individually as: U1, U2, U4, U5, and U6. Their snRNA components are known, respectively, as: U1 snRNA, U2 snRNA, U4 snRNA, U5 snRNA, and U6 snRNA.
In the mid-1990s, it was discovered that a variant class of snRNPs exists to help in the splicing of a class of introns found only in metazoans, with highly conserved 5' splice sites and branch sites. This variant class of snRNPs includes: U11 snRNA, U12 snRNA, U4atac snRNA, and U6atac snRNA. While different, they perform the same fu |
https://en.wikipedia.org/wiki/Model%20for%20Prediction%20Across%20Scales | The Model for Prediction Across Scales (MPAS) is Earth system modeling software that integrates atmospheric, oceanographic, and cryospheric modeling across scales from regional to planetary. It includes climate and weather modeling capabilities that were first used by researchers in 2013. The atmospheric models were created by the Earth System Laboratory at the National Center for Atmospheric Research, and the oceanographic models were created by the Climate, Ocean, and Sea Ice Modeling Group at Los Alamos National Laboratory. The software has been used to model real-time weather as well as seasonal forecasting of convection, tornadoes, and tropical cyclones. The atmospheric modeling component can be used alongside other atmospheric modeling software, including the Weather Research and Forecasting Model, the Global Forecast System, and the Community Earth System Model.
See also
Tropical cyclone forecast model
Wind wave model
Global circulation model |
https://en.wikipedia.org/wiki/Berthold%20Leibinger%20Innovationspreis |
The Berthold Leibinger Innovationspreis is an award for innovations in the application or generation of laser light. It is open to participants worldwide and is awarded biennially by the German non-profit foundation Berthold Leibinger Stiftung. Three prizes worth 100,000 euros are awarded. The prize winners are selected from eight finalists who present their work in person in a jury session; the jury is composed of international experts from different fields.
Recipients
2000
First Prize: Josef Schneider, MAN Roland Druckmaschinen AG, „Laser and digitally changed printing systems“
Second Prize: Martin Grabherr, ULM photonics GmbH, „VCSEL – vertical-cavity surface-emitting high-power laser diode“
Third Prize: Lu Yong Feng, National University of Singapore, „Laser micro-processing in industry“
2002
First Prize: Work Group Disk Laser, Universität Stuttgart, „Disk laser“
Second Prize: Tibor Juhasz and Ronald Kurtz, IntraLase Inc., „Femtosecond laser scalpel for corneal surgery“
Third Prize: Stefan Hell, Marcus Dyba and Alexander Egner, Max Planck Institute for Biophysical Chemistry, „Optical nanoscopy with ultrashort pulse laser and stimulated emission“
2004
First Prize: Ursula Keller, ETH Zurich, „SESAM – Semiconductor Saturable Absorber Mirror for ultrafast lasers“
Second Prize: Andreas Tünnermann, Stefan Nolte and Holger Zellmer, Friedrich Schiller University Jena / Fraunhofer Institute for Applied Optics and Precision Engineering, „High-power fiber lasers and their applications“
Third Prize: Axel Rolle, Specialized Hospital Coswig, Saxony, „Lung parenchymal laser surgery“
2006
First Prize: Karin Schütze and Raimund Schütze, P.A.L.M. Microlaser Technologies GmbH, a Company of the Carl Zeiss MicroImaging GmbH, „Laser micro beam and laser catapult for single cell capture“
Second Prize: Ian A. Walmsley, University of Oxford, „Metho |
https://en.wikipedia.org/wiki/GeForce | GeForce is a brand of graphics processing units (GPUs) designed by Nvidia. As of the GeForce 40 series, there have been eighteen iterations of the design. The first GeForce products were discrete GPUs designed for add-on graphics boards, intended for the high-margin PC gaming market, and later diversification of the product line covered all tiers of the PC graphics market, ranging from cost-sensitive GPUs integrated on motherboards, to mainstream add-in retail boards. Most recently, GeForce technology has been introduced into Nvidia's line of embedded application processors, designed for electronic handhelds and mobile handsets.
With respect to discrete GPUs, found in add-in graphics boards, Nvidia's GeForce and AMD's Radeon GPUs are the only remaining competitors in the high-end market. GeForce GPUs are dominant in the general-purpose GPU computing (GPGPU) market thanks to their proprietary CUDA architecture. GPGPU is expected to expand GPU functionality beyond the traditional rasterization of 3D graphics, turning the GPU into a high-performance computing device able to execute arbitrary program code in the same way a CPU does, but with different strengths (highly parallel execution of straightforward calculations) and weaknesses (worse performance for complex branching code).
Name origin
The "GeForce" name originated from a contest held by Nvidia in early 1999 called "Name That Chip", in which the company called on the public to name the successor to the RIVA TNT2 line of graphics boards. Over 12,000 entries were received, and seven winners received a RIVA TNT2 Ultra graphics card as a reward. Brian Burke, senior PR manager at Nvidia, told Maximum PC in 2002 that "GeForce" originally stood for "Geometry Force", since the GeForce 256 was the first GPU for personal computers to perform transform-and-lighting calculations, offloading that work from the CPU.
Graphics processor generations
GeForce 256
GeForce 2 series
Launched in April 2000, the firs |