https://en.wikipedia.org/wiki/Multi-Object%20Spectrometer
A multi-object spectrometer is a type of optical spectrometer capable of simultaneously acquiring the spectra of multiple separate objects in its field of view. It is used in astronomical spectroscopy and is related to long-slit spectroscopy. This technique became available in the 1980s. Description The term multi-object spectrograph is commonly used for spectrographs using a bundle of fibers to image part of the field. The entrance of the fibers is at the focal plane of the imaging instrument. The bundle is then reshaped; the individual fibers are aligned at the entrance slit of a spectrometer, dispersing the light on a detector. This technique is closely related to integral field spectrography (IFS), more specifically to fiber-IFS. It is a form of snapshot hyperspectral imaging, itself a part of imaging spectroscopy. Apertures Typically, the apertures of multi-object spectrographs can be modified to fit the needs of the given observation. For example, the MOSFIRE (Multi-Object Spectrometer for Infra-Red Exploration) instrument on the W. M. Keck Observatory contains the Configurable Slit Unit (CSU), allowing arbitrary positioning of up to forty-six 18 cm slits by moving opposable bars. Some fiber-fed spectrographs, such as the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST), can move their fibers to the desired positions. LAMOST moves its 4000 fibers separately within designated areas to meet the requirements of a measurement, and can correct positioning errors in real time. The James Webb Space Telescope uses a fixed Micro-Shutter Assembly (MSA), an array of nearly 250,000 5.1 mm by 11.7 mm shutters that can be independently opened or closed to change the location of the open slits on the device. Uses in telescopes Ground-based instruments Instruments with multi-object spectrometry capabilities are available on most 8-10 meter-class ground-based observatories. For example, the Large Binocular Telescope, W. M. Keck Observatory, Gran Telescopio
https://en.wikipedia.org/wiki/AES11
The AES11 standard published by the Audio Engineering Society provides a systematic approach to the synchronization of digital audio signals. AES11 recommends using an AES3 signal to distribute audio clocks within a facility. In this application, the connection is referred to as a Digital Audio Reference Signal (DARS). Further recommendations are made concerning the accuracy of sample clocks as embodied in the interface signal and the use of this format as a convenient synchronization reference where signals must be rendered co-timed for digital processing. Synchronism is defined, and limits are given which take account of relevant timing uncertainties encountered in an audio studio. Related developments AES11 Annex D (in the November 2005 or later printing or version) shows an example method to provide isochronous timing relationships for distributed AES3 structures over asynchronous networks such as AES47 where reference signals may be locked to common timing sources such as GPS. In addition, the Audio Engineering Society has now published a related standard called AES53, that specifies how the timing markers already specified in AES47 may be used to associate an absolute time-stamp with individual audio samples. This may be closely associated with AES11 and used to provide a way of aligning streams from disparate sources, including synchronizing audio to video in networked structures. The media profile defined in annex A of AES67 provides a means of using AES11 synchronization via the Precision Time Protocol.
https://en.wikipedia.org/wiki/List%20of%20Nvidia%20graphics%20processing%20units
This list contains general information about graphics processing units (GPUs) and video cards from Nvidia, based on official specifications. In addition, some Nvidia motherboards come with integrated onboard GPUs. Limited/Special/Collectors' Editions or AIB versions are not included. Field explanations The fields in the table listed below describe the following: Model – The marketing name for the processor, assigned by Nvidia. Launch – Date of release for the processor. Code name – The internal engineering codename for the processor (typically designated by an NVXY name and later GXY, where X is the series number and Y is the schedule of the project for that generation). Fab – Fabrication process. Average feature size of components of the processor. Bus interface – Bus by which the graphics processor is attached to the system (typically an expansion slot, such as PCI, AGP, or PCI Express). Memory – The amount of graphics memory available to the processor. SM count – Number of streaming multiprocessors. Core clock – The factory core clock frequency; while some manufacturers adjust clocks lower and higher, this number is always the reference clock used by Nvidia. Memory clock – The factory effective memory clock frequency (while some manufacturers adjust clocks lower and higher, this number is always the reference clock used by Nvidia). All DDR/GDDR memories operate at half this frequency, except for GDDR5, which operates at one quarter of this frequency. Core config – The layout of the graphics pipeline, in terms of functional units. Over time the number, type, and variety of functional units in the GPU core have changed significantly; before each section in the list there is an explanation as to what functional units are present in each generation of processors. In later models, shaders are integrated into a unified shader architecture, where any one shader can perform any of the functions listed. Fillrate – Maximum theoretical fill rate in
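The effective-versus-actual memory clock convention described above can be sketched in a few lines (a hypothetical helper; the function name and example values are ours, not Nvidia's):

```python
def real_memory_clock(effective_mhz: float, memory_type: str) -> float:
    """Convert an effective (marketing) memory clock to the actual signaling clock.

    Per the convention in the table: DDR/GDDR memories operate at half the
    effective frequency, except GDDR5, which operates at one quarter of it.
    """
    divisor = 4 if memory_type.upper() == "GDDR5" else 2
    return effective_mhz / divisor

# e.g. a card quoting a 7000 MHz effective GDDR5 clock actually signals at 1750 MHz
print(real_memory_clock(7000, "GDDR5"))  # 1750.0
print(real_memory_clock(2000, "GDDR3"))  # 1000.0
```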
https://en.wikipedia.org/wiki/Lattice%20constant
A lattice constant or lattice parameter is one of the physical dimensions and angles that determine the geometry of the unit cells in a crystal lattice, and is proportional to the distance between atoms in the crystal. A simple cubic crystal has only one lattice constant, the distance between atoms, but in general lattices in three dimensions have six lattice constants: the lengths a, b, and c of the three cell edges meeting at a vertex, and the angles α, β, and γ between those edges. The crystal lattice parameters a, b, and c have the dimension of length. The three numbers represent the size of the unit cell, that is, the distance from a given atom to an identical atom in the same position and orientation in a neighboring cell (except for very simple crystal structures, this will not necessarily be the distance to the nearest neighbor). Their SI unit is the meter, but they are traditionally specified in angstroms (Å), an angstrom being 0.1 nanometre (nm), or 100 picometres (pm). Typical values start at a few angstroms. The angles α, β, and γ are usually specified in degrees. Introduction A chemical substance in the solid state may form crystals in which the atoms, molecules, or ions are arranged in space according to one of a small finite number of possible crystal systems (lattice types), each with a fairly well-defined set of lattice parameters that are characteristic of the substance. These parameters typically depend on the temperature, pressure (or, more generally, the local state of mechanical stress within the crystal), electric and magnetic fields, and the isotopic composition. The lattice is usually distorted near impurities, crystal defects, and the crystal's surface. Parameter values quoted in manuals should therefore specify those environmental conditions, and are usually averages affected by measurement errors. Depending on the crystal system, some or all of the lengths may be equal, and some of the angles may have fixed values. In those systems, only some of t
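The six lattice constants fully determine the unit-cell geometry. As a quick check, the standard triclinic cell-volume formula (implied by the six parameters, though not quoted in the excerpt above) can be coded directly:

```python
import math

def unit_cell_volume(a, b, c, alpha, beta, gamma):
    """Volume of a general (triclinic) unit cell.

    a, b, c are edge lengths (e.g. in angstroms); alpha, beta, gamma are the
    inter-edge angles in degrees. Uses the standard formula
    V = abc * sqrt(1 - cos^2(α) - cos^2(β) - cos^2(γ) + 2 cos(α) cos(β) cos(γ)).
    """
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca*ca - cb*cb - cg*cg + 2*ca*cb*cg)

# A cubic cell (a = b = c, all angles 90°) reduces to a**3, e.g. silicon's a ≈ 5.431 Å:
print(unit_cell_volume(5.431, 5.431, 5.431, 90, 90, 90))  # ≈ 160.2 (Å³)
```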
https://en.wikipedia.org/wiki/Internal%20iliac%20artery
The internal iliac artery (formerly known as the hypogastric artery) is the main artery of the pelvis. Structure The internal iliac artery supplies the walls and viscera of the pelvis, the buttock, the reproductive organs, and the medial compartment of the thigh. The vesical branches of the internal iliac arteries supply the bladder. It is a short, thick vessel, smaller than the external iliac artery, and about 3 to 4 cm in length. Course The internal iliac artery arises at the bifurcation of the common iliac artery, opposite the lumbosacral articulation, and, passing downward to the upper margin of the greater sciatic foramen, divides into two large trunks, an anterior and a posterior. It is posterior to the ureter, anterior to the internal iliac vein, anterior to the lumbosacral trunk, and anterior to the piriformis muscle. Near its origin, it is medial to the external iliac vein, which lies between it and the psoas major muscle. It is above the obturator nerve. Branches The arrangement of branches of the internal iliac artery is extremely variable. Typically, the artery divides into an anterior division and a posterior division, with the posterior division giving rise to the superior gluteal, iliolumbar, and lateral sacral arteries. The rest usually arise from the anterior division. Because it is variable, an artery may not be a direct branch, but instead might arise off a direct branch. In recent years, the development of techniques such as prostate artery embolisation and angiography has led to an increased understanding of the prostate's vascularisation. Regarding the arterial supply, M. de Assis et al. have suggested an anatomic classification for the origin of the inferior vesical artery. The following are the branches of the internal iliac artery: Anastomoses In individuals assigned female at birth, the ovarian artery (a branch of the abdominal aorta) and uterine arteries form an anastomosis. Fetal structure In the fetus, the internal iliac artery is twice as la
https://en.wikipedia.org/wiki/ALTRAN
ALTRAN (ALgebraic TRANslator) is a programming language for the formal manipulation of rational functions of several variables with integer coefficients. It was developed at Bell Labs in the 1960s. ALTRAN is a FORTRAN version of the ALPAK rational algebra package, and “can be thought of as a variant of FORTRAN with the addition of an extra declaration, the ‘algebraic’ type declaration.” Although ALTRAN is written in ANSI FORTRAN, differences nevertheless exist among FORTRAN implementations; ALTRAN handles these machine dependencies through the use of a macro processor called M6. ALTRAN should not be confused with the ALGOL-to-FORTRAN translator, called Altran, that "converts Extended Algol programs into Fortran IV." History ALPAK, written in 1964, originally consisted of a set of subroutines for FORTRAN written in assembly language. These subroutines were themselves rewritten in FORTRAN for ALTRAN. An early version of ALTRAN was developed by M. Douglas McIlroy and W. Stanley Brown in the mid-1960s. However, soon after the completion of their ALTRAN translator, the IBM 7094 computers on which ALPAK and ALTRAN relied began to be phased out in favor of newer machines. This led to the development of a more advanced ALTRAN language and implementation by Brown, Andrew D. Hall, Stephen C. Johnson, Dennis M. Ritchie, and Stuart I. Feldman, which was highly portable. The translator was implemented by Ritchie, the interpreter by Hall, the run-time rational function and polynomial routines by Feldman, Hall, and Johnson, and the I/O routines by Johnson. Later, Feldman and Julia Ho added a rational expression evaluation package that generated accurate and efficient FORTRAN subroutines for the numerical evaluation of symbolic expressions produced by ALTRAN. In 1979, ALTRAN was ported to the Control Data Corporation 6600 and Cyber 176 computers at the Air Force Weapons Laboratory. They found that "ALTRAN is about 15 times faster than FORMAC in a PL/I environment, and
https://en.wikipedia.org/wiki/British%20Mathematical%20Olympiad
The British Mathematical Olympiad (BMO) forms part of the selection process for the UK International Mathematical Olympiad team and for other international maths competitions, including the European Girls' Mathematical Olympiad, the Romanian Master of Mathematics and Sciences, and the Balkan Mathematical Olympiad. It is organised by the British Mathematical Olympiad Subtrust, which is part of the United Kingdom Mathematics Trust. There are two rounds, the BMO1 and the BMO2. BMO Round 1 The first round of the BMO is held in November each year, and since 2006 has been an open-entry competition. Qualification for BMO Round 1 is through the Senior Mathematical Challenge. Students who do not qualify through the Senior Mathematical Challenge may be entered at the discretion of their school for a fee of £40. The paper lasts 3½ hours and consists of six questions (from 2005), each worth 10 marks. The exam in the 2020-2021 cycle was adjusted to consist of two sections: a first section with 4 questions each worth 5 marks (only answers required), and a second section with 3 questions each worth 10 marks (full solutions required). The duration of the exam was reduced to 2½ hours, owing to the difficulties of holding a 3½-hour exam under COVID-19. Candidates are required to write full proofs to the questions. An answer is marked on either a "0+" or a "10-" mark scheme, depending on whether the answer looks generally complete or not. An answer judged incomplete or unfinished is usually capped at 3 or 4 marks, whereas for an answer judged complete, marks may be deducted for minor errors or poor reasoning but it is likely to score 7 or more. As a result, it is uncommon for an answer to score a middling mark between 4 and 6. While around 1000 students gain automatic qualification to sit the BMO1 paper each year, the additional discretionary and international entries mean that since 2016, on average, around 1600 candidates have been entered for BMO1 each year. Although
https://en.wikipedia.org/wiki/Bacterial%20translation
Bacterial translation is the process by which messenger RNA is translated into proteins in bacteria. Initiation Initiation of translation in bacteria involves the assembly of the components of the translation system, which are: the two ribosomal subunits (50S and 30S); the mature mRNA to be translated; the tRNA charged with N-formylmethionine (the first amino acid in the nascent peptide); guanosine triphosphate (GTP) as a source of energy; and the three prokaryotic initiation factors IF1, IF2, and IF3, which help the assembly of the initiation complex. Variations in the mechanism can be anticipated. The ribosome has three active sites: the A site, the P site, and the E site. The A site is the point of entry for the aminoacyl-tRNA (except for the first aminoacyl-tRNA, which enters at the P site). The P site is where the peptidyl-tRNA is formed in the ribosome. The E site is the exit site of the now-uncharged tRNA after it has given its amino acid to the growing peptide chain. The selection of an initiation site (usually an AUG codon) depends on the interaction between the 30S subunit and the mRNA template. The 30S subunit binds to the mRNA template at a purine-rich region (the Shine-Dalgarno sequence) upstream of the AUG initiation codon. The Shine-Dalgarno sequence is complementary to a pyrimidine-rich region on the 16S rRNA component of the 30S subunit. This sequence has been evolutionarily conserved and plays a major role in the microbial world we know today. During the formation of the initiation complex, these complementary nucleotide sequences pair to form a double-stranded RNA structure that binds the mRNA to the ribosome in such a way that the initiation codon is placed at the P site. Well-known coding regions that do not have AUG initiation codons are those of lacI (GUG) and lacA (UUG) in the E. coli lac operon. Two studies have independently shown that 17 or more non-AUG start codons may initiate translation in E. coli. There are three m
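As a rough illustration of Shine-Dalgarno-based start-site selection, one can scan an mRNA for an AUG codon preceded by a purine-rich motif. The AGGAGG consensus and the 15-nucleotide upstream window below are common textbook values, not taken from the text above, and the exact-match search is a deliberate simplification:

```python
def candidate_starts(mrna: str, sd: str = "AGGAGG") -> list:
    """Positions (0-based) of AUG codons that have a Shine-Dalgarno-like
    motif within the ~15 nt immediately upstream. Real initiation tolerates
    partial matches and variable spacing; this exact match is illustrative.
    """
    positions = []
    for i in range(len(mrna) - 2):
        if mrna[i:i+3] == "AUG" and sd in mrna[max(0, i - 15):i]:
            positions.append(i)
    return positions

# An AUG at position 17 preceded by an AGGAGG motif qualifies:
print(candidate_starts("GGCUAAGGAGGUUAACCAUGGCUAAA"))  # [17]
```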
https://en.wikipedia.org/wiki/Eukaryotic%20translation
Eukaryotic translation is the biological process by which messenger RNA is translated into proteins in eukaryotes. It consists of four phases: initiation, elongation, termination, and ribosome recycling. Initiation Translation initiation is the process by which the ribosome and its associated factors bind to an mRNA and are assembled at the start codon. This process is defined as either cap-dependent, in which the ribosome binds initially at the 5' cap and then scans along the transcript to the start codon, or cap-independent, in which the ribosome does not initially bind the 5' cap. Cap-dependent initiation Initiation of translation usually involves the interaction of certain key proteins, the initiation factors, with a special tag bound to the 5'-end of an mRNA molecule, the 5' cap, as well as with the 5' UTR. These proteins bind the small (40S) ribosomal subunit and hold the mRNA in place. eIF3 is associated with the 40S ribosomal subunit and plays a role in keeping the large (60S) ribosomal subunit from prematurely binding. eIF3 also interacts with the eIF4F complex, which consists of three other initiation factors: eIF4A, eIF4E, and eIF4G. eIF4G is a scaffolding protein that directly associates with both eIF3 and the other two components. eIF4E is the cap-binding protein. Binding of the cap by eIF4E is often considered the rate-limiting step of cap-dependent initiation, and the concentration of eIF4E is a regulatory nexus of translational control. Certain viruses cleave the portion of eIF4G that binds eIF4E, thus disabling cap-dependent translation and hijacking the host machinery in favor of the viral (cap-independent) messages. eIF4A is an ATP-dependent RNA helicase that aids the ribosome by resolving certain secondary structures formed along the mRNA transcript. The poly(A)-binding protein (PABP) also associates with the eIF4F complex via eIF4G, and binds the poly-A tail of most eukaryotic mRNA molecules. This protein has been implicated in playing a role in circularization of the mRNA
https://en.wikipedia.org/wiki/Initial%20mass%20function
In astronomy, the initial mass function (IMF) is an empirical function that describes the initial distribution of masses for a population of stars during star formation. The IMF not only describes the formation and evolution of individual stars; it also serves as an important link in describing the formation and evolution of galaxies. The IMF is often given as a probability density function (PDF) describing the probability that a star has a certain mass. It differs from the present-day mass function (PDMF), which describes the current distribution of stellar masses, including red giants, white dwarfs, neutron stars, and black holes, after a period of evolution away from the main sequence. The IMF is derived from the luminosity function, while the PDMF is derived from the present-day luminosity function. The IMF and PDMF can be linked through the "stellar creation function", defined as the number of stars per unit volume of space formed in a given mass range and time interval. For main-sequence stars whose lifetimes are greater than the age of the galaxy, the IMF and PDMF are equivalent. Similarly, the IMF and PDMF are equivalent for brown dwarfs, owing to their effectively unlimited lifetimes. The properties and evolution of a star are closely related to its mass, so the IMF is an important diagnostic tool for astronomers studying large quantities of stars. For example, the initial mass of a star is the primary factor determining its colour, luminosity, radius, radiation spectrum, and the quantity of material and energy it emits into interstellar space during its lifetime. At low masses, the IMF sets the Milky Way Galaxy mass budget and the number of substellar objects that form. At intermediate masses, the IMF controls chemical enrichment of the interstellar medium. At high masses, the IMF sets the number of core-collapse supernovae that occur and therefore the kinetic energy feedback. The IMF is relatively invariant from one group of stars to another, though some ob
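As a concrete, if simplified, illustration of treating the IMF as a probability density over stellar masses, here is the classic Salpeter power-law form. The slope 2.35 and the 0.1-100 solar-mass limits are standard assumptions that go beyond the excerpt above:

```python
def salpeter_fraction(m_lo: float, m_hi: float,
                      m_min: float = 0.1, m_max: float = 100.0,
                      alpha: float = 2.35) -> float:
    """Fraction of stars formed with masses in [m_lo, m_hi] (solar masses)
    for a pure power-law IMF dN/dm ∝ m**(-alpha), integrated analytically.

    The Salpeter slope 2.35 and the 0.1-100 M_sun limits are conventional
    choices, not values stated in the text above.
    """
    def integral(a, b):
        p = 1.0 - alpha  # exponent after integrating m**(-alpha)
        return (b**p - a**p) / p
    return integral(m_lo, m_hi) / integral(m_min, m_max)

# Most stars are low-mass: the fraction formed between 0.1 and 0.5 M_sun
print(round(salpeter_fraction(0.1, 0.5), 3))
```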
https://en.wikipedia.org/wiki/AV%20input
AV input stands for Audio/Visual input, which is a common label on a connector that receives (AV) audio/visual signals from electronic equipment that generates AV signals (AV output). These terminals are commonly found on such equipment as a television, DVD recorder or VHS recorder, and typically take input from a DVD player, a TV tuner, VHS recorder or camcorder. Types of plugs used for video input
Composite video: RCA connector, BNC connector, UHF connector, 1/8-inch minijack phone connector
S-video: DIN plug (also used for Apple Desktop Bus)
Component video: RCA connector, RGBHV
Digital video: HDMI (High-Definition Multimedia Interface), DVI (Digital Visual Interface), IEEE 1394 (FireWire), S/PDIF (Sony/Philips Digital Interface)
https://en.wikipedia.org/wiki/Plasma%20recombination
Plasma recombination is a process by which the positive ions of a plasma capture a free (energetic) electron and combine with electrons or negative ions to form new neutral atoms (gas). The process of recombination can be described as the reverse of ionization, whereby conditions allow the plasma to revert to a gas. Recombination is an exothermic process, meaning that the plasma releases some of its internal energy, usually in the form of heat. Except in a plasma composed of pure hydrogen (or its isotopes), there may also be multiply charged ions. Therefore, a single electron capture results in a decrease of the ion charge, but not necessarily in a neutral atom or molecule. Recombination usually takes place in the whole volume of a plasma (volume recombination), although in some cases it is confined to some special region of it. Each kind of reaction is called a recombining mode, and their individual rates are strongly affected by the properties of the plasma, such as its energy (heat), the density of each species, and the pressure and temperature of the surrounding environment. Examples An everyday example of rapid plasma recombination occurs when a fluorescent lamp is switched off. The low-density plasma in the lamp (which generates the light by bombardment of the fluorescent coating on the inside of the glass wall) recombines in a fraction of a second after the plasma-generating electric field is removed by switching off the electric power source. Hydrogen recombination modes are of vital importance in the development of divertor regions for tokamak reactors; in fact, they may provide a good way of extracting the energy produced in the core of the plasma. At the present time, it is believed that the plasma losses observed in the recombining region are most likely due to two different modes: electron-ion recombination (EIR) and molecular activated recombination (MAR).
https://en.wikipedia.org/wiki/Directional%20symmetry%20%28time%20series%29
In statistical analysis of time series and in signal processing, directional symmetry is a statistical measure of a model's performance in predicting the direction of change, positive or negative, of a time series from one time period to the next. Definition Given a time series with actual values t_1, ..., t_n and a model that makes predictions p_1, ..., p_n for those values, the directional symmetry (DS) statistic is defined as DS = (100 / (n - 1)) Σ_{i=2}^{n} d_i, where d_i = 1 if (t_i - t_{i-1})(p_i - p_{i-1}) ≥ 0, and d_i = 0 otherwise. Interpretation The DS statistic gives the percentage of occurrences in which the sign of the change in value from one time period to the next is the same for both the actual and predicted time series. The DS statistic is a measure of the performance of a model in predicting the direction of value changes. The case DS = 100 would indicate that a model perfectly predicts the direction of change of a time series from one time period to the next. See also Statistical finance
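The DS statistic can be computed in a few lines (a straightforward sketch; ties are counted as agreement, matching the usual convention that d_i = 1 when the product of the changes is non-negative):

```python
def directional_symmetry(actual, predicted):
    """Percentage of time steps at which the actual and predicted series
    move in the same direction (ties counted as agreement)."""
    n = len(actual)
    agree = sum(
        1 for i in range(1, n)
        if (actual[i] - actual[i-1]) * (predicted[i] - predicted[i-1]) >= 0
    )
    return 100.0 * agree / (n - 1)

# The model below gets 3 of the 4 direction changes right:
actual    = [1.0, 2.0, 1.5, 1.8, 1.2]
predicted = [1.1, 1.9, 1.7, 1.6, 1.0]
print(directional_symmetry(actual, predicted))  # 75.0
```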
https://en.wikipedia.org/wiki/Fission%E2%80%93fusion%20society
In ethology, a fission–fusion society is one in which the size and composition of the social group change as time passes and animals move throughout the environment; animals merge into a group (fusion)—e.g. sleeping in one place—or split (fission)—e.g. foraging in small groups during the day. For species that live in fission–fusion societies, group composition is a dynamic property. The change in composition, subgroup size, and dispersion of different groups are the three main elements of a fission–fusion society. This social organization is found in several primates, elephants, cetaceans, ungulates, social carnivores, some birds and some fish. Species Fission–fusion societies occur among many different species of primates (e.g. chimpanzees, orangutans, and humans), elephants (e.g. forest elephants, African elephants), and bats (e.g. northern long-eared bats). Primates Chimpanzees Chimpanzees often form smaller subgroups when travelling for longer periods between food patches. When obtaining food, the size of subgroups can change depending on how much food is available and how far away the food may be. If food is worth retrieving because travel costs are low, subgroup size will enlarge. So among chimpanzees, the abundance of food and how dense it may be are factors that contribute to changes in subgroup size. Orangutans Orangutans are one primate species that exhibits individual-based fission–fusion. Travel parties have been observed in this species, specifically among orangutans inhabiting a Sumatran forest, and they carry several benefits. Mating opportunities are a large benefit of grouping, as parties are most substantial during periods of high mating activity. Infant socialization also carries benefits as well as costs, since infants need to be cared for. Females are required to carry their infants, and those with infants of mid-size experience grea
https://en.wikipedia.org/wiki/Noise-equivalent%20flux%20density
In optics, the noise-equivalent flux density (NEFD) or noise-equivalent irradiance (NEI) of a system is the level of flux density required to be equivalent to the noise present in the system. It is a measure used by astronomers in determining the accuracy of observations. The NEFD can be related to a light detector's noise-equivalent power (NEP) for a collection area A and a photon bandwidth Δν by NEFD = κ NEP / (A Δν), where the factor κ (often 2, in the case of switching between measuring a source and measuring off-source) accounts for the photon statistics for the mode of operation.
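A minimal numerical sketch, assuming the NEFD is the mode-of-operation factor times the NEP divided by the collection area and photon bandwidth (our reading of the relation; units are whatever the caller supplies):

```python
def nefd(nep: float, area: float, bandwidth: float, factor: float = 2.0) -> float:
    """Noise-equivalent flux density from a detector's noise-equivalent
    power (NEP), collecting area, and photon bandwidth.

    The default factor of 2 corresponds to switching between on-source
    and off-source measurements, as described in the text. Feeding NEP in
    W, area in m^2 and bandwidth in Hz yields W m^-2 Hz^-1.
    """
    return factor * nep / (area * bandwidth)

print(nefd(nep=1e-16, area=10.0, bandwidth=1e9))  # ≈ 2e-26 W m^-2 Hz^-1
```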
https://en.wikipedia.org/wiki/Steve%20Jackson%20%28mathematician%29
Steve Jackson (full name: Stephen Craig Jackson) is an American set theorist at the University of North Texas. Much of his most notable work has involved the descriptive set-theoretic consequences of the axiom of determinacy. In particular, he is known for having calculated the values of all the projective ordinals (the suprema of the lengths of all prewellorderings of the real numbers at a particular level in the projective hierarchy) under the assumption that the axiom of determinacy holds. In recent years he has also made contributions to the theory of Borel equivalence relations. With Dan Mauldin he solved the Steinhaus lattice problem. Jackson earned his PhD in 1983 at UCLA under the direction of Donald A. Martin, with a dissertation titled A Calculation of δ15. In it, he calculated the value of the projective ordinal δ15 under the axiom of determinacy, thereby solving the first Victoria Delfino problem, one of the notorious problems of the combinatorics of the axiom of determinacy.
https://en.wikipedia.org/wiki/PS/2%20port
The PS/2 port is a 6-pin mini-DIN connector used for connecting keyboards and mice to a PC compatible computer system. Its name comes from the IBM Personal System/2 series of personal computers, with which it was introduced in 1987. The PS/2 mouse connector generally replaced the older DE-9 RS-232 "serial mouse" connector, while the PS/2 keyboard connector replaced the larger 5-pin/180° DIN connector used in the IBM PC/AT design. The PS/2 keyboard port is electrically and logically identical to the IBM AT keyboard port, differing only in the type of electrical connector used. The PS/2 platform introduced a second port with the same design as the keyboard port for connecting a mouse; thus the PS/2-style keyboard and mouse interfaces are electrically similar and employ the same communication protocol. However, unlike the otherwise similar Apple Desktop Bus connector used by Apple, a given system's keyboard and mouse ports may not be interchangeable, since the two devices use different sets of commands and the device drivers generally are hard-coded to communicate with each device at the address of the port that is conventionally assigned to that device. (That is, keyboard drivers are written to use the first port, and mouse drivers are written to use the second port.) Communication protocol Each port implements a bidirectional synchronous serial channel. The channel is slightly asymmetrical: it favors transmission from the input device to the computer, which is the majority case. The bidirectional IBM AT and PS/2 keyboard interface is a development of the unidirectional IBM PC keyboard interface, using the same signal lines but adding the capability to send data back to the keyboard from the computer; this explains the asymmetry. The interface has two main signal lines, Data and Clock. These are single-ended signals driven by open-collector drivers at each end. Normally, the transmission is from the device to the host. To transmit a byte, the device simply outputs a
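Each byte on this serial channel travels in an 11-bit frame: a start bit (0), eight data bits least-significant bit first, an odd parity bit, and a stop bit (1). That framing is the standard PS/2 format, though it is not spelled out in the paragraph above; the sketch below only builds the bit sequence, ignoring the clocking:

```python
def ps2_frame(byte: int) -> list:
    """Bits of a PS/2 serial frame for one byte, in transmission order:
    start bit (0), eight data bits LSB first, odd parity bit, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]  # LSB first
    parity = 1 - (sum(data) % 2)                # odd parity: total 1s is odd
    return [0] + data + [parity, 1]

# 0xF4 is the "enable data reporting" command a host sends to a PS/2 mouse:
print(ps2_frame(0xF4))  # [0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1]
```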
https://en.wikipedia.org/wiki/PVCS
PVCS Version Manager (originally named Polytron Version Control System) is a software package by Serena Software Inc. for version control of source code files. PVCS follows the "locking" approach to concurrency control; it has no merge operator built in (but does, nonetheless, have a separate merge command). However, PVCS can also be configured to support several users simultaneously attempting to edit the same file; in this case the second chronological committer will have a branch created for them, so that both modifications, instead of conflicting, appear as parallel histories for the same file. This is unlike Concurrent Versions System (CVS) and Subversion, where the second committer needs to first merge in the changes via the update command and then resolve conflicts (when they exist) before actually committing. Originally developed by Don Kinzer and published by Polytron in 1985, through a history of acquisitions and mergers the product was at times owned by Sage Software of Rockville, Maryland (1989) (unrelated to Sage Software of the UK), Intersolv (1992), Micro Focus International (1998) and Merant PLC (2001). The latter was acquired by Serena Software in 2004, which was then acquired by Silver Lake Partners in 2006. Synergex ported both the PVCS Version Manager and the PVCS Configuration Builder (an extended make utility, including a variant of the command line tool make) to various Unix platforms and OpenVMS. In 2009, Serena Software clarified that it will continue to invest in PVCS and provide support to PVCS customers for the foreseeable future. The PVCS Version Manager 8.5 release (2014) introduced both new features and new platform support. In 2016, Micro Focus International announced the acquisition of Serena Software, to again become the custodian of PVCS. See also List of version control software
https://en.wikipedia.org/wiki/Optical%20music%20recognition
Optical music recognition (OMR) is a field of research that investigates how to computationally read musical notation in documents. The goal of OMR is to teach the computer to read and interpret sheet music and produce a machine-readable version of the written music score. Once captured digitally, the music can be saved in commonly used file formats, e.g. MIDI (for playback) and MusicXML (for page layout). In the past it has, misleadingly, also been called "music optical character recognition". Due to significant differences, this term should no longer be used. History Optical music recognition of printed sheet music started in the late 1960s at the Massachusetts Institute of Technology when the first image scanners became affordable for research institutes. Due to the limited memory of early computers, the first attempts were limited to only a few measures of music. In 1984, a Japanese research group from Waseda University developed a specialized robot, called WABOT (WAseda roBOT), which was capable of reading the music sheet in front of it and accompanying a singer on an electric organ. Early research in OMR was conducted by Ichiro Fujinaga, Nicholas Carter, Kia Ng, David Bainbridge, and Tim Bell. These researchers developed many of the techniques that are still being used today. The first commercial OMR application, MIDISCAN (now SmartScore), was released in 1991 by Musitek Corporation. The availability of smartphones with good cameras and sufficient computational power paved the way for mobile solutions where the user takes a picture with the smartphone and the device directly processes the image. Relation to other fields Optical music recognition relates to other fields of research, including computer vision, document analysis, and music information retrieval. It is relevant for practicing musicians and composers that could use OMR systems as a means to enter music into the computer and thus ease the process of composing, transcribing, and editing music.
https://en.wikipedia.org/wiki/Farber%20disease
Farber disease (also known as Farber's lipogranulomatosis, acid ceramidase deficiency, "Lipogranulomatosis", and ASAH1-related disorders) is an extremely rare, progressive, autosomal recessive lysosomal storage disease caused by a deficiency of the acid ceramidase enzyme. Acid ceramidase is responsible for breaking down ceramide into sphingosine and fatty acid. When the enzyme is deficient, this leads to an accumulation of fatty material (called ceramide) in the lysosomes of the cells, leading to the signs and symptoms of this disorder. Signs and symptoms The symptoms of Farber disease develop over time. The onset of symptoms and how quickly they progress vary from person to person. The most common symptoms include: Bumps under the skin located at pressure points and joints, also called subcutaneous nodules, lipogranulomas, or granulomas Swollen, painful joints with progressive limitation of range of motion resulting in contracture Hoarse voice/cry Other symptoms observed in some individuals with Farber disease include: Respiratory disease, e.g. lung infections, labored breathing, respiratory distress Central nervous system disease, e.g. developmental delay, muscle weakness, seizures Systemic inflammation Failure to thrive Bone disease, e.g. erosion of bone near joints, osteoporosis, peripheral osteolysis Enlarged liver (hepatomegaly) Eye disease, e.g. cherry-red spot, corneal opacities Genetics Farber disease is caused by variants in the ASAH1 gene. This gene codes for the acid ceramidase enzyme. Individuals with Farber disease have two copies of this gene that are not functioning properly, leading to the enzyme deficiency. Over 73 different gene variants have been reported to cause Farber disease. No definitive genotype-phenotype correlations are known. Farber disease is inherited in an autosomal recessive manner. Affected individuals inherit one copy of the gene that is not functioning properly from each parent. Each parent is called a carrier
https://en.wikipedia.org/wiki/Transporter%20Classification%20Database
The Transporter Classification Database (or TCDB) is an International Union of Biochemistry and Molecular Biology (IUBMB)-approved classification system for membrane transport proteins, including ion channels. Classification The upper level of classification and a few examples of proteins with known 3D structure: 1. Channels and pores 1.A α-type channels 1.A.1 Voltage-gated ion channel superfamily 1.A.2 Inward-rectifier K+ channel family 1.A.3 Ryanodine-inositol-1,4,5-trisphosphate receptor Ca2+ channel family 1.A.4 Transient receptor potential Ca2+ channel family 1.A.5 Polycystin cation channel family 1.A.6 Epithelial Na+ channel family 1.A.7 ATP-gated P2X receptor cation channel family 1.A.8 Major intrinsic protein superfamily 1.A.9 Neurotransmitter receptor, Cys loop, ligand-gated ion channel family 1.A.10 Glutamate-gated ion channel family of neurotransmitter receptors 1.A.11 Ammonium channel transporter family 1.A.12 Intracellular chloride channel family 1.A.13 Epithelial chloride channel family 1.A.14 Testis-enhanced gene transfer family 1.A.15 Nonselective cation channel-2 family 1.A.16 Formate-nitrite transporter family 1.A.17 Calcium-dependent chloride channel family 1.A.18 Chloroplast envelope anion-channel-forming Tic110 family 1.A.19 Type A influenza virus matrix-2 channel family 1.A.20 BCL2/Adenovirus E1B-interacting protein 3 family 1.A.21 Bcl-2 family 1.A.22 Large-conductance mechanosensitive ion channel 1.A.23 Small-conductance mechanosensitive ion channel 1.A.24 Gap-junction-forming connexin family 1.A.25 Gap-junction-forming innexin family 1.A.26 Mg2+ transporter-E family 1.A.27 Phospholemman family 1.A.28 Urea transporter family 1.A.29 Urea/amide channel family 1.A.30 H+- or Na+-translocating bacterial MotAB flagellar motor/ExbBD outer-membrane transport energizer superfamily 1.A.31 Annexin family 1.A.32 Type B influenza virus NB channel family 1.A.33 Cation-channel-forming heat shock protein 70 family 1.A.34 B
https://en.wikipedia.org/wiki/Border%20town
A border town is a town or city close to the boundary between two countries, states, or regions. Usually the term implies that the nearness to the border is one of the things the place is most famous for. With close proximities to a different country, diverse cultural traditions can have certain influence to the place. Border towns can have highly cosmopolitan communities, a feature they share with port cities, as traveling and trading often go through the town. They can also be flashpoints for international conflicts, especially when the two countries have territorial disputes. List of border towns and cities Transcontinental Asia/Africa El-Qantarah el-Sharqiyya, Egypt Asia/Europe Istanbul, Turkey Atyrau, Kazakhstan Oral, Kazakhstan Magnitogorsk, Russia In Africa Acoacán, Equatorial Guinea Adré, Chad Aflao, Ghana Afoji, Uganda Ahfir, Morocco Alexander Bay, South Africa Andéramboukane, Mali Ariamsvlei, Namibia Assamakka, Niger Badme, Eritrea Bahaï, Chad Bakel, Senegal Bambouti, Central African Republic and Source Yubu, South Sudan Bang, Central African Republic Bangassou, Central African Republic Bangui, Central African Republic and Zongo, Democratic Republic of the Congo Bariguna, South Sudan Beitbridge, Zimbabwe Benena, Mali Beni Ensar, Morocco Béni Ounif, Algeria and Figuig, Morocco Bétou, Republic of the Congo Blangoua, Cameroon Bokspits, Botswana Bolobo, Democratic Republic of the Congo Boma, Democratic Republic of the Congo Bomandjokou, Central African Republic Bomassa, Republic of the Congo Bongor, Chad Bosso, Niger Bray, South Africa and Bray, Botswana Bugarama, Democratic Republic of the Congo Bukavu, Democratic Republic of the Congo and Cyangugu, Rwanda Bulok, Gambia Bunangana, Democratic Republic of the Congo Buruntuma, Guinea Bissau Busia, Kenya and Busia, Uganda Ceuta, Spain and Belyounech, Morocco Chembe, Zambia Chicualacuala, Mozambique Cinkassé, Togo Cocobeach, Gabon Damasak, Nigeria Débété, Ivory
https://en.wikipedia.org/wiki/Quicksort
Quicksort is an efficient, general-purpose sorting algorithm. Quicksort was developed by British computer scientist Tony Hoare in 1959 and published in 1961. It is still a commonly used algorithm for sorting. Overall, it is slightly faster than merge sort and heapsort for randomized data, particularly on larger distributions. Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. For this reason, it is sometimes called partition-exchange sort. The sub-arrays are then sorted recursively. This can be done in-place, requiring small additional amounts of memory to perform the sorting. Quicksort is a comparison sort, meaning that it can sort items of any type for which a "less-than" relation (formally, a total order) is defined. Most implementations of quicksort are not stable, meaning that the relative order of equal sort items is not preserved. Mathematical analysis of quicksort shows that, on average, the algorithm takes O(n log n) comparisons to sort n items. In the worst case, it makes O(n²) comparisons. History The quicksort algorithm was developed in 1959 by Tony Hoare while he was a visiting student at Moscow State University. At that time, Hoare was working on a machine translation project for the National Physical Laboratory. As a part of the translation process, he needed to sort the words in Russian sentences before looking them up in a Russian-English dictionary, which was in alphabetical order on magnetic tape. After recognizing that his first idea, insertion sort, would be slow, he came up with a new idea. He wrote the partition part in Mercury Autocode but had trouble dealing with the list of unsorted segments. On return to England, he was asked to write code for Shellsort. Hoare mentioned to his boss that he knew of a faster algorithm and his boss bet a sixpence that he did not. His boss ultimately
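The partition-and-recurse scheme described above can be sketched in a few lines of Python. This sketch uses the simpler Lomuto partition (last element as pivot) rather than Hoare's original partition scheme, but follows the same divide-and-conquer idea:

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort sketch using the Lomuto partition scheme."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot = a[hi]              # choose the last element as the pivot
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:       # move elements smaller than the pivot left
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]  # place the pivot in its final position
        quicksort(a, lo, i - 1)    # recursively sort the left sub-array
        quicksort(a, i + 1, hi)    # recursively sort the right sub-array
    return a
```

The partition step runs in linear time, and on average the pivot splits the array roughly in half, giving the O(n log n) expected behavior; an already-sorted input with this pivot choice triggers the O(n²) worst case.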
https://en.wikipedia.org/wiki/Specific%20modulus
Specific modulus is a materials property consisting of the elastic modulus per mass density of a material. It is also known as the stiffness to weight ratio or specific stiffness. High specific modulus materials find wide application in aerospace applications where minimum structural weight is required. The dimensional analysis yields units of distance squared per time squared. The equation can be written as: specific modulus = E/ρ, where E is the elastic modulus and ρ is the density. The utility of specific modulus is to find materials which will produce structures with minimum weight, when the primary design limitation is deflection or physical deformation, rather than load at breaking—this is also known as a "stiffness-driven" structure. Many common structures are stiffness-driven over much of their use, such as airplane wings, bridges, masts, and bicycle frames. To emphasize the point, consider the issue of choosing a material for building an airplane. Aluminum seems obvious because it is "lighter" than steel, but steel is stronger than aluminum, so one could imagine using thinner steel components to save weight without sacrificing (tensile) strength. The problem with this idea is that there would be a significant sacrifice of stiffness, allowing, e.g., wings to flex unacceptably. Because it is stiffness, not tensile strength, that drives this kind of decision for airplanes, we say that they are stiffness-driven. The connection details of such structures may be more sensitive to strength (rather than stiffness) issues due to effects of stress risers. Specific modulus is not to be confused with specific strength, a term that compares strength to density. Applications Specific stiffness in tension The use of specific stiffness in tension applications is straightforward. Both stiffness in tension and total mass for a given length are directly proportional to cross-sectional area. Thus performance of a beam in tension will depend on Young's modulus divided by density. Speci
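To make the airplane example concrete, the sketch below compares steel and aluminium using approximate handbook values for Young's modulus and density (the numbers are illustrative assumptions, not taken from this article):

```python
def specific_modulus(E, rho):
    """Specific modulus E / rho, in J/kg (equivalently m^2/s^2)."""
    return E / rho

# Approximate handbook values (illustrative assumptions):
steel = specific_modulus(200e9, 7850)     # E ~ 200 GPa, rho ~ 7850 kg/m^3
aluminium = specific_modulus(69e9, 2700)  # E ~ 69 GPa,  rho ~ 2700 kg/m^3
```

With these rough figures, steel is about three times stiffer in absolute terms, yet the two specific moduli come out within a few percent of each other, which is why the stiffness-driven choice between them is less obvious than raw stiffness suggests.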
https://en.wikipedia.org/wiki/List%20of%20Adobe%20software
The following is a list of software products by Adobe Inc. Active products Software suites Experience Cloud Adobe Experience Cloud (AEC) is a collection of integrated online marketing and Web analytics solutions by Adobe Inc. It includes a set of analytics, social, advertising, media optimization, targeting, Web experience management and content management solutions. It includes: Advertising Cloud Analytics Audience Manager Campaign Commerce Cloud Experience Manager Experience Manager Assets Experience Manager Sites Experience Manager Forms Marketo Engage Primetime Target Creative Suite Adobe Creative Suite (CS) was a series of software suites of graphic design, video editing, and web development applications made or acquired by Adobe Systems. It included: Acrobat After Effects Audition Bridge Contribute Device Central Dreamweaver Dynamic Link Encore Fireworks Flash Professional Illustrator InDesign OnLocation Photoshop Premiere Pro Creative Cloud Adobe Creative Cloud is the successor to Creative Suite. It is based on a software as a service model. It includes everything in Creative Suite 6 with the exclusion of Fireworks and Encore, as both applications were discontinued. It also introduced a few new programs, including Muse, Animate, InCopy and Story CC Plus. Technical Communication Suite Adobe Technical Communication Suite is a collection of applications made by Adobe Systems for technical communicators, help authors, instructional designers, and eLearning and training design professionals. It includes: Acrobat Captivate FrameMaker Presenter RoboHelp eLearning Suite Adobe eLearning Suite was a collection of applications made by Adobe Systems for learning professionals, instructional designers, training managers, content developers, and educators. Acrobat Captivate Device Central Dreamweaver Flash Professional Photoshop Discontinued products Acrobat Approval allows users to deploy electronic forms based on the Acrobat P
https://en.wikipedia.org/wiki/Great%20Oxidation%20Event
The Great Oxidation Event (GOE) or Great Oxygenation Event, also called the Oxygen Catastrophe, Oxygen Revolution, Oxygen Crisis or Oxygen Holocaust, was a time interval during the Early Earth's Paleoproterozoic Era when the Earth's atmosphere and the shallow ocean first experienced a rise in the concentration of oxygen. This began approximately 2.460–2.426 Ga (billion years) ago, during the Siderian period, and ended approximately 2.060 Ga, during the Rhyacian. Geological, isotopic, and chemical evidence suggests that biologically-produced molecular oxygen (dioxygen or O2) started to accumulate in Earth's atmosphere and changed it from a weakly reducing atmosphere practically devoid of oxygen into an oxidizing one containing abundant free oxygen, with oxygen levels being as high as 10% of their present atmospheric level by the end of the GOE. The sudden injection of highly reactive free oxygen, toxic to the then-mostly anaerobic biosphere, may have caused the extinction of many existing organisms on Earth — then mostly archaeal colonies that used retinal to utilize green-spectrum light energy and power a form of anoxygenic photosynthesis (see Purple Earth hypothesis). Although the event is inferred to have constituted a mass extinction, due in part to the great difficulty in surveying microscopic organisms' abundances, and in part to the extreme age of fossil remains from that time, the Great Oxidation Event is typically not counted among conventional lists of "great extinctions", which are implicitly limited to the Phanerozoic eon. In any case, isotope geochemistry data from sulfate minerals have been interpreted to indicate a decrease in the size of the biosphere of >80% associated with changes in nutrient supplies at the end of the GOE. The GOE is inferred to have been caused by cyanobacteria, which evolved porphyrin-based photosynthesis, which produces dioxygen as a byproduct. The increasing oxygen level eventually depleted the reducing capacity of ferrous compo
https://en.wikipedia.org/wiki/Functional%20ecology
Functional ecology is a branch of ecology that focuses on the roles, or functions, that species play in the community or ecosystem in which they occur. In this approach, physiological, anatomical, and life history characteristics of the species are emphasized. The term "function" is used to emphasize certain physiological processes rather than discrete properties, describe an organism's role in a trophic system, or illustrate the effects of natural selective processes on an organism. This sub-discipline of ecology represents the crossroads between ecological patterns and the processes and mechanisms that underlie them. It focuses on traits represented in a large number of species, which can be measured in two ways – the first being screening, which involves measuring a trait across a number of species, and the second being empiricism, which provides quantitative relationships for the traits measured in screening. Functional ecology often emphasizes an integrative approach, using organism traits and activities to understand community dynamics and ecosystem processes, particularly in response to the rapid global changes occurring in Earth's environment. Functional ecology sits at the nexus of several disparate disciplines and serves as the unifying principle between evolutionary ecology, evolutionary biology, genetics and genomics, and traditional ecological studies. It explores such areas as "[species'] competitive abilities, patterns of species co-occurrence, community assembly, and the role of different traits on ecosystem functioning". History The notion that ecosystems' functions can be affected by their constituent parts has its origins in the 19th century. Charles Darwin's On The Origin of Species is one of the first texts to directly comment on the effect of biodiversity on ecosystem health by noting a positive correlation between plant density and ecosystem productivity. In his influential 1927 work, Animal Ecology, Charles Elton proposed classifying an ecosyst
https://en.wikipedia.org/wiki/Carpenter%27s%20rule%20problem
The carpenter's rule problem is a discrete geometry problem, which can be stated in the following manner: Can a simple planar polygon be moved continuously to a position where all its vertices are in convex position, so that the edge lengths and simplicity are preserved along the way? A closely related problem is to show that any non-self-crossing polygonal chain can be straightened, again by a continuous transformation that preserves edge distances and avoids crossings. Both problems were successfully solved by Connelly, Demaine, and Rote (2003). The problem is named after the multiple-jointed wooden rulers popular among carpenters in the 19th and early 20th centuries before improvements to metal tape measures made them obsolete. Combinatorial proof Subsequent to their work, Ileana Streinu provided a simplified combinatorial proof formulated in the terminology of robot arm motion planning. Both the original proof and Streinu's proof work by finding non-expansive motions of the input, continuous transformations such that no two points ever move towards each other. Streinu's version of the proof adds edges to the input to form a pointed pseudotriangulation, removes one added convex hull edge from this graph, and shows that the remaining graph has a one-parameter family of motions in which all distances are nondecreasing. By repeatedly applying such motions, one eventually reaches a state in which no further expansive motions are possible, which can only happen when the input has been straightened or convexified. This result has an application to the mathematics of paper folding: any single-vertex origami shape can be folded using only simple non-self-intersecting motions of the paper. Essentially, this folding process is a time-reversed version of the problem of convexifying a polygon of length smaller than π, but on the surface of a sphere rather than in the Euclidean plane. This result was later extended to spherical polygons of edge length smaller than 2π. Generalization
https://en.wikipedia.org/wiki/Electric%20power
Electric power is the rate at which electrical energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second. Standard prefixes apply to watts as with other SI units: thousands, millions and billions of watts are called kilowatts, megawatts and gigawatts respectively. A common misconception is that electric power is bought and sold, but actually electrical energy is bought and sold. For example, electricity sold to consumers is measured in terms of amounts of energy, kilowatt-hours (kilowatts multiplied by hours), and not the rate at which this energy is transferred. Electric power is usually produced by electric generators, but can also be supplied by sources such as electric batteries. It is usually supplied to businesses and homes (as domestic mains electricity) by the electric power industry through an electrical grid. Electric power can be delivered over long distances by transmission lines and used for applications such as motion, light or heat with high efficiency. Definition Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts." The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is: P = W/t = VQ/t = VI where: W is work in joules t is time in seconds Q is electric charge in coulombs V is electric potential or voltage in volts I is electric current in amperes I.e., watts = volts times amps. Explanation Electric power is transformed to other forms of energy when electric charges move through an electric potential difference (voltage), which occurs in electrical components in electric circuits. From the standpoint of electric power, components in an electric circuit can be divided into two categories: Active devices (power sources) If electric current is forced to flow thro
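The definition and the billing distinction above can be sketched directly (function and variable names are mine, chosen for illustration):

```python
def power_watts(voltage_v, current_a):
    """Electric power P = V * I (watts = volts times amps)."""
    return voltage_v * current_a

def energy_kwh(power_w, hours):
    """Energy as billed by utilities: kilowatt-hours, not watts."""
    return (power_w / 1000.0) * hours
```

For example, a device drawing 10 A at 230 V dissipates 2300 W; running it for two hours consumes 4.6 kWh, and it is this energy figure, not the power, that appears on an electricity bill.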
https://en.wikipedia.org/wiki/List%20of%20SIP%20software
This list of SIP software documents notable software applications which use Session Initiation Protocol (SIP) as a voice over IP (VoIP) protocol. Servers Free and open-source license A SIP server, also known as a SIP proxy, manages all SIP calls within a network and takes responsibility for receiving requests from user agents for the purpose of placing and terminating calls. Asterisk ejabberd FreeSWITCH FreePBX GNU SIP Witch Issabel, fork of Elastix Kamailio, formerly OpenSER Mobicents Platform (JSLEE[2] 1.0 compliant and SIP Servlets 1.1 compliant application server) OpenSIPS, fork of OpenSER SailFin SIP Express Router (SER) Enterprise Communications System sipXecs Yate Proprietary license 3Com VCX IP telephony module: back-to-back user agent SIP PBX 3CX Phone System, for Windows, Debian 8 GNU/Linux Aastra 5000, 800, MX-ONE Alcatel-Lucent 5060 IP Call server Aricent SIP UA stack, B2BUA, proxy, VoLTE/RCS Client AskoziaPBX Avaya Application Server 5300 (AS5300), JITC certified ASSIP VoIP Bicom Systems IP PBX for telecoms Brekeke PBX, SIP PBX for service providers and enterprises Cisco SIP Proxy Server, Cisco unified border element (CUBE), Cisco Unified Communication Manager (CUCM) CommuniGate Pro, virtualized PBX for IP Centrex hosting, voicemail services, self-care, ... Comverse Technology softswitch, media applications, SIP registrars Creacode SIP Application Server Real-time SIP call controller and IVR product for carrier-class VoIP networks Dialogic Corporation Powermedia Media Servers, audio and video SIP IVR, media and conferencing servers for Enterprise and Carriers. Dialexia VoIP Softswitches, IP PBX for medium and enterprise organizations, billing servers. IBM WebSphere Application Server - Converged HTTP and SIP container JEE Application Server Interactive Intelligence Windows-based IP PBX for small, medium and enterprise organizations Kerio Operator, IP PBX for small and medium enterprises Microsoft Lync Server 2010
https://en.wikipedia.org/wiki/Idem
idem is a Latin term meaning "the same". It is commonly abbreviated as id., which is particularly used in legal citations to denote the previously cited source (compare ibid.). It is also used in academic citations to replace the name of a repeated author. Id. is employed extensively in Canadian legislation and in legal documents of the United States to apply a short description to a section with the same focus as the previous. Id. is masculine and neuter; ead. (feminine) is the abbreviation for eadem, which also translates to "the same". As an abbreviation, Id. always takes a period (or full stop) in both British and American usage (see usage of the full stop in abbreviations). Its first known use dates back to the 14th century. Use Legal United States v. Martinez-Fuerte, 428 U.S. 543, 545 (1976). Id. at 547. Here, the first citation refers to the case of United States v. Martinez-Fuerte. The volume number cited is 428 and the page on which the case begins is 543, and the page number cited to is 545. The "U.S." between the numerical portions of the citation refers to the United States Reports. 1976 refers to the year that the case was published. The second citation references the first citation and automatically incorporates the same reporter and volume number; however, the page number cited is now 547. Id. refers to the immediately preceding citation, so if the previous citation includes more than one reference, or it is unclear which reference Id. refers to, its usage is inappropriate. "...the Executive Order declares that “the United States must ensure that those admitted to this country do not bear hostile attitudes toward it and its founding principles.” Id. It asserts, “Deteriorating..." (from page 3 of State of Washington v. Donald J. Trump) Here, Id. refers to the Executive Order that was mentioned in the previous sentence. Academic Macgillivray, J. A. Minotaur: Sir Arthur Evans and the Archaeology of the Minoan Myth. New York: Hill & Wang,
https://en.wikipedia.org/wiki/Dynamic%20modulus
Dynamic modulus (sometimes complex modulus) is the ratio of stress to strain under vibratory conditions (calculated from data obtained from either free or forced vibration tests, in shear, compression, or elongation). It is a property of viscoelastic materials. Viscoelastic stress–strain phase-lag Viscoelasticity is studied using dynamic mechanical analysis where an oscillatory force (stress) is applied to a material and the resulting displacement (strain) is measured. In purely elastic materials the stress and strain occur in phase, so that the response of one occurs simultaneously with the other. In purely viscous materials, there is a phase difference between stress and strain, where strain lags stress by a 90 degree (π/2 radian) phase lag. Viscoelastic materials exhibit behavior somewhere in between that of purely viscous and purely elastic materials, exhibiting some phase lag in strain. Stress and strain in a viscoelastic material can be represented using the following expressions: Strain: ε = ε₀ sin(ωt) Stress: σ = σ₀ sin(ωt + δ) where ω = 2πf, f is the frequency of strain oscillation, t is time, and δ is the phase lag between stress and strain. The stress relaxation modulus G(t) is the ratio of the stress remaining at time t after a step strain ε₀ was applied at time t = 0: G(t) = σ(t)/ε₀, which is the time-dependent generalization of Hooke's law. For visco-elastic solids, G(t) converges to the equilibrium shear modulus: G_eq = lim(t→∞) G(t). The Fourier transform of the shear relaxation modulus G(t) is G*(ω) (see below). Storage and loss modulus The storage and loss modulus in viscoelastic materials measure the stored energy, representing the elastic portion, and the energy dissipated as heat, representing the viscous portion. The tensile storage and loss moduli are defined as follows: Storage: E′ = (σ₀/ε₀) cos δ Loss: E″ = (σ₀/ε₀) sin δ Similarly we also define shear storage and shear loss moduli, G′ and G″. Complex variables can be used to express the moduli E* and G* as follows: E* = E′ + iE″ G* = G′ + iG″ where i is the imaginary unit. Ratio between loss and storage modulus The ratio of the loss modulus to storag
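The storage and loss moduli follow directly from the stress and strain amplitudes and the phase lag; a minimal sketch (function and variable names are illustrative):

```python
import math

def storage_and_loss(sigma0, eps0, delta):
    """Tensile storage and loss moduli from amplitudes and phase lag.

    E' = (sigma0/eps0) * cos(delta), E'' = (sigma0/eps0) * sin(delta).
    """
    ratio = sigma0 / eps0
    return ratio * math.cos(delta), ratio * math.sin(delta)
```

For a purely elastic material delta = 0 and the loss modulus vanishes; for a purely viscous one delta = π/2 and the storage modulus vanishes, with viscoelastic materials lying in between.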
https://en.wikipedia.org/wiki/Speech%20technology
Speech technology relates to the technologies designed to duplicate and respond to the human voice. They have many uses. These include aid to the voice-disabled, the hearing-disabled, and the blind, along with communication with computers without a keyboard. They enhance game software and aid in marketing goods or services by telephone. The subject includes several subfields: Speech synthesis Speech recognition Speaker recognition Speaker verification Speech encoding Multimodal interaction See also Communication aids Language technology Speech interface guideline Speech processing Speech Technology (magazine)
https://en.wikipedia.org/wiki/BioLinux
BioLinux is a term used in a variety of projects involved in making access to bioinformatics software on a Linux platform easier using one or more of the following methods: Provision of complete systems Provision of bioinformatics software repositories Addition of bioinformatics packages to standard distributions Live DVD/CDs with bioinformatics software added Community building and support systems There are now various projects with similar aims, on both Linux systems and other Unices, and a selection of these are given below. There is also an overview in the Canadian Bioinformatics Helpdesk Newsletter that details some of the Linux-based projects. Package repositories Apple/Mac Many Linux packages are compatible with Mac OS X and there are several projects which attempt to make it easy to install selected Linux packages (including bioinformatics software) on a computer running Mac OS X. BioArchLinux The BioArchLinux repository contains more than 3,770 packages for Arch Linux and Arch Linux-based distributions. Debian Debian is another very popular Linux distribution in use in many academic institutions, and some bioinformaticians have made their own software packages available for this distribution in the deb format. Red Hat Package repositories are generally specific to the distribution of Linux the bioinformatician is using. A number of Linux variants are prevalent in bioinformatics work. Fedora is a freely-distributed version of the commercial Red Hat system. Red Hat is widely used in the corporate world as they offer commercial support and training packages. Fedora Core is a community supported derivative of Red Hat and is popular amongst those who like Red Hat's system but don't require commercial support. Many users of bioinformatics applications have produced RPMs (Red Hat's package format) designed to work with Fedora, which you can potentially also install on Red Hat Enterprise Linux systems. Other distributions such as Mandriv
https://en.wikipedia.org/wiki/Complement%20component%205a
C5a is a protein fragment released from cleavage of complement component C5 by protease C5-convertase into C5a and C5b fragments. C5b is important in late events of the complement cascade, an orderly series of reactions which coordinates several basic defense mechanisms, including formation of the membrane attack complex (MAC), one of the most basic weapons of the innate immune system, formed as an automatic response to intrusions from foreign particles and microbial invaders. It essentially pokes microscopic pinholes in these foreign objects, causing loss of water and sometimes death. C5a, the other cleavage product of C5, acts as a highly inflammatory peptide, encouraging complement activation, formation of the MAC, attraction of innate immune cells, and histamine release involved in allergic responses. The origin of C5 is in the hepatocyte, but its synthesis can also be found in macrophages, where it may cause local increase of C5a. C5a is a chemotactic agent and an anaphylatoxin; it is essential in the innate immunity but it is also linked with the adaptive immunity. The increased production of C5a is connected with a number of inflammatory diseases. Structure Human polypeptide C5a contains 74 amino acids and has a mass of about 11 kDa. NMR spectroscopy showed that the molecule is composed of four helices connected by peptide loops, with three disulphide bonds between helix IV and helices II and III. There is a short 1.5-turn helix at the N-terminus, but all agonist activity takes place at the C-terminus. C5a is rapidly metabolised by the serum enzyme carboxypeptidase B to C5a des-Arg, a 72-amino-acid form without the C-terminal arginine. Functions C5a is an anaphylatoxin, causing increased expression of adhesion molecules on endothelium, contraction of smooth muscle, and increased vascular permeability. C5a des-Arg is a much less potent anaphylatoxin. Both C5a and C5a des-Arg can trigger mast cell degranulation, releasing proinflammatory molecules histamine and TNF-α. C5a is also an effective c
https://en.wikipedia.org/wiki/Activity%20diagram
Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams are intended to model both computational and organizational processes (i.e., workflows), as well as the data flows intersecting with the related activities. Although activity diagrams primarily show the overall flow of control, they can also include elements showing the flow of data between activities through one or more data stores. Construction Activity diagrams are constructed from a limited number of shapes, connected with arrows. The most important shape types: stadia represent actions; diamonds represent decisions; bars represent the start (split) or end (join) of concurrent activities; a black circle represents the start (initial node) of the workflow; an encircled black circle represents the end (final node). Arrows run from the start towards the end and represent the order in which activities happen. Activity diagrams can be regarded as a form of a structured flowchart combined with a traditional data flow diagram. Typical flowchart techniques lack constructs for expressing concurrency. However, the join and split symbols in activity diagrams only resolve this for simple cases; the meaning of the model is not clear when they are arbitrarily combined with decisions or loops. While in UML 1.x, activity diagrams were a specialized form of state diagrams, in UML 2.x, the activity diagrams were reformalized to be based on Petri net-like semantics, increasing the scope of situations that can be modeled using activity diagrams. These changes cause many UML 1.x activity diagrams to be interpreted differently in UML 2.x. UML activity diagrams in version 2.x can be used in various domains, e.g. in design of embedded systems. It is possible to verify such a specification using model checking technique. See also Specification and Description Language Busines
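The shapes described above map naturally onto ordinary control-flow constructs in code. As an illustrative sketch (the order-processing workflow, function names, and stock threshold are invented for this example, and Python threads only approximate UML 2.x token semantics), a decision diamond corresponds to a conditional, and the fork/join bars correspond to starting and joining concurrent threads:

```python
import threading

def check_stock(order, results):
    results["in_stock"] = order["qty"] <= 10   # action node (stadium shape)

def bill_customer(order, results):
    results["billed"] = True                   # concurrent action

def ship_order(order, results):
    results["shipped"] = True                  # concurrent action

def process(order):
    """Initial node -> action -> decision -> fork/join -> final node."""
    results = {}
    check_stock(order, results)          # plain action
    if not results["in_stock"]:          # decision (diamond)
        return "rejected"
    # fork bar: billing and shipping proceed concurrently
    threads = [threading.Thread(target=f, args=(order, results))
               for f in (bill_customer, ship_order)]
    for t in threads:
        t.start()
    for t in threads:                    # join bar: wait for both branches
        t.join()
    return "done"                        # final node

print(process({"qty": 3}))
```

The fork/join pair is the construct a plain flowchart lacks: both branches must finish before control reaches the final node.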
https://en.wikipedia.org/wiki/Open-access%20network
An open-access network (OAN) refers to a horizontally layered network architecture in telecommunications, and the business model that separates the physical access to the network from the delivery of services. In an OAN, the owner or manager of the network does not supply services for the network; these services must be supplied by separate retail service providers. There are two different open-access network models: the two- and three-layer models. "Open Access" refers to a specialised and focused business model, in which a network infrastructure provider limits its activities to a fixed set of value layers in order to avoid conflicts of interest. The network infrastructure provider creates an open market and a platform for internet service providers (ISPs) to add value. The Open Access provider remains neutral and independent and offers standard and transparent pricing to ISPs on its network. It never competes with the ISPs. History In the 20th century, analog telephone and cable television networks were designed around the limitations of the prevailing technology. The copper-wired twisted pair telephone networks were not able to carry television programming, and copper-wired coaxial cable television networks were not able to carry voice telephony. Towards the end of the twentieth century, with the rise of packet switching—as used on the Internet—and IP-based and wireless technologies, it became possible to design, build, and operate a single high performance network capable of delivering hundreds of services from multiple, competing providers. Two models An OAN uses a different business model than traditional telecommunications networks. Regardless of whether the two- or three-layer model is used, an open-access network fundamentally means that there is an "organisational separation" of each of the layers. In other words, the network owner/operator cannot also be a retailer on that network. Two-layer model In the two-layer OAN model, there is a network owner
https://en.wikipedia.org/wiki/%C3%82u%20C%C6%A1
Âu Cơ (chữ Hán: 甌姬; ) was, according to the creation myth of the Vietnamese people, an immortal mountain snow goddess who married Lạc Long Quân (), and bore an egg sac that hatched a hundred children known collectively as Bách Việt, ancestors to the Vietnamese people. Âu Cơ is often honored as the mother of Vietnamese civilization. Mythology Âu Cơ was a beautiful young tiên who lived high in the snow-capped mountains. She traveled to help those who suffered from illnesses, since she was very skillful in medicine and had a sympathetic heart. One day, a monster suddenly appeared before her while she was on her travels. It frightened her, so she transformed into a crane to fly away. Lạc Long Quân, the dragon king from the sea, passed by and saw the crane in danger. He grabbed a nearby rock and killed the monster with it. When Âu Cơ stopped flying to see the very person that saved her, she turned back into a tiên and instantly fell in love with her savior. She soon bore an egg sac, from which hatched a hundred children. However, despite their love for each other, Âu Cơ had always desired to be in the mountains again, and Lạc Long Quân, too, yearned for the sea, where the length of days is measured by seasons. They separated, each taking 50 children. Âu Cơ settled in the snow-covered mountains of Vietnam, where she raised fifty young, intelligent, strong leaders, later known as the Hùng Vương, the Hùng kings. In Vietnamese literature The books Đại Việt sử ký toàn thư (from the 15th century) and Lĩnh Nam chích quái (Wonders plucked from the dust of Linh-nam, from the 14th century) mention the legend. In Đại Việt sử ký toàn thư, Âu Cơ is the daughter of Đế Lai (also known as Đế Ai 帝哀, or Emperor Ai, who was a descendant of Shennong), while in Lĩnh Nam chích quái, Âu Cơ was Đế Lai's concubine before she was married off to Lạc Long Quân. Additionally, in Lĩnh Nam chích quái, Âu Cơ gave birth to an egg sac but threw it away in the field, believing the egg sac to carry bad omens. Ngô Sĩ
https://en.wikipedia.org/wiki/OpenCable%20Application%20Platform
The OpenCable Application Platform, or OCAP, is the Java-based middleware portion of the OpenCable platform: an operating-system layer designed for consumer electronics that connect to a cable television system. Unlike operating systems on a personal computer, the cable company controls which OCAP programs run on the consumer's machine. Designed by CableLabs for the cable networks of North America, OCAP programs are intended for interactive services such as e-commerce, online banking, electronic program guides, and digital video recording. Cable companies have required OCAP as part of the CableCARD 2.0 specification, a proposal that is controversial and has not been approved by the Federal Communications Commission. Cable companies have stated that supporting two-way communications by third-party devices on their networks will require those devices to support OCAP. The Consumer Electronics Association and other groups argue that OCAP is intended to block features that compete with cable-company-provided services, and that consumers should be entitled to add, delete and otherwise control programs as on their personal computers. On January 8, 2008 CableLabs announced the Tru2Way brand for the OpenCable platform, including OCAP as the application platform. Technical overview OCAP is the Java-based software/middleware portion of the OpenCable initiative. OCAP is based on the Globally Executable MHP (GEM) standard, and was defined by CableLabs. Because OCAP is based on GEM, it has a lot in common with the Multimedia Home Platform (MHP) standard defined by the DVB project. At present two versions of the OCAP standard exist: OCAP v1.0 OCAP v2.0 See also Downloadable Conditional Access System (DCAS) Embedded Java Java Platform, Micro Edition ARIB Interactive digital cable ready OEDN
https://en.wikipedia.org/wiki/IBM%20Advanced%20Program-to-Program%20Communication
In computing, Advanced Program to Program Communication or APPC is a protocol which computer programs can use to communicate over a network. APPC operates at the application layer in the OSI model and enables communications between programs on different computers, from portables and workstations to midrange and host computers. APPC is defined as VTAM LU 6.2 (logical unit type 6.2). APPC was developed in 1982 as a component of IBM's Systems Network Architecture (SNA). Several APIs were developed for programming languages such as COBOL, PL/I, C and REXX. APPC software is available for many different IBM and non-IBM operating systems, either as part of the operating system or as a separate software package. APPC serves as a translator between application programs and the network. When an application on one computer passes information to the APPC software, APPC translates the information and passes it to a network interface, such as a LAN adapter card. The information travels across the network to another computer, where the APPC software receives the information from the network interface. APPC translates the information back into its original format and passes it to the corresponding partner application. APPC is mainly used by IBM installations running operating systems such as z/OS (formerly MVS, then OS/390), z/VM (formerly VM/CMS), z/TPF, IBM i (formerly OS/400), OS/2, AIX and z/VSE (formerly DOS/VSE). Microsoft also includes SNA support in its Host Integration Server. Major IBM software products also include support for APPC, including CICS, Db2, CIM and WebSphere MQ. Unlike TCP/IP, in which both communication partners always have a fixed role (one is always the server, and the other always the client), APPC is a peer-to-peer protocol. The communication partners in APPC are equal; every application can act as both server and client. The role, and the number of parallel sessions between the partners, are negotiated over CNOS sessions (Change Number Of Sess
https://en.wikipedia.org/wiki/Phosphated%20distarch%20phosphate
Phosphated distarch phosphate is a type of chemically modified starch. It can be derived from wheat starch, tapioca starch, potato starch or many other botanical sources of starch. It is produced by replacing the hydrogen bonds between starch chains with stronger, more permanent covalent phosphate bonds. It is manufactured by treating starch with sodium tripolyphosphate (STPP) and sodium trimetaphosphate (STMP), or with phosphoryl chloride (POCl3). Phosphorylated cross-linked starches form a category of modified food starches within the U.S. Code of Federal Regulations. Starches treated with STMP and STPP must not exceed 0.4 percent phosphorus as residual phosphate. Phosphated distarch phosphate starches can be used as a food additive (E1413) within the European Union as a freeze-thaw-stable thickener (stabilising the consistency of the foodstuff when frozen and thawed) in products such as soups, sauces, frozen gravies and pie fillings. Depending upon the degree of modification, phosphated distarch phosphate starch can contain 70–85% type RS4 resistant starch and can replace high-glycemic flour in functional bread and other baked goods. Replacing flour with chemically modified resistant starch increases the dietary fiber and lowers the calorie content of foods. In 2011, the European Food Safety Authority approved a health claim that all types of resistant starch, including modified resistant starch, can reduce the post-prandial glycemic response when a high-carbohydrate baked food contains at least 14% of total starch as resistant starch. In 2019, the U.S. Food and Drug Administration approved "cross-linked phosphorylated RS4", regardless of source, as dietary fiber on food labels. See also Modified starch Resistant starch
https://en.wikipedia.org/wiki/List%20of%20NHL%20mascots
This is a list of current and former National Hockey League (NHL) mascots, sorted alphabetically by team. As of 2023, the New York Rangers are the only team to not have a mascot. Current mascots Al the Octopus Al the Octopus is the octopus mascot of the Detroit Red Wings. It is also the only mascot that is not costumed. In 1952, when east side fish merchants Pete and Jerry Cusimano threw a real octopus onto the Olympia arena ice, the eight legs represented the eight victories needed to secure a Stanley Cup in those six-team days. Since then, fans throw an octopus onto the ice for good luck. In the 1995 Playoffs, fans threw fifty-four onto the ice. Arena Manager and Zamboni driver Al Sobotka ceremoniously scoops them up and whirls them over his head, and play continues. NHL Commissioner Gary Bettman forbade Sobotka from doing so during the 2008 playoffs, claiming that debris flew off the octopuses and onto the ice. Sobotka and the Red Wings have denied that this occurs, but even so Sobotka acquiesced and now twirls the octopuses once he departs the ice. In 2011, the NHL forbade fans from throwing any octopuses on the ice, penalizing all violators with a $500 fine. This has led to local outcry at the seemingly intentional destruction of a classic tradition. Red Wings' forward Johan Franzen has pledged to pay any and all fines as an attempt to continue the tradition. Two identical large purple prop octopuses (Al), named after ice manager Al Sobotka, used to be positioned in or on top of Joe Louis Arena for the duration of the playoffs. After closing down the arena after the 2016-2017 season, one was sold for $7,700. Bailey Bailey, the mascot of the Los Angeles Kings, is a 6-foot lion (6 foot 4 inches with mane included) who wears No. 72 because it is the average temperature in Los Angeles. He debuted during the 2007-2008 season and was named in honor of Garnet Bailey, who served as the Kings' Director of Pro Scouting from 1994 until his death in the September 11
https://en.wikipedia.org/wiki/Muscone
Muscone is a macrocyclic ketone, an organic compound that is the primary contributor to the odor of musk. The chemical structure of muscone was first elucidated by Leopold Ružička. It is a 15-membered ring ketone with one methyl substituent in the 3-position. It is an oily liquid that is found naturally as the (−)-enantiomer, (R)-3-methylcyclopentadecanone. Muscone has been synthesized as the pure (−)-enantiomer as well as the racemate. It is very slightly soluble in water and miscible with alcohol. Natural muscone is obtained from musk, a glandular secretion of the musk deer, which has been used in perfumery and medicine for thousands of years. Since obtaining natural musk requires killing the endangered animal, nearly all muscone used in perfumery today is synthetic. It has the characteristic smell of being "musky". One asymmetric synthesis of (−)-muscone begins with commercially available (+)-citronellal, and forms the 15-membered ring via ring-closing metathesis: A more recent enantioselective synthesis involves an intramolecular aldol addition/dehydration reaction of a macrocyclic diketone. Muscone is now produced synthetically for use in perfumes and for scenting consumer products. Isotopologues of muscone have been used in a study of the mechanism of olfaction. Global replacement of all hydrogens in muscone was achieved by heating muscone with Rh/C in D2O at 150 °C. It was found that the human musk-recognizing receptor, OR5AN1, identified using a heterologous olfactory receptor expression system and robustly responding to muscone, fails to distinguish between muscone and the so-prepared isotopologue in vitro. OR5AN1 is reported to bind to muscone and related musks such as civetone through hydrogen-bond formation from tyrosine-258 along with hydrophobic interactions with surrounding aromatic residues in the receptor.
https://en.wikipedia.org/wiki/Mushroom%20tea
Mushroom tea is an infusion of mushrooms in water, made by using edible/medicinal mushrooms (such as lingzhi mushroom) or psychedelic mushrooms (such as Psilocybe cubensis). The active ingredient in psychedelic mushrooms is psilocybin, while the active ingredients in medicinal mushrooms are thought to be beta-glucans. Korea In Korea, mushroom teas known as beoseot-cha ( ) are made from edible mushrooms such as black hoof mushroom, lingzhi mushroom, oyster mushroom, scaly hedgehog, and shiitake mushroom. Neungi-cha () – scaly hedgehog tea Neutari-cha () – oyster mushroom tea Pyogo-cha () – shiitake mushroom tea Sanghwang-cha () – black hoof mushroom tea Yeongji-cha () – lingzhi mushroom tea See also Kombucha (tea mushroom) Psychedelic mushrooms
https://en.wikipedia.org/wiki/Soy%20allergy
Soy allergy is a type of food allergy. It is a hypersensitivity to ingesting compounds in soy (Glycine max), causing an overreaction of the immune system, typically with physical symptoms, such as gastrointestinal discomfort, respiratory distress, or a skin reaction. Soy is among the eight most common foods inducing allergic reactions in children and adults. It has a prevalence of about 0.3% in the general population. Soy allergy is usually treated with an exclusion diet and vigilant avoidance of foods that may contain soy ingredients. The most severe food allergy reaction is anaphylaxis, which is a medical emergency requiring immediate attention and treatment with epinephrine. Signs and symptoms Acute soy allergy can have fast onset (from seconds to one hour) or slow onset (from hours to several days), depending on the conditions of exposure, whereas long-term soy allergy may begin in infancy with reaction to soy-based infant formula. Although most children outgrow soy allergy, some may have the allergy persist into adulthood. IgE allergy Symptoms may include: rash, hives, itching of the mouth, lips, tongue, throat, eyes, skin, or other areas, swelling of lips, tongue, eyelids, or the whole face, difficulty swallowing, runny or congested nose, hoarse voice, wheezing, shortness of breath, diarrhea, abdominal pain, lightheadedness, fainting, nausea and vomiting. Symptoms of allergies vary from person to person and may vary from incident to incident. Serious danger regarding allergies can begin when the respiratory tract or blood circulation is affected. The former can be indicated by wheezing, a blocked airway and cyanosis, the latter by weak pulse, pale skin, and fainting. When such severe symptoms occur, the allergic reaction is called anaphylaxis. Anaphylaxis occurs when IgE antibodies are released into the systemic circulation in response to the allergen, affecting multiple organs with severe symptoms. Untreated, the anaphylactic response can proceed to a ra
https://en.wikipedia.org/wiki/Learning%20automaton
A learning automaton is one type of machine learning algorithm studied since the 1970s. A learning automaton selects its current action based on past experiences from the environment. It falls within the scope of reinforcement learning if the environment is stochastic and a Markov decision process (MDP) is used. History Research in learning automata can be traced back to the work of Michael Lvovitch Tsetlin in the early 1960s in the Soviet Union. Together with some colleagues, he published a collection of papers on how to use matrices to describe automata functions. Additionally, Tsetlin worked on reasonable and collective automata behaviour, and on automata games. Learning automata were also investigated by researchers in the United States in the 1960s. However, the term learning automaton was not used until Narendra and Thathachar introduced it in a survey paper in 1974. Definition A learning automaton is an adaptive decision-making unit situated in a random environment that learns the optimal action through repeated interactions with its environment. The actions are chosen according to a specific probability distribution, which is updated based on the response the automaton obtains from the environment by performing a particular action. With respect to the field of reinforcement learning, learning automata are characterized as policy iterators. In contrast to other reinforcement learners, policy iterators directly manipulate the policy π. Another example of policy iterators are evolutionary algorithms. Formally, Narendra and Thathachar define a stochastic automaton to consist of: a set X of possible inputs, a set Φ = { Φ1, ..., Φs } of possible internal states, a set α = { α1, ..., αr } of possible outputs, or actions, with r ≤ s, an initial state probability vector p(0) = ≪ p1(0), ..., ps(0) ≫, a computable function A which after each time step t generates p(t+1) from p(t), the current input, and the current state, and a function G: Φ → α which generates the outpu
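A variable-structure learning automaton can be sketched in a few lines. The example below uses the classic linear reward-inaction (L_RI) update rule — an assumption chosen for illustration, since the text above describes the general model rather than one specific scheme: the action-probability vector is pushed toward an action whenever the environment rewards it, and left unchanged on a penalty.

```python
import random

def l_ri_automaton(reward_probs, learning_rate=0.05, steps=20000, seed=0):
    """Linear reward-inaction (L_RI) learning automaton.

    reward_probs[i] is the (unknown to the automaton) probability that
    the environment rewards action i.  The action-probability vector p
    is updated only on reward; a penalty leaves it unchanged.
    """
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r                       # start with a uniform policy
    for _ in range(steps):
        # choose an action according to the current probability vector
        i = rng.choices(range(r), weights=p)[0]
        if rng.random() < reward_probs[i]:  # environment rewards action i
            # move probability mass toward the rewarded action;
            # the update keeps the vector on the probability simplex
            p = [pj + learning_rate * (1.0 - pj) if j == i
                 else pj * (1.0 - learning_rate)
                 for j, pj in enumerate(p)]
    return p

probs = l_ri_automaton([0.2, 0.8, 0.4])
print([round(q, 3) for q in probs])   # mass typically concentrates on action 1
```

With a small learning rate the automaton is ε-optimal: it converges to a pure strategy, and with high probability that strategy is the action with the highest reward probability.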
https://en.wikipedia.org/wiki/Byte%20Code%20Engineering%20Library
The Byte Code Engineering Library (BCEL) is a project sponsored by the Apache Software Foundation (previously under its Jakarta charter) that provides a simple API for decomposing, modifying, and recomposing binary Java classes (i.e., bytecode). The project was conceived and developed by Markus Dahm before being officially donated to Apache Jakarta on 27 October 2001. Uses BCEL provides a simple library that exposes the internal aggregate components of a given Java class through its API as object constructs (as opposed to the disassembly of the lower-level opcodes). These objects also expose operations for modifying the binary bytecode, as well as generating new bytecode (via injection of new code into the existing code, or through generation of new classes altogether). The BCEL library has been used in several diverse applications, such as: Java bytecode decompiling, obfuscation, and refactoring. Performance and profiling: instrumentation calls that capture performance metrics can be injected into Java class binaries to examine memory/coverage data (for example, injecting instrumentation at entry/exit points). Implementation of new language semantics: for example, aspect-oriented additions to the Java language have been implemented by using BCEL to decompose class structures for point-cut identification, and then again when reconstituting the class by injecting aspect-related code back into the binary (see: AspectJ). Static code analysis: FindBugs uses BCEL to analyze Java bytecode for code idioms which indicate bugs. See also ObjectWeb ASM Javassist External links Apache Commons BCEL - The BCEL Project Home Page. BCEL-Based Project Listing - A listing of projects that make use of the BCEL Library. Apache Jakarta Home - The Apache Jakarta Home Page. AspectJ - The AspectJ Project Home Page. (One of the high-visibility projects that makes use of BCEL.) Virtualization software
https://en.wikipedia.org/wiki/PLECS
PLECS (Piecewise Linear Electrical Circuit Simulation) is a software tool for system-level simulations of electrical circuits developed by Plexim. It is especially designed for power electronics but can be used for any electrical network. Besides the electrical system, PLECS can also model controls and other physical domains (thermal, magnetic and mechanical). Most circuit simulation programs model switches as highly nonlinear elements. Due to steep voltage and current transients, the simulation becomes slow whenever switches are commutated. In the simplest applications, switches are modelled as variable resistors that alternate between a very small and a very large resistance. In other cases, they are represented by a sophisticated semiconductor model. When simulating complex power electronic systems, however, the processes during switching are of little interest. In these situations it is more appropriate to use ideal switches that toggle instantaneously between a closed and an open circuit. This approach, which is implemented in PLECS, has two major advantages. Firstly, it yields systems that are piecewise linear across switching instants, thus resolving the otherwise difficult problem of simulating the nonlinear discontinuity that occurs in the equivalent circuit at the switching instant. Secondly, to handle a discontinuity at a switching instant, only two integration steps are required (one just before the instant, and one just after). Both of these advantages speed up the simulation considerably without sacrificing accuracy, so the software is well suited to modelling and simulating complex drive systems and modular multilevel converters, for example. In recent years, PLECS has been extended to also support model-based development of controls with automatic code generation. In addition to software, the PLECS product family includes real-time simulation hardware for both hardware-in-the-loop (HIL) testing and rapid control prototy
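The ideal-switch idea can be illustrated with a toy example — a hypothetical switched RC network, not a PLECS model. Within each switch position the circuit is linear, so the state can be advanced over a whole interval with the exact exponential solution instead of many small integration steps through a nonlinear switch model:

```python
import math

def simulate_switched_rc(V=10.0, Rs=1.0, RL=9.0, C=1e-3,
                         period=1e-3, duty=0.5, cycles=50):
    """Ideal-switch simulation of a switched RC circuit (illustrative sketch).

    Source V feeds the capacitor C through Rs while the switch is closed;
    a load RL is permanently in parallel with C.  Each switch position gives
    a linear circuit, so the capacitor voltage is advanced per interval with
    the closed-form exponential solution of a first-order linear ODE.
    """
    v = 0.0
    trace = [v]
    v_on = V * RL / (Rs + RL)             # steady-state target, switch closed
    tau_on = (Rs * RL / (Rs + RL)) * C    # time constant, switch closed
    tau_off = RL * C                      # time constant, switch open
    for _ in range(cycles):
        # switch closed for duty*period: exact exponential approach to v_on
        v = v_on + (v - v_on) * math.exp(-duty * period / tau_on)
        trace.append(v)
        # switch open for the rest of the period: exponential discharge
        v = v * math.exp(-(1.0 - duty) * period / tau_off)
        trace.append(v)
    return trace

trace = simulate_switched_rc()
print(round(trace[-1], 3))
```

Only one state update per switch interval is needed, which is the source of the speed-up the text describes; a general-purpose simulator would instead take many small steps through every commutation.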
https://en.wikipedia.org/wiki/Line%20of%20action
In physics, the line of action (also called line of application) of a force F is a geometric representation of how the force is applied. It is the straight line through the point at which the force is applied, in the same direction as the vector F. The concept is essential, for instance, for understanding the net effect of multiple forces applied to a body. For example, if two forces of equal magnitude act upon a rigid body along the same line of action but in opposite directions, they cancel and have no net effect. But if, instead, their lines of action are not identical, but merely parallel, then their effect is to create a moment on the body, which tends to rotate it. Calculation of torque For the simple geometry associated with the figure, there are three equivalent equations for the magnitude of the torque associated with a force F directed at displacement r from the axis, whenever the force is perpendicular to the axis: τ = |r × F| = r F⊥ = r⊥ F = r F sin θ, where × is the cross-product, F⊥ is the component of F perpendicular to r, r⊥ is the moment arm, and θ is the angle between r and F.
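The equivalence of the three torque expressions can be checked numerically; the vectors and the 30° angle below are made up for the demonstration:

```python
import math

def torque_3d(r, F):
    """Magnitude of the torque tau = r x F for 3-vectors r and F."""
    tx = r[1] * F[2] - r[2] * F[1]
    ty = r[2] * F[0] - r[0] * F[2]
    tz = r[0] * F[1] - r[1] * F[0]
    return math.sqrt(tx * tx + ty * ty + tz * tz)

# Force of 4 N applied at 0.5 m from the axis, at 30 degrees to the lever arm.
theta = math.radians(30)
r = (0.5, 0.0, 0.0)
F = (4.0 * math.cos(theta), 4.0 * math.sin(theta), 0.0)

r_mag = math.hypot(r[0], r[1])
F_mag = math.hypot(F[0], F[1])

tau_cross = torque_3d(r, F)                   # |r x F|
tau_sin = r_mag * F_mag * math.sin(theta)     # r F sin(theta)
tau_arm = (r_mag * math.sin(theta)) * F_mag   # moment arm times F

print(round(tau_cross, 6), round(tau_sin, 6), round(tau_arm, 6))
```

All three evaluate to the same torque, illustrating that the moment arm and the perpendicular force component are just two ways of accounting for the sin θ factor.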
https://en.wikipedia.org/wiki/Bird%20intelligence
The difficulty of defining or measuring intelligence in non-human animals makes the subject difficult to study scientifically in birds. In general, birds have relatively large brains compared to their head size. The visual and auditory senses are well developed in most species, though the tactile and olfactory senses are well realized only in a few groups. Birds communicate using visual signals as well as through the use of calls and song. The testing of intelligence in birds is therefore usually based on studying responses to sensory stimuli. The corvids (ravens, crows, jays, magpies, etc.) and psittacines (parrots, macaws, and cockatoos) are often considered the most intelligent birds, and are among the most intelligent animals in general. Pigeons, finches, domestic fowl, and birds of prey have also been common subjects of intelligence studies. Studies Bird intelligence has been studied through several attributes and abilities. Many of these studies have been on birds such as quail, domestic fowl, and pigeons kept under captive conditions. It has, however, been noted that field studies have been limited, unlike those of the apes. Birds in the crow family (corvids) as well as parrots (psittacines) have been shown to live socially, have long developmental periods, and possess large forebrains, all of which have been hypothesized to allow for greater cognitive abilities. Counting has traditionally been considered an ability that shows intelligence. Anecdotal evidence from the 1960s has suggested that crows can count up to 3. Researchers need to be cautious, however, and ensure that birds are not merely demonstrating the ability to subitize, or count a small number of items quickly. Some studies have suggested that crows may indeed have a true numerical ability. It has been shown that parrots can count up to 6. Cormorants used by Chinese fishermen were given every eighth fish as a reward, and found to be able to keep count up to 7. E.H. Hoh wrote in Natural Histo
https://en.wikipedia.org/wiki/Viglen
Viglen Ltd provides IT products and services, including storage systems, servers, workstations and data/voice communications equipment and services. History The company was formed in 1975, by Vigen Boyadjian. During the 1980s, the company specialised in direct sales through multi page advertisements in leading computer magazines, catering particularly, but not exclusively, to owners of Acorn computers. Viglen was acquired by Alan Sugar (later Lord Sugar)'s company Amstrad in June 1994. It was listed as a public limited company in 1997, and Amstrad plc shares were split into Viglen and Betacom shares, Betacom being renamed to Amstrad PLC. Following the sale in July 2007 of Amstrad PLC to Rupert Murdoch's BSkyB, Viglen became Sugar's sole IT establishment. Viglen used to be run by CEO Bordan Tkachuk, a longtime associate of Lord Sugar, who can be seen making special guest appearances on The Apprentice. From 1994 to 1998, the company sponsored Charlton Athletic F.C., expiring when they won promotion to the FA Premier League. In December 2005, Viglen relocated from its London headquarters in Wembley to Colney Street near St Albans, into a building which also houses its fabrication plant. , Viglen focused particularly on the education and public sectors, selling both desktop and server systems, and also had interests in other IT markets such as managed services, high performance clusters, and network attached storage. In July 2009, Lord Sugar resigned as the chairman of Viglen (and most of his other companies), handing over the reins of the company to longtime associate, Claude Littner. In January 2014, Sugar sold his interest in Viglen to the Westcoast Group, which merged it with another of its subsidiaries, XMA. The Apprentice Under its former ownership by Lord Sugar, the Viglen headquarters doubled up as one of the filming locations for the BBC programme The Apprentice, with various scenes including the infamous "job interviews" being set there. The "walk of sh
https://en.wikipedia.org/wiki/Magnetic%20photon
In physics, a magnetic photon is a hypothetical particle. It is a mixture of even and odd C-parity states and, unlike the normal photon, does not couple to leptons. It is predicted by certain extensions of electromagnetism to include magnetic monopoles. There is no experimental evidence for the existence of this particle, and several versions have been ruled out by negative experiments. The magnetic photon was predicted in 1966 by Nobel laureate Abdus Salam. See also Dual photon, a different extension for magnetic monopoles
https://en.wikipedia.org/wiki/Internet%20Gateway%20Device%20Protocol
Internet Gateway Device (IGD) Protocol is a protocol based on Universal Plug and Play (UPnP) for mapping ports in network address translation (NAT) setups, supported by some NAT-enabled routers. It is a common communications protocol for automatically configuring port forwarding, and is part of an ISO/IEC standard rather than an Internet Engineering Task Force standard. Usage Applications using peer-to-peer networks, multiplayer gaming, and remote assistance programs need a way to communicate through home and business gateways. Without IGD one has to configure the gateway manually to allow traffic through, a process which is error-prone and time-consuming. Universal Plug and Play (UPnP) comes with a solution for network address translation traversal (NAT traversal) that implements IGD. IGD makes it easy to do the following: Add and remove port mappings Assign lease times to mappings Enumerate existing port mappings Learn the public (external) IP address A host can search for available IGDv1/IGDv2 devices on the network with a single M-SEARCH request (for IGDv1) via the Simple Service Discovery Protocol (SSDP); a discovered device can then be controlled with the help of a network protocol such as SOAP. A discovery request is sent via HTTP over UDP to port 1900 at the IPv4 multicast address 239.255.255.250 (for the IPv6 addresses see the Simple Service Discovery Protocol (SSDP)):

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 2
ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1

Security risks Malware can exploit the IGD protocol to bring connected devices under the control of a foreign user. The Conficker worm is an example of a botnet created using this vector. Compatibility issues There are numerous compatibility issues due to differing interpretations of the very large IGDv1 and IGDv2 specifications, even though IGDv2 is meant to be backward compatible. One of them concerns the UPnP IGD client integrated with current Microsoft Windows and Xbox systems with certified IGDv2 rou
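The discovery request above can be constructed and sent with a short script. The helper below is a sketch: the message format follows the SSDP request shown in the text, while the `discover` function needs a live LAN with a UPnP-enabled router to return anything.

```python
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="urn:schemas-upnp-org:device:InternetGatewayDevice:1",
                  mx=2):
    """Build the SSDP M-SEARCH discovery request shown in the text.

    SSDP header lines are CRLF-terminated and the message ends with a
    blank line, as in HTTP.
    """
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",
        f"ST: {search_target}",
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=2.0):
    """Send the request and collect raw responses (requires a live LAN)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses

print(build_msearch().decode("ascii"))
```

Each gateway that matches the ST (search target) header answers with a unicast HTTP response containing the URL of its device description, which is where a control point would continue with SOAP.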
https://en.wikipedia.org/wiki/Mergelyan%27s%20theorem
Mergelyan's theorem is a result from approximation by polynomials in complex analysis proved by the Armenian mathematician Sergei Mergelyan in 1951. Statement Let K be a compact subset of the complex plane C such that C∖K is connected. Then, every continuous function f : K → C whose restriction to int(K) is holomorphic can be approximated uniformly on K by polynomials. Here, int(K) denotes the interior of K. Mergelyan's theorem also holds for open Riemann surfaces: if K is a compact set without holes in an open Riemann surface X, then every function holomorphic in a neighbourhood of K can be approximated uniformly on K by functions holomorphic on all of X. Mergelyan's theorem does not always hold in higher dimensions (spaces of several complex variables), but it has some consequences. History Mergelyan's theorem is a generalization of the Weierstrass approximation theorem and Runge's theorem. In the case that C∖K is not connected, in the initial approximation problem the polynomials have to be replaced by rational functions. An important step of the solution of this further rational approximation problem was also suggested by Mergelyan in 1952. Further deep results on rational approximation are due to, in particular, A. G. Vitushkin. Weierstrass's and Runge's theorems were put forward in 1885, while Mergelyan's theorem dates from 1951. After Weierstrass and Runge, many mathematicians (in particular Walsh, Keldysh, Lavrentyev, Hartogs, and Rosenthal) had been working on the same problem. The method of proof suggested by Mergelyan is constructive, and remains the only known constructive proof of the result. See also Arakelyan's theorem Hartogs–Rosenthal theorem Oka–Weil theorem
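For reference, the statement reads compactly in LaTeX (a standard epsilon rendering of the prose above):

```latex
\textbf{Theorem (Mergelyan, 1951).}
Let $K \subset \mathbb{C}$ be compact with $\mathbb{C} \setminus K$ connected.
If $f \colon K \to \mathbb{C}$ is continuous on $K$ and holomorphic on
$\operatorname{int}(K)$, then for every $\varepsilon > 0$ there exists a
polynomial $P$ such that
\[
  \max_{z \in K} \, |f(z) - P(z)| < \varepsilon .
\]
```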
https://en.wikipedia.org/wiki/Dicalcium%20phosphate
Dicalcium phosphate is the calcium phosphate with the formula CaHPO4 and its dihydrate. The "di" prefix in the common name arises because the formation of the HPO4^2− anion involves the removal of two protons from phosphoric acid, H3PO4. It is also known as dibasic calcium phosphate or calcium monohydrogen phosphate. Dicalcium phosphate is used as a food additive, is found in some toothpastes as a polishing agent, and is a biomaterial. Preparation Dibasic calcium phosphate is produced by the neutralization of calcium hydroxide with phosphoric acid, which precipitates the dihydrate as a solid: Ca(OH)2 + H3PO4 → CaHPO4•2H2O. At 60 °C the anhydrous form is precipitated: Ca(OH)2 + H3PO4 → CaHPO4 + 2 H2O. To prevent degradation that would form hydroxyapatite, sodium pyrophosphate or trimagnesium phosphate octahydrate are added when, for example, dibasic calcium phosphate dihydrate is to be used as a polishing agent in toothpaste. In a continuous process CaCl2 can be treated with (NH4)2HPO4 to form the dihydrate: CaCl2 + (NH4)2HPO4 + 2 H2O → CaHPO4•2H2O + 2 NH4Cl. A slurry of the dihydrate is then heated to around 65–70 °C to form anhydrous CaHPO4 as a crystalline precipitate, typically as flat diamondoid crystals, which are suitable for further processing. Dibasic calcium phosphate dihydrate is formed in "brushite" calcium phosphate cements (CPCs), which have medical applications. An example of the overall setting reaction in the formation of "β-TCP/MCPM" (β-tricalcium phosphate/monocalcium phosphate monohydrate) calcium phosphate cements is: Ca3(PO4)2 + Ca(H2PO4)2•H2O + 7 H2O → 4 CaHPO4•2H2O. Structure Three forms of dicalcium phosphate are known: dihydrate, CaHPO4•2H2O ('DCPD'), the mineral brushite monohydrate, CaHPO4•H2O ('DCPM') anhydrous CaHPO4 ('DCPA'), the mineral monetite. Below pH 4.8 the dihydrate and anhydrous forms of dicalcium phosphate are the most stable (insoluble) of the calcium phosphates. The structures of the anhydrous and dihydrate forms have been determined by X-ray crystallography and the structure of the monohydrate was determined by electron crystallography. The dihydrate (shown in table above) as well as the
https://en.wikipedia.org/wiki/Match%20Day%20II
Match Day II is a football sports game, part of the Match Day series, released for the Amstrad CPC, Amstrad PCW, ZX Spectrum, MSX and Commodore 64 platforms. It was created in 1987 by Jon Ritman with graphics by Bernie Drummond and music and sound by Guy Stevens (except for the Commodore version, which was a line-by-line conversion by John Darnell). It is the sequel to 1984's Match Day. Gameplay The controls consist of four directions (allowing eight directions including diagonals) and a shot button. Each team has seven players, including the goalkeeper, and there are league and cup options available. The game is considered highly addictive due to its difficulty level, the complete control over ball direction, power and elevation (using a Diamond Deflection System), and the importance of tactics and player positioning over the field (barging if necessary), which makes it challenging to break strong defences. It was the first game to use a kickometer. Some versions of the game play the song When the Saints Go Marching In while the players are walking to their initial positions on the field at the beginning of each half. The ZX Spectrum version of the game went to number 2 in the UK sales charts, behind Out Run, and was voted the 10th best game of all time in a special issue of Your Sinclair magazine in 2004. Related games The game is similar to a previous unpublished game by Jon Ritman, Soccerama. Later, in 1995, Jon Ritman tried to release Match Day III, but the name of the game was changed to Super Match Soccer to avoid any potential legal issues.
https://en.wikipedia.org/wiki/Fluid%20coupling
A fluid coupling or hydraulic coupling is a hydrodynamic or 'hydrokinetic' device used to transmit rotating mechanical power. It has been used in automobile transmissions as an alternative to a mechanical clutch. It also has widespread application in marine and industrial machine drives, where variable-speed operation and controlled start-up without shock loading of the power transmission system are essential. Hydrokinetic drives, such as this, should be distinguished from hydrostatic drives, such as hydraulic pump and motor combinations. History The fluid coupling originates from the work of Hermann Föttinger, who was the chief designer at the AG Vulcan Works in Stettin. His patents from 1905 covered both fluid couplings and torque converters. Dr Gustav Bauer of the Vulcan-Werke collaborated with English engineer Harold Sinclair of Hydraulic Coupling Patents Limited to adapt the Föttinger coupling to vehicle transmission, in an attempt to mitigate the lurching Sinclair had experienced while riding on London buses during the 1920s. Following Sinclair's discussions with the London General Omnibus Company, begun in October 1926, and trials on an Associated Daimler bus chassis, Percy Martin of Daimler decided to apply the principle to the Daimler group's private cars. During 1930 the Daimler Company of Coventry, England began to introduce a transmission system using a fluid coupling and Wilson self-changing gearbox for buses and their flagship cars. By 1933 the system was used in all new Daimler, Lanchester and BSA vehicles produced by the group, from heavy commercial vehicles to small cars. It was soon extended to Daimler's military vehicles, and in 1934 it was featured in the Singer Eleven, branded as Fluidrive. These couplings are described as constructed under Vulcan-Sinclair and Daimler patents. In 1939 General Motors Corporation introduced Hydramatic drive, the first fully automatic automotive transmission system installed in a mass-produced automobile. The Hydramatic
https://en.wikipedia.org/wiki/Sporotrichosis
Sporotrichosis, also known as rose handler's disease, is a fungal infection that may be localised to skin, lungs, bone and joint, or become systemic. It presents with firm painless nodules that later ulcerate. Following initial exposure to Sporothrix schenckii, the disease typically progresses over a period of a week to several months. Serious complications may develop in people who have a weakened immune system. Sporotrichosis is caused by fungi of the S. schenckii species complex. Because S. schenckii is naturally found in soil, hay, sphagnum moss, and plants, it most often affects farmers, gardeners, and agricultural workers. It enters through small cuts in the skin to cause a fungal infection. In cases of sporotrichosis affecting the lungs, the fungal spores enter by inhalation. Sporotrichosis can be acquired by handling cats with the disease; it is an occupational hazard for veterinarians. Treatment depends on the site and extent of infection. Topical antifungals may be applied to skin lesions. Deep infection in the lungs may require surgery. Systemic medications used include itraconazole, posaconazole and amphotericin B. With treatment, most people will recover, but an immunocompromised status and systemic infection carry a worse prognosis. S. schenckii, the causal fungus, is found worldwide. The species was named for Benjamin Schenck, a medical student who, in 1896, was the first to isolate it from a human specimen. Sporotrichosis has been reported in cats, mules, dogs, mice and rats. Signs and symptoms Cutaneous or skin sporotrichosis This is the most common form of this disease. Symptoms of this form include nodular lesions or bumps in the skin, at the point of entry and also along lymph nodes and vessels. The lesion starts off small and painless, and ranges in color from pink to purple. Left untreated, the lesion becomes larger, looks similar to a boil, and more lesions will appear, until a chronic ulcer develops. Usually, cutaneous sporotricho
https://en.wikipedia.org/wiki/Acetogenesis
Acetogenesis is a process through which acetate is produced either by the reduction of CO2 or by the reduction of organic acids, rather than by the oxidative breakdown of carbohydrates or ethanol, as with acetic acid bacteria. The different bacterial species that are capable of acetogenesis are collectively termed acetogens. Reduction of CO2 to acetate by anaerobic bacteria occurs via the Wood–Ljungdahl pathway and requires an electron source (e.g., H2, CO, formate, etc.). Some acetogens can synthesize acetate autotrophically from carbon dioxide and hydrogen gas. Reduction of organic acids to acetate by anaerobic bacteria occurs via fermentation. Discovery In 1932, organisms were discovered that could convert hydrogen gas and carbon dioxide into acetic acid. The first acetogenic bacterium species, Clostridium aceticum, was discovered in 1936 by Klaas Tammo Wieringa. A second species, Moorella thermoacetica, attracted wide interest because of its ability, reported in 1942, to convert glucose into three moles of acetic acid. Biochemistry The precursor to acetic acid is the thioester acetyl CoA. The key aspects of the acetogenic pathway are several reactions that include the reduction of carbon dioxide to carbon monoxide and the attachment of the carbon monoxide to a methyl group. The first process is catalyzed by enzymes called carbon monoxide dehydrogenases. The coupling of the methyl group (provided by methylcobalamin) and the CO is catalyzed by acetyl CoA synthase. 2 CO2 + 4 H2 → CH3COOH + 2 H2O Applications The unique metabolism of acetogens has significance in biotechnological uses. In carbohydrate fermentations, the decarboxylation reactions involved result in the loss of carbon into carbon dioxide. This loss is a problem both for the growing need to minimize CO2 emissions and for the economics of biofuel production, whose ability to compete with fossil fuels is limited by cost. Acetogens can ferment glucose without any CO2 emissions and co
https://en.wikipedia.org/wiki/Grading%20%28tumors%29
In pathology, grading is a measure of the cell appearance in tumors and other neoplasms. Some pathology grading systems apply only to malignant neoplasms (cancer); others apply also to benign neoplasms. The neoplastic grading is a measure of cell anaplasia (reversion of differentiation) in the sampled tumor and is based on the resemblance of the tumor to the tissue of origin. Grading in cancer is distinguished from staging, which is a measure of the extent to which the cancer has spread. Pathology grading systems classify the microscopic cell appearance abnormality and deviations in their rate of growth with the goal of predicting developments at tissue level (see also the 4 major histological changes in dysplasia). Cancer is a disorder of cell life cycle alteration that leads (non-trivially) to excessive cell proliferation rates, typically longer cell lifespans and poor differentiation. The grade score (numerical: G1 up to G4) increases with the lack of cellular differentiation - it reflects how much the tumor cells differ from the cells of the normal tissue they have originated from (see 'Categories' below). Tumors may be graded on four-tier, three-tier, or two-tier scales, depending on the institution and the tumor type. The histologic tumor grade score, along with the metastatic (whole-body-level cancer-spread) staging, is used to evaluate each specific cancer patient, develop their individual treatment strategy and to predict their prognosis. A cancer that is very poorly differentiated is called anaplastic. Categories Grading systems are also different for many common types of cancer, though following a similar pattern with grades being increasingly malignant over a range of 1 to 4. If no specific system is used, the following general grades are most commonly used, and recommended by the American Joint Committee on Cancer and other bodies: GX Grade cannot be assessed G1 Well differentiated (Low grade) G2 Mode
https://en.wikipedia.org/wiki/Viktor%20Bunyakovsky
Viktor Yakovlevich Bunyakovsky (Bar, Podolia Governorate, Russian Empire – St. Petersburg, Russian Empire) was a Russian mathematician, member and later vice president of the Petersburg Academy of Sciences. He is noted for his work in theoretical mechanics and number theory (see: Bunyakovsky conjecture), and is credited with an early discovery of the Cauchy–Schwarz inequality, proving it for the infinite-dimensional case as well as for definite integrals of real-valued functions in 1859, many years prior to Hermann Schwarz's works on the subject. Biography Viktor Yakovlevich Bunyakovsky was born in Bar, Podolia Governorate, Russian Empire (now Ukraine) in 1804. Bunyakovsky was a son of Colonel Yakov Vasilievich Bunyakovsky of a cavalry regiment, who was killed in Finland in 1809. Education Bunyakovsky obtained his initial mathematical education at the home of his father's friend, Count Alexander Tormasov, in St. Petersburg. In 1820, he traveled with the count's son to a university in Coburg and subsequently to the Sorbonne in Paris to study mathematics. At the Sorbonne, Bunyakovsky had the opportunity to attend lectures by Laplace and Poisson. He focused his study and research on mathematics and physics. In 1824, Bunyakovsky received his bachelor's degree from the Sorbonne. Continuing his research, he wrote three doctoral dissertations under Cauchy's supervision by the spring of 1825: The rotary motion in a resistant medium of a set of plates of constant thickness and defined contour around an axis inclined with respect to the horizon; The determination of the radius vector in elliptical motion of planets; and The propagation of heat in solids. He successfully completed his dissertation on theoretical physics, theoretical mechanics and mathematical physics, and obtained his doctorate under Cauchy's supervision. Scientific and pedagogical work After seven years abroad, Bunyakovsky returned to St. Petersburg in 1826 and took
https://en.wikipedia.org/wiki/Line%E2%80%93plane%20intersection
In analytic geometry, the intersection of a line and a plane in three-dimensional space can be the empty set, a point, or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point. Distinguishing these cases, and determining equations for the point and line in the latter cases, have use in computer graphics, motion planning, and collision detection. Algebraic form In vector notation, a plane can be expressed as the set of points p for which (p − p0) · n = 0, where n is a normal vector to the plane and p0 is a point on the plane. (The notation a · b denotes the dot product of the vectors a and b.) The vector equation for a line is p = l0 + d·l, where l is a unit vector in the direction of the line, l0 is a point on the line, and d is a scalar in the real number domain. Substituting the equation for the line into the equation for the plane gives (l0 + d·l − p0) · n = 0. Expanding gives d·(l · n) + (l0 − p0) · n = 0. And solving for d gives d = ((p0 − l0) · n) / (l · n). If l · n = 0 then the line and plane are parallel. There will be two cases: if (p0 − l0) · n = 0 then the line is contained in the plane, that is, the line intersects the plane at each point of the line. Otherwise, the line and plane have no intersection. If l · n ≠ 0 there is a single point of intersection. The value of d can be calculated and the point of intersection, p, is given by p = l0 + d·l. Parametric form A line is described by all points that are a given direction from a point. A general point on a line passing through points la and lb can be represented as p = la + t·(lb − la), where t is a scalar and lb − la is the vector pointing from la to lb. Similarly a general point on a plane determined by the triangle defined by the points p0, p1 and p2 can be represented as p = p0 + u·(p1 − p0) + v·(p2 − p0), where p1 − p0 is the vector pointing from p0 to p1, and p2 − p0 is the vector pointing from p0 to p2. The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation: la + t·(lb − la) = p0 + u·(p1 − p0) + v·(p2 − p0). This can be rewritten as la − p0 = (la − lb)·t + (p1 − p0)·u + (p2 − p0)·v, which can be expressed in matrix
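The algebraic-form computation can be sketched in a few lines of Python (the function and variable names are illustrative, following the l0, l, p0, n notation used in the article):

```python
def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(s, a):
    return tuple(s * x for x in a)

def line_plane_intersection(l0, l, p0, n, eps=1e-9):
    """Intersect the line p = l0 + d*l with the plane (p - p0) . n = 0.

    Returns ('point', p) for a single intersection point,
    ('line', None) if the line lies in the plane, and
    ('none', None) if the line is parallel to the plane but outside it.
    """
    denom = dot(l, n)            # l . n
    num = dot(sub(p0, l0), n)    # (p0 - l0) . n
    if abs(denom) < eps:         # line and plane are parallel
        return ("line", None) if abs(num) < eps else ("none", None)
    d = num / denom
    return ("point", add(l0, scale(d, l)))
```

For example, the line through (0, 0, 1) with direction (0, 0, −1) meets the plane z = 0 at the origin, while a horizontal line lying in that plane is reported as contained in it.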
https://en.wikipedia.org/wiki/Goncalo%20alves
Gonçalo alves is a hardwood (from the Portuguese name, Gonçalo Alves). It is sometimes referred to as tigerwood—a name that underscores the wood's often dramatic, contrasting color scheme, that some compare to rosewood. While the sapwood is very light in color, the heartwood is a sombre brown, with dark streaks that give it a unique look. The wood's color deepens with exposure and age and even the plainer-looking wood has a natural luster. Two species are usually listed as sources for gonçalo alves: Astronium fraxinifolium and Astronium graveolens, although other species in the genus may yield similar wood; the amount of striping that is present may vary. All trees grow in neotropical forests; Brazil is a major exporter of these woods.
https://en.wikipedia.org/wiki/Inter-processor%20interrupt
In computing, an inter-processor interrupt (IPI), also known as a shoulder tap, is a special type of interrupt by which one processor may interrupt another processor in a multiprocessor system if the interrupting processor requires action from the other processor. Actions that might be requested include: flushing memory management unit caches, such as translation lookaside buffers, on other processors when memory mappings are changed by one processor; stopping when the system is being shut down by one processor; notifying a processor that higher-priority work is available; notifying a processor of work that cannot be done on all processors due to, e.g., asymmetric access to I/O channels or special features on some processors. Mechanism The M65MP option of OS/360 used the Direct Control feature of the S/360 to generate an interrupt on another processor; on S/370 and its successors, including z/Architecture, the SIGNAL PROCESSOR instruction provides a more formalized interface. The documentation for some IBM operating systems refers to this as a shoulder tap. On IBM PC compatible computers that use the Advanced Programmable Interrupt Controller (APIC), IPI signaling is often performed using the APIC. When a CPU wishes to send an interrupt to another CPU, it stores the interrupt vector and the identifier of the target's local APIC in the Interrupt Command Register (ICR) of its own local APIC. A message is then sent via the APIC bus to the target's local APIC, which then issues a corresponding interrupt to its own CPU. Examples In a multiprocessor system running Microsoft Windows, a processor may interrupt another processor for the following reasons, in addition to the ones listed above: to queue a DISPATCH_LEVEL interrupt to schedule a particular thread for execution; to signal a kernel debugger breakpoint. IPIs are given an IRQL of 29. See also Interrupt Interrupt handler Non-maskable interrupt (NMI)
https://en.wikipedia.org/wiki/Bio21%20Institute
The Bio21 Institute of Molecular Science and Biotechnology, abbreviated as the Bio21 Institute, is an Australian scientific research institute that focuses on basic science and applied biotechnology. The Bio21 Institute is based at the University of Melbourne on Flemington Road in , Melbourne, Victoria. The institute is managed by the University of Melbourne and is supported by funding from the Victorian Government. History Established in 2002 and officially opened in 2005, the research centre conducts interdisciplinary learning and houses research groups specializing in biochemistry, cell biology, chemistry, and other life sciences. Bio21 accommodates over 500 research scientists, students and industry participants, making it one of the largest biotechnology research centres in Australia. In September 2006, Bio21 formed a partnership with Australian-based global bio-pharmaceutical company CSL Limited. 50 scientists from CSL were relocated to participate in activities at the Bio21. The goal of the partnership was for Bio21 to gain the expertise of industry professionals and for CSL to gain access to state-of-the-art equipment. See also Health in Australia
https://en.wikipedia.org/wiki/2004%20California%20Proposition%2071
Proposition 71 of 2004 (or the California Stem Cell Research and Cures Act) is a law enacted by California voters to support stem cell research in the state. It was proposed by means of the initiative process and approved in the 2004 state elections on November 2. The Act amended both the Constitution of California and the Health and Safety Code. The Act makes conducting stem cell research a state constitutional right. It authorizes the sale of general obligation bonds to allocate three billion dollars over a period of ten years to stem cell research and research facilities. Although the funds could be used to finance all kinds of stem cell research, it gives priority to human embryonic stem cell research. Proposition 71 created the California Institute for Regenerative Medicine (CIRM), which is in charge of making "grants and loans for stem cell research, for research facilities, and for other vital research opportunities to realize therapies" as well as establishing "the appropriate regulatory standards of oversight bodies for research and facilities development". The Act also establishes a governing body called the Independent Citizen's Oversight Committee (ICOC) to oversee CIRM. Proposition 71 is unique in at least three ways. Firstly, it uses general obligation bonds, which are usually used to finance brick-and-mortar projects such as bridges or hospitals, to fund scientific research. Secondly, by funding scientific research on such a large scale, California is taking on a role that is typically fulfilled by the U.S. federal government. Thirdly, Proposition 71 establishes the state constitutional right to conduct stem cell research. The initiative also represents a unique instance where the public directly decided to fund scientific research. By 2020, the funding from proposition 71 was mostly used, and so the California Institute for Regenerative Medicine expected to shut down if it did not receive additional funding. For that reason, another ballot init
https://en.wikipedia.org/wiki/Thermodynamic%20instruments
A thermodynamic instrument is any device which facilitates the quantitative measurement of thermodynamic systems. In order for a thermodynamic parameter to be truly defined, a technique for its measurement must be specified. For example, the ultimate definition of temperature is "what a thermometer reads". The question follows – what is a thermometer? There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. A thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system. Overview Two general complementary tools are the meter and the reservoir. It is important that these two types of instruments are distinct. A meter does not perform its task accurately if it behaves like a reservoir of the state variable it is trying to measure. If, for example, a thermometer, were to act as a temperature reservoir it would alter the temperature of the system being measured, and the reading would be incorrect. Ideal meters have no effect on the state variables of the system they are measuring. Thermodynamic meters A meter is a thermodynamic system which displays some aspect of its thermodynamic state to the observer. The nature of its contact with the system it is measuring can be controlled, and it is sufficiently small that it does not appreciably affect the state of the system being measured. The theoretical thermometer described below is just such a meter. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample
https://en.wikipedia.org/wiki/Thermodynamic%20process
Classical thermodynamics considers three main kinds of thermodynamic process: (1) changes in a system, (2) cycles in a system, and (3) flow processes. (1) A Thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, that depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress. As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact. A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may oc
https://en.wikipedia.org/wiki/Lagrange%27s%20theorem%20%28number%20theory%29
In number theory, Lagrange's theorem is a statement named after Joseph-Louis Lagrange about how frequently a polynomial over the integers may evaluate to a multiple of a fixed prime. More precisely, it states that if p is a prime number and f(x) is a polynomial with integer coefficients, then either: every coefficient of f is divisible by p, or f(x) ≡ 0 (mod p) has at most deg f solutions, where deg f is the degree of f. If the modulus is not prime, then it is possible for there to be more than deg f solutions. Proof The two key ideas are the following. Let g(x) be the polynomial obtained from f(x) by reducing its coefficients mod p. Now: (1) f(k) is divisible by p if and only if g(k) = 0 in Z/pZ; and (2) g has no more than deg g roots. More rigorously, start by noting that g = 0 if and only if each coefficient of f is divisible by p. Assume g ≠ 0; its degree is thus well-defined. It is easy to see deg g ≤ deg f. To prove (1), first note that we can compute g(k) either directly, i.e. by plugging in (the residue class of) k and performing arithmetic in Z/pZ, or by reducing f(k) mod p. Hence g(k) = 0 if and only if f(k) ≡ 0 (mod p), i.e. if and only if f(k) is divisible by p. To prove (2), note that Z/pZ is a field, which is a standard fact (a quick proof is to note that since p is prime, Z/pZ is a finite integral domain, hence is a field). Another standard fact is that a non-zero polynomial over a field has at most as many roots as its degree; this follows from the division algorithm. Finally, note that two solutions are incongruent if and only if they correspond to distinct residue classes mod p. Putting everything together, the number of incongruent solutions is by (1) the same as the number of roots of g, which by (2) is at most deg g, which is at most deg f.
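The statement is easy to check numerically. The helper below (an illustrative sketch, not from the article) counts incongruent roots of f modulo m by brute force; for f(x) = x² − 1 a prime modulus gives at most deg f = 2 roots, while the composite modulus 8 gives four:

```python
def count_roots_mod(coeffs, m):
    """Count x in {0, ..., m-1} with f(x) ≡ 0 (mod m).

    coeffs[i] is the integer coefficient of x**i.
    """
    def f_mod(x):
        return sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m
    return sum(1 for x in range(m) if f_mod(x) == 0)

# f(x) = x**2 - 1, i.e. coefficients [-1, 0, 1]
prime_case = count_roots_mod([-1, 0, 1], 7)      # roots 1 and 6: two, within deg f = 2
composite_case = count_roots_mod([-1, 0, 1], 8)  # roots 1, 3, 5, 7: four, exceeding deg f
```

The composite case shows why primality of the modulus is essential: Z/8Z is not a field, so the "at most deg f roots" argument breaks down.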
https://en.wikipedia.org/wiki/Network%20Data%20Representation
Network Data Representation (NDR) is an implementation of the presentation layer in the OSI model. It is used for DCE/RPC and Microsoft RPC (MSRPC). See also DCE/RPC Microsoft RPC External links NDR Specification
https://en.wikipedia.org/wiki/Mesh%20analysis
Mesh analysis (or the mesh current method) is a method that is used to solve planar circuits for the currents (and indirectly the voltages) at any place in the electrical circuit. Planar circuits are circuits that can be drawn on a plane surface with no wires crossing each other. A more general technique, called loop analysis (with the corresponding network variables called loop currents) can be applied to any circuit, planar or not. Mesh analysis and loop analysis both make use of Kirchhoff’s voltage law to arrive at a set of equations guaranteed to be solvable if the circuit has a solution. Mesh analysis is usually easier to use when the circuit is planar, compared to loop analysis. Mesh currents and essential meshes Mesh analysis works by arbitrarily assigning mesh currents in the essential meshes (also referred to as independent meshes). An essential mesh is a loop in the circuit that does not contain any other loop. Figure 1 labels the essential meshes with one, two, and three. A mesh current is a current that loops around the essential mesh and the equations are solved in terms of them. A mesh current may not correspond to any physically flowing current, but the physical currents are easily found from them. It is usual practice to have all the mesh currents loop in the same direction. This helps prevent errors when writing out the equations. The convention is to have all the mesh currents looping in a clockwise direction. Figure 2 shows the same circuit from Figure 1 with the mesh currents labeled. Solving for mesh currents instead of directly applying Kirchhoff's current law and Kirchhoff's voltage law can greatly reduce the amount of calculation required. This is because there are fewer mesh currents than there are physical branch currents. In figure 2 for example, there are six branch currents but only three mesh currents. Setting up the equations Each mesh produces one equation. These equations are the sum of the voltage drops in a comple
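As a sketch of the bookkeeping (the circuit and values below are hypothetical, not those of the article's figures): for a two-mesh circuit with a source V and resistor R1 in mesh 1, resistor R2 in mesh 2, and R3 in the shared branch, KVL around each mesh with clockwise mesh currents gives (R1 + R3)·I1 − R3·I2 = V and −R3·I1 + (R2 + R3)·I2 = 0, which is a 2×2 linear system:

```python
def solve_two_mesh(V, R1, R2, R3):
    """Solve the two mesh-current KVL equations by Cramer's rule:

        (R1 + R3)*I1 -       R3*I2 = V
           -R3*I1   + (R2 + R3)*I2 = 0
    """
    a11, a12 = R1 + R3, -R3
    a21, a22 = -R3, R2 + R3
    det = a11 * a22 - a12 * a21
    I1 = (V * a22 - a12 * 0.0) / det   # Cramer's rule, right-hand side (V, 0)
    I2 = (a11 * 0.0 - a21 * V) / det
    return I1, I2

I1, I2 = solve_two_mesh(10.0, 1.0, 1.0, 1.0)
# The physical current in the shared branch R3 is the difference I1 - I2,
# illustrating how branch currents are recovered from mesh currents.
```

With V = 10 V and all resistors 1 Ω this gives I1 = 20/3 A and I2 = 10/3 A, and substituting back into either KVL equation confirms the solution.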
https://en.wikipedia.org/wiki/Robust%20control
In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness, prompting research to improve them. This was the start of the theory of robust control, which took shape in the 1980s and 1990s and is still active today. In contrast with an adaptive control policy, a robust control policy is static: rather than adapting to measurements of variations, the controller is designed to work on the assumption that certain variables will be unknown but bounded. Criteria for robustness Informally, a controller designed for a particular set of parameters is said to be robust if it also works well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. From the closed-loop transfer function perspective, high open-loop gain leads to substantial disturbance rejection in the face of system parameter uncertainty. Other examples of robust control include sliding mode and terminal sliding mode control. The major obstacle to achieving high loop gains is the need to maintain system closed-loop stability. Loop shaping which allows stable closed-loop operation can be a technical challenge. Robust control systems often incorporate advanced topologies which include multiple feedback loops and feed-forward paths. The control laws may be represented by high-order transfer functions required to simultaneously accomplish desired disturbance rejection performance with the robust closed-loop operation. High
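The high-gain argument can be illustrated numerically (a sketch with hypothetical values, assuming a static unity-feedback loop so the closed-loop gain is T = gk / (1 + gk)):

```python
def closed_loop_gain(g, k):
    """Steady-state gain of a unity-feedback loop: T = g*k / (1 + g*k),
    where g is the (uncertain) plant gain and k is the controller gain."""
    return g * k / (1 + g * k)

# Suppose the plant gain is only known to lie between 0.5 and 2.0 (a factor of 4).
low_gain = [closed_loop_gain(g, 1.0) for g in (0.5, 2.0)]
high_gain = [closed_loop_gain(g, 100.0) for g in (0.5, 2.0)]
# With k = 1 the closed-loop gain swings from about 0.33 to 0.67; with
# k = 100 it stays near 1 across the whole uncertainty set, showing why
# sufficiently high loop gain makes parameter variations negligible.
```

The trade-off mentioned in the text also shows up here: in a dynamic loop, pushing k this high is limited by the need to keep the closed loop stable.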
https://en.wikipedia.org/wiki/Binary-to-text%20encoding
A binary-to-text encoding is an encoding of data in plain text. More precisely, it is an encoding of binary data in a sequence of printable characters. These encodings are necessary for transmission of data when the communication channel does not allow binary data (such as email or NNTP) or is not 8-bit clean. PGP documentation uses the term "ASCII armor" for binary-to-text encoding when referring to Base64. Overview The basic need for a binary-to-text encoding comes from a need to communicate arbitrary binary data over preexisting communications protocols that were designed to carry only English-language human-readable text. Those communication protocols may only be 7-bit safe (and within that avoid certain ASCII control codes), may require line breaks at certain maximum intervals, and may not maintain whitespace. Thus, only the 94 printable ASCII characters are "safe" to use to convey data. Description The ASCII text-encoding standard uses 7 bits to encode characters. With this it is possible to encode 128 (i.e. 2^7) unique values (0–127) to represent the alphabetic, numeric, and punctuation characters commonly used in English, plus a selection of control characters which do not represent printable characters. For example, the capital letter A is represented in 7 bits as 100 0001 in binary (0x41 in hexadecimal, 101 in octal), the numeral 2 is 011 0010 (0x32, octal 62), the character } is 111 1101 (0x7D, octal 175), and the control character RETURN is 000 1101 (0x0D, octal 15). In contrast, most computers store data in memory organized in eight-bit bytes. Files that contain machine-executable code and non-textual data typically contain all 256 possible eight-bit byte values. Many computer programs came to rely on this distinction between seven-bit text and eight-bit binary data, and would not function properly if non-ASCII characters appeared in data that was expected to include only ASCII text. For example, if the value of the eighth bit is not preserved, the program might interpret a byte value above 1
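The round trip from arbitrary bytes to a printable ASCII representation can be demonstrated with Python's standard base64 module, which implements the Base64 encoding mentioned above (the sample byte string here is arbitrary, chosen only to include non-ASCII byte values):

```python
import base64

raw = bytes([0x00, 0x41, 0xFF, 0x0D, 0x80])   # arbitrary binary data, incl. bytes > 0x7F
encoded = base64.b64encode(raw)                # each 3 input bytes become 4 printable chars
decoded = base64.b64decode(encoded)            # lossless round trip

print(encoded.decode("ascii"))                 # prints: AEH/DYA=
assert decoded == raw
assert all(33 <= b <= 126 for b in encoded)    # output is printable ASCII only
```

Note that the encoded form is about a third longer than the raw data, the price paid for surviving channels that are not 8-bit clean.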
https://en.wikipedia.org/wiki/Puzzle%20jug
A puzzle jug is a puzzle in the form of a jug, popular in the 18th and 19th centuries. Puzzle jugs of varying quality were popular in homes and taverns. An inscription typically challenges the drinker to consume the contents without spilling them, which, because the neck of the jug is perforated, is impossible to do conventionally. The solution to the puzzle is that the jug has a hidden tube, one end of which is the spout. The tube usually runs around the rim and then down the handle, with its other opening inside the jug and near the bottom. To solve the puzzle, the drinker must suck from the spout end of the tube. To make the puzzle more interesting, it was common to provide a number of additional holes along the tube, which must be closed off before the contents can be sucked out. Some jugs even have a hidden hole to make the challenge still more confounding. History The earliest example in England is the Exeter puzzle jug—an example of medieval pottery in Britain. The Exeter puzzle jug dates from about AD 1300 and was originally made in Saintonge, Western France. The puzzle jug is a descendant of earlier drinking puzzles, such as the fuddling cup and the pot crown, each of which has a different solution. Known inscriptions include: Come drink of me and merry be. Come drink your fill, but do not spill. Fill me up with licker sweet / For it is good when fun us do meet. Gentlemen, now try your Skill / I'll hold your Sixpence if you Will / That you don't drink unless you spill. Here, Gentlemen, come try your skill / I'll hold a wager if you will / That you don't drink this liquor all / Without you spill and let some fall. Within this jug there is good liquor / 'tis fit for Parson or for Vicar / but how to drink and not to spill / will test the utmost of your skill See also Bridge-spouted vessel Dribble glass Fuddling cup Pythagorean cup
https://en.wikipedia.org/wiki/Fuddling%20cup
A fuddling cup is a three-dimensional puzzle in the form of a drinking vessel, made of three or more cups or jugs all linked together by holes and tubes. The challenge of the puzzle is to drink from the vessel in such a way that the beverage does not spill. To do this successfully, one must drink from the cups in a specific order. Fuddling cups were especially popular in 17th- and 18th-century England. See also Dribble glass Puzzle jug Pythagorean cup
https://en.wikipedia.org/wiki/INTERBUS
INTERBUS is a serial bus system which transmits data between control systems (e.g., PCs, PLCs, VMEbus computers, robot controllers, etc.) and spatially distributed I/O modules that are connected to sensors and actuators (e.g., temperature sensors, position switches). The INTERBUS system was developed by Phoenix Contact and has been available since 1987. It is one of the leading fieldbus systems in the automation industry and is fully standardized according to European Standard EN 50254 and IEC 61158. More than 600 manufacturers are involved in the implementation of INTERBUS technology in control systems and field devices. Since 2011, INTERBUS technology has been hosted by the industry association Profibus and Profinet International. See also BiSS interface External links www.interbusclub.com www.phoenixcontact.com Explanation of Bit-based Sensor networks including SeriPlex Serial buses Industrial computing Industrial automation
https://en.wikipedia.org/wiki/Course%20%28medicine%29
In medicine the term course generally takes one of two meanings, both reflecting the sense of "path that something or someone moves along...process or sequence or steps": A course of medication is a period of continual treatment with drugs, sometimes with variable dosage and in particular combinations. For instance, treatment with some drugs should not end abruptly. Instead, their course should end with a tapering dosage. Antibiotics: Taking the full course of antibiotics is important to prevent reinfection and/or development of drug-resistant bacteria. Steroids: For both short-term and long-term steroid treatment, when stopping treatment, the dosage is tapered rather than abruptly ended. This permits the adrenal glands to resume the body's natural production of cortisol. Abrupt discontinuation can result in adrenal insufficiency and/or steroid withdrawal syndrome (a rebound effect in which exaggerated symptoms return). The course of a disease, also called its natural history, is the development of the disease in a patient, including the sequence and speed of the stages and forms they take. Typical courses of diseases include: chronic recurrent or relapsing subacute: somewhere between an acute and a chronic course acute: beginning abruptly, intensifying rapidly, not lasting long fulminant or peracute: particularly acute, especially if unusually violent A patient may be said to be at the beginning, the middle or the end, or at a particular stage of the course of a disease or a treatment. A precursor is a sign or event that precedes the course or a particular stage in the course of a disease; for example, chills often are precursors to fevers.
https://en.wikipedia.org/wiki/Golvellius
Golvellius is an action role-playing video game developed by Compile and originally released for the Japanese MSX home computer system in 1987. Sega licensed the franchise in 1988 and released the game for the Master System (the Mark III in Japan), featuring enhanced graphics and entirely different overworld and dungeon layouts. This version was released worldwide under the name Golvellius: Valley of Doom. Later that year (1988), Compile released yet another remake for the MSX2 system. This game featured mostly the same graphics as the Sega Master System version, but the overworld and dungeon layouts are entirely different. In 2009 it was announced by DotEmu/D4 Enterprise that Golvellius was to be re-released for the iPhone OS platform. It is a port of the Master System version. The scenario is the same in all three versions of Golvellius. The ending promised a sequel, which was never developed or released. However, there is a spin-off game titled Super Cooks that came included in the 1989 release of the Disc Station Special Shoka Gou. Reception Computer and Video Games rated the Sega Master System version 87% in 1989. Console XS rated it 82% in 1992.
https://en.wikipedia.org/wiki/Cyclostationary%20process
A cyclostationary process is a signal having statistical properties that vary cyclically with time. A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year. Definition There are two differing approaches to the treatment of cyclostationary processes. The stochastic approach is to view measurements as an instance of an abstract stochastic process model. As an alternative, the more empirical approach is to view the measurements as a single time series of data, that which has actually been measured in practice and, for some parts of the theory, conceptually extended from an observed finite time interval to an infinite interval. Both mathematical models lead to probabilistic theories: abstract stochastic probability for the stochastic process model and the more empirical Fraction Of Time (FOT) probability for the alternative model. The FOT probability of some event associated with the time series is defined to be the fraction of time that event occurs over the lifetime of the time series. In both approaches, the process or time series is said to be cyclostationary if and only if its associated probability distributions vary periodically with time. However, in the non-stochastic time-series approach, there is an alternative but equivalent definition: A time series that contains no finite-strength additive sine-wave components is said to exhibit cyclostationarity if and only if there exists some nonlinear time-invariant transformation of the time series that pro
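The "interleaved stationary processes" view can be made concrete with a small simulation (plain Python; the period, seasonal mean, and noise level are illustrative choices, not measured data). A signal whose mean varies periodically is de-interleaved by taking one sample per cycle at each phase; averaging within each phase then recovers the periodic mean, just as averaging many years of December 20 temperatures estimates that day's statistics:

```python
import math
import random

random.seed(0)
PERIOD = 365      # e.g. days in a year
YEARS = 200       # number of observed cycles

# Cyclostationary signal: a periodic deterministic mean plus stationary noise.
def sample(day_of_year):
    seasonal_mean = 20.0 + 10.0 * math.sin(2 * math.pi * day_of_year / PERIOD)
    return seasonal_mean + random.gauss(0.0, 3.0)

series = [sample(n % PERIOD) for n in range(PERIOD * YEARS)]

# De-interleave: phase p collects samples p, p+PERIOD, p+2*PERIOD, ...
# Each such subsequence is (approximately) stationary, and its average
# estimates the periodic mean at that phase.
phase_mean = [sum(series[p::PERIOD]) / YEARS for p in range(PERIOD)]

print(f"estimated mean at phase 0:  {phase_mean[0]:.2f}  (true 20.00)")
print(f"estimated mean at phase 91: {phase_mean[91]:.2f}  (true approx. 30.00)")
```

With 200 simulated "years" the per-phase averages land within a few tenths of the true periodic mean, while a single global average over the whole series would wash the periodicity out entirely.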
https://en.wikipedia.org/wiki/SAP%20Graphical%20User%20Interface
SAP GUI is the graphical user interface client in SAP ERP's 3-tier architecture of database, application server and client. It is software that runs on a Microsoft Windows, Apple Macintosh or Unix desktop, and allows a user to access SAP functionality in SAP applications such as SAP ERP and SAP Business Information Warehouse (BW). It is used for remote access to the SAP central server in a company network. Family The family comprises SAP GUI for the Windows environment and Apple Macintosh, SAP GUI for the Java environment, and SAP GUI for HTML / Internet Transaction Server (ITS), which requires Internet Explorer or Firefox as a browser; other browsers are not officially supported by SAP. Single sign-on SAP GUI on Microsoft Windows or Internet Explorer can also be used for single sign-on. There are several portal-based authentication applications for single sign-on. SAP GUI can have single sign-on with SAP Logon Ticket as well. Single sign-on also works in the Java GUI. Criticism of using SAP GUI for authentication to SAP server access SAP is a distributed application, where client software (SAP GUI) installed on a user's workstation is used to access the central SAP server remotely over the company's network. Users need to authenticate themselves when accessing SAP. By default, however, SAP uses unencrypted communication, which allows potential company-internal attackers to gain access to usernames and passwords by listening on the network. This can expose the complete SAP system if a person is able to obtain this information for a user with extended authorizations in the SAP system. Information about this feature is publicly accessible on the Internet. SAP Secure Network Communications SAP offers an option to strongly protect communication between clients and servers, called Secure Network Communications (SNC). Security In total, the vendor has released 25 security patches (aka SAP Security Notes). One of
https://en.wikipedia.org/wiki/Lowest%20published%20toxic%20dose
In toxicology, the lowest published toxic dose (Toxic Dose Low, TDLo) is the lowest dosage per unit of bodyweight (typically stated in milligrams per kilogram) of a substance known to have produced signs of toxicity in a particular animal species. When quoting a TDLo, the particular species and method of administration (e.g. ingested, inhaled, intravenous) are typically stated. The TDLo is different from the LD50 (median lethal dose), which is the dose causing death in 50% of an exposed test population. See also Certain safety factor Lowest published lethal dose (LDLo) Median lethal dose (LD50)
https://en.wikipedia.org/wiki/Combined%20Cipher%20Machine
The Combined Cipher Machine (CCM) (or Combined Cypher Machine) was a common cipher machine system for securing Allied communications during World War II and, for a few years after, NATO communications. The British Typex machine and the US ECM Mark II were both modified so that they were interoperable. History The British had shown their main cipher machine — Typex — to the US on their entry into the war, but the Americans were reluctant to share their machine, the ECM Mark II. There was a need for secure inter-Allied communications, and so a joint cipher machine adapted from both countries' systems was developed by the US Navy. Use The "Combined Cipher Machine" was approved in October 1942, and production began two months later. The requisite adapters, designed by Don Seiler, were all manufactured in the US, as Britain did not have sufficient manufacturing resources at the time. The CCM was initially used on a small scale for naval use from 1 November 1943, becoming operational on all US and UK armed services in April 1944. The adapter to convert the ECM into the CCM was denoted the ASAM 5 by the US Army (in 1949) and CSP 1600 by the US Navy (the Navy referred to the entire ECM machine with CCM adapter as the CSP 1700). The adapter was a replacement rotor basket, so the ECM could be easily converted for CCM use in the field. A specially converted ECM, termed the CCM Mark II, was also made available to Britain and Canada. The CCM programme cost US$6 million. SIGROD was an implementation of the CCM which, at one point, was proposed as a replacement for the ECM Mark II (Savard and Pekelney, 1999). TypeX Mark 23 was a later model of the Typex cipher machine family that was adapted for use with the Combined Cipher Machine. Security While Allied codebreakers had much success reading the equivalent German machine, the Enigma, their German counterparts, although performing some initial analysis, had no success with the CCM. However, there were security problems with th
https://en.wikipedia.org/wiki/Cirsium%20edule
Cirsium edule, the edible thistle or Indian thistle, is a species of thistle in the genus Cirsium, native to western North America from southeastern Alaska south through British Columbia to Washington and Oregon, and locally inland to Idaho. It is a larval host to the mylitta crescent and the painted lady. Cirsium edule is a tall herbaceous perennial plant, reaching in height. The leaves are very spiny, lobed, 10–30 cm long and 2–5 cm broad (smaller on the upper part of the flower stem). The inflorescence is 3–4 cm diameter, purple, with numerous disc florets but no ray florets. The achenes are 4–5 mm long, with a downy pappus which assists in wind dispersal. It is monocarpic, growing as a low rosette of leaves for a number of years, then sending up the tall flowering stem in spring, with the plant dying after seed maturation. Edible thistle is used by Native Americans for its edible roots and young shoots. The roots are sweet, but contain inulin, which gives some people digestive problems. Varieties Cirsium edule var. edule - Oregon, Washington Cirsium edule var. macounii (Greene) D.J.Keil - Oregon, Washington, British Columbia, Alaska Cirsium edule var. wenatchense D.J.Keil - Washington
https://en.wikipedia.org/wiki/P%20element
P elements are transposable elements that were discovered in Drosophila as the causative agents of genetic traits called hybrid dysgenesis. The transposon is responsible for the P trait of the P element and is found only in wild flies. They are also found in many other eukaryotes. The name comes from the work of evolutionary biologist Margaret Kidwell, who, together with James Kidwell and John Sved, researched hybrid dysgenesis in Drosophila. They referred to strains as P (paternal) or M (maternal) according to whether the strain contributed to hybrid dysgenesis in that reproductive role. The P element encodes an enzyme known as P transposase. Unlike laboratory-bred females, wild-type females are thought also to express an inhibitor of P transposase function, produced by the very same element. This inhibitor reduces the disruption to the genome caused by the movement of P elements, allowing fertile progeny. Evidence for this comes from crosses of laboratory females (which lack the P transposase inhibitor) with wild-type males (which have P elements). In the absence of the inhibitor, the P elements can proliferate throughout the genome, disrupting many genes and often proving lethal to progeny or rendering them sterile. P elements are commonly used as mutagenic agents in genetic experiments with Drosophila. One advantage of this approach is that the mutations are easy to locate. In hybrid dysgenesis, one strain of Drosophila mates with another strain of Drosophila, producing hybrid offspring and causing chromosomal damage known to be dysgenic. Hybrid dysgenesis requires a contribution from both parents. For example, in the P-M system, where the P strain contributes paternally and the M strain contributes maternally, dysgenesis can occur. The reverse cross, with an M cytotype father and a P mother, produces normal offspring, as it behaves like a P x P or M x M cross. P male chromosomes can cause dysgenesis when crossed with an M female. Characteristics The P element is a class II transposon, an
https://en.wikipedia.org/wiki/Schick%20test
The Schick test, developed in 1913, is a skin test used to determine whether or not a person is susceptible to diphtheria. It was named after its inventor, Béla Schick (1877–1967), a Hungarian-born American pediatrician. Procedure The test is a simple procedure. A small amount (0.1 ml) of diluted (1/50 MLD) diphtheria toxin is injected intradermally into one arm of the person, and heat-inactivated toxin is injected into the other arm as a control. If the person does not have enough antibodies to fight off the toxin, the skin around the injection will become red and swollen, indicating a positive result. This swelling disappears after a few days. If the person has immunity, then little or no swelling and redness will occur, indicating a negative result. Results can be interpreted as: Positive: when the test results in a wheal of 5–10 mm diameter, reaching its peak in 4–7 days. The control arm shows no reaction. This indicates that the subject lacks antibodies against the toxin and hence is susceptible to the disease. Pseudo-positive: when there is only a red-colored inflammation (erythema) and it disappears within 4 days. This happens on both arms, since the subject is immune but hypersensitive to the toxin. Negative reaction: indicates that the person is immune. Combined reaction: the initial picture is like that of the pseudo-reaction, but the erythema fades after 4 days only in the control arm. It progresses on the test arm to a typical positive. The subject is interpreted to be both susceptible and hypersensitive. The test was created when immunizing agents were scarce and not very safe; however, as newer and safer toxoids became available, susceptibility tests were no longer required.
https://en.wikipedia.org/wiki/Thermodynamic%20cycle
A thermodynamic cycle consists of linked sequences of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and that eventually returns the system to its initial state. In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work, and dispose of the remaining heat to a cold sink, thereby acting as a heat engine. Conversely, the cycle may be reversed and use work to move heat from a cold source and transfer it to a warm sink, thereby acting as a heat pump. If at every point in the cycle the system is in thermodynamic equilibrium, the cycle is reversible. Whether carried out reversibly or irreversibly, the net entropy change of the system is zero, as entropy is a state function. During a closed cycle, the system returns to its original thermodynamic state of temperature and pressure. Process quantities (or path quantities), such as heat and work, are process dependent. For a cycle for which the system returns to its initial state the first law of thermodynamics applies: ΔU = E_in − E_out = 0. The above states that there is no change of the internal energy (U) of the system over the cycle. E_in represents the total work and heat input during the cycle and E_out would be the total work and heat output during the cycle. The repeating nature of the process path allows for continuous operation, making the cycle an important concept in thermodynamics. Thermodynamic cycles are often represented mathematically as quasistatic processes in the modeling of the workings of an actual device. Heat and work Two primary classes of thermodynamic cycles are power cycles and heat pump cycles. Power cycles are cycles which convert some heat input into a mechanical work output, while heat pump cycles transfer heat from low to high temperatures by using mechanical work as the input. Cycles composed entirely of quasistatic processes can operate
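The first-law bookkeeping over one closed cycle can be sketched numerically (plain Python; the heat values are arbitrary illustrative numbers, not from the article). Because internal energy is a state function, it is unchanged over a cycle, so the net work extracted by a power cycle must equal the total heat input minus the total heat output:

```python
# One cycle of a hypothetical heat engine (illustrative values, in joules).
Q_in = 1000.0     # heat absorbed from the warm source during the cycle
Q_out = 600.0     # heat rejected to the cold sink during the cycle

# First law over a closed cycle: the change in internal energy is zero,
# so the net work output is W_net = Q_in - Q_out.
W_net = Q_in - Q_out
efficiency = W_net / Q_in          # thermal efficiency of the power cycle

delta_U = (Q_in - Q_out) - W_net   # must vanish for any closed cycle
print(f"net work   = {W_net:.0f} J")
print(f"efficiency = {efficiency:.0%}")
assert delta_U == 0.0
```

Running the cycle in reverse with work as the input instead of the output turns the same bookkeeping into the energy balance of a heat pump.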
https://en.wikipedia.org/wiki/State%20variable
A state variable is one of the set of variables that are used to describe the mathematical "state" of a dynamical system. Intuitively, the state of a system describes enough about the system to determine its future behaviour in the absence of any external forces affecting the system. Models that consist of coupled first-order differential equations are said to be in state-variable form. Examples In mechanical systems, the position coordinates and velocities of mechanical parts are typical state variables; knowing these, it is possible to determine the future state of the objects in the system. In thermodynamics, a state variable is an independent variable of a state function. Examples include internal energy, enthalpy, temperature, pressure, volume and entropy. Heat and work are not state functions, but process functions. In electronic/electrical circuits, the voltages of the nodes and the currents through components in the circuit are usually the state variables. In any electrical circuit, the number of state variables is equal to the number of (independent) storage elements, which are inductors and capacitors. The state variable for an inductor is the current through the inductor, while that for a capacitor is the voltage across the capacitor. In ecosystem models, population sizes (or concentrations) of plants, animals and resources (nutrients, organic material) are typical state variables. Control systems engineering In control engineering and other areas of science and engineering, state variables are used to represent the states of a general system. The set of possible combinations of state variable values is called the state space of the system. The equations relating the current state of a system to its most recent input and past states are called the state equations, and the equations expressing the values of the output variables in terms of the state variables and inputs are called the output equations. As shown below, the state equations and output equ
https://en.wikipedia.org/wiki/Ancient%20DNA
Ancient DNA (aDNA) is DNA isolated from ancient specimens. Due to degradation processes (including cross-linking, deamination and fragmentation), ancient DNA is more degraded in comparison with contemporary genetic material. Even under the best preservation conditions, there is an upper boundary of 0.4–1.5 million years for a sample to contain sufficient DNA for sequencing technologies. The oldest sample ever sequenced is estimated to be 1.65 million years old. Genetic material has been recovered from paleo/archaeological and historical skeletal material, mummified tissues, archival collections of non-frozen medical specimens, preserved plant remains, ice and permafrost cores, marine and lake sediments, and excavation dirt. On 7 December 2022, The New York Times reported that two-million-year-old genetic material had been found in Greenland, which is currently considered the oldest DNA discovered so far. History of ancient DNA studies 1980s The first study of what would come to be called aDNA was conducted in 1984, when Russ Higuchi and colleagues at the University of California, Berkeley reported that traces of DNA from a museum specimen of the quagga not only remained in the specimen over 150 years after the death of the individual, but could be extracted and sequenced. Over the next two years, through investigations into natural and artificially mummified specimens, Svante Pääbo confirmed that this phenomenon was not limited to relatively recent museum specimens but could apparently be replicated in a range of mummified human samples that dated as far back as several thousand years. The laborious processes that were required at that time to sequence such DNA (through bacterial cloning) were an effective brake on the study of ancient DNA (aDNA) and the field of museomics. However, with the development of the polymerase chain reaction (PCR) in the late 1980s, the field began to progress rapidly. Double primer PCR amplification of aDNA (jumping-PCR) can produce highl
https://en.wikipedia.org/wiki/Metamagnetism
Metamagnetism is a sudden (often dramatic) increase in the magnetization of a material with a small change in an externally applied magnetic field. The metamagnetic behavior may have quite different physical causes for different types of metamagnets. Some examples of physical mechanisms leading to metamagnetic behavior are: Itinerant metamagnetism - Exchange splitting of the Fermi surface in a paramagnetic system of itinerant electrons causes an energetically favorable transition to bulk magnetization near the transition to a ferromagnet or other magnetically ordered state. Antiferromagnetic transition - Field-induced spin flips in antiferromagnets cascade at a critical energy determined by the applied magnetic field. Depending on the material and experimental conditions, metamagnetism may be associated with a first-order phase transition, a continuous phase transition at a critical point (classical or quantum), or crossovers beyond a critical point that do not involve a phase transition at all. These wildly different physical explanations sometimes lead to confusion as to what the term "metamagnetic" refers to in specific cases.
https://en.wikipedia.org/wiki/Tweak%20programming%20environment
Tweak is a graphical user interface (GUI) layer written by Andreas Raab for the Squeak development environment, which in turn is an integrated development environment based on the Smalltalk-80 programming language. Tweak is an alternative to an earlier graphical user interface layer called Morphic. Development began in 2001. Applications that use the Tweak software include Sophie (version 1), a multimedia and e-book authoring system, and a family of virtual world systems: Open Cobalt, Teleplace, OpenQwaq, 3d ICC's Immersive Terf and the Croquet Project. Influences An experimental version of Etoys, a programming environment for children, used Tweak instead of Morphic. Etoys was a major influence on a similar Squeak-based programming environment known as Scratch.
https://en.wikipedia.org/wiki/Odor%20detection%20threshold
The odor detection threshold is the lowest concentration of a certain odor compound that is perceivable by the human sense of smell. The threshold of a chemical compound is determined in part by its shape, polarity, partial charges, and molecular mass. The olfactory mechanisms responsible for a compound's detection threshold are not well understood. As such, odor thresholds cannot be accurately predicted. Rather, they must be measured through extensive tests using human subjects in laboratory settings. Optical isomers can have different detection thresholds because their conformations may cause them to be less perceivable to the human nose. It is only in recent years that such compounds have been separated on gas chromatographs. For raw water treatment and waste water management, the term commonly used is Threshold Odor Number (TON). For instance, the threshold for water to be supplied for domestic use in Illinois is 3 TON. Values The threshold value is the concentration at which an aroma or taste can be detected (in air, water or fat). The recognition threshold or arousal threshold of olfactory neurons is the concentration at which an odor can be identified (in air, water or fat). The odour activity value is the concentration divided by the threshold. The flavor impact value is the rate of change in perception with concentration. The flavor contribution of an aroma component in a mixture to the total profile can be calculated from the total odor units and the number contributed by that aroma chemical. Odor detection value Odor threshold value (OTV) (also aroma threshold value (ATV) or flavor threshold) is defined as the minimal concentration of a substance that can be detected by the human nose. Some substances can be detected when their concentration is only a few milligrams per 1000 tonnes, which is less than a drop in an Olympic swimming pool. Odor threshold value can be expressed as a concentration in water or a concentration in air. Two major types of flavor thr
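The odour activity value described above (concentration divided by threshold) lends itself to a direct calculation. The sketch below (plain Python; the compound names, concentrations, and thresholds are hypothetical placeholders, not measured data) ranks the components of a mixture by OAV to estimate which ones are likely to contribute to the perceived aroma:

```python
# Hypothetical aroma components: (name, concentration, detection threshold),
# both concentration and threshold in the same units (e.g. micrograms per litre).
components = [
    ("compound A", 50.0, 10.0),
    ("compound B", 5.0, 0.1),
    ("compound C", 200.0, 500.0),
]

def odor_activity_value(concentration, threshold):
    # OAV = concentration / threshold; OAV > 1 means the compound is
    # present above its detection threshold.
    return concentration / threshold

ranked = sorted(
    ((name, odor_activity_value(c, t)) for name, c, t in components),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, oav in ranked:
    flag = "above threshold" if oav > 1 else "below threshold"
    print(f"{name}: OAV = {oav:g} ({flag})")
```

Note how the most abundant compound (C) ends up last: a high concentration paired with a high threshold contributes less to the aroma than a trace compound with a very low threshold.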
https://en.wikipedia.org/wiki/Victor%20Isakov
Victor Isakov (1947 – May 14, 2021) was a mathematician working in the field of inverse problems for partial differential equations and related topics (potential theory, uniqueness of continuation and Carleman estimates, nonlinear functional analysis and the calculus of variations). He was a distinguished professor in the Department of Mathematics and Statistics at Wichita State University. His areas of professional interest included: Inverse problems of gravimetry (general uniqueness conditions and local solvability theorems) and related problems of imaging, including locating the active part of the brain and the source of aircraft noise from exterior measurements of electromagnetic and acoustical fields. Inverse problems of conductivity (uniqueness of discontinuous conductivity and numerical methods) and their applications to medical imaging and nondestructive testing of materials for cracks and inclusions. Inverse scattering problems (uniqueness and stability of penetrable and soft scatterers). Finding constitutive laws from experimental data (reconstructing a nonlinear partial differential equation from all or some boundary data). Uniqueness of continuation for hyperbolic equations and systems of mathematical physics. The inverse option pricing problem. Publications Isakov had over 90 publications in print or in preparation as of late 2005, which include: Increased stability in the continuation of solutions to the Helmholtz equation (with Tomasz Hrycak), Inverse Problems, 20 (2004), 697–712. Inverse Problems for Partial Differential Equations, Applied Mathematical Sciences (Springer-Verlag), Vol. 127, 2nd ed., 2006. Presentations: During the last 15 years, he delivered approximately 90 invited talks at international and national conferences and universities in Austria, Canada, China, Finland, France, Germany, Italy, Japan, Poland, Russia, Sweden, Switzerland, South Korea, Tunisia, and the United Kingdom. He was a principal speaker at the summer AMS-SIAM r
https://en.wikipedia.org/wiki/Xanthene
Xanthene (9H-xanthene, 10H-9-oxaanthracene) is the organic compound with the formula CH2[C6H4]2O. It is a yellow solid that is soluble in common organic solvents. Xanthene itself is an obscure compound, but many of its derivatives are useful dyes. Xanthene dyes Dyes that contain a xanthene core include fluorescein, eosins, and rhodamines. Xanthene dyes tend to be fluorescent, yellow to pink to bluish red, brilliant dyes. Many xanthene dyes can be prepared by condensation of derivatives of phthalic anhydride with derivatives of resorcinol or 3-aminophenol. Further reading See also Xanthone Xanthydrol
https://en.wikipedia.org/wiki/International%20Colloquium%20on%20Automata%2C%20Languages%20and%20Programming
ICALP, the International Colloquium on Automata, Languages, and Programming, is an academic conference organized annually by the European Association for Theoretical Computer Science and held in different locations around Europe. Like most theoretical computer science conferences, its contributions are strongly peer-reviewed. The articles have appeared in proceedings published by Springer in their Lecture Notes in Computer Science, but beginning in 2016 they are instead published by the Leibniz International Proceedings in Informatics. The ICALP conference series was established by Maurice Nivat, who organized the first ICALP in Paris, France in 1972. The second ICALP was held in 1974, and since 1976 ICALP has been an annual event, nowadays usually taking place in July. Since 1999, the conference has been thematically split into two tracks on "Algorithms, Complexity and Games" (Track A) and "Automata, Logic, Semantics, and Theory of Programming" (Track B), corresponding to the (at least until 2005) two main streams of the journal Theoretical Computer Science. Beginning with the 2005 conference, a third track (Track C) was added in order to allow deeper coverage of a particular topic. From 2005 until 2008, the third track was dedicated to "Security and Cryptography Foundations", and in 2009 it was devoted to the topic "Foundations of Networked Computation: Models, Algorithms and Information Management". Track C was dropped from the 2020 conference, with submissions from these areas invited to submit to Track A. Because of the COVID-19 pandemic, the 2020 conference was also unusual, taking place virtually for the first time (having been originally scheduled to take place in Beijing, China and later moved to Saarbrücken, Germany). ICALP 2021 took place virtually too. Gödel Prize The Gödel Prize, a prize for outstanding papers in theoretical computer science and awarded jointly by the EATCS and the ACM SIGACT, is presented every second year at ICALP. Presentation of th
https://en.wikipedia.org/wiki/BBC%20Research%20%26%20Development
BBC Research & Development is the technical research department of the BBC. Function It has responsibility for researching and developing advanced and emerging media technologies for the benefit of the corporation and the wider UK and European media industries, and is also the technical design authority for a number of major technical infrastructure transformation projects for the UK broadcasting industry. Structure BBC R&D is part of the wider BBC Design & Engineering, and is led by Jatin Aythora, Director, Research & Development. In 2011, the North Lab moved into MediaCityUK in Salford along with several other departments of the BBC, whilst the South Lab remained in White City in London. History In April 1930 the Development section of the BBC became the Research Department. The department as it stands today was formed in 1993 from the merger of the BBC Designs Department and the BBC Research Department. From 2006 to 2008 it was known as Research and Innovation but has since reverted to its original name. BBC Research & Development has made major contributions to broadcast technology, carrying out original research in many areas and developing items like the peak programme meter (PPM), which became the basis for many world standards. Innovations It has also been involved in many well-known consumer technologies such as teletext, DAB, NICAM and Freeview. It was at the forefront of the development of FM radio, stereo FM, and RDS. These innovations have led to Queen's Awards for Innovation in 1969, 1974, 1983, 1987, 1992, 1998, 2001 and 2011. In the 1970s, its engineers designed the famous LS3/5A studio monitor for use in outside broadcasting units. Licensed to manufacturers, the loudspeaker sold 100,000 pairs over its 20-plus years of life. Closure of Kingswood Warren and move to London and Salford In early 2010 the department had approximately 135 staff based at three locations: White City in London, Kingswood Warren in Kingswood, Surrey, and the R&D (North Lab) at the
https://en.wikipedia.org/wiki/Contrast-enhanced%20ultrasound
Contrast-enhanced ultrasound (CEUS) is the application of ultrasound contrast medium to traditional medical sonography. Ultrasound contrast agents rely on the different ways in which sound waves are reflected from interfaces between substances. This may be the surface of a small air bubble or a more complex structure. Commercially available contrast media are gas-filled microbubbles that are administered intravenously to the systemic circulation. Microbubbles have a high degree of echogenicity (the ability of an object to reflect ultrasound waves). There is a great difference in echogenicity between the gas in the microbubbles and the soft tissue surroundings of the body. Thus, ultrasonic imaging using microbubble contrast agents enhances the backscatter (reflection) of the ultrasound waves, producing a sonogram with increased contrast due to the high echogenicity difference. Contrast-enhanced ultrasound can be used to image blood perfusion in organs, measure blood flow rate in the heart and other organs, and for other applications. Targeting ligands that bind to receptors characteristic of intravascular diseases can be conjugated to microbubbles, enabling the microbubble complex to accumulate selectively in areas of interest, such as diseased or abnormal tissues. This form of molecular imaging, known as targeted contrast-enhanced ultrasound, will only generate a strong ultrasound signal if targeted microbubbles bind in the area of interest. Targeted contrast-enhanced ultrasound may have many applications in both medical diagnostics and medical therapeutics. However, the targeted technique has not yet been approved by the FDA for clinical use in the United States. Contrast-enhanced ultrasound is regarded as safe in adults, comparable to the safety of MRI contrast agents, and better than the radiocontrast agents used in contrast CT scans. The more limited safety data in children suggest that such use is as safe as in the adult population. Bubble echocard
https://en.wikipedia.org/wiki/Biotic%20potential
Biotic potential describes the maximum growth of a population when that growth is unrestricted. It is the highest possible vital index of a species, attained when the species has its highest birth rate and lowest mortality rate. Quantitative Expression The biotic potential is the quantitative expression of the ability of a species to face natural selection in any environment. The main equilibrium of a particular population is described by the equation: Number of individuals = biotic potential / resistance of the environment (biotic and abiotic). Chapman also refers to a "vital index", a ratio giving the rate of surviving members of a species: Vital index = (number of births / number of deaths) × 100. Components According to the ecologist R.N. Chapman (1928), the biotic potential can be divided into a reproductive and a survival potential. The survival potential can in turn be divided into nutritive and protective potentials. Reproductive potential (potential natality) is the upper limit to biotic potential in the absence of mortality. Survival potential is the reciprocal of mortality. Because reproductive potential does not account for the number of gametes surviving, survival potential is a necessary component of biotic potential. In the absence of mortality, biotic potential = reproductive potential. Chapman also identified two further components, nutritive and protective potentials, as divisions of the survival potential. Nutritive potential is the ability to acquire and use food for growth and energy. Protective potential is the ability of the organism to protect itself against the dynamic forces of the environment in order to ensure successful reproduction and offspring. Full expression of the biotic potential of an organism is restricted by environmental resistance, any condition that inhibits the increase in number of the population. It is generally only reached when environmen
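The two relations above, Chapman's vital index and the population-equilibrium equation, can be sketched as a short calculation. This is an illustrative example only; the function names and the numbers plugged in are hypothetical, not from the article:

```python
def vital_index(births: float, deaths: float) -> float:
    """Vital index = (number of births / number of deaths) * 100.
    Values above 100 indicate a growing population."""
    return births / deaths * 100


def equilibrium_population(biotic_potential: float,
                           environmental_resistance: float) -> float:
    """Number of individuals = biotic potential / environmental resistance
    (biotic and abiotic)."""
    return biotic_potential / environmental_resistance


# A population with 150 births per 100 deaths has a vital index of 150.
print(vital_index(150, 100))            # 150.0

# For the same (hypothetical) biotic potential, doubling the environmental
# resistance halves the equilibrium number of individuals.
print(equilibrium_population(1000, 2.0))  # 500.0
print(equilibrium_population(1000, 4.0))  # 250.0
```

The equilibrium equation makes the article's closing point concrete: as environmental resistance approaches zero, the population size approaches the full biotic potential.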
https://en.wikipedia.org/wiki/Ilium%20%28bone%29
The ilium (plural: ilia) is the uppermost and largest region of the coxal bone, and appears in most vertebrates including mammals and birds, but not bony fish. All reptiles have an ilium except snakes, although some snake species have a tiny bone which is considered to be an ilium. The ilium of the human is divisible into two parts, the body and the wing; the separation is indicated on the top surface by a curved line, the arcuate line, and on the external surface by the margin of the acetabulum. The name comes from the Latin (ile, ilis), meaning "groin" or "flank". Structure The ilium consists of the body and wing. Together with the ischium and pubis, to which the ilium is connected, these form the pelvic bone, with only a faint line indicating the place of union. The body forms less than two-fifths of the acetabulum and also forms part of the acetabular fossa. The internal surface of the body is part of the wall of the lesser pelvis and gives origin to some fibers of the obturator internus. The wing is the large expanded portion which bounds the greater pelvis laterally. It has an external and an internal surface, a crest, and two borders, an anterior and a posterior. Biiliac width In humans, biiliac width is an anatomical term referring to the widest measure of the pelvis between the outer edges of the upper iliac bones. Biiliac width has the following common synonyms: pelvic bone width, biiliac breadth, intercristal breadth/width, bi-iliac breadth/width and biiliocristal breadth/width. It is best measured with anthropometric calipers (an anthropometer designed for such measurement is called a pelvimeter). Attempting to measure biiliac width with a tape measure along a curved surface is inaccurate. The biiliac width measure is helpful in obstetrics because a pelvis that is significantly too small or too large can cause complications. For example, a large baby or a small pelvis often leads to death unless a caesarean section is performed. It is also us
https://en.wikipedia.org/wiki/Celestial%20Emporium%20of%20Benevolent%20Knowledge
Celestial Emporium of Benevolent Knowledge is a fictitious taxonomy of animals described by the writer Jorge Luis Borges in his 1942 essay "The Analytical Language of John Wilkins". Overview Wilkins, a 17th-century philosopher, had proposed a universal language based on a classification system that would encode a description of the thing a word describes into the word itself: for example, Zi identifies the genus beasts; Zit denotes the "difference" rapacious beasts of the dog kind; and finally Zitα specifies dog. In response to this proposal, and in order to illustrate the arbitrariness and cultural specificity of any attempt to categorize the world, Borges describes this example of an alternate taxonomy, supposedly taken from an ancient Chinese encyclopaedia entitled Celestial Emporium of Benevolent Knowledge. The list divides all animals into 14 categories. Borges claims that the list was discovered in its Chinese source by the translator Franz Kuhn. In his essay, Borges compares this classification with one allegedly used at the time by the Institute of Bibliography in Brussels, which he considers similarly chaotic. Borges says the Institute divides the universe into 1000 sections, of which number 262 is about the Pope, ironically classified apart from section 264, that on the Roman Catholic Church. Meanwhile, section 294 encompasses all four of Hinduism, Shinto, Buddhism and Taoism. He also finds excessive heterogeneity in section 179, which includes animal cruelty, suicide, mourning, and an assorted group of vices and virtues. Borges concludes: "there is no description of the universe that isn't arbitrary and conjectural for a simple reason: we don't know what the universe is". Nevertheless, he finds Wilkins' language to be clever (ingenioso) in its design, as arbitrary as it may be. He points out that in a language with a divine scheme of the universe, beyond human capabilities, the name of an object would include the details of its entire past and futur
https://en.wikipedia.org/wiki/Future%20Evolution
Future Evolution is a book written by paleontologist Peter Ward and illustrated by Alexis Rockman. In it, Ward presents his own view of future evolution and compares it with Dougal Dixon's After Man: A Zoology of the Future and H. G. Wells's The Time Machine. According to Ward, humanity may exist for a long time. Nevertheless, we are impacting our planet. He splits his book into different chronologies, starting with the near future (the next 1,000 years). Humanity would be struggling to support a massive population of 11 billion. Global warming raises sea levels. The ozone layer weakens. Most of the available land is devoted to agriculture due to the demand for food. Despite all this, oceanic wildlife remains largely unaffected by these impacts, particularly commercially farmed fish. This is, according to Ward, an era of extinction that would last about 10 million years (note that many human-caused extinctions have already occurred). After that, Earth gets stranger. Ward identifies the species that have the potential to survive in a human-infested world. These include dandelions, raccoons, owls, pigs, cattle, rats, snakes, and crows, to name but a few. In the human-infested ecosystem, those preadapted to live amongst man survive and prosper. Ward describes garbage dumps 10 million years in the future infested with multiple species of rats, a snake with a sticky frog-like tongue to snap up rodents, and pigs with snouts specialized for rooting through garbage. The story's time traveller who views this new refuse-covered habitat is gruesomely attacked by ravenous flesh-eating crows. Ward then questions the potential for humanity to evolve into a new species. According to him, this is incredibly unlikely. For this to happen, a human population must isolate itself and interbreed until it becomes a new species. Then he questions whether humanity will survive or extinguish itself through climate change, nuclear war, disease, or the threat of nanotechnology as a terrorist weapon