| source | text |
|---|---|
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20Windows%20versions | Microsoft Windows is a computer operating system developed by Microsoft. It was first launched in 1985 as a graphical operating system built on MS-DOS. The initial version was followed by several subsequent releases, and by the early 1990s, the Windows line had split into two separate lines of releases: Windows 9x for consumers and Windows NT for businesses and enterprises. In the following years, several further variants of Windows would be released: Windows CE in 1996 for embedded systems; Pocket PC in 2000 (renamed to Windows Mobile in 2003 and Windows Phone in 2010) for personal digital assistants and, later, smartphones; Windows Holographic in 2016 for AR/VR headsets; and several other editions.
Personal computer versions
A "personal computer" version of Windows is considered to be a version that end-users or OEMs can install on personal computers, including desktop computers, laptops, and workstations.
The first five versions of Windows–Windows 1.0, Windows 2.0, Windows 2.1, Windows 3.0, and Windows 3.1–were all based on MS-DOS, and were aimed at both consumers and businesses. However, Windows 3.1 had two separate successors, splitting the Windows line in two: the consumer-focused "Windows 9x" line, consisting of Windows 95, Windows 98, and Windows Me; and the professional Windows NT line, comprising Windows NT 3.1, Windows NT 3.5, Windows NT 3.51, Windows NT 4.0, and Windows 2000. These two lines were reunited into a single line with the NT-based Windows XP; this Windows release succeeded both Windows Me and Windows 2000 and had separate editions for consumer and professional use. Since Windows XP, multiple further versions of Windows have been released, the most recent of which is Windows 11.
Mobile versions
Mobile versions refer to versions of Windows that can run on smartphones or personal digital assistants.
Server versions
High-performance computing (HPC) servers
Windows Essential Business Server
Windows Home Server
Windows MultiPoint Server
Wi |
https://en.wikipedia.org/wiki/Algebra%20Project | The Algebra Project is a national U.S. mathematics literacy program aimed at helping low-income students and students of color achieve the mathematical skills in high school that are a prerequisite for a college preparatory mathematics sequence. Founded by civil rights activist and math educator Bob Moses in the 1980s, the Algebra Project provides curricular materials, teacher training, professional development support, and community involvement activities for schools to improve mathematics education.
By 2001, the Algebra Project had trained approximately 300 teachers and was reaching 10,000 students in 28 locations in 10 states.
History
The Algebra Project was founded in 1982 by Bob Moses in Cambridge, Massachusetts. Moses worked with his daughter's eighth-grade teacher, Mary Lou Mehrling, to provide extra tutoring in algebra for several students in her class. Moses, who had taught secondary school mathematics in New York City and Tanzania, wanted to ensure that those students had sufficient algebra skills to qualify for honors math and science courses in high school. Through this tutoring, students from the Open Program of the Martin Luther King School passed the citywide algebra examination and qualified for ninth-grade honors geometry, the first students from the program to do so. The Algebra Project grew out of attempts to recreate this on a wider community level, to provide similar students with a higher level of mathematical literacy.
The Algebra Project now focuses on the southern states of the United States, where the Southern Initiative of the Algebra Project is directed by Dave Dennis.
Young People's Project
Founded in 1996, the Young People's Project (YPP) is a spin-off of the Algebra Project, which recruits and trains high school and college age "Math Literacy Workers" to tutor younger students in mathematics, and is directed by Omowale Moses. YPP has established sites in Jackson, Mississippi, Chicago, and the Greater Boston area of Massachusetts |
https://en.wikipedia.org/wiki/Mail-11 | Mail-11 was the native email transport protocol used by Digital Equipment Corporation's VMS operating system, and supported by several other DEC operating systems such as Ultrix.
It normally used the DECnet networking system as opposed to TCP/IP.
Similar to Internet SMTP-based mail, Mail-11 messages had To:, Cc:, and Subj: headers
and date-stamped each message.
Mail-11 was one of the most widely used email systems of the 1980s, and was still in fairly wide use until as late as the mid-1990s. Messages from Mail-11 systems were frequently gatewayed out to SMTP, Usenet, and Bitnet systems, and thus are sometimes encountered browsing archives of those systems dating from when Mail-11 was in common use.
Several very large DECnet networks with Mail-11 service existed, most notably ENET, which was DEC's worldwide internal network. Another big user was HEPnet, a network for the high-energy physics research community that linked many universities and research labs.
Mail-11 used two colons (::) rather than an at sign (@) to separate user and hostname,
and hostname came first.
Some example headers
To: THEWAL::HARKAWIK
A message to user HARKAWIK on a machine or cluster of machines called THEWAL.
Note that under VMS, usernames were not case-sensitive and were usually shown in uppercase,
but under Ultrix, usernames were case-sensitive, and most sites followed the unix convention of using lower case usernames. Names of machines on a DECnet network were not case-sensitive. Thus, the header above implies that the mail is going to a VMS system, but the one following implies the user is on a Unix system.
To: DS5353::tabak
A message to user tabak on node DS5353. Probably an Ultrix system.
From: GUESS::YERAZUNIS "it's.. it's DIP !" 21-SEP-1989 10:28:38.87
To: DECWRL::"decvax!peregrine!dmi"
CC: YERAZUNIS
This message was sent to the gateway at DEC's Western Research Labs, one of DEC's main Internet gateways. From there, it was expected to travel via uucp, from host |
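The node::user addressing convention in the headers above can be illustrated with a short parser. This is a hypothetical Python sketch, not part of Mail-11 itself: the `parse_mail11` name is invented, and the handling of a quoted gateway route merely mirrors the DECWRL example quoted above.

```python
def parse_mail11(address):
    """Split a Mail-11 address of the form NODE::USER.

    Quoted users such as DECWRL::"decvax!peregrine!dmi" carry a
    foreign (here, uucp-style) route for a gateway to interpret.
    """
    node, sep, user = address.partition("::")
    if sep != "::":
        raise ValueError(f"not a Mail-11 address: {address!r}")
    quoted = user.startswith('"') and user.endswith('"')
    if quoted:
        user = user[1:-1]  # strip the quotes around the foreign route
    # DECnet node names were not case-sensitive; normalize to upper case.
    return node.upper(), user, quoted

print(parse_mail11("THEWAL::HARKAWIK"))   # ('THEWAL', 'HARKAWIK', False)
print(parse_mail11('DECWRL::"decvax!peregrine!dmi"'))
```

Note that the username is returned unchanged, since case sensitivity of usernames depended on the receiving operating system, as described above.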
https://en.wikipedia.org/wiki/Hydrodynamic%20focusing | In microbiology, hydrodynamic focusing is a technique used to provide more accurate results when using flow cytometers or Coulter counters for determining the size of bacteria or cells.
Technique
Measuring particles
Cells are counted as they are forced to pass through a small channel (often referred to as a flow cell), causing disruptions in a laser beam or in an electric current. These disruptions are analyzed by the instruments. It is difficult to create tunnels narrow enough for this purpose using ordinary manufacturing processes, as the diameter must be on the order of micrometers, and the length of the tunnel should exceed several millimeters. The standard channel size used in most production flow cytometers is 250 by 250 micrometers.
Focusing with a fluid
Hydrodynamic focusing solves this problem by building up the walls of the tunnel from fluid, using the effects of fluid dynamics. A wide (hundreds of micrometers in diameter) tube made of glass or plastic is used, through which a "wall" of fluid called the sheath flow is pumped. The sample is injected into the middle of the sheath flow. If the two fluids differ enough in their velocity or density, they do not mix: they form a two-layer stable flow.
Sources |
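As a rough illustration of how sheath flow narrows the sample stream, one can assume an idealized plug flow in which sample and sheath move at the same velocity; the sample core then occupies a cross-sectional area equal to its share of the total volumetric flow. The function and all numbers below are illustrative assumptions, not values from the text.

```python
import math

def core_diameter(tube_diameter_um, q_sample, q_sheath):
    """Estimate the focused sample-core diameter (idealized model).

    Assumes uniform velocity across the tube, so the core's
    cross-sectional area equals its fraction of the total flow.
    """
    fraction = q_sample / (q_sample + q_sheath)
    return tube_diameter_um * math.sqrt(fraction)

# A 200 um tube with a sheath flow 99x the sample flow focuses
# the sample into a core roughly a tenth of the tube diameter.
print(round(core_diameter(200.0, 1.0, 99.0), 1))  # 20.0
```

This shows why a wide, easily manufactured tube can still confine particles to a stream only micrometers across.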
https://en.wikipedia.org/wiki/Regelation | Regelation is the phenomenon of ice melting under pressure and refreezing when the pressure is reduced. This can be demonstrated by looping a fine wire around a block of ice, with a heavy weight attached to it. The pressure exerted on the ice slowly melts it locally, permitting the wire to pass through the entire block. The wire's track refills as soon as the pressure is relieved, so the ice block remains intact even after the wire passes completely through. This experiment is possible for ice at −10 °C or cooler, and while essentially valid, the details of the process by which the wire passes through the ice are complex. The phenomenon works best with wires of high thermal conductivity, such as copper, since the latent heat released by refreezing on the upper side must be conducted to the lower side to supply the latent heat of melting there. In short, regelation is the phenomenon in which ice melts under applied pressure and refreezes once the pressure is removed.
Regelation was discovered by Michael Faraday. It occurs only for substances, such as ice, that have the property of expanding upon freezing, because the melting points of those substances decrease with increasing external pressure. The melting point of ice falls by 0.0072 °C for each additional atmosphere of pressure applied. For example, a pressure of about 500 atmospheres is needed for ice to melt at −4 °C.
Surface melting
For normal crystalline ice far below its melting point, there will be some relaxation of the atoms near the surface. Simulations of ice near its melting point show that there is significant melting of the surface layers rather than a symmetric relaxation of atom positions. Nuclear magnetic resonance provided evidence for a liquid layer on the surface of ice. In 1998, using atomic force microscopy, Astrid Döppenschmidt and Hans-Jürgen Butt measured the thickness of the liquid-like layer on ice to be roughly 32 nm at −1 °C, and 11 nm at −10 °C.
The surface melting can account |
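The figures in the row above (0.0072 °C of melting-point depression per additional atmosphere) can be checked with a two-line calculation. The linear rule is itself only an approximation, valid for modest pressures; the function names below are invented for illustration.

```python
DEPRESSION_PER_ATM = 0.0072  # degC of melting-point lowering per additional atm

def melting_point(pressure_atm):
    """Approximate melting point of ice (degC) under the linear rule above."""
    return -DEPRESSION_PER_ATM * (pressure_atm - 1)

def pressure_to_melt_at(temp_c):
    """Pressure (atm) at which ice melts at the given sub-zero temperature."""
    return 1 + (-temp_c) / DEPRESSION_PER_ATM

# Melting ice at -4 degC requires roughly 4 / 0.0072 ~ 556 additional atm,
# consistent with the "about 500 atmospheres" figure quoted in the text.
print(round(pressure_to_melt_at(-4.0)))  # 557
```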
https://en.wikipedia.org/wiki/Cauchy%20surface | In the mathematical field of Lorentzian geometry, a Cauchy surface is a certain kind of submanifold of a Lorentzian manifold. In the application of Lorentzian geometry to the physics of general relativity, a Cauchy surface is usually interpreted as defining an "instant of time"; in the mathematics of general relativity, Cauchy surfaces are important in the formulation of the Einstein equations as an evolutionary problem.
They are named for French mathematician Augustin-Louis Cauchy (1789-1857) due to their relevance for the Cauchy problem of general relativity.
Informal introduction
Although it is usually phrased in terms of general relativity, the formal notion of a Cauchy surface can be understood in familiar terms. Suppose that humans can travel at a maximum speed of 20 miles per hour. This places constraints, for any given person, upon where they can reach by a certain time. For instance, it is impossible for a person who is in Mexico at 3 o'clock to arrive in Libya by 4 o'clock; however, it is possible for a person who is in Manhattan at 1 o'clock to reach Brooklyn by 2 o'clock, since the locations are ten miles apart. To keep things semi-formal, ignore time zones and travel difficulties, and suppose that travelers are immortal beings who have lived forever.
The system of all possible ways to fill in the four blanks in
"It is possible for a person at (location 1) at (time 1) to arrive at (location 2) by (time 2)"
defines the notion of a causal structure. A Cauchy surface for this causal structure is a collection of pairs of locations and times such that, for any hypothetical traveler whatsoever, there is exactly one location and time pair in the collection for which the traveler was at the indicated location at the indicated time.
There are a number of uninteresting Cauchy surfaces. For instance, one Cauchy surface for this causal structure is given by considering the pairing of every location with the time of 1 o'clock (on a certain specified day), since any hypothetical traveler must have been at one specific location at this time; furthermore, n |
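The informal travel constraint in the row above reduces to a single inequality: a trip is possible exactly when the distance does not exceed the maximum speed times the elapsed time. A minimal sketch, with illustrative distances:

```python
MAX_SPEED_MPH = 20.0

def can_reach(distance_miles, depart_hour, arrive_hour):
    """A traveler capped at 20 mph can make the trip iff
    distance <= speed * elapsed time (and time actually elapses)."""
    elapsed = arrive_hour - depart_hour
    return elapsed > 0 and distance_miles <= MAX_SPEED_MPH * elapsed

# Manhattan -> Brooklyn, ten miles apart, between 1 and 2 o'clock: possible.
print(can_reach(10.0, 1, 2))    # True
# Mexico -> Libya (thousands of miles) between 3 and 4 o'clock: impossible.
print(can_reach(7000.0, 3, 4))  # False
```

A Cauchy surface is then a set of (location, time) pairs that every world-line consistent with this constraint crosses exactly once.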
https://en.wikipedia.org/wiki/Video%20quality | Video quality is a characteristic of a video passed through a video transmission or processing system that describes perceived video degradation (typically, compared to the original video). Video processing systems may introduce some amount of distortion or artifacts in the video signal that negatively impacts the user's perception of a system. For many stakeholders in video production and distribution, assurance of video quality is an important task.
Video quality evaluation is performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (by mathematical models) or subjectively (by asking users for their rating). Also, the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services), or in-service (to monitor and ensure a certain level of quality).
From analog to digital video
Since the world's first video sequence was recorded and transmitted, many video processing systems have been designed. Such systems encode video streams and transmit them over various kinds of networks or channels. In the age of analog video systems, it was possible to evaluate the quality aspects of a video processing system by calculating the system's frequency response using test signals (for example, a collection of color bars and circles).
Digital video systems have almost fully replaced analog ones, and quality evaluation methods have changed. The performance of a digital video processing and transmission system can vary significantly and depends on many factors including the characteristics of the input video signal (e.g. amount of motion or spatial details), the settings used for encoding and transmission, and the channel fidelity or network performance.
Objective video quality
Objective video quality models are mathematical models that approximate results from subjective quality assessment, in which human observers are asked to rate the quality of a video. In this cont |
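As a concrete example of an objective, full-reference metric, the sketch below computes peak signal-to-noise ratio (PSNR) between a reference and a degraded frame. PSNR is chosen only for brevity; the text does not claim it approximates subjective ratings well, and the pixel values here are invented.

```python
import math

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel lists."""
    if len(reference) != len(degraded):
        raise ValueError("frames must have the same number of pixels")
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return math.inf  # identical frames: infinite PSNR
    return 10 * math.log10(peak ** 2 / mse)

ref = [50, 100, 150, 200]   # illustrative 4-pixel "frame"
deg = [52, 98, 149, 205]    # the same frame after lossy processing
print(round(psnr(ref, deg), 1))  # 38.8
```

Higher values indicate less distortion; identical frames give an infinite PSNR, which is why objective models are usually compared against subjective scores rather than used in isolation.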
https://en.wikipedia.org/wiki/Sensory%20processing | Sensory processing is the process that organizes and distinguishes sensation (sensory information) from one's own body and the environment, thus making it possible to use the body effectively within the environment. Specifically, it deals with how the brain processes inputs from multiple sensory modalities, such as proprioception, vision, audition, touch, olfaction, the vestibular sense, interoception, and taste, into usable functional outputs.
It has been believed for some time that inputs from different sensory organs are processed in different areas in the brain. The communication within and among these specialized areas of the brain is known as functional integration. Newer research has shown that these different regions of the brain may not be solely responsible for only one sensory modality, but could use multiple inputs to perceive what the body senses about its environment. Multisensory integration is necessary for almost every activity that we perform because the combination of multiple sensory inputs is essential for us to comprehend our surroundings.
Overview
It has been believed for some time that inputs from different sensory organs are processed in different areas in the brain, relating to systems neuroscience. Using functional neuroimaging, it can be seen that sensory-specific cortices are activated by different inputs. For example, regions in the occipital cortex are tied to vision and those on the superior temporal gyrus are recipients of auditory inputs. There exist studies suggesting deeper multisensory convergences than those at the sensory-specific cortices, which were listed earlier. This convergence of multiple sensory modalities is known as multisensory integration.
Sensory processing deals with how the brain processes sensory input from multiple sensory modalities. These include the five classic senses of vision (sight), audition (hearing), tactile stimulation (touch), olfaction (smell), and gustation (taste). Other sensory modalities ex |
https://en.wikipedia.org/wiki/Brainwave%20entrainment | Brainwave entrainment, also referred to as brainwave synchronization or neural entrainment, refers to the observation that brainwaves (large-scale electrical oscillations in the brain) will naturally synchronize to the rhythm of periodic external stimuli, such as flickering lights, speech, music, or tactile stimuli.
As different conscious states can be associated with different dominant brainwave frequencies, it is hypothesized that brainwave entrainment might induce a desired state. Researchers have found, for instance, that acoustic entrainment of delta waves in slow wave sleep had the functional effect of improving memory in healthy subjects.
Neural oscillation
Neural oscillations are rhythmic or repetitive electrochemical activity in the brain and central nervous system. Such oscillations can be characterized by their frequency, amplitude and phase. Neural tissue can generate oscillatory activity driven by mechanisms within individual neurons, as well as by interactions between them. They may also adjust frequency to synchronize with the periodic vibration of external acoustic or visual stimuli.
The activity of neurons generates electric currents, and the synchronous action of neural ensembles in the cerebral cortex, comprising large numbers of neurons, produces macroscopic oscillations. These phenomena can be monitored and graphically documented by an electroencephalogram (EEG). The electroencephalographic representations of those oscillations are typically denoted by the term 'brainwaves' in common parlance.
The technique of recording neural electrical activity within the brain from electrochemical readings taken from the scalp originated with the experiments of Richard Caton in 1875, whose findings were developed into electroencephalography (EEG) by Hans Berger in the late 1920s.
Neural oscillation and cognitive functions
The functional role of neural oscillations is still not fully understood; however they have been shown to correlate with emotional res |
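The idea of a dominant brainwave frequency, which entrainment is hypothesized to shift, can be sketched with a naive discrete Fourier transform over a synthetic signal. Everything below (the sampling rate, the 10 Hz "alpha" sine, the O(n²) DFT) is an illustrative assumption, not a description of clinical EEG analysis.

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the largest DFT component
    between DC and the Nyquist limit (naive O(n^2) DFT)."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC bin, stop before Nyquist
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

# One second of a synthetic 10 Hz "alpha-band" oscillation sampled at 128 Hz.
fs = 128
wave = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
print(dominant_frequency(wave, fs))  # 10.0
```

Real EEG analysis uses windowed FFTs and artifact rejection; the point here is only that "dominant frequency" is a well-defined quantity one can compute from a sampled trace.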
https://en.wikipedia.org/wiki/Continuous%20design | Evolutionary design, continuous design, evolutive design, or incremental design is directly related to any modular design application, in which components can be freely substituted to improve the design, modify performance, or change another feature at a later time.
Informatics
In particular, it applies (with the name continuous design) to software development. In this field it is a practice of creating and modifying the design of a system as it is developed, rather than purporting to specify the system completely before development starts (as in the waterfall model). Continuous design was popularized by extreme programming. Continuous design also uses test driven development and refactoring.
Martin Fowler wrote a popular book called Refactoring, as well as a popular article entitled "Is Design Dead?", that talked about continuous/evolutionary design. James Shore wrote an article in IEEE titled "Continuous Design".
Industrial design
Modular design states that a product is made of subsystems that are joined together to create a full product. This design model, first defined in electronics, evolved in industrial design into well-consolidated industrial standards built around the platform concept and its evolution.
See also
Rapid application development
Continuous integration
Evolutionary database design |
https://en.wikipedia.org/wiki/Object%20Modeling%20in%20Color | UML color standards are a set of four colors associated with Unified Modeling Language (UML) diagrams. The coloring system indicates which of several archetypes apply to the UML object. UML typically identifies a stereotype with a bracketed comment for each object identifying whether it is a class, interface, etc.
These colors were first suggested by Peter Coad, Eric Lefebvre, and Jeff De Luca in a series of articles in The Coad Letter, and later published in their book Java Modeling In Color With UML.
Over hundreds of domain models, it became clear that four major "types" of classes appeared again and again, though they had different names in different domains. After much discussion, these were termed archetypes, which is meant to convey that the classes of a given archetype follow more or less the same form. That is, attributes, methods, associations, and interfaces are fairly similar among classes of a given archetype.
When attempting to classify a given domain class, one typically asks about the color standards in this order:
Pink (moment-interval): Does it represent a moment or interval of time that we need to remember and work with for legal or business reasons? Examples in business systems generally model activities involving people, places and things such as a sale, an order, a rental, an employment, making a journey, etc.
Yellow (role): Is it a way of participating in an activity (by either a person, place, or thing)? A person playing the role of an employee in an employment, a thing playing the role of a product in a sale, a location playing the role of a classroom for a training course, are examples of roles.
Blue (description): Is it simply a catalog-entry-like description which classifies or 'labels' an object? For example, the make and model of a car categorises or describes a number of physical vehicles. The relationship between the blue description and the green party, place, or thing is a type-instance relationship based on differences in the values of da |
https://en.wikipedia.org/wiki/Actin-binding%20protein | Actin-binding proteins (also known as ABPs) are proteins that bind to actin. This may mean ability to bind actin monomers, or polymers, or both.
Many actin-binding proteins, including α-actinin, β-spectrin, dystrophin, utrophin and fimbrin, do this through the actin-binding calponin homology domain.
This is a list of actin-binding proteins in alphabetical order.
0–9
25kDa
25kDa ABP from aorta p185neu
30akDA 110 kD dimer ABP
30bkDa 110 kD (Drebrin)
34kDA
45kDa
p53
p58gag
p116rip
A
a-actinin
Abl
ABLIM Actin-Interacting MAPKKK
ABP120
ABP140
Abp1p
ABP280 (Filamin)
ABP50 (EF-1a)
Acan 125 (Carmil)
ActA
Actibind
Actin
Actinfilin
Actinogelin
Actin-regulating kinases
Actin-Related Proteins
Actobindin
Actolinkin
Actopaxin
Actophorin
Acumentin (= L-plastin)
Adducin
ADF/Cofilin
Adseverin (scinderin)
Afadin
AFAP-110
Affixin
Aginactin
AIP1
Aldolase
Angiogenin
Anillin
Annexins
Aplyronine
Archvillin
Arginine kinase
Arp2/3 complex
B
Band 4.1
Band 4.9(Dematin)
b-actinin
b-Cap73
Bifocal
Bistramide A
BPAG1
Brevin (Gelsolin)
C
c-Abl
Calpactin (Annexin)
CHO1
Cortactin
CamKinase II
Calponin
Chondramide
Cortexillin
CAP
Caltropin
CH-ILKBP
CPb3
Cap100
Calvasculin
Ciboulot
Coactosin
CAP23
CARMIL
Acan125
Cingulin
Cytovillin (Ezrin)
CapZ/Capping Protein
a-Catenin
Cofilin
CR16
Caldesmon
CCT
Comitin
Calicin
Centuarin
Coronin
D
DBP40
Drebrin
Dematin (Band 4.9)
Dynacortin
Destrin (ADF/cofilin)
Dystonins
Diaphanous
Dystroglycan
DNase I
Dystrophin
Doliculide
Dolastatins
E
EAST
Endossin
EF-1a (ABP50)
Eps15
EF-1b
EPLIN
EF-2
Epsin
EGF receptor
ERK
ENC-1
ERM proteins (ezrin, radixin, moesin, plus merlin)
END3p
Ezrin (the E of ERM protein family)
F
F17R
Fodrin (spectrin)
Fascin
Formins
Fessilin
Frabin
FHL3
Fragmin
Fhos
FLNA (filamin A)
Fimbrin (plastin)
G
GAP43
Glycogenins
Gas2
G-proteins
Gastrin-Binding Protein
Gelactins I-IV
Gelsolins
Girdin
Glucokinase
H
Harmonin b
Hrp36
Hexokinase
Hrp65-2
Hectochlorin
HS1 (actin binding protein)
Helicase II
Hsp27
HIP1 (Huntingtin Interacting protein 1)
H |
https://en.wikipedia.org/wiki/IPFC | IPFC stands for Internet Protocol over Fibre Channel. It governs a set of standards created in January 2006 for address resolution (ARP) and transmitting IPv4 and IPv6 network packets over a Fibre Channel (FC) network. IPFC makes up part of the FC-4 protocol-mapping layer of a Fibre Channel system.
In IPFC, each IP datagram packet is wrapped into a FC frame, with its own header, and transmitted as a sequence of one or more frames. The receiver at the other end receives the frames, strips the FC headers and reassembles the IP packet. IP datagrams of up to 65,280 bytes in size may be accommodated. ARP packet transmission works in the same fashion. Each IP datagram exchange is unidirectional, although IP and TCP allow for bidirectional communication within their protocols.
IPFC is an application protocol that is typically implemented as a device driver in an operating system. IP over FC plays a less important role in storage area networking than SCSI over Fibre Channel or IP over Ethernet. IPFC has been used, for example, to provide clock synchronization via the Network Time Protocol (NTP).
See also
iFCP - Internet Fibre Channel Protocol
Fibre Channel over IP |
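The wrap-and-reassemble behavior described above can be sketched as a toy model. The frame layout below is invented for illustration and is not the actual FC-4 frame format; only the 65,280-byte datagram limit comes from the text, and the per-frame payload size is an arbitrary assumption.

```python
MAX_IPFC_DATAGRAM = 65280   # maximum IP datagram size quoted in the text
FRAME_PAYLOAD = 2048        # illustrative per-frame payload size (assumption)

def to_frames(datagram: bytes):
    """Split a datagram into an ordered sequence of (seq, total, payload)
    tuples standing in for Fibre Channel frames. Toy model only."""
    if len(datagram) > MAX_IPFC_DATAGRAM:
        raise ValueError("datagram exceeds the 65,280-byte IPFC limit")
    chunks = [datagram[i:i + FRAME_PAYLOAD]
              for i in range(0, len(datagram), FRAME_PAYLOAD)] or [b""]
    return [(i, len(chunks), c) for i, c in enumerate(chunks)]

def reassemble(frames):
    """Receiver side: strip the toy headers and rebuild the datagram."""
    return b"".join(payload for _, _, payload in sorted(frames))

packet = bytes(5000)  # a 5000-byte IP datagram of zero bytes
frames = to_frames(packet)
print(len(frames), reassemble(frames) == packet)  # 3 True
```

As in the text, each such exchange is one-way: the reply travels as its own sequence of frames in the opposite direction.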
https://en.wikipedia.org/wiki/Alvarez%20hypothesis | The Alvarez hypothesis posits that the mass extinction of the non-avian dinosaurs and many other living things during the Cretaceous–Paleogene extinction event was caused by the impact of a large asteroid on the Earth. Prior to 2013, it was commonly cited as having happened about 65 million years ago, but Renne and colleagues (2013) gave an updated value of 66 million years. Evidence indicates that the asteroid fell in the Yucatán Peninsula, at Chicxulub, Mexico. The hypothesis is named after the father-and-son team of scientists Luis and Walter Alvarez, who first suggested it in 1980. Shortly afterwards, and independently, the same was suggested by Dutch paleontologist Jan Smit.
In March 2010, an international panel of scientists endorsed the asteroid hypothesis, specifically the Chicxulub impact, as being the cause of the extinction. A team of 41 scientists reviewed 20 years of scientific literature and in so doing also ruled out other theories such as massive volcanism. They had determined that a space rock in diameter hurtled into Earth at Chicxulub. For comparison, the Martian moon Phobos has a diameter of , and Mount Everest is just under . The collision would have released the same energy as , over a billion times the energy of the atomic bombs dropped on Hiroshima and Nagasaki.
A 2016 drilling project into the peak ring of the crater strongly supported the hypothesis, and confirmed various matters that had been unclear until that point. These included the fact that the peak ring comprised granite (a rock found deep within the Earth) rather than typical sea floor rock, which had been shocked, melted, and ejected to the surface in minutes, and evidence of colossal seawater movement directly afterwards from sand deposits. Crucially, the cores also showed a near complete absence of gypsum, a sulfate-containing rock, which would have been vaporized and dispersed as an aerosol into the atmosphere, confirming the presence of a probable link between the impact a |
https://en.wikipedia.org/wiki/Nanotribology | Nanotribology is the branch of tribology that studies friction, wear, adhesion and lubrication phenomena at the nanoscale, where atomic interactions and quantum effects are not negligible. The aim of this discipline is characterizing and modifying surfaces for both scientific and technological purposes.
Nanotribological research has historically involved both direct and indirect methodologies. Microscopy techniques, including the Scanning Tunneling Microscope (STM), the Atomic Force Microscope (AFM), and the Surface Forces Apparatus (SFA), have been used to analyze surfaces with extremely high resolution, while indirect methods such as computational modeling and the quartz crystal microbalance (QCM) have also been extensively employed.
By changing the topography of surfaces at the nanoscale, friction can be reduced or enhanced more strongly than with macroscopic lubrication and adhesion; in this way, superlubricity and superadhesion can be achieved. In micro- and nano-mechanical devices, problems of friction and wear, which are critical due to the extremely high surface-to-volume ratio, can be solved by covering moving parts with superlubricant coatings. On the other hand, where adhesion is an issue, nanotribological techniques offer a possibility to overcome such difficulties.
History
Friction and wear have been technological issues since ancient times. On the one hand, the scientific approach of the last centuries toward understanding the underlying mechanisms focused on macroscopic aspects of tribology. On the other hand, in nanotribology, the systems studied are composed of nanometric structures, where volume forces (such as those related to mass and gravity) can often be considered negligible compared to surface forces. Scientific equipment to study such systems has been developed only in the second half of the 20th century. In 1969 the very first method to study the behavior of a molecularly thin liquid film sandwiched between two smooth surfaces through the SFA |
https://en.wikipedia.org/wiki/Dual%20graph | In the mathematical discipline of graph theory, the dual graph of a planar graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs (graphs that are already embedded in the plane) rather than planar graphs (graphs that may be embedded but for which the embedding is not yet known). For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph.
Historically, the first form of graph duality to be recognized was the association of the Platonic solids into pairs of dual polyhedra. Graph duality is a topological generalization of the geometric concepts of dual polyhedra and dual tessellations, and is in turn generalized combinatorially by the concept of a dual matroid. Variations of planar graph duality include a version of duality for directed graphs, and duality for graphs embedded onto non-planar two-dimensional surfaces.
These notions of dual graphs should not be confused with a different notion, the edge-to-vertex dual or line graph of a graph.
The term dual is used because the property of being a dual graph is symmetric, meaning that if H is a dual of a connected graph G, then G is a dual of H. When discussing the dual of a graph G, the graph G itself may be referred to as the "primal graph". Many other graph properties and structures may be translated into other natural properties and structures of the dual. For instance, cycles are dual to cuts, spanning trees are dual to the complements of spanning trees, and simple graphs (without parallel edges or self-loops) are dual to 3-edge-connected graphs.
|
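The face-to-vertex construction described in this row can be sketched in a few lines, assuming the plane graph is given as an explicit map from faces to the edges on their boundary; a face lists an edge twice when it lies on both sides of it (as with a bridge). This input representation is an assumption for illustration, since plane graphs are more usually given by an embedding.

```python
from collections import defaultdict

def dual_graph(faces):
    """Build the dual of a plane graph given as {face: [boundary edge ids]}.

    Each primal edge must appear on exactly two face sides; the dual
    gets one edge joining those two faces (a self-loop if they coincide).
    """
    borders = defaultdict(list)
    for face, edges in faces.items():
        for e in edges:
            borders[e].append(face)
    dual_edges = []
    for e, fs in borders.items():
        assert len(fs) == 2, f"edge {e} must appear on exactly two face sides"
        dual_edges.append((fs[0], fs[1]))
    return dual_edges

# A triangle drawn in the plane has two faces, the inner disk and the
# unbounded outer face; its dual has two vertices joined by three
# parallel edges, one per primal edge.
triangle = {"inner": ["ab", "bc", "ca"], "outer": ["ab", "bc", "ca"]}
print(dual_graph(triangle))  # [('inner', 'outer'), ('inner', 'outer'), ('inner', 'outer')]
```

A bridge, listed twice by its single face, correctly produces a self-loop in the dual, matching the definition above.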
https://en.wikipedia.org/wiki/Categories%20for%20the%20Working%20Mathematician | Categories for the Working Mathematician (CWM) is a textbook in category theory written by American mathematician Saunders Mac Lane, who cofounded the subject together with Samuel Eilenberg. It was first published in 1971, and is based on his lectures on the subject given at the University of Chicago, the Australian National University, Bowdoin College, and Tulane University. It is widely regarded as the premier introduction to the subject.
Contents
The book has twelve chapters, which are:
Chapter I. Categories, Functors, and Natural Transformations.
Chapter II. Constructions on Categories.
Chapter III. Universals and Limits.
Chapter IV. Adjoints.
Chapter V. Limits.
Chapter VI. Monads and Algebras.
Chapter VII. Monoids.
Chapter VIII. Abelian Categories.
Chapter IX. Special Limits.
Chapter X. Kan Extensions.
Chapter XI. Symmetry and Braiding in Monoidal Categories
Chapter XII. Structures in Categories.
Chapters XI and XII were added in the 1998 second edition, the first in view of its importance in string theory and quantum field theory, and the second to address higher-dimensional categories that have come into prominence.
Although it is the classic reference for category theory, some of the terminology is not standard. In particular, Mac Lane attempted to settle an ambiguity in usage for the terms epimorphism and monomorphism by introducing the terms epic and monic, but the distinction is not in common use. |
https://en.wikipedia.org/wiki/Carrier%20grade%20open%20framework | Carrier grade open framework (CGOF) is a hardware-independent architecture for the telecommunications industry. CGOF is based on a collection of open standards and is offered as a basis for new solution development. CGOF specifies the functional components needed to create next generation network (NGN) solutions, the relationship of those components to each other, and the interfaces among the components.
External links
IBM white paper on CGOF
Oracle CGOF Home Page
Network architecture |
https://en.wikipedia.org/wiki/Biologist | A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer).
Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans.
In modern times, most biologists have one or more academic degrees such as a bachelor's degree plus an advanced degree like a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government.
History
Francesco Redi, the founder of biology, is recognized as one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell after noting the resemblance of plant structures to the cells of a honeycomb.
Charles Darwin and Alfred Wallace independently formulated the theory of evolution by natural selection, which was described in detail in Darwin's book On the Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
In 1953, James D. Watson and Francis |
https://en.wikipedia.org/wiki/Oseledets%20theorem | In mathematics, the multiplicative ergodic theorem, or Oseledets theorem, provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Congress of Mathematicians in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M. S. Raghunathan. The theorem has been extended to semisimple Lie groups by V. A. Kaimanovich and further generalized in the works of David Ruelle, Grigory Margulis, Anders Karlsson, and François Ledrappier.
Cocycles
The multiplicative ergodic theorem is stated in terms of matrix cocycles of a dynamical system. The theorem states conditions for the existence of the defining limits and describes the Lyapunov exponents. It does not address the rate of convergence.
A cocycle of an autonomous dynamical system X is a map C : X × T → R^(n×n) satisfying

C(x, 0) = I_n and C(x, t + s) = C(x(t), s) C(x, t),

where X and T (with T = Z⁺ or T = R⁺) are the phase space and the time range, respectively, of the dynamical system, x(t) denotes the state reached from x after time t, and I_n is the n-dimensional unit matrix. The dimension n of the matrices C is not related to the dimension of the phase space X.
Examples
A prominent example of a cocycle is given by the matrix Jt in the theory of Lyapunov exponents. In this special case, the dimension n of the matrices is the same as the dimension of the manifold X.
For any cocycle C, the determinant det C(x, t) is a one-dimensional cocycle.
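For a linear map the derivative cocycle and its Lyapunov exponents can be written down explicitly, which makes a small numerical sketch possible. The matrix and test vectors below are illustrative choices, not taken from the article: for f(x) = Ax the cocycle is C(x, t) = Aᵗ, independent of x, and the Oseledets limit (1/t) log‖Aᵗu‖ picks out the logarithms of the eigenvalue moduli, depending on which subspace of the filtration u lies in.

```python
import numpy as np

# Linear system x_{t+1} = A x_t; its cocycle is C(x, t) = A^t.
A = np.diag([2.0, 0.5])

def lyapunov(u, t=200):
    # Compute (1/t) log ||A^t u|| stably by renormalizing at each step
    # (summing the log of the growth factor instead of forming A^t u).
    u = np.array(u, dtype=float)
    total = 0.0
    for _ in range(t):
        u = A @ u
        n = np.linalg.norm(u)
        total += np.log(n)
        u /= n
    return total / t

print(lyapunov([1.0, 1.0]))  # generic u: top exponent, approximately log 2
print(lyapunov([0.0, 1.0]))  # u in the slow subspace: approximately log 1/2
```

A generic vector feels the largest exponent, while a vector chosen inside the smaller Oseledets subspace sees only the slower rate, matching the filtration in the statement of the theorem.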
Statement of the theorem
Let μ be an ergodic invariant measure on X and C a cocycle
of the dynamical system such that for each t ∈ T, the maps x ↦ log⁺‖C(x, t)‖ and x ↦ log⁺‖C(x, t)⁻¹‖ are L¹-integrable with respect to μ. Then for μ-almost all x and each non-zero vector u ∈ Rⁿ the limit

λ(x, u) = lim_{t→∞} (1/t) ln(‖C(x, t) u‖ / ‖u‖)

exists and assumes, depending on u but not on x, up to n different values.
These are the Lyapunov exponents.
Further, if λ₁ > ... > λₘ are the different limits, then there are subspaces Rⁿ = R₁ ⊃ ... ⊃ Rₘ ⊃ Rₘ₊₁ = {0}, depending on x, such that the limit is λᵢ for u ∈ Rᵢ \ Rᵢ₊₁, i = 1, ..., m. |
https://en.wikipedia.org/wiki/List%20of%20South%20African%20flags | This article lists the flags of the various colonies and states that have existed in South Africa since 1652, as well as other flags pertaining to South Africa, including governmental, military, police and provincial flags.
Overview
The following flags have been used as the national flag of the Union of South Africa and the Republic of South Africa:
History
Historical flags (1652–1928)
Many flags were used in South Africa prior to political unification in 1910.
The original Dutch East India Company colony at the Cape of Good Hope (1652–1795) flew the Dutch flag, with the VOC logo in the centre. This flag was also flown during the period of Batavian Republic rule (1803–06).
The Boer Republics, i.e. the Orange Free State (1854–1902), the South African Republic (1857–1902), Stellaland (1882–85), Goshen (1883–85), the Nieuwe Republiek (1884–88), and the Klein Vrystaat (1886–1891) had their own flags. Several derived from the Dutch flag.
The British colonies that existed in the 19th century flew the British flags, and from the early 1870s some, i.e. Natal, Cape Colony, and later the Orange River Colony and the Transvaal, added their own colonial flag badges.
The Union of South Africa, formed in 1910, initially used a red ensign defaced with a badge depicting the Union coat of arms. The first South African national flag, introduced in 1928, superseded it.
National flags (1928–1994)
The Hertzog administration introduced the flag after several years of political controversy. Approved by Parliament in 1927, it was first hoisted on 31 May 1928.
The flag reflected the Union's predecessors. The basis was the Prince's Flag (royal tricolour) of the Netherlands, with the addition of a Union Jack to represent the Cape and Natal, the former Orange Free State flag, and the former South African Republic flag.
Until 1957, the flag was flown subordinate to the British Union Jack.
The flag remained unchanged when South Africa became a republic on 31 May 1961.
Homeland fl |
https://en.wikipedia.org/wiki/Filter%20%28software%29 | A filter is a computer program or subroutine to process a stream, producing another stream. While a single filter can be used individually, they are frequently strung together to form a pipeline.
Some operating systems such as Unix are rich with filter programs. Windows 7 and later are also rich with filters, as they include Windows PowerShell. In comparison, however, few filters are built into cmd.exe (the original command-line interface of Windows), most of which have significant enhancements relative to the similar filter commands that were available in MS-DOS. OS X includes filters from its underlying Unix base but also has Automator, which allows filters (known as "Actions") to be strung together to form a pipeline.
Unix
In Unix and Unix-like operating systems, a filter is a program that gets most of its data from its standard input (the main input stream) and writes its main results to its standard output (the main output stream). Auxiliary input may come from command line flags or configuration files, while auxiliary output may go to standard error. The command syntax for getting data from a device or file other than standard input is the input operator (<). Similarly, to send data to a device or file other than standard output is the output operator (>). To append data lines to an existing output file, one can use the append operator (>>). Filters may be strung together into a pipeline with the pipe operator ("|"). This operator signifies that the main output of the command to the left is passed as main input to the command on the right.
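The behaviour described above can be sketched as a short Python program. This is an illustrative filter, not part of any standard toolset: it reads its main input from standard input, writes matches to standard output, and so composes with the pipe operator like any other filter.

```python
import sys

def grep_filter(lines, pattern):
    """A minimal grep-like filter: yield only the lines containing pattern."""
    for line in lines:
        if pattern in line:
            yield line

if __name__ == "__main__" and len(sys.argv) > 1:
    # Standard input in, standard output out, so the script composes in a
    # pipeline, e.g.:  cut -d : -f 1 /etc/passwd | python filt.py foo
    for line in grep_filter(sys.stdin, sys.argv[1]):
        sys.stdout.write(line)
```

Auxiliary input (here, the pattern) arrives via a command-line argument, exactly as the paragraph describes for Unix filters generally.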
The Unix philosophy encourages combining small, discrete tools to accomplish larger tasks. The classic filter in Unix is Ken Thompson's grep, which Doug McIlroy cites as what "ingrained the tools outlook irrevocably" in the operating system, with later tools imitating it. grep at its simplest prints any lines containing a character string to its output. The following is an example:
cut -d : -f 1 /etc/passwd | grep foo
This finds all registered users whose username contains "foo". |
https://en.wikipedia.org/wiki/225%20%28number%29 | 225 (two hundred [and] twenty-five) is the natural number following 224 and preceding 226.
In mathematics
225 is the smallest number that is a polygonal number in five different ways. It is a square number (225 = 15²), an octagonal number, and a squared triangular number (225 = (1 + 2 + 3 + 4 + 5)² = 1³ + 2³ + 3³ + 4³ + 5³).
As the square of a double factorial, 225 = (5!!)² counts the number of permutations of six items in which all cycles have even length, or the number of permutations in which all cycles have odd length. And as one of the Stirling numbers of the first kind, it counts the number of permutations of six items with exactly three cycles.
225 is a highly composite odd number, meaning that it has more divisors than any smaller odd number. After 1 and 9, 225 is the third smallest number n for which σ(φ(n)) = φ(σ(n)), where σ is the sum-of-divisors function and φ is Euler's totient function. 225 is a refactorable number.
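Several of these properties are easy to verify directly with a short stdlib-only script. The divisor–totient identity checked below, σ(φ(n)) = φ(σ(n)), is the one satisfied by 1, 9, and 225; the helper functions are naive brute-force definitions for illustration.

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(n):   # sum-of-divisors function σ
    return sum(divisors(n))

def phi(n):     # Euler's totient φ
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# 225 = 15^2 is square, octagonal (m(3m - 2) with m = 9), and a squared
# triangular number: (1 + 2 + 3 + 4 + 5)^2 = 1^3 + ... + 5^3.
assert 15 ** 2 == 225
assert 9 * (3 * 9 - 2) == 225
assert sum(range(1, 6)) ** 2 == sum(k ** 3 for k in range(1, 6)) == 225

# σ(φ(n)) = φ(σ(n)) holds for n = 1, 9, and 225.
for n in (1, 9, 225):
    assert sigma(phi(n)) == phi(sigma(n))

# Refactorable: the number of divisors of 225 divides 225 (9 divides 225),
# and 9 divisors also makes it the first odd number with exactly 9 divisors.
assert len(divisors(225)) == 9 and 225 % 9 == 0
print("all properties check out")
```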
225 is the smallest square number to have one of every digit in some number base (225 is 3201 in base 4).
225 is the first odd number with exactly 9 divisors. |
https://en.wikipedia.org/wiki/Timothy%20M.%20Chan | Timothy Moon-Yew Chan is a Founder Professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. He was formerly Professor and University Research Chair
in the David R. Cheriton School of Computer Science, University of Waterloo, Canada.
He graduated with BA (summa cum laude) from Rice University in 1992, and completed his Ph.D. in Computer Science at UBC in 1995 at the age of 19. His late mother, Miu Yung Chan, was a molecular physicist with a Ph.D. from Ohio State University.
He is currently an associate editor for SIAM Journal on Computing
and the International Journal of Computational Geometry and Applications. He is also a member of the editorial board of Algorithmica,
Discrete & Computational Geometry,
and Computational Geometry: Theory and Applications.
Chan has published extensively. His research covers data structures, algorithms, and computational geometry.
Recognition
He was awarded the Governor General's Gold Medal (as Head of Graduating Class in the Faculty of Graduate Studies at the University of British Columbia during convocation), the NSERC doctoral prize, and the Premier's Research Excellence Award (PREA) of Ontario, Canada.
He was elected as an ACM Fellow in 2019 "for contributions to computational geometry, algorithms, and data structures".
See also
Chan's algorithm, an output-sensitive algorithm for planar convex hulls |
https://en.wikipedia.org/wiki/Autolysis%20%28biology%29 | In biology, autolysis, more commonly known as self-digestion, refers to the destruction of a cell through the action of its own enzymes. It may also refer to the digestion of an enzyme by another molecule of the same enzyme.
The term derives from the Greek αὐτο- 'self' and λύσις 'splitting'.
Biochemical mechanisms of cell destruction
Autolysis is uncommon in living adult organisms and usually occurs in necrotic tissue as enzymes act on components of the cell that would not normally serve as substrates. These enzymes are released due to the cessation of active processes in the cell that provide substrates in healthy, living tissue; autolysis in itself is not an active process. In other words, though autolysis resembles the active process of digestion of nutrients by live cells, the dead cells are not actively digesting themselves as is often claimed, and as the synonym self-digestion suggests. Failure of respiration and subsequent failure of oxidative phosphorylation is the trigger of the autolytic process. The reduced availability and subsequent absence of high-energy molecules that are required to maintain the integrity of the cell and maintain homeostasis causes significant changes in the biochemical operation of the cell.
Molecular oxygen serves as the terminal electron acceptor in the series of biochemical reactions known as oxidative phosphorylation that are ultimately responsible for the synthesis of adenosine triphosphate, the main source of energy for otherwise thermodynamically unfavorable cellular processes. Failure of delivery of molecular oxygen to cells results in a metabolic shift to anaerobic glycolysis, in which glucose is converted to pyruvate as an inefficient means of generating adenosine triphosphate. Glycolysis has a lower ATP yield than oxidative phosphorylation and generates acidic byproducts that decrease the pH of the cell, which enables many of the enzymatic processes involved in autolysis.
Limited synthesis of adenosine triphosphate |
https://en.wikipedia.org/wiki/Predictive%20modelling | Predictive modelling uses statistics to predict outcomes. Most often the event one wants to predict is in the future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred. For example, predictive models are often used to detect crimes and identify suspects, after the crime has taken place.
In many cases, the model is chosen on the basis of detection theory to try to guess the probability of an outcome given a set amount of input data; for example, given an email, determining how likely it is to be spam.
Models can use one or more classifiers in trying to determine the probability of a set of data belonging to another set. For example, a model might be used to determine whether an email is spam or "ham" (non-spam).
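A minimal sketch of such a classifier follows. It is illustrative only: the word lists, their conditional probabilities, and the 0.5 prior are all invented for the example, and a real spam filter would estimate them from labelled training data. The naive Bayes combination is done in log space for numerical stability.

```python
from math import log, exp

# Assumed per-word likelihoods (made up for illustration):
SPAM_PROB = {"winner": 0.9, "free": 0.8, "offer": 0.7}  # P(word | spam)
HAM_PROB = {"winner": 0.1, "free": 0.3, "offer": 0.2}   # P(word | ham)

def spam_probability(words, prior_spam=0.5):
    """Naive Bayes posterior P(spam | words), combining likelihoods in log space."""
    log_spam = log(prior_spam)
    log_ham = log(1 - prior_spam)
    for w in words:
        if w in SPAM_PROB:  # words not in the model are ignored
            log_spam += log(SPAM_PROB[w])
            log_ham += log(HAM_PROB[w])
    # Convert the two log scores back to a normalized probability.
    return 1 / (1 + exp(log_ham - log_spam))

print(spam_probability(["winner", "free", "offer"]))  # high: likely spam
print(spam_probability(["hello"]))                    # 0.5: no evidence either way
```

An email containing several high-likelihood spam words scores near 1, while one containing no modelled words keeps the prior, which is the binary spam/ham decision the paragraph describes.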
Depending on definitional boundaries, predictive modelling is synonymous with, or largely overlapping with, the field of machine learning, as it is more commonly referred to in academic or research and development contexts. When deployed commercially, predictive modelling is often referred to as predictive analytics.
Predictive modelling is often contrasted with causal modelling/analysis. In the former, one may be entirely satisfied to make use of indicators of, or proxies for, the outcome of interest. In the latter, one seeks to determine true cause-and-effect relationships. This distinction has given rise to a burgeoning literature in the fields of research methods and statistics and to the common statement that "correlation does not imply causation".
Models
Nearly any statistical model can be used for prediction purposes. Broadly speaking, there are two classes of predictive models: parametric and non-parametric. A third class, semi-parametric models, includes features of both. Parametric models make "specific assumptions with regard to one or more of the population parameters that characterize the underlying distribution(s)". Non-parametric models "typically involve fewer assumptions of structure and |
https://en.wikipedia.org/wiki/Origamic%20architecture | Origamic architecture is a form of kirigami that involves the three-dimensional reproduction of architecture and monuments, on various scales, using cut-out and folded paper, usually thin paperboard. Visually, these creations are comparable to intricate 'pop-ups', indeed, some works are deliberately engineered to possess 'pop-up'-like properties. However, origamic architecture tends to be cut out of a single sheet of paper, whereas most pop-ups involve two or more. To create the three-dimensional image out of the two-dimensional surface requires skill akin to that of an architect.
Origin
The development of origamic architecture began with Professor Masahiro Chatani's (then a newly appointed professor at the Tokyo Institute of Technology) experiments with designing original and unique greeting cards. Japanese culture encourages the giving and receiving of cards for various special occasions and holidays, particularly the Japanese New Year, and according to his own account, Professor Chatani personally felt that greeting cards were a significant form of connection and communication between people. He worried that in today's fast-paced modern world, the emotional connections called up and created by the exchange of greeting cards would become scarce.
In the early 1980s, Professor Chatani began to experiment with cutting and folding paper to make unique and interesting pop-up cards. He used the techniques of origami (Japanese paper folding) and kirigami (Japanese papercutting), as well as his experience in architectural design, to create intricate patterns that played with light and shadow. Many of his creations are made of stark white paper which emphasizes the shadowing effects of the cuts and folds. In the preface to one of his books, he called the shadows of the three-dimensional cutouts a "dreamy scene" that invited the viewer into a "fantasy world".
At first, Professor Chatani simply gave the cards to his friends and family. Over the next nearly thirty |
https://en.wikipedia.org/wiki/List%20of%20Indian%20flags | This is a list of flags used in India by various organizations.
National flag
Governmental flag
Ensigns
Naval
Port authorities
Military flags
Indian Armed Forces
Army
Components
Air Force
Navy
Coast Guard
Paramilitary forces
Other agencies
Former Flags of Indian Armed Forces
Indian Air Force rank flags (1950-1980)
Indian Naval Ensigns
Indian Navy flags
Indian Navy rank flags
(British) Indian Army
(Royal) Indian Air Force
(Royal) Indian Marine/(Royal) Indian Navy
State and union territory flags
At present there are no officially recognised flags for individual states and union territories of India. No legal prohibitions to prevent states adopting distinctive flags exist in either the Emblems and Names (Prevention of Improper Use) Act, 1950 or the Prevention of Insults to National Honour Act, 1971. In a 1994 case before the Supreme Court of India, S. R. Bommai v. Union of India, the Supreme Court declared that there is no prohibition in the Constitution of India for a state to have its own flag. However, a state flag should not dishonour the national flag. The Flag Code of India also permits other flags to be flown with the Flag of India, but not on the same flag pole or in a superior position to the national flag.
Former official state flags
The state of Jammu and Kashmir had an officially recognised state flag between 1952 and 2019 under the special status granted to the state by Article 370 of the Constitution of India.
Proposed state flags
Flags have been proposed for Tamil Nadu and Karnataka, but neither were officially adopted.
Banners of the states and union territories
When a distinctive banner is required to represent a state or union territory, the emblem of the state or union territory is usually displayed on a white field.
States
Union territories
Historical flags
Indian polities
Colonial India
British rule in India
Princely states
French India
Portuguese India
Dutch India
Danish India
Swedish India
Austrian India
|
https://en.wikipedia.org/wiki/Optical%20theorem | In physics, the optical theorem is a general law of wave scattering theory, which relates the zero-angle scattering amplitude to the total cross section of the scatterer. It is usually written in the form

σ_tot = (4π/k) Im f(0),

where f(0) is the scattering amplitude for an angle of zero, that is, the amplitude of the wave scattered to the center of a distant screen, and k is the wave vector in the incident direction.
Because the optical theorem is derived using only conservation of energy, or in quantum mechanics from conservation of probability, the optical theorem is widely applicable and, in quantum mechanics, includes both elastic and inelastic scattering.
The generalized optical theorem, first derived by Werner Heisenberg, follows from the unitarity condition and is given by

f(n, n′) − f*(n′, n) = (ik/2π) ∫ f(n, n″) f*(n′, n″) dΩ″,

where f(n, n′) is the scattering amplitude that depends on the direction n′ of the incident wave and the direction n of scattering, and dΩ″ is the differential solid angle. When n = n′, the above relation yields the optical theorem, since the left-hand side is just twice the imaginary part of f(n, n) and since σ = ∫ |f(n, n″)|² dΩ″. For scattering in a centrally symmetric field, f depends only on the angle θ between n and n′, in which case the above relation reduces to

Im f(θ) = (k/4π) ∫ f(θ₁) f*(θ₂) dΩ″,

where θ₁ and θ₂ are the angles between n″ and the directions n and n′, respectively.
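The relation σ_tot = (4π/k) Im f(0) can be checked numerically. The sketch below assumes a purely elastic partial-wave expansion, f(θ) = (1/k) Σ_l (2l + 1) e^(iδ_l) sin δ_l P_l(cos θ), with made-up phase shifts δ_l; the total cross section obtained by brute-force angular integration of |f|² then matches (4π/k) Im f(0).

```python
import numpy as np
from numpy.polynomial.legendre import legval

k = 1.0
deltas = np.array([0.5, 0.3, 0.1])  # illustrative phase shifts for l = 0, 1, 2

# Legendre coefficients of f(theta): (2l+1) e^{i delta_l} sin(delta_l) / k
coeffs = (2 * np.arange(len(deltas)) + 1) * np.exp(1j * deltas) * np.sin(deltas) / k

theta = np.linspace(0.0, np.pi, 20001)
f = legval(np.cos(theta), coeffs)  # scattering amplitude on an angular grid

# Total cross section by direct integration: 2*pi * int |f|^2 sin(theta) d(theta)
integrand = np.abs(f) ** 2 * np.sin(theta)
sigma_total = 2 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))

# Optical theorem prediction: (4 pi / k) * Im f(0), with f(0) = f at theta = 0
optical = 4 * np.pi / k * legval(1.0, coeffs).imag

print(sigma_total, optical)  # the two values agree
```

The agreement is exact up to quadrature error, reflecting that the theorem expresses conservation of probability flux rather than any property of the particular phase shifts chosen.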
History
The optical theorem was originally developed independently by Wolfgang Sellmeier and Lord Rayleigh in 1871. Lord Rayleigh recognized the zero-angle scattering amplitude in terms of the index of refraction n as

n = 1 + 2πN f(0)/k²

(where N is the number density of scatterers),
which he used in a study of the color and polarization of the sky.
The equation was later extended to quantum scattering theory by several individuals, and came to be known as the Bohr–Peierls–Placzek relation after a 1939 paper. It was first referred to as the "optical theorem" in print in 1955 by Hans Bethe and Frederic de Hoffmann, after it had been known as a "well known theorem of optics" for some time.
Derivation
The theorem can be derived rather directly from a treat |
https://en.wikipedia.org/wiki/Spikelet | A spikelet, in botany, describes the typical arrangement of the flowers of grasses, sedges and some other monocots.
Each spikelet has one or more florets. The spikelets are further grouped into panicles or spikes. The part of the spikelet that bears the florets is called the rachilla.
In grasses
In Poaceae, the grass family, a spikelet consists of two (or sometimes fewer) bracts at the base, called glumes, followed by one or more florets. A floret consists of the flower surrounded by two bracts, one external—the lemma—and one internal—the palea. The perianth is reduced to two scales, called lodicules, that expand and contract to spread the lemma and palea; these are generally interpreted to be modified sepals.
The flowers are usually hermaphroditic—maize being an important exception—and mainly anemophilous or wind-pollinated, although insects occasionally play a role.
Lemma
Lemma is a phytomorphological term referring to a part of the spikelet. It is the lowermost of two chaff-like bracts enclosing the grass floret. It often bears a long bristle called an awn, and may be similar in form to the glumes—chaffy bracts at the base of each spikelet. It is usually interpreted as a bract but it has also been interpreted as one remnant (the abaxial) of the three members of outer perianth whorl (the palea may represent the other two members, having been joined together).
A lemma's shape, their number of veins, whether they are awned or not, and the presence or absence of hairs are particularly important characters in grass taxonomy.
Palea
Palea, in Poaceae, refers to one of the bract-like organs in the spikelet.
The palea is the uppermost of the two chaff-like bracts that enclose the grass floret (the other being the lemma). It is often cleft at the tip, implying that it may be a double structure derived from the union of two separate organs. This has led to suggestions that it may be what remains of the grass sepals (outer perianth whorl): specifically the two adax |
https://en.wikipedia.org/wiki/Two%20envelopes%20problem | The two envelopes problem, also known as the exchange paradox, is a paradox in probability theory. It is of special interest in decision theory and for the Bayesian interpretation of probability theory. It is a variant of an older problem known as the necktie paradox.
The problem is typically introduced by formulating a hypothetical challenge like the following example:
Since the situation is symmetric, it seems obvious that there is no point in switching envelopes. On the other hand, a simple calculation using expected values suggests the opposite conclusion, that it is always beneficial to swap envelopes, since the person stands to gain twice as much money if they switch, while the only risk is halving what they currently have.
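The symmetric intuition is easy to confirm by simulation. The sketch below fixes the smaller amount at an assumed x = 100 (any value works), repeatedly assigns the pair (x, 2x) to two envelopes at random, and compares the average payoff of keeping versus switching; the two strategies come out equal, in contrast to the naive expected-value argument.

```python
import random

random.seed(1)  # deterministic run for reproducibility

def trial(x=100):
    # One envelope holds x, the other 2x; pick one uniformly at random.
    envelopes = [x, 2 * x]
    random.shuffle(envelopes)
    kept, other = envelopes
    return kept, other

n = 100_000
keep_total = switch_total = 0
for _ in range(n):
    kept, other = trial()
    keep_total += kept
    switch_total += other

# Both strategies average (x + 2x) / 2 = 150 here; switching gains nothing.
print(keep_total / n, switch_total / n)
```

Note the simulation fixes the pair of amounts before the choice is made; the paradoxical "gain 25% by switching" computation implicitly treats the opened amount as fixed instead, which is where the flaw discussed below lies.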
Introduction
Problem
A person is given two indistinguishable envelopes, each of which contains a sum of money. One envelope contains twice as much as the other. The person may pick one envelope and keep whatever amount it contains. They pick one envelope at random but before they open it they are given the chance to take the other envelope instead.
The switching argument
Now suppose the person reasons as follows:
The puzzle
The puzzle is to find the flaw in the line of reasoning in the switching argument. This includes determining exactly why and under what conditions that step is not correct, to be sure not to make this mistake in a situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. The puzzle is not solved by finding another way to calculate the probabilities that does not lead to a contradiction.
Multiplicity of proposed solutions
There have been many solutions proposed, and commonly one writer proposes a solution to the problem as stated, after which another writer shows that altering the problem slightly revives the paradox. Such sequences of discussions have produced a family of closely related formulations of the problem, resulting in voluminous literature on the subject.
No p |
https://en.wikipedia.org/wiki/Route%20poisoning | Route poisoning is a method to prevent a router from sending packets through a route that has become invalid within computer networks. Distance-vector routing protocols in computer networks use route poisoning to indicate to other routers that a route is no longer reachable and should not be considered from their routing tables. Unlike the split horizon with poison reverse, route poisoning provides for sending updates with unreachable hop counts immediately to all the nodes in the network.
When the protocol detects an invalid route, all of the routers in the network are informed that the bad route has an infinite (∞) route metric. This makes all nodes on the invalid route seem infinitely distant, preventing any of the routers from sending packets over the invalid route.
Some distance-vector routing protocols, such as RIP, use a maximum hop count to determine how many routers the traffic must go through to reach the destination. Each route has a hop count number assigned to it which is incremented as the routing information is passed from router to router. A route is considered unreachable if the hop count exceeds the maximum allowed. Route poisoning is a method of quickly purging outdated routing information from other routers' routing tables by changing a route's hop count to be unreachable (higher than the maximum number of hops allowed) and sending a routing update. In the case of RIP, the maximum hop count is 15, so to perform route poisoning on a route its hop count is changed to 16, deeming it unreachable, and a routing update is sent.
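A toy sketch of the mechanism follows. The table layout, prefixes, and function names are invented for illustration; real RIP implementations exchange structured update messages, but the metric-16 poisoning logic is the same.

```python
INFINITY = 16  # RIP treats a metric of 16 hops as unreachable

def poison_route(table, dest):
    """Mark a failed route unreachable and return the update to advertise."""
    table[dest] = INFINITY
    return {dest: INFINITY}  # sent immediately to all neighbours

def apply_update(table, update):
    # A receiving router marks poisoned destinations unreachable instead of
    # continuing to forward traffic along the bad route.
    for dest, metric in update.items():
        if metric >= INFINITY:
            table[dest] = INFINITY

# Failing router's table and a neighbour's table (hypothetical prefixes).
table = {"10.0.0.0/8": 3, "192.168.1.0/24": 5}
neighbour = {"10.0.0.0/8": 4, "192.168.1.0/24": 6}

update = poison_route(table, "192.168.1.0/24")
apply_update(neighbour, update)
print(neighbour["192.168.1.0/24"])  # 16: the neighbour now avoids the route
```

Because the poisoned metric propagates in the very next update rather than waiting for the route to time out, every node learns quickly that the path is dead, which is the behaviour the paragraph describes.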
If these updates are lost, some nodes in the network would not be informed that a route is invalid, so they could attempt to send packets over the bad route and cause a problem known as a routing loop. Therefore, route poisoning is used in conjunction with holddowns to keep update messages from falsely reinstating the validity of a bad route. This prevents routing loops, improving the overall efficiency of the network. |
https://en.wikipedia.org/wiki/Clustered%20web%20hosting | Clustered hosting is a type of web hosting that spreads the load of hosting across multiple physical machines, or nodes, increasing availability and decreasing the chances of one service (e.g., FTP or email) affecting another (e.g., MySQL). Many large websites run on clustered hosting solutions; for example, large discussion forums will tend to run using multiple front-end webservers with multiple back-end database servers.
Typically, most hosting infrastructures are based on the paradigm of using a single physical machine to host multiple hosted services, including web, database, email, FTP and others. A single physical machine is not only a single point of failure, but also has finite capacity for traffic, which in practice can be troublesome for a busy website or for a website that is experiencing transient bursts in traffic.
By clustering services across multiple hardware machines and using load balancing, single points of failure can be eliminated, increasing the availability of a website and other web services beyond that of ordinary single-server hosting. A single server can require periodic reboots for software upgrades and the like, whereas in a clustered platform restarts can be staggered so that the service remains available while every machine in the cluster is upgraded.
Clustered hosting is similar to cloud hosting, in that the resources of many machines are available for a website to utilize on demand, making scalability a large advantage to a clustered hosting solution.
See also
High-availability cluster |
https://en.wikipedia.org/wiki/Klotho%20%28biology%29 | Klotho is an enzyme that in humans is encoded by the KL gene. The three subfamilies of klotho are α-klotho, β-klotho, and γ-klotho. α-klotho activates FGF23, and β-klotho activates FGF19 and FGF21. When the subfamily is not specified, the word "klotho" typically refers to the α-klotho subfamily, because α-klotho was discovered before the other members.
α-klotho is highly expressed in the brain, liver and kidney. β-klotho is predominantly expressed in the liver. γ-klotho is expressed in the skin.
Klotho can exist in a membrane-bound form or a (hormonal) soluble, circulating form. Proteases can convert the membrane-bound form into the circulating form.
The KL gene encodes a type-I single-pass transmembrane protein that is related to β-glucuronidases. Reduced production of this protein has been observed in patients with chronic kidney failure (CKF), and this may be one of the factors underlying degenerative processes (e.g., arteriosclerosis, osteoporosis, and skin atrophy) seen in CKF. Mutations within the family have been associated with ageing, bone loss and alcohol consumption. Transgenic mice that overexpress Klotho live longer than wild-type mice.
Structure
The α-klotho gene is located on chromosome 13, and is translated into a single-pass integral membrane protein. The intracellular portion of the α-klotho protein is short (11 amino acids), whereas the extracellular portion is long (980 amino acids). The transmembrane portion is also comparatively short (21 amino acids). The extracellular portion contains two repeat sequences, termed the KL1 (about 450 amino acids) and KL2 (about 430 amino acids) domains. In the kidney and the choroid plexus of the brain, the transmembrane protein can be proteolytically cleaved to produce a 130-kilodalton (kDa) soluble form of α-klotho protein, released into the circulation and cerebrospinal fluid, respectively. In humans, the secreted form of klotho is more dominant than the membrane form.
The β-Klotho gene is located on chrom |
https://en.wikipedia.org/wiki/SystemVerilog | SystemVerilog, standardized as IEEE 1800, is a hardware description and hardware verification language used to model, design, simulate, test and implement electronic systems. SystemVerilog is based on Verilog with some extensions, and since 2008 Verilog has been part of the same IEEE standard. It is commonly used in the semiconductor and electronic design industry as an evolution of Verilog.
History
SystemVerilog started with the donation of the Superlog language to Accellera in 2002 by the startup company Co-Design Automation. The bulk of the verification functionality is based on the OpenVera language donated by Synopsys. In 2005, SystemVerilog was adopted as IEEE Standard 1800-2005. In 2009, the standard was merged with the base Verilog (IEEE 1364-2005) standard, creating IEEE Standard 1800-2009. The current version is IEEE standard 1800-2017.
The feature-set of SystemVerilog can be divided into two distinct roles:
SystemVerilog for register-transfer level (RTL) design is an extension of Verilog-2005; all features of that language are available in SystemVerilog. Therefore, Verilog is a subset of SystemVerilog.
SystemVerilog for verification uses extensive object-oriented programming techniques and is more closely related to Java than to Verilog. These constructs are generally not synthesizable.
The remainder of this article discusses the features of SystemVerilog not present in Verilog-2005.
Design features
Data lifetime
There are two types of data lifetime specified in SystemVerilog: static and automatic. Automatic variables are created the moment program execution comes to the scope of the variable. Static variables are created at the start of the program's execution and keep the same value during the entire program's lifespan, unless assigned a new value during execution.
Any variable that is declared inside a task or function without an explicit lifetime is considered automatic. To specify that a variable is static, place the "static" keyword in the decla |
https://en.wikipedia.org/wiki/Soybean%20oil | Soybean oil (British English: soyabean oil) is a vegetable oil extracted from the seeds of the soybean (Glycine max). It is one of the most widely consumed cooking oils and the second most consumed vegetable oil. As a drying oil, processed soybean oil is also used as a base for printing inks (soy ink) and oil paints.
History
Soybeans were cultivated in China by the late Shang dynasty, around 1000 BCE. Shijing, the Book of Odes, contains several poems mentioning soybeans.
Production
To produce soybean oil, the soybeans are cracked, adjusted for moisture content, heated to between 60 and 88 °C (140–190 °F), rolled into flakes, and solvent-extracted with hexanes. The oil is then refined, blended for different applications, and sometimes hydrogenated. Soybean oils, both liquid and partially hydrogenated, are sold as "vegetable oil", or are ingredients in a wide variety of processed foods. Most of the remaining residue (soybean meal) is used as animal feed.
In the 2002–2003 growing season, 30.6 million tons (MT) of soybean oil were produced worldwide, constituting about half of worldwide edible vegetable oil production, and thirty percent of all fats and oils produced, including animal fats and oils derived from tropical plants.
In 2018–2019, world production was at 57.4 MT with the leading producers including China (16.6 MT), US (10.9 MT), Argentina (8.4 MT), Brazil (8.2 MT), and EU (3.2 MT).
Composition
Soybean oil contains only trace amounts of free fatty carboxylic acids (about 0.3% by mass in the crude oil, 0.03% in the refined oil); instead, it contains esters. In what follows, the expressions "fatty acids" and "acid" refer to esters rather than free carboxylic acids.
Per 100 g, soybean oil has 16 g of saturated fat, 23 g of monounsaturated fat, and 58 g of polyunsaturated fat. The major unsaturated fatty acids in soybean oil triglycerides are the polyunsaturates alpha-linolenic acid (C-18:3), 7-10%, and linoleic acid (C-18:2), 51%; and the monounsatu |
https://en.wikipedia.org/wiki/Darwin%20%28programming%20game%29 | Darwin was a programming game invented in August 1961 by Victor A. Vyssotsky, Robert Morris Sr., and M. Douglas McIlroy. (Dennis Ritchie is sometimes incorrectly cited as a co-author, but was not involved.) The game was developed at Bell Labs, and played on an IBM 7090 mainframe there. The game was only played for a few weeks before Morris developed an "ultimate" program that eventually brought the game to an end, as no-one managed to produce anything that could defeat it.
Description
The game consisted of a program called the umpire and a designated section of the computer's memory known as the arena, into which two or more small programs, written by the players, were loaded. The programs were written in 7090 machine code, and could call a number of functions provided by the umpire in order to probe other locations within the arena, kill opposing programs, and claim vacant memory for copies of themselves.
The game ended after a set amount of time, or when copies of only one program remained alive. The player who wrote the last surviving program was declared the winner.
Up to 20 memory locations within each program (fewer in later versions of the game) could be designated as protected. If one of these protected locations was probed by another program, the umpire would immediately transfer control to the program that was probed. This program would then continue to execute until it, in turn, probed a protected location of some other program, and so forth.
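The umpire's control-transfer rule can be sketched as a toy model. Everything below (program names, arena size, protected addresses) is invented for illustration; the real game ran IBM 7090 machine code and is not reproduced here.

```python
# Toy model of Darwin's umpire rule: probing another program's
# protected memory location transfers control to that program.
ARENA_SIZE = 1000

# Each hypothetical program owns a range of cells and protects a few.
programs = {
    "A": {"cells": range(0, 100),   "protected": {10, 20}},
    "B": {"cells": range(500, 600), "protected": {510, 520}},
}

def probe(prober, addr):
    """Umpire rule: if addr is a protected cell of another program,
    control immediately passes to that program; otherwise the
    prober keeps running."""
    for name, prog in programs.items():
        if name != prober and addr in prog["protected"]:
            return name
    return prober

print(probe("A", 510))  # A probes B's protected cell: control passes to B
print(probe("B", 999))  # B probes a vacant cell: B keeps control
```

The transfer rule is what made overly aggressive probing risky: a program scanning the arena would sooner or later hand control to an opponent.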
While the programs were responsible for copying and relocating themselves, they were forbidden from altering memory locations outside themselves without permission from the umpire. As the programs were executed directly by the computer, there was no physical mechanism in place to prevent cheating. Instead, the source code for the programs was made available for study after each game, allowing players to learn from each other and to verify that their opponents hadn't cheated.
The smallest program that could |
https://en.wikipedia.org/wiki/Multiplication%20%28music%29 | The mathematical operations of multiplication have several applications to music. Other than its application to the frequency ratios of intervals (for example, just intonation, and the twelfth root of two in equal temperament), it has been used in other ways for twelve-tone technique and musical set theory. Additionally, ring modulation is an electrical audio process involving multiplication that has been used for musical effect.
A multiplicative operation is a mapping in which the argument is multiplied. Multiplication originated intuitively in interval expansion, including tone row order number rotation, for example in the music of Béla Bartók and Alban Berg. Pitch number rotation, Fünferreihe or "five-series" and Siebenerreihe or "seven-series", was first described by Ernst Krenek in Über neue Musik. Princeton-based theorists, including James K. Randall, Godfrey Winham, and Hubert S. Howe "were the first to discuss and adopt them, not only with regards to twelve-tone series".
Pitch-class multiplication modulo 12
When dealing with pitch-class sets, multiplication modulo 12 is a common operation. Dealing with all twelve tones, or a tone row, there are only a few numbers which one may multiply a row by and still end up with a set of twelve distinct tones. Taking the prime or unaltered form as P0, multiplication is indicated by Mx, x being the multiplicator:
Mx(y) ≡ xy mod 12
The following table lists all possible multiplications of a chromatic twelve-tone row:
Note that only M1, M5, M7, and M11 give a one-to-one mapping (a complete set of 12 unique tones). This is because each of these numbers is relatively prime to 12. Also interesting is that the chromatic scale is mapped to the circle of fourths with M5, or fifths with M7, and more generally under M7 all even numbers stay the same while odd numbers are transposed by a tritone. This kind of multiplication is frequently combined with a transposition operation. It was first described in print by Herbert Eime |
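The mapping properties described above are easy to verify numerically. A short sketch (illustrative, not part of the source):

```python
# Pitch-class multiplication Mx(y) = x*y mod 12.
# Mx is a one-to-one mapping of the 12 pitch classes exactly when
# x is relatively prime to 12, i.e. x = 1, 5, 7, 11.
from math import gcd

def M(x, y):
    return (x * y) % 12

for x in range(12):
    image = {M(x, y) for y in range(12)}
    assert (len(image) == 12) == (gcd(x, 12) == 1)

# M5 maps the ascending chromatic scale onto the circle of fourths:
print([M(5, y) for y in range(12)])  # [0, 5, 10, 3, 8, 1, 6, 11, 4, 9, 2, 7]
```

Under M7, even pitch classes are fixed (7·2 ≡ 2, 7·4 ≡ 4, …) while odd ones move by 6 semitones, which is the tritone transposition noted above.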
https://en.wikipedia.org/wiki/Primary%20progressive%20aphasia | Primary progressive aphasia (PPA) is a type of neurological syndrome in which language capabilities slowly and progressively become impaired. As with other types of aphasia, the symptoms that accompany PPA depend on what parts of the left hemisphere are significantly damaged. However, unlike most other aphasias, PPA results from continuous deterioration in brain tissue, which leads to early symptoms being far less detrimental than later symptoms.
Those with PPA slowly lose the ability to speak, write, read, and generally comprehend language. Eventually, almost every patient becomes mute and completely loses the ability to understand both written and spoken language. Although it was first described as solely an impairment of language capabilities while other mental functions remain intact, it is now recognized that many, if not most, of those with PPA experience impairment of memory and short-term memory formation, as well as loss of executive functions.
It was first described as a distinct syndrome by M. Marsel Mesulam in 1982. Primary progressive aphasias have a clinical and pathological overlap with the frontotemporal lobar degeneration (FTLD) spectrum of disorders and Alzheimer's disease. However, PPA is not considered synonymous with Alzheimer's disease because, unlike those affected by Alzheimer's disease, those with PPA are generally able to maintain the ability to care for themselves, remain employed, and pursue interests and hobbies.
Moreover, in diseases such as Alzheimer's disease, Pick's disease, and Creutzfeldt-Jakob disease, progressive deterioration of comprehension and production of language is just one of the many possible types of mental deterioration, such as the progressive decline of memory, motor skills, reasoning, awareness, and visuospatial skills.
Causes
Currently, the specific causes for PPA and other degenerative brain disease similar to PPA are viewed as idiopathic (unknown). Autopsies have revealed a variety of brain abnormalities in |
https://en.wikipedia.org/wiki/Equidistribution%20theorem | In mathematics, the equidistribution theorem is the statement that the sequence
a, 2a, 3a, ... mod 1
is uniformly distributed on the circle ℝ/ℤ when a is an irrational number. It is a special case of the ergodic theorem where one takes the normalized angle measure μ = dθ/(2π).
History
While this theorem was proved in 1909 and 1910 separately by Hermann Weyl, Wacław Sierpiński and Piers Bohl, variants of this theorem continue to be studied to this day.
In 1916, Weyl proved that the sequence a, 2²a, 3²a, ... mod 1 is uniformly distributed on the unit interval. In 1937, Ivan Vinogradov proved that the sequence pₙa mod 1 is uniformly distributed, where pₙ is the nth prime. Vinogradov's proof was a byproduct of the odd Goldbach conjecture, that every sufficiently large odd number is the sum of three primes.
George Birkhoff, in 1931, and Aleksandr Khinchin, in 1933, proved that the generalization x + na, for almost all x, is equidistributed on any Lebesgue measurable subset of the unit interval. The corresponding generalizations for the Weyl and Vinogradov results were proven by Jean Bourgain in 1988.
Specifically, Khinchin showed that the identity

  lim_{n→∞} (1/n) ∑_{k=1}^{n} ƒ((x + ka) mod 1) = ∫₀¹ ƒ(y) dy

holds for almost all x and any Lebesgue integrable function ƒ. In modern formulations, it is asked under what conditions the identity

  lim_{n→∞} (1/n) ∑_{k=1}^{n} ƒ((x + bₖ) mod 1) = ∫₀¹ ƒ(y) dy

might hold, given some general sequence bₖ.
One noteworthy result is that the sequence 2ᵏa mod 1 is uniformly distributed for almost all, but not all, irrational a. Similarly, for the sequence bₖ = 2ᵏa, for every irrational a, and almost all x, there exists a function ƒ for which the sum diverges. In this sense, this sequence is considered to be a universally bad averaging sequence, as opposed to bₖ = k, which is termed a universally good averaging sequence, because it does not have the latter shortcoming.
A powerful general result is Weyl's criterion, which shows that equidistribution is equivalent to having a non-trivial estimate for the exponential sums formed with the sequence as exponen |
https://en.wikipedia.org/wiki/Host%20signal%20processing | Host signal processing (HSP) is a term used in computing to describe hardware such as a modem or printer which is emulated (to various degrees) in software. Intel refers to the technology as native signal processing (NSP). HSP replaces dedicated DSP or ASIC hardware by using the general purpose CPU of the host computer.
Modems using HSP are known as winmodems (a term trademarked by 3Com / USRobotics, but genericized) or softmodems. Printers using HSP are known as GDI printers (after the MS Windows GDI software interface), winprinters (named after winmodems) or softprinters.
The Apple II Disk II floppy drive used the host CPU to process drive control signals, instead of a microcontroller. This instance of HSP predates the usage of the terms HSP and NSP.
In the mid- to late-1990s, Intel pursued native signal processing technology to improve multimedia handling. According to testimony by Intel, Microsoft opposed development of NSP because the technology could reduce the necessity of the Microsoft Windows operating system. Intel claims to have terminated development of NSP because of threats from Microsoft. |
https://en.wikipedia.org/wiki/American%20Morse%20code | American Morse Code—also known as Railroad Morse—is the latter-day name for the original version of the Morse Code developed in the mid-1840s by Samuel Morse and Alfred Vail for their electric telegraph. The "American" qualifier was added because, after most of the rest of the world adopted "International Morse Code," the companies that continued to use the original Morse Code were mainly located in the United States. American Morse is now nearly extinct—it is most frequently seen in American railroad museums and American Civil War reenactments—and "Morse Code" today virtually always means the International Morse which supplanted American Morse.
History
American Morse Code was first used on the Baltimore-Washington telegraph line, a telegraph line constructed between Baltimore, Maryland, and the old Supreme Court chamber in the Capitol building in Washington, D.C. The first public message "What hath God wrought" was sent on May 24, 1844, by Morse in Washington to Alfred Vail at the Baltimore and Ohio Railroad (B&O) "outer depot" (now the B&O Railroad Museum) in Baltimore. The message is a Bible verse from Numbers 23:23, chosen for Morse by Annie Ellsworth, daughter of the Governor of Connecticut. The original paper tape received by Vail in Baltimore is on display in the Library of Congress in Washington, D.C.
In its original implementation, the Morse Code specification included the following:
short mark or dot
longer mark or dash
intra-character gap (standard gap between the dots and dashes in a character)
short gap (between letters)
medium gap (between words)
long gap (between sentences)
long intra-character gap (longer internal gap used in C, O, R, Y, Z and &)
"long dash" (the letter L)
even longer dash (the numeral 0)
Various other companies and countries soon developed their own variations of the original Morse Code. Of special importance was one standard, originally created in Germany by Friedrich Clemens Gerke in 1848, which was sim |
https://en.wikipedia.org/wiki/Cross-sectional%20regression | In statistics and econometrics, a cross-sectional regression is a type of regression in which the explained and explanatory variables are all associated with the same single period or point in time. This type of cross-sectional analysis is in contrast to a time-series regression or longitudinal regression in which the variables are considered to be associated with a sequence of points in time.
For example, in economics a regression to explain and predict money demand (how much people choose to hold in the form of the most liquid assets) could be conducted with either cross-sectional or time series data. A cross-sectional regression would have as each data point an observation on a particular individual's money holdings, income, and perhaps other variables at a single point in time, and different data points would reflect different individuals at the same point in time. In contrast, a regression using time series would have as each data point an entire economy's money holdings, income, etc. at one point in time, and different data points would be drawn on the same economy but at different points in time.
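The cross-sectional setup can be sketched with synthetic data (all numbers below are invented for illustration): each row is one individual observed at a single point in time, and ordinary least squares recovers the relation between money holdings and income.

```python
# Toy cross-sectional regression: money holdings on income, where
# every data point is a different individual at the same time period.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                        # 500 individuals, one period
income = rng.uniform(20, 100, n)               # e.g. thousands of dollars
money = 2.0 + 0.3 * income + rng.normal(0, 2, n)  # "true" relation + noise

# OLS via least squares: money_i = b0 + b1 * income_i + e_i
X = np.column_stack([np.ones(n), income])
beta, *_ = np.linalg.lstsq(X, money, rcond=None)
print(beta)   # estimates close to the true coefficients [2.0, 0.3]
```

A time-series regression would use the same formula but with each row being the whole economy's aggregates in a different period.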
See also
Linear regression
Regression analysis |
https://en.wikipedia.org/wiki/Krylov%20subspace | In linear algebra, the order-r Krylov subspace generated by an n-by-n matrix A and a vector b of dimension n is the linear subspace spanned by the images of b under the first r powers of A (starting from A⁰ = I), that is, K_r(A, b) = span{b, Ab, A²b, ..., A^(r−1)b}.
Background
The concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931.
Properties
K_r(A, b) ⊆ K_{r+1}(A, b), and A·K_r(A, b) ⊆ K_{r+1}(A, b).
Let r₀ = dim span{b, Ab, A²b, ...}. Then b, Ab, ..., A^(r−1)b are linearly independent unless r > r₀, K_r(A, b) ⊆ K_{r₀}(A, b) for all r, and dim K_{r₀}(A, b) = r₀. So r₀ is the maximal dimension of the Krylov subspaces K_r(A, b).
The maximal dimension satisfies r₀ ≤ 1 + rank A and r₀ ≤ n.
Consider dim K_n(A, b) = deg p_b, where p_b is the minimal polynomial of b with respect to A. We have r₀ ≤ deg p, where p is the minimal polynomial of A. Moreover, for any A, there exists a b for which this bound is tight, i.e. r₀ = deg p.
K_{r₀}(A, b) is a cyclic submodule generated by b of the torsion k[x]-module (kⁿ)^A, where kⁿ is the linear space over the field k.
kⁿ can be decomposed as the direct sum of Krylov subspaces.
Use
Krylov subspaces are used in algorithms for finding approximate solutions to high-dimensional linear algebra problems. Many linear dynamical system tests in control theory, especially those related to controllability and observability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of the Gramians associated with the system/output maps so the uncontrollable and unobservable subspaces are simply the orthogonal complement to the Krylov subspace.
Modern iterative methods such as Arnoldi iteration can be used for finding one (or a few) eigenvalues of large sparse matrices or solving large systems of linear equations. They try to avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vector b, one computes Ab, then one multiplies that vector by A to find A²b and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. These methods can be used in situations where there is an algorithm to compute the matrix-vector multiplication |
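The repeated matrix-vector product idea can be sketched as follows. This is an illustrative toy, not a production solver: real codes use Arnoldi or Lanczos iterations with careful re-orthogonalization and restarting (e.g. GMRES, CG), and the test matrix here is invented.

```python
# Build an orthonormal basis of the Krylov subspace span{b, Ab, A^2 b, ...}
# using only matrix-vector products with A, then solve A x = b projected
# onto that subspace (a Galerkin condition).
import numpy as np

rng = np.random.default_rng(1)
n, r = 50, 20
A = np.diag(np.linspace(1, 10, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Arnoldi-style iteration: orthogonalize A @ q_{j-1} against the basis.
Q = np.zeros((n, r))
Q[:, 0] = b / np.linalg.norm(b)
for j in range(1, r):
    v = A @ Q[:, j - 1]              # the only use of A is mat-vec products
    for i in range(j):               # modified Gram-Schmidt
        v -= (Q[:, i] @ v) * Q[:, i]
    Q[:, j] = v / np.linalg.norm(v)

# Galerkin solution of A x = b restricted to the Krylov subspace.
x_kry = Q @ np.linalg.solve(Q.T @ A @ Q, Q.T @ b)
residual = np.linalg.norm(A @ x_kry - b) / np.linalg.norm(b)
print(residual)   # small: 20 mat-vecs already give a good solution in R^50
```

The point of the sketch is that an accurate solution emerges from a subspace of dimension far below n, using nothing but matrix-vector products.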
https://en.wikipedia.org/wiki/Yearbook%20of%20International%20Organizations | The Yearbook of International Organizations is a reference work on non-profit international organizations, published by the Union of International Associations. It was first published in 1908 under the title Annuaire de la vie internationale, and has been known under its current title since 1950. It is seen as a quasi-official source associated with the United Nations.
The Yearbook contains profiles of over 67,000 organizations active in about 300 countries and territories in every field of human endeavor. It profiles both international intergovernmental organizations and non-governmental organizations, from formal structures to informal networks, from professional bodies to recreational clubs. The Yearbook does not, however, include for-profit enterprises. Profiles include names and addresses, historical and structural information, aims, links with other organizations, as well as specifics on activities, events, publications and membership. In addition to organization profiles, the Yearbook also provides biographies of important members, a bibliography of the important publications of international organizations, and statistics.
The Yearbook is published in six book volumes and online.
See also
International Congress Calendar
Encyclopedia of World Problems and Human Potential |
https://en.wikipedia.org/wiki/Uranium%20rhodium%20germanium | Uranium rhodium germanium (URhGe) is the first metal discovered to become superconducting in the presence of an extremely strong magnetic field. Unlike other superconducting materials, which can lose their superconducting properties in strong magnetic fields, uranium rhodium germanium actually regains its superconducting abilities at about 8 teslas.
Process
URhGe's critical temperature (Tc) is normally about 280 millikelvins.
The Grenoble team in France, headed by Andrew D. Huxley, first cooled down the sample below its critical temperature and raised the magnetic field to 2 T. As expected, the sample's superconducting properties vanished. However, when the team raised the magnetic field to 8 T, the superconducting behavior continued. The critical temperature at that field strength increased to about 400 millikelvins. The sample retained the superconducting state until 13 T. They also found that at 12 T, the URhGe sample experienced a magnetic phase transition. |
https://en.wikipedia.org/wiki/Content%20reference%20identifier |
Overview
A content reference identifier or CRID is a concept from the standardization work done by the TV-Anytime forum. It matches, or closely parallels, the concept of the Uniform Resource Locator (URL) used on the World Wide Web:
The CRID concept permits referencing content unambiguously, regardless of its location, i.e., without knowing specific broadcast information (time, date and channel) or how to obtain it over a network, for instance, by means of a streaming service or by downloading a file from an Internet server.
The receiver must be capable of resolving these unambiguous references, i.e. of translating them into specific data that will allow it to obtain the location of that content in order to acquire it. This makes it possible for recording processes to take place without knowing that information, and even without knowing beforehand the duration of the content to be recorded: a complete series by a simple click, a program that has not been scheduled yet, a set of programs grouped by a specific criterion…
This framework allows for the separation between the reference to a given content (the CRID) and the information necessary to acquire it, which is called a "locator". Each CRID may lead to one or more locators which represent different copies of the same content. They may be identical copies broadcast on different channels or dates, or offered at different prices. They may also be distinct copies with different technical parameters such as format or quality.
It may also be the case that the resolution process of a CRID provides another CRID as a result (for example, its reference in a different network, where it has an alternative identifier assigned by a different operator) or a set of CRIDs (for instance, if the original CRID represents a TV series, in which case the resolution process would result in the list of CRIDs representing each episode).
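The resolution chain described above can be sketched as a lookup table. The CRIDs and locators below are invented examples (real resolution is performed by the receiver against broadcast or network resolution data, and real locator syntax depends on the delivery system):

```python
# Toy CRID resolution: a CRID may resolve to locators, or to further
# CRIDs (e.g. a series resolving to its episodes), resolved in turn.
RESOLUTION = {
    "crid://broadcaster.example/series-1": [
        "crid://broadcaster.example/series-1/ep1",
        "crid://broadcaster.example/series-1/ep2",
    ],
    "crid://broadcaster.example/series-1/ep1": [
        "dvb://233a.1004.1044;abc@2024-01-01T20:00Z",   # invented locator
    ],
    "crid://broadcaster.example/series-1/ep2": [
        "dvb://233a.1004.1044;abd@2024-01-08T20:00Z",   # invented locator
    ],
}

def resolve(ref):
    """Recursively resolve a CRID until only locators remain."""
    results = RESOLUTION.get(ref, [ref])
    locators = []
    for r in results:
        if r.startswith("crid://") and r != ref:
            locators += resolve(r)       # a CRID resolving to more CRIDs
        else:
            locators += [r]              # a concrete locator
    return locators

print(resolve("crid://broadcaster.example/series-1"))
```

Resolving the series CRID yields the flat list of episode locators, which is exactly the "record a whole series with one click" behaviour described above.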
From the above it can be concluded that provided that a given content can belong to many |
https://en.wikipedia.org/wiki/TV-Anytime | TV-Anytime is a set of specifications for the controlled delivery of multimedia content to a user's local storage. It seeks to exploit the evolution in convenient, high capacity storage of digital information to provide consumers with a highly personalized TV experience. Users will have access to content from a wide variety of sources, tailored to their needs and personal preferences. TV-Anytime specifications are specified by the TV-Anytime Forum.
The TV-Anytime Forum
The global TV-Anytime Forum is an association of organizations which seeks to develop specifications to enable audio-visual and other services based on mass-market high volume digital storage in consumer platforms.
It was formed in Newport Beach, California, United States, on 27–29 September 1999 after DAVIC was closed down. It was wound up on 27 July 2005 following the publication of RFC 4078 (reference: http://www.tv-anytime.org/).
Its first specifications were published by the European Telecommunications Standards Institute (ETSI) on August 1, 2003 as TS 102 822-1 'Broadcast and On-line Services: Search, select, and rightful use of content on personal storage systems ("TV-Anytime")'. RFC 4078 (The TV-Anytime Content Reference Identifier (CRID)) was published in May 2005.
TV-Anytime has more than 60 member companies from Europe (including BBC, BSkyB, Canal+ Technologies, Disney, EBU, Nederlands Omroep Productie Bedrijf (NOB), France Telecom, Nokia, Philips, PTT Research, Thomson), Asia (including ETRI, KETI, NHK, NTT, Dentsu, Hakuhodo, Nippon TV, Sony, Panasonic, LG, Samsung, Sharp, Toshiba) and the USA (including Motorola, Microsoft, and Nielsen).
The objectives
The TV-Anytime Forum has set up the following four objectives for their standardization work:
Develop specifications that will enable applications to exploit local persistent storage in consumer electronics platforms.
Be network independent with regard to the means for content delivery to consumer electronics equipment, including |
https://en.wikipedia.org/wiki/Bus%20bunching | In public transport, bus bunching, clumping, convoying, piggybacking or platooning is a phenomenon whereby two or more transit vehicles (such as buses or trains) that were scheduled at regular intervals along a common route instead bunch together and form a platoon. This occurs when leading vehicles are unable to keep their schedule and fall behind to such an extent that trailing vehicles catch up to them.
Description
A bus that is running slightly late will, in addition to its normal load, pick up passengers who would have taken the next bus if the first bus had not been late. These extra passengers delay the first bus even further. In contrast, the bus behind the late bus has a lighter passenger load than it otherwise would have, and may therefore run ahead of schedule. The classical causal model for irregular intervals is based on the observation that a late bus tends to get later and later as it completes its run, while the bus following it tends to get earlier and earlier. Eventually these buses form a pair, one right after another, and the service deteriorates as the headway degrades from its nominal value. The buses that are stuck together are called a bus bunch or banana bus; this may also involve more than two buses. This effect is often theorised to be the primary cause of reliability problems on bus and metro systems. Simulation studies have successfully demonstrated the extent of possible factors influencing bus bunching, and they may also be used to understand the impact of actions taken to overcome negative effects of bunching.
Clumping can be caused by random heavy usage of any particular vehicle, resulting in it falling behind schedule. The leading vehicle eventually lapses towards the time slot of a later scheduled vehicle. Sometimes, the later scheduled vehicle gets ahead of its own timetable, and the two vehicles meet between their scheduled times. Sometimes one scheduled vehicle may pass another.
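The feedback loop described above can be sketched with a toy model (the growth rate, headway, and initial delay are invented for illustration, and the follower's behaviour is simplified away): a small initial delay compounds at every stop, so the scheduled gap to the following bus collapses.

```python
# Toy bus-bunching model: a late bus boards extra passengers at each
# stop, so its delay grows geometrically and the headway to the bus
# behind it shrinks until the two buses meet.
k = 0.3                 # extra delay per unit of delay at each stop
headway = 10.0          # scheduled gap to the following bus, minutes
delay = 1.0             # the leading bus starts one minute late

gaps = [headway]
for stop in range(10):
    delay *= 1 + k                      # passenger feedback: later and later
    gaps.append(max(0.0, round(headway - delay, 1)))

print(gaps)   # the gap shrinks monotonically to 0: the buses have bunched
```

Keeping the follower's schedule fixed understates the effect; as the text notes, the trailing bus actually runs early, so real headways collapse even faster.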
Clumping can be prevented or reduced as |
https://en.wikipedia.org/wiki/Chorus%20%28audio%20effect%29 | Chorus (or chorusing, choruser or chorused effect) is an audio effect that occurs when individual sounds with approximately the same timing and very similar pitches converge. While similar sounds coming from multiple sources can occur naturally, as in the case of a choir or string orchestra, it can also be simulated using an electronic effects unit or signal processing device.
When the effect is produced successfully, none of the constituent sounds are perceived as being out of tune. It is characteristic of sounds with a rich, shimmering quality that would be absent if the sound came from a single source. The shimmer occurs because of beating. The effect is more apparent when listening to sounds that sustain for longer periods of time.
The chorus effect is especially easy to hear when listening to a choir or string ensemble. A choir has multiple people singing each part (alto, tenor, etc.). A string ensemble has multiple violinists and possibly multiples of other stringed instruments.
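An electronic chorus unit simulates this spread by mixing the signal with a copy read through a slowly modulated delay line. A minimal sketch follows (the delay, depth, and LFO rate are typical but illustrative values, not from the source):

```python
# Minimal digital chorus: mix the dry signal with a copy whose delay
# is swept by a low-frequency oscillator (LFO). The sweeping delay
# slightly detunes the copy, producing the shimmering/beating quality.
import numpy as np

sr = 44100
t = np.arange(sr) / sr                      # one second of audio
dry = np.sin(2 * np.pi * 220 * t)           # a plain 220 Hz tone

base_delay = 0.020                          # 20 ms nominal delay
depth = 0.003                               # +/- 3 ms sweep
rate = 0.8                                  # LFO rate in Hz
delay = base_delay + depth * np.sin(2 * np.pi * rate * t)

# Read the signal at a fractionally delayed position (linear interp).
idx = np.arange(sr) - delay * sr
wet = np.interp(idx, np.arange(sr), dry, left=0.0)

chorus = 0.5 * (dry + wet)                  # equal dry/wet mix
print(chorus.shape)
```

The changing delay continuously stretches and compresses the wet copy, which is exactly the small pitch deviation that makes the mix beat against the dry signal.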
Acoustically created
Although most acoustic instruments cannot produce a chorus effect by themselves, some instruments (particularly chordophones with multiple courses of strings) can produce it as part of their own design. The effect can make these acoustic instruments sound fuller and louder than a single tone generator would (e.g., a single vibrating string or reed). Some examples:
Piano – Each of the hammers strikes a course of multiple strings tuned to nearly the same pitch (for all notes except the bass notes). Professional piano tuners carefully control the mistuning of each string to add movement without losing clarity. However, on some poorly maintained instruments (like honky-tonk pianos), the effect is more prominent.
Santur (and similar coursed-hammered dulcimers) – As on the piano, the player strikes (using a pair of hand-held hammers) a course of multiple strings tuned to nearly the same pitch. As the instrument is frequently tuned by the mus |
https://en.wikipedia.org/wiki/GUID%20Partition%20Table | The GUID Partition Table (GPT) is a standard for the layout of partition tables of a physical computer storage device, such as a hard disk drive or solid-state drive, using universally unique identifiers, which are also known as globally unique identifiers (GUIDs). Forming a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum-proposed replacement for the PC BIOS), it is nevertheless also used for some BIOSs, because of the limitations of master boot record (MBR) partition tables, which use 32 bits for logical block addressing (LBA) of traditional 512-byte disk sectors.
All modern personal computer operating systems support GPT. Some, including macOS and Microsoft Windows on the x86 architecture, support booting from GPT partitions only on systems with EFI firmware, but FreeBSD and most Linux distributions can boot from GPT partitions on systems with either the BIOS or the EFI firmware interface.
History
The Master Boot Record (MBR) partitioning scheme, widely used since the early 1980s, imposed limitations for use of modern hardware. The available size for block addresses and related information is limited to 32 bits. For hard disks with 512-byte sectors, the MBR partition table entries allow a maximum size of 2 TiB (2³² × 512 bytes) or 2.20 TB (2.20 × 10¹² bytes).
In the late 1990s, Intel developed a new partition table format as part of what eventually became the Unified Extensible Firmware Interface (UEFI). The GUID Partition Table is specified in chapter 5 of the UEFI 2.8 specification. GPT uses 64 bits for logical block addresses, allowing a maximum disk size of 2⁶⁴ sectors. For disks with 512-byte sectors, the maximum size is 8 ZiB (2⁶⁴ × 512 bytes) or 9.44 ZB (9.44 × 10²¹ bytes). For disks with 4,096-byte sectors the maximum size is 64 ZiB (2⁶⁴ × 4,096 bytes) or 75.6 ZB (75.6 × 10²¹ bytes).
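The size limits quoted above follow directly from the address width and the sector size; a quick check:

```python
# Maximum addressable disk size = 2^(address bits) * sector size.
def max_bytes(address_bits, sector_bytes):
    return 2 ** address_bits * sector_bytes

TiB = 2 ** 40
ZiB = 2 ** 70

print(max_bytes(32, 512) / TiB)     # MBR, 512-byte sectors -> 2.0 TiB
print(max_bytes(64, 512) / ZiB)     # GPT, 512-byte sectors -> 8.0 ZiB
print(max_bytes(64, 4096) / ZiB)    # GPT, 4096-byte sectors -> 64.0 ZiB
```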
In 2010, hard-disk manufacturers introduced drives with 4,096-byte sectors (Advanced Format). For compatibility with legacy hardware and so |
https://en.wikipedia.org/wiki/SH2%20domain | The SH2 (Src Homology 2) domain is a structurally conserved protein domain contained within the Src oncoprotein and in many other intracellular signal-transducing proteins. SH2 domains bind to phosphorylated tyrosine residues on other proteins, modifying the function or activity of the SH2-containing protein. The SH2 domain may be considered the prototypical modular protein-protein interaction domain, allowing the transmission of signals controlling a variety of cellular functions. SH2 domains are especially common in adaptor proteins that aid in the signal transduction of receptor tyrosine kinase pathways.
Structure and interactions
SH2 domains contain about 100 amino acid residues and exhibit a central antiparallel β-sheet centered between two α-helices. Binding to phosphotyrosine-containing peptides involves a strictly-conserved Arg residue that pairs with the negatively-charged phosphate on the phosphotyrosine, and a surrounding pocket that recognizes flanking sequences on the target peptide. Compared to other signaling proteins, SH2 domains exhibit only a moderate degree of specificity for their target peptides, due to the relative weakness of the interactions with the flanking sequences.
Over 100 human proteins are known to contain SH2 domains. A variety of tyrosine-containing sequences have been found to bind SH2 domains and are conserved across a wide range of organisms, performing similar functions. Binding of a phosphotyrosine-containing protein to an SH2 domain may lead to either activation or inactivation of the SH2-containing protein, depending on the types of interactions formed between the SH2 domain and other domains of the enzyme. Mutations that disrupt the structural stability of the SH2 domain, or that affect the binding of the phosphotyrosine peptide of the target, are involved in a range of diseases including X-linked agammaglobulinemia and severe combined immunodeficiency.
Diversity
SH2 domains are not present in yeast and appear at t |
https://en.wikipedia.org/wiki/Orientation%20%28geometry%29 | In geometry, the orientation, attitude, bearing, direction, or angular position of an object – such as a line, plane or rigid body – is part of the description of how it is placed in the space it occupies.
More specifically, it refers to the imaginary rotation that is needed to move the object from a reference placement to its current placement. A rotation may not be enough to reach the current placement, in which case it may be necessary to add an imaginary translation to change the object's position (or linear position). The position and orientation together fully describe how the object is placed in space. The above-mentioned imaginary rotation and translation may be thought to occur in any order, as the orientation of an object does not change when it translates, and its position does not change when it rotates.
Euler's rotation theorem shows that in three dimensions any orientation can be reached with a single rotation around a fixed axis. This gives one common way of representing the orientation using an axis–angle representation. Other widely used methods include rotation quaternions, rotors, Euler angles, or rotation matrices. More specialist uses include Miller indices in crystallography, strike and dip in geology and grade on maps and signs.
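The axis–angle representation mentioned above can be converted to a rotation matrix with Rodrigues' rotation formula, R = I + sin(θ)K + (1 − cos(θ))K², where K is the skew-symmetric cross-product matrix of the unit rotation axis. A minimal sketch in Python (function names are illustrative):

```python
import math

def rotation_matrix(axis, angle):
    """Rodrigues' formula: rotation by `angle` (radians) about `axis`."""
    n = math.sqrt(sum(a * a for a in axis))
    x, y, z = (a / n for a in axis)          # normalize the axis
    c, s = math.cos(angle), math.sin(angle)
    t = 1.0 - c
    # R = I + s*K + t*K^2, written out componentwise
    return [
        [c + x * x * t,     x * y * t - z * s, x * z * t + y * s],
        [y * x * t + z * s, c + y * y * t,     y * z * t - x * s],
        [z * x * t - y * s, z * y * t + x * s, c + z * z * t],
    ]

def apply(R, v):
    """Apply a 3x3 rotation matrix to a vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

# A 90-degree rotation about the z-axis takes the x-axis to the y-axis.
R = rotation_matrix((0, 0, 1), math.pi / 2)
print(apply(R, (1, 0, 0)))  # approximately [0, 1, 0]
```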
A unit vector may also be used to represent an object's normal-vector orientation, or the relative direction between two points.
Typically, the orientation is given relative to a frame of reference, usually specified by a Cartesian coordinate system.
Two objects sharing the same direction are said to be codirectional (as in parallel lines).
Two directions are said to be opposite if each is the additive inverse of the other, as with an arbitrary unit vector and its negation (multiplication by −1).
Two directions are obtuse if they form an obtuse angle (greater than a right angle) or, equivalently, if their scalar product or scalar projection is negative.
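These relations between directions reduce to simple conditions on the scalar (dot) product, as noted above for the obtuse case. A short illustrative check in Python, assuming unit vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def codirectional(u, v, eps=1e-9):
    # Unit vectors sharing the same direction: u . v = 1
    return abs(dot(u, v) - 1) < eps

def opposite(u, v, eps=1e-9):
    # v is the additive inverse of u: u . v = -1 for unit vectors
    return abs(dot(u, v) + 1) < eps

def obtuse(u, v):
    # Negative scalar product means the angle exceeds a right angle
    return dot(u, v) < 0

print(codirectional((1, 0), (1, 0)))  # True
print(opposite((1, 0), (-1, 0)))      # True
print(obtuse((1, 0), (-0.6, 0.8)))    # True: dot product is -0.6
```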
Mathematical representations
Three dimensions
In general the position and |
https://en.wikipedia.org/wiki/Coherent%20control | Coherent control is a quantum mechanics-based method for controlling dynamic processes by light. The basic principle is to control quantum interference phenomena, typically by shaping the phase of laser pulses. The basic ideas have proliferated, finding vast application in spectroscopy, mass spectrometry, quantum information processing, laser cooling, ultracold physics, and more.
Brief history
The initial idea was to control the outcome of chemical reactions. Two approaches were pursued:
in the time domain, a "pump-dump" scheme where the control is the time delay between pulses
in the frequency domain, interfering pathways controlled by one and three photons.
The two basic methods eventually merged with the introduction of optimal control theory.
Experimental realizations soon followed in the time domain and in the frequency domain. Two interlinked developments accelerated the field of coherent control: experimentally, it was the development of pulse shaping by a spatial light modulator and its employment in coherent control. The second development was the idea of automatic feedback control and its experimental realization.
Controllability
Coherent control aims to steer a quantum system from an initial state to a target state via an external field. For given initial and final (target) states, the coherent control is termed state-to-state control. A generalization is steering simultaneously an arbitrary set of initial pure states to an arbitrary set of final states i.e. controlling a unitary transformation. Such an application sets the foundation for a quantum gate operation.
Controllability of a closed quantum system has been addressed by Tarn and Clark. Their theorem, based in control theory, states that for a finite-dimensional closed quantum system, the system is completely controllable, i.e. an arbitrary unitary transformation of the system can be realized by an appropriate application of the controls, if the control operators and the unperturbed Hamiltonian |
https://en.wikipedia.org/wiki/Robinson%20arithmetic | In mathematics, Robinson arithmetic is a finitely axiomatized fragment of first-order Peano arithmetic (PA), first set out by Raphael M. Robinson in 1950. It is usually denoted Q. Q is almost PA without the axiom schema of mathematical induction. Q is weaker than PA but it has the same language, and both theories are incomplete. Q is important and interesting because it is a finitely axiomatized fragment of PA that is recursively incompletable and essentially undecidable.
Axioms
The background logic of Q is first-order logic with identity, denoted by infix '='. The individuals, called natural numbers, are members of a set called N with a distinguished member 0, called zero. There are three operations over N:
A unary operation called successor and denoted by prefix S;
Two binary operations, addition and multiplication, denoted by infix + and ·, respectively.
The following axioms for Q are Q1–Q7 in (cf. also the axioms of first-order arithmetic). Variables not bound by an existential quantifier are bound by an implicit universal quantifier.
Sx ≠ 0
0 is not the successor of any number.
(Sx = Sy) → x = y
If the successor of x is identical to the successor of y, then x and y are identical. (1) and (2) yield the minimum of facts about N (it is an infinite set bounded by 0) and S (it is an injective function whose domain is N) needed for non-triviality. The converse of (2) follows from the properties of identity.
y=0 ∨ ∃x (Sx = y)
Every number is either 0 or the successor of some number. The axiom schema of mathematical induction present in arithmetics stronger than Q turns this axiom into a theorem.
x + 0 = x
x + Sy = S(x + y)
(4) and (5) are the recursive definition of addition.
x·0 = 0
x·Sy = (x·y) + x
(6) and (7) are the recursive definition of multiplication.
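Axioms (4)–(7) are exactly the clauses of a recursive evaluator for addition and multiplication over the unary numerals 0, S0, SS0, …. The following Python sketch mirrors them directly (the numeral encoding is illustrative):

```python
ZERO = "0"

def S(n):
    return ("S", n)              # successor: Sx

def add(x, y):
    if y == ZERO:                # axiom (4): x + 0 = x
        return x
    return S(add(x, y[1]))       # axiom (5): x + Sy = S(x + y)

def mul(x, y):
    if y == ZERO:                # axiom (6): x * 0 = 0
        return ZERO
    return add(mul(x, y[1]), x)  # axiom (7): x * Sy = (x * y) + x

def numeral(k):                  # build SS...S0 with k successors
    return ZERO if k == 0 else S(numeral(k - 1))

def value(n):                    # decode a numeral back to an int
    return 0 if n == ZERO else 1 + value(n[1])

print(value(add(numeral(2), numeral(3))))  # 5
print(value(mul(numeral(3), numeral(4))))  # 12
```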
Variant axiomatizations
The axioms in are (1)–(13) in . The first 6 of Robinson's 13 axioms are required only when, unlike here, the background logic does not include identity.
The usual stri |
https://en.wikipedia.org/wiki/Capacitance%20multiplier | A capacitance multiplier is designed to make a capacitor function like a much larger capacitor. This can be achieved in at least two ways.
An active circuit, using a device such as a transistor or operational amplifier
A passive circuit, using autotransformers. These are typically used for calibration standards. The General Radio / IET labs 1417 is one such example.
Capacitance multipliers make low-frequency filters and long-duration timing circuits possible that would be impractical with actual capacitors. Another application is in DC power supplies where very low ripple voltage (under load) is of paramount importance, such as in class-A amplifiers.
Transistor-based
Here the capacitance of capacitor C1 is multiplied by approximately the transistor's current gain (β).
Without Q, R2 would be the load on the capacitor. With Q in place, the loading imposed upon C1 is simply the load current reduced by a factor of (β + 1). Consequently, C1 appears multiplied by a factor of (β + 1) when viewed by the load.
Another way to view this circuit is as an emitter follower in which capacitor C1 holds the base voltage constant; the load it sees is the input impedance of Q1, namely R2 multiplied by (1 + β), so the output current is stabilized much more strongly against power-line voltage noise.
Operational amplifier based
Here, the capacitance of capacitor C1 is multiplied by the ratio of resistances: C = C1 * R1 / R2 at the Vi node.
The synthesized capacitance also brings a series resistance approximately equal to R2, and a leakage current appears across the capacitance because of the input offsets of OP. These problems can be avoided by a circuit with two op amps. In this circuit the input to OP1 can be a.c.-coupled if necessary, and the capacitance can be made variable by making the ratio of R1 to R2 variable. C = C1 * (1 + (R2 / R1)).
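The two multiplication rules above are easy to tabulate. A small Python sketch, treating the formulas exactly as stated (component names refer to the circuits described in the text):

```python
def transistor_multiplier(c1, beta):
    """Effective capacitance of the transistor circuit: C1 * (beta + 1)."""
    return c1 * (beta + 1)

def opamp_multiplier(c1, r1, r2):
    """Effective capacitance at the Vi node: C1 * R1 / R2.
    The synthesized capacitor also carries a series resistance of about R2."""
    return c1 * r1 / r2

# 10 uF behind a transistor with beta = 200 looks like about 2 mF:
print(transistor_multiplier(10e-6, 200))  # 0.00201
# 1 uF with R1 = 1 Mohm and R2 = 1 kohm looks like 1 mF:
print(opamp_multiplier(1e-6, 1e6, 1e3))   # 0.001
```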
In the circuits described above the capacitance is grounded, but floating capacitance multipliers are possible.
A negative capacitance multiplier can be create |
https://en.wikipedia.org/wiki/Crocket | A crocket (or croquet) is a small, independent decorative element common in Gothic architecture. The name derives from the diminutive of the French croc, meaning "hook", due to the resemblance of a crocket to a bishop's crook-shaped crosier.
Description
Crockets, in the form of stylized carvings of curled leaves, buds or flowers, are used at regular intervals to decorate (for example) the sloping edges of spires, finials, pinnacles, and wimpergs.
As ornaments
When crockets decorate the capitals of columns, these are called crocket capitals. This element is also used as an ornament on furniture and metalwork in the Gothic style.
Examples
All Souls College – Oxford
Canterbury Cathedral
Notre Dame Cathedral – Paris
León Cathedral – Spain
Duke Chapel |
https://en.wikipedia.org/wiki/Evolutionarily%20stable%20state | A population can be described as being in an evolutionarily stable state when that population's "genetic composition is restored by selection after a disturbance, provided the disturbance is not too large" (Maynard Smith, 1982). This population as a whole can be either monomorphic or polymorphic. This is now referred to as convergent stability.
History & connection to evolutionarily stable strategy
While related to the concept of an evolutionarily stable strategy (ESS), evolutionarily stable states are not identical and the two terms cannot be used interchangeably.
An ESS is a strategy that, if adopted by all individuals within a population, cannot be invaded by alternative or mutant strategies. This strategy becomes fixed in the population because alternatives provide no fitness benefit that would be selected for. In comparison, an evolutionarily stable state describes a population that returns as a whole to its previous composition even after being disturbed. In short: the ESS refers to the strategy itself, uninterrupted and supported through natural selection, while the evolutionarily stable state refers more broadly to a population-wide balance of one or more strategies that may be subjected to temporary change.
The term ESS was first used by John Maynard Smith in an essay from the 1972 book On Evolution. Maynard Smith developed the ESS drawing in part from game theory and Hamilton’s work on the evolution of sex ratio. The ESS was later expanded upon in his book Evolution and the Theory of Games in 1982, which also discussed the evolutionarily stable state.
Mixed v. single strategies
There has been variation in how the term is used and exploration of under what conditions an evolutionarily stable state might exist. In 1984, Bernhard Thomas compared “discrete” models in which all individuals use only one strategy to “continuous” models in which individuals employ mixed strategies. While Maynard Smith had originally defined an ESS as being a single “uninvadable
https://en.wikipedia.org/wiki/English%20in%20computing | The English language is sometimes described as the lingua franca of computing. In comparison to other sciences, where Latin and Greek are often the principal sources of vocabulary, computer science borrows more extensively from English. In the past, due to the technical limitations of early computers, and the lack of international standardization on the Internet, computer users were limited to using English and the Latin alphabet. However, this historical limitation is less present today, due to innovations in internet infrastructure and increases in computer speed. Most software products are localized in numerous languages and the invention of the Unicode character encoding has resolved problems with non-Latin alphabets. Some limitations have only been changed recently, such as with domain names, which previously allowed only ASCII characters.
English is seen as having this role due to the prominence of the United States and the United Kingdom, both English-speaking countries, in the development and popularization of computer systems, computer networks, software and information technology.
History
Computer science has an ultimately mathematical foundation which was laid by non-English-speaking cultures. The first mathematically literate societies in the Ancient Near East recorded methods for solving mathematical problems in steps. The word 'algorithm' comes from the name of a famous medieval Arabic mathematician who contributed to the spread of Hindu-Arabic numerals, al-Khwārizmī, and the first systematic treatment of binary numbers was completed by Leibniz, a German mathematician. Leibniz wrote his treatise on the topic in French, the lingua franca of science at the time, and innovations in what is now called computer hardware occurred outside of an English tradition, with Pascal inventing the first mechanical calculator, and Leibniz improving it.
Interest in building computing machines first emerged in the 19th century, with the coming of the Second Industrial |
https://en.wikipedia.org/wiki/National%20Archive%20of%20Computerized%20Data%20on%20Aging | The National Archive of Computerized Data on Aging (NACDA), located within ICPSR, is funded by the National Institute on Aging (NIA). NACDA's mission is to advance research on aging by helping researchers to profit from the under-exploited potential of a broad range of datasets.
NACDA acquires and preserves data relevant to gerontological research, processing as needed to promote effective research use, disseminates them to researchers, and facilitates their use. By preserving and making available the largest library of electronic data on aging in the United States, NACDA offers opportunities for secondary analysis on major issues of scientific and policy relevance.
Description
NACDA is a program within the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. The NACDA collection consists of over sixteen hundred datasets relevant to gerontological research and represents the world's largest collection of publicly available research data on the aging lifecourse.
History
The NACDA Program on Aging began over 45 years ago under the sponsorship of the United States Administration on Aging (AoA). At that time NACDA was seen as a novel experiment: neither the concept of a research archive devoted to aging issues nor the idea of making research data freely available to the public was well established. Over the years, NACDA's mission has changed both in scope and in direction. Originally conceived as a storehouse for data, NACDA has aggressively pursued a role of increasing involvement in the research community by actively promoting and distributing data. In 1984, the NIA became the sponsor of the National Archive of Computerized Data on Aging, and NACDA has flourished under its support. Over the years, NACDA has evolved and grown in response to changes in technology, in many instances leading the pace of change in methodology related to the storage, protection, and distribution of data.
NACDA was one of the first organizations |
https://en.wikipedia.org/wiki/Heyting%20arithmetic | In mathematical logic, Heyting arithmetic is an axiomatization of arithmetic in accordance with the philosophy of intuitionism. It is named after Arend Heyting, who first proposed it.
Axiomatization
Heyting arithmetic can be characterized just like the first-order theory of Peano arithmetic (PA), except that it uses the intuitionistic predicate calculus for inference. In particular, this means that the double-negation elimination principle, as well as the principle of the excluded middle (PEM), do not hold. Note that to say PEM does not hold exactly means that the excluded middle statement φ ∨ ¬φ is not automatically provable for all propositions φ; indeed many such statements are still provable in Heyting arithmetic (HA), and the negation of any such disjunction is inconsistent. PA is strictly stronger than HA in the sense that all HA-theorems are also PA-theorems.
Heyting arithmetic comprises the axioms of Peano arithmetic and the intended model is the collection of natural numbers ℕ. The signature includes zero "0" and the successor "S", and the theories characterize addition and multiplication. This impacts the logic: with 0 ≠ S0, it is a metatheorem that the absurdity ⊥ can be defined as 0 = S0, so that ¬φ is φ → ⊥ for every proposition φ. The negation of ⊥ is then of the form ⊥ → ⊥ and thus a trivial proposition.
For a number n, write n for the numeral SS⋯S0 obtained by applying the successor symbol n times to 0.
For a fixed number n, the equality n = n is true by reflexivity and a proposition is equivalent to .
It may be shown that the disjunction of two propositions can then be defined in terms of an existential quantifier over numbers. This formal elimination of disjunctions was not possible in the quantifier-free primitive recursive arithmetic PRA. The theory may be extended with function symbols for any primitive recursive function, making PRA also a fragment of this theory. For a total function f, one often considers predicates of the form f(x) = y.
Theorems
Double negations
With explosion valid in any intuitionistic theory, if ¬φ is a theorem for some proposition φ, then by definition φ is provable if and only if the theory is inconsistent. Indeed, in Heyting arithmetic the double-negation explicitly expresses . For a predicate , |
https://en.wikipedia.org/wiki/Luminous%20energy | In photometry, luminous energy is the perceived energy of light. This is sometimes called the quantity of light. Luminous energy is not the same as radiant energy, the corresponding objective physical quantity. This is because the human eye can only see light in the visible spectrum and has different sensitivities to light of different wavelengths within the spectrum. When adapted for bright conditions (photopic vision), the eye is most sensitive to light at a wavelength of 555 nm. Light with a given amount of radiant energy will have more luminous energy if the wavelength is 555 nm than if the wavelength is longer or shorter. Light whose wavelength is well outside the visible spectrum has a luminous energy of zero, regardless of the amount of radiant energy present.
The SI unit of luminous energy is the lumen second, which is unofficially known as the talbot in honor of William Henry Fox Talbot. In other systems of units, luminous energy may be expressed in basic units of energy.
Explanation
Luminous energy is related to radiant energy by the expression

Q_v = 683.002 (lm/W) ∫ V(λ) (dQ_e/dλ) dλ

Here λ is the wavelength of light, Q_e is the radiant energy, and V(λ) is the luminosity function, which represents the eye's sensitivity to different wavelengths of light.
Luminous energy is the luminous flux Φ_v integrated over a given period of time:

Q_v = ∫ Φ_v dt
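For a monochromatic source the spectral integral collapses to Q_v = 683 lm/W · V(λ) · Q_e. The sketch below uses a common Gaussian approximation to the photopic luminosity function (the exact V(λ) is tabulated data, so the numbers are approximate):

```python
import math

LM_PER_W = 683.0  # luminous efficacy at 555 nm, in lumens per watt

def v_photopic(wavelength_nm):
    """Gaussian approximation to the photopic luminosity function V(lambda)."""
    um = wavelength_nm / 1000.0
    return 1.019 * math.exp(-285.4 * (um - 0.559) ** 2)

def luminous_energy(radiant_energy_j, wavelength_nm):
    """Approximate luminous energy (lm*s) of a monochromatic source."""
    return LM_PER_W * v_photopic(wavelength_nm) * radiant_energy_j

# One joule at 555 nm is worth roughly 683 lm*s ...
print(round(luminous_energy(1.0, 555)))  # ~693 with this approximation
# ... but far less at 650 nm, where the eye is less sensitive:
print(round(luminous_energy(1.0, 650)))  # ~65
```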
See also
Coefficient of utilization
Radiant energy |
https://en.wikipedia.org/wiki/AMTOR | AMTOR (Amateur Teleprinting Over Radio) is a type of telecommunications system that consists of two or more electromechanical teleprinters in different locations that send and receive messages to one another. AMTOR is a specialized form of the RTTY protocol, derived from ITU-R recommendation 476-1 and known commercially as SITOR (Simplex Telex Over Radio), which was developed primarily for maritime use in the 1970s. AMTOR was developed in 1978 by Peter Martinez, G3PLX, with the first contact taking place in September 1978 with G3YYD on the 2m amateur band. It was developed on homemade Motorola 6800-based microcomputers in assembler code. It was used extensively by amateur radio operators in the 1980s and 1990s but has since fallen out of use as improved PC-based data modes took over and teleprinters fell out of fashion.
AMTOR improves on RTTY by incorporating error detection or error correction techniques. The protocol remains relatively uncomplicated and AMTOR performs well even in poor and noisy HF conditions. AMTOR operates in one of two modes: an error detection mode and an automatic repeat request (ARQ) mode.
The AMTOR protocol utilizes a 7-bit code for each character, with each code-word having four mark and three space bits. If the received code does not match a four-to-three (4:3) ratio, the receiver assumes an error has occurred. In error detection mode, the code word will be dropped; in automatic repeat request mode, the receiver requests that the original data be resent. AMTOR also supports FEC in which simple bit-errors can be corrected.
AMTOR utilizes FSK, with a frequency shift of 170 Hz, and a symbol rate of 100 Baud.
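The constant-ratio check described above is a simple population count: a received 7-bit word is valid only if exactly four of its bits are marks. A sketch in Python:

```python
def is_valid_codeword(word):
    """A 7-bit AMTOR/SITOR code word must contain exactly 4 mark bits
    (ones) and 3 space bits (zeros); anything else signals an error."""
    return 0 <= word < 128 and bin(word).count("1") == 4

# Enumerate the usable code words: C(7,4) = 35 of the 128 possible words.
valid = [w for w in range(128) if is_valid_codeword(w)]
print(len(valid))  # 35
```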
AMTOR is rarely used today, as other protocols such as PSK31 are becoming favoured by amateur operators for real-time text communications. The ARRL has announced that as of August 17, 2009, it will be dropping AMTOR bulletin service in favor of the more popular MFSK16 and P |
https://en.wikipedia.org/wiki/Enstrophy | In fluid dynamics, the enstrophy can be interpreted as another type of potential density; or, more concretely, the quantity directly related to the kinetic energy in the flow model that corresponds to dissipation effects in the fluid. It is particularly useful in the study of turbulent flows, and is often identified in the study of thrusters as well as in combustion theory and meteorology.
Given a domain Ω ⊆ ℝⁿ and a once-weakly differentiable vector field u which represents a fluid flow, such as a solution to the Navier-Stokes equations, its enstrophy is given by:

E(u) = ∫_Ω |∇u|² dx

where |∇u|² = Σᵢⱼ |∂ᵢuⱼ|². This quantity is the same as the squared seminorm |u|²_{H¹(Ω)} of the solution in the Sobolev space H¹(Ω).
Incompressible flow
In the case that the flow is incompressible, or equivalently that ∇·u = 0, the enstrophy can be described as the integral of the square of the vorticity ω:

E(u) = ∫_Ω |ω|² dx

or, in terms of the flow velocity:

E(u) = ∫_Ω |∇ × u|² dx
In the context of the incompressible Navier-Stokes equations, enstrophy appears in the following useful result:

d/dt ( ½ ∫_Ω |u|² dx ) = −ν E(u)

The quantity in parentheses on the left is the kinetic energy in the flow, so the result says that energy declines proportionally to the kinematic viscosity ν times the enstrophy.
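For an incompressible field, the integral of |∇u|² and the integral of |ω|² agree, and both can be evaluated numerically. The sketch below checks this for the divergence-free Taylor-Green field u = (sin x cos y, −cos x sin y) on the periodic square [0, 2π]², where the exact enstrophy is 4π²:

```python
import math

N = 128                      # grid points per direction
h = 2 * math.pi / N          # grid spacing on the periodic square

def u(i, j):                 # Taylor-Green velocity field (divergence-free)
    x, y = i * h, j * h
    return math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y)

def ddx(f, i, j):            # central differences with periodic wrap-around
    return (f((i + 1) % N, j) - f((i - 1) % N, j)) / (2 * h)

def ddy(f, i, j):
    return (f(i, (j + 1) % N) - f(i, (j - 1) % N)) / (2 * h)

ux = lambda i, j: u(i, j)[0]
uy = lambda i, j: u(i, j)[1]

grad_sq = vort_sq = 0.0
for i in range(N):
    for j in range(N):
        # |grad u|^2 = sum of squares of all four partial derivatives
        g = ddx(ux, i, j)**2 + ddy(ux, i, j)**2 \
            + ddx(uy, i, j)**2 + ddy(uy, i, j)**2
        w = ddx(uy, i, j) - ddy(ux, i, j)   # scalar vorticity in 2D
        grad_sq += g * h * h
        vort_sq += w * w * h * h

print(grad_sq, vort_sq, 4 * math.pi**2)  # all close to 39.48
```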
See also
Atmospheric circulation
Turbulence |
https://en.wikipedia.org/wiki/Zero-sum%20problem | In number theory, zero-sum problems are certain kinds of combinatorial problems about the structure of a finite abelian group. Concretely, given a finite abelian group G and a positive integer n, one asks for the smallest value of k such that every sequence of elements of G of size k contains n terms that sum to 0.
The classic result in this area is the 1961 theorem of Paul Erdős, Abraham Ginzburg, and Abraham Ziv. They proved that for the group ℤ/nℤ of integers modulo n, the smallest such value is k = 2n − 1.
Explicitly this says that any multiset of 2n − 1 integers has a subset of size n the sum of whose elements is a multiple of n, but that the same is not true of multisets of size 2n − 2. (Indeed, the lower bound is easy to see: the multiset containing n − 1 copies of 0 and n − 1 copies of 1 contains no n-subset summing to a multiple of n.) This result is known as the Erdős–Ginzburg–Ziv theorem after its discoverers. It may also be deduced from the Cauchy–Davenport theorem.
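The theorem is easy to verify exhaustively for small n. The following Python sketch checks that every multiset of 2n − 1 residues modulo n has an n-element subset summing to 0 (mod n), and that the size-(2n − 2) multiset described above does not:

```python
from itertools import combinations, combinations_with_replacement

def has_zero_sum_subset(seq, n):
    """Does `seq` contain n terms whose sum is divisible by n?"""
    return any(sum(c) % n == 0 for c in combinations(seq, n))

def egz_holds(n, k):
    """Does every multiset of k residues mod n contain such a subset?"""
    return all(has_zero_sum_subset(m, n)
               for m in combinations_with_replacement(range(n), k))

for n in (2, 3, 4):
    assert egz_holds(n, 2 * n - 1)                    # the EGZ bound suffices
    counterexample = (0,) * (n - 1) + (1,) * (n - 1)  # n-1 zeros, n-1 ones
    assert not has_zero_sum_subset(counterexample, n) # 2n-2 is not enough
print("Erdos-Ginzburg-Ziv verified for n = 2, 3, 4")
```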
More general results than this theorem exist, such as Olson's theorem, Kemnitz's conjecture (proved by Christian Reiher in 2003), and the weighted EGZ theorem (proved by David J. Grynkiewicz in 2005).
See also
Davenport constant
Subset sum problem |
https://en.wikipedia.org/wiki/I-drive | i-drive was a file hosting service that operated from 1998 to 2002.
The name derived from the words "Internet drive".
History
Based in San Francisco, the company was founded in 1998 with seed investors and launched its first product, an online file storage service in August 1999. The idea originated from an early company Jeff Bonforte co-founded in 1996 called ShellServer.net, which provided 10 MB of space for IRC users. Bonforte compiled the founding team, which included Chris Lindland, Patrick Fenton, Tim Craycroft, Rich MacAlmon, John Reddig and Lou Perrelli (the last three were also the company's first angel investors). Originally presented as i-drive.com, the company acquired the domain idrive.com around October 1999. The initial product offered a limited amount of free file storage space, and later enhanced the offering with 'sideloading' – storing files such as MP3 files collected on the World Wide Web without the need for the user to download them to their individual computer.
In January 2000, the company began offering unlimited storage space and an application called Filo.
In 2001 the company transitioned from offering the free storage service and transformed the underlying software architecture into a middleware storage mechanism and product, seeking to sell into various markets including the 3G marketplace, targeting companies such as DoCoMo and Earthlink.
In January 2002 the company name was changed to Anuvio Technologies.
i-drive's assets were acquired by the EMC Corporation in 2002. Certain assets (including the idrive.com domain name) were acquired by Pro Softnet Corp which also offered online storage services. At its height, i-drive hosted over 10 million registered users, employed 110 people, and held partnerships with MP3.com, ZDnet.com, and 40 major universities. The service was rated as a "Top 5 Web Application" by CNET in 2000 and one of the "3 Top Technologies to Watch" by Fortune Magazine in 2000. The company raised over US$30 million fro |
https://en.wikipedia.org/wiki/Chromalveolata | Chromalveolata was a eukaryote supergroup present in a major classification of 2005, then regarded as one of the six major groups within the eukaryotes. It was a refinement of the kingdom Chromista, first proposed by Thomas Cavalier-Smith in 1981. Chromalveolata was proposed to represent the organisms descended from a single secondary endosymbiosis involving a red alga and a bikont. The plastids in these organisms are those that contain chlorophyll c.
However, the monophyly of the Chromalveolata has been rejected. Thus, two papers published in 2008 have phylogenetic trees in which the chromalveolates are split up, and recent studies continue to support this view.
Groups and classification
Historically, many chromalveolates were considered plants, because of their cell walls, photosynthetic ability, and in some cases their morphological resemblance to the land plants (Embryophyta). However, when the five-kingdom system (proposed in 1969) took prevalence over the animal–plant dichotomy, most of what we now call chromalveolates were put into the kingdom Protista, but the water molds and slime nets were put into the kingdom Fungi, while the brown algae stayed in the plant kingdom. These various organisms were later grouped together and given the name Chromalveolata by Cavalier-Smith. He believed them to be a monophyletic group, but this is not the case.
In 2005, in a classification reflecting the consensus at the time, the Chromalveolata were regarded as one of the six major clades of eukaryotes. Although not given a formal taxonomic status in this classification, elsewhere the group had been treated as a Kingdom. The Chromalveolata were divided into four major subgroups:
Cryptophyta
Haptophyta
Stramenopiles (or Heterokontophyta)
Alveolata
Other groups that may be included within, or related to, chromalveolates, are:
Centrohelids
Katablepharids
Telonemia
Though several groups, such as the ciliates and the water molds, have lost the ability to photosynthesize, |
https://en.wikipedia.org/wiki/Blood%20volume | Blood volume (volemia) is the volume of blood (blood cells and plasma) in the circulatory system of any individual.
Humans
A typical adult has a blood volume of approximately 5 liters, with females and males having approximately the same blood percentage by weight (about 7 to 8%). Blood volume is regulated by the kidneys.
Blood volume (BV) can be calculated given the hematocrit (HC; the fraction of blood that is red blood cells) and plasma volume (PV), with the hematocrit being regulated via the blood oxygen content regulator:

BV = PV / (1 − HC)
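Since plasma makes up the (1 − HC) fraction of whole blood, blood volume follows from plasma volume as BV = PV / (1 − HC). A minimal sketch:

```python
def blood_volume(plasma_volume_l, hematocrit):
    """Total blood volume (L) from plasma volume (L) and hematocrit.
    Plasma is the (1 - HC) fraction of whole blood, so BV = PV / (1 - HC)."""
    if not 0 <= hematocrit < 1:
        raise ValueError("hematocrit must be a fraction in [0, 1)")
    return plasma_volume_l / (1 - hematocrit)

# 3 L of plasma at a hematocrit of 0.40 implies 5 L of whole blood:
print(blood_volume(3.0, 0.40))  # 5.0
```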
Blood volume measurement may be used in people with congestive heart failure, chronic hypertension, kidney failure and critical care.
The use of relative blood volume changes during dialysis is of questionable utility.
Total Blood Volume can be measured manually via the Dual Isotope or Dual Tracer Technique, a classic technique available since the 1950s. This technique requires double labeling of the blood, that is, 2 injections and 2 standards (51Cr-RBC for tagging red blood cells and I-HSA for tagging plasma volume), as well as withdrawing and re-infusing patients with their own blood for blood volume analysis results. This method may take up to 6 hours for accurate results. The blood volume is 70 ml/kg body weight in adult males, 65 ml/kg in adult females and 70-75 ml/kg in children (1 year old and over).
Semi-automated system
Blood volume may also be measured semi-automatically. The BVA-100, a product of Daxor Corporation, is an FDA-cleared diagnostic used at leading medical centers in the United States which consists of an automated well counter interfaced with a computer. It is able to report with 98% accuracy within 60 minutes the Total Blood Volume (TBV), Plasma Volume (PV) and Red Cell Volume (RCV) using the indicator dilution principle, microhematocrit centrifugation and the Ideal Height and Weight Method. The indicator, or tracer, is an I-131 albumin injection. An equal amount of the tracer is injected into a known and unknown |
https://en.wikipedia.org/wiki/Graphplan | Graphplan is an algorithm for automated planning developed by Avrim Blum and Merrick Furst in 1995. Graphplan takes as input a planning problem expressed in STRIPS and produces, if one is possible, a sequence of operations for reaching a goal state.
The name Graphplan comes from its use of a novel data structure, the planning graph, which reduces the amount of search needed to find a solution compared with straightforward exploration of the state space graph.
In the state space graph:
the nodes are possible states,
and the edges indicate reachability through a certain action.
On the contrary, in Graphplan's planning graph:
the nodes are actions and atomic facts, arranged into alternate levels,
and the edges are of two kinds:
from an atomic fact to the actions for which it is a condition,
from an action to the atomic facts it makes true or false.
the first level contains true atomic facts identifying the initial state.
Lists of incompatible facts that cannot be true at the same time and incompatible actions that cannot be executed together are also maintained.
The algorithm then iteratively extends the planning graph, proving that there are no solutions of length l-1 before looking for plans of length l by backward chaining: supposing the goals are true, Graphplan looks for the actions and previous states from which the goals can be reached, pruning as many of them as possible thanks to incompatibility information.
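A heavily simplified version of the expansion phase can be sketched by layering reachable facts until the goals first appear. The sketch below omits delete effects, the mutex bookkeeping, and the backward-chaining extraction that real Graphplan performs, and the toy domain is invented for illustration:

```python
# Each action: (name, preconditions, add effects). Delete effects and the
# mutex lists maintained by real Graphplan are omitted from this sketch.
ACTIONS = [
    ("drive_to_work", {"at_home"}, {"at_work"}),
    ("do_the_job",    {"at_work"}, {"job_done"}),
    ("get_paid",      {"job_done"}, {"paid"}),
]

def first_goal_level(initial, goal, actions, max_levels=20):
    """Index of the first fact layer containing all goals, or None."""
    facts = set(initial)
    for level in range(max_levels + 1):
        if goal <= facts:
            return level
        applicable = [a for a in actions if a[1] <= facts]
        new_facts = facts.union(*(a[2] for a in applicable))
        if new_facts == facts:
            return None  # fixpoint reached: the goals are unreachable
        facts = new_facts
    return None

print(first_goal_level({"at_home"}, {"paid"}, ACTIONS))  # 3
```

Because facts only accumulate in this monotone sketch, the first level at which the goals appear is a lower bound on the plan length that the full algorithm would then try to certify by backward chaining.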
A closely related approach to planning is the Planning as Satisfiability (Satplan). Both reduce the automated planning problem to search for plans of different fixed horizon lengths. |
https://en.wikipedia.org/wiki/Stern%E2%80%93Brocot%20tree | In number theory, the Stern–Brocot tree is an infinite complete binary tree in which the vertices correspond one-for-one to the positive rational numbers, whose values are ordered from the left to the right as in a search tree.
The Stern–Brocot tree was introduced independently by Moritz Stern and Achille Brocot. Stern was a German number theorist; Brocot was a French clockmaker who used the Stern–Brocot tree to design systems of gears with a gear ratio close to some desired value by finding a ratio of smooth numbers near that value.
The root of the Stern–Brocot tree corresponds to the number 1. The parent-child relation between numbers in the Stern–Brocot tree may be defined in terms of continued fractions or mediants, and a path in the tree from the root to any other number q provides a sequence of approximations to q with smaller denominators than q. Because the tree contains each positive rational number exactly once, a breadth first search of the tree provides a method of listing all positive rationals that is closely related to Farey sequences. The left subtree of the Stern–Brocot tree, containing the rational numbers in the range (0,1), is called the Farey tree.
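Each vertex can be generated from a pair of bounding fractions whose mediant gives the vertex's value, with the vertex then serving as one bound for each of its two children. A breadth-first sketch in Python:

```python
from fractions import Fraction

def stern_brocot_levels(depth):
    """Return the first `depth` levels of the tree, each left to right."""
    levels = []
    # Each node is a pair of bounds; the virtual bounds are 0/1 and 1/0.
    frontier = [((0, 1), (1, 0))]
    for _ in range(depth):
        values, next_frontier = [], []
        for (ln, ld), (rn, rd) in frontier:
            mn, md = ln + rn, ld + rd               # mediant of the bounds
            values.append(Fraction(mn, md))
            next_frontier.append(((ln, ld), (mn, md)))  # left child
            next_frontier.append(((mn, md), (rn, rd)))  # right child
        levels.append(values)
        frontier = next_frontier
    return levels

for level in stern_brocot_levels(3):
    print(level)
# [Fraction(1, 1)]
# [Fraction(1, 2), Fraction(2, 1)]
# [Fraction(1, 3), Fraction(2, 3), Fraction(3, 2), Fraction(3, 1)]
```

Listing the levels in order is exactly the breadth-first enumeration of the positive rationals mentioned above, and the search-tree property shows up as each level being sorted.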
A tree of continued fractions
Every positive rational number may be expressed as a continued fraction of the form
where a₀ and k are non-negative integers, and each subsequent coefficient aᵢ is a positive integer. This representation is not unique because [a₀; a₁, …, aₖ] = [a₀; a₁, …, aₖ − 1, 1],
but using this equivalence to replace every continued fraction ending with a one by a shorter continued fraction shows that every rational number has a unique representation in which the last coefficient is greater than one. Then, unless q = 1, the number q = [a₀; a₁, …, aₖ] has a parent in the Stern–Brocot tree given by the continued fraction expression [a₀; a₁, …, aₖ − 1]. Equivalently, this parent is formed by decreasing the innermost coefficient of the continued fraction by 1, and contracting with the previous term if that coefficient becomes 1. For instance, the rational number has the continued fraction |
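The mediant-based descent through the tree can be sketched directly; starting from the virtual endpoints 0/1 and 1/0, each step takes the mediant of the current bounds and moves left or right, producing the sequence of approximations with smaller denominators mentioned above (the function name is illustrative):

```python
from fractions import Fraction

def stern_brocot_path(target):
    """Return the L/R path from the root 1/1 to a positive rational,
    together with the sequence of mediants (approximations) visited."""
    target = Fraction(target)
    lo, hi = (0, 1), (1, 0)          # virtual endpoints 0/1 and 1/0
    path, approximations = "", []
    while True:
        med = (lo[0] + hi[0], lo[1] + hi[1])   # mediant of the bounds
        value = Fraction(med[0], med[1])
        approximations.append(value)
        if value == target:
            return path, approximations
        if target < value:
            hi, path = med, path + "L"         # descend into left subtree
        else:
            lo, path = med, path + "R"         # descend into right subtree

path, approx = stern_brocot_path(Fraction(3, 7))
# path == "LLRR"; approx == [1, 1/2, 1/3, 2/5, 3/7]
```

Every intermediate mediant has a smaller denominator than the target, illustrating the approximation property of root-to-node paths.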
https://en.wikipedia.org/wiki/Plating%20efficiency | Plating efficiency ("PE") is a measure of the number of colonies originating from single cells. It is a very sensitive test and is often used for determining the nutritional requirements of cells, testing serum lots, measuring the effects of growth factors, and for toxicity testing.
Plating Efficiency is the number of cells that grow into colonies per 100 cells inoculated. That is, it is the proportion of cells that attach and grow to the number of cells originally plated, expressed as a percentage. PE can be determined by the following formulae:
or
The first method is more accurate.
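The counting formula is straightforward to apply; a small sketch with illustrative numbers (the function name is our own):

```python
def plating_efficiency(colonies_formed, cells_plated):
    """PE (%) = number of colonies formed / number of cells plated * 100."""
    return 100.0 * colonies_formed / cells_plated

# If 250 colonies grow from 1,000 plated cells, PE is 25 percent.
pe = plating_efficiency(250, 1000)  # 25.0
```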
Cell growth in culture generally undergoes a decline after plating, and graphically, PE is the global minimum (lowest point) of the growth curve at day one, after which growth rises again. The decrease in viable cells after plating is due to "anchorage-dependence"--cells must attach to the bottom of the culture dish.
Plating Efficiency is one of the parameters typically used to define growth properties of cells in culture. Other common parameters are doubling time ("DT") (which is an average of generation time ("GT")), and saturation density ("SD"). |
https://en.wikipedia.org/wiki/List%20of%20Belgian%20flags | This is a list of flags used in Belgium.
National flag
Ensign
Military
Sub-national
Regions and communities
Provinces
Community commissions in Brussels
Municipalities
Royal standards
Monarch
Each royal standard for a monarch is a square rouge ponceau banner of the royal arms, personalised with the king's cypher in each corner.
Historical flags
Wallonia
Colonial
Political flags
House flags of Belgian freight companies
Other |
https://en.wikipedia.org/wiki/Static%20timing%20analysis | Static timing analysis (STA) is a simulation method of computing the expected timing of a synchronous digital circuit without requiring a simulation of the full circuit.
High-performance integrated circuits have traditionally been characterized by the clock frequency at which they operate. Measuring the ability of a circuit to operate at the specified speed requires an ability to measure, during the design process, its delay at numerous steps. Moreover, delay calculation must be incorporated into the inner loop of timing optimizers at various phases of design, such as logic synthesis, layout (placement and routing), and in in-place optimizations performed late in the design cycle. While such timing measurements can theoretically be performed using a rigorous circuit simulation, such an approach is liable to be too slow to be practical. Static timing analysis plays a vital role in facilitating the fast and reasonably accurate measurement of circuit timing. The speedup comes from the use of simplified timing models and by mostly ignoring logical interactions in circuits. This has become a mainstay of design over the last few decades.
One of the earliest descriptions of a static timing approach was based on the Program Evaluation and Review Technique (PERT), in 1966. More modern versions and algorithms appeared in the early 1980s.
Purpose
In a synchronous digital system, data is supposed to move in lockstep, advancing one stage on each tick of the clock signal. This is enforced by synchronizing elements such as flip-flops or latches, which copy their input to their output when instructed to do so by the clock. Only two kinds of timing errors are possible in such a system:
A max time violation, when a signal arrives too late and misses the time at which it should advance. These are more commonly known as setup violations (or setup checks); strictly, setup checks are a subset of max time violations, involving a cycle shift on synchronous paths.
A Min time violation, when an input signal ch |
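The core computation behind this simplified model is a longest-path (PERT-like) traversal of a timing DAG; a minimal sketch, with illustrative node names, gate delays, and a 1.0 ns clock period:

```python
from collections import defaultdict

def arrival_times(edges, sources):
    """Propagate worst-case (latest) arrival times through a timing DAG.

    edges: list of (from_node, to_node, delay) triples.
    sources: nodes launched at time 0 (e.g. flip-flop outputs).
    """
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, d in edges:
        graph[u].append((v, d))
        indeg[v] += 1
        nodes |= {u, v}
    arrival = {n: 0.0 for n in sources}
    # Kahn-style topological traversal, keeping the max over fan-ins.
    ready = [n for n in nodes if indeg[n] == 0]
    while ready:
        u = ready.pop()
        for v, d in graph[u]:
            arrival[v] = max(arrival.get(v, float("-inf")),
                             arrival.get(u, 0.0) + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return arrival

# Illustrative path: launching flop -> two gates -> capturing flop.
edges = [("ff1/Q", "u1", 0.25), ("u1", "u2", 0.25), ("u2", "ff2/D", 0.125)]
at = arrival_times(edges, ["ff1/Q"])
setup_slack = 1.0 - at["ff2/D"]   # clock period minus latest arrival
# at["ff2/D"] == 0.625, so setup_slack == 0.375 (positive: no violation)
```

A real tool would also propagate earliest arrivals for hold (min time) checks and subtract setup/hold requirements of the capturing element; this sketch shows only the max-delay propagation.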
https://en.wikipedia.org/wiki/Digital%20signage | Digital signage is a segment of electronic signage. Digital displays use technologies such as LCD, LED, projection and e-paper to display digital images, video, web pages, weather data, restaurant menus, or text. They can be found in public spaces, transportation systems, museums, stadiums, retail stores, hotels, restaurants and corporate buildings etc., to provide wayfinding, exhibitions, marketing and outdoor advertising. They are used as a network of electronic displays that are centrally managed and individually addressable for the display of text, animated or video messages for advertising, information, entertainment and merchandising to targeted audiences.
Roles and function
The many different uses of digital signage allow a business to accomplish a variety of goals. Some of the most common applications include:
Public information – news, weather, traffic and local (location specific) information, such as building directory with a map, fire exits and traveler information.
Internal information – corporate messages, such as health & safety items, news and so forth.
Product information – pricing, photos, raw materials or ingredients, suggested applications and other product information – especially useful in food marketing where signage may include nutritional facts or suggested uses or recipes.
Information to enhance the customer service experience – interpretive signage in museums, galleries, zoos, parks and gardens, exhibitions, tourist and cultural attractions.
Advertising and Promotion – promoting products or services, may be related to the location of the sign or using the screen's audience reach for general advertising.
Brand building – in-store digital sign to promote the brand and build a brand identity.
Influencing customer behavior – navigation, directing customers to different areas, increasing the "dwell time" on the store premises and a wide range of other uses in service of such influence.
Influencing product or brand decision-making – Signage a |
https://en.wikipedia.org/wiki/DNA%20bank | DNA banking is the secure, long term storage of an individual’s genetic material. DNA is most commonly extracted from blood, but can also be obtained from saliva and other tissues. DNA banks allow for conservation of genetic material and comparative analysis of an individual's genetic information. Analyzing an individual's DNA can allow scientists to predict genetic disorders, as used in preventive genetics or gene therapy, and prove that person's identity, as used in the criminal justice system. There are multiple methods for testing and analyzing genetic information including restriction fragment length polymorphism (RFLP) and polymerase chain reactions (PCR).
Uses
DNA banking is used to conserve genetic material, especially that of organisms that face extinction. This is a more prominent issue today due to deforestation and climate change, which serve as a threat to biodiversity. The genetic information can be stored within lambda phage and plasma vectors. The National Institute of Agrobiological Sciences (NIAS) DNA Bank, for example, collects the DNA of agricultural organisms, such as rice and fish, for scientific research. Most DNA provided by DNA banks is used for studies to attempt to develop more productive or more environmentally friendly agricultural species. Some DNA banks also store the DNA of rare or endangered species to ensure their survival.
The DNA bank can be used to compare and analyze DNA samples. Comparison of DNA samples allowed scientists to work on the Human Genome Project, which maps out many of the genes on human DNA. It has also led to the development of preventive genetics. Samples from the DNA bank have been used to identify patterns and determine which genes lead to specific disorders. Once people know which genes lead to disorders, people can take steps to lessen the effects of that disorder. This can occur through adjustments in lifestyle, as demonstrated in preventive healthcare, or even through gene therapy. DNA can be banked at |
https://en.wikipedia.org/wiki/List%20of%20Barbadian%20flags | This is a list of flags used in Barbados.
National flag
Governmental flags
Military flags
Historical flags
See also
Flag of the British Windward Islands
Flag of the West Indies Federation
National symbols of Barbados
External links
Flag of the prime minister of Barbados
Flag of the governor-general of Barbados
History of Barbados
Colonial symbols of Barbados
Barbados Air Force roundel
|
https://en.wikipedia.org/wiki/Synthetic%20setae | Synthetic setae emulate the setae found on the toes of a gecko and scientific research in this area is driven towards the development of dry adhesives. Geckos have no difficulty mastering vertical walls and are apparently capable of adhering themselves to just about any surface. The five-toed feet of a gecko are covered with elastic hairs called setae and the ends of these hairs are split into nanoscale structures called spatulae (because of their resemblance to actual spatulas). The sheer abundance and proximity to the surface of these spatulae make it sufficient for van der Waals forces alone to provide the required adhesive strength. Following the discovery of the gecko's adhesion mechanism in 2002, which is based on van der Waals forces, biomimetic adhesives have become the topic of a major research effort. These developments are poised to yield families of novel adhesive materials with superior properties which are likely to find uses in industries ranging from defense and nanotechnology to healthcare and sport.
Basic principles
Geckos are renowned for their exceptional ability to stick and run on any vertical or inverted surface (excluding Teflon). However, gecko toes are not sticky in the usual way that chemical adhesives are. Instead, they can detach from the surface quickly and remain quite clean around everyday contaminants, even without grooming.
Extraordinary adhesion
The two front feet of a tokay gecko can withstand 20.1 N of force parallel to the surface with 227 mm2 of pad area, a force as much as 40 times the gecko's weight. Scientists have been investigating the secret of this extraordinary adhesion ever since the 19th century, and at least seven possible mechanisms for gecko adhesion have been discussed over the past 175 years. There have been hypotheses of glue, friction, suction, electrostatics, micro-interlocking and intermolecular forces. Sticky secretions were ruled out first early in the study of gecko adhesion since geckos lack glandular tiss |
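The quoted measurements imply some back-of-the-envelope values (a sketch; g = 9.81 m/s² is our assumption, and the variable names are illustrative):

```python
# Figures from the tokay gecko measurement quoted above.
force_n = 20.1        # shear force sustained by the two front feet (N)
pad_area_mm2 = 227.0  # total pad area (mm^2)

adhesive_stress_kpa = force_n / pad_area_mm2 * 1000  # N/mm^2 -> kPa, ~88.5 kPa
body_weight_n = force_n / 40          # the force was "40 times the gecko's weight"
body_mass_g = body_weight_n / 9.81 * 1000  # implied body mass, roughly 51 g
```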
https://en.wikipedia.org/wiki/Japanese%20postal%20mark | is the service mark of Japan Post and its successor, Japan Post Holdings, the postal operator in Japan. It is also used as a Japanese postal code mark since the introduction of the latter in 1968. Historically, it was used by the , which operated the postal service. The mark is a stylized katakana syllable te (テ), from the word . The mark was introduced on February 8, 1887 (Meiji 20.2.8).
Usage
To indicate a postal code, the mark is written first, and the postal code is written after. For example, one area of Meguro, Tokyo, would have 〒153-0061 written on any mail, in order to direct mail to that location. This usage has resulted in the inclusion of the mark into the Japanese character sets for computers, and thus eventually their inclusion into Unicode, where it can also be found on the Japanese Post Office emoji. In most keyboard-based Japanese input systems, it can be created by typing "yuubin" and then doing a kanji conversion.
Of the versions shown to the right, the one on the far right (〒) is the standard mark used in addressing. A circled yūbin mark is often used on maps to denote post offices. Other variants have been used as conformity marks inherited from the Ministry of Communications: for example, a similar circled mark was used for electrical certification of Category B appliances, contrasted with a triangle-enclosed postal mark (⮗) for Category A appliances, under a precursor to the Act on Product Safety of Electrical Appliances and Materials. The Unicode code chart, as of version 13.0, labels the "Circled Postal Mark" character (〶, U+3036) as "symbol for type B electronics". An enclosed version incorporating a sawtooth wave shape is used as a conformity mark for Ministry of Internal Affairs and Communications regulations on radio and other electromagnetic wave equipment.
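The two codepoints named here can be verified programmatically; a short sketch:

```python
import unicodedata

# The plain mark used in addressing, and the circled variant seen on maps
# (labelled "symbol for type B electronics" in the Unicode code chart).
marks = {"\u3012": "POSTAL MARK", "\u3036": "CIRCLED POSTAL MARK"}
for ch, expected in marks.items():
    assert unicodedata.name(ch) == expected
    print(f"U+{ord(ch):04X}  {ch}  {expected}")
```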
Encoding
The postal mark appears in the following encoded characters. Before the introduction of Unicode, the simple postal mark was encoded for Japanese use in JIS X 0208 (inc |
https://en.wikipedia.org/wiki/Handle-o-Meter | The Handle-o-Meter is a testing machine developed by Johnson & Johnson and now manufactured by Thwing-Albert that measures the "handle" of sheeted materials: a combination of its surface friction and flexibility. Originally, it was used to test the durability and flexibility of toilet paper and paper towels.
The test sample is placed over an adjustable slot. The resistance encountered by the penetrator blade as it is moved into the slot by a pivoting arm is measured by the machine.
Details
The data collected when materials such as nonwovens, tissue, toweling, film and textiles are tested has been shown to correlate well with the performance of these materials as finished products.
Materials are simply placed over the slot that extends across the instrument platform, and the operator then starts the test. There are three test modes that can be applied to the material: single, double, and quadruple. The average is calculated automatically for double or quadruple tests.
Features
Adjustable slot openings
Interchangeable beams
Auto-ranging
2 x 40 LCD display
Statistical Analysis
RS-232 Output and Serial Port
Industry Standards:
ASTM D2923, D6828-02
TAPPI T498
INDA IST 90.3 |
https://en.wikipedia.org/wiki/Microsoft%20RPC | Microsoft RPC (Microsoft Remote Procedure Call) is a modified version of DCE/RPC. Additions include partial support for UCS-2 (but not Unicode) strings, implicit handles, and complex calculations in the variable-length string and structure paradigms already present in DCE/RPC.
Example
The DCE 1.0 reference implementation only allows such constructs as , or possibly . MSRPC allows much more complex constructs such as and even , a common expression in DCOM IDL files.
Use
MSRPC was used by Microsoft to seamlessly create a client/server model in Windows NT, with very little effort. For example, the Windows Server domains protocols are entirely MSRPC based, as is Microsoft's DNS administrative tool. Microsoft Exchange Server 5.5's administrative front-ends are all MSRPC client/server applications, and its MAPI was made more secure by "proxying" MAPI over a set of simple MSRPC functions that enable encryption at the MSRPC layer without involving the MAPI protocol.
History
MSRPC is derived from the Distributed Computing Environment 1.2 reference implementation from the Open Software Foundation, but has been copyrighted by Microsoft. DCE/RPC was originally commissioned by the Open Software Foundation, an industry consortium formed to set vendor- and technology-neutral open standards for computing infrastructure. None of the Unix vendors (now represented by the Open Group) wanted to use the complex DCE or components such as DCE/RPC at the time.
Microsoft's Component Object Model is based heavily on MSRPC, adding interfaces and inheritance. The marshalling semantics of DCE/RPC are used to serialize method calls and results between processes with separate address spaces, although COM did not initially allow network calls between different machines.
With Distributed Component Object Model (DCOM), COM was extended to software components distributed across several networked computers. DCOM, which originally was called "Network OLE", extends Microsoft's COM, and provides the com |
https://en.wikipedia.org/wiki/Cooking%20spray | Cooking spray is a spray form of an oil as a lubricant, lecithin as an emulsifier, and a propellant such as nitrous oxide, carbon dioxide or propane. Cooking spray is applied to frying pans and other cookware to prevent food from sticking. Traditionally, cooks use butter, shortening, or oils poured or rubbed on cookware. Most cooking sprays have less food energy per serving than an application of vegetable oil, because they are applied in a much thinner layer: US regulations allow many to be labelled "zero-calorie"; in the UK sprays claim to supply "less than 1 calorie per serving". Popular US brands include Pam, Crisco, and Baker's Joy. Sprays are available with plain vegetable oil, butter and olive oil flavor.
Cooking spray has other culinary uses besides being applied to cookware. Sticky candies such as Mike and Ike that are often sold in bulk vending machines may be sprayed with cooking spray to keep them from sticking together in the machines. Coating the inside of a measuring cup with the spray allows sticky substances such as honey to pour out more easily. Vegetables may be sprayed before seasoning to make the seasonings stick better. |
https://en.wikipedia.org/wiki/DCE/RPC | DCE/RPC, short for "Distributed Computing Environment / Remote Procedure Calls", is the remote procedure call system developed for the Distributed Computing Environment (DCE). This system allows programmers to write distributed software as if it were all working on the same computer, without having to worry about the underlying network code.
History
DCE/RPC was commissioned by the Open Software Foundation in a "Request for Technology" (1993 David Chappell). One of the key companies that contributed was Apollo Computer, who brought in NCA - "Network Computing Architecture" which became Network Computing System (NCS) and then a major part of DCE/RPC itself. The naming convention for transports that can be designed (as architectural plugins) and then made available to DCE/RPC echoes these origins, e.g. ncacn_np (SMB Named Pipes transport); ncacn_tcp (DCE/RPC over TCP/IP) and ncacn_http to name a small number.
DCE/RPC's history is such that it is sometimes cited as an example of design by committee. It is also frequently noted for its complexity; however, this complexity is often the result of features that target large distributed systems and that are often unmatched by more recent RPC implementations such as SOAP.
Software license
Previously, the DCE source was only available under a proprietary license. As of January 12, 2005, it is available under a recognized open source license (LGPL), which permits a broader community to work on the source to expand its features and keep it current. The source may be downloaded over the web. The release consists of about 100 ".tar.gz" files that take up 170 Megabytes. (Note that they include the PostScript of all the documentation, for example.)
The Open Group has stated it will work with the DCE community to make DCE available to the open source development community, as well as continuing to offer the source through The Open Group’s web site.
DCE/RPC's reference implementation (version 1.1) was previously available unde |
https://en.wikipedia.org/wiki/Dynkin%20system | A Dynkin system, named after Eugene Dynkin, is a collection of subsets of another universal set satisfying a set of axioms weaker than those of -algebra. Dynkin systems are sometimes referred to as -systems (Dynkin himself used this term) or d-system. These set families have applications in measure theory and probability.
A major application of -systems is the - theorem, see below.
Definition
Let Ω be a nonempty set, and let D be a collection of subsets of Ω (that is, D is a subset of the power set of Ω). Then D is a Dynkin system if
1. Ω ∈ D;
2. D is closed under complements of subsets in supersets: if A, B ∈ D and A ⊆ B, then B ∖ A ∈ D;
3. D is closed under countable increasing unions: if A₁ ⊆ A₂ ⊆ A₃ ⊆ ⋯ is an increasing sequence of sets in D, then ⋃ₙ Aₙ ∈ D.
It is easy to check that any Dynkin system D satisfies:
4. Ω ∈ D;
5. D is closed under complements in Ω: if A ∈ D, then Ω ∖ A ∈ D;
Taking A := Ω shows that ∅ ∈ D.
6. D is closed under countable unions of pairwise disjoint sets: if A₁, A₂, A₃, … is a sequence of pairwise disjoint sets in D (meaning that Aᵢ ∩ Aⱼ = ∅ for all i ≠ j), then ⋃ₙ Aₙ ∈ D.
To be clear, this property also holds for finite sequences of pairwise disjoint sets (by letting Aᵢ := ∅ for all remaining indices i).
Conversely, it is easy to check that a family of sets satisfying conditions 4-6 is a Dynkin class.
For this reason, a small group of authors have adopted conditions 4-6 to define a Dynkin system, as they are easier to verify.
An important fact is that any Dynkin system that is also a π-system (that is, closed under finite intersections) is a σ-algebra. This can be verified by noting that conditions 2 and 3 together with closure under finite intersections imply closure under finite unions, which in turn implies closure under countable unions.
Given any collection J of subsets of Ω, there exists a unique Dynkin system, denoted D(J), which is minimal with respect to containing J. That is, if D̃ is any Dynkin system containing J, then D(J) ⊆ D̃. D(J) is called the Dynkin system generated by J.
For instance, D(∅) = {∅, Ω}.
For another example, let Ω = {1, 2, 3, 4} and J = {{1}}; then D(J) = {∅, {1}, {2, 3, 4}, Ω}.
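For a finite universe, the Dynkin system generated by a collection can be computed by brute-force closure: over a finite universe, closure under countable increasing unions is automatic, so only membership of the universe and relative complements of nested pairs need to be enforced. A sketch (the function name is illustrative):

```python
def generated_dynkin(omega, collection):
    """Smallest Dynkin system over a finite universe containing `collection`.

    Represents sets as frozensets and closes under:
      - the universe omega itself being a member,
      - relative complements B - A for nested members A <= B.
    Increasing unions are trivial over a finite universe.
    """
    omega = frozenset(omega)
    d = {omega} | {frozenset(s) for s in collection}
    changed = True
    while changed:
        changed = False
        for a in list(d):
            for b in list(d):
                if a <= b and (b - a) not in d:
                    d.add(b - a)
                    changed = True
    return d

d = generated_dynkin({1, 2, 3, 4}, [{1}])
# d == {frozenset(), frozenset({1}), frozenset({2, 3, 4}), frozenset({1, 2, 3, 4})}
```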
Sierpiński–Dynkin's π-λ theorem
Sierpiński–Dynkin's π-λ theorem:
If P is a π-system and D is a Dynkin system with P ⊆ D, then σ(P) ⊆ D.
In |
https://en.wikipedia.org/wiki/Provable%20prime | In number theory, a provable prime is an integer that has been calculated to be prime using a primality-proving algorithm. Boot-strapping techniques using Pocklington primality test are the most common ways to generate provable primes for cryptography.
Contrast with probable prime, which is likely (but not certain) to be prime, based on the output of a probabilistic primality test.
In principle, every prime number can be proved to be prime in polynomial time by using the AKS primality test. Other methods which guarantee that their result is prime, but which do not work for all primes, are useful for the random generation of provable primes.
Provable primes have also been generated on embedded devices.
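As a minimal illustration of primality proving, here is a sketch of the closely related Lucas test, which certifies n prime given the complete factorization of n − 1 (Pocklington-style tests generalize this to partial factorizations); all names are illustrative, and the trial-division factorization is only suitable for small demo numbers:

```python
def prime_factors(m):
    """Trial-division factorization (fine for small demo numbers)."""
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def lucas_provable_prime(n):
    """n is prime iff some a satisfies a^(n-1) == 1 (mod n) and
    a^((n-1)/q) != 1 (mod n) for every prime q dividing n-1:
    such an a has multiplicative order n-1, which forces n prime."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    qs = prime_factors(n - 1)
    for a in range(2, n):
        if pow(a, n - 1, n) != 1:
            continue
        if all(pow(a, (n - 1) // q, n) != 1 for q in qs):
            return True   # a is a witness: a reusable primality certificate
    return False

# lucas_provable_prime(97) is True; lucas_provable_prime(91) is False
```

The witness a, together with the factorization of n − 1, constitutes a certificate that anyone can recheck with a few modular exponentiations, which is what distinguishes a provable prime from a probable one.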
See also
Probable prime
Primality test |
https://en.wikipedia.org/wiki/Finite%20character | In mathematics, a family of sets is of finite character if for each , belongs to if and only if every finite subset of belongs to . That is,
For each , every finite subset of belongs to .
If every finite subset of a given set belongs to , then belongs to .
Properties
A family of sets of finite character enjoys the following properties:
For each A ∈ F, every (finite or infinite) subset of A belongs to F.
Every nonempty family of finite character has a maximal element with respect to inclusion (Tukey's lemma): In F, partially ordered by inclusion, the union of every chain of elements of F also belongs to F; therefore, by Zorn's lemma, F contains at least one maximal element.
Example
Let V be a vector space, and let F be the family of linearly independent subsets of V. Then F is a family of finite character (because a subset is linearly dependent if and only if it has a finite subset which is linearly dependent).
Therefore, in every vector space, there exists a maximal family of linearly independent elements. As a maximal family is a vector basis, every vector space has a (possibly infinite) vector basis.
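Over a finite field the maximality argument can be illustrated constructively by greedy extension; a sketch over GF(2)³, with vectors encoded as 3-bit masks and an xor-basis insertion routine (all names are our own):

```python
def insert(basis, x, bits=3):
    """Try to insert x into a GF(2) xor-basis indexed by leading bit.
    Returns True iff x was linearly independent of the basis so far."""
    for i in reversed(range(bits)):
        if not (x >> i) & 1:
            continue
        if basis[i] == 0:
            basis[i] = x     # x becomes the basis vector with leading bit i
            return True
        x ^= basis[i]        # eliminate the leading bit and continue
    return False             # x reduced to 0: linearly dependent

# Greedily extend the empty independent set over all nonzero vectors of GF(2)^3.
basis = [0, 0, 0]
maximal = [v for v in range(1, 8) if insert(basis, v)]
# maximal == [1, 2, 4]: a maximal linearly independent family, i.e. a basis,
# found constructively here where Tukey's lemma only asserts existence.
```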
See also
Hereditarily finite set |
https://en.wikipedia.org/wiki/DIN%20sync | DIN sync, also called Sync24, is a synchronization interface for electronic musical instruments. It was introduced in 1980 by Roland Corporation and has been superseded by MIDI.
Definition and history
DIN sync was introduced in 1980 by Roland Corporation with the release of the TR-808 drum machine. The intended use was the synchronization of music sequencers, drum machines, arpeggiators and similar devices. It was superseded by MIDI in the mid-to-late 1980s.
DIN sync consists of two signals, clock (tempo) and run/stop. Both signals are TTL compatible, meaning the low state is 0 V and the high state is about +5 V. The clock signal is a low-frequency pulse wave suggesting the tempo. Instead of measuring the waveform's frequency, the machine receiving the signal merely has to count the number of pulses to work out when to increment its position in the music. Roland equipment uses 24 pulses per quarter note, known as Sync24. Therefore, a Roland-compatible device playing sixteenth notes would have to advance to the next note every time it receives 6 pulses. Korg equipment uses 48 pulses per quarter note. The run/stop signal indicates whether the sequence is playing or not.
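The pulse arithmetic implied here is straightforward; a small sketch for Sync24 (the function name is illustrative):

```python
def sync24_pulses_per_second(bpm):
    """Sync24 clock rate: 24 pulses per quarter note at the given tempo."""
    return bpm / 60.0 * 24

# A receiver advances one sixteenth note every 24/4 = 6 pulses.
PULSES_PER_SIXTEENTH = 24 // 4

rate = sync24_pulses_per_second(120)  # 48.0 pulses per second at 120 BPM
```

Korg's 48-pulses-per-quarter-note scheme simply doubles these figures, which is why simple clock dividers/multipliers suffice to bridge the two systems.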
If a device is a DIN sync sender, the positive slope of start/stop must reset the clock signal, and the clock signal must start with a delay of 9 ms.
A detailed description on how to implement a DIN sync sender with Play, Pause, Continue and Stop functionality was published by E-RM Erfindungsbuero.
Pinouts
DIN sync is so named because it uses 5-pin DIN connectors, the same as used for MIDI. DIN sync itself is not a DIN standard. Note that despite using the same connectors as MIDI, it uses different pins on these connectors (1, 2, and 3 rather than MIDI's 2, 4 and 5), so a cable made specifically for MIDI will not necessarily have the pins required for DIN sync connected. In some applications the remaining DIN sync pins (4 and 5) are used as tap and fill in or reset and start, but this di |
https://en.wikipedia.org/wiki/Reaction%20norm | In ecology and genetics, a reaction norm, also called a norm of reaction, describes the pattern of phenotypic expression of a single genotype across a range of environments. One use of reaction norms is in describing how different species—especially related species—respond to varying environments. But differing genotypes within a single species may also show differing reaction norms relative to a particular phenotypic trait and environment variable. For every genotype, phenotypic trait, and environmental variable, a different reaction norm can exist; in other words, an enormous complexity can exist in the interrelationships between genetic and environmental factors in determining traits. The concept was introduced by Richard Woltereck in 1909.
A monoclonal example
Scientifically analyzing norms of reaction in natural populations can be very difficult, simply because natural populations of sexually reproducing organisms usually do not have cleanly separated or superficially identifiable genetic distinctions. However, seed crops produced by humans are often engineered to contain specific genes, and in some cases seed stocks consist of clones. Accordingly, distinct seed lines present ideal examples of differentiated norms of reaction. In fact, agricultural companies market seeds for use in particular environments based on exactly this.
Suppose the seed line A contains an allele a, and a seed line B of the same crop species contains an allele b, for the same gene. With these controlled genetic groups, we might cultivate each variety (genotype) in a range of environments. This range might be either natural or controlled variations in environment. For example, an individual plant might receive either more or less water during its growth cycle, or the average temperature the plants are exposed to might vary across a range.
A simplification of the norm of reaction might state that seed line A is good for "high water conditions" while a seed line B is good for |
https://en.wikipedia.org/wiki/Logical%20Unit%20Number%20masking | Logical Unit Number Masking or LUN masking is an authorization process that makes a Logical Unit Number available to some hosts and unavailable to other hosts.
LUN masking is a level of security that makes a LUN available to only selected hosts and unavailable to all others. This kind of security is enforced at the SAN level and is based on the host HBA; that is, a specific LUN on the SAN can be made accessible to a specific host with a specific HBA.
LUN masking is mainly implemented at the host bus adapter (HBA) level. The security benefits of LUN masking implemented at HBAs are limited, since with many HBAs it is possible to forge source addresses (WWNs/MACs/IPs) and compromise the access. Many storage controllers also support LUN masking. When LUN masking is implemented at the storage controller level, the controller itself enforces the access policies to the device and as a result it is more secure. However, it is mainly implemented not as a security measure per se, but rather as a protection against misbehaving servers which may corrupt disks belonging to other servers. For example, Windows servers attached to a SAN will, under some conditions, corrupt non-Windows (Unix, Linux, NetWare) volumes on the SAN by attempting to write Windows volume labels to them. By hiding the other LUNs from the Windows server, this can be prevented, since the Windows server does not even realize the other LUNs exist.
See also
Persistent binding
External links
LUN Masking
LUN Masking and Zoning
Computer storage buses |
https://en.wikipedia.org/wiki/Virtual%20tape%20library | A virtual tape library (VTL) is a data storage virtualization technology used typically for backup and recovery purposes. A VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use with existing backup software.
Virtualizing the disk storage as tape allows integration of VTLs with existing backup software and existing backup and recovery processes and policies. The benefits of such virtualization include storage consolidation and faster data restore processes. Storage capacity varies from one mainframe data center to another, but protecting business- and mission-critical data is always vital.
Most current VTL solutions use SAS or SATA disk arrays as the primary storage component due to their relatively low cost. The use of array enclosures increases the scalability of the solution by allowing the addition of more disk drives and enclosures to increase the storage capacity.
The shift to VTL also eliminates streaming problems that often impair efficiency in tape drives as disk technology does not rely on streaming and can write effectively regardless of data transfer speeds.
By backing up data to disks instead of tapes, VTL often increases performance of both backup and recovery operations. Restore processes are found to be faster than backup regardless of implementations. In some cases, the data stored on the VTL's disk array is exported to other media, such as physical tapes, for disaster recovery purposes (scheme called disk-to-disk-to-tape, or D2D2T).
Alternatively, most contemporary backup software products introduced also direct usage of the file system storage (especially network-attached storage, accessed through NFS and CIFS protocols over IP networks) not requiring a tape library emulation at all. They also often offer a disk staging feature: moving the data from disk to a physical tape for a long-term storage.
While a virtual tape library is very fast, the disk storage within is not designed to be remo |
https://en.wikipedia.org/wiki/CEO%20%28Data%20General%29 | Comprehensive Electronic Office, often referred to by its initialism CEO, was a suite of office automation software from Data General introduced in 1981. It included word processing, e-mail, spreadsheets, business graphics and desktop accessories. The software was developed mostly in PL/I on and for the AOS and AOS/VS operating systems.
Overview
CEO was considered office automation software, which was an attempt to create a paperless office. CEO has also been cited as an example of an executive information system and as a decision support system.
It included a main program known as the Control Program, which offered a menu-driven interface on the assorted dumb terminals of the time. The Control Program communicated with separate "Services" such as the Mail Server, Calendar Server, and File Server (for documents). There were also a Word Processor and a data management program, both accessible from the Control Program. In 1985, Data General announced a complementary product, TEO (Technical Electronic Office), focused on the office automation needs of engineering professionals.
In later years, the CEO offerings grew to include various products for connecting to CEO from early personal computers. The first such product was called CEO Connection. Later, a product named CEO Object Office shipped, which repackaged HP NewWave (an object-oriented graphical interface).
CEO code was heavily dependent on the INFOS II database. When Data General moved from the Eclipse MV platform to the AViiON, CEO was not ported to the new platform as the cost would have been prohibitive.
CEO was often compared with IBM's offering, commonly called PROFS.
CEO offered integration with DISOSS and SNADS. CEO also supported Xodiac, Data General's proprietary networking system. In 1989, Data General unveiled an email gateway product, Communications Server, which provided interoperability of CEO with X.400 email systems and X.500 directories.
One early CEO site, Deutsche Credit in Chi |
https://en.wikipedia.org/wiki/Windows%20Preinstallation%20Environment | Windows Preinstallation Environment (also known as Windows PE and WinPE) is a lightweight version of Windows used for the deployment of PCs, workstations, and servers, or troubleshooting an operating system while it is offline. It is intended to replace MS-DOS boot disks and can be booted via USB flash drive, PXE, iPXE, CD, DVD, or hard disk. Traditionally used by large corporations and OEMs (to preinstall Windows client operating systems on PCs during manufacturing), it is now widely available free of charge via Windows Assessment and Deployment Kit (WADK) (formerly Windows Automated Installation Kit (WAIK)).
Overview
WinPE was originally intended to be used only as a pre-installation platform for deploying Microsoft Windows operating systems, specifically to replace MS-DOS in this respect. WinPE has the following uses:
Deployment of workstations and servers in large corporations as well as pre-installation by system builders of workstations and servers to be sold to end users.
Recovery platform to run 32-bit or 64-bit recovery tools such as Winternals ERD Commander or Windows Recovery Environment (Windows RE).
Platform for running third-party 32-bit or 64-bit disk cloning utilities.
The package can be used for developer testing or as a recovery CD/DVD for system administrators. Many customized WinPE boot CDs packaged with third-party applications for different uses are now available from volunteers via the Internet. The package can also be used as the base of a forensics investigation to either capture a disk image or run analysis tools without mounting any available disks and thus changing state.
Version 2.0 introduced a number of improvements and extended the availability of WinPE to all customers, not just corporate enterprise customers, through the download and installation of Microsoft's Windows Automated Installation Kit (WAIK).
It was originally designed and built by a small team of engineers in Microsoft's Windows Deployment team, including Vijay Jayaseelan, |
https://en.wikipedia.org/wiki/Weill%20Cornell%20Graduate%20School%20of%20Medical%20Sciences | The Weill Cornell Graduate School of Medical Sciences (WCGS) (formerly known as the Cornell University Graduate School of Medical Sciences) is a graduate college of Cornell University that was founded in 1952 as an academic partnership between two major medical institutions in New York City: the Weill Cornell Medical College and the Sloan Kettering Institute. Cornell is involved in the Tri-Institutional MD-PhD Program with Rockefeller University and the Sloan Kettering Institute; each of these three institutions is part of a large biomedical center extending along York Avenue between 65th and 72nd Streets on the Upper East Side of Manhattan.
Programs of study
Weill Cornell Graduate School of Medical Sciences (WCGS) partners with neighboring institutions along York Avenue, also known as the “corridor of science” in New York City. Such partnerships with Memorial Sloan Kettering Cancer Center, New York-Presbyterian Hospital, the Hospital for Special Surgery and The Rockefeller University offer specialized learning opportunities.
WCGS offers a variety of programs at both the master's and doctoral levels. As a partnership between the Sloan Kettering Institute and Weill Cornell Medical College, WCGS offers seven PhD programs as well as four distinct master's programs. Additionally, the school offers two Tri-Institutional PhDs, a Tri-Institutional MD/PhD, and the opportunity for students to participate in an Accelerated PhD/MBA program.
PhD Programs:
Biochemistry and Structural Biology
Molecular Biology
Cell and Developmental Biology
Immunology and Microbial Pathogenesis
Pharmacology
Neuroscience
Physiology, Biophysics and Systems Biology
Tri-Institutional PhD Programs
Chemical Biology
Computational Biology and Medicine
Tri-I MD / PhD Program
See also
Weill Cornell Medical College
Tri-Institutional MD-PhD Program |
https://en.wikipedia.org/wiki/Energy%20condition | In relativistic classical field theories of gravitation, particularly general relativity, an energy condition is a generalization of the statement "the energy density of a region of space cannot be negative" in a relativistically phrased mathematical formulation. There are multiple possible alternative ways to express such a condition so that it can be applied to the matter content of the theory. The hope is then that any reasonable matter theory will satisfy this condition, or at least will preserve it if it is satisfied by the starting conditions.
Energy conditions are not physical constraints, but are rather mathematically imposed boundary conditions that attempt to capture a belief that "energy should be positive". Many energy conditions are known not to correspond to physical reality; for example, the observable effects of dark energy are well known to violate the strong energy condition.
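For reference, the pointwise conditions most often invoked can be written out explicitly in abstract-index notation; this is a standard textbook summary, added for clarity rather than taken from this excerpt:

```latex
% Null energy condition (NEC): for every null vector k^a,
T_{ab}\, k^a k^b \ge 0
% Weak energy condition (WEC): for every timelike vector X^a,
T_{ab}\, X^a X^b \ge 0
% Strong energy condition (SEC): for every timelike X^a, with T = g^{ab} T_{ab},
\left( T_{ab} - \tfrac{1}{2}\, T\, g_{ab} \right) X^a X^b \ge 0
% Dominant energy condition (DEC): the WEC holds, and for every
% future-directed timelike X^a the flux vector -T^{a}{}_{b} X^b is
% future-directed causal (energy does not flow faster than light).
```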
In general relativity, energy conditions are often used (and required) in proofs of various important theorems about black holes, such as the no hair theorem or the laws of black hole thermodynamics.
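For perfect-fluid matter, these tensor inequalities reduce to simple scalar checks on the energy density and pressure. The sketch below uses the standard textbook reductions (not stated in this excerpt) and shows that a cosmological-constant equation of state violates the strong energy condition, matching the dark-energy remark above:

```python
def perfect_fluid_conditions(rho, p):
    """Energy conditions for a perfect fluid with energy density rho and
    isotropic pressure p, i.e. T^{ab} = diag(rho, p, p, p) in the rest frame.
    These are the standard scalar reductions of the tensor inequalities."""
    return {
        "NEC": rho + p >= 0,
        "WEC": rho >= 0 and rho + p >= 0,
        "SEC": rho + p >= 0 and rho + 3 * p >= 0,
        "DEC": rho >= 0 and abs(p) <= rho,
    }

# Ordinary dust (p = 0) satisfies all four conditions.
dust = perfect_fluid_conditions(1.0, 0.0)
assert all(dust.values())

# Dark energy modeled as a cosmological constant (p = -rho) satisfies the
# null, weak, and dominant conditions but violates the strong one.
dark = perfect_fluid_conditions(1.0, -1.0)
print(dark["SEC"])  # False
```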
Motivation
In general relativity and allied theories, the distribution of the mass, momentum, and stress due to matter and to any non-gravitational fields is described by the energy–momentum tensor (or matter tensor) . However, the Einstein field equation in itself does not specify what kinds of states of matter or non-gravitational fields are admissible in a spacetime model. This is both a strength, since a good general theory of gravitation should be maximally independent of any assumptions concerning non-gravitational physics, and a weakness, because without some further criterion the Einstein field equation admits putative solutions with properties most physicists regard as unphysical, i.e. too weird to resemble anything in the real universe even approximately.
The energy conditions represent such criteria. Roughly speaking, they crudely describe properties common |
https://en.wikipedia.org/wiki/Rare%20Symmetry%20Violating%20Processes | The Rare Symmetry Violating Processes (RSVP) project was a physics project, originally slated for construction in 2005 at Brookhaven National Laboratory on Long Island, that was terminated by the National Science Foundation in August of that year.
The Experiments
The project's two experiments were to investigate the relation between the electron and its heavier cousin, the muon, and to examine differences in the behavior of matter and antimatter; both were to use the existing Brookhaven particle accelerator, the Alternating Gradient Synchrotron (AGS).
The project had been budgeted at approximately $145 million for construction between fiscal years 2005 and 2010.
Particle experiments |
https://en.wikipedia.org/wiki/Event%20%28particle%20physics%29 | In particle physics, an event refers to the results just after a fundamental interaction takes place between subatomic particles, occurring in a very short time span, at a well-localized region of space. Because of the uncertainty principle, an event in particle physics does not have quite the same meaning as it does in the theory of relativity, in which an "event" is a point in spacetime which can be known exactly, i.e., a spacetime coordinate.
Overview
In a typical particle physics event, the incoming particles are scattered or destroyed, and up to hundreds of particles can be produced, although few are likely to be new particles not discovered before.
In the old bubble chambers and cloud chambers, "events" could be seen by observing the charged-particle tracks emerging from the region of the event before they curled under the magnetic field applied across the chamber. At modern particle accelerators, events are the result of the interactions that occur when a beam crosses inside a particle detector.
Physical quantities used to analyze events include the differential cross section, the flux of the beams (which in turn depends on the number density of the particles in the beam and their average velocity), and the rate and luminosity of the experiment.
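These quantities are tied together by the standard relation, a textbook formula added here for concreteness: the expected event rate is the cross section times the luminosity.

```latex
% Event rate R for a process with cross section \sigma at instantaneous luminosity L:
R = \sigma L
% For two bunched beams colliding head-on, with N_1, N_2 particles per bunch,
% bunch-collision frequency f, and transverse Gaussian beam sizes \sigma_x, \sigma_y:
L = \frac{f\, N_1 N_2}{4\pi\, \sigma_x \sigma_y}
```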
Individual particle physics events are modeled by scattering theory based on an underlying quantum field theory of the particles and their interactions. The S-matrix is used to characterize the probability of various event outgoing particle states given the incoming particle states. For suitable quantum field theories, the S-matrix may be calculated by a perturbative expansion in terms of Feynman diagrams.
Events occur naturally in astrophysics and geophysics, such as subatomic particle showers produced from cosmic ray scattering events. |
https://en.wikipedia.org/wiki/List%20of%20Pakistani%20flags | This is a list of flags used in Pakistan.
National flag
Government flags
Civil ensign
Civil air ensign
Provincial and territorial flags
Military
Naval rank flags
Political flags
Political parties
Opposition/Rebel flag
Historical flags
Pre-colonial states
British India
Princely states of Pakistan
Former national flag proposals
See also
National Flag of Pakistan |
https://en.wikipedia.org/wiki/Lund%20string%20model | In particle physics, the Lund string model is a phenomenological model of hadronization. It treats all but the highest-energy gluons as field lines, which are attracted to each other due to the gluon self-interaction and so form a narrow tube (or string) of strong color field. This contrasts with electric or magnetic field lines, which spread out because the carrier of the electromagnetic force, the photon, does not interact with itself.
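The narrow flux tube behaves like a relativistic string with a linearly rising confinement potential; the string-tension value below is the standard textbook figure, not stated in this excerpt:

```latex
% Energy stored in the color flux tube grows linearly with the quark separation r:
V(r) \approx \kappa\, r,
\qquad
\kappa \approx 1\ \text{GeV/fm} \approx 0.2\ \text{GeV}^2
```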
String fragmentation is one of the parton fragmentation models used in the PYTHIA/Jetset and UCLA event generators, and explains many features of hadronization quite well. In particular, the model predicts that in addition to the particle jets formed along the original paths of two separating quarks, there will be a spray of hadrons produced between the jets by the string itself—which is precisely what is observed.
This use of "string" is not the same as in string theory, in which strings are the fundamental objects of nature rather than collections of field lines.
See also
QCD string |
https://en.wikipedia.org/wiki/List%20of%20Chinese%20flags | This is a list of flags of entities named or related to "China".
People's Republic of China
National flags
Special administrative regions flags
Military flags
Civil flags
City flags
Political flags
Flags of Political Groups and Separatist Movements
Proposed national flags of the People's Republic of China
In July 1949, a contest was announced for a national flag for the newly founded People's Republic of China (PRC). From a total of about 3,000 proposed designs, 38 finalists were chosen. In September, the current flag, submitted by Zeng Liansong, was officially adopted, with the hammer and sickle removed.
Alternative proposals
Selection of proposals
House flags
Historical Communist States
Historical Military Flags
Republic of China
National flags
Standards
Head of state
Vice president
Other high executive officials
Military flags
Army
Navy
Air Force
Marine Corps
Combined Logistics Command
National Defense University
Coast Guard Administration
Police
Water Police
Fire Service
Rescue aviation
Ministries
Councils
Agency
Civil and Merchant Ensign
Postal flags
Chinese Maritime Customs Service
Salt Administration
Yacht Club Ensign
Sporting flags
City and county flags
As of 18 November 1997, the Chinese Government banned localities from making and using local flags and emblems. Despite the ban, some cities have adopted their own flags, which often include their local emblem, as shown below. The ROC-controlled areas continue to use their respective flags.
Provinces
The PRC-controlled mainland does not have provincial flags, but the ROC-controlled area has a flag for one of its two provinces.
History
University flags
Political flags
Cultural flags
Proposed flags
Republic of China
Taiwan Independence Movement
Railway flags
House flags
Association flags
Warlords
Pre-Qing States
Standards
Qing dynasty and other pre-1912 states
National flags
Standards
Military flags
Navy
Chinese Maritime Customs Service
House flags
Fla |
https://en.wikipedia.org/wiki/Erwin%20Kreyszig | Erwin Otto Kreyszig (January 6, 1922 in Pirna, Germany – December 12, 2008) was a German Canadian applied mathematician and Professor of Mathematics at Carleton University in Ottawa, Ontario, Canada. He was a pioneer in the field of applied mathematics: non-wave replicating linear systems. He was also a distinguished author, having written Advanced Engineering Mathematics, the leading undergraduate textbook of engineering mathematics for civil, mechanical, electrical, and chemical engineering students.
Kreyszig received his PhD degree in 1949 at the University of Darmstadt under the supervision of Alwin Walther. He then continued his research activities at the universities of Tübingen and Münster. Prior to joining Carleton University in 1984, he held positions at Stanford University (1954/55), the University of Ottawa (1955/56), Ohio State University (1956–60, professor 1957) and he completed his habilitation at the University of Mainz. In 1960 he became professor at the Technical University of Graz and organized the Graz 1964 Mathematical Congress. He worked at the University of Düsseldorf (1967–71) and at the University of Karlsruhe (1971–73). From 1973 through 1984 he worked at the University of Windsor and since 1984 he had been at Carleton University. He was awarded the title of Distinguished Research Professor in 1991 in recognition of a research career during which he published 176 papers in refereed journals, and 37 in refereed conference proceedings.
Kreyszig was also an administrator, developing a Computer Centre at the University of Graz, and at the Mathematics Institute at the University of Düsseldorf. In 1964, he took a leave of absence from Graz to initiate a doctoral program in mathematics at Texas A&M University.
Kreyszig authored 14 books, including Advanced Engineering Mathematics, which was published in its 10th edition in 2011. He supervised 104 master's and 22 doctoral students as well as 12 postdoctoral researchers. Together with his so |
https://en.wikipedia.org/wiki/Semi-symmetric%20graph | In the mathematical field of graph theory, a semi-symmetric graph is an undirected graph that is edge-transitive and regular, but not vertex-transitive. In other words, a graph is semi-symmetric if each vertex has the same number of incident edges, and there is a symmetry taking any of the graph's edges to any other of its edges, but there is some pair of vertices such that no symmetry maps the first into the second.
Properties
A semi-symmetric graph must be bipartite, and its automorphism group must act transitively on each of the two vertex sets of the bipartition (in fact, regularity is not required for this property to hold). For instance, in the diagram of the Folkman graph shown here, green vertices can not be mapped to red ones by any automorphism, but every two vertices of the same color are symmetric with each other.
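To make the three ingredients of the definition concrete, here is a minimal brute-force sketch (illustrative only, not from the article; serious work would use a tool such as nauty). It checks regularity, vertex-transitivity, and edge-transitivity on the star K_{1,3}, a graph that is edge- but not vertex-transitive, yet is not semi-symmetric because it is not regular:

```python
from itertools import permutations

# Star graph K_{1,3}: center 0 joined to leaves 1, 2, 3.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3)]
edge_set = {frozenset(e) for e in edges}

# Brute-force all automorphisms: vertex permutations preserving the edge set.
autos = []
for perm in permutations(vertices):
    m = dict(zip(vertices, perm))
    if {frozenset({m[u], m[v]}) for u, v in edges} == edge_set:
        autos.append(m)

# Regular: every vertex has the same degree (here no: degrees are 3, 1, 1, 1).
degrees = {v: sum(v in e for e in edges) for v in vertices}
regular = len(set(degrees.values())) == 1

# Vertex-transitive: the orbit of one vertex under Aut(G) is all of V.
vertex_transitive = {m[0] for m in autos} == set(vertices)

# Edge-transitive: the orbit of one edge under Aut(G) is all of E.
edge_transitive = {frozenset({m[0], m[1]}) for m in autos} == edge_set

print(len(autos), regular, vertex_transitive, edge_transitive)
# K_{1,3} is edge-transitive but neither regular nor vertex-transitive,
# so it does not qualify as semi-symmetric.
```

The center is the unique vertex of degree 3, so every automorphism fixes it, which is exactly why no symmetry can map a leaf to the center.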
History
Semi-symmetric graphs were first studied by E. Dauber, a student of F. Harary, in a paper, no longer available, titled "On line- but not point-symmetric graphs". This was seen by Jon Folkman, whose paper, published in 1967, includes the smallest semi-symmetric graph, now known as the Folkman graph, on 20 vertices.
The term "semi-symmetric" was first used by Klin et al. in a paper they published in 1978.
Cubic graphs
The smallest cubic semi-symmetric graph (that is, one in which each vertex is incident to exactly three edges) is the Gray graph on 54 vertices. It was first observed to be semi-symmetric by I. Z. Bouwer in 1968. It was proven to be the smallest cubic semi-symmetric graph by Dragan Marušič and Aleksander Malnič.
All the cubic semi-symmetric graphs on up to 10000 vertices are known. According to Conder, Malnič, Marušič and Potočnik, the four smallest possible cubic semi-symmetric graphs after the Gray graph are the Iofinova–Ivanov graph on 110 vertices, the Ljubljana graph on 112 vertices, a graph on 120 vertices with girth 8 and the Tutte 12-cage. |