source | text |
|---|---|
https://en.wikipedia.org/wiki/Wirehog | Wirehog was a friend-to-friend file sharing program that was linked to Facebook and allowed people to transfer files directly between computers.
History
Wirehog was created by Andrew McCollum, Mark Zuckerberg, Adam D'Angelo, and Sean Parker during their development of the Facebook social networking website in Palo Alto in the summer and fall of 2004. The only way to join Wirehog was through an invitation from a member and although it was originally planned as an integrated feature of Facebook, it could also be used by friends who were not registered on Facebook. Wirehog was launched in October 2004, and taken down in January 2006. Its target audience at the time was the same as the campus-only file-sharing service i2hub that had launched earlier that year. i2hub was gaining a lot of traction and growing rapidly. In an interview with The Harvard Crimson, Zuckerberg said, "I think Wirehog will probably spread in the same way that thefacebook did."
The software was described by its creators as "an HTTP file transfer system using dynamic DNS and NAT traversal to make your personal computer addressable, routable and easily accessible". The client allowed users to both access data stored on their home computer from a remote location and let friends exchange files between each other's computers. In ways, Wirehog was a project comparable to Alex Pankratov's Hamachi VPN, the open-source OneSwarm private network, or the darknet RetroShare software.
Until at least July 2005, Facebook officially endorsed the p2p client, saying on their website:
"Wirehog is a social application that lets friends exchange files of any type with each other over the web. Facebook and Wirehog are integrated so that Wirehog knows who your friends are in order to make sure that only people in your network can see your files. Facebook certifies that it is okay to enter your facebook email address and password into Wirehog for the purposes of this integration."
The Wirehog software was written in |
https://en.wikipedia.org/wiki/Negative%20luminescence | Negative luminescence is a physical phenomenon by which an electronic device emits less thermal radiation when an electric current is passed through it than it does in thermal equilibrium (current off). When viewed by a thermal camera, an operating negative luminescent device looks colder than its environment.
Physics
Negative luminescence is most readily observed in semiconductors. Incoming infrared radiation is absorbed in the material by the creation of an electron–hole pair. An electric field is used to remove the electrons and holes from the region before they have a chance to recombine and re-emit thermal radiation. This effect occurs most efficiently in regions of low charge carrier density.
Negative luminescence has also been observed in semiconductors in orthogonal electric and magnetic fields. In this case, the junction of a diode is not necessary and the effect can be observed in bulk material. A term that has been applied to this type of negative luminescence is galvanomagnetic luminescence.
Negative luminescence might appear to be a violation of Kirchhoff's law of thermal radiation. This is not true, as the law only applies in thermal equilibrium.
Another term that has been used to describe negative luminescent devices is "Emissivity switch", as an electric current changes the effective emissivity.
History
This effect was first seen by Russian physicists in the 1960s at the A. F. Ioffe Physicotechnical Institute, Leningrad, Russia. Subsequently, it was studied in semiconductors such as indium antimonide (InSb), germanium (Ge) and indium arsenide (InAs) by workers in West Germany, Ukraine (Institute of Semiconductor Physics, Kyiv), Japan (Chiba University) and the United States. It was first observed in the mid-infrared (3-5 µm wavelength) in the more convenient diode structures, InSb heterostructure diodes, by workers at the Defence Research Agency, Great Malvern, UK (now QinetiQ). These British workers later demonstrated LWIR band (8-12 µm) negative |
https://en.wikipedia.org/wiki/RNA%20polymerase%20I | RNA polymerase I (also known as Pol I) is, in higher eukaryotes, the polymerase that transcribes only ribosomal RNA (but not 5S rRNA, which is synthesized by RNA polymerase III), a type of RNA that accounts for over 50% of the total RNA synthesized in a cell.
Structure and function
Pol I is a 590 kDa enzyme that consists of 14 protein subunits (polypeptides), and its crystal structure in the yeast Saccharomyces cerevisiae was solved at 2.8Å resolution in 2013. Twelve of its subunits have identical or related counterparts in RNA polymerase II (Pol II) and RNA polymerase III (Pol III). The other two subunits are related to Pol II initiation factors and have structural homologues in Pol III.
Ribosomal DNA transcription is confined to the nucleolus, where about 400 copies of the 42.9-kb rDNA gene are present, arranged as tandem repeats in nucleolus organizer regions. Each copy contains a ~13.3 kb sequence encoding the 18S, the 5.8S, and the 28S RNA molecules, interlaced with two internal transcribed spacers, ITS1 and ITS2, and flanked upstream by a 5' external transcribed spacer and a downstream 3' external transcribed spacer. These components are transcribed together to form the 45S pre-rRNA. The 45S pre-rRNA is then post-transcriptionally cleaved by C/D box and H/ACA box snoRNAs, removing the two spacers and resulting in the three rRNAs by a complex series of steps. The 5S ribosomal RNA is transcribed by Pol III. Because of the simplicity of Pol I transcription, it is the fastest-acting polymerase and contributes up to 60% of cellular transcription levels in exponentially growing cells.
In Saccharomyces cerevisiae, the 5S rDNA has the unusual feature of lying inside the rDNA repeat. It is flanked by non-transcribed spacers NTS1 and NTS2, and is transcribed backwards by Pol III, separately from the rest of the rDNA.
Regulation of rRNA transcription
The rate of cell growth is directly dependent on the rate of protein synthesis, which is itself intricately linked |
https://en.wikipedia.org/wiki/Nintendo%20Tumbler%20Puzzle | The Nintendo Tumbler Puzzle, also known as the Ten Billion Barrel in English and originally in Japanese, is a mathematical and mechanical puzzle. It is one of many mechanical toys invented by Gunpei Yokoi at Nintendo. It was released in 1980 under . The patent expired in March 1995 due to non-payment of a maintenance fee.
Overview
The puzzle consists of a cylinder of transparent plastic divided into six levels, within a black plastic frame. The frame consists of upper and lower discs that are joined through the middle of the cylinder.
The top and bottom levels of the cylinder form a single piece, but between them are two rotatable pieces each two levels high. Each of the four central levels is divided into five chambers each containing a colored ball. The top and bottom levels have only three chambers, containing either three balls or three parts of the frame depending on the relative position of frame and cylinder.
The balls in three of the five resulting columns of chambers can be moved up or down one level by raising or lowering the frame relative to the transparent cylinder.
The object is to sort the balls, so that each of the five columns contains balls of a single color.
Cameos
As a tribute to the late creator of the puzzle and former Metroid series producer, Gunpei Yokoi, the puzzle has a small cameo appearance in Metroid Prime for the GameCube. A large-scale version appears in the Phazon Mines in which Samus Aran uses the Morph Ball to interact with it by rotating the levels and climbing the side of it with magnetic rails.
The puzzle appears in Animal Crossing: New Leaf, as one of the prizes from Redd during the fireworks displays throughout August.
It is an easter egg in The Legend of Zelda: Majora's Mask 3D.
In WarioWare Gold, it appears as one of the microgames, requiring the matching of 4 of its marbles. |
https://en.wikipedia.org/wiki/Revetment | A revetment in stream restoration, river engineering or coastal engineering is a facing of impact-resistant material (such as stone, concrete, sandbags, or wooden piles) applied to a bank or wall in order to absorb the energy of incoming water and protect it from erosion. River or coastal revetments are usually built to preserve the existing uses of the shoreline and to protect the slope.
In architecture generally, it means a retaining wall. In military engineering it is a structure formed to secure an area from artillery, bombing, or stored explosives.
Freshwater revetments
Many revetments are used to line the banks of freshwater rivers, lakes, and man-made reservoirs, especially to prevent damage during periods of floods or heavy seasonal rains (see riprap). Many materials may be used: wooden piles, loose-piled boulders or concrete shapes, or more solid banks.
Concrete revetments are the most common type of infrastructure used to control the Mississippi River. More than of concrete matting has been placed in river bends between Cairo, Illinois and the Gulf of Mexico to slow the natural erosion that would otherwise frequently change small parts of the river's course.
Revetments as coastal defence
Revetments are used as a low-cost solution for coastal erosion defense in areas where crashing waves may otherwise deplete the coastline.
Wooden revetments are made of planks laid against wooden frames so that they disrupt the force of the water. Although once popular, the use of wooden revetments has largely been replaced by modern concrete-based defense structures such as tetrapods. In the 1730s, wooden revetments protecting dikes in the Netherlands were phased out due to the spread of shipworm infestations.
Dynamic revetments use gravel or cobble-sized rocks to mimic a natural cobble beach for the purpose of reducing wave energy and stopping or slowing coastal erosion.
Unlike solid structures, dynamic revetments are designed to allow wave action to rearra |
https://en.wikipedia.org/wiki/History%20of%20genetics | The history of genetics dates from the classical era with contributions by Pythagoras, Hippocrates, Aristotle, Epicurus, and others. Modern genetics began with the work of the Augustinian friar Gregor Johann Mendel. His work on pea plants, published in 1866, provided the initial evidence that, on its rediscovery in 1900, helped to establish the theory of Mendelian inheritance.
In ancient Greece, Hippocrates suggested that all organs of the body of a parent gave off invisible “seeds,” miniaturised components that were transmitted during sexual intercourse and combined in the mother's womb to form a baby. In early modern times, William Harvey's book On Animal Generation contradicted Aristotle's theories of genetics and embryology.
The 1900 rediscovery of Mendel's work by Hugo de Vries, Carl Correns and Erich von Tschermak led to rapid advances in genetics. By 1915 the basic principles of Mendelian genetics had been studied in a wide variety of organisms — most notably the fruit fly Drosophila melanogaster. Led by Thomas Hunt Morgan and his fellow "drosophilists", geneticists developed the Mendelian model, which was widely accepted by 1925. Alongside experimental work, mathematicians developed the statistical framework of population genetics, bringing genetic explanations into the study of evolution.
With the basic patterns of genetic inheritance established, many biologists turned to investigations of the physical nature of the gene. In the 1940s and early 1950s, experiments pointed to DNA as the portion of chromosomes (and perhaps other nucleoproteins) that held genes. A focus on new model organisms such as viruses and bacteria, along with the discovery of the double helical structure of DNA in 1953, marked the transition to the era of molecular genetics.
In the following years, chemists developed techniques for sequencing both nucleic acids and proteins, while many others worked out the relationship between these two forms of biological molecules and disc |
https://en.wikipedia.org/wiki/Channelrhodopsin | Channelrhodopsins are a subfamily of retinylidene proteins (rhodopsins) that function as light-gated ion channels. They serve as sensory photoreceptors in unicellular green algae, controlling phototaxis: movement in response to light. Expressed in cells of other organisms, they enable light to control electrical excitability, intracellular acidity, calcium influx, and other cellular processes (see optogenetics). Channelrhodopsin-1 (ChR1) and Channelrhodopsin-2 (ChR2) from the model organism Chlamydomonas reinhardtii are the first discovered channelrhodopsins. Variants that are sensitive to different colors of light or selective for specific ions (ACRs, KCRs) have been cloned from other species of algae and protists.
History
Phototaxis and photoorientation of microalgae have been studied over more than a hundred years in many laboratories worldwide. In 1980, Ken Foster developed the first consistent theory about the functionality of algal eyes. He also analyzed published action spectra and complemented blind cells with retinal and retinal analogues, which led to the conclusion that the photoreceptor for motility responses in Chlorophyceae is rhodopsin.
Photocurrents of the Chlorophyceae Haematococcus pluvialis and Chlamydomonas reinhardtii were studied over many years in the groups of Oleg Sineshchekov and Peter Hegemann. Based on action spectroscopy and simultaneous recordings of photocurrents and flagellar beating, it was determined that the photoreceptor currents and subsequent flagellar movements are mediated by rhodopsin and control phototaxis and photophobic responses. The extremely fast rise of the photoreceptor current after a brief light flash led to the conclusion that the rhodopsin and the channel are intimately linked in a protein complex, or even within one single protein.
The name "channelrhodopsin" was coined to highlight this unusual property, and the sequences were renamed accordingly. The nucleotide sequences of the rhodopsins now called channelr |
https://en.wikipedia.org/wiki/Laplace%E2%80%93Beltrami%20operator | In differential geometry, the Laplace–Beltrami operator is a generalization of the Laplace operator to functions defined on submanifolds in Euclidean space and, even more generally, on Riemannian and pseudo-Riemannian manifolds. It is named after Pierre-Simon Laplace and Eugenio Beltrami.
For any twice-differentiable real-valued function f defined on Euclidean space Rn, the Laplace operator (also known as the Laplacian) takes f to the divergence of its gradient vector field, which is the sum of the n pure second derivatives of f with respect to each vector of an orthonormal basis for Rn. Like the Laplacian, the Laplace–Beltrami operator is defined as the divergence of the gradient, and is a linear operator taking functions into functions. The operator can be extended to operate on tensors as the divergence of the covariant derivative. Alternatively, the operator can be generalized to operate on differential forms using the divergence and exterior derivative. The resulting operator is called the Laplace–de Rham operator (named after Georges de Rham).
Details
The Laplace–Beltrami operator, like the Laplacian, is the (Riemannian) divergence of the (Riemannian) gradient:
An explicit formula in local coordinates is possible.
Suppose first that M is an oriented Riemannian manifold. The orientation allows one to specify a definite volume form on M, given in an oriented coordinate system xi by
where is the absolute value of the determinant of the metric tensor, and the dxi are the 1-forms forming the dual frame to the frame
of the tangent bundle and is the wedge product.
The divergence of a vector field on the manifold is then defined as the scalar function with the property
where LX is the Lie derivative along the vector field X. In local coordinates, one obtains
where here and below the Einstein notation is implied, so that the repeated index i is summed over.
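For reference, the standard local-coordinate expressions that this passage walks through (volume form, divergence, gradient, and the resulting operator) can be written, with |g| denoting the absolute value of the determinant of the metric tensor, as:
\[ \mathrm{vol}_n := \sqrt{|g|}\; dx^1 \wedge \cdots \wedge dx^n, \qquad \operatorname{div} X = \frac{1}{\sqrt{|g|}}\, \partial_i\!\left(\sqrt{|g|}\, X^i\right), \]
\[ (\operatorname{grad} f)^i = g^{ij}\, \partial_j f, \qquad \Delta f = \operatorname{div}(\operatorname{grad} f) = \frac{1}{\sqrt{|g|}}\, \partial_i\!\left(\sqrt{|g|}\, g^{ij}\, \partial_j f\right). \]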
The gradient of a scalar function ƒ is the vector field grad f that may be defined through the in |
https://en.wikipedia.org/wiki/Tensor%20density | In differential geometry, a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another (see tensor field), except that it is additionally multiplied or weighted by a power W of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density. A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W are called tensor capacity. A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle.
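As a concrete sketch of the weighting described above (conventions for the sign and placement of the weight W vary between authors), a tensor density 𝔗 of weight W with one upper and one lower index transforms under a change of coordinates x → x̄ as:
\[ \bar{\mathfrak{T}}^{i}{}_{j} \;=\; \left(\det\!\left[\frac{\partial x^{a}}{\partial \bar{x}^{b}}\right]\right)^{W} \frac{\partial \bar{x}^{i}}{\partial x^{k}}\, \frac{\partial x^{l}}{\partial \bar{x}^{j}}\; \mathfrak{T}^{k}{}_{l}, \]
so an ordinary tensor is the special case W = 0, and under this convention the square root of |det g| transforms as a scalar density of weight +1.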
Motivation
In physics and related fields, it is often useful to work with the components of an algebraic object rather than the object itself. An example would be decomposing a vector into a sum of basis vectors weighted by some coefficients such as
where is a vector in 3-dimensional Euclidean space, are the usual standard basis vectors in Euclidean space. This is usually necessary for computational purposes, and can often be insightful when algebraic objects represent complex abstractions but their components have concrete interpretations. However, with this identification, one has to be careful to track changes of the underlying basis in which the quantity is expanded; it may in the course of a computation become expedient to change the basis while the vector remains fixed in physical space. More generally, if an algebraic object represents a geometric object, but is expressed in terms of a particular basis, then it is necessary to, when the basis is changed, also change the representation. Physicists will often call this representation of a geometric object a tensor if it transforms under a sequence of linear maps given a linear change of basis (although confusingly others call the underlyi |
https://en.wikipedia.org/wiki/PCM%20adaptor | A PCM adaptor is a device that encodes digital audio as video for recording on a videocassette recorder. The adapter also has the ability to decode a video signal back to digital audio for playback. This digital audio system was used for mastering early compact discs.
Operation
High-quality pulse-code modulation (PCM) audio requires a significantly larger bandwidth than a regular analog audio signal. For example, a 16-bit PCM signal requires an analog bandwidth of about 1-1.5 MHz compared to about 15-20 kHz of analog bandwidth required for an analog audio signal. A standard analog audio recorder cannot meet this requirement. One solution arrived at in the early 1980s was to use a videotape recorder, which is capable of recording signals with higher bandwidths.
A means of converting digital audio into a video format was necessary. Such an audio recording system includes two devices: the PCM adaptor, which converts audio into pseudo-video, and the videocassette recorder. A PCM adaptor performs an analog-to-digital conversion producing a series of binary digits, which, in turn, is coded and modulated into a black and white video signal, appearing as a vibrating checkerboard pattern, which can then be recorded as a video signal.
Most video-based PCM adaptors record audio at 14 or 16 bits per sample, with a sampling frequency of 44.1 kHz for PAL or monochrome NTSC, or 44.056 kHz for color NTSC. Some of the earlier models, such as the Sony PCM-100, recorded 16 bits per sample, but used only 14 of the bits for the audio, with the remaining 2 bits used for error correction for the case of dropouts or other anomalies being present on the videotape.
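As a quick sanity check on the bandwidth figures above (assuming two audio channels, which the text does not state explicitly), the raw bit rate of 16-bit audio sampled at 44.1 kHz is:
\[ 44{,}100\ \tfrac{\text{samples}}{\text{s}} \times 16\ \tfrac{\text{bits}}{\text{sample}} \times 2\ \text{channels} = 1{,}411{,}200\ \text{bit/s} \approx 1.4\ \text{Mbit/s}, \]
which is of the same order as the 1-1.5 MHz of analog bandwidth quoted above, and far beyond what an analog audio recorder can capture.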
Sampling frequency
The use of video for the PCM adapter helps to explain the choice of sampling frequency for the CD, because the number of video lines, frame rate and bits per line end up dictating the sampling frequency one can achieve. A sampling frequency of 44.1 kHz was thus adopted for the compact disc, as at the time, th |
https://en.wikipedia.org/wiki/Embedded%20Java | Embedded Java refers to versions of the Java program language that are designed for embedded systems. Since 2010 embedded Java implementations have come closer to standard Java, and are now virtually identical to the Java Standard Edition. Since Java 9 customization of the Java Runtime through modularization removes the need for specialized Java profiles targeting embedded devices.
History
Although in the past some differences existed between embedded Java and traditional PC based Java, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (such as consumer, industrial, white goods, healthcare, metering, and smart-market devices in general).
CORE embedded Java API for a unified Embedded Java ecosystem
In order for a software component to run on any Java system, it must target the core minimal API provided by the different providers of the embedded Java ecosystem. Companies share the same eight packages of pre-written programs. The packages (java.lang, java.io, java.util, ...) form the CORE Embedded Java API, which means that embedded programmers can rely on them to make any worthwhile use of the Java language.
Old distinctions between SE embedded API and ME embedded API from ORACLE
Java SE embedded is based on desktop Java Platform, Standard Edition. It is designed to be used on systems with at least 32 MB of RAM, and can work on Linux ARM, x86, or Power ISA, and Windows XP and Windows XP Embedded architectures.
Java ME embedded used to be based on the Connected Device Configuration subset of Java Platform, Micro Edition. It is designed to be used on systems with at least 8 MB of RAM, and can work on Linux ARM, PowerPC, or MIPS architecture.
See also
Excelsio |
https://en.wikipedia.org/wiki/Telecommunication%20control%20unit | A telecommunication control unit (TCU), line control unit, or terminal control unit (although terminal control unit may also refer to a terminal cluster controller) is a Front-end processor for mainframes and some minicomputers which supports attachment of one or more telecommunication lines. TCUs free processors from handling the data coming in and out of RS-232 ports. The TCU can support multiple terminals, sometimes hundreds. Many of these TCUs can support RS-232 when it is required, although there are other serial interfaces as well.
The advent of ubiquitous TCP/IP has reduced the need for telecommunications control units.
See also
Terminal access controller
IBM 270x
IBM 3705 Communications Controller
IBM 3720
IBM 3745 |
https://en.wikipedia.org/wiki/Solidum%20Systems | Solidum Systems was a fabless semiconductor company founded by Feliks Welfeld and Misha Nossik in Ottawa, Ontario Canada in 1997. The company developed a series of rule-based network classification semiconductor devices. Some of their devices could be found in systems which supported 10 Gbit/s interfaces.
Solidum was acquired in October 2002 by Integrated Device Technology. IDT closed the Ottawa offices supporting the product in March 2009.
Misha Nossik was also the second chairman of the Network Processing Forum. The NPF also released the Look-Aside Interface which is an important specification for Network Search Elements such as Solidum's devices.
Products
Solidum produced a set of Traffic Classification devices called the PAX.port 1100, PAX.port 1200, and PAX.port 2500
The classifier chips were used in Network Switches and Load Balancers.
External links
Packet Description Language introduced Archived
1999 Packet Processing introduction Archived
2001 2nd round financing
2002 NPF names Misha Nossik Chairman
Companies established in 1997
Defunct networking companies
Fabless semiconductor companies
Companies based in Ottawa
Semiconductor companies of Canada
Defunct computer companies of Canada |
https://en.wikipedia.org/wiki/Polar%20set | In functional and convex analysis, and related disciplines of mathematics, the polar set is a special convex set associated to any subset of a vector space lying in the dual space
The bipolar of a subset is the polar of but lies in (not ).
Definitions
There are at least three competing definitions of the polar of a set, originating in projective geometry and convex analysis.
In each case, the definition describes a duality between certain subsets of a pairing of vector spaces over the real or complex numbers ( and are often topological vector spaces (TVSs)).
If is a vector space over the field then unless indicated otherwise, will usually, but not always, be some vector space of linear functionals on and the dual pairing will be the bilinear () defined by
If is a topological vector space then the space will usually, but not always, be the continuous dual space of in which case the dual pairing will again be the evaluation map.
Denote the closed ball of radius centered at the origin in the underlying scalar field of by
Functional analytic definition
Absolute polar
Suppose that is a pairing.
The polar or absolute polar of a subset of is the set:
where denotes the image of the set under the map defined by
If denotes the convex balanced hull of which by definition is the smallest convex and balanced subset of that contains then
This is an affine shift of the geometric definition;
it has the useful characterization that the functional-analytic polar of the unit ball (in ) is precisely the unit ball (in ).
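A minimal sketch of the definition being described, assuming the standard notation of a pairing (X, Y, b): the absolute polar of a subset A of X is
\[ A^{\circ} := \Bigl\{\, y \in Y \;:\; \sup_{x \in A} |b(x, y)| \leq 1 \,\Bigr\}. \]
In a normed space paired with its continuous dual, this gives the unit-ball statement above: the polar of the closed unit ball of X is exactly the closed unit ball of the dual space.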
The prepolar or absolute prepolar of a subset of is the set:
Very often, the prepolar of a subset of is also called the polar or absolute polar of and denoted by ;
in practice, this reuse of notation and of the word "polar" rarely causes any issues (such as ambiguity) and many authors do not even use the word "prepolar".
The bipolar of a subset of often denoted by is the set ;
that is,
Real polar
|
https://en.wikipedia.org/wiki/Jonathan%20Steuer | Jonathan Steuer (born December 3, 1965, in Wisconsin) is a pioneer in online publishing.
Steuer led the launch teams of a number of early and influential online publishing ventures, including Cyborganic, a pioneering online/offline community, HotWired, the first ad-supported web magazine, and c|net's online operations. Steuer's article "Defining virtual realities: Dimensions determining telepresence" is widely cited in academic and industry literature. Originally published in 1992 in the Journal of Communication 42, 73-9, it has been reprinted in Communication in the Age of Virtual Reality (1995), F. Biocca & M. R. Levy (Eds.).
Steuer's vividness and interactivity matrix from that article appeared in Wired circa 1995 and has been particularly influential in shaping the discourse by defining virtual reality in terms of human experience, rather than technological hardware, and setting out vividness and interactivity as axial dimensions of that experience. Steuer's notability in diverse arenas as a scholar, architect, and instigator of new media is documented in multiple, independent, non-trivial, published works.
Steuer has been a consultant and senior executive for a number of other online media startups: CNet, ZDTV, Sawyer Media Systems and Scient.
Steuer has an AB in philosophy from Harvard University, and a PhD in communication theory & research from Stanford University. There, his doctoral dissertation concerned Vividness and Source of Evaluation as Determinants of Social Responses Toward Mediated Representations of Agency.
Personal Life
He is married to Marjorie Ingall. Formerly a longtime resident of the Bay Area, Steuer today resides in New York City. |
https://en.wikipedia.org/wiki/Red%20meat | In gastronomy, red meat is commonly red when raw (and a dark color after it is cooked), in contrast to white meat, which is pale in color before (and after) cooking. In culinary terms, only flesh from mammals or fowl (not fish) is classified as red or white. In nutritional science, red meat is defined as any meat that has more of the protein myoglobin than white meat. White meat is defined as non-dark meat from fish or chicken (excluding the leg or thigh, which is called dark meat).
Definition
Under the culinary definition, the meat from adult or "gamey" mammals (for example, beef, horse, mutton, venison, boar, hare) is red meat, while that from young mammals (rabbit, veal, lamb) is white. Poultry is white, excluding certain birds such as ostriches. Most cuts of pork are red, others are white. Game is sometimes put in a separate category altogether. (French: viandes noires — "dark meats".) Some meats (lamb, pork) are classified differently by different writers.
According to the United States Department of Agriculture (USDA), all meats obtained from mammals (regardless of cut or age) are red meats because they contain more myoglobin, which gives them their red color, than fish or white meat (but not necessarily dark meat) from chicken. Some cuts of pork are considered white under the culinary definition, but all pork is considered red meat in nutritional studies. The National Pork Board has positioned it as "the other white meat", profiting from the ambiguity to suggest that pork has the nutritional properties of white meat, which is considered more healthful.
Nutrition
Red meat contains large amounts of iron, creatine, minerals such as zinc and phosphorus, and B vitamins (niacin, vitamin B12, thiamin and riboflavin). Red meat is a source of lipoic acid.
Red meat contains small amounts of vitamin D. Offal such as liver contains much higher quantities than other parts of the animal.
In 2011, the USDA launched MyPlate, which did not distinguish between kinds of |
https://en.wikipedia.org/wiki/Keyswitch | A keyswitch is a type of small switch used for keys on keyboards.
Key switch is also used to describe a switch operated by a key, usually used in burglar alarm circuits. A car ignition is also a switch of this type.
Computer keyboards |
https://en.wikipedia.org/wiki/Polar%20distance%20%28astronomy%29 | In the celestial equatorial coordinate system Σ(α, δ) in astronomy, polar distance (PD) is an angular distance of a celestial object on its meridian measured from the celestial pole, similar to the way declination (dec, δ) is measured from the celestial equator.
Definition
Polar distance in celestial navigation is the angle between the celestial pole and the position of the body on its declination circle.
Referring to the diagram:
P - pole, WQE - equator, Z - zenith of the observer,
Y - lower meridian passage of the body,
X - upper meridian passage of the body.
The body lies on the declination circle (XY). The distance PY or PX is the polar distance of the body.
NP = ZQ = latitude of the observer.
NY and NX are the true altitudes of the body at the corresponding instants.
Polar distance (PD) = 90° ± δ
Polar distances are expressed in degrees and cannot exceed 180° in magnitude. An object on the celestial equator has a PD of 90°.
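A short worked example of the rule above (illustrative figures, not from the source): for a star of declination δ = +40°, the polar distance measured from the north celestial pole is
\[ \text{PD} = 90^\circ - 40^\circ = 50^\circ, \]
while measured from the south celestial pole it is 90° + 40° = 130°.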
Polar distance is affected by the precession of the equinoxes.
If the polar distance of the Sun is equal to the observer's latitude, the shadow path of a gnomon's tip on a sundial will be a parabola; at higher latitudes it will be an ellipse and lower, a hyperbola. |
https://en.wikipedia.org/wiki/Covermount | Covermount (sometimes written cover mount) is the name given to storage media (containing software and or audiovisual media) or other products (ranging from toys to flip-flops) packaged as part of a magazine or newspaper. The name comes from the method of packaging; the media or product is placed in a transparent plastic sleeve and mounted on the cover of the magazine with adhesive tape or glue.
History
Audio recordings were distributed in the UK by the use of covermounts in the 1960s by the fortnightly satirical magazine Private Eye though the term "covermount" was not in usage at that time. The Private Eye recordings were pressed onto 7" floppy vinyl (known as "flexi-discs" and "flimsies") and mounted on to the front of the magazine. The weekly pop music paper NME issued audio recordings of rock music on similar 7" flexi-discs as covermounts in the 1970s.
The covermount practice continued with computer magazines in the early era of home computers. In the United Kingdom, computer hobbyist magazines began distributing tapes and later floppy disks with their publications. These disks included demo and shareware versions of games, applications, computer drivers, operating systems, computer wallpapers and other (usually free) content. One of the first games to be distributed as a covermount was the 1984 The Thompson Twins Adventure.
Most magazines backed up by large publishers like Linux Format included a covermount CD or DVD with a Linux distribution and other open-source applications. The distribution of discs with source programs was also common in programming magazines: while the printed version had the code explained, the disk had the code ready to be compiled without forcing the reader to type the whole listing into the computer by hand.
In November 2015, The MagPi magazine brought the concept full circle and attached a free Raspberry Pi Zero on the cover, the first full computer to be included as a covermount on a magazine.
In other places, such as Finl |
https://en.wikipedia.org/wiki/Elevation | The elevation of a geographic location is its height above or below a fixed reference point, most commonly a reference geoid, a mathematical model of the Earth's sea level as an equipotential gravitational surface (see Geodetic datum § Vertical datum).
The term elevation is mainly used when referring to points on the Earth's surface, while altitude or geopotential height is used for points above the surface, such as an aircraft in flight or a spacecraft in orbit, and depth is used for points below the surface.
Elevation is not to be confused with the distance from the center of the Earth. Due to the equatorial bulge, the summits of Mount Everest and Chimborazo have, respectively, the largest elevation and the largest geocentric distance.
Aviation
In aviation, the term elevation or aerodrome elevation is defined by the ICAO as the highest point of the landing area. It is often measured in feet and can be found in approach charts of the aerodrome. It is not to be confused with terms such as the altitude or height.
Maps and GIS
GIS or geographic information system is a computer system that allows for visualizing, manipulating, capturing, and storage of data with associated attributes. GIS offers better understanding of patterns and relationships of the landscape at different scales. Tools inside the GIS allow for manipulation of data for spatial analysis or cartography.
A topographical map is the main type of map used to depict elevation, often through use of contour lines.
In a Geographic Information System (GIS), digital elevation models (DEM) are commonly used to represent the surface (topography) of a place, through a raster (grid) dataset of elevations. Digital terrain models are another way to represent terrain in GIS.
USGS (United States Geologic Survey) is developing a 3D Elevation Program (3DEP) to keep up with growing needs for high quality topographic data. 3DEP is a collection of enhanced elevation data in the form of high quality LiDAR data over the c |
https://en.wikipedia.org/wiki/Elevation%20%28ballistics%29 | In ballistics, the elevation is the angle between the horizontal plane and the axial direction of the barrel of a gun, mortar or heavy artillery. Originally, elevation was a linear measure of how high the gunners had to physically lift the muzzle of a gun up from the gun carriage to compensate for projectile drop and hit targets at a certain distance.
Until WWI
Though early 20th-century firearms were relatively easy to fire, artillery was not. Before and during World War I, the only way to effectively fire artillery was plotting points on a plane.
Most artillery units seldom employed their guns in small numbers. Instead of using pin-point artillery firing they used old means of "fire for effect" using artillery en masse. This tactic was employed successfully by past armies.
By World War I, reasonably accurate artillery fire was possible even at long range requiring significant elevation. However, artillery tactics used in previous wars were carried on, and still had similar success where great accuracy was not required. Large warships such as battleships carried large-caliber guns that needed to be elevated above the direct point of aim for firing accurately at small targets at long range.
From WWII
As time passed on, more accurate artillery guns were developed in a range of sizes. Some small artillery pieces were used at high elevations as mortars, medium-sized guns were used on tanks as well as fixed positions, and the largest guns became long-range land batteries and battleship armaments.
With the introduction of better tanks in World War II, elevation had to be taken into account by tank gunners, who had to aim through the Gunner's Auxiliary Sights (GAS) or even through iron sights. At shorter ranges the high velocity of tank and other munitions made elevation less of an issue.
During World War II artillery fire-control systems (FCS) were introduced, improving the effectiveness of artillery fire.
With advances in the 21st century, it has become easy t |
https://en.wikipedia.org/wiki/Ecosystem%20ecology | Ecosystem ecology is the integrated study of living (biotic) and non-living (abiotic) components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals.
Ecosystem ecology examines physical and biological structures and examines how these ecosystem characteristics interact with each other. Ultimately, this helps us understand how to maintain high quality water and economically viable commodity production. A major focus of ecosystem ecology is on functional processes, ecological mechanisms that maintain the structure and services produced by ecosystems. These include primary productivity (production of biomass), decomposition, and trophic interactions.
Studies of ecosystem function have greatly improved human understanding of sustainable production of forage, fiber, fuel, and provision of water. Functional processes are mediated by regional-to-local level climate, disturbance, and management. Thus ecosystem ecology provides a powerful framework for identifying ecological mechanisms that interact with global environmental problems, especially global warming and degradation of surface water.
This example demonstrates several important aspects of ecosystems:
Ecosystem boundaries are often nebulous and may fluctuate in time
Organisms within ecosystems are dependent on ecosystem level biological and physical processes
Adjacent ecosystems closely interact and often are interdependent for maintenance of community structure and functional processes that maintain productivity and biodiversity
These characteristics also introduce practical problems into natural resource management. Who will manage which ecosystem? Will timber cutting in the forest degrade recreational fishing in the stream? These questions are difficult for land managers to address while the boundary between ecosystems remains unclear; even though decisions in |
https://en.wikipedia.org/wiki/List%20of%20Tetris%20variants | This is a list of variants of the game Tetris. It includes officially licensed Tetris sequels, as well as unofficial clones.
Official games
Unofficial games
See also
List of puzzle video games
Notes |
https://en.wikipedia.org/wiki/CBC%20Tower%20%28Mont-Carmel%29 | The CBC Tower, also known as the WesTower Transmission Tower, was a guyed mast for FM and TV transmission (rebuilt after a 2001 aircraft collision) located atop Mont-Carmel near Shawinigan, Quebec, Canada. The tower was built in 1972 and served for several decades as Quebec's primary CBC transmission point; it also served several radio and television stations for the Trois-Rivières market.
2001 Incident
On April 22, 2001 a lone pilot, Gilbert Paquette, flew his Cessna 150 into the tower and was killed. The fuselage of the plane remained wedged in the upper part of the tower, with the pilot's body inside. The crash also knocked the tower several metres off balance. It was decided that due to the structural damage and the need to recover the pilot's body the mast would have to be demolished. Several days later a controlled implosion brought the tower down, not damaging the several buildings nearby. At the time, it was the tallest structure to have ever been demolished with explosives.
In July 2003, a new mast broadcasting with over double the effective radiated power of the original (from 4,386 watts to 9,300 watts) was built in exactly the same place, on the same base. The tower structure itself is the same height, but the antenna at the top differs in height from the original.
See also
List of masts |
https://en.wikipedia.org/wiki/Data%20cube | In computer programming contexts, a data cube (or datacube) is a multi-dimensional ("n-D") array of values. Typically, the term data cube is applied in contexts where these arrays are massively larger than the hosting computer's main memory; examples include multi-terabyte/petabyte data warehouses and time series of image data.
The data cube is used to represent data (sometimes called facts) along some dimensions of interest.
For example, in online analytical processing (OLAP) such dimensions could be the subsidiaries a company has, the products the company offers, and time; in this setup, a fact would be a sales event where a particular product has been sold in a particular subsidiary at a particular time. In a satellite image time series, the dimensions would be latitude and longitude coordinates and time; a fact (sometimes called a measure) would be a pixel at a given space and time as taken by the satellite (following some processing that is not of concern here).
Even though it is called a cube (and the examples provided above happen to be 3-dimensional for brevity), a data cube generally is a multi-dimensional concept which can be 1-dimensional, 2-dimensional, 3-dimensional, or higher-dimensional.
In any case, every dimension divides the data into groups of cells, whereas each cell in the cube represents a single measure of interest. Sometimes cubes hold only a few values with the rest being empty, i.e. undefined; sometimes most or all cube coordinates hold a cell value. In the first case such data are called sparse, in the second case they are called dense, although there is no hard delineation between the two.
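As an illustrative (hypothetical) sketch in Python with NumPy, a tiny dense 3-D data cube over the dimensions subsidiary × product × time can be built and aggregated like this; the axis names and figures are invented for the example:

```python
import numpy as np

# Dimensions of the cube: 2 subsidiaries x 3 products x 4 time periods.
subsidiaries = ["North", "South"]
products = ["A", "B", "C"]
periods = ["Q1", "Q2", "Q3", "Q4"]

# Each cell holds one measure (here: units sold). A dense cube has a value in
# every cell; a sparse cube would leave most cells empty/undefined.
cube = np.arange(2 * 3 * 4).reshape(len(subsidiaries), len(products), len(periods))

# "Rolling up" a dimension means aggregating over that axis.
sales_per_subsidiary = cube.sum(axis=(1, 2))   # collapse products and time
sales_per_period = cube.sum(axis=(0, 1))       # collapse subsidiaries and products

# "Slicing" means fixing one coordinate to get a lower-dimensional sub-cube.
north_slice = cube[subsidiaries.index("North")]  # 3 x 4 array (products x time)

print(sales_per_subsidiary, sales_per_period, north_slice.shape)
```

Real data cubes differ from this toy mainly in scale: the array is far too large for main memory, so storage and aggregation are handled by a data warehouse or array database rather than an in-memory array.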
History
Multi-dimensional arrays have long been familiar in programming languages. Fortran offers arbitrarily-indexed 1-D arrays and arrays of arrays, which allows the construction of higher-dimensional arrays, up to 15 dimensions. APL supports n-D arrays with a rich set of operations. All these have in common that arrays must fit into the main memory and are available |
https://en.wikipedia.org/wiki/Orgueil%20%28meteorite%29 | Orgueil is a scientifically important carbonaceous chondrite meteorite that fell in southwestern France in 1864.
History
The Orgueil meteorite fell on May 14, 1864, a few minutes after 20:00 local time, near Orgueil in southern France. About 20 stones fell over an area of 5-10 square kilometres. A specimen of the meteorite was analyzed that same year by François Stanislaus Clöez, professor of chemistry at the Musée d'Histoire Naturelle, who focused on the organic matter found in this meteorite. He wrote that it contained carbon, hydrogen, and oxygen, and its composition was very similar to peat from the Somme valley or to the lignite of Ringkohl near Kassel. An intense scientific discussion ensued, continuing into the 1870s, as to whether the organic matter might have a biological origin.
Curation and Distribution
Orgueil specimens are in curation by bodies around the world. Given the large mass, samples are in circulation for nondestructive (and with sufficient justification, destructive) study and test.
Source: Grady, M. M. Catalogue of Meteorites, 5th Edition, Cambridge University Press
Composition and classification
Orgueil is one of five known meteorites belonging to the CI chondrite group (see meteorite classification), and is the largest (). This group has a composition that is essentially identical to that of the Sun, excluding gaseous elements like hydrogen and helium. Notably, though, the Orgueil meteorite is highly enriched in (volatile) mercury, which is undetectable in the solar photosphere; this is a major driver of the "mercury paradox", the observation that mercury abundances in meteorites do not follow the behaviour expected in the solar nebula from mercury's volatility and isotopic ratios.
Because of its extraordinarily primitive composition and relatively large mass, Orgueil is one of the most-studied meteorites. One notable discovery in Orgueil was a high concentration of isotopically anomalous xenon called "xenon-HL". The carrier of this gas is extremely fine-grained d |
https://en.wikipedia.org/wiki/Comprehensive%20School%20Mathematics%20Program | Comprehensive School Mathematics Program (CSMP) stands for both the name of a curriculum and the name of the project that was responsible for developing curriculum materials in the United States.
Two major curricula were developed as part of the overall CSMP project: the Comprehensive School Mathematics Program (CSMP), a K–6 mathematics program for regular classroom instruction, and the Elements of Mathematics (EM) program, a grades 7–12 mathematics program for gifted students. EM treats traditional topics rigorously and in depth, and was the only curriculum that strictly adhered to Goals for School Mathematics: The Report of the Cambridge Conference on School Mathematics (1963). As a result, it includes much of the content generally required for an undergraduate mathematics major. These two curricula are unrelated to one another, but certain members of the CSMP staff contributed to the development of both projects. Additionally, some staff of the Elements of Mathematics program were also involved with the Secondary School Mathematics Curriculum Improvement Study program. What follows is a description of the K–6 program that was designed for a general, heterogeneous audience.
The CSMP project was established in 1966, under the direction of Burt Kaufman, who remained director until 1979, succeeded by Clare Heidema. It was originally affiliated with Southern Illinois University in Carbondale, Illinois. After a year of planning, CSMP was incorporated into the Central Midwest Regional Educational Laboratory (later CEMREL, Inc.), one of the national educational laboratories funded at that time by the U.S. Office of Education. In 1984, the project moved to the Mid-continental Research for Learning (McREL) Institute's Comprehensive School Reform program, which supported the program until 2003. Heidema remained director until its conclusion. In 1984, it was implemented in 150 school districts in 42 states, reaching about 55,000 students.
Overview
The CSMP project employs four non-verbal |
https://en.wikipedia.org/wiki/Matrix%20calculus | In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.
Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section.
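As a sketch of the two conventions just mentioned, for a scalar y and a column vector x = (x_1, ..., x_n)^T, the derivative of y with respect to x is written either as a row vector (numerator layout) or as a column vector (denominator layout):
\[
\frac{\partial y}{\partial \mathbf{x}} =
\begin{bmatrix} \dfrac{\partial y}{\partial x_1} & \cdots & \dfrac{\partial y}{\partial x_n} \end{bmatrix}
\quad\text{(numerator layout)}
\qquad\text{vs.}\qquad
\frac{\partial y}{\partial \mathbf{x}} =
\begin{bmatrix} \dfrac{\partial y}{\partial x_1} \\ \vdots \\ \dfrac{\partial y}{\partial x_n} \end{bmatrix}
\quad\text{(denominator layout).}
\]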
Scope
Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the d |
https://en.wikipedia.org/wiki/Mousetrap%20car | A mousetrap car is a small vehicle whose only source of motive power is a mousetrap. Variations include the use of multiple traps, or very big rat traps, for added power.
Mousetrap cars are often used in physics or other physical science classes to help students build problem-solving skills, develop spatial awareness, learn to budget time, and practice cooperative behavior.
Design
The general style for a mousetrap car varies. A number of commercial vendors offer plans, kits and complete cars for sale. In addition to mousetrap cars, contests have been created for mousetrap boats and mousetrap airplanes.
Spring power
A mousetrap is powered by a helical torsion spring. Torsion springs obey an angular form of Hooke's law: τ = −κθ,
where τ is the torque exerted by the spring in newton-meters, and θ is the angle of twist from its equilibrium position in radians. κ is a constant with units of newton-meters per radian, variously called the spring's torsion coefficient, torsion elastic modulus, or just spring constant, equal to the torque required to twist the spring through an angle of 1 radian. It is analogous to the spring constant of a linear spring.
The energy U, in joules, stored in a torsion spring is: U = ½κθ².
When a mousetrap is assembled, the spring is initially twisted beyond its equilibrium position so that it applies significant torque to the bar when the trap is closed.
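As a rough worked example (the numbers are hypothetical and not from the source): a spring with torsion coefficient κ = 0.15 N·m/rad wound through half a turn, θ = π rad, stores
\[ U = \tfrac{1}{2}\kappa\theta^{2} = \tfrac{1}{2}(0.15)(\pi)^{2} \approx 0.74\ \text{J} \]
of energy, which is the budget the car's drive train can, at best, convert into kinetic energy and work against friction.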
Power transmission to axle
This motion must be used to turn the car's axle or wheels. The most common solution is to attach a string to the mouse trap's arm and then wrap it around an axle. As the bar is released, it pulls on the string, causing the axle (and wheels) to turn.
Tying the string directly to the mousetrap's bar, however, will not make good use of the energy stored in the spring. The distance between the opened and closed positions of the bar of a mousetrap is typically 10 cm, so this is how much string would be pulled. Wrapped around even a small diameter axle, this amount of string wi |
https://en.wikipedia.org/wiki/Linkage%20isomerism | In chemistry, linkage isomerism or ambidentate isomerism is a form of isomerism in which certain coordination compounds have the same composition but differ in their metal atom's connectivity to a ligand.
Typical ligands that give rise to linkage isomers are:
cyanide, CN− – isocyanide, NC−
cyanate, OCN− – isocyanate, NCO−
thiocyanate, SCN− – isothiocyanate, NCS−
selenocyanate, SeCN− – isoselenocyanate, NCSe−
nitrite, NO2−
sulfite, SO3^2−
Examples of linkage isomers are violet-colored and orange-colored . The isomerization of the S-bonded isomer to the N-bonded isomer occurs intramolecularly.
The complex cis-dichlorotetrakis(dimethylsulfoxide)ruthenium(II) () exhibits linkage isomerism of dimethyl sulfoxide ligands due to S- vs. O-bonding. Trans-dichlorotetrakis(dimethylsulfoxide)ruthenium(II) does not exhibit linkage isomers.
History
Linkage isomerism was first noted for nitropentaamminecobalt(III) chloride, . This cationic cobalt complex can be isolated as either of two linkage isomers. In the yellow-coloured isomer, the nitro ligand is bound through nitrogen. In the red linkage isomer, the nitrito is bound through one oxygen atom. The O-bonded isomer is often written as . Although the existence of the isomers had been known since the late 1800s, only in 1907 was the difference explained. It was later shown that the red isomer converted to the yellow isomer upon UV-irradiation. In this particular example, the formation of the nitro isomer () from the nitrito isomer () occurs by an intramolecular rearrangement. |
https://en.wikipedia.org/wiki/Global%20Assembly%20Cache | The Global Assembly Cache (GAC) is a machine-wide CLI assembly cache for the Common Language Infrastructure (CLI) in Microsoft's .NET Framework. The approach of having a specially controlled central repository addresses the flaws in the shared library concept and helps to avoid pitfalls of other solutions that led to drawbacks like DLL hell.
Requirements
Assemblies residing in the GAC must adhere to a specific versioning scheme which allows for side-by-side execution of different code versions. Specifically, such assemblies must be strongly named.
Usage
There are two ways to interact with the GAC: the Global Assembly Cache Tool (gacutil.exe) and the Assembly Cache Viewer (shfusion.dll).
Global Assembly Cache Tool
gacutil.exe is an older command-line utility that shipped with .NET 1.1 and is still available with the .NET SDK.
One can check the availability of a shared assembly in GAC by using the command:
gacutil.exe /l <assemblyName>
One can register a shared assembly in the GAC by using the command:
gacutil.exe /i <assemblyName>
Or by copying an assembly file into the following location:
%windir%\assembly\
Note that for .NET 4.0 the GAC location is now:
%windir%\Microsoft.NET\assembly\
Other options for this utility will be briefly described if you use the /? flag, i.e.:
gacutil.exe /?
Assembly Cache Viewer
The newer interface, the Assembly Cache Viewer, is integrated into Windows Explorer. Browsing %windir%\assembly\ (for example, C:\WINDOWS\assembly) or %WINDIR%\Microsoft.NET\assembly displays the assemblies contained in the cache along with their versions, culture, public key token, and processor architecture. Assemblies are installed by dragging and dropping and uninstalled by selecting and pressing the delete key or using the context menu.
With the launch of the .NET Framework 4, the Assembly Cache Viewer shell extension is obsolete.
Example of use
A computer has two CLI assemblies both named AssemblyA, but one is version 1.0 and the other is ve |
https://en.wikipedia.org/wiki/Magic%20series | A magic series is a set of distinct positive integers which add up to the magic constant of a magic square and a magic cube, thus potentially making up lines in magic tesseracts.
So, in an n × n magic square using the numbers from 1 to n², a magic series is a set of n distinct numbers adding up to n(n² + 1)/2. For n = 2, there are just two magic series, 1+4 and 2+3. The eight magic series when n = 3 all appear in the rows, columns and diagonals of a 3 × 3 magic square.
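A brute-force check of these small cases (a sketch, not from the source): enumerating all sets of n distinct numbers from 1 to n² that sum to the magic constant n(n² + 1)/2 reproduces the counts 2 (for n = 2) and 8 (for n = 3):

```python
from itertools import combinations

def count_magic_series(n: int) -> int:
    """Count sets of n distinct integers from 1..n^2 summing to n(n^2 + 1)/2."""
    magic_constant = n * (n * n + 1) // 2
    return sum(1 for combo in combinations(range(1, n * n + 1), n)
               if sum(combo) == magic_constant)

print(count_magic_series(2))  # 2  (1+4 and 2+3)
print(count_magic_series(3))  # 8  (the rows, columns and diagonals of a 3 x 3 magic square)
```

Brute force like this is only feasible for tiny n; as the counts quoted below make clear, the numbers grow explosively.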
Maurice Kraitchik gave the number of magic series up to n = 7 in Mathematical Recreations in 1942. In 2002, Henry Bottomley extended this up to n = 36 and independently Walter Trump up to n = 32. In 2005, Trump extended this to n = 54 (over 2 × 10^111) while Bottomley gave an experimental approximation for the numbers of magic series:
In July 2006, Robert Gerbicz extended this sequence up to n = 150.
In 2013 Dirk Kinnaes was able to exploit his insight that the magic series could be related to the volume of a polytope. Trump used this new approach to extend the sequence up to n = 1000.
Mike Quist showed that the exact second-order count has a multiplicative factor of equivalent to a denominator of
Richard Schroeppel in 1973 published the complete enumeration of the order 5 magic squares at 275,305,224. This recent magic series work gives hope that the relationship between the magic series and the magic square may provide an exact count for order 6 or order 7 magic squares. Consider an intermediate structure that lies in complexity between the magic series and the magic square. It might be described as an amalgamation of 4 magic series that have only one unique common integer. This structure forms the two major diagonals and the central row and column for an odd order magic square. Building blocks such as these could be the way forward. |
https://en.wikipedia.org/wiki/Q-ball | In theoretical physics, Q-ball is a type of non-topological soliton. A soliton is a localized field configuration that is stable—it cannot spread out and dissipate. In the case of a non-topological soliton, the stability is guaranteed by a conserved charge: the soliton has lower energy per unit charge than any other configuration. (In physics, charge is often represented by the letter "Q", and the soliton is spherically symmetric, hence the name.)
Intuitive explanation
A Q-ball arises in a theory of bosonic particles when there is an attraction between the particles. Loosely speaking, the Q-ball is a finite-sized "blob" containing a large number of particles. The blob is stable against fission into smaller blobs and against "evaporation" via emission of individual particles, because, due to the attractive interaction, the blob is the lowest-energy configuration of that number of particles. (This is analogous to the fact that nickel-62 is the most stable nucleus because it is the most stable configuration of neutrons and protons. However, nickel-62 is not a Q-ball, in part because neutrons and protons are fermions, not bosons.)
For there to be a Q-ball, the number of particles must be conserved (i.e. the particle number is a conserved "charge", so the particles are described by a complex-valued field φ), and the interaction potential of the particles must have a negative (attractive) term. For non-interacting particles, the potential would be just a mass term of the form m²|φ|², and there would be no Q-ball. But if one adds an attractive term such as −|φ|⁴ (and positive higher powers of |φ| to ensure that the potential has a lower bound), then there are values of φ where U(φ) < m²|φ|², i.e. the energy of these field values is less than the energy of a free field. This corresponds to saying that one can create blobs of non-zero field (i.e. clusters of many particles) whose energy is lower than the same number of individual particles far apart. Those blobs are therefore stable against evaporation into individual |
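To make the condition concrete, here is a toy numerical check (my own example with made-up coefficients, not taken from the article), using m = 1 and a potential U(φ) = φ² − φ⁴ + 0.3 φ⁶ for a real field amplitude φ: the quartic term is attractive and the positive sextic term keeps U bounded from below.

def U(phi):
    # Toy potential: mass term, attractive quartic term, stabilising sextic term.
    return phi**2 - phi**4 + 0.3 * phi**6

# At phi = 1.2, U(phi) is about 0.26 while the free-field value m^2 * phi^2 is 1.44,
# so there are indeed field values whose energy lies below that of free particles.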
https://en.wikipedia.org/wiki/Option%20ROM | An Option ROM for the PC platform (i.e. the IBM PC and derived successor computer systems) is a piece of firmware that resides in ROM on an expansion card (or stored along with the main system BIOS), which gets executed to initialize the device and (optionally) add support for the device to the BIOS. In its usual use, it is essentially a driver that interfaces between the BIOS API and hardware. Technically, an option ROM is firmware that is executed by the BIOS after POST (the testing and initialization of basic system hardware) and before the BIOS boot process, gaining complete control of the system and being generally unrestricted in what it can do. The BIOS relies on each option ROM to return control to the BIOS so that it can either call the next option ROM or commence the boot process. For this reason, it is possible (but not usual) for an option ROM to keep control and preempt the BIOS boot process. The BIOS (at least as originally designed by IBM) generally scans for and initializes (by executing) option ROMs in ascending address order at 2 KB address intervals within two different address ranges above address C0000h in the conventional (20-bit) memory address space; later systems may also scan additional address ranges in the 24-bit or 32-bit extended address space.
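As a rough sketch (not actual BIOS source), the scan described above can be expressed as follows; read_byte is a hypothetical accessor for physical memory, the address range is illustrative, and the 0x55 0xAA signature, the length byte in 512-byte units and the 8-bit checksum are the standard option ROM header fields.

def find_option_roms(read_byte, start=0xC0000, end=0xF0000, step=0x800):
    # Step through the expansion ROM area in 2 KB increments looking for ROM headers.
    roms = []
    addr = start
    while addr < end:
        if read_byte(addr) == 0x55 and read_byte(addr + 1) == 0xAA:
            length = read_byte(addr + 2) * 512                      # size in 512-byte blocks
            checksum = sum(read_byte(addr + i) for i in range(length)) & 0xFF
            if checksum == 0:
                roms.append((addr, length))                         # a real BIOS would far-call addr + 3 here
        addr += step
    return roms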
Option ROMs are necessary to enable non-Plug and Play peripheral devices to boot and to extend the BIOS to provide support for any non-Plug and Play peripheral device in the same way that standard and motherboard-integrated peripherals are supported. Option ROMs are also used to extend the BIOS or to add other firmware services to the BIOS. In principle, an option ROM could provide any sort of firmware extension, such as a library of video graphics subroutines, or a set of PCM audio processing services, and cause it to be installed into the system RAM and optionally the CPU interrupt system before boot time.
A common option ROM is the video BIOS which gets loaded very early on in the boot p |
https://en.wikipedia.org/wiki/Cooler | A cooler, portable ice chest, ice box, cool box, chilly bin (in New Zealand), or esky (Australia) is an insulated box used to keep food or drink cool.
Ice cubes are most commonly placed in it to help the contents inside stay cool. Ice packs are sometimes used, as they either contain the melting water inside, or have a gel sealed inside that stays cold longer than plain ice (absorbing heat as it changes phase).
Coolers are often taken on picnics, and on vacation or holiday. Where summers are hot, they may also be used just for getting cold groceries home from the store, such as keeping ice cream from melting in a hot automobile. Even without adding ice, this can be helpful, particularly if the trip home will be lengthy. Some coolers have built-in cupholders in the lid.
They are usually made with interior and exterior shells of plastic, with a hard foam in between. They come in sizes from small personal ones to large family ones with wheels. Disposable ones are made solely from polystyrene foam (such as a disposable coffee cup) about 2 cm or one inch thick. Most reusable ones have molded-in handles; a few have shoulder straps. The cooler has developed from just a means of keeping beverages cold into a mode of transportation with the ride-on cooler. A thermal bag, cooler bag or cool bag is very similar in concept, but typically smaller and not rigid.
History
The original inventor of the cooler is unknown, with versions becoming available in various parts of the world throughout the 1950s.
The portable ice chest was patented in the USA by Richard C. Laramy of Joliet, Illinois. On February 24, 1951, Laramy filed an application with the United States Patent Office for a portable ice chest (Serial No. 212,573). The patent (#2,663,157) was issued December 22, 1953.
In 1952, the portable Esky Auto Box was released in Australia by the Sydney refrigeration company Malley’s. Made from steel and finished in baked enamel and chrome, with cork sheeting for insulati |
https://en.wikipedia.org/wiki/Egorov%27s%20theorem | In measure theory, an area of mathematics, Egorov's theorem establishes a condition for the uniform convergence of a pointwise convergent sequence of measurable functions. It is also named Severini–Egoroff theorem or Severini–Egorov theorem, after Carlo Severini, an Italian mathematician, and Dmitri Egorov, a Russian physicist and geometer, who published independent proofs respectively in 1910 and 1911.
Egorov's theorem can be used along with compactly supported continuous functions to prove Lusin's theorem for integrable functions.
Historical note
The first proof of the theorem was given by Carlo Severini in 1910: he used the result as a tool in his research on series of orthogonal functions. His work remained apparently unnoticed outside Italy, probably because it was written in Italian, appeared in a scientific journal with limited circulation and was considered only as a means to obtain other theorems. A year later Dmitri Egorov published his independently proved results, and the theorem became widely known under his name: however, it is not uncommon to find references to this theorem as the Severini–Egoroff theorem. The first mathematicians to prove independently the theorem in the nowadays common abstract measure space setting were , and in : an earlier generalization is due to Nikolai Luzin, who succeeded in slightly relaxing the requirement of finiteness of measure of the domain of convergence of the pointwise converging functions in the ample paper . Further generalizations were given much later by Pavel Korovkin, in the paper , and by Gabriel Mokobodzki in the paper .
Formal statement and proof
Statement
Let (fn) be a sequence of M-valued measurable functions, where M is a separable metric space, on some measure space (X,Σ,μ), and suppose there is a measurable subset A ⊆ X, with finite μ-measure, such that (fn) converges μ-almost everywhere on A to a limit function f. The following result holds: for every ε > 0, there exists a measurable subs |
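As a concrete illustration (my own example, not part of the article's statement), consider the classical sequence fₙ(x) = xⁿ on [0, 1] with Lebesgue measure: it converges pointwise to 0 for every x < 1, hence almost everywhere, but not uniformly on [0, 1]. Removing an arbitrarily small set (1 − δ, 1] leaves uniform convergence, exactly as Egorov's theorem promises:

def sup_on_truncated_interval(n, delta):
    # Supremum of x**n over [0, 1 - delta]; it tends to 0 as n grows, for any fixed delta > 0.
    return (1 - delta) ** n

# With delta = 0.01: n = 100 gives about 0.37, n = 1000 gives about 4e-5, uniformly small.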
https://en.wikipedia.org/wiki/Neutron%20activation | Neutron activation is the process in which neutron radiation induces radioactivity in materials, and occurs when atomic nuclei capture free neutrons, becoming heavier and entering excited states. The excited nucleus decays immediately by emitting gamma rays, or particles such as beta particles, alpha particles, fission products, and neutrons (in nuclear fission). Thus, the process of neutron capture, even after any intermediate decay, often results in the formation of an unstable activation product. Such radioactive nuclei can exhibit half-lives ranging from small fractions of a second to many years.
Neutron activation is the only common way that a stable material can be induced into becoming intrinsically radioactive. All naturally occurring materials, including air, water, and soil, can be induced (activated) by neutron capture into some amount of radioactivity in varying degrees, as a result of the production of neutron-rich radioisotopes. Some atoms require more than one neutron to become unstable, which makes them harder to activate because the probability of a double or triple capture by a nucleus is below that of single capture. Water, for example, is made up of hydrogen and oxygen. Hydrogen requires a double capture to attain instability as tritium (hydrogen-3), while natural oxygen (oxygen-16) requires three captures to become unstable oxygen-19. Thus water is relatively difficult to activate, as compared to sodium chloride (NaCl), in which both the sodium and chlorine atoms become unstable with a single capture each. These facts were experienced first-hand at the Operation Crossroads atomic test series in 1946.
Examples
An example of this kind of nuclear reaction occurs in the production of cobalt-60 within a nuclear reactor, where cobalt-59 captures a neutron: ⁵⁹Co + n → ⁶⁰Co + γ.
The cobalt-60 then decays by the emission of a beta particle plus gamma rays into nickel-60. This decay has a half-life of about 5.27 years, and due to the availability of cobalt-59 (100% of its natural abundance), this neutr |
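For illustration only (simple half-life arithmetic, not text from the article), the fraction of a cobalt-60 sample remaining after t years follows from the 5.27-year half-life quoted above:

def fraction_remaining(t_years, half_life_years=5.27):
    # Exponential decay expressed with base 1/2: N(t)/N(0) = (1/2)**(t / T_half).
    return 0.5 ** (t_years / half_life_years)

# fraction_remaining(5.27) == 0.5, and fraction_remaining(10) is roughly 0.27.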
https://en.wikipedia.org/wiki/Glycine%20receptor | The glycine receptor (abbreviated as GlyR or GLR) is the receptor of the amino acid neurotransmitter glycine. GlyR is an ionotropic receptor that produces its effects through chloride current. It is one of the most widely distributed inhibitory receptors in the central nervous system and has important roles in a variety of physiological processes, especially in mediating inhibitory neurotransmission in the spinal cord and brainstem.
The receptor can be activated by a range of simple amino acids including glycine, β-alanine and taurine, and can be selectively blocked by the high-affinity competitive antagonist strychnine. Caffeine is a competitive antagonist of GlyR, while cannabinoids enhance its function.
The protein Gephyrin has been shown to be necessary for GlyR clustering at inhibitory synapses. GlyR is known to colocalize with the GABAA receptor on some hippocampal neurons. Nevertheless, some exceptions can occur in the central nervous system where the GlyR α1 subunit and gephyrin, its anchoring protein, are not found in dorsal root ganglion neurons despite the presence of GABAA receptors.
History
Glycine and its receptor were first suggested to play a role in inhibition of cells in 1965. Two years later, experiments showed that glycine had a hyperpolarizing effect on spinal motor neurons due to increased chloride conductance through the receptor. Then, in 1971, glycine was found to be localized in the spinal cord using autoradiography. All of these discoveries resulted in the conclusion that glycine is a primary inhibitory neurotransmitter of the spinal cord that works via its receptor.
Arrangement of subunits
Strychnine-sensitive GlyRs are members of a family of ligand-gated ion channels. Receptors of this family are arranged as five subunits surrounding a central pore, with each subunit composed of four α helical transmembrane segments. There are presently four known isoforms of the ligand-binding α-subunit (α1-4) of GlyR (GLRA1, GLRA2, GLRA3, GLRA4) and a |
https://en.wikipedia.org/wiki/Nicholas%20Kurti | Nicholas Kurti, () (14 May 1908 – 24 November 1998) was a Hungarian-born British physicist who lived in Oxford, UK, for most of his life.
Career
Born in Budapest, Kurti went to high school at the Minta Gymnasium, but due to anti-Jewish laws he had to leave the country, gaining his master's degree at the Sorbonne in Paris. He obtained his doctorate in low-temperature physics in Berlin, working with Professor Franz Simon. Kurti and Simon continued to work together during 1931–1933 at the Technische Hochschule in Breslau. However, when Adolf Hitler rose to power, both Simon and Kurti left Germany, joining the Clarendon Laboratory in the University of Oxford, England.
During World War II, Kurti worked on the Manhattan project, returning to Oxford in 1945. In 1955 he won the Fernand Holweck Medal and Prize. In 1956, Simon and Kurti built a laboratory experiment that reached a temperature of one microkelvin. This work attracted worldwide attention, and Kurti was elected a Fellow of the Royal Society. He later became the society's Vice-President from 1965 to 1967.
Kurti became a Fellow of Brasenose College, Oxford, in 1947 and became Professor of Physics at Oxford in 1967, a post he held until his retirement in 1975. He was also Visiting Professor at City College in New York City, the University of California, Berkeley, and Amherst College in Massachusetts.
Nicholas Kurti was elected as a Fellow of the Royal Society (FRS) in 1956, becoming vice-president in 1965, and was appointed as a Commander of the British Empire (CBE) in 1973.
Personal life
Kurti's hobby was cooking, and he was an enthusiastic advocate of applying scientific knowledge to culinary problems, a field known today as gastrophysics. In 1969 he gave a talk at the Royal Institution titled "The physicist in the kitchen", in which he amazed the audience by using the recently invented microwave oven to make a "reverse Baked Alaska" — a Frozen Florida — hot liquor enclosed by a shell of frozen meringue. Ov |
https://en.wikipedia.org/wiki/PICAXE | PICAXE is a microcontroller system based on a range of Microchip PIC microcontrollers. PICAXE devices are Microchip PIC devices with pre-programmed firmware that enables bootloading of code directly from a PC, simplifying hobbyist embedded development (not unlike the Arduino and Parallax BASIC Stamp systems). PICAXE devices have been produced by Revolution Education (Rev-Ed) since 1999.
Hardware
There are currently six PICAXE variants with differing pin counts (8, 14, 18, 20, 28 and 40 pins), available in both DIL and SMD packages.
PICAXE microcontrollers are pre-programmed with an interpreter similar to the BASIC Stamp but using internal EEPROM instead, thus reducing cost. This also allows downloads to be made with a simple serial connection which eliminates the need for a PIC programmer. PICAXE is programmed using an RS-232 serial cable or a USB cable which connects a computer to the download circuit, which normally uses a 3.5 mm jack and two resistors.
Programming language
PICAXE microcontrollers are programmed using BASIC.
The PICAXE interpreter features bit-banged communications:
Serial (asynchronous serial)
SPI (synchronous serial)
Infrared (using a 38 kHz carrier, seven data bits and five ID bits)
One-wire
The "readtemp" command reads the temperature from a DS18B20 temperature sensor and converts it into Celsius.
All current PICAXEs have commands for using hardware features of the underlying PIC microcontrollers:
Hardware asynchronous serial
Hardware synchronous serial
Hardware PWM
DAC
ADC
SR Latch
Timers (two on X2/X1 parts which have settable intervals, only one on M2 parts with a fixed interval, older parts have none)
Comparators
Internal temperature measurement
Program space
All current PICAXE chips have at least 2048 bytes of on board program memory available for user programs:
08M2 - 2048 bytes
14M2 - 2048
18M2+ - 2048
20M2 - 2048
20X2 - 4096
28X1 - 4096
40X1 - 4096
28X2 - 4096 per slot with four slots for a total of 16 KiB
40X2 - 4096 per sl |
https://en.wikipedia.org/wiki/Lunar%20Receiving%20Laboratory | The Lunar Receiving Laboratory (LRL) was a facility at NASA's Lyndon B. Johnson Space Center (Building 37) that was constructed to quarantine astronauts and material brought back from the Moon during the Apollo program to reduce the risk of back-contamination. After recovery at sea, crews from Apollo 11, Apollo 12, and Apollo 14 walked from their helicopter to the Mobile Quarantine Facility on the deck of an aircraft carrier and were brought to the LRL for quarantine. Samples of rock and regolith that the astronauts collected and brought back were flown directly to the LRL and initially analyzed in glovebox vacuum chambers.
The quarantine requirement was dropped for Apollo 15 and later missions. The LRL was used for study, distribution, and safe storage of the lunar samples. Between 1969 and 1972, six Apollo space flight missions brought back 382 kilograms (842 pounds) of lunar rocks, core samples, pebbles, sand, and dust from the lunar surface—in all, 2,200 samples from six exploration sites. Other lunar samples were returned to Earth by three automated Soviet spacecraft, Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976, which returned samples totaling 300 grams (about 3/4 pound).
In 1976, some of the samples were moved to Brooks Air Force Base in San Antonio, Texas, for second-site storage. In 1979, a Lunar Sample Laboratory Facility was built to serve as the chief repository for the Apollo samples: permanent storage in a physically secure and non-contaminating environment. The facility includes vaults for the samples and records, and laboratories for sample preparation and study. The Lunar Receiving Laboratory building was later occupied by NASA's Life Sciences division, contained biomedical and environment labs, and was used for experiments involving human adaptation to microgravity.
In September 2019, NASA announced that the Lunar Receiving Laboratory had not been used for two years and would be demolished.
See also
Moon rock
Lunar Sample Laborator |
https://en.wikipedia.org/wiki/Dark%20galaxy | A dark galaxy is a hypothesized galaxy with no (or very few) stars. They received their name because they have no visible stars but may be detectable if they contain significant amounts of gas. Astronomers have long theorized the existence of dark galaxies, but there are no confirmed examples to date. Dark galaxies are distinct from intergalactic gas clouds caused by galactic tidal interactions, since these gas clouds do not contain dark matter, so they do not technically qualify as galaxies. Distinguishing between intergalactic gas clouds and galaxies is difficult; most candidate dark galaxies turn out to be tidal gas clouds. The best candidate dark galaxies to date include HI1225+01, AGC229385, and numerous gas clouds detected in studies of quasars.
On 25 August 2016, astronomers reported that Dragonfly 44, an ultra diffuse galaxy (UDG) with the mass of the Milky Way galaxy, but with nearly no discernable stars or galactic structure, is made almost entirely of dark matter.
Observational evidence
Large surveys with sensitive but low-resolution radio telescopes like Arecibo or the Parkes Telescope look for 21-cm emission from atomic hydrogen in galaxies. These surveys are then matched to optical surveys to identify any objects with no optical counterpart; i.e., sources with no stars.
Another way astronomers search for dark galaxies is to look for hydrogen absorption lines in the spectra of background quasars. This technique has revealed many intergalactic clouds of hydrogen, but following up on candidate dark galaxies is difficult, since these sources tend to be too far away and are often optically drowned out by the bright light from the quasars.
Nature of dark galaxies
Origin
In 2005, astronomers discovered gas cloud VIRGOHI21 and attempted to determine what it was and why it exerted such a massive gravitational pull on galaxy NGC 4254. After years of ruling out other possible explanations, some have concluded that VIRGOHI21 is a dark galaxy.
Size
The actua |
https://en.wikipedia.org/wiki/Mensch%20Computer | The Mensch Computer is a personal computer system produced by the Western Design Center (WDC). It is based on the WDC 65C265 microcontroller, which implements the instruction sets of two microprocessors: the 16-bit W65C816/65816, and the 8-bit 6502. The computer is named after Bill Mensch, designer of the 6502 and the subsequent series of microprocessors.
The system is designed for hobbyists and people who enjoy computer programming, especially at the assembly language level, and includes a basic set of peripherals which can be expanded by the owner. Much software originally written for other computer systems which use the 65816 or 6502 instruction sets (such as the Nintendo Entertainment System, Super Nintendo, or Apple IIGS, among others) can be run on the Mensch Computer (either directly as binary object code or through reassembling the software source code), to the extent that such software does not rely on hardware configurations which differ from the Mensch Computer.
The Mensch Computer includes a read-only memory (ROM) machine code monitor (a type of firmware), and many software routines are available to programmers by calling subroutines in the ROM. Typically, the system runs Mensch Works, a software suite also named after Bill Mensch. |
https://en.wikipedia.org/wiki/Quick%20clay | Quick clay, also known as Leda clay and Champlain Sea clay in Canada, is any of several distinctively sensitive glaciomarine clays found in Canada, Norway, Russia, Sweden, Finland, the United States and other locations around the world. The clay is so unstable that when a mass of quick clay is subjected to sufficient stress, the material behavior may drastically change from that of a particulate material to that of a watery fluid. Landslides occur because of the sudden soil liquefaction caused by external disturbances such as vibrations induced by an earthquake or heavy rainfall.
Quick clay main deposits
Quick clay is found only in regions close to the North Pole, such as Russia, Canada, Norway, Sweden and Finland, and in Alaska in the United States, since these areas were glaciated during the Pleistocene epoch. In Canada, the clay is associated primarily with the Pleistocene-era Champlain Sea, in the modern Ottawa Valley, the St. Lawrence Valley, and the Saguenay River regions.
Quick clay has been the underlying cause of many deadly landslides. In Canada alone, it has been associated with more than 250 mapped landslides. Some of these are ancient, and may have been triggered by earthquakes.
Clay colloids stability
Quick clay has a remolded strength which is much less than its strength upon initial loading. This is caused by its highly unstable clay particle structure.
Quick clay is originally deposited in a marine environment. Clay mineral particles are always negatively charged because of the presence of permanent negative charges and pH dependent charges at their surface. Because of the need to respect electro-neutrality and a net zero electrical charge balance, these negative electrical charges are always compensated by the positive charges borne by cations (such as Na+) adsorbed onto the surface of the clay, or present in the clay pore water. Exchangeable cations are present in the clay minerals interlayers and on the external basal planes of clay platelets. Ca |
https://en.wikipedia.org/wiki/High-power%20rocketry | High-power rocketry is a hobby similar to model rocketry. The major difference is that higher impulse range motors are used. The National Fire Protection Association (NFPA) definition of a high-power rocket is one that has a total weight of more than 1,500 grams (3.3 lb) and contains a motor or motors containing more than 125 grams (4.4 oz) of propellant and/or rated at more than 160 Newton-seconds (40.47 lbf·s) of total impulse, or that uses a motor with an average thrust of 80 newtons (18 lbf) or more.
Types
High-power rockets are defined as rockets flown using commercially available motors ranging from H to O class. In the U.S., the NFPA1122 standard dictates guidelines for model rocketry, while NFPA1127 is specific to high-power rockets. In most U.S. states NFPA1122 has been adopted as part of the legal code. A smaller number of states use NFPA1127.
Associations
The Tripoli Rocketry Association and the National Association of Rocketry are the major sanctioning bodies for the hobby in the US, providing member certifications, and criteria for general safety guidelines.
In most other countries where HPR is supported, the regulations are similar to or derived from the Tripoli Rocketry Association Unified Safety Code and the NAR High-power Certification system.
In Australia, there are several prefectures of the Tripoli Rocketry Association.
In Canada, the Canadian Association of Rocketry - L'Association Canadienne De Fuséologie is appointed as regulator for the hobby.
In New Zealand, the controlling body for rocketry is the New Zealand Rocketry Association or NZRA
In South Africa, the controlling body for rocketry is the Rocketry Organization of South Africa or Rocketry SA.
In the UK the British Model Flying Association or BMFA is the governing body with United Kingdom Rocketry Association or UKRA acting as the High Power Association for Motor flights classed as H and above.
In Germany, Austria and Switzerland, the Interessengemeinschaft Modellraketen has an approved HPR certification program which is cross-recogni |
https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s%20inequality | In mathematics, Grönwall's inequality (also called Grönwall's lemma or the Grönwall–Bellman inequality) allows one to bound a function that is known to satisfy a certain differential or integral inequality by the solution of the corresponding differential or integral equation. There are two forms of the lemma, a differential form and an integral form. For the latter there are several variants.
Grönwall's inequality is an important tool to obtain various estimates in the theory of ordinary and stochastic differential equations. In particular, it provides a comparison theorem that can be used to prove uniqueness of a solution to the initial value problem; see the Picard–Lindelöf theorem.
It is named for Thomas Hakon Grönwall (1877–1932). Grönwall is the Swedish spelling of his name, but he spelled his name as Gronwall in his scientific publications after emigrating to the United States.
The inequality was first proven by Grönwall in 1919 (the integral form below, with the two functions involved taken to be constants).
Richard Bellman proved a slightly more general integral form in 1943.
A nonlinear generalization of the Grönwall–Bellman inequality is known as Bihari–LaSalle inequality. Other variants and generalizations can be found in Pachpatte, B.G. (1998).
Differential form
Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let β and u be real-valued continuous functions defined on I. If u is differentiable in the interior I° of I (the interval I without the end points a and possibly b) and satisfies the differential inequality
u′(t) ≤ β(t) u(t),   t ∈ I°,
then u is bounded by the solution of the corresponding differential equation y′(t) = β(t) y(t):
u(t) ≤ u(a) exp(∫ₐᵗ β(s) ds)   for all t ∈ I.
Remark: There are no assumptions on the signs of the functions β and u.
Proof
Define the function
v(t) = exp(∫ₐᵗ β(s) ds),   t ∈ I.
Note that v satisfies
v′(t) = β(t) v(t),   t ∈ I°,
with v(a) = 1 and v(t) > 0 for all t ∈ I. By the quotient rule,
d/dt (u(t)/v(t)) = (u′(t) v(t) − v′(t) u(t)) / v(t)² = (u′(t) − β(t) u(t)) / v(t) ≤ 0.
Thus the derivative of the function u(t)/v(t) is non-positive, and the function is bounded above by its value at the initial point a of the interval I:
u(t)/v(t) ≤ u(a)/v(a) = u(a)   for all t ∈ I,
which is Grönwall's inequality.
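A small numerical sanity check (my own sketch, not part of the article), taking β(t) = cos t and a function obeying u′(t) = β(t)·u(t) − 1 ≤ β(t)·u(t) with u(0) = 2: a forward-Euler integration should stay below the Grönwall bound u(0)·exp(∫₀ᵗ β(s) ds).

import math

def check_groenwall(u0=2.0, T=5.0, steps=20000):
    beta = math.cos
    dt = T / steps
    u, integral_beta, ok = u0, 0.0, True
    for i in range(steps):
        t = i * dt
        u += dt * (beta(t) * u - 1.0)          # u' = beta*u - 1, which satisfies u' <= beta*u
        integral_beta += dt * beta(t)          # left Riemann sum of the integral of beta
        ok = ok and u <= u0 * math.exp(integral_beta) + 1e-9
    return ok

# check_groenwall() returns True: the numerical trajectory never exceeds the bound.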
Integral form for continuous |
https://en.wikipedia.org/wiki/Perianth | The perianth (perigonium, perigon or perigone in monocots) is the non-reproductive part of the flower, the structure that forms an envelope surrounding the sexual organs, consisting of the calyx (sepals) and the corolla (petals), or tepals when called a perigone. The term perianth is derived from Greek περί (peri, "around") and άνθος (anthos, "flower"), while perigonium is derived from περί (peri) and γόνος (gonos, "seed, sex organs").
In the mosses and liverworts (Marchantiophyta), the perianth is the sterile tubelike tissue that surrounds the female reproductive structure (or developing sporophyte).
Flowering plants
In flowering plants, the perianth may be described as being either dichlamydeous/heterochlamydeous, in which the calyx and corolla are clearly separate, or homochlamydeous, in which they are indistinguishable (and the sepals and petals are collectively referred to as tepals). When the perianth is in two whorls, it is described as biseriate. While the calyx may be green, known as sepaloid, it may also be brightly coloured, and is then described as petaloid. When the undifferentiated tepals resemble petals, they are also referred to as "petaloid", as in petaloid monocots or lilioid monocots, orders of monocots with brightly coloured tepals. The corolla and petals have a role in attracting pollinators, but this may be augmented by more specialised structures like the corona (see below).
When the perianth consists of separate tepals, the term apotepalous is used, or syntepalous if the tepals are fused to one another. The petals may be united to form a tubular corolla (gamopetalous or sympetalous). If either the petals or sepals are entirely absent, the perianth can be described as being monochlamydeous.
Both sepals and petals may have stomata and veins, even if vestigial. In some taxa, for instance some magnolias and water lilies, the perianth is arranged in a spiral on nodes, rather than whorls. Flowers with spiral perianths tend to also be those with undifferentiated peria |
https://en.wikipedia.org/wiki/List%20of%20ECMAScript%20engines | An ECMAScript engine is a program that executes source code written in a version of the ECMAScript language standard, for example, JavaScript.
Just-in-time compilation engines
These are new generation ECMAScript engines for web browsers, all implementing just-in-time compilation (JIT) or variations of that idea. The performance benefits of just-in-time compilation make it much more suitable for web applications written in JavaScript.
Carakan: A JavaScript engine developed by Opera Software ASA, included in the 10.50 release of the Opera web browser, until switching to V8 with Opera 15 (released in 2013).
Chakra (JScript9): A JScript engine used in Internet Explorer. It was first previewed at MIX 10 as part of the Internet Explorer 9 Platform Preview.
Chakra: A JavaScript engine previously used in older versions of Microsoft Edge, before being replaced by V8.
SpiderMonkey: A JavaScript engine in Mozilla Gecko applications, including Firefox. The engine currently includes the IonMonkey compiler and OdinMonkey optimization module, has previously included the TraceMonkey compiler (first JavaScript JIT) and JägerMonkey.
JavaScriptCore: A JavaScript interpreter and JIT originally derived from KJS. It is used in the WebKit project and applications such as Safari. Also known as Nitro, SquirrelFish, and SquirrelFish Extreme.
JScript .NET: A .NET Framework JScript engine used in ASP.NET based on Common Language Runtime and COM Interop. Support was dropped with .NET Core and CoreCLR so its future looks questionable for ASP.NET Core.
Tamarin: An ActionScript and ECMAScript engine used in Adobe Flash.
V8: A JavaScript engine used in Google Chrome and other Chromium-based browsers, Node.js, Deno, and V8.NET.
GNU Guile: Features an ECMAScript interpreter as of version 1.9.
Nashorn: A JavaScript engine used in Oracle Java Development Kit (JDK) since version 8.
iv, ECMAScript Lexer / Parser / Interpreter / VM / method JIT written in C++.
CL-JavaScript: Can compile Java |
https://en.wikipedia.org/wiki/Marine%20clay | Marine clay is a type of clay found in coastal regions around the world. In the northern, deglaciated regions, it can sometimes be quick clay, which is notorious for being involved in landslides.
Marine clay consists of soil particles in the clay size class, usually defined following the USDA classification, with sand ranging from 0.05 to 2 mm, silt from 0.002 to 0.05 mm, and clay less than 0.002 mm in diameter. In addition, particles of this size must have been deposited within a marine system, after erosion and transport of the clay into the ocean.
Soil particles become suspended in water, with sand settling out under gravity first while silt and clay remain in suspension. This is also known as turbidity, in which the floating soil particles give the water a murky brown color. The clay particles are eventually transported to the abyssal plain, where they are deposited in sediments with a high percentage of clay.
Once the clay is deposited on the ocean floor it can change its structure through a process known as flocculation, by which fine particulates clump together into a floc. Flocculation can be either edge-to-edge or edge-to-face, referring to the way individual clay particles contact one another. Clays can also be aggregated or otherwise shifted in their structure without being flocculated.
Particles configurations
Clay particles can self-assemble into various configurations, each with totally different properties.
This change in structure of the clay particles is due to a swap in cations within the basic structural unit of a clay particle. This basic unit is known as a silica tetrahedron or an aluminum octahedron: one cation, usually silicon or aluminum, surrounded by oxygen or hydroxide anions. These units form sheets, which stack to make up what we know as clay particles and have very specific properties to them including m |
https://en.wikipedia.org/wiki/Citronellal | [Chembox: IUPAC name 3,7-dimethyloct-6-enal; structural formula of (RS)-citronellal; ball-and-stick models of (+)-citronellal and (−)-citronellal; reference: Citronellal, The Merck Index, 12th Edition.]
Citronellal or rhodinal (C10H18O) is a monoterpenoid aldehyde, the main component in the mixture of terpenoid chemical compounds that give citronella oil its distinctive lemon scent.
Citronellal is a main isolate in distilled oils from the plants Cymbopogon (excepting C. citratus, culinary lemongrass), lemon-scented gum, and lemon-scented teatree. The (S)-(−)-enantiomer of citronellal makes up to 80% of the oil from kaffir lime leaves and is the compound responsible for its characteristic aroma.
Citronellal has insect repellent properties, and research shows high repellent effectiveness against mosquitoes. Other research shows that citronellal has strong antifungal qualities.
Compendial status
British Pharmacopoeia
See also
Citral
Citronellol
Citronella oil
Hydroxycitronellal
Perfume allergy |
https://en.wikipedia.org/wiki/Citral | Citral is an acyclic monoterpene aldehyde. Being a monoterpene, it is made of two isoprene units. Citral is a collective term which covers two geometric isomers that have their own separate names; the E-isomer is named geranial (trans-citral; α-citral) or citral A. The Z-isomer is named neral (cis-citral; β-citral) or citral B. These stereoisomers occur as a mixture, not necessarily racemic; e.g. in essential oil of Australian ginger, the neral to geranial ratio is 0.61.
Occurrence
Citral is present in the volatile oils of several plants, including lemon myrtle (90–98%), Litsea citrata (90%), Litsea cubeba (70–85%), lemongrass (65–85%), lemon tea-tree (70–80%), Ocimum gratissimum (66.5%), Lindera citriodora (about 65%), Calypranthes parriculata (about 62%), petitgrain (36%), lemon verbena (30–35%), lemon ironbark (26%), lemon balm (11%), lime (6–9%), lemon (2–5%), and orange. It is also found in the lipid fraction (essential oil) of Australian ginger (51–71%). Of the many sources of citral, the Australian myrtaceous tree, lemon myrtle, Backhousia citriodora F. Muell. (of the family Myrtaceae), is considered superior.
Uses
Citral has a strong lemon (citrus) scent and is used as an aroma compound in perfumery. It is used to fortify lemon oil. (Nerol, another perfumery compound, has a less intense but sweeter lemon note.) The aldehydes citronellal and citral are considered key components responsible for the lemon note with citral preferred.
It also has pheromonal effects in acari and insects.
Citral is used in the synthesis of vitamin A, lycopene, ionone and methylionone, and to mask the smell of smoke.
The herb Cymbopogon citratus has shown promising insecticidal and antifungal activity against storage pests.
Food additive
Citral is commonly used as a food additive ingredient.
It has been tested (2016) in vitro against the food-borne pathogen Cronobacter sakazakii.
Medical exploration
In a report (1997), citral is mentioned as cytotoxic to P(388) mouse leukaemia ce |
https://en.wikipedia.org/wiki/Ionone | The ionones, from Greek ἴον (ion, "violet"), are a series of closely related chemical substances that are part of a group of compounds known as rose ketones, which also includes damascones and damascenones. Ionones are aroma compounds found in a variety of essential oils, including rose oil. β-Ionone is a significant contributor to the aroma of roses, despite its relatively low concentration, and is an important fragrance chemical used in perfumery. The ionones are derived from the degradation of carotenoids.
The combination of α-ionone and β-ionone is characteristic of the scent of violets and used with other components in perfumery and flavouring to recreate their scent.
The carotenes α-carotene, β-carotene, γ-carotene, and the xanthophyll β-cryptoxanthin, can all be metabolized to β-ionone, and thus have vitamin A activity because they can be converted by plant-eating animals to retinol and retinal. Carotenoids that do not contain the β-ionone moiety cannot be converted to retinol, and thus have no vitamin A activity.
Biosynthesis
Carotenoids are the precursors of important fragrance compounds in several flowers. For example, a 2010 study of ionones in Osmanthus fragrans Lour. var. aurantiacus determined its essential oil contained the highest diversity of carotenoid-derived volatiles among the flowering plants investigated. A cDNA encoding a carotenoid cleavage enzyme, OfCCD1, was identified from transcripts isolated from flowers of O. fragrans Lour. The recombinant enzymes cleaved carotenes to produce α-ionone and β-ionone in in vitro assays.
The same study also discovered that carotenoid content, volatile emissions, and OfCCD1 transcript levels are subject to photorhythmic changes, and principally increased during daylight hours. At the times when OfCCD1 transcript levels reached their maxima, the carotenoid content remained low or slightly decreased. The emission of ionones was also higher during the day; however, emissions decreased at a lower rate tha |
https://en.wikipedia.org/wiki/Git | Git () is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers who are collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows (thousands of parallel branches running on different computers).
Git was originally authored by Linus Torvalds in 2005 for development of the Linux kernel, with other kernel developers contributing to its initial development. Since 2005, Junio Hamano has been the core maintainer. As with most other distributed version control systems, and unlike most client–server systems, every Git directory on every computer is a full-fledged repository with complete history and full version-tracking abilities, independent of network access or a central server. Git is free and open-source software shared under the GPL-2.0-only license.
Since its creation, Git has become the most popular distributed version control system, with nearly 95% of developers reporting it as their primary version control system as of 2022. There are many popular offerings of Git repository services, including GitHub, SourceForge, Bitbucket and GitLab.
History
Git development was started by Torvalds in April 2005 when the proprietary source-control management (SCM) system used for Linux kernel development since 2002, BitKeeper, revoked its free license for Linux development. The copyright holder of BitKeeper, Larry McVoy, claimed that Andrew Tridgell had created SourcePuller by reverse engineering the BitKeeper protocols. The same incident also spurred the creation of another version-control system, Mercurial.
Torvalds wanted a distributed system that he could use like BitKeeper, but none of the available free systems met his needs. He cited an example of a source-control management system needing 30 seconds to apply a patch and update all associated metadata, and noted that this would not s |
https://en.wikipedia.org/wiki/Domatic%20number | In graph theory, a domatic partition of a graph G = (V, E) is a partition of V into disjoint sets V1, V2,..., VK such that each Vi is a dominating set for G. The figure on the right shows a domatic partition of a graph; here the dominating set V1 consists of the yellow vertices, V2 consists of the green vertices, and V3 consists of the blue vertices.
The domatic number is the maximum size of a domatic partition, that is, the maximum number of disjoint dominating sets. The graph in the figure has domatic number 3. It is easy to see that the domatic number is at least 3 because we have presented a domatic partition of size 3. To see that the domatic number is at most 3, we first review a simple upper bound.
Upper bounds
Let δ be the minimum degree of the graph G. The domatic number of G is at most δ + 1. To see this, consider a vertex v of degree δ. Let N consist of v and its neighbours. We know that (1) each dominating set must contain at least one vertex in N (domination), and (2) each vertex in N is contained in at most one dominating set (disjointness). Therefore, there are at most |N| = δ + 1 disjoint dominating sets.
The graph in the figure has minimum degree δ = 2, and therefore its domatic number is at most δ + 1 = 3. Hence we have shown that its domatic number is exactly 3; the figure shows a maximum-size domatic partition.
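For very small graphs the domatic number can be checked directly; the following brute-force sketch (my own illustration, exponential in the number of vertices and not part of the article) tries every assignment of vertices to color classes and keeps the largest number of classes that are all dominating sets.

from itertools import product

def is_dominating(graph, subset):
    # Every vertex is in the subset or has a neighbour in it.
    return all(v in subset or any(u in subset for u in graph[v]) for v in graph)

def domatic_number(graph):
    vertices = list(graph)
    k = 1                                   # the whole vertex set is always a dominating set
    while True:
        found = False
        for assignment in product(range(k + 1), repeat=len(vertices)):
            classes = [{v for v, c in zip(vertices, assignment) if c == i} for i in range(k + 1)]
            if all(is_dominating(graph, cls) for cls in classes):
                found = True
                break
        if not found:
            return k
        k += 1

# For the triangle graph {1: [2, 3], 2: [1, 3], 3: [1, 2]} this returns 3.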
Lower bounds
If there is no isolated vertex in the graph (that is, δ ≥ 1), then the domatic number is at least 2. To see this, note that (1) a weak 2-coloring is a domatic partition if there is no isolated vertex, and (2) any graph has a weak 2-coloring. Alternatively, (1) a maximal independent set is a dominating set, and (2) the complement of a maximal independent set is also a dominating set if there are no isolated vertices.
The figure on the right shows a weak 2-coloring, which is also a domatic partition of size 2: the dark nodes are a dominating set, and the light nodes are another dominating set (the light nodes form a maximal independent set). See weak coloring for more information. |
https://en.wikipedia.org/wiki/Defensin | Defensins are small cysteine-rich cationic proteins across cellular life, including vertebrate and invertebrate animals, plants, and fungi. They are host defense peptides, with members displaying either direct antimicrobial activity, immune signaling activities, or both. They are variously active against bacteria, fungi and many enveloped and nonenveloped viruses. They are typically 18-45 amino acids in length, with three or four highly conserved disulphide bonds.
In animals, they are produced by cells of the innate immune system and epithelial cells, whereas in plants and fungi they are produced by a wide variety of tissues. An organism usually produces many different defensins, some of which are stored inside the cells (e.g. in neutrophil granulocytes to kill phagocytosed bacteria), and others are secreted into the extracellular medium. For those that directly kill microbes, their mechanism of action varies from disruption of the microbial cell membrane to metabolic disruption.
Varieties
The name 'defensin' was coined in the mid-1980s, though the proteins have been called 'Cationic Antimicrobial Proteins,' 'Neutrophil peptides,' 'Gamma thionins' amongst others.
Proteins called 'defensins' are not all evolutionarily related to one another. Instead, they fall into two broad superfamilies, each of which contains multiple families. One superfamily, the trans-defensins, contains the defensins found in humans and other vertebrates, as well as some invertebrates. The other superfamily, cis-defensins, contains the defensins found in invertebrates, plants, and fungi. The superfamilies and families are determined by the overall tertiary structure, and each family usually has a conserved pattern of disulphide bonds. All defensins form small and compact folded structures, typically with a high positive charge, that are highly stable due to the multiple disulphide bonds. In all families, the underlying genes responsible for defensin production are highly polymorphic.
Trans-defe |
https://en.wikipedia.org/wiki/Pteropsida | Pteropsida is a subdivision of vascular plants that is no longer in use. It included all flowering plants and ferns and was divided into Filicinae, Gymnospermae, and Angiospermae. |
https://en.wikipedia.org/wiki/National%20Microbiology%20Laboratory | The National Microbiology Laboratory (NML) is part of the Public Health Agency of Canada (PHAC), the agency of the Government of Canada that is responsible for public health, health emergency preparedness and response, and infectious and chronic disease control and prevention.
NML is located in several sites across the country including the Canadian Science Centre for Human and Animal Health (CSCHAH) in Winnipeg, Manitoba. NML has a second site in Winnipeg, the JC Wilt Infectious Disease Research Centre on Logan Avenue which serves as a hub for HIV research and diagnostics in Canada. The three other primary sites include locations in Guelph, St. Hyacinthe and Lethbridge.
The CSCHAH is a biosafety level 4 infectious disease laboratory facility, the only one of its kind in Canada. With maximum containment, scientists are able to work with pathogens including Ebola, Marburg and Lassa fever.
The NML's CSCHAH is also home to the Canadian Food Inspection Agency's National Centre for Foreign Animal Disease, and thus the scientists at the NML share their premises with animal virologists.
History
The National Microbiology Laboratory was preceded by the Bureau of Microbiology which was originally part of the Laboratory Centre for Disease Control of Health Canada in Ottawa. In the 1980s, Health Canada identified both the need to replace existing laboratory space that was reaching the end of its lifespan and the need for Containment Level 4 space in the country.
Around the same time, Agriculture Canada (prior to the Canadian Food Inspection Agency being formed) also identified the need for new laboratory space including high-containment. Numerous benefits were identified for housing both laboratories in one building and Winnipeg was chosen as the site; an announcement to that effect was made in October 1987.
After some debate, the spot chosen for the site was a city works yard near to the Health Sciences Centre (a major teaching hospital), the University of Manitoba's me |
https://en.wikipedia.org/wiki/Kampo | Kampo or Kanpō medicine (漢方医学), often known simply as Kanpō (漢方), is the study of traditional Chinese medicine in Japan following its introduction, beginning in the 7th century. It was adapted and modified to suit Japanese culture and traditions. Traditional Japanese medicine uses most of the Chinese methods, including acupuncture, moxibustion, traditional Chinese herbology, and traditional food therapy.
History
Origins
According to Chinese mythology, the origins of traditional Chinese medicine are traced back to the three legendary sovereigns Fuxi, Shennong and the Yellow Emperor. Shennong is believed to have tasted hundreds of herbs to ascertain their medicinal value and effects on the human body and help relieve people of their sufferings. The oldest written record focusing solely on the medicinal use of plants was the Shennong Ben Cao Jing which was compiled around the end of the first century B.C. and is said to have classified 365 species of herbs or medicinal plants.
Chinese medical practices were introduced to Japan during the 6th century A.D. In 608, Empress Suiko dispatched E-Nichi, Fuku-In and other young physicians to China. It is said that they studied medicine there for 15 years. Until 838, Japan sent 19 missions to Tang China. While the officials studied Chinese government structures, physicians and many of the Japanese monks absorbed Chinese medical knowledge.
Early Japanese adaptation
In 702 A.D., the Taihō Code was promulgated as an adaptation of the governmental system of China's Tang dynasty. One section called for the establishment of a university (daigaku) including a medical school with an elaborate training program, but due to incessant civil war this program never became effective. Empress Kōmyō (701–760) established the Hidenin and Seyakuin in the Kōfuku-Temple (Kōfuku-ji) in Nara, being two Buddhist institutions that provided free healthcare and medicine for the needy. For centuries to come Japanese Buddhist monks were essential in conveying Chinese medical know-how |
https://en.wikipedia.org/wiki/Model%20of%20computation | In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
Models
Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.
Sequential models
Sequential models include:
Finite state machines
Post machines (Post–Turing machines and tag machines).
Pushdown automata
Register machines
Random-access machines
Turing machines
Decision tree model
Functional models
Functional models include:
Abstract rewriting systems
Combinatory logic
General recursive functions
Lambda calculus
Concurrent models
Concurrent models include:
Actor model
Cellular automaton
Interaction nets
Kahn process networks
Logic gates and digital circuits
Petri nets
Synchronous Data Flow
Some of these models have both deterministic and nondeterministic variants. Nondeterministic models are not useful for practical computation; they are used in the study of computational complexity of algorithms.
Models differ in their expressive power; for example, each function that can be computed by a Finite state machine can also be computed by a Turing machine, but not vice versa.
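As a concrete illustration (a sketch of my own, not taken from the article), here is a two-state finite state machine in Python that decides whether a binary string contains an even number of 1s; a Turing machine can simulate it directly, while no finite state machine can decide, say, the language of balanced parentheses.

def even_ones(s):
    # States: "even" and "odd"; reading a '1' toggles the state, a '0' leaves it unchanged.
    transitions = {("even", "1"): "odd", ("odd", "1"): "even",
                   ("even", "0"): "even", ("odd", "0"): "odd"}
    state = "even"
    for ch in s:
        state = transitions[(state, ch)]
    return state == "even"

# even_ones("1001") is True, even_ones("1101") is False.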
Uses
In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of primitive operations allowed which have unit cost, or simply unit-cost operations. A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs fro |
https://en.wikipedia.org/wiki/Cache-oblivious%20algorithm | In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a processor cache without having the size of the cache (or the length of the cache lines, etc.) as an explicit parameter. An optimal cache-oblivious algorithm is a cache-oblivious algorithm that uses the cache optimally (in an asymptotic sense, ignoring constant factors). Thus, a cache-oblivious algorithm is designed to perform well, without modification, on multiple machines with different cache sizes, or for a memory hierarchy with different levels of cache having different sizes. Cache-oblivious algorithms are contrasted with explicit loop tiling, which explicitly breaks a problem into blocks that are optimally sized for a given cache.
Optimal cache-oblivious algorithms are known for matrix multiplication, matrix transposition, sorting, and several other problems. Some more general algorithms, such as Cooley–Tukey FFT, are optimally cache-oblivious under certain choices of parameters. As these algorithms are only optimal in an asymptotic sense (ignoring constant factors), further machine-specific tuning may be required to obtain nearly optimal performance in an absolute sense. The goal of cache-oblivious algorithms is to reduce the amount of such tuning that is required.
Typically, a cache-oblivious algorithm works by a recursive divide-and-conquer algorithm, where the problem is divided into smaller and smaller subproblems. Eventually, one reaches a subproblem size that fits into the cache, regardless of the cache size. For example, an optimal cache-oblivious matrix multiplication is obtained by recursively dividing each matrix into four sub-matrices to be multiplied, multiplying the submatrices in a depth-first fashion. In tuning for a specific machine, one may use a hybrid algorithm which uses loop tiling tuned for the specific cache sizes at the bottom level but otherwise uses the cache-oblivious algorithm.
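To make the recursion concrete, here is a minimal sketch (my own, not a reference implementation from the literature) of a cache-oblivious matrix multiplication in Python; this common variant halves the largest of the three dimensions at each step rather than splitting into four quadrants, and falls back to the ordinary triple loop once the subproblem is small.

def multiply(A, B, C, i0=0, i1=None, j0=0, j1=None, k0=0, k1=None, base=16):
    # Accumulate A*B into C over the index ranges [i0,i1) x [j0,j1), with shared range [k0,k1).
    if i1 is None:
        i1, j1, k1 = len(A), len(B[0]), len(B)
    m, n, p = i1 - i0, j1 - j0, k1 - k0
    if max(m, n, p) <= base:                      # small enough: multiply directly
        for i in range(i0, i1):
            for k in range(k0, k1):
                a = A[i][k]
                for j in range(j0, j1):
                    C[i][j] += a * B[k][j]
    elif m >= n and m >= p:                       # split the rows of A and C
        mid = i0 + m // 2
        multiply(A, B, C, i0, mid, j0, j1, k0, k1, base)
        multiply(A, B, C, mid, i1, j0, j1, k0, k1, base)
    elif n >= p:                                  # split the columns of B and C
        mid = j0 + n // 2
        multiply(A, B, C, i0, i1, j0, mid, k0, k1, base)
        multiply(A, B, C, i0, i1, mid, j1, k0, k1, base)
    else:                                         # split the shared dimension
        mid = k0 + p // 2
        multiply(A, B, C, i0, i1, j0, j1, k0, mid, base)
        multiply(A, B, C, i0, i1, j0, j1, mid, k1, base)

Each subproblem eventually fits into whatever cache is present, which is precisely why no cache parameter appears in the code.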
History
The idea (and nam |
https://en.wikipedia.org/wiki/Finite%20model%20theory | Finite model theory is a subarea of model theory. Model theory is the branch of logic which deals with the relation between a formal language (syntax) and its interpretations (semantics). Finite model theory is a restriction of model theory to interpretations on finite structures, which have a finite universe.
Since many central theorems of model theory do not hold when restricted to finite structures, finite model theory is quite different from model theory in its methods of proof. Central results of classical model theory that fail for finite structures under finite model theory include the compactness theorem, Gödel's completeness theorem, and the method of ultraproducts for first-order logic (FO).
While model theory has many applications to mathematical algebra, finite model theory became an "unusually effective" instrument in computer science. In other words: "In the history of mathematical logic most interest has concentrated on infinite structures. [...] Yet, the objects computers have and hold are always finite. To study computation we need a theory of finite structures." Thus the main application areas of finite model theory are: descriptive complexity theory, database theory and formal language theory.
Axiomatisability
A common motivating question in finite model theory is whether a given class of structures can be described in a given language. For instance, one might ask whether the class of cyclic graphs can be distinguished among graphs by a FO sentence, which can also be phrased as asking whether cyclicity is FO-expressible.
A single finite structure can always be axiomatized in first-order logic, where axiomatized in a language L means described uniquely up to isomorphism by a single L-sentence. Similarly, any finite collection of finite structures can always be axiomatized in first-order logic. Some, but not all, infinite collections of finite structures can also be axiomatized by a single first-order sentence.
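To illustrate with an example constructed here (not quoted from the article): the two-element linear order ({a, b}, <) with a < b is axiomatized up to isomorphism by the single first-order sentence ∃x ∃y (¬(x = y) ∧ x < y ∧ ¬(y < x) ∧ ¬(x < x) ∧ ¬(y < y) ∧ ∀z (z = x ∨ z = y)), which fixes both the number of elements and the entire behaviour of < on them; the same recipe of listing the elements and their relations works for any single finite structure over a finite vocabulary.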
Characterisation of a single |
https://en.wikipedia.org/wiki/Triband%20%28flag%29 | A triband is a vexillological style which consists of three stripes arranged to form a flag. These stripes may be two or three colours, and may be charged with an emblem in the middle stripe. All tricolour flags are tribands, but not all tribands are tricolour flags, which require three unique colours.
Design
Outside of the name, which requires three bands of colour, there are no other requirements for what a triband must look like, so there are many flags that look very different from each other but are all considered tribands.
Some triband flags (e.g. those of Croatia and Ghana) have their stripes positioned horizontally, while others (e.g. that of Italy) position the stripes vertically. Often the stripes on a triband are of equal length and width, though this is not always the case, as can be seen in the flags of Colombia and Canada. Symbols on tribands may be seals, such as on the Belizean flag, or any manner of emblems of significance to the area the flag represents, such as in the flags of Argentina, India and Lebanon.
A triband is also a tricolour if the three stripes on the flag are all different colours, rather than two being the same colour. Examples of tricolour flags include those of the Netherlands and France.
Tricolour
A tricolour () or tricolor () is a type of triband design which originated in the 16th century as a symbol of republicanism, liberty, or revolution. The oldest tricolour flag originates from the Netherlands, whose successor later inspired the French and Russian flags. The flags of France, Italy, Romania, Mexico, and Ireland were all first adopted with the formation of an independent republic in the period from the French Revolution to the Revolutions of 1848, with the exception of the Irish tricolour, which dates from 1848 but was not popularised until the Easter Rising in 1916 and adopted in 1919.
History
The first association of the tricolour with republicanism is the orange-white-blue design of the Prince's Flag (Prinsenvlag, pr |
https://en.wikipedia.org/wiki/Gutmann%20method | The Gutmann method is an algorithm for securely erasing the contents of computer hard disk drives, such as files. Devised by Peter Gutmann and Colin Plumb and presented in the paper Secure Deletion of Data from Magnetic and Solid-State Memory in July 1996, it involved writing a series of 35 patterns over the region to be erased.
The selection of patterns assumes that the user does not know the encoding mechanism used by the drive, so it includes patterns designed specifically for three types of drives. A user who knows which type of encoding the drive uses can choose only those patterns intended for their drive. A drive with a different encoding mechanism would need different patterns.
Most of the patterns in the Gutmann method were designed for older MFM/RLL encoded disks. Gutmann himself has noted that more modern drives no longer use these older encoding techniques, making parts of the method irrelevant. He said "In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques".
Since about 2001, some ATA IDE and SATA hard drive manufacturer designs include support for the ATA Secure Erase standard, obviating the need to apply the Gutmann method when erasing an entire drive. The Gutmann method does not apply to USB sticks: a 2011 study reports that 71.7% of data remained available. On solid-state drives it resulted in 0.8–4.3% recovery.
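As a hedged illustration, the Python sketch below performs a generic multi-pass overwrite of a single file. It does not reproduce Gutmann's 35 patterns, and, as noted above, on SSDs, USB sticks and journaling filesystems the physical blocks may not actually be rewritten; the function name and pass count are illustrative only:
import os

def multipass_overwrite(path, passes=3):
    # Generic multi-pass overwrite, NOT the Gutmann 35-pattern sequence.
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            f.seek(0)
            # alternate random data and a fixed byte pattern
            pattern = os.urandom(length) if i % 2 == 0 else bytes([0x55]) * length
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())        # ask the OS to push the data to the device
    os.remove(path)                     # finally unlink the overwritten file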
Background
The delete function in most operating systems simply marks the space occupied by the file as reusable (removes the pointer to the file) without immediately removing any of its contents. At this point the file can be fairly easily recovered by numerous recovery applications. However, once the space is overwritten with other data, there is no known way to use software to recover it. It cannot be done with software alone since the storag |
https://en.wikipedia.org/wiki/Chandra%20Wickramasinghe | Nalin Chandra Wickramasinghe (born 20 January 1939) is a Sri Lankan-born British mathematician, astronomer and astrobiologist of Sinhalese ethnicity. His research interests include the interstellar medium, infrared astronomy, light scattering theory, applications of solid-state physics to astronomy, the early Solar System, comets, astrochemistry, the origin of life and astrobiology. A student and collaborator of Fred Hoyle, the pair worked jointly for over 40 years as influential proponents of panspermia. In 1974 they proposed the hypothesis that some dust in interstellar space was largely organic, later proven to be correct.
Wickramasinghe has advanced numerous fringe claims, including the argument that various outbreaks of illnesses on Earth are of extraterrestrial origins, including the 1918 flu pandemic and certain outbreaks of polio and mad cow disease. For the 1918 flu pandemic they hypothesised that cometary dust brought the virus to Earth simultaneously at multiple locations—a view almost universally dismissed by experts on this pandemic. Claims connecting terrestrial disease and extraterrestrial pathogens have been rejected by the scientific community.
Wickramasinghe has written more than 40 books about astrophysics and related topics; he has made appearances on radio, television and film, and he writes online blogs and articles. He has appeared on BBC Horizon, UK Channel 5 and the History Channel. He appeared on the 2013 Discovery Channel program "Red Rain". He has an association with Daisaku Ikeda, president of the Buddhist sect Soka Gakkai International, that led to the publication of a dialogue with him, first in Japanese and later in English, on the topic of Space and Eternal Life.
Education and career
Wickramasinghe studied at Royal College, Colombo, the University of Ceylon (where he graduated in 1960 with a BSc First Class Honours in mathematics), and at Trinity College and Jesus College, Cambridge, where he obtained his PhD and ScD degrees. Fo |
https://en.wikipedia.org/wiki/Continuous%20integration | In software engineering, continuous integration (CI) is the practice of merging all developers' working copies to a shared mainline several times a day. Nowadays it is typically implemented in such a way that it triggers an automated build with testing. Grady Booch first proposed the term CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.
Rationale
When embarking on a change, a developer takes a copy of the current code base on which to work. As other developers submit changed code to the source code repository, this copy gradually ceases to reflect the repository code. Not only can the existing code base change, but new code can be added as well as new libraries, and other resources that create dependencies, and potential conflicts.
The longer development continues on a branch without merging back to the mainline, the greater the risk of multiple integration conflicts and failures when the developer branch is eventually merged back. When developers submit code to the repository they must first update their code to reflect the changes in the repository since they took their copy. The more changes the repository contains, the more work developers must do before submitting their own changes.
Eventually, the repository may become so different from the developers' baselines that they enter what is sometimes referred to as "merge hell", or "integration hell", where the time it takes to integrate exceeds the time it took to make their original changes.
Workflows
Run tests locally
CI should be used in combination with automated unit tests written through the practices of test-driven development. All unit tests in the developer's local environment should be run and passed before committing to the mainline. This helps prevent one developer's work-in-progress from breaking another develope |
https://en.wikipedia.org/wiki/Reed%20bed | A reedbed or reed bed is a natural habitat found in floodplains, waterlogged depressions and estuaries. Reedbeds are part of a succession from young reeds colonising open water or wet ground through a gradation of increasingly dry ground. As reedbeds age, they build up a considerable litter layer that eventually rises above the water level and that ultimately provides opportunities in the form of new areas for larger terrestrial plants such as shrubs and trees to colonise.
Artificial reedbeds are used to remove pollutants from greywater, and are also called constructed wetlands.
Types
Reedbeds vary in the species that they can support, depending upon water levels within the wetland system, climate, seasonal variations, and the nutrient status and salinity of the water. Reed swamps have 20 cm or more of surface water during the summer and often have high invertebrate and bird species use. Reed fens have water levels at or below the surface during the summer and are often more botanically complex. Reeds and similar plants do not generally grow in very acidic water; so, in these situations, reedbeds are replaced by bogs and vegetation such as poor fen.
Although common reeds are characteristic of reedbeds, not all vegetation dominated by this species is characteristic of reedbeds. It also commonly occurs in unmanaged, damp grassland and as an understorey in certain types of damp woodland.
Wildlife
Most European reedbeds mainly comprise common reed (Phragmites australis) but also include many other tall monocotyledons adapted to growing in wet conditions – other grasses such as reed sweet-grass (Glyceria maxima), Canary reed-grass (Phalaris arundinacea) and small-reed (Calamagrostis species), large sedges (species of Carex, Scirpus, Schoenoplectus, Cladium and related genera), yellow flag iris (Iris pseudacorus), reed-mace ("bulrush" – Typha species), water-plantains (Alisma species), and flowering rush (Butomus umbellatus). Many dicotyledons also occur, such a |
https://en.wikipedia.org/wiki/Intermembrane%20space | The intermembrane space (IMS) is the space occurring between or involving two or more membranes. In cell biology, it is most commonly described as the region between the inner membrane and the outer membrane of a mitochondrion or a chloroplast. It also refers to the space between the inner and outer nuclear membranes of the nuclear envelope, but is often called the perinuclear space. The IMS of mitochondria plays a crucial role in coordinating a variety of cellular activities, such as regulation of respiration and metabolic functions. Unlike the IMS of the mitochondria, the IMS of the chloroplast does not seem to have any obvious function.
Intermembrane space of mitochondria
Mitochondria are surrounded by two membranes; the inner and outer mitochondrial membranes. These two membranes allow the formation of two aqueous compartments, which are the intermembrane space (IMS) and the matrix. Channel proteins called porins in the outer membrane allow free diffusion of ions and small proteins about 5000 daltons or less into the IMS. This makes the IMS chemically equivalent to the cytosol regarding the small molecules it contains. By contrast, specific transport proteins are required to transport ions and other small molecules across the inner mitochondrial membrane into the matrix due to its impermeability. The IMS also contains many enzymes that use the ATP moving out of the matrix to phosphorylate other nucleotides and proteins that initiate apoptosis.
Translocation
Most proteins destined for the mitochondrial matrix are synthesized as precursors in the cytosol and are imported into the mitochondria by the translocase of the outer membrane (TOM) and the translocase of the inner membrane (TIM). The IMS is involved in mitochondrial protein translocation: small TIM chaperones, hexameric complexes located in the IMS, bind hydrophobic precursor proteins and deliver them to the TIM.
Oxidative phosphoryl |
https://en.wikipedia.org/wiki/Windows%20Server | Windows Server (formerly Windows NT Server) is a group of operating systems (OS) for servers that Microsoft has been developing since 1993. The first OS that was released for this platform is Windows NT 3.1 Advanced Server. With the release of Windows Server 2003, the brand name was changed to Windows Server. The latest release of Windows Server is Windows Server 2022, which was released in 2021.
Microsoft's history of developing operating systems for servers goes back to Windows NT 3.1 Advanced Server. Windows 2000 Server is the first OS to include Active Directory, DNS Server, DHCP Server, and Group Policy.
Members
Main releases
Main releases include:
Windows NT 3.1 Advanced Server (July 1993)
Windows NT Server 3.5 (September 1994)
Windows NT Server 3.51 (May 1995)
Windows NT 4.0 Server (July 1996)
Windows 2000 Server (December 1999)
Windows Server 2003 (April 2003)
Windows Server 2003 R2 (December 2005)
Windows Server 2008 (February 2008)
Windows Server 2008 R2 (October 2009)
Windows Server 2012 (September 2012)
Windows Server 2012 R2 (October 2013)
Windows Server 2016 (October 2016)
Windows Server 2019 (October 2018)
Windows Server 2022 (August 2021)
Traditionally, Microsoft supports Windows Server for 10 years, with five years of mainstream support and an additional five years of extended support. These releases also offer a complete desktop experience. Starting with Windows Server 2008 R2, Server Core and Nano Server configurations were made available to reduce the OS footprint. Between 2015 and 2021, Microsoft referred to these releases as "long-term support" releases to set them apart from semi-annual releases (see below.)
For sixteen years, Microsoft released a major version of Windows Server every four years, with one minor version released two years after a major release. The minor versions had an "R2" suffix in their names. In October 2018, Microsoft broke this tradition with the release of Windows Server 2019, which should have been "Windows Server |
https://en.wikipedia.org/wiki/Structurae | Structurae is an online database containing pictures and information about structural and civil engineering works, and their associated engineers, architects, and builders.
Overview
Structurae was founded in 1998 by Nicolas Janberg, who had studied civil engineering at Princeton University. In March 2012, Structurae was acquired by , a subsidiary of John Wiley & Sons, Inc., with Janberg joining the company as Structurae's editor-in-chief. At that time, the web site received more than one million pageviews per month, and was available in English, French and German. In 2015, Janberg bought the site back to operate it as a freelancer again.
Buildings in the Structurae database |
https://en.wikipedia.org/wiki/Douglas%20Houghton%20Campbell | Douglas Houghton Campbell (December 19, 1859 – February 24, 1953) was an American botanist and university professor. He was one of the 15 founding professors at Stanford University. His death was described as "the end of an era of a group of great plant morphologists."
Campbell was born and raised in Detroit, Michigan. His father, James V. Campbell, was a member of the Supreme Court of the state of Michigan and a law professor at the University of Michigan. Douglas Campbell graduated from Detroit High School in 1878, going on to study at the University of Michigan. He studied botany, learning new microscopy techniques, and becoming interested in cryptogamic (spore-bearing) ferns. He received his master's degree in 1882, and taught botany at Detroit High School while he completed his PhD research. He received his PhD in 1886, then travelled to Germany to learn more microscopy techniques. He developed a technique to embed plant material in paraffin to make fine cross-sections; he was one of the first if not the first to study plant specimens using this technique, which had been newly developed by zoologists. He was also a pioneer in the study of microscopic specimens using vital stains.
When Campbell returned to the United States he took up a professorship at Indiana University (1888 to 1891), writing the textbook Elements of Structural and Systematic Botany. In 1891 he became the founding head of the botany department at Stanford University and remained at Stanford for the remainder of his career, retiring in 1925. He studied mosses and liverworts, producing The Structure and Development of Mosses and Ferns in 1895. This book, together with its subsequent editions in 1905 and 1918, became the authoritative work on the subject and "firmly established Campbell's reputation as one of the leading botanists of the United States." His Lectures on the Evolution of Plants was published in 1899, and became widely used as a botany textbook. University Textbook of Botany was |
https://en.wikipedia.org/wiki/Franz%20Nissl | Franz Alexander Nissl (9 September 1860, in Frankenthal – 11 August 1919, in Munich) was a German psychiatrist and medical researcher. He was a noted neuropathologist.
Early life
Nissl was born in Frankenthal to Theodor Nissl and Maria Haas. Theodor taught Latin in a Catholic school and wanted Franz to become a priest. However, Franz entered the Ludwig Maximilian University of Munich to study medicine. Later, he specialized in psychiatry.
One of Nissl's university professors was Bernhard von Gudden. His assistant, Sigbert Josef Maria Ganser suggested that Nissl write an essay on the pathology of the cells of the cortex of the brain. When the medical faculty offered a competition for a prize in neurology in 1884, Nissl undertook the brain-cortex study. He used alcohol as a fixative and developed a staining technique that allowed the demonstration of several new nerve-cell constituents. Nissl won the prize, and wrote his doctoral dissertation on the same topic in 1885.
Career in medical research and education
Professor von Gudden was the judge in Nissl's college-essay competition, and he was so impressed with the study that he offered Nissl an assistantship at the Furstenried castle southwest of Munich, where one of his responsibilities would be to care for the mad Prince Otto. Nissl accepted, and remained in that post from 1885 until 1888. There was a small laboratory at the castle, which enabled Nissl to continue with his neuropathological research. In 1888 Nissl moved to the Institution Blankenheim. In 1889 he went to Frankfurt as second in position under Emil Sioli (1852–1922) at the Städtische Irrenanstalt. There he met neurologist Ludwig Edinger and neuropathologist Karl Weigert, who was developing a neuroglial stain. This work motivated Nissl to study mental and nervous diseases by relating them to observable changes in glial cells, blood elements, blood vessels and brain tissue in general.
In Frankfurt Nissl became acquainted with Alois Alzheime |
https://en.wikipedia.org/wiki/Reset%20vector | In computing, the reset vector is the default location a central processing unit will go to find the first instruction it will execute after a reset. The reset vector is a pointer or address, where the CPU should always begin as soon as it is able to execute instructions. The address is in a section of non-volatile memory initialized to contain instructions to start the operation of the CPU, as the first step in the process of booting the system containing the CPU.
Examples
Below is a list of typically used addresses by different microprocessors:
x86 family (Intel)
The reset vector for the Intel 8086 processor is at physical address FFFF0h (16 bytes below 1 MB). The value of the CS register at reset is FFFFh and the value of the IP register at reset is 0000h to form the segmented address FFFFh:0000h, which maps to physical address FFFF0h.
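Since in real mode the physical address is formed as segment × 16 + offset, the 8086 figures above can be checked with a tiny illustrative Python snippet:
cs, ip = 0xFFFF, 0x0000          # 8086 register values at reset
physical = (cs << 4) + ip        # real-mode address formation: segment * 16 + offset
print(hex(physical))             # 0xffff0, i.e. FFFF0h, 16 bytes below 1 MB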
The reset vector for the Intel 80286 processor is at physical address FFFFF0h (16 bytes below 16 MB). The value of the CS register at reset is F000h with the descriptor base set to FF0000h and the value of the IP register at reset is FFF0h to form the segmented address F000h:FFF0h, which maps to physical address FFFFF0h in real mode. This was changed to allow sufficient space to switch to protected mode without modifying the CS register.
The reset vector for the Intel 80386 and later x86 processors is physical address FFFFFFF0h (16 bytes below 4 GB). The value of the selector portion of the CS register at reset is F000h, the value of the base portion of the CS register is FFFF0000h, and the value of the IP register at reset is FFF0h to form the segmented address FFFF0000h:FFF0h, which maps to the physical address FFFFFFF0h in real mode.
Others
The reset vector for ARM processors is address 0x0 or 0xFFFF0000. During normal execution RAM is re-mapped to this location to improve performance, compared to the original ROM-based vector table.
The reset vector for MIPS32 processors is at virtual address 0xBFC00000, which is l |
https://en.wikipedia.org/wiki/Jetronic | Jetronic is a trade name of a manifold injection technology for automotive petrol engines, developed and marketed by Robert Bosch GmbH from the 1960s onwards. Bosch licensed the concept to many automobile manufacturers. There are several variations of the technology offering technological development and refinement.
D-Jetronic (1967–1979)
Analogue fuel injection; 'D' is from the German Druck, meaning pressure. Inlet manifold vacuum is measured using a pressure sensor located in, or connected to, the intake manifold, in order to calculate the duration of fuel injection pulses. Originally, this system was called Jetronic, but the name D-Jetronic was later created as a retronym to distinguish it from subsequent Jetronic iterations.
D-Jetronic was essentially a further refinement of the Electrojector fuel delivery system developed by the Bendix Corporation in the late 1950s. Rather than choosing to eradicate the various reliability issues with the Electrojector system, Bendix instead licensed the design to Bosch. With the role of the Bendix system being largely forgotten, D-Jetronic became known as the first widely successful precursor of modern electronic common rail systems; it had constant-pressure fuel delivery to the injectors and pulsed injections, albeit grouped (2 groups of injectors pulsed together) rather than sequential (individual injector pulses) as on later systems.
As in the Electrojector system, D-Jetronic used analogue circuitry, with no microprocessor or digital logic; the ECU used about 25 transistors to perform all of the processing. Two important factors that had led to the ultimate failure of the Electrojector system were superseded: the paper-wrapped capacitors unsuited to heat-cycling, and the amplitude-modulated (TV/ham-radio-style) signals used to control the injectors. The still-present lack of processing power and the unavailability of solid-state sensors meant that the vacuum sensor was a rather expensive precision instrument, rather like a barometer, with brass bello
https://en.wikipedia.org/wiki/Elementary%20mathematics | Elementary mathematics, also known as primary or secondary school mathematics, is the study of mathematics topics that are commonly taught at the primary or secondary school levels around the world. It includes a wide range of mathematical concepts and skills, including number sense, algebra, geometry, measurement, and data analysis. These concepts and skills form the foundation for more advanced mathematical study and are essential for success in many fields and everyday life. The study of elementary mathematics is a crucial part of a student's education and lays the foundation for future academic and career success.
Strands of elementary mathematics
Number sense and numeration
Number Sense is an understanding of numbers and operations. In the 'Number Sense and Numeration' strand students develop an understanding of numbers by being taught various ways of representing numbers, as well as the relationships among numbers.
Properties of the natural numbers such as divisibility and the distribution of prime numbers, are studied in basic number theory, another part of elementary mathematics.
Elementary Focus
Abacus
LCM and GCD
Fractions and Decimals
Place Value & Face Value
Addition and subtraction
Multiplication and Division
Counting
Counting Money
Algebra
Representing and ordering numbers
Estimating
Approximating
Problem Solving
To have a strong foundation in mathematics and to be able to succeed in the other strands students need to have a fundamental understanding of number sense and numeration.
Spatial sense
'Measurement skills and concepts' or 'Spatial Sense' are directly related to the world in which students live. Many of the concepts that students are taught in this strand are also used in other subjects such as science, social studies, and physical education. In the measurement strand students learn about the measurable attributes of objects, in addition to the basic metric system.
Elementary Focus
Standard and non-standard units of measurement
te |
https://en.wikipedia.org/wiki/K-nearest%20neighbors%20algorithm | In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in a data set. The output depends on whether k-NN is used for classification or regression:
In k-NN classification, the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
In k-NN regression, the output is the property value for the object. This value is the average of the values of k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor.
k-NN is a type of classification where the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance for classification, if the features represent different physical units or come in vastly different scales then normalizing the training data can improve its accuracy dramatically.
Both for classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones. For example, a common weighting scheme consists in giving each neighbor a weight of 1/d, where d is the distance to the neighbor.
The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required.
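A minimal NumPy sketch of the classifier as described above, with the optional 1/d weighting; the toy data, the small epsilon term and the tie-breaking behaviour are illustrative choices rather than part of any standard implementation:
import numpy as np

def knn_predict(X_train, y_train, x, k=3, weighted=True):
    # Classify x by a plurality vote of its k nearest training examples.
    dists = np.linalg.norm(X_train - x, axis=1)              # Euclidean distances
    nearest = np.argsort(dists)[:k]                          # indices of the k closest points
    votes = {}
    for i in nearest:
        w = 1.0 / (dists[i] + 1e-12) if weighted else 1.0    # optional 1/d weighting
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)                         # class with the largest (weighted) vote

# usage on a toy two-class data set
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.1])))               # -> 0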
A peculiarity of the k-NN algorithm is that it is sensitive to |
https://en.wikipedia.org/wiki/New%20chemical%20entity | A new chemical entity (NCE) is, according to the U.S. Food and Drug Administration, a novel, small, chemical molecule drug that is undergoing clinical trials or has received a first approval (not a new use) by the FDA in any other application submitted under section 505(b) of the Federal Food, Drug, and Cosmetic Act.
A new molecular entity (NME) is a broader term that encompasses both an NCE or an NBE (New Biological Entity).
Definition
An active moiety is a molecule or ion, excluding those appended portions of the molecule that cause the drug to be an ester, salt (including a salt with hydrogen or coordination bonds), or other noncovalent derivative (such as a complex, chelate, or clathrate) of the molecule, responsible for the physiological or pharmacological action of the drug substance.
An NCE is a molecule developed by the innovator company in the early drug discovery stage, which after undergoing clinical trials could translate into a drug that could be a treatment for some disease. Synthesis of an NCE is the first step in the process of drug development. Once the synthesis of the NCE has been completed, companies have two options before them. They can either go for clinical trials on their own or license the NCE to another company. In the latter option, companies can avoid the expensive and lengthy process of clinical trials, as the licensee company would be conducting further clinical trials and subsequently launching the drug. Companies adopting this model of business would be able to generate high margins as they get a huge one-time payment for the NCE as well as entering into a revenue sharing agreement with the licensee company.
Under the Food and Drug Administration Amendments Act of 2007, all new chemical entities must first be reviewed by an advisory committee before the FDA can approve these products.
See also
Molecular/chemical entity
Medicinal chemistry
Drug development |
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20continuity%20theorem | In probability theory, Lévy’s continuity theorem, or Lévy's convergence theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of the sequence of random variables with pointwise convergence of their characteristic functions.
This theorem is the basis for one approach to prove the central limit theorem and is one of the major theorems concerning characteristic functions.
Statement
Suppose we have a sequence of random variables $(X_n)_{n=1}^{\infty}$, not necessarily sharing a common probability space, with characteristic functions $\varphi_n(t) = \mathrm{E}\!\left[e^{itX_n}\right]$.
If the sequence of characteristic functions converges pointwise to some function $\varphi$, that is, $\varphi_n(t) \to \varphi(t)$ for every $t \in \mathbb{R}$, then the following statements become equivalent:
$X_n$ converges in distribution to some random variable $X$;
the sequence $(X_n)$ is tight;
$\varphi$ is the characteristic function of some random variable $X$;
$\varphi$ is a continuous function of $t$;
$\varphi$ is continuous at $t = 0$.
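As a standard application (a sketch under the usual central limit theorem assumptions, not part of the article text): for i.i.d. random variables $X_1, X_2, \ldots$ with mean 0 and variance 1, writing $S_n = X_1 + \cdots + X_n$,
$$\varphi_{S_n/\sqrt{n}}(t) = \left(\varphi_{X_1}\!\left(\frac{t}{\sqrt{n}}\right)\right)^{\!n} = \left(1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right)\right)^{\!n} \longrightarrow e^{-t^2/2},$$
and since the limit $e^{-t^2/2}$ is continuous at $t = 0$ (it is the characteristic function of the standard normal distribution), Lévy's continuity theorem gives $S_n/\sqrt{n} \to N(0,1)$ in distribution.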
Proof
Rigorous proofs of this theorem are available. |
https://en.wikipedia.org/wiki/Talk%20box | A talk box (also spelled talkbox and talk-box) is an effects unit that allows musicians to modify the sound of a musical instrument by shaping the frequency content of the sound and to apply speech sounds (in the same way as singing) onto the sounds of the instrument. Typically, a talk box directs sound from the instrument into the musician's mouth by means of a plastic tube adjacent to a vocal microphone. The musician controls the modification of the instrument's sound by changing the shape of the mouth, "vocalizing" the instrument's output into a microphone.
Overview
A talk box is usually an effects pedal that sits on the floor and contains a speaker attached with an airtight connection to a plastic tube; however, it can come in other forms, including homemade, usually crude, versions, and higher quality custom-made versions. The speaker is generally in the form of a compression driver, the sound-generating part of a horn loudspeaker with the horn replaced by the tube connection.
The box has connectors for the connection to the speaker output of an instrument amplifier and a connection to a normal instrument speaker. A foot-operated switch on the box directs the sound either to the talk box speaker or to the normal speaker. The switch is usually a push-on/push-off type. The other end of the tube is taped to the side of a microphone, extending enough to direct the reproduced sound in or near the performer's mouth.
When activated, the sound from the amplifier is reproduced by the speaker in the talk box and directed through the tube into the performer's mouth. The shape of the mouth filters the sound, with the modified sound being picked up by the microphone. The shape of the mouth changes the harmonic content of the sound in the same way it affects the harmonic content generated by the vocal folds when speaking.
The performer can vary the shape of the mouth and position of the tongue, changing the sound of the instrument being reproduced by the talk box speake |
https://en.wikipedia.org/wiki/George%20Albert%20Smith%20%28filmmaker%29 | George Albert Smith (4 January 1864 – 17 May 1959) was an English stage hypnotist, psychic, magic lantern lecturer, Fellow of the Royal Astronomical Society, inventor and a key member of the loose association of early film pioneers dubbed the Brighton School by French film historian Georges Sadoul. He is best known for his controversial work with Edmund Gurney at the Society for Psychical Research, his short films from 1897 to 1903, which pioneered film editing and close-ups, and his development of the first successful colour film process, Kinemacolor.
Biography
Birth and early life
Smith was born in Cripplegate, London in 1864. His father Charles Smith was a ticket-writer and artist. He moved with his family to Brighton, where his mother ran a boarding house on Grand Parade, following the death of his father.
It was in Brighton in the early 1880s that Smith first came to public attention touring the city's performance halls as a stage hypnotist. In 1882 he teamed up with Douglas Blackburn in an act at the Brighton Aquarium involving muscle reading, in which the blindfolded performer identifies objects selected by the audience, and second sight, in which the blindfolded performer finds objects hidden by his assistant somewhere in the theatre.
The Society for Psychical Research (SPR) accepted Smith's claims that the act was genuine and after becoming a member of the society he was appointed private secretary to the Honorary Secretary Edmund Gurney from 1883 to 1888. In 1887, Gurney carried out a number of "hypnotic experiments" in Brighton, with Smith as his "hypnotiser", which in their day made Gurney an impressive figure to the British public.
Since then it has been heavily studied and critiqued by Trevor H. Hall in his study The Strange Case of Edmund Gurney. Hall concluded that Smith (using his stage abilities) faked the results that Gurney trusted in his research papers, and this may have led to Gurney's mysterious death from a narcotic overdose in June 18 |
https://en.wikipedia.org/wiki/Project%20Monterey | Project Monterey was an attempt to build a single Unix operating system that ran across a variety of 32-bit and 64-bit platforms, as well as supporting multi-processing. Announced in October 1998, several Unix vendors were involved; IBM provided POWER and PowerPC support from AIX, Santa Cruz Operation (SCO) provided IA-32 support, and Sequent added multi-processing (MP) support from their DYNIX/ptx system. Intel Corporation provided expertise and ISV development funding for porting to their upcoming IA-64 (Itanium Architecture) CPU platform, which was yet to be released at that time. The focus of the project was to create an enterprise-class UNIX for IA-64, which at the time was expected to eventually dominate the UNIX server market.
By March 2001, however, "the explosion in popularity of Linux ... prompted IBM to quietly ditch" this; all involved attempted to find a niche in the rapidly developing Linux market and moved their focus away from Monterey. Sequent was acquired by IBM in 1999. In 2000, SCO's UNIX business was purchased by Caldera Systems, a Linux distributor, who later renamed themselves the SCO Group. In the same year, IBM eventually declared Monterey dead. Intel, IBM, Caldera Systems, and others had also been running a parallel effort to port Linux to IA-64, Project Trillian, which delivered workable code in February 2000. In late 2000, IBM announced a major effort to support Linux.
In May 2001, the project announced the availability of a beta test version AIX-5L for IA-64, basically meeting its original primary goal. However, Intel had missed its delivery date for its first Itanium processor by two years, and the Monterey software had no market.
With the exception of the IA-64 port and Dynix MP improvements, much of the Monterey effort was an attempt to standardize existing versions of Unix into a single compatible system. Such efforts had been undertaken in the past (e.g., 3DA) and had generally failed, as the companies involved were too relia |
https://en.wikipedia.org/wiki/Charles%20P.%20Thacker | Charles Patrick "Chuck" Thacker (February 26, 1943 – June 12, 2017) was an American pioneer computer designer. He designed the Xerox Alto, which is the first computer that used a mouse-driven graphical user interface (GUI).
Biography
Thacker was born in Pasadena, California, on February 26, 1943. His father was Ralph Scott Thacker, born 1906, an electrical engineer (Caltech class of 1928) in the aeronautical industry. His mother was the former (Mattie) Fern Cheek, born 1922 in Oklahoma, a cashier and secretary, who soon raised their two sons on her own.
He received his B.S. in physics from the University of California, Berkeley in 1967. He then joined the university's "Project Genie" in 1968, which developed the pioneering Berkeley Timesharing System on the SDS 940. Butler Lampson, Thacker, and others then left to form the Berkeley Computer Corporation, where Thacker designed the processor and memory system. While BCC was not commercially successful, this group became the core technologists in the Computer Systems Laboratory at Xerox Palo Alto Research Center (PARC).
Thacker worked in the 1970s and 1980s at the PARC, where he served as project leader of the Xerox Alto personal computer system, was co-inventor of the Ethernet LAN, and contributed to many other projects, including the first laser printer.
In 1983, Thacker was a founder of the Systems Research Center (SRC) of Digital Equipment Corporation (DEC), and in 1997, he joined Microsoft Research to help establish Microsoft Research Cambridge in Cambridge, England.
After returning to the United States, Thacker designed the hardware for Microsoft's Tablet PC, based on his experience with the "interim Dynabook" at PARC, and later the Lectrice, a pen-based hand-held computer at DEC SRC.
From 2006–2010 Thacker was a research contributor to the Berkeley Research Accelerator for Multiple Processors (RAMP) based upon the Berkeley Emulation Engine FPGA platform, which sought to explore new processor designs throug |
https://en.wikipedia.org/wiki/List%20of%20HTML%20editors | The following is a list of HTML editors.
Source code editors
Source code editors evolved from basic text editors, but include additional tools specifically geared toward handling code.
ActiveState Komodo
Aptana
Arachnophilia
Atom
BBEdit
BlueFish
Coda
Codelobster
CoffeeCup HTML Editor
CudaText
Dreamweaver
Eclipse with the Web Tools Platform
Emacs
EmEditor
Geany
HTML-Kit
HomeSite
Kate
Microsoft Visual Studio
Microsoft Visual Studio Code
Notepad++
NetBeans IDE
PHPEdit
PhpStorm IDE
PSPad
RJ TextEd
SciTE
Smultron
Sublime Text
TED Notepad
TextMate
TextPad
TextWrangler
UltraEdit
Vim
WebStorm
WYSIWYG editors
HTML editors that support What You See Is What You Get (WYSIWYG) paradigm provide a user interface similar to a word processor for creating HTML documents, as an alternative to manual coding. Achieving true WYSIWYG however is not always possible.
Adobe Dreamweaver
BlueGriffon
Bootstrap Studio
CKEditor
EZGenerator
FirstPage
Freeway
Google Web Designer
HTML-NOTEPAD
Jimdo
KompoZer
Maqetta
Microsoft Expression Web
Microsoft SharePoint Designer
Microsoft Visual Web Developer Express
Microsoft Publisher
Mobirise
NetObjects Fusion
Opera Dragonfly
Quanta Plus
RocketCake
SeaMonkey Composer
Silex website builder
TinyMCE
TOWeb
UltraEdit
Webflow
Wix.com
Word processors
While word processors are not ostensibly HTML editors, the following word processors are capable of editing and saving HTML documents. Results will vary when opening some web pages.
AbiWord
Apache OpenOffice
Apple Pages
AppleWorks
Collabora Online
Kingsoft Office
LibreOffice Writer
Microsoft Word
WordPerfect
WYSIWYM editors
WYSIWYM (what you see is what you mean) is an alternative paradigm to WYSIWYG, in which the focus is on the semantic structure of the document rather than on the presentation. These editors produce more logically structured markup than is typical of WYSIWYG editors, while retaining the advantage in ease of use over hand-coding using a tex |
https://en.wikipedia.org/wiki/Mastika | Mastika or mastiha is a liqueur seasoned with mastic, a resin with a slightly pine or cedar-like flavor gathered from the mastic tree, a small evergreen tree native to the Mediterranean region. In Greece, mastiha () or mastichato () is a sweet liqueur produced with the mastika resin from the Greek island of Chios, which is distilled after hardening to crystals. Sugar is typically added. It is a sweet liqueur that is typically consumed at the end of a meal. It has a distinctive flavor, reminiscent of pine and herbs. It is claimed to have medicinal properties and to aid digestion.
In August of 2012, wildfires spread across the island of Chios, scorching and destroying more than half of the island's mastic orchards. Because the product has a "protected designation of origin" from the European Union, the fire not only impacted local Chios farmers, who lost approximately 60 percent of their crops, but also derailed the global supply of the product.
Chios Mastiha
Chios Mastiha Liqueur (, ) is a liqueur flavoured with mastic distillate or mastic oil from the island of Chios. The name Chios Mastiha has protected designation of origin status in the European Union. Chios Mastiha liqueur is clear with a sweet aroma. It is traditionally served cold.
Production
The process is regulated by Greek law and includes the flavouring of alcohol with mastic oil by agitation or the distillation of mastic with alcohol. The solution is then diluted with water and sweetened with sugar. The final alcoholic strength by volume of Chios Mastiha must be at least 15%.
Flavouring
The only flavouring agents used in Chios Mastiha liqueur are an alcoholic distillate of mastic or mastic oil made from Chios mastic. Mastic is the hardened sap harvested from the mastic tree, Pistacia lentiscus var chia, a small evergreen shrub that grows on rocky terrain on the southern part of the island. Chios mastic is certified by the Agricultural Products Certification and Supervision Organization as part of |
https://en.wikipedia.org/wiki/Superformula | The superformula is a generalization of the superellipse and was proposed by Johan Gielis around 2000. Gielis suggested that the formula can be used to describe many complex shapes and curves that are found in nature. Gielis has filed a patent application related to the synthesis of patterns generated by the superformula, which expired effective 2020-05-10.
In polar coordinates, with $r$ the radius and $\varphi$ the angle, the superformula is:
$$r(\varphi) = \left( \left| \frac{\cos\!\left(\frac{m\varphi}{4}\right)}{a} \right|^{n_2} + \left| \frac{\sin\!\left(\frac{m\varphi}{4}\right)}{b} \right|^{n_3} \right)^{-\frac{1}{n_1}}$$
By choosing different values for the parameters $m$, $n_1$, $n_2$, $n_3$, $a$ and $b$, different shapes can be generated.
The formula was obtained by generalizing the superellipse, named and popularized by Piet Hein, a Danish mathematician.
2D plots
In the following examples the values shown above each figure should be m, n1, n2 and n3.
A GNU Octave program for generating these figures
function sf2d(n, a)
% Plot a 2D superformula curve; n = [m, n1, n2, n3], a = [a, b].
u = [0:.001:2 * pi];  % angle values from 0 to 2*pi
raux = abs(1 / a(1) .* abs(cos(n(1) * u / 4))) .^ n(3) + abs(1 / a(2) .* abs(sin(n(1) * u / 4))) .^ n(4);
r = abs(raux) .^ (- 1 / n(2));  % superformula radius at each angle
x = r .* cos(u);  % convert polar coordinates to Cartesian for plotting
y = r .* sin(u);
plot(x, y);
end
Extension to higher dimensions
It is possible to extend the formula to 3, 4, or n dimensions, by means of the spherical product of superformulas. For example, the 3D parametric surface is obtained by multiplying two superformulas r1 and r2. The coordinates are defined by the relations:
$$x = r_1(\theta)\cos\theta \cdot r_2(\phi)\cos\phi, \qquad y = r_1(\theta)\sin\theta \cdot r_2(\phi)\cos\phi, \qquad z = r_2(\phi)\sin\phi,$$
where φ (latitude) varies between −π/2 and π/2 and θ (longitude) between −π and π.
3D plots
3D superformula: a = b = 1; m, n1, n2 and n3 are shown in the pictures.
A GNU Octave program for generating these figures:
function sf3d(n, a)
% Plot a 3D superformula surface as the spherical product of two
% superformula curves r1 (over longitude u) and r2 (over latitude v);
% n = [m, n1, n2, n3], a = [a, b].
u = [- pi:.05:pi];  % longitude samples
v = [- pi / 2:.05:pi / 2];  % latitude samples
nu = length(u);
nv = length(v);
for i = 1:nu
for j = 1:nv
raux1 = abs(1 / a(1) * abs(cos(n(1) .* u(i) / 4))) .^ n(3) + abs(1 / a(2) * abs(sin(n(1) * u(i) / 4))) .^ n(4);
r1 = abs(raux1) .^ (- 1 / n(2));
raux2 = abs(1 / a(1) * abs(cos(n(1) * v(j) / 4))) .^ n(3) + abs(1 / a(2) * abs(sin(n(1) * v(j) / 4))) .^ n(4);
r2 = abs(raux2) .^ (- 1 / n(2));
|
https://en.wikipedia.org/wiki/Tripoli%20Rocketry%20Association | The Tripoli Rocketry Association (TRA) is an international organization and one of the two major organizing bodies for high power rocketry in the United States.
History
Tripoli Rocketry Association was founded in 1964 in the Pittsburgh, Pennsylvania, region as a high school science club, integrating both rocketry and space science. The name "Tripoli" was chosen because the founding members came from three different towns, and one of them helped fund the club's early projects using gold coins that his father had brought back from Tripoli, Lebanon (whose name approximately means "three towns") after World War II. By the late 1980s, it transitioned from a regional club into a formal, incorporated, national US organization, with a focus on self-regulating advanced high-power rocketry (HPR). The deregulation of the aviation industry by the Reagan Administration also facilitated the growth of HPR activities.
The founder was Francis G. Graham. Early members who helped expand the club were Curtis W. Hughes, Kenneth J. Good, and Arthur R. Bower, with Thomas J. (Tom) Blazanin leading its formalization as an incorporated national organization in 1987 with the assistance of Alaska lawyer Darrel J. Gardner.
Tripoli organizes many rocket launches, both regional events hosted by local prefectures and larger international launches like LDRS ("Large Dangerous Rocket Ships") and BALLS. They also provide insurance for organized launches, administer a member certification program for flying high power rockets, and perform testing and certification of commercial hobby rocket motors.
Tripoli has expanded internationally over the years, and currently has clubs in many different countries including Argentina, Australia, Canada, France, Germany, Ireland, Italy, Mexico, Netherlands, Spain, Sweden, Switzerland, the United Kingdom, and the United States. Additionally, it formerly was active in Israel.
Tripoli was involved as a plaintiff in a nine-year lawsuit (in conjunction with the |
https://en.wikipedia.org/wiki/Absolute%20threshold | In neuroscience and psychophysics, an absolute threshold was originally defined as the lowest level of a stimulus – light, sound, touch, etc. – that an organism could detect. Under the influence of signal detection theory, absolute threshold has been redefined as the level at which a stimulus will be detected a specified percentage (often 50%) of the time. The absolute threshold can be influenced by several different factors, such as the subject's motivations and expectations, cognitive processes, and whether the subject is adapted to the stimulus.
The absolute threshold can be compared to the difference threshold, which is the measure of how different two stimuli must be for the subject to notice that they are not the same.
Vision
A landmark 1942 experiment by Hecht, Shlaer, and Pirenne assessed the absolute threshold for vision. They tried to measure the minimum number of photons the human eye can detect 60% of the time, using the following controls:
Dark adaptation – the participants were completely dark adapted (a process lasting forty minutes) to optimise their visual sensitivity.
Location – the stimulus was presented to an area of the right eye where there is a high density of rod cells, 20 degrees to the left of the point of focus (i.e. 20 degrees to the right of the fovea). Roughly this degree of eccentricity (about 20 degrees) has the highest rod density across the whole retina. However, the corresponding location on the right retina, 20 degrees to the left, is very near the blind spot.
Stimulus size – the stimulus had a diameter of 10 minutes of arc (1 minute = 1/60th of a degree). Although not explicitly mentioned in the original research paper, this ensured that the light stimulus fell only on rod cells connected to the same nerve fibre (this is called the area of spatial summation).
Wavelength – the stimulus wavelength matched the maximum sensitivity of rod cells (510 nm).
Stimulus duration – 0.001 second (1 ms).
The researchers found that the e |
https://en.wikipedia.org/wiki/System%20Packet%20Interface | The System Packet Interface (SPI) family of Interoperability Agreements from the Optical Internetworking Forum specify chip-to-chip, channelized, packet interfaces commonly used in synchronous optical networking and Ethernet applications. A typical application of such a packet level interface is between a framer (for optical network) or a MAC (for IP network) and a network processor. Another application of this interface might be between a packet processor ASIC and a traffic manager device.
Context
There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over 1 or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in the Packet over SONET and Optical Transport Network, which are the principal protocols used to carry the internet between cities.
Specifications
The agreements are:
SPI-3 – Packet Interface for Physical and Link Layers for OC-48 (2.488 Gbit/s)
SPI-4.1 – System Physical Interface Level 4 (SPI-4) Phase 1: A System Interface for Interconnection Between Physical and Link Layer, or Peer-to-Peer Entities Operating at an OC-192 Rate (10 Gbit/s).
SPI-4.2 – System Packet Interface Level 4 (SPI-4) Phase 2: OC-192 System Interface for Physical and Link Layer Devices.
SPI-5 – Packet Interface for Physical and Link Layers for OC-768 (40 Gbit/s)
SPI-S – Scalable System Packet Interface - useful for interfaces starting with OC-48 and scaling into the Terabit range
History of the specifications
These agreements grew out of the donation to the OIF by PMC-S |
https://en.wikipedia.org/wiki/Miraculin | Miraculin is a taste modifier, a glycoprotein extracted from the fruit of Synsepalum dulcificum. The berry, also known as the miracle fruit, was documented by explorer Chevalier des Marchais, who searched for many different fruits during a 1725 excursion to its native West Africa.
Miraculin itself does not taste sweet. When taste buds are exposed to miraculin, the protein binds to the sweetness receptors. This causes normally sour-tasting acidic foods, such as citrus, to be perceived as sweet. The effect can last for one or two hours.
History
The sweetening properties of Synsepalum dulcificum berries were first noted by des Marchais during expeditions to West Africa in the 18th century. The term miraculin derived from experiments to isolate and purify the active glycoprotein that gave the berries their sweetening effects, results that were published simultaneously by Japanese and Dutch scientists working independently in the 1960s (the Dutch team called the glycoprotein mieraculin). The word miraculin was in common use by the mid-1970s.
Glycoprotein structure
Miraculin was first sequenced in 1989 and was found to be a 24.6 kilodalton glycoprotein consisting of 191 amino acids and 13.9% by weight of various sugars.
The sugars consist of a total of 3.4 kDa, composed of a molar ratio of glucosamine (31%), mannose (30%), fucose (22%), xylose (10%), and galactose (7%).
The native state of miraculin is a tetramer consisting of two dimers, each held together by a disulfide bridge. Both tetramer miraculin and native dimer miraculin in its crude state have the taste-modifying activity of turning sour tastes into sweet tastes. Miraculin belongs to the Kunitz STI protease inhibitor family.
Sweetness properties
Miraculin, unlike curculin (another taste-modifying agent), is not sweet by itself, but it can change the perception of sourness to sweetness, even for a long period after consumption. The duration and intensity of the sweetness-modifying effect depends on vari |
https://en.wikipedia.org/wiki/Lactisole | Lactisole is the sodium salt and commonly supplied form of 2-(4-methoxyphenoxy)propionic acid, a natural carboxylic acid found in roasted coffee beans. Like gymnemic acid, it has the property of masking sweet flavors and is used for this purpose in the food industry.
Chemistry
Chemically, lactisole is a double ether of hydroquinone. Since it contains an asymmetric carbon atom the molecule is chiral, with the S enantiomer predominating in natural sources and being primarily responsible for the sweetness-masking effect. Commercial lactisole is a racemic mixture of the R and S forms.
Natural occurrences
The parent acid of lactisole was discovered in 1989 in roasted Colombian arabica coffee beans in a concentration of 0.5 to 1.2 ppm.
Anti-sweet properties
At concentrations of 100–150 parts per million in food, lactisole largely suppresses the ability to perceive sweet tastes, both from sugar and from artificial sweeteners such as aspartame. A 12% sucrose solution was perceived like a 4% sucrose solution when lactisole was added. However, it is significantly less efficient than gymnemic acid with acesulfame potassium, sucrose, glucose and sodium saccharin. Research has also found that it has no effect on the perception of bitterness, sourness or saltiness. According to a recent study, lactisole acts on the TAS1R3 subunit of the human sweet taste receptor heteromer, but not on its rodent counterpart.
As a food additive
The principal use of lactisole is in jellies, jams, and similar preserved fruit products containing large amounts of sugar. In these products, by suppressing sugar's sweetness, it allows fruit flavors to come through. In the United States, lactisole is designated as generally recognized as safe (GRAS) by the Flavor and Extract Manufacturers Association (Fema number: 3773) and approved for use in food as flavoring agent up to 150 ppm. Currently, lactisole is manufactured and sold by Domino Sugar and its usage levels are between 50 a |
https://en.wikipedia.org/wiki/Representational%20difference%20analysis | Representational difference analysis (RDA) is a technique used in biological research to find sequence differences in two genomic or cDNA samples. Genomes or cDNA sequences from two samples (i.e. cancer sample and a normal sample) are PCR amplified and differences analyzed using subtractive DNA hybridization. This technology has been further enhanced through the development of representation oligonucleotide microarray analysis (ROMA), which uses array technology to perform such analyses. This method may also be adapted to detect DNA methylation differences, as seen in methylation-sensitive representational difference analysis (MS-RDA).
Theory
This method relies on PCR to differentially amplify non-homologous DNA regions between digested fragments of two nearly identical DNA species, that are called 'driver' and 'tester' DNA. Typically, tester DNA contains a sequence of interest that is non-homologous to driver DNA. When the two species are mixed, the driver sequence is added in excess to tester. During PCR, double stranded fragments first denature at ~95 °C and then re-anneal when subjected to the annealing temperature. Since driver and tester sequences are nearly identical, the excess of driver DNA fragments will anneal to homologous DNA fragments from the tester species. This blocks PCR amplification and there is no increase in homologous fragments. However, fragments that are different between the two species will not anneal to a complementary counterpart and will be amplified by PCR. As more cycles of RDA are performed, the pool of unique sequence fragment copies will grow faster than fragments found in both species. |
https://en.wikipedia.org/wiki/Poltergeist%20%28computer%20programming%29 | In computer programming, a poltergeist (or gypsy wagon) is a short-lived, typically stateless object used to perform initialization or to invoke methods in another, more permanent class. It is considered an anti-pattern. The original definition is by Michael Akroyd 1996 - Object World West Conference:
"As a gypsy wagon or a poltergeist appears and disappears mysteriously, so does this short lived object. As a consequence the code is more difficult to maintain and there is unnecessary resource waste. The typical cause for this anti-pattern is poor object design."
A poltergeist can often be identified by its name; they are often called "manager_", "controller_", "supervisor", "start_process", etc.
Sometimes, poltergeist classes are created because the programmer anticipated the need for a more complex architecture. For example, a poltergeist arises if the same method acts as both the client and invoker in a command pattern, and the programmer anticipates separating the two phases. However, this more complex architecture may actually never materialize.
Poltergeists should not be confused with long-lived, state-bearing objects of a pattern such as model–view–controller, or tier-separating patterns such as business-delegate.
To remove a poltergeist, delete the class and insert its functionality in the invoked class, possibly by inheritance or as a mixin.
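A small hypothetical Python illustration of the refactoring just described (all class and method names are invented for the example):
# Before: PaymentManager is a poltergeist -- a short-lived, stateless object
# whose only job is to invoke another, more permanent class.
class PaymentProcessor:
    def charge(self, amount):
        print(f"charging {amount:.2f}")

class PaymentManager:
    def start_payment(self, amount):
        PaymentProcessor().charge(amount)   # merely forwards the call

PaymentManager().start_payment(9.99)        # client must conjure the extra object

# After: the poltergeist is deleted and the client invokes the permanent
# class directly; any residual behaviour is folded into PaymentProcessor itself.
PaymentProcessor().charge(9.99)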
See also
Anti-pattern
Factory (object-oriented programming)
YAGNI principle |
https://en.wikipedia.org/wiki/KUNS-TV | KUNS-TV (channel 51) is a television station licensed to Bellevue, Washington, United States, serving the Seattle area as an affiliate of the Spanish-language network Univision. It is owned by Sinclair Broadcast Group alongside dual ABC/CW affiliate KOMO-TV (channel 4). Both stations share studios within KOMO Plaza (formerly Fisher Plaza) in the Lower Queen Anne section of Seattle, while KUNS-TV's transmitter is located in the city's Queen Anne neighborhood.
History
On February 10, 1988, the Federal Communications Commission (FCC) issued a construction permit for television station KBEH. However, channel 51 did not begin broadcasting until August 8, 1999, transmitting programs from the ValueVision network, which became ShopNBC in 2001 after NBC (now part of Comcast) acquired a 37% ownership stake in that network. In December 2000, the station changed its call letters to KWOG. Previously locally owned and operated, and at one point minority-owned, the station was sold to Fisher Communications on September 29, 2006.
On October 31, 2006, the station changed its call letters once more, this time to the current KUNS-TV. On January 1, 2007, it rang in the new year by switching almost overnight from home shopping programs to Spanish-language programming as a Univision affiliate, providing viewers with programs such as Sabado Gigante, Despierta América and El Gordo y La Flaca, along with an assortment of telenovelas and many other programs. The station also started its own local newscast, Noticias Noroeste, with Jaime Méndez and Roxy de la Torre. The newscast originates from a studio at KOMO Plaza (formerly Fisher Plaza) in Seattle.
On August 21, 2012, Fisher Communications signed an affiliation agreement with MundoFox, a Spanish-language competitor to Univision that is owned as a joint venture between Fox International Channels and Colombian broadcaster RCN TV, for KUNS and Portland sister station KUNP to be carried on both |
https://en.wikipedia.org/wiki/SPI-4.2 | SPI-4.2 is a version of the System Packet Interface published by the Optical Internetworking Forum. It was designed to be used in systems that support OC-192 SONET interfaces and is sometimes used in 10 Gigabit Ethernet based systems.
SPI-4 is an interface for packet and cell transfer between a physical layer (PHY) device and a link layer device, for aggregate bandwidths of OC-192 Asynchronous Transfer Mode and Packet over SONET/SDH (POS), as well as 10 Gigabit Ethernet applications.
SPI-4 has two types of transfers: data transfers, when the control signal (TCTL or RCTL) is deasserted, and control transfers, when it is asserted. The transmit and receive data paths include, respectively, (TDCLK, TDAT[15:0], TCTL) and (RDCLK, RDAT[15:0], RCTL). The transmit and receive FIFO status channels include (TSCLK, TSTAT[1:0]) and (RSCLK, RSTAT[1:0]), respectively.
A typical application of SPI-4.2 is to connect a framer device to a network processor. It has been widely adopted by the high speed networking marketplace.
The interface consists of (per direction):
sixteen LVDS pairs for the data path
one LVDS pair for control
one LVDS pair for clock at half of the data rate
two FIFO status lines running at 1/8 of the data rate
one status clock
The clocking is source-synchronous and operates around 700 MHz. Implementations of SPI-4.2 have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets.
PMC-Sierra made the original OIF contribution for SPI-4.2. That contribution was based on the PL-4 specification that was developed by PMC-Sierra in conjunction with the SATURN Development Group.
The physical layer of SPI-4.2 is very similar to the HyperTransport 1.x interface, although the logical layers are very different.
External links
OIF Interoperability Agreements
Network protocols |
https://en.wikipedia.org/wiki/Electron%20multiplier | An electron multiplier is a vacuum-tube structure that multiplies incident charges. In a process called secondary emission, a single electron striking a secondary-emissive material can induce the emission of roughly 1 to 3 electrons. If an electric potential is applied between this metal plate and yet another, the emitted electrons will accelerate to the next metal plate and induce secondary emission of still more electrons. This can be repeated a number of times, resulting in a large shower of electrons that are all collected by a metal anode, all having been triggered by just one.
History
In 1930, Russian physicist Leonid Aleksandrovitch Kubetsky proposed a device that used photocathodes combined with dynodes, or secondary electron emitters, in a single tube, removing the secondary electrons from each stage by increasing the electric potential through the device. An electron multiplier can use any number of dynodes, each with a secondary-emission coefficient σ, and produces a gain of σⁿ, where n is the number of emitters.
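As a quick numerical illustration of that gain relation, the sketch below uses arbitrary example values rather than figures from the article:

```python
# Overall gain of a multiplier with n dynodes: gain = sigma ** n,
# where sigma is the secondary-emission coefficient of each dynode.
def multiplier_gain(sigma: float, n_dynodes: int) -> float:
    return sigma ** n_dynodes

# Example (arbitrary values): sigma = 3 over 12 dynodes gives roughly 5 x 10^5.
print(f"{multiplier_gain(3.0, 12):.2e}")
```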
Discrete dynode
Secondary electron emission begins when one electron hits a dynode inside a vacuum chamber and ejects electrons that cascade onto more dynodes, repeating the process. The dynodes are arranged so that each successive dynode is at a potential about 100 volts higher than the previous one, so an electron gains roughly 100 electronvolts of energy at each stage. Advantages of this arrangement include a response time in the picoseconds, high sensitivity, and an electron gain of about 10⁸ electrons.
Continuous dynode
A continuous dynode system uses a horn-shaped funnel of glass coated with a thin film of semiconducting material. The resistive coating establishes a continuous potential gradient along the channel that sustains secondary emission. A negative high voltage is applied at the wider end, rising to near ground potential at the narrow end. The first device of this kind was called a channel electron multiplier (CEM). CEMs required 2–4 kilovolts in order to achieve a gain of 10⁶ electrons.
Microchannel plate
Anot |
https://en.wikipedia.org/wiki/Faraday%20cup | A Faraday cup is a metal (conductive) cup designed to catch charged particles in vacuum. The resulting current can be measured and used to determine the number of ions or electrons hitting the cup. The Faraday cup was named after Michael Faraday who first theorized ions around 1830.
Examples of devices which use Faraday cups include space probes (Voyager 1, & 2, Parker Solar Probe, etc.) and mass spectrometers.
Principle of operation
When a beam or packet of ions hits the metallic body of the cup, the apparatus gains a small net charge while the ions are neutralized as the charge is transferred to the metal walls. The metal part can then be discharged to measure a small current proportional to the number of impinging ions. The Faraday cup is essentially part of a circuit where ions are the charge carriers in vacuum and it is the interface to the solid metal where electrons act as the charge carriers (as in most circuits). By measuring the electric current (the number of electrons flowing through the circuit per second) in the metal part of the circuit, the number of charges being carried by the ions in the vacuum part of the circuit can be determined. For a continuous beam of ions (each with a single charge), the total number of ions hitting the cup per unit time is
N/t = I/e, where N is the number of ions observed in a time t (in seconds), I is the measured current (in amperes) and e is the elementary charge (about 1.60 × 10⁻¹⁹ C). Thus, a measured current of one nanoamp (10⁻⁹ A) corresponds to about 6 billion ions striking the Faraday cup each second.
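A short check of this relation in code, using the same figures quoted above (illustrative only):

```python
# Ions per second striking the cup, from N / t = I / e.
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def ions_per_second(current_amperes: float, charge_state: int = 1) -> float:
    return current_amperes / (charge_state * ELEMENTARY_CHARGE)

# 1 nA of singly charged ions -> about 6.2e9 ions per second,
# consistent with the "about 6 billion" figure above.
print(f"{ions_per_second(1e-9):.2e}")
```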
Similarly, a Faraday cup can act as a collector for electrons in a vacuum (e.g. from an electron beam). In this case, electrons simply hit the metal plate/cup and a current is produced. Faraday cups are not as sensitive as electron multiplier detectors, but are highly regarded for accuracy because of the direct relation between the measured current and number of ions.
In plasma diagnostics
The Faraday cup utilizes a phys |
https://en.wikipedia.org/wiki/Global%20change | Global change in a broad sense refers to planetary-scale changes in the Earth system. It is most commonly used to encompass the variety of changes connected to the rapid increase in human activities which started around the mid-20th century, i.e., the Great Acceleration. While the concept stems from research on climate change, it is used to adopt a more holistic view of the observed changes. Global change refers to the changes of the Earth system, treated in its entirety with interacting physicochemical and biological components, as well as the impact human societies have on the components and vice versa. Therefore, the changes are studied by means of Earth system science.
History of global-change research
The first global efforts to address the environmental impact of human activities worldwide predate the introduction of the concept of global change. Most notably, the 1972 United Nations Conference on the Human Environment, held in Stockholm, led to the creation of the United Nations Environment Programme. While the efforts were global and effects across the globe were considered, the Earth system approach had not yet been developed at this time. These events, however, started a chain of developments that led to the emergence of the field of global change research.
The concept of global change was coined as researchers investigating climate change began to recognise that not only the climate but also other components of the Earth system were changing at a rapid pace, changes that can be attributed to human activities and that follow dynamics similar to many societal changes. It has its origins in the World Climate Research Programme, or WCRP, an international program under the leadership of Peter Bolin, which at the time of its establishment in 1980 focused on determining whether the climate was changing, whether it could be predicted, and whether humans were causing the change. The first results not only confirmed human impact but led to the realisation of a larger phenomenon of global change. Subsequently Peter Boli |
https://en.wikipedia.org/wiki/Lant | Lant is aged urine. The term comes from an Old English word for urine. Collected urine was put aside to ferment until used for its chemical content in many pre-industrial processes, such as cleaning and production.
History
Because of its ammonium content, lant was most commonly used for floor cleaning and laundry. According to early housekeeping guides, bedpans would be collected by one of the younger male servants and put away to ferment to a mild caustic before use.
In larger cottage industries, lant was used in wool-processing and as a source of saltpeter for gunpowder. In times of urgent need and in districts where these were the chief industries, the whole town was expected to contribute to its supply.
See also
Ammonia
Urine
Home remedies
Nitrogen |
https://en.wikipedia.org/wiki/Vapour%20density | Vapour density is the density of a vapour in relation to that of hydrogen. It may be defined as the mass of a certain volume of a substance divided by the mass of the same volume of hydrogen.
vapour density = mass of n molecules of gas / mass of n molecules of hydrogen gas .
vapour density = molar mass of gas / molar mass of H2
vapour density = molar mass of gas / 2.016
vapour density = ~½ × molar mass
(and thus: molar mass = ~2 × vapour density)
For example, the vapour density of a mixture of NO2 and N2O4 is 38.3. Vapour density is a dimensionless quantity.
Alternative definition
In many web sources, particularly in relation to safety considerations at commercial and industrial facilities in the U.S., vapour density is defined with respect to air, not hydrogen. Air is given a vapour density of one. For this use, air has a molecular weight of 28.97 atomic mass units, and all other gas and vapour molecular weights are divided by this number to derive their vapour density. For example, acetone has a vapour density of about 2 in relation to air, meaning acetone vapour is twice as heavy as air. This can be seen by dividing the molecular weight of acetone, 58.1, by that of air, 28.97, which gives approximately 2.
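Both conventions amount to a simple division by the reference molar mass; the sketch below (an illustration, not a standard library) reproduces the acetone figure under each definition:

```python
# Vapour density under the two conventions described above.
M_H2 = 2.016   # g/mol, molar mass of hydrogen gas
M_AIR = 28.97  # g/mol, average molecular weight of air

def vapour_density_h2(molar_mass: float) -> float:
    """Classical definition: relative to hydrogen."""
    return molar_mass / M_H2

def vapour_density_air(molar_mass: float) -> float:
    """Safety-data convention: relative to air."""
    return molar_mass / M_AIR

acetone = 58.1  # g/mol
print(round(vapour_density_h2(acetone), 1))   # ~28.8 (relative to hydrogen)
print(round(vapour_density_air(acetone), 2))  # ~2.01 (relative to air)
```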
With this definition, the vapour density would indicate whether a gas is denser (greater than one) or less dense (less than one) than air. The density has implications for container storage and personnel safety—if a container can release a dense gas, its vapour could sink and, if flammable, collect until it is at a concentration sufficient for ignition. Even if not flammable, it could collect in the lower floor or level of a confined space and displace air, possibly presenting an asphyxiation hazard to individuals entering the lower part of that space.
See also
Relative density (also known as specific gravity)
Victor Meyer apparatus |
https://en.wikipedia.org/wiki/Model%20rocket%20motor%20classification | Motors for model rockets and high-powered rockets (together, consumer rockets) are classified by total impulse into a set of letter-designated ranges, from ⅛A up to O.
The total impulse is the integral of the thrust over burn time.
I = ∫₀^T F(t) dt = F_avg · T, where T is the burn time in seconds, F(t) is the instantaneous thrust in newtons, F_avg is the average thrust in newtons, and I is the total impulse in newton-seconds.
Class A is from 1.26 newton-seconds (conversion factor 4.448 N per lb. force) to 2.5 N·s, and each class is then double the total impulse of the preceding class, with Class B being 2.51 to 5.00 N·s. The letter M, for example, represents a total impulse of between 5,120.01 and 10,240.00 N·s. Motors E and below are considered low-power rocket motors, motors F and G are considered mid-power, and motors H and above are high-powered rocket motors. Motors that would be classified beyond O are in the realm of amateur rocketry (in this context, the term amateur refers to the rocketeer's independence from an established commercial or government organization). Professional organizations use the nomenclature of average thrust and burning time.
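Because each letter doubles the impulse range of the previous one, the class for a given total impulse can be computed rather than looked up. The following Python sketch (covering A through O, with assumed handling of the boundary cases) follows the ranges described above:

```python
import math

# Letter classes A through O double in total impulse: A runs up to 2.5 N·s,
# B up to 5 N·s, and so on; O tops out at 40,960 N·s.
CLASSES = "ABCDEFGHIJKLMNO"

def impulse_class(total_impulse_ns: float) -> str:
    """Return the letter class for a total impulse in newton-seconds (sketch)."""
    if total_impulse_ns <= 0:
        raise ValueError("total impulse must be positive")
    if total_impulse_ns <= 1.25:
        return "fractional A (1/8A, 1/4A or 1/2A)"
    index = math.ceil(math.log2(total_impulse_ns / 1.25)) - 1
    if index >= len(CLASSES):
        return "beyond O (amateur rocketry)"
    return CLASSES[index]

print(impulse_class(8.3))     # C
print(impulse_class(6000.0))  # M
```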
Rocket motor codes
The designation for a specific motor looks like C6-3. In this example, the letter (C) represents the total impulse range of the motor, the number (6) before the dash represents the average thrust in newtons, and the number (3) after the dash represents the delay in seconds from propelling charge burnout to the firing of the ejection charge (a gas generator composition, usually black powder, designed to deploy the recovery system). A C6-3 motor would have between 5.01 and 10 N·s of impulse, produce 6 N average thrust, and fire an ejection charge 3 seconds after burnout.
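A small parser sketch for such designations; the regular expression and the derived impulse range are assumptions based on the description above, and fractional-A codes are not handled:

```python
import re

def parse_motor_code(code: str) -> dict:
    """Split a designation such as 'C6-3' into its parts (illustrative sketch)."""
    match = re.fullmatch(r"([A-O])(\d+(?:\.\d+)?)-(\d+)", code.strip().upper())
    if not match:
        raise ValueError(f"unrecognised motor code: {code!r}")
    letter, avg_thrust, delay = match.groups()
    upper = 1.25 * 2 ** (ord(letter) - ord("A") + 1)  # top of the class's impulse range, N·s
    return {
        "impulse_class": letter,
        "impulse_range_ns": (upper / 2, upper),  # e.g. C: just over 5 up to 10 N·s
        "average_thrust_n": float(avg_thrust),
        "ejection_delay_s": int(delay),
    }

print(parse_motor_code("C6-3"))
# {'impulse_class': 'C', 'impulse_range_ns': (5.0, 10.0),
#  'average_thrust_n': 6.0, 'ejection_delay_s': 3}
```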
An attempt was made by motor manufacturers in 1982 to further clarify the motor code by writing the total impulse in newton-seconds before the code. This allowed the burn duration to be computed from the provided numbers. Additionally, the motor code was |
https://en.wikipedia.org/wiki/Chaitin%27s%20algorithm | Chaitin's algorithm is a bottom-up, graph coloring register allocation algorithm that uses cost/degree as its spill metric. It is named after its designer, Gregory Chaitin. Chaitin's algorithm was the first register allocation algorithm that made use of coloring of the interference graph for both register allocations and spilling.
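The core simplify-and-spill loop of such an allocator can be sketched as follows; this is an illustrative simplification in Python (hypothetical data structures, no coalescing or graph rebuilding after spills), not Chaitin's published implementation:

```python
# Illustrative simplify/spill sketch in the spirit of Chaitin-style allocation:
# repeatedly remove nodes of degree < k from the interference graph; when none
# exists, choose a spill candidate by the cost/degree metric.
def allocate(interference: dict, spill_cost: dict, k: int):
    graph = {v: set(ns) for v, ns in interference.items()}
    stack, spilled = [], []
    while graph:
        node = next((v for v in graph if len(graph[v]) < k), None)
        if node is None:
            # No trivially colorable node: spill the node with the lowest cost/degree.
            node = min(graph, key=lambda v: spill_cost[v] / max(len(graph[v]), 1))
            spilled.append(node)
        else:
            stack.append(node)
        for neighbour in graph.pop(node):
            graph[neighbour].discard(node)
    # Rebuild phase: pop in reverse order and give each node the lowest free color.
    colors = {}
    for v in reversed(stack):
        used = {colors[n] for n in interference[v] if n in colors}
        colors[v] = next(c for c in range(k) if c not in used)
    return colors, spilled

# Tiny example: a triangle of three mutually interfering values with k = 2 registers.
g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(allocate(g, {"a": 1.0, "b": 2.0, "c": 3.0}, k=2))
# -> ({'c': 0, 'b': 1}, ['a'])   one value spilled, the rest colored
```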
Chaitin's algorithm was presented at the 1982 SIGPLAN Symposium on Compiler Construction and published in the symposium proceedings. It was an extension of an earlier 1981 paper on the use of graph coloring for register allocation. Chaitin's algorithm formed the basis of a large section of research into register allocators. |