| source | text |
|---|---|
https://en.wikipedia.org/wiki/Integration%20by%20substitution | In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards."
Substitution for a single variable
Introduction (indefinite integrals)
Before stating the result rigorously, consider a simple case using indefinite integrals.
Compute $\int (2x^3+1)^7\, x^2 \, dx.$
Set $u = 2x^3 + 1.$ This means $\frac{du}{dx} = 6x^2,$ or in differential form, $du = 6x^2\,dx.$ Now:
$$\int (2x^3+1)^7\, x^2 \, dx = \frac{1}{6}\int u^7 \, du = \frac{1}{48}u^8 + C = \frac{1}{48}(2x^3+1)^8 + C,$$
where $C$ is an arbitrary constant of integration.
This procedure is frequently used, but not all integrals are of a form that permits its use. In any event, the result should be verified by differentiating and comparing to the original integrand.
For definite integrals, the limits of integration must also be adjusted, but the procedure is mostly the same.
Statement for definite integrals
Let $\varphi : [a,b] \to I$ be a differentiable function with a continuous derivative, where $I \subseteq \mathbb{R}$ is an interval. Suppose that $f : I \to \mathbb{R}$ is a continuous function. Then:
$$\int_a^b f(\varphi(x))\,\varphi'(x)\,dx = \int_{\varphi(a)}^{\varphi(b)} f(u)\,du.$$
In Leibniz notation, the substitution $u = \varphi(x)$ yields:
$$\frac{du}{dx} = \varphi'(x).$$
Working heuristically with infinitesimals yields the equation
$$du = \varphi'(x)\,dx,$$
which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms.) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives.
The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be read from left to right or from right to left in order to simplify a given integral. When used in the former manner, it is sometimes known as u-substitution or w-substitution in which a new variable is defined to be a function of the original variable found inside the composite function multiplied by the derivative of the inner function. The latter manner is commonly used in trigonometric substitution, replacing t |
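As a concrete instance of the definite-integral statement above, here is a standard textbook illustration (not drawn from this extract) with $\varphi(x) = x^2 + 1$ and $f(u) = \cos u$:

```latex
% With u = x^2 + 1, du = 2x dx, the limits x = 0..2 map to u = 1..5:
\[
  \int_0^2 x\cos\!\left(x^2+1\right)dx
  = \tfrac{1}{2}\int_1^5 \cos u \, du
  = \tfrac{1}{2}\left(\sin 5 - \sin 1\right).
\]
```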
https://en.wikipedia.org/wiki/Zygomatic%20branches%20of%20the%20facial%20nerve | The zygomatic branches of the facial nerve (malar branches) are nerves of the face. They run across the zygomatic bone to the lateral angle of the orbit. Here, they supply the orbicularis oculi muscle, and join with filaments from the lacrimal nerve and the zygomaticofacial branch of the maxillary nerve (CN V2).
Structure
The zygomatic branches of the facial nerve are branches of the facial nerve (CN VII). They run across the zygomatic bone to the lateral angle of the orbit, deep to the zygomaticus major muscle. They send fibres to the orbicularis oculi muscle.
Connections
The zygomatic branches of the facial nerve have many nerve connections. Along their course, there may be connections with the buccal branches of the facial nerve. They join with filaments from the lacrimal nerve and the zygomaticofacial nerve from the maxillary nerve (CN V2). They also join with the inferior palpebral nerve and the superior labial nerve, both from the infraorbital nerve.
Function
The zygomatic branches of the facial nerve supply part of the orbicularis oculi muscle. This is used to close the eyelid.
Clinical significance
Testing
To test the zygomatic branches of the facial nerve, a patient is asked to close their eyes tightly. This uses orbicularis oculi muscle. The zygomatic branches of the facial nerve may be recorded and stimulated with an electrode.
Surgical damage
Rarely, the zygomatic branches of the facial nerve may be damaged during surgery on the temporomandibular joint (TMJ).
Additional images
See also
Zygomatic nerve
Zygomaticus major muscle
Zygomaticus minor muscle |
https://en.wikipedia.org/wiki/WAN%20optimization | WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated to be $1 billion, and was to grow to $4.4 billion by 2014 according to Gartner, a technology research firm. In 2015 Gartner estimated the WAN optimization market to be a $1.1 billion market.
The most common measures of TCP data-transfer efficiencies (i.e., optimization) are throughput, bandwidth requirements, latency, protocol optimization, and congestion, as manifested in dropped packets. In addition, the WAN itself can be classified with regard to the distance between endpoints and the amounts of data transferred. Two common business WAN topologies are Branch to Headquarters and Data Center to Data Center (DC2DC). In general, "Branch" WAN links are closer, use less bandwidth, support more simultaneous connections, support smaller and more short-lived connections, and handle a greater variety of protocols. They are used for business applications such as email, content management systems, database applications, and Web delivery. In comparison, "DC2DC" WAN links tend to require more bandwidth, are more distant, and involve fewer connections, but those connections are bigger (100 Mbit/s to 1 Gbit/s flows) and of longer duration. Traffic on a "DC2DC" WAN may include replication, back up, data migration, virtualization, and other Business Continuity/Disaster Recovery (BC/DR) flows.
WAN optimization has been the subject of extensive academic research almost since the advent of the WAN. In the early 2000s, research in both the private and public sectors turned to improving the end-to-end throughput of TCP, and the target of the first proprietary WAN optimization solutions was the Branch WAN. In recent years, however, the rapid growth of digital data, and the concomitant needs to store and protect it, has presented a need for DC2DC WAN optimization. For example, such optimizations can be performed t |
https://en.wikipedia.org/wiki/Chiemgau%20impact%20hypothesis | The Chiemgau impact hypothesis is an obsolete scientific theory that claimed the Tüttensee lake in southern Bavaria, Germany, to be the result of a Holocene meteorite impact. This claim has been refuted by geological research and the finding of a soil horizon of undisturbed peat and sedimentation since the end of the last glaciation period. The lake is in fact one of many kettles in the foothills of the Bavarian Alps.
The claims of an impact crater had been raised by a team of hobby-archaeologists, calling themselves the CIRT (Chiemgau impact research team), and have resulted in some media reports in Germany and discussions in the local tourism industry, but are not accepted beyond the CIRT team today. |
https://en.wikipedia.org/wiki/Geomag | Geomag, stylized as GEOMAG, is a magnetic construction toy consisting of a collection of bars, each set with a neodymium alloy magnet at both ends, connected by a magnetic plug coated with polypropylene, and nickel-coated metal spheres. These elements interlock using magnetism, allowing for them to be assembled in various ways.
Geomag was created in May 1998 by Claudio Vicentelli. Geomag products are manufactured by Geomagworld SA, based in Novazzano, Switzerland. To align with the 2009/48/EC law's nickel content regulations for toys, the design was adjusted to feature spheres coated with a bronze alloy.
Invention and Patent
In May 1998, Claudio Vicentelli, a specialist in practical applications of permanent magnets, patented the concept of Geomag.
The patented design outlines the setup of the Geomag bars, which incorporate a metal pin that connects two magnets positioned at opposite ends, along with metallic spheres. The primary objective of this configuration was to minimize the use of magnetic material, thereby reducing production expenses.
Global Expansion of Geomag
Geomag made its debut in the Italian toy chain store Città del Sole in 1999. In 2000, Geomag appeared in several toy fairs in Milan, Nuremberg, and New York. Differences in vision for the development of the toy led to Vicentelli and toy company Plastwood ending their partnership in the same year.
In January of 2003, the Swiss company Geomag SA was created in Ticino, with a license to produce Geomag. This company introduced more elements to the toy, such as panels made of semi-transparent colored polycarbonate (triangular platforms, rhombi, squares, and pentagons), used for decorative and structural support purposes. Vicentelli patented these new elements in Europe in 2004.
In 2004, the G-Baby line, aimed at younger children, was introduced, featuring magnetic cubes and half-spheres with magnetic faces.
International regulators such as ASTM USA and the European Commission instituted a rul |
https://en.wikipedia.org/wiki/Turnsole | Turnsole, katasol, or folium was a dyestuff prepared from the annual plant Chrozophora tinctoria.
History
Turnsole became a mainstay of medieval manuscript illuminators starting with the development of the technique for extracting it in the thirteenth century, when it joined the vegetable-based woad and indigo in the illuminator's repertory.
Its use was mostly as a substitute for the more expensive Tyrian purple, the famous dye obtained from Murex molluscs. However, the queen of blue colorants was always the expensive lapis lazuli or its substitute azurite, ground to the finest powders. Turnsole was downgraded to a shading glaze and fell out of use in the illuminator's palette by the turn of the seventeenth century, with the easier availability of less fugitive mineral-derived blue pigments.
According to its method of preparation, turnsole produced a range of translucent colors from blue, through purple to red, depending on its reaction to the acidity or alkalinity of its environment, in a chemical reaction, not understood in the Middle Ages, that is most familiar in the litmus test.
Folium ("leaf"), was actually derived from the three-lobed fruit (illustration), not the leaves, and medieval recipes are explicit that the fruits must not be broken, or the seeds released, during production of the pigment. The fruits were collected in autumn (August, September).
In the early fifteenth century, Cennino Cennini, in his Libro dell' Arte, gives a recipe "XVIII: How you should tint paper turnsole color" and "LXXVI To paint a purple or turnsole drapery in fresco" (though neither of these recipes uses or describes turnsole). Textiles soaked in the dye vat would be left in a close damp cellar in an atmosphere produced by pans of urine. It was not realized that the decomposition of urea in the urine was producing ammonia, but the technique reminds us how foul-smelling the dyer's art was.
It was sold impregnated into small pieces of linen and then extracted for use. The colour |
https://en.wikipedia.org/wiki/P.%20Jackson%20Darlington%20Jr. | Philip Jackson Darlington Jr. (November 14, 1904, Philadelphia – 16 December 1983, Cambridge, Massachusetts) was an American entomologist, field naturalist, biogeographer, museum curator, and zoology professor. He was known for his collecting ability and his toughness and determination on field expeditions.
Biography
Darlington graduated in 1922 from secondary school at Phillips Exeter Academy and then attended Harvard University, where he graduated with a bachelor's degree in 1926 and an M.S. in 1927. In the 1920s he went on several field expeditions to the West Indies. From 1928 to 1929 he worked as an entomologist for the United Fruit Company near Santa Marta, Colombia. He returned to graduate study at Harvard University with an extensive collection of insects and vertebrates, including a diversity of bird skins, which formed the basis for a 1931 article. He received his Ph.D. from Harvard University in 1931 with a thesis on the Carabidae (ground beetles) of New Hampshire. From 1931 to 1932 he was a member of the Harvard Australian Expedition (1931–1932) led by William Morton Wheeler (his thesis advisor) and returned with a collection of a huge number of insects and 341 mammals.
Darlington was a key member of the six-man Harvard Australian Expedition (1931–1932) sent on behalf of the Harvard Museum of Comparative Zoology (MCZ) for the dual purpose of procuring specimens - the museum being "weak in Australian animals and ... desir[ing] to complete its series" - and to engage in "the study of the animals of the region when alive." The mission was a success, with over 300 mammal and thousands of insect specimens returning to the United States. His companion William E. Schevill reported that "Dr. Darlington's resourceful skill and industry had brought together, from New South Wales and Queensland, not only a large collection of insects, but also over three hundred fifty mammals, representing over sixty species, as well as about fifty species of birds; in addition, he had a |
https://en.wikipedia.org/wiki/Goddess%20of%20Democracy | The Goddess of Democracy, also known as the Goddess of Democracy and Freedom, the Spirit of Democracy, and the Goddess of Liberty (zìyóu nǚshén), was a statue created during the 1989 Tiananmen Square protests. The statue was constructed over four days out of foam and papier-mâché over a metal armature and was unveiled and erected on Tiananmen Square on May 30, 1989. The constructors decided to make the statue as large as possible to try to dissuade the government from dismantling it: the government would either have to destroy the statue—an action which would potentially fuel further criticism of its policies—or leave it standing. Nevertheless, the statue was destroyed on June 4, 1989, by soldiers clearing the protesters from Tiananmen Square. Since its destruction, numerous replicas and memorials have been erected around the world, including in Hong Kong, San Francisco, Washington, D.C., and Vancouver.
Construction
Near the end of May 1989 the Democracy Movement in Tiananmen Square, though still attracting huge throngs of participants, was losing momentum in the face of government inaction on reforms. One historian said the movement "appeared to have sunk to its nadir. The number of students in the Square continued to decline. Those remaining seemed to have no clear leadership: Chai Ling, tired and disheartened by the difficulties of keeping the Movement together, had resigned from her post...[the Square] had degenerated into a shantytown, strewn with litter and permeated by the stench of garbage and overflowing portable toilets... Tiananmen, once a magnet pulling in huge throngs, had become only an unkempt campground of little interest to citizens, many of whom considered the struggle for democracy lost." It was at this point that the unveiling of the statue of the Goddess of Democracy reinvigorated the movement in the Square.
The statue was built by students of the Central Academy of Fine Arts beginning on May 27 at their University. It was built in hopes o |
https://en.wikipedia.org/wiki/MSH2 | DNA mismatch repair protein Msh2 also known as MutS homolog 2 or MSH2 is a protein that in humans is encoded by the MSH2 gene, which is located on chromosome 2. MSH2 is a tumor suppressor gene and more specifically a caretaker gene that codes for a DNA mismatch repair (MMR) protein, MSH2, which forms a heterodimer with MSH6 to make the human MutSα mismatch repair complex. It also dimerizes with MSH3 to form the MutSβ DNA repair complex. MSH2 is involved in many different forms of DNA repair, including transcription-coupled repair, homologous recombination, and base excision repair.
Mutations in the MSH2 gene are associated with microsatellite instability and some cancers, especially with hereditary nonpolyposis colorectal cancer (HNPCC). At least 114 disease-causing mutations in this gene have been discovered.
Clinical significance
Hereditary nonpolyposis colorectal cancer (HNPCC), sometimes referred to as Lynch syndrome, is inherited in an autosomal dominant fashion, where inheritance of only one copy of a mutated mismatch repair gene is enough to cause the disease phenotype. Mutations in the MSH2 gene account for 40% of genetic alterations associated with this disease and are, together with MLH1 mutations, its leading cause. Mutations associated with HNPCC are broadly distributed in all domains of MSH2, and hypothetical functions of these mutations based on the crystal structure of MutSα include protein–protein interactions, stability, allosteric regulation, the MSH2-MSH6 interface, and DNA binding. Mutations in MSH2 and other mismatch repair genes cause DNA damage to go unrepaired, resulting in an increase in mutation frequency. These mutations, which otherwise would not have occurred had the DNA been repaired properly, build up over a person's life.
Microsatellite instability
The viability of MMR genes including MSH2 can be tracked via microsatellite instability, a biomarker test that analyzes short sequence repeats which are very difficult for cells to replicate w |
https://en.wikipedia.org/wiki/Stochastic%20empirical%20loading%20and%20dilution%20model | The stochastic empirical loading and dilution model (SELDM) is a stormwater quality model. SELDM is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. Although SELDM is, nominally, a highway runoff model, it can be used to estimate flows, concentrations, and loads of runoff-quality constituents from other land-use areas as well. SELDM was developed by the U.S. Geological Survey, so the model, source code, and all related documentation are provided free of any copyright restrictions according to U.S. copyright laws and the USGS Software User Rights Notice. SELDM is widely used to assess the potential effect of runoff from highways, bridges, and developed areas on receiving-water quality with and without the use of mitigation measures. Stormwater practitioners evaluating highway runoff commonly use data from the Highway Runoff Database (HRDB) with SELDM to assess the risks for adverse effects of runoff on receiving waters.
SELDM is a stochastic mass-balance model. A mass-balance approach (figure 1) is commonly applied to estimate the concentrations and loads of water-quality constituents in receiving waters downstream of an urban or highway-runoff outfall. In a mass-balance model, the loads from the upstream basin and runoff source area are added to calculate the discharge, co |
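A toy sketch of the mass-balance step described above (this is not SELDM itself; the function, numbers, and units below are hypothetical illustrations): the downstream load is the sum of the upstream and runoff loads, and the downstream concentration follows from dividing by the combined flow.

```python
def mass_balance(upstream_flow, upstream_conc, runoff_flow, runoff_conc):
    """Combine an upstream flow with a runoff discharge (flow in L/s, concentration in mg/L)."""
    upstream_load = upstream_flow * upstream_conc   # mg/s
    runoff_load = runoff_flow * runoff_conc         # mg/s
    total_flow = upstream_flow + runoff_flow
    downstream_conc = (upstream_load + runoff_load) / total_flow
    return total_flow, downstream_conc

# Hypothetical storm event: a small, concentrated runoff discharge into a larger, cleaner stream
flow, conc = mass_balance(upstream_flow=500.0, upstream_conc=0.05,
                          runoff_flow=50.0, runoff_conc=0.80)
print(f"downstream flow = {flow:.0f} L/s, concentration = {conc:.3f} mg/L")
```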
https://en.wikipedia.org/wiki/Bony%20labyrinth | The bony labyrinth (also osseous labyrinth or otic capsule) is the rigid, bony outer wall of the inner ear in the temporal bone. It consists of three parts: the vestibule, semicircular canals, and cochlea. These are cavities hollowed out of the substance of the bone, and lined by periosteum. They contain a clear fluid, the perilymph, in which the membranous labyrinth is situated.
A fracture classification system in which temporal bone fractures detected by computed tomography are delineated based on disruption of the otic capsule has been found to be predictive for complications of temporal bone trauma such as facial nerve injury, sensorineural deafness and cerebrospinal fluid otorrhea. On radiographic images, the otic capsule is the densest portion of the temporal bone.
In otospongiosis, a leading cause of adult-onset hearing loss, the otic capsule is exclusively affected. This area normally undergoes no remodeling in adult life and is extremely dense. With otospongiosis, the normally dense enchondral bone is replaced by haversian bone, a spongy and vascular matrix that results in sensorineural hearing loss due to compromise of the conductive capacity of the inner ear ossicles. This results in hypodensity on CT, with the portion first affected usually being the fissula ante fenestram.
The bony labyrinth is studied in paleoanthropology as it is a good indicator for distinguishing Neanderthals and Modern humans. |
https://en.wikipedia.org/wiki/Mark%20Tomforde | Mark Tomforde is an American mathematician and Professor of Mathematics at the University of Colorado Colorado Springs. He works in the areas of functional analysis and algebra, and he earned his Ph.D. in mathematics at Dartmouth College in 2002. Tomforde's research interests are in operator algebras and C*-algebras, and he has made contributions to the study of graph C*-algebras and Leavitt path algebras. He was an invited speaker at the 2015 Abel Symposium, and he is a founding member of the Algebras and Rings in Colorado Springs (ARCS) center. He has also received several awards for his teaching and outreach efforts.
Contributions to Graph C*-algebras and Leavitt Path Algebras
Tomforde has made fundamental contributions to the related areas of graph C*-algebras and Leavitt path algebras. With Doug Drinen, he is co-creator of the Drinen-Tomforde desingularization, often simply called desingularization. Desingularization allows one to extend many results for row-finite graphs to countable graphs, and it has become a standard tool in the study of graph C*-algebras and Leavitt path algebras.
With Gene Abrams, Tomforde has formulated the Abrams-Tomforde conjectures, which state that the Leavitt path algebras of two graphs are Morita equivalent as rings (respectively, isomorphic as rings) if and only if the C*-algebras of the two graphs are Morita equivalent as C*-algebras (respectively, isomorphic as C*-algebras). Abrams and Tomforde have verified certain special cases of the conjectures, and in 2020 Søren Eilers, Gunnar Restorff, Efren Ruiz, and Adam P.W. Sørensen completed a classification of unital graph C*-algebras that allowed them to verify the Abrams-Tomforde conjectures for graphs with a finite number of vertices.
Tomforde is also the creator of ultragraph C*-algebras, which are C*-algebras constructed from a generalization of a directed graph, known as an ultragraph. The ultragraph C*-algebras simultaneously generalize the classes of graph C*-algebras |
https://en.wikipedia.org/wiki/Damk%C3%B6hler%20numbers | The Damköhler numbers (Da) are dimensionless numbers used in chemical engineering to relate the chemical reaction timescale (reaction rate) to the transport phenomena rate occurring in a system. It is named after German chemist Gerhard Damköhler.
The Karlovitz number (Ka) is related to the Damköhler number by Da = 1/Ka.
In its most commonly used form, the Damköhler number relates the reaction timescale to the convection time scale (the volumetric flow rate through the reactor) for continuous (plug flow or stirred tank) or semibatch chemical processes:
$$\mathrm{Da} = \frac{\text{reaction rate}}{\text{convective mass transport rate}}.$$
In reacting systems that include interphase mass transport, the second Damköhler number (DaII) is defined as the ratio of the chemical reaction rate to the mass transfer rate:
$$\mathrm{Da}_{\mathrm{II}} = \frac{\text{chemical reaction rate}}{\text{mass transfer rate}}.$$
It is also defined as the ratio of the characteristic fluidic and chemical time scales:
Since the reaction rate determines the reaction timescale, the exact formula for the Damköhler number varies according to the rate law equation. For a general chemical reaction A → B following power-law kinetics of n-th order, the Damköhler number for a convective flow system is defined as:
$$\mathrm{Da} = k\,C_0^{\,n-1}\,\tau$$
where:
k = kinetics reaction rate constant
C0 = initial concentration
n = reaction order
τ = mean residence time or space time
On the other hand, the second Damköhler number is defined as:
$$\mathrm{Da}_{\mathrm{II}} = \frac{k\,C_0^{\,n-1}}{k_g\,a}$$
where
kg is the global mass transport coefficient
a is the interfacial area
The value of Da provides a quick estimate of the degree of conversion that can be achieved. As a rule of thumb, when Da is less than 0.1 a conversion of less than 10% is achieved, and when Da is greater than 10 a conversion of more than 90% is expected. The limit of an infinitely fast reaction (Da → ∞) is called the Burke–Schumann limit.
Derivation for decomposition of a single species
From the general mole balance on some species A, where for a CSTR steady state and perfect mixing are assumed,
$$F_{A0} - F_A + r_A V = 0.$$
Assuming a constant volumetric flow rate , which is the case for a liquid reactor or a gas phase reaction with no net generation |
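A rough numerical illustration of the n-th order definition above; the rate constant, concentration, and residence time are made-up values, not taken from the original text.

```python
def damkohler(k: float, c0: float, n: float, tau: float) -> float:
    """First Damkohler number for an n-th order reaction: Da = k * C0**(n-1) * tau."""
    return k * c0 ** (n - 1) * tau

# Hypothetical second-order example: k = 0.5 L/(mol*s), C0 = 2 mol/L, tau = 30 s
da = damkohler(k=0.5, c0=2.0, n=2, tau=30.0)
print(f"Da = {da:.1f}")  # 30.0

# Rule of thumb from the text: Da < 0.1 -> conversion under ~10%; Da > 10 -> conversion over ~90%
if da > 10:
    print("expect conversion above ~90%")
elif da < 0.1:
    print("expect conversion below ~10%")
```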
https://en.wikipedia.org/wiki/Ombrophobe | Ombrophobe or ombrophobous/ombrophobic plant (from Greek ὄμβρος - ombros, "storm of rain" and φόβος - phobos, "fear") is a plant that cannot withstand much rain. The term was introduced by the 19th-century botanist Julius Wiesner, who identified the two extreme kinds of plants, ombrophobes and ombrophiles. Xerophytes are usually ombrophobous. |
https://en.wikipedia.org/wiki/Absorption%20%28acoustics%29 | In acoustics, absorption refers to the process by which a material, structure, or object takes in sound energy when sound waves are encountered, as opposed to reflecting the energy. Part of the absorbed energy is transformed into heat and part is transmitted through the absorbing body. The energy transformed into heat is said to have been 'lost'.
When sound from a loudspeaker collides with the walls of a room, part of the sound's energy is reflected back into the room, part is transmitted through the walls, and part is absorbed into the walls. Just as the acoustic energy was transmitted through the air as pressure differentials (or deformations), the acoustic energy travels through the material which makes up the wall in the same manner. Deformation causes mechanical losses via conversion of part of the sound energy into heat, resulting in acoustic attenuation, mostly due to the wall's viscosity. Similar attenuation mechanisms apply for the air and any other medium through which sound travels.
The fraction of sound absorbed is governed by the acoustic impedances of both media and is a function of frequency and the incident angle. Size and shape can influence the sound wave's behavior if they interact with its wavelength, giving rise to wave phenomena such as standing waves and diffraction.
Acoustic absorption is of particular interest in soundproofing. Soundproofing aims to absorb as much sound energy (often in particular frequencies) as possible converting it into heat or transmitting it away from a certain location.
In general, soft, pliable, or porous materials (like cloths) serve as good acoustic insulators, absorbing most sound, whereas dense, hard, impenetrable materials (such as metals) reflect most.
How well a room absorbs sound is quantified by the effective absorption area of the walls, also named total absorption area. This is calculated using its dimensions and the absorption coefficients of the walls. The total absorption is expressed in Sabins a |
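A minimal sketch of the total-absorption calculation described above. The room dimensions and absorption coefficients are invented for illustration, and the Sabine reverberation-time estimate is included only as a common companion calculation, not something stated in the extract.

```python
# Surfaces of a hypothetical 5 m x 4 m x 3 m room: (area in m^2, absorption coefficient)
surfaces = [
    (5 * 4, 0.02),       # concrete floor
    (5 * 4, 0.60),       # acoustic-tile ceiling
    (2 * 5 * 3, 0.10),   # two long plastered walls
    (2 * 4 * 3, 0.10),   # two short plastered walls
]

# Total absorption area (metric sabins): sum of surface area times absorption coefficient
total_absorption = sum(area * alpha for area, alpha in surfaces)

volume = 5 * 4 * 3  # m^3
rt60 = 0.161 * volume / total_absorption  # Sabine's reverberation-time estimate, in seconds

print(f"A = {total_absorption:.1f} m^2 sabins, RT60 = {rt60:.2f} s")
```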
https://en.wikipedia.org/wiki/Flags%20of%20the%20regions%20of%20Ukraine | The flags of the subdivisions of Ukraine exhibit a wide variety of regional influences and local histories, reflecting different styles and design principles. Most local flags were designed and adopted after Ukrainian independence in 1991.
Flags of Ukrainian oblasts
Flag of autonomous republic
Flags of cities with special status
Historical
Flags of Ruthenian lands in the battle of Tannenberg, 1410
Cossacks' flags captured by Radzivil in 1651
Flags of Black Sea Cossack Host
See also
Coats of arms of the regions of Ukraine
List of flags of the raions of Ukraine
Flags of populated places of Ukraine
Flag of Ukraine
List of Ukrainian flags |
https://en.wikipedia.org/wiki/Matrigel | Matrigel is the trade name for the solubilized basement membrane matrix secreted by Engelbreth-Holm-Swarm (EHS) mouse sarcoma cells produced by Corning Life Sciences. Matrigel resembles the laminin/collagen IV-rich basement membrane extracellular environment found in many tissues and is used by cell biologists as a substrate (basement membrane matrix) for culturing cells.
Cell culture
A common laboratory procedure is to dispense small volumes of chilled (4 °C) liquid Matrigel onto plastic tissue culture labware. When incubated at 37 °C (body temperature), the Matrigel proteins polymerize (solidify), producing a reconstituted basement membrane that covers the labware's surface.
Cells cultured on Matrigel demonstrate complex cellular behavior that is otherwise difficult to observe under laboratory conditions. For example, endothelial cells create intricate spiderweb-like networks on Matrigel-coated surfaces but not on plastic surfaces. Such networks are highly suggestive of the microvascular capillary systems that suffuse living tissues with blood. Hence, Matrigel allows researchers to observe the process by which endothelial cells construct such networks, which is of great research interest.
Metastasis model
In some instances researchers may prefer to use greater volumes of Matrigel to produce thick three-dimensional gels. Thick gels induce cells to migrate from the gel's surface to its interior. This migratory behavior is studied by researchers as a model of tumor cell metastasis.
Cancer drug screening
Pharmaceutical scientists use Matrigel to screen drug molecules. A typical experiment consists of adding a test molecule to Matrigel and observing cellular behavior. Test molecules that promote endothelial cell network formation are candidates for tissue regeneration therapies whereas test molecules that inhibit endothelial cell network formation are candidates for anti-cancer therapies. Likewise, test molecules that inhibit tumor cell migration may also have potential as ant |
https://en.wikipedia.org/wiki/Fiber | Fiber or fibre (British English; from Latin fibra) is a natural or artificial substance that is significantly longer than it is wide. Fibers are often used in the manufacture of other materials. The strongest engineering materials often incorporate fibers, for example carbon fiber and ultra-high-molecular-weight polyethylene.
Synthetic fibers can often be produced very cheaply and in large amounts compared to natural fibers, but for clothing natural fibers can give some benefits, such as comfort, over their synthetic counterparts.
Natural fibers
Natural fibers develop or occur in the fiber shape, and include those produced by plants, animals, and geological processes. They can be classified according to their origin:
Vegetable fibers are generally based on arrangements of cellulose, often with lignin: examples include cotton, hemp, jute, flax, abaca, piña, ramie, sisal, bagasse, and banana. Plant fibers are employed in the manufacture of paper and textile (cloth), and dietary fiber is an important component of human nutrition.
Wood fiber, distinguished from vegetable fiber, is from tree sources. Forms include groundwood, lacebark, thermomechanical pulp (TMP), and bleached or unbleached kraft or sulfite pulps. Kraft and sulfite refer to the type of pulping process used to remove the lignin bonding the original wood structure, thus freeing the fibers for use in paper and engineered wood products such as fiberboard.
Animal fibers consist largely of particular proteins. Instances are silkworm silk, spider silk, sinew, catgut, wool, sea silk and hair such as cashmere wool, mohair and angora, fur such as sheepskin, rabbit, mink, fox, beaver, etc.
Mineral fibers include the asbestos group. Asbestos is the only naturally occurring long mineral fiber. Six minerals have been classified as "asbestos" including chrysotile of the serpentine class and those belonging to the amphibole class: amosite, crocidolite, tremolite, anthophyllite and actinolite. Short, fiber-like minerals include |
https://en.wikipedia.org/wiki/Wilhelm%20Ackermann | Wilhelm Friedrich Ackermann (29 March 1896 – 24 December 1962) was a German mathematician and logician best known for his work in mathematical logic and the Ackermann function, an important example in the theory of computation.
Biography
Ackermann was born in Herscheid, Germany, and was awarded a Ph.D. by the University of Göttingen in 1925 for his thesis Begründung des "tertium non datur" mittels der Hilbertschen Theorie der Widerspruchsfreiheit, which was a consistency proof of arithmetic apparently without Peano induction (although it did use e.g. induction over the length of proofs). This was one of two major works in proof theory in the 1920s and the only one following Hilbert's school of thought. From 1929 until 1948, he taught at the Arnoldinum Gymnasium in Burgsteinfurt, and then at Lüdenscheid until 1961. He was also a corresponding member of the Akademie der Wissenschaften (Academy of Sciences) in Göttingen, and was an honorary professor at the University of Münster.
In 1928, Ackermann helped David Hilbert turn his 1917–22 lectures on introductory mathematical logic into a text, Principles of Mathematical Logic. This text contained the first exposition ever of first-order logic, and posed the problem of its completeness and decidability (Entscheidungsproblem). Ackermann went on to construct consistency proofs for set theory (1937), full arithmetic (1940), type-free logic (1952), and a new axiomatization of set theory (1956).
Later in life, Ackermann continued working as a high school teacher. Still, he remained continually engaged in research and published many contributions to the foundations of mathematics until the end of his life. He died in Lüdenscheid, West Germany, in December 1962.
See also
Ackermann's bijection
Ackermann coding
Ackermann function
Ackermann ordinal
Ackermann set theory
Hilbert–Ackermann system
Entscheidungsproblem
Ordinal notation
Inverse Ackermann function
Bibliography
1928. "On Hilbert's construction of the r |
https://en.wikipedia.org/wiki/TRENDnet | TRENDnet is a global manufacturer of computer networking products headquartered in Torrance, California, in the United States. It sells networking and surveillance products especially in the small to medium business (SMB) and home user market segments.
History
The company was founded in 1990 by Pei Huang and Peggy Huang.
Vulnerabilities
In September 2013, the Federal Trade Commission (FTC) brought an enforcement action against TRENDnet alleging that the company marketed its SecurView IP cameras describing them as "secure", when in fact the software allowed online viewing by anyone with the camera's IP address.
The FTC approved a final settlement with TRENDnet in February 2014. In January 2018, TRENDnet launched 4K UHD PoE surveillance cameras with covert IR LEDs. |
https://en.wikipedia.org/wiki/Ionic%20potential | Ionic potential is the ratio of the electrical charge (z) to the radius (r) of an ion.
As such, this ratio is a measure of the charge density at the surface of the ion; usually the denser the charge, the stronger the bond formed by the ion with ions of opposite charge.
The ionic potential gives an indication of how strongly, or weakly, the ion will be electrostatically attracted by ions of opposite charge; and to what extent the ion will be repelled by ions of the same charge.
Victor Moritz Goldschmidt, the father of modern geochemistry, found that the behavior of an element in its environment could be predicted from its ionic potential and illustrated this with a diagram (a plot of the bare ionic radius as a function of the ionic charge). For instance, the solubility of dissolved iron is highly dependent on its redox state: Fe²⁺, with a lower ionic potential than Fe³⁺, is much more soluble because it exerts a weaker interaction force with the ions present in water and exhibits a less pronounced trend to hydrolysis and precipitation. Under reducing conditions Fe(II) can be present at relatively high concentration in anoxic water, similar to those encountered for other divalent species. However, once anoxic ground water is pumped from a deep well and is discharged to the surface, it comes into contact with atmospheric oxygen. Then Fe²⁺ is easily oxidized to Fe³⁺, and the latter rapidly hydrolyzes and precipitates because of its lower solubility due to a higher z/r ratio.
Millot (1970) also illustrated the importance of the ionic potential of cations to explain the high, or the low, solubility of minerals and the expansive behaviour (swelling/shrinking) of clay materials.
The ionic potential of the different cations present in the interlayer of clay minerals also contributes to explaining their swelling/shrinking properties. The more hydrated cations are responsible for the swelling of smectite while the less hydrated ones cause the collapse of the
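A small sketch of the z/r ratio discussed above, using approximate six-coordinate (Shannon-type) ionic radii; the radii and the selection of ions are illustrative values, not taken from the text.

```python
# Approximate six-coordinate ionic radii in angstroms (illustrative values only)
ions = {
    "Na+":  (1, 1.02),
    "K+":   (1, 1.38),
    "Mg2+": (2, 0.72),
    "Ca2+": (2, 1.00),
    "Fe2+": (2, 0.78),
    "Fe3+": (3, 0.65),
}

# Ionic potential = charge z divided by radius r; higher values mean denser surface charge
for name, (z, r) in ions.items():
    print(f"{name:5s} z/r = {z / r:.2f} per angstrom")
# Fe3+ comes out well above Fe2+, consistent with its stronger tendency to hydrolyze and precipitate.
```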
https://en.wikipedia.org/wiki/Windows%208.1 | Windows 8.1 is a release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on August 27, 2013, and broadly released for retail sale on October 17, 2013, about a year after the retail release of its predecessor; it was succeeded by Windows 10 on July 29, 2015. Windows 8.1 was made available for download via MSDN and TechNet and as a free upgrade for retail copies of Windows 8 and Windows RT users via the Windows Store. A server version, Windows Server 2012 R2, was released on October 18, 2013.
Windows 8.1 aimed to address complaints of Windows 8 users and reviewers on launch. Enhancements include an improved Start screen, additional snap views, additional bundled apps, tighter OneDrive (formerly SkyDrive) integration, Internet Explorer 11 (IE11), a Bing-powered unified search system, restoration of a visible Start button on the taskbar, and the ability to restore the previous behavior of opening the user's desktop on login instead of the Start screen.
Windows 8.1 also added support for then emerging technologies like high-resolution displays, 3D printing, Wi-Fi Direct, and Miracast streaming, as well as the ReFS file system.
Windows 8.1 received more positive reception than Windows 8, with people appreciating the expanded functionality available to apps in comparison to Windows 8, its OneDrive integration, its user interface tweaks, and the addition of expanded tutorials for operating the Windows 8 interface. Despite these improvements, Windows 8.1 was still criticized for not addressing all issues of Windows 8 (such as poor integration between Metro-style apps and the desktop interface), and the potential privacy implications of the expanded use of online services.
Mainstream support for Windows 8.1 ended on January 9, 2018, and extended support ended on January 10, 2023. Mainstream support for the Embedded Industry edition of Windows 8.1 ended on July 10, 2018, and extended support ended on January 10, 2023. As |
https://en.wikipedia.org/wiki/%C3%81d%C3%A1m%20Kor%C3%A1nyi | Ádám Korányi (born July 13, 1932, in Szeged) is a Hungarian and American mathematician. He is a Distinguished Professor of Mathematics and Computer Science at Lehman College, City University of New York and at the CUNY Graduate Center. His research interests include complex analysis, harmonic analysis, and quasiconformal mappings.
Life and career
Korányi earned his doctorate in 1959 from the University of Chicago under the supervision of Marshall Stone.
He has been an external member of the Hungarian Academy of Sciences since 2001.
Korányi advised 7 doctoral students, including Howard L. Resnikoff.
Selected publications |
https://en.wikipedia.org/wiki/SPR%20domain | In molecular biology the SPR domain is a protein domain found in the Sprouty (Spry) and Spred (Sprouty related EVH1 domain) proteins. These have been identified as inhibitors of the Ras/mitogen-activated protein kinase (MAPK) cascade, a pathway crucial for developmental processes initiated by activation of various receptor tyrosine kinases. These proteins share a conserved, C-terminal cysteine-rich region, the SPR domain. This domain has been defined as a novel cytosol to membrane translocation domain. It has been found to be a PtdIns(4,5)P2-binding domain that targets the proteins to a cellular localization that maximizes their inhibitory potential. It also mediates homodimer formation of these proteins.
The SPR domain can occur in association with the WH1 domain (located in the N-terminus) in the Spred proteins.
Examples
Human genes encoding protein containing the SPR domain include:
SPRED1, SPRED2, SPRED3, SPRY1, SPRY2, SPRY3, SPRY4 |
https://en.wikipedia.org/wiki/Universal%20Transverse%20Mercator%20coordinate%20system | The Universal Transverse Mercator (UTM) is a map projection system for assigning coordinates to locations on the surface of the Earth. Like the traditional method of latitude and longitude, it is a horizontal position representation, which means it ignores altitude and treats the Earth's surface as a perfect ellipsoid. However, it differs from global latitude/longitude in that it divides the Earth into 60 zones and projects each to the plane as a basis for its coordinates. Specifying a location means specifying the zone and the x, y coordinate in that plane. The projection from spheroid to a UTM zone is some parameterization of the transverse Mercator projection. The parameters vary by nation or region or mapping system.
Most zones in UTM span 6 degrees of longitude, and each has a designated central meridian. The scale factor at the central meridian is specified to be 0.9996 of true scale for most UTM systems in use.
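A minimal sketch of how a zone number and its central meridian follow from the 6-degree convention described above, ignoring the special grid exceptions around Norway and Svalbard; the function names are illustrative, not part of any standard library.

```python
def utm_zone(longitude_deg: float) -> int:
    """Return the standard UTM zone number (1-60) for a longitude in degrees east."""
    # Zones are 6 degrees wide; zone 1 covers 180 W to 174 W.
    return int((longitude_deg + 180) // 6) % 60 + 1

def central_meridian(zone: int) -> float:
    """Return the central meridian (degrees east) of a UTM zone."""
    return -183.0 + 6 * zone

# Example: roughly 13.4 degrees E falls in zone 33, whose central meridian is 15 degrees E
zone = utm_zone(13.4)
print(zone, central_meridian(zone))
```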
History
The National Oceanic and Atmospheric Administration (NOAA) website states that the system was developed by the United States Army Corps of Engineers, starting in the early 1940s. However, a series of aerial photos found in the Bundesarchiv-Militärarchiv (the military section of the German Federal Archives) apparently dating from 1943–1944 bear the inscription UTMREF followed by grid letters and digits, and projected according to the transverse Mercator, a finding that would indicate that something called the UTM Reference system was developed in the 1942–43 time frame by the Wehrmacht. It was probably carried out by the Abteilung für Luftbildwesen (Department for Aerial Photography). From 1947 onward the US Army employed a very similar system, but with the now-standard 0.9996 scale factor at the central meridian as opposed to the German 1.0. For areas within the contiguous United States the Clarke Ellipsoid of 1866 was used. For the remaining areas of Earth, including Hawaii, the International Ellipsoid was used. The World Geodetic System WGS84 ell |
https://en.wikipedia.org/wiki/Alexidine | Alexidine is an antimicrobial of the biguanide class. More specifically, it is a bisbiguanide. |
https://en.wikipedia.org/wiki/Experience%20modifier | In the insurance industry in the United States, an experience modifier or experience modification is an adjustment of an employer's premium for worker's compensation coverage based on the losses the insurer has experienced from that employer. An experience modifier of 1 would be applied for an employer that had demonstrated the actuarially expected performance. Poorer loss experience leads to a modifier greater than 1, and better experience to a modifier less than 1. The loss experience used in determining the modifier typically comprises three years, excluding the immediate past year. For instance, if a policy expired on January 1, 2018, the period reflected by the experience modifier would run from January 1, 2014 to January 1, 2017.
Methods of calculation
Experience modifiers are normally recalculated for an employer annually by using experience ratings. The rating is a method used by insurers to determine pricing of premiums for different groups or individuals based on the group or individual's history of claims. The experience rating approach uses an individual's or group’s historic data as a proxy for future risk, and insurers adjust and set insurance premiums and plans accordingly. Each year, a newer year's data is added to the three year window of experience used in the calculation, and the oldest year from the prior calculation is dropped off. The other two years worth of data in the rating window are also updated on an annual basis.
Experience modifiers are calculated by organizations known as "rating bureaus" and rely on information reported by insurance companies. The rating bureau used by most states is the NCCI, the National Council on Compensation Insurance. But a number of states have independent rating bureaus: California, Michigan, Delaware, and Pennsylvania have stand-alone rating bureaus that do not integrate data with NCCI. Other states such as Wisconsin, Texas, New York, New Jersey, Indiana, and North Carolina, maintain their own rati |
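A heavily simplified sketch of the idea behind the modifier. Actual calculations by rating bureaus such as NCCI split losses into primary and excess components and apply credibility weights; the plain ratio and the dollar figures below are hypothetical illustrations only.

```python
def simple_experience_modifier(actual_losses: float, expected_losses: float) -> float:
    """Toy experience modifier: ratio of actual to expected losses over the rating window.

    1.0 means losses matched expectations; above 1.0 is worse, below 1.0 is better.
    (Real formulas weight primary vs. excess losses and apply credibility factors.)
    """
    return actual_losses / expected_losses

# Hypothetical three-year rating window (excluding the most recent year), in dollars
mod = simple_experience_modifier(actual_losses=80_000, expected_losses=100_000)
print(f"experience modifier = {mod:.2f}")  # 0.80, a premium credit in this toy example
```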
https://en.wikipedia.org/wiki/Cultural%20keystone%20species | A cultural keystone species is one which is of exceptional significance to a particular culture or a people. Such species can be identified by their prevalence in language, cultural practices (e.g. ceremonies), traditions, diet, medicines, material items, and histories of a community. These species influence social systems and culture and are a key feature of a community's identity.
The concept was first proposed by Gary Nabhan and John Carr in 1994 and later described by Sergio Cristancho and Joanne Vining in 2000 and by ethnobotanist Ann Garibaldi and ethnobiologist Nancy Turner in 2004. It is a "metaphorical parallel" to the ecological keystone species concept, and may be useful for biodiversity conservation and ecological restoration.
Definitions
The exact definition of cultural keystone species remains under debate and is considered to be more abstract than the related ecological concept. Garibaldi and Turner emphasize that the cultural keystone species concept is not an extension of ecological keystone species, but rather a parallel concept that bridges social and physical sciences, as well as indigenous knowledge and western knowledge, to offer a more holistic approach. Other researchers debate whether or not cultural keystone species are different from economically important species. Additionally, it is argued that the concept will be reduced to a biological term if it only focuses on specific species, but this may be solved by considering cultural keystone species as a "complex" that develops based on the ways that the species is used and its impacts on cultural practices over time, through conscious social practices, decision-making processes, and changes to societal needs and practices.
Garibaldi and Turner outline six elements that should be considered when identifying a cultural keystone species:
The magnitude and variety of ways the species is used
The species' influence on language
The species' role in cultural practices (e.g. traditional prac |
https://en.wikipedia.org/wiki/Mosaic%20%28genetics%29 | Mosaicism or genetic mosaicism is a condition in which a multicellular organism possesses more than one genetic line as the result of genetic mutation. This means that various genetic lines resulted from a single fertilized egg. Mosaicism is one of several possible causes of chimerism, wherein a single organism is composed of cells with more than one distinct genotype.
Genetic mosaicism can result from many different mechanisms including chromosome nondisjunction, anaphase lag, and endoreplication. Anaphase lagging is the most common way by which mosaicism arises in the preimplantation embryo. Mosaicism can also result from a mutation in one cell during development, in which case the mutation will be passed on only to its daughter cells (and will be present only in certain adult cells). Somatic mosaicism is not generally inheritable as it does not generally affect germ cells.
History
In 1929, Alfred Sturtevant studied mosaicism in Drosophila, a genus of fruit fly. After Muller in 1930 demonstrated that mosaicism in Drosophila is always associated with chromosomal rearrangements, and Schultz in 1936 showed that in all cases studied these rearrangements were associated with heterochromatic inert regions, several hypotheses on the nature of such mosaicism were proposed. One hypothesis assumed that mosaicism appears as the result of a break and loss of chromosome segments. Curt Stern in 1935 assumed that the structural changes in the chromosomes took place as a result of somatic crossing over, producing mutations or small chromosomal rearrangements in somatic cells. Thus the inert region causes an increase in the frequency of mutations or small chromosomal rearrangements in active segments adjacent to inert regions.
In the 1930s, Stern demonstrated that genetic recombination, normal in meiosis, can also take place in mitosis. When it does, it results in somatic (body) mosaics. These organisms contain two or more genetically distinct types of tissue. The term somatic mosaicism |
https://en.wikipedia.org/wiki/Kaisa%20Matom%C3%A4ki | Kaisa Sofia Matomäki (born April 30, 1985) is a Finnish mathematician specializing in number theory. Since September 2015, she has been working as an Academic Research Fellow in the Department of Mathematics and Statistics, University of Turku, Turku, Finland. Her research includes results on the distribution of multiplicative functions over short intervals of numbers; for instance, she showed that the values of the Möbius function are evenly divided between +1 and −1 over short intervals. These results, in turn, were among the tools used by Terence Tao to prove the Erdős discrepancy problem.
Awards and honors
Kaisa Matomäki, along with Maksym Radziwill of McGill University, Canada, was awarded the SASTRA Ramanujan Prize for 2016. The Prize was established in 2005 and is awarded annually for outstanding contributions by young mathematicians to areas influenced by Srinivasa Ramanujan.
The citation for the 2016 SASTRA Ramanujan Prize is as follows: "Kaisa Matomäki and Maksym Radziwill are jointly awarded the 2016 SASTRA Ramanujan Prize for their deep and far reaching contributions to several important problems in diverse areas of number theory and especially for their spectacular collaboration which is revolutionizing the subject. The prize recognizes that in making significant improvements over the works of earlier stalwarts on long standing problems, they have introduced a number of innovative techniques. The prize especially recognizes their collaboration starting with their 2015 joint paper in Geometric and Functional Analysis which led to their 2016 paper in the Annals of Mathematics in which they obtain amazing results on multiplicative functions in short intervals, and in particular a stunning result on the parity of the Liouville lambda function on almost all short intervals - a paper that is expected to change the subject of multiplicative functions in a major way. The prize notes also the very recent joint paper of Matomäki, Radziwill and Tao announcing a |
https://en.wikipedia.org/wiki/Nonverbal%20learning%20disorder | Nonverbal learning disability (NVLD) is a proposed category of neurodevelopmental disorder characterized by core deficits in visual-spatial processing and a significant discrepancy between verbal and nonverbal intelligence (where verbal intelligence is higher). A review of papers found that proposed diagnostic criteria were inconsistent. Proposed additional diagnostic criteria include intact verbal intelligence, and deficits in the following: visuoconstruction abilities, speech prosody, fine-motor coordination, mathematical reasoning, visuospatial memory, and social skills. NVLD is not recognised by the DSM-5 and is not clinically distinct from learning disorder.
NVLD's symptoms can overlap with symptoms of Autism Spectrum Disorder (ASD), bipolar disorder, and ADHD. For this reason, some claim a diagnosis of NVLD is more appropriate in some subset of these cases.
Signs and symptoms
Considered to be neurologically based, nonverbal learning disorder is characterized by:
impairments in visuospatial processing
discrepancy between average to superior verbal abilities and impaired nonverbal abilities such as:
visuoconstruction
fine motor coordination
mathematical reasoning
visuospatial memory
socioemotional skills
People with NVLD may have trouble understanding charts, reading maps, assembling jigsaw puzzles, and using an analog clock to tell time. "Clumsiness" is not unusual in people with NVLD, especially children, and it may take a child with NVLD longer than usual to learn how to tie shoelaces or to ride a bicycle.
At the beginning of their school careers, children with symptoms of NVLD struggle with tasks that require eye–hand coordination, such as coloring and using scissors, but often excel at memorizing verbal content, spelling, and reading once the shapes of the letters are learned. Because a child with NVLD can have average or superior verbal skills, their difficulties can be misattributed to attention deficit hyperactivity disorder, defiant behavior, inattention, or lack of effort. Earl |
https://en.wikipedia.org/wiki/C-slowing | C-slow retiming is a technique used in conjunction with retiming to improve throughput of a digital circuit. Each register in a circuit is replaced by a set of C registers (in series). This creates a circuit with C independent threads, as if the new circuit contained C copies of the original circuit. A single computation of the original circuit takes C times as many clock cycles to compute in the new circuit. C-slowing by itself increases latency, but throughput remains the same.
Increasing the number of registers allows optimization of the circuit through retiming to reduce the clock period of the circuit. In the best case, the clock period can be reduced by a factor of C. Reducing the clock period of the circuit reduces latency and increases throughput. Thus, for computations that can be multi-threaded, combining C-slowing with retiming can increase the throughput of the circuit, with little, or in the best case, no increase in latency.
Since registers are relatively plentiful in FPGAs, this technique is typically applied to circuits implemented with FPGAs.
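A back-of-the-envelope sketch of the latency and throughput arithmetic described above, assuming the best case in which retiming reduces the clock period by the full factor of C; the numbers are illustrative, not from the original text.

```python
def c_slow_metrics(clock_period_ns: float, register_stages: int, c: int):
    """Estimate best-case timing after C-slow retiming of a pipelined circuit."""
    new_period = clock_period_ns / c             # best-case retimed clock period
    cycles_per_result = register_stages * c      # each thread needs C times as many cycles
    latency_ns = cycles_per_result * new_period  # per-thread latency (unchanged in the best case)
    # C interleaved threads each finish one result every `cycles_per_result` cycles
    throughput_per_ns = c / latency_ns
    return new_period, latency_ns, throughput_per_ns

# Example: a 10 ns clock and 5 register stages, C-slowed with C = 4
# -> 2.5 ns clock, 50 ns latency (same as before), 4x the aggregate throughput
print(c_slow_metrics(10.0, 5, 4))
```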
See also
Pipelining
Barrel processor
Resources
PipeRoute: A Pipelining-Aware Router for Reconfigurable Architectures
Simple Symmetric Multithreading in Xilinx FPGAs
Post Placement C-Slow Retiming for Xilinx Virtex (.ppt)
Post Placement C-Slow Retiming for Xilinx Virtex (.pdf)
Exploration of RaPiD-style Pipelined FPGA Interconnects
Time and Area Efficient Pattern Matching on FPGAs |
https://en.wikipedia.org/wiki/Coroutine | Coroutines are computer program components that allow execution to be suspended and resumed, generalizing subroutines for cooperative multitasking. Coroutines are well-suited for implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, infinite lists and pipes.
They have been described as "functions whose execution you can pause".
Melvin Conway coined the term coroutine in 1958 when he applied it to the construction of an assembly program. The first published explanation of the coroutine appeared later, in 1963.
Definition and Types
There is no single precise definition of coroutine. In 1980 Christopher D. Marlin summarized two widely-acknowledged fundamental characteristics of a coroutine:
the values of data local to a coroutine persist between successive calls;
the execution of a coroutine is suspended as control leaves it, only to carry on where it left off when control re-enters the coroutine at some later stage.
Besides that, a coroutine implementation has 3 features:
the control-transfer mechanism. Asymmetric coroutines usually provide keywords like yield and resume. Programmers cannot freely choose which frame to yield to; the runtime only yields to the nearest caller of the current coroutine. In symmetric coroutines, on the other hand, programmers must specify a yield destination.
whether coroutines are provided in the language as first-class objects, which can be freely manipulated by the programmer, or as constrained constructs;
whether a coroutine is able to suspend its execution from within nested function calls. Such a coroutine is stackful. One without this ability is called a stackless coroutine, in which a regular function cannot use the yield keyword unless it is marked as a coroutine.
Revisiting Coroutines, published in 2009, proposed the term full coroutine to denote one that supports first-class coroutines and is stackful. Full coroutines deserve their own name in that they have the same expressive power as one-sho |
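A minimal Python sketch of an asymmetric, stackless coroutine in the sense described above: local state persists between suspensions, and control always returns to the nearest caller. Python generators are used here purely as an illustration; they are not mentioned in the original text.

```python
def running_average():
    """A coroutine whose local state (total, count) persists across successive resumptions."""
    total, count = 0.0, 0
    average = None
    while True:
        value = yield average   # suspend here; resume when the caller sends the next value
        total += value
        count += 1
        average = total / count

avg = running_average()
next(avg)            # prime the coroutine: run it to the first yield
print(avg.send(10))  # 10.0
print(avg.send(20))  # 15.0
print(avg.send(30))  # 20.0
```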
https://en.wikipedia.org/wiki/Hadwiger%27s%20theorem | In integral geometry (otherwise called geometric probability theory), Hadwiger's theorem characterises the valuations on convex bodies in $\mathbb{R}^n.$ It was proved by Hugo Hadwiger.
Introduction
Valuations
Let $\mathbb{K}^n$ be the collection of all compact convex sets in $\mathbb{R}^n.$ A valuation is a function $v : \mathbb{K}^n \to \mathbb{R}$ such that $v(\varnothing) = 0$ and $v(S \cup T) + v(S \cap T) = v(S) + v(T)$ for every $S, T \in \mathbb{K}^n$ that satisfy $S \cup T \in \mathbb{K}^n.$
A valuation $v$ is called continuous if it is continuous with respect to the Hausdorff metric. A valuation $v$ is called invariant under rigid motions if $v(\varphi(S)) = v(S)$ whenever $S \in \mathbb{K}^n$ and $\varphi$ is either a translation or a rotation of $\mathbb{R}^n.$
Quermassintegrals
The quermassintegrals $W_0, \ldots, W_n : \mathbb{K}^n \to \mathbb{R}$ are defined via Steiner's formula
$$\mathrm{Vol}_n(K + tB) = \sum_{j=0}^{n} \binom{n}{j} W_j(K)\, t^j,$$
where $B$ is the Euclidean ball. For example, $W_0$ is the volume, $W_1$ is proportional to the surface measure, $W_{n-1}$ is proportional to the mean width, and $W_n$ is the constant $\mathrm{Vol}_n(B)$.
$W_j$ is a valuation which is homogeneous of degree $n - j$, that is, $W_j(tK) = t^{\,n-j}\, W_j(K)$ for $t \geq 0$.
Statement
Any continuous valuation $v$ on $\mathbb{K}^n$ that is invariant under rigid motions can be represented as
$$v(S) = \sum_{j=0}^{n} c_j\, W_j(S).$$
Corollary
Any continuous valuation on $\mathbb{K}^n$ that is invariant under rigid motions and homogeneous of degree $j$ is a multiple of $W_{n-j}$.
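As a concrete illustration (a standard special case added here as a supplement, not taken from the excerpt above), in the plane the theorem says that every continuous, rigid-motion-invariant valuation is a combination of the Euler characteristic, the perimeter, and the area:
```latex
% Hadwiger's theorem for n = 2: any continuous valuation v on convex bodies
% K \subset \mathbb{R}^2 that is invariant under rigid motions has the form
v(K) = c_0\,\chi(K) + c_1\,\operatorname{Per}(K) + c_2\,\operatorname{Area}(K),
\qquad c_0, c_1, c_2 \in \mathbb{R}.
```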
See also |
https://en.wikipedia.org/wiki/Home%20canning | Home canning or bottling, also known colloquially as putting up or processing, is the process of preserving foods, in particular, fruits, vegetables, and meats, by packing them into glass jars and then heating the jars to create a vacuum seal and kill the organisms that would create spoilage.
Though ceramic and glass containers had been used for storage for thousands of years, the technique of canning, which involves applying heat for preservation, was only invented in the first decade of the 1800s. Before that, food storage containers were used for non-perishable foods, or with preservatives such as salt, sugar, vinegar, or alcohol.
Techniques
Water bath canning
Water bath canning is appropriate for high-acid foods only, such as jam, jelly, most fruit, pickles, and tomato products with acid added. It is not appropriate for meats and low-acid foods such as vegetables. This method uses a pot large enough to hold and submerge the glass canning jars. Food is placed in glass canning jars, which are then placed in the pot. Hot water is added to cover the jars. Water is brought to a boil (100 °C or 212 °F) and held there for at least 10 minutes. Different foods require a different length of time under boil; larger jars require longer times.
Pressure canning
Pressure canning is the only safe home canning method for meats and low-acid foods. This method uses a pressure canner — similar to, but heavier than, a pressure cooker. A small amount of water is placed in the pressure canner and turned to steam, which without pressure would be 100 °C (212 °F), but under pressure is raised to about 116 °C (240 °F). Based on the recipe, the canner is heated until the correct pressure is reached, and the jars left for the appropriate amount of time (charts have been published with times and pressures). The heat is turned off, pressure reduced, canner opened, and hot jars carefully lifted out and placed on an insulated surface (towels, wood cutting board, etc.) and out of drafts to cool.
Safety
While it is possible to safely preserve man |
https://en.wikipedia.org/wiki/Polygraph%20%28mathematics%29 | In mathematics, and particularly in category theory, a polygraph is a generalisation of a directed graph. It is also known as a computad. They were introduced as "polygraphs" by Albert Burroni and as "computads" by Ross Street.
In the same way that a directed multigraph can freely generate a category, an n-computad is the "most general" structure which can generate a free n-category. |
https://en.wikipedia.org/wiki/Reverse%20roll%20coating | Reverse roll coating is a roll-to-roll coating method for wet coatings. It is distinguished from other roll coating methods by having two reverse-running nips. The metering roll and the applicator roll contra-rotate, with an accurate gap between them. The surface of the applicator roll is loaded with an excess of coating prior to the metering nip, so its surface emerges from the metering nip with a precise thickness of coating equal to the gap. At the application nip, the applicator roll transfers all of this coating to the substrate, by running in the opposite direction to the movement of the substrate, wiping the coating onto the substrate.
Reverse roll coating machines demand high specifications in their construction, e.g. for the machining and bearings of the rollers and for highly uniform speed control. This makes them relatively expensive compared to other coating technologies. Unlike many other coating methods, they can, however, handle coatings with a very wide range of viscosities, from 1 to more than 50,000 mPa·s, and are capable of producing extremely polished finishes on the coatings they apply. They have been produced in a variety of 3-roll and 4-roll configurations.
Products that have been manufactured on reverse roll coating machines include magnetic tapes; coated papers; and pressure sensitive tapes. The rise of slot-die coating has tended to eclipse reverse roll coaters as in most if not all cases, the same products can be made on cheaper machinery. |
https://en.wikipedia.org/wiki/CHRNB1 | Acetylcholine receptor subunit beta is a protein that in humans is encoded by the CHRNB1 gene.
The muscle acetylcholine receptor is composed of five subunits: two alpha subunits and one beta, one gamma, and one delta subunit. This gene encodes the beta subunit of the acetylcholine receptor. The acetylcholine receptor changes conformation upon acetylcholine binding leading to the opening of an ion-conducting channel across the plasma membrane. Mutations in this gene are associated with slow-channel congenital myasthenic syndrome.
See also
Nicotinic acetylcholine receptor |
https://en.wikipedia.org/wiki/Spinhenge%40Home | Spinhenge@home was a volunteer computing project on the BOINC platform, which performs extensive numerical simulations concerning the physical characteristics of magnetic molecules. It is a project of the Bielefeld University of Applied Sciences, Department of Electrical Engineering and Computer Science, in cooperation with the University of Osnabrück and Ames Laboratory.
The project began beta testing on September 1, 2006 and used the Metropolis Monte Carlo algorithm to calculate and simulate spin dynamics in nanoscale molecular magnets.
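For illustration, a minimal Metropolis Monte Carlo sweep over a one-dimensional ring of Ising-like spins (a generic sketch of this algorithm class, assuming NumPy; it is not the project's actual molecular-magnet model, and all parameter names are hypothetical):
```python
import numpy as np

def metropolis_ising_ring(n_spins=64, beta=1.0, coupling=1.0, n_steps=10_000, seed=0):
    """Metropolis Monte Carlo for a 1-D ring of Ising spins, E = -J * sum_i s_i s_{i+1}."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=n_spins)
    for _ in range(n_steps):
        i = rng.integers(n_spins)
        # energy change if spin i is flipped (periodic boundary conditions)
        delta_e = 2.0 * coupling * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
        # accept the flip with the Metropolis probability min(1, exp(-beta * delta_e))
        if delta_e <= 0.0 or rng.random() < np.exp(-beta * delta_e):
            spins[i] = -spins[i]
    return spins
```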
On September 28, 2011, a hiatus was announced while the project team reviewed results and upgraded hardware. As of July 10, 2022 the hiatus continues and it is likely that the project has been closed down permanently.
See also
Spintronics
BOINC
List of volunteer computing projects |
https://en.wikipedia.org/wiki/Suspension%20%28topology%29 | In topology, a branch of mathematics, the suspension of a topological space X is intuitively obtained by stretching X into a cylinder and then collapsing both end faces to points. One views X as "suspended" between these end points. The suspension of X is denoted by SX or susp(X).
There is a variation of the suspension for pointed space, which is called the reduced suspension and denoted by ΣX. The "usual" suspension SX is sometimes called the unreduced suspension, unbased suspension, or free suspension of X, to distinguish it from ΣX.
Free suspension
The (free) suspension of a topological space can be defined in several ways.
1. SX is the quotient space $(X \times [0,1]) / (X \times \{0\},\ X \times \{1\})$. In other words, it can be constructed as follows:
Construct the cylinder $X \times [0,1]$.
Consider the entire set $X \times \{0\}$ as a single point ("glue" all its points together).
Consider the entire set $X \times \{1\}$ as a single point ("glue" all its points together).
2. Another way to write this is:
$$SX := v_0 \cup_{p_0} (X \times [0,1]) \cup_{p_1} v_1,$$ where $v_0$ and $v_1$ are two points, and for each $i$ in $\{0,1\}$, $p_i$ is the projection onto the point $v_i$ (a function that maps everything to $v_i$). That means, the suspension SX is the result of constructing the cylinder $X \times [0,1]$, and then attaching it by its faces, $X \times \{0\}$ and $X \times \{1\}$, to the points $v_0$ and $v_1$ along the projections $p_i$.
3. One can view SX as two cones on X, glued together at their base.
4. SX can also be defined as the join $X \star S^0$, where $S^0$ is a discrete space with two points.
Properties
In rough terms, S increases the dimension of a space by one: for example, it takes an n-sphere to an (n + 1)-sphere for n ≥ 0.
Given a continuous map $f : X \to Y$, there is a continuous map $Sf : SX \to SY$ defined by $Sf([x, t]) := [f(x), t]$, where square brackets denote equivalence classes. This makes $S$ into a functor from the category of topological spaces to itself.
Reduced suspension
If X is a pointed space with basepoint x0, there is a variation of the suspension which is sometimes more useful. The reduced suspension or based suspension ΣX of X is the quotient space:
$$\Sigma X = (X \times I) / (X \times \{0\} \cup X \times \{1\} \cup \{x_0\} \times I).$$
This is equivalent to taking SX and collapsing the line (x0 × I) joining the two ends to |
https://en.wikipedia.org/wiki/The%20Daily%20Dot | The Daily Dot is a digital media company covering the culture of the Internet and the World Wide Web. Founded by Nicholas White in 2011, The Daily Dot is headquartered in Austin, Texas.
The site, conceived as the Internet's "hometown newspaper", focuses on topics such as streaming entertainment, geek culture, memes, gadgets and social issues, such as LGBT, gender and race. In addition, an e-commerce arm produces branded video for advertisers and sells items from an online marketplace.
History
The Daily Dot was established in 2011 by Nicholas White, whose goal was to cover Internet communities such as Reddit and Tumblr in the same manner as hometown newspapers cover their own communities. White's family has been in the newspaper business since buying the Sandusky Register in Ohio in 1869, and White was a reporter and executive with the family's media company before establishing the site.
White launched The Daily Dot with $600,000 and a handful of full-time reporters. Many of the site's early stories were filed to a Google Doc and reported on Facebook and Twitter. After establishing a headquarters in Austin, Texas, the company added other offices but many staff worked remotely from other locations. It raised a $10 million private investment to add staff, produce digital content and develop its internal creative agency in 2015, ramping up its output to 50–70 stories a day. Its coverage has focused on "under-reported" areas while emphasizing progressive issues such as body positivity and feminism. White has also highlighted the need to diversify his staff. "Journalism has been dominated by a few select types of voices. We have an opportunity to break from that cycle" he has said.
The Daily Dot has pursued several content strategies while building its online presence. In 2012, it was one of the first major sites to launch dedicated esports coverage. In 2016, the company sold that section, Dot Esports, to Gamurs, an Australian esports multimedia operation.
In 2014, |
https://en.wikipedia.org/wiki/Leptocephalus | Leptocephalus (meaning "slim head") is the flat and transparent larva of the eel, marine eels, and other members of the superorder Elopomorpha. This is one of the most diverse groups of teleosts, containing 801 species in 4 orders, 24 families, and 156 genera. This group is thought to have arisen in the Cretaceous period over 140 million years ago.
Fishes with a leptocephalus larval stage include the most familiar eels such as the conger, moray eel, and garden eel as well as members of the family Anguillidae, plus more than 10 other families of lesser-known types of marine eels. These are all true eels of the order Anguilliformes. Leptocephali of eight species of eels from the South Atlantic Ocean were described by Meyer-Rochow
The fishes of the other four traditional orders of elopomorph fishes that have this type of larvae are more diverse in their body forms and include the tarpon, bonefish, spiny eel, pelican eel and deep sea species like Cyema atrum and notacanthidae species, the latter with giant Leptocephalus-like larvae.
Description
Leptocephali (singular leptocephalus) all have laterally compressed bodies that contain transparent jelly-like substances on the inside of the body and a thin layer of muscle with visible myomeres on the outside. Their body organs are small and they have only a simple tube for a gut. This combination of features results in them being very transparent when they are alive. Leptocephali have dorsal and anal fins confluent with caudal fins, but lack pelvic fins.
They also lack red blood cells until they begin to metamorphose into the juvenile glass eel stage when they start to look like eels. Leptocephali are also characterized by their fang-like teeth that are present until metamorphosis, when they are lost.
Leptocephali differ from most fish larvae because they grow to much larger sizes and have long larval periods of about three months to more than a year. Another distinguishing feature of these organisms is their mucinou |
https://en.wikipedia.org/wiki/Cytochalasin%20B | Cytochalasin B, the name of which comes from the Greek cytos (cell) and chalasis (relaxation), is a cell-permeable mycotoxin. It was found that substoichiometric concentrations of cytochalasin B (CB) strongly inhibit network formation by actin filaments. Due to this, it is often used in cytological research. It inhibits cytoplasmic division by blocking the formation of contractile microfilaments. It inhibits cell movement and induces nuclear extrusion. Cytochalasin B shortens actin filaments by blocking monomer addition at the fast-growing end of polymers. Cytochalasin B inhibits glucose transport and platelet aggregation. It blocks adenosine-induced apoptotic body formation without affecting activation of endogenous ADP-ribosylation in leukemia HL-60 cells.
It is also used in cloning through nuclear transfer. Here enucleated recipient cells are treated with cytochalasin B. Cytochalasin B makes the cytoplasm of the oocytes more fluid and makes it possible to aspirate the nuclear genome of the oocyte within a small vesicle of plasma membrane into a micro-needle. Thereby, the oocyte genome is removed from the oocyte, while preventing rupture of the plasma membrane.
This alkaloid is isolated from a fungus, Helminthosporium dematioideum.
History
1960s
Cytochalasin B was first described in 1967, when it had been isolated from moulds by Dr W.B. Turner. Smith et al. found that CB causes multinucleation in cells and significantly affects cell motility. The multinucleated cells probably arise from failure of mitotic control, leading to variations in size and shape of interphase nuclei.
1970s
In the 1970s, research on the mitosis of polynucleated cells was done. It appeared that these cells were created through progressive nuclear addition instead of nuclear division. The process by which this occurs is called pseudomitosis, which is the synchronous mitosis resulting in the division of just one nucleus. The separate nuclei are bound by a nuclear bridge and in binuclea |
https://en.wikipedia.org/wiki/Lagg%20%28landform%29 | A lagg, also called a moat, is the very wet zone on the perimeter of peatland or a bog where water from the adjacent upland collects and flows slowly around the main peat mass.
Description
A lagg is an area of wetland, especially at the edge of raised bogs, in which water collects. It is often markedly different from the terrain on either side and may consist of a morass of shrubs and murky water.
In addition to water gathered from surrounding uplands, the lagg also picks up water flowing down from the domed centre of a raised bog through small channels - soaks or water tracks - to the steeply sloping shoulder or rand of the bog. At the foot of the rand, the water collects and meets the water of the surrounding area on the boundary between the peat soil and mineral soil. |
https://en.wikipedia.org/wiki/Dual%20graph | In the mathematical discipline of graph theory, the dual graph of a planar graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs (graphs that are already embedded in the plane) rather than planar graphs (graphs that may be embedded but for which the embedding is not yet known). For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph.
Historically, the first form of graph duality to be recognized was the association of the Platonic solids into pairs of dual polyhedra. Graph duality is a topological generalization of the geometric concepts of dual polyhedra and dual tessellations, and is in turn generalized combinatorially by the concept of a dual matroid. Variations of planar graph duality include a version of duality for directed graphs, and duality for graphs embedded onto non-planar two-dimensional surfaces.
These notions of dual graphs should not be confused with a different notion, the edge-to-vertex dual or line graph of a graph.
The term dual is used because the property of being a dual graph is symmetric, meaning that if H is a dual of a connected graph G, then G is a dual of H. When discussing the dual of a graph G, the graph G itself may be referred to as the "primal graph". Many other graph properties and structures may be translated into other natural properties and structures of the dual. For instance, cycles are dual to cuts, spanning trees are dual to the complements of spanning trees, and simple graphs (without parallel edges or self-loops) are dual to 3-edge-connected graphs.
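A small self-contained sketch of the construction for one embedded graph, using a hand-coded (assumed) face list for the cube: each face becomes a dual vertex, and two dual vertices are joined whenever the corresponding faces share a primal edge, recovering the octahedron.
```python
from itertools import combinations

# Hand-coded faces of one embedding of the cube graph (vertices 0..7):
# bottom face, top face, and four side faces. This face list is the illustrative assumption.
cube_faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),
    (0, 1, 5, 4), (1, 2, 6, 5),
    (2, 3, 7, 6), (3, 0, 4, 7),
]

def boundary_edges(face):
    """Undirected edges bounding a face, as frozensets so orientation is irrelevant."""
    return {frozenset((face[i], face[(i + 1) % len(face)])) for i in range(len(face))}

# One dual vertex per face; one dual edge per pair of faces sharing a primal edge.
# (Self-loops, which arise when one face lies on both sides of an edge, do not occur here.)
dual_edges = [
    (i, j)
    for (i, fi), (j, fj) in combinations(enumerate(cube_faces), 2)
    if boundary_edges(fi) & boundary_edges(fj)
]

print(len(cube_faces), "dual vertices,", len(dual_edges), "dual edges")
# 6 dual vertices and 12 dual edges: the dual of the cube is the octahedron.
```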
|
https://en.wikipedia.org/wiki/Sergei%20Adian | Sergei Ivanovich Adian, also Adyan (; ; 1 January 1931 – 5 May 2020), was a Soviet and Armenian mathematician. He was a professor at the Moscow State University and was known for his work in group theory, especially on the Burnside problem.
Biography
Adian was born near Elizavetpol. He grew up there in an Armenian family. He studied at Yerevan and Moscow pedagogical institutes. His advisor was Pyotr Novikov. He worked at Moscow State University (MSU) since 1965. Alexander Razborov was one of his students.
Mathematical career
In his first work as a student in 1950, Adian proved that the graph of a function of a real variable satisfying the functional equation f(x + y) = f(x) + f(y) and having discontinuities is dense in the plane. (Clearly, all continuous solutions of the equation are linear functions.) This result was not published at the time. About 25 years later the American mathematician Edwin Hewitt from the University of Washington gave preprints of some of his papers to Adian during a visit to MSU, one of which was devoted to exactly the same result, which was published by Hewitt much later.
By the beginning of 1955, Adian had managed to prove the undecidability of practically all non-trivial invariant group properties, including the undecidability of being isomorphic to a fixed group , for any group . These results constituted his Ph.D. thesis and his first published work. This is one of the most remarkable, beautiful, and general results in algorithmic group theory and is now known as the Adian–Rabin theorem. What distinguishes the first published work by Adian, is its completeness. In spite of numerous attempts, nobody has added anything fundamentally new to the results during the past 50 years. Adian's result was immediately used by Andrey Markov Jr. in his proof of the algorithmic unsolvability of the classical problem of deciding when topological manifolds are homeomorphic.
Burnside problem
About the Burnside problem:
Very much like Fermat's Last Theorem in number theor |
https://en.wikipedia.org/wiki/Konrad%20Lorenz%20Institute%20for%20Evolution%20and%20Cognition%20Research | The Konrad Lorenz Institute for Evolution and Cognition Research (KLI) is an international center for advanced studies in the life and sustainability sciences. It is a "Home to Theory that Matters" that supports the articulation, analysis, and integration of theories in biology and the sustainability sciences, exploring their wider scientific, cultural, and social significance. The institute is located in Klosterneuburg, near Vienna, Austria. Until 2013, the institute was located in the family mansion of the Nobel Laureate Konrad Lorenz in Altenberg. Lorenz' work laid the foundation for an evolutionary approach to mind and cognition.
The institute unites fellows, visiting scholars, students, and external faculty. Through a lecture and seminar series, the KLI also offers a platform for the critical public discussion of current themes in the biosciences.
Founded in 1990 by Rupert Riedl, followed by Gerd B. Müller as president of the institute, the KLI is funded by a private trust and receives additional support from the Province of Lower Austria. The institute has close ties with many of the higher education institutions in Vienna and Lower Austria, as well as with a number of international institutions with similar aims.
Activities
The KLI supports theoretical research primarily in the areas of evolutionary developmental biology, evolutionary cognitive science, and sustainability science that welcome an evolutionary approach. This is accomplished by providing fellowships to graduate students, postdocs, and visiting scientists for research projects they have proposed. In addition, the KLI organizes lecture series and symposia, and hosts workshops. The KLI provides an extensive internet database for literature in theoretical biology and related fields. Together with Springer Science+Business Media, it publishes the journal Biological Theory.
Fellows
Fellows are scientists and scholars (e.g., in the biological and social sciences and in the humanities) w |
https://en.wikipedia.org/wiki/Motion%20planning | Motion planning, also path planning (also known as the navigation problem or the piano mover's problem) is a computational problem to find a sequence of valid configurations that moves the object from the source to destination. The term is used in computational geometry, computer animation, robotics and computer games.
For example, consider navigating a mobile robot inside a building to a distant waypoint. It should execute this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description of these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion planning algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks (e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g. imperfect models of the environment or robot).
Motion planning has several robotics applications, such as autonomy, automation, and robot design in CAD software, as well as applications in other fields, such as animating digital characters, video game, architectural design, robotic surgery, and the study of biological molecules.
Concepts
A basic motion planning problem is to compute a continuous path that connects a start configuration S and a goal configuration G, while avoiding collision with known obstacles. The robot and obstacle geometry is described in a 2D or 3D workspace, while the motion is represented as a path in (possibly higher-dimensional) configuration space.
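As a toy illustration of this formulation, the sketch below plans a path for a point robot on a discretised 2-D workspace using breadth-first search; the grid, start, and goal are made-up examples, and real planners typically search a higher-dimensional configuration space.
```python
from collections import deque

def grid_bfs_path(grid, start, goal):
    """Shortest collision-free path on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:           # walk back through parents to recover the path
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no collision-free path exists

# A point robot in a 2-D workspace: here the configuration space coincides with the workspace.
world = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(grid_bfs_path(world, (0, 0), (2, 0)))
```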
Configuration space
A configuration describes the pose of the robot, and the configuration space C is the set of all possible configurations. For example:
If the robot is a single point (zero-sized) translating in a 2-dimensional plane (the workspace), C is a plane, and a configuration can be represented using two parameters (x, y).
If the robot is a 2D shape that can translate and rotate, the workspace is still 2-dimen |
https://en.wikipedia.org/wiki/George%20N.%20Hatsopoulos | George Nicholas Hatsopoulos (January 7, 1927 – September 20, 2018) was a Greek American mechanical engineer noted for his work in thermodynamics and for having founded Thermo Electron.
Early life
Hatsopoulos was born in Athens, Greece in 1927 and is related to the former rector of the Athens Polytechnic School, Nicolas Kitsikis. He attended Athens Polytechnic before entering MIT, where he received his Bachelor and Master of Science (1950), Mechanical Engineer (1954), and Doctorate of Science (1956).
Hatsopoulos-Keenan reformulation of thermodynamics
In 1965, he and Joseph Keenan published their textbook Principles of General Thermodynamics, which restates the second law of thermodynamics in terms of the existence of stable equilibrium states. Their formulation of the second law of thermodynamics states that:
The Hatsopoulos-Keenan statement of the Second Law entails the Clausius, Kelvin-Planck, and Carathéodory statements of the Second Law, and has provided a basis to extend the traditional definition of entropy to the non-equilibrium domain.
In 1976, Hatsopoulos also contributed to a formulation of a unified theory of mechanics and thermodynamics, arguably a precursor of the emerging field of quantum thermodynamics.
Academic and industry leader
While at MIT, Hatsopoulos was head of the engineering division of Matrad Corporation of New York. Matrad Corporation and MIT also provided financial support for his doctoral thesis The Thermo-Electron Engine. Matrad Corporation was owned by the family of Peter M. Nomikos, a Harvard Business School graduate. In 1956, Hatsopoulos founded the Thermo Electron Corporation with funding from Peter Nomikos. Several years later, George asked his brother (John Hatsopoulos) to join the company as financial controller. Under George Hatsopoulos, Thermo Electron became a major provider of analytical instruments and services for a variety of domains. John Hatsopoulos, and Arvin Smith. In 1965, George Hatsopoulos was president of the |
https://en.wikipedia.org/wiki/Warped%20linear%20predictive%20coding | Warped linear predictive coding (warped LPC or WLPC) is a variant of linear predictive coding in which the spectral representation of the system is modified, for example by replacing the unit delays used in an LPC implementation with first-order all-pass filters. This can have advantages in reducing the bitrate required for a given level of perceived audio quality/intelligibility, especially in wideband audio coding.
History
Warped LPC was first proposed in 1980 by Hans Werner Strube. |
https://en.wikipedia.org/wiki/Prince%20Henry%20Hospital%2C%20Sydney | The Prince Henry Hospital site, formerly known as the Prince Henry Hospital, Sydney, is a heritage-listed former teaching hospital and infectious diseases hospital and now UNSW teaching hospital and spinal rehabilitation unit located at 1430 Anzac Parade, Little Bay, City of Randwick, New South Wales, Australia. It was designed by the NSW Colonial Architect and NSW Government Architect and built from 1881 by the NSW Public Works Department. It is also known as Prince Henry Hospital and The Coast Hospital. The property is owned by Landcom, an agency of the Government of New South Wales. It was added to the New South Wales State Heritage Register on 2 May 2003.
History
Aboriginal historical context
The greater Sydney region has been inhabited by Aboriginal people for at least 20,000 years with dated sheltered occupation sites occurring in the Blue Mountains and its foothills. Aboriginal occupation of coastal NSW has also been dated to extend back to at least 20,000 years before present at Burrill Lake on the South Coast and 17,000 years before present at Bass Point. At the time of these periods of occupation, both sites would have been located within hinterland areas situated some distance away from the coast. In the case of Burrill Lake, the sea would have been up to 16 km further east than at present and the site would have been located within an inland environment drained by rivers, creeks and streams.
There are no other Pleistocene sites recorded on the Sydney Coast. There are however two sites that have been dated to the early Holocene around 7,000 to 8,000 years ago. These are located at the current Prince of Wales Hospital site (a hearth dated to 7,800 years ago) and a rock shelter site at Curracurrang. It is likely that many coastal Aboriginal sites of a similar age within the Sydney region have been submerged and/or destroyed by sea-level changes which have occurred in eastern Australia during the last 20,000 years. In general terms, the majority of sites reco |
https://en.wikipedia.org/wiki/Will%20Rogers%20phenomenon | The Will Rogers phenomenon, also called the Okie Paradox, is when moving an observation from one group to another increases the average of both groups. It is named after a joke by the comedian Will Rogers in the 1930s about migration during the Great Depression:
All Rogers was attempting to say was that, in his view, the least intelligent Okie is always smarter than the most intelligent Californian.
The apparent paradox comes from the fall in intelligence of both groups, which makes it seem as though intelligence has been "destroyed." However, the overall population maintains the same average intelligence: moving a person from a low-intelligence group to a high-intelligence group makes the high-intelligence group larger, so the population mean (which is a weighted average of the two groups' intelligence) is unaffected.
Numerical examples
Consider the sets R = {1, 2, 3, 4} and S = {5, 6, 7, 8, 9}, whose arithmetic means are 2.5 and 7, respectively.
If 5 is moved from S to R, then the arithmetic mean of R increases to 3, and the arithmetic mean of S increases to 7.5, even though the total set of numbers themselves, and therefore their overall average, have not changed.
Consider this more dramatic example, where the arithmetic means of sets R and S, are 1.5 and , respectively:
If 99 is moved from S to R, then the arithmetic means increase to 34 and . The number 99 is orders of magnitude above 1 and 2, and orders of magnitude below and .
The element which is moved does not have to be the very lowest or highest of its set; it merely has to have a value that lies between the means of the two sets. And the sets themselves can have overlapping ranges. Consider this example:
If 10, which is larger than R's mean of 7 and smaller than S's mean of 12, is moved from S to R, the arithmetic means still increase slightly, to 7.375 and 12.333.
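A quick numeric check of the first example, assuming the sets R = {1, 2, 3, 4} and S = {5, 6, 7, 8, 9} (a standard choice consistent with the stated means):
```python
# Moving the element 5 from S to R raises both means, while the overall mean is unchanged.
R, S = [1, 2, 3, 4], [5, 6, 7, 8, 9]
mean = lambda xs: sum(xs) / len(xs)
print(mean(R), mean(S))            # 2.5 7.0
R2, S2 = R + [5], [x for x in S if x != 5]
print(mean(R2), mean(S2))          # 3.0 7.5
print(mean(R + S), mean(R2 + S2))  # 5.0 5.0  (the population mean is unaffected)
```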
Stage migration
One real-world example of the phenomenon is seen in the medical concept of cancer stage migration, which led to clinician Alvan Feinstein c |
https://en.wikipedia.org/wiki/Cannibalism%20in%20poultry | Cannibalism in poultry is the act of one individual of a poultry species consuming all or part of another individual of the same species as food. It commonly occurs in flocks of domestic hens reared for egg production, although it can also occur in domestic turkeys, pheasants and other poultry species. Poultry create a social order of dominance known as pecking order. When pressure occurs within the flock, pecking can increase in aggression and escalate to cannibalism. Cannibalism can occur as a consequence of feather pecking which has caused denuded areas and bleeding on a bird's skin. Cannibalism can cause large mortality rates within the flock and large decreases in production due to the stress it causes. Vent pecking, sometimes called 'cloacal cannibalism', is considered to be a separate form of cannibalistic pecking as this occurs in well-feathered birds and only the cloaca is targeted. There are several causes that can lead to cannibalism such as: light and overheating, crowd size, nutrition, injury/death, genetics and learned behaviour. Research has been conducted to attempt to understand why poultry engage in this behaviour, as it is not totally understood. There are known methods of control to reduce cannibalism such as crowd size control, beak trimming, light manipulation, perches, selective genetics and eyewear.
Motivational basis
Poultry species which exhibit cannibalism are omnivores. For example, hens in the wild often scratch at the soil to search for seeds, insects and even larger animals such as lizards or young mice, although they are mainly herbivorous in adulthood. Feather pecking is often the initial cause of an injury which then attracts the cannibalistic pecking of other birds – perhaps as re-directed foraging or feeding behaviour. In the close confines of modern farming systems, the increased pecking attention is easily observed by multiple birds which join in the attack, and often the escape attempts of the cannibalised bird attract more |
https://en.wikipedia.org/wiki/Field%20stain | Field stain is a histological method for staining of blood smears. It is used for staining thick blood films in order to discover malarial parasites. Field's stain is a version of a Romanowsky stain, used for rapid processing of the specimens.
Field's stain consists of two parts - Field's stain A is methylene blue and Azure 1 dissolved in phosphate buffer solution; Field's stain B is Eosin Y in buffer solution. Field stain is named after physician John William Field, who developed it in 1941.
Additional images |
https://en.wikipedia.org/wiki/Innovation%20butterfly | The innovation butterfly is a metaphor that describes how seemingly minor perturbations (disturbances or changes) to project plans in a system connecting markets, demand, product features, and a firm's capabilities can steer the project, or an entire portfolio of projects, down an irreversible path in terms of technology and market evolution.
Origins
The metaphor was developed by researchers Anderson and Joglekar. It was conceived as a specific instance of the more general 'butterfly effect' encountered in chaos theory.
How it works
The innovation butterfly arises because many innovation systems are made up of a large number of elements that interact with each other via several non-linear feedback loops containing embedded delays, thus constituting a complex system.
Perturbations can come from decisions made within the firm or from those made by its competitors, or they can result from external forces such as government legislation or environmental regulations, or unexpected spikes in the price of oil. How the innovation system evolves as a result of the innovation butterfly can lead ultimately to an innovative firm's success or failure.
Complex systems, in domains such as physics, biology, or sociology, are known to be prone to both path dependence and emergent behavior. What makes the behavior of the innovation butterfly different is market selection, along with biases in individual and group decision making within distributed innovation settings, which may influence the emergent behavior. Furthermore, managers in most fields of business endeavor to reduce uncertainty in order to better manage risk. In innovation settings, however, because success is based upon creativity, managers must actively embrace uncertainty. This leads to a management conundrum because innovation managers and management systems must encourage the potential for a butterfly effect but then must also learn how to cope with its aftermath.
How innovation butterflies are 'chased' is highly |
https://en.wikipedia.org/wiki/Concatenation%20theory | Concatenation theory, also called string theory, character-string theory, or theoretical syntax, studies character strings over finite alphabets of characters, signs, symbols, or marks. String theory is foundational for formal linguistics, computer science, logic, and metamathematics especially proof theory. A generative grammar can be seen as a recursive definition in string theory.
The most basic operation on strings is concatenation: connecting two strings to form a longer string whose length is the sum of the lengths of those two strings. ABCDE is the concatenation of AB with CDE; in symbols, ABCDE = AB ^ CDE. Strings, and concatenation of strings, can be treated as an algebraic system with some properties resembling those of the addition of integers; in modern mathematics, this system is called a free monoid.
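A minimal illustration of these monoid properties in Python, using + for the concatenation written ^ above:
```python
# Strings under concatenation form a free monoid: the operation is associative
# and the empty string is the identity element.
AB, CDE = "AB", "CDE"
assert AB + CDE == "ABCDE"                      # ABCDE = AB ^ CDE
assert (AB + CDE) + "F" == AB + (CDE + "F")     # associativity
assert "" + AB == AB + "" == AB                 # "" is the identity
assert len(AB + CDE) == len(AB) + len(CDE)      # lengths add, like integer addition
```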
In 1956 Alonzo Church wrote: "Like any branch of mathematics, theoretical syntax may, and ultimately must, be studied by the axiomatic method". Church was evidently unaware that string theory already had two axiomatizations from the 1930s: one by Hans Hermes and one by Alfred Tarski. Coincidentally, the first English presentation of Tarski's 1933 axiomatic foundations of string theory appeared in 1956 – the same year that Church called for such axiomatizations. As Tarski himself noted using other terminology, serious difficulties arise if strings are construed as tokens rather than types in the sense of Peirce's type-token distinction. |
https://en.wikipedia.org/wiki/ESafe | ESafe Protect, formerly known as Eliashim Antivirus, is a range of software security products developed by EliaShim Limited, based in Haifa, Israel. The distribution of these products was managed by eSafe Technologies Inc, located in Seattle, United States.
The program
EliaShim was acquired by Aladdin Knowledge Systems in December 1998. The distribution of the consumer desktop version, eSafe Protect, was managed by Aladdin Knowledge Systems until its discontinuation in 2002.
In the following years, the eSafe brand had transformed into a gateway-based content security product, which Aladdin Knowledge Systems sold as an integrated security appliance.
In March 2009, Aladdin Knowledge Systems merged eSafe Protect with SafeNet Inc., which led to the evolution of eSafe into the SafeNet eSafe Content Security product line. |
https://en.wikipedia.org/wiki/Pilotto%20syndrome | Pilotto syndrome is a rare syndrome which affects the face, heart, and back. The syndrome can cause a cleft lip and palate, scoliosis, and mental retardation. The Office of Rare Diseases and National Institutes of Health have classified this syndrome as affecting less than 200,000 people in the United States. |
https://en.wikipedia.org/wiki/Signor%E2%80%93Lipps%20effect | The Signor–Lipps effect is a paleontological principle proposed in 1982 by Philip W. Signor and Jere H. Lipps which states that, since the fossil record of organisms is never complete, neither the first nor the last organism in a given taxon will be recorded as a fossil. The Signor–Lipps effect is often applied specifically to cases of the youngest-known fossils of a taxon failing to represent the last appearance of an organism. The inverse, regarding the oldest-known fossils failing to represent the first appearance of a taxon, is alternatively called the Jaanusson effect after researcher Valdar Jaanusson, or the Sppil–Rongis effect (Signor–Lipps spelled backwards).
One famous example is the coelacanth, which was thought to have become extinct in the very late Cretaceous—until a live specimen was caught in 1938. The animals known as "Burgess Shale-type fauna" are best known from rocks of the Early and Middle Cambrian periods. Since 2006, though, a few fossils of similar animals have been found in rocks from the Ordovician, Silurian, and Early Devonian periods, in other words up to 100 million years after the Burgess Shale. The particular way in which such animals have been fossilized may depend on types of ocean chemistry that were present for limited periods of time.
But the Signor–Lipps effect is more important for the difficulties it raises in paleontology:
It makes it very difficult to be confident about the timing and speed of mass extinctions, and this makes it difficult to test theories about the causes of mass extinctions. For example, the extinction of the dinosaurs was long thought to be a gradual process, but evidence collected since the late 1980s suggests it was abrupt, which is consistent with the idea that an asteroid impact caused it.
The uncertainty about when a taxon first appeared makes it difficult to be confident about the ancestry of specific genera. For example, if the earliest-known fossil of genus X is much earlier than the earliest-kno |
https://en.wikipedia.org/wiki/Traitor%20tracing | Traitor tracing schemes help trace the source of leaks when secret or proprietary data is sold to many customers.
In a traitor tracing scheme, each customer is given a different personal decryption key.
(Traitor tracing schemes are often combined with conditional access systems so that, once the traitor tracing algorithm identifies a personal decryption key associated with the leak, the content distributor can revoke that personal decryption key, allowing honest customers to continue to watch pay television while the traitor and all the unauthorized users using the traitor's personal decryption key are cut off.)
Traitor tracing schemes are used in pay television to discourage pirate decryption – to discourage legitimate subscribers from giving away decryption keys.
Traitor tracing schemes are ineffective if the traitor rebroadcasts the entire (decrypted) original content.
There are other kinds of schemes that discourage pirate rebroadcast – i.e., discourage legitimate subscribers from giving away decrypted original content. These other schemes use tamper-resistant digital watermarking to generate different versions of the original content. Traitor tracing key assignment schemes can be translated into such digital watermarking schemes.
Traitor tracing is a copyright infringement detection system which works by tracing the source of leaked files rather than by direct copy protection. The method is that the distributor adds a unique salt to each copy given out. When a copy of it is leaked to the public, the distributor can check the value on it and trace it back to the "leak".
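A toy sketch of that bookkeeping, with hypothetical names: the distributor derives a per-customer tag from a secret salt, stores the mapping, and looks up the tag recovered from a leaked copy. Real schemes additionally need collusion resistance, which this sketch does not provide.
```python
import hashlib

# Hypothetical illustration of the per-copy salt idea; names and scheme are made up.
DISTRIBUTOR_SECRET = b"distributor-secret"

def copy_tag(customer_id: str) -> str:
    """Tag embedded in the copy issued to this customer."""
    return hashlib.sha256(DISTRIBUTOR_SECRET + customer_id.encode()).hexdigest()[:16]

issued = {copy_tag(c): c for c in ["alice", "bob", "carol"]}  # distributor's database

tag_found_in_leak = copy_tag("bob")        # tag recovered from a leaked copy
print("leak traced to:", issued[tag_found_in_leak])
```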
Primary methods
Activation controls
The main concept is that each licensee (the user) is given a unique key which unlocks the software or allows the media to be decrypted.
If the key is made public, the content owner then knows exactly who did it from their database of assigned codes.
A major attack on this strategy is the key generator (keygen). By reverse engineering the software, the c |
https://en.wikipedia.org/wiki/Sodium%20aluminosilicate | Sodium aluminosilicate refers to compounds which contain sodium, aluminium, silicon and oxygen, and which may also contain water. These include synthetic amorphous sodium aluminosilicate, a few naturally occurring minerals and synthetic zeolites. Synthetic amorphous sodium aluminosilicate is widely used as a food additive, E 554.
Amorphous sodium aluminosilicate
This substance is produced with a wide range of compositions and has many different applications. It is encountered as an additive E 554 in food where it acts as an anticaking (free flow) agent. As it is manufactured with a range of compositions it is not strictly a chemical compound with a fixed stoichiometry. One supplier quotes a typical analysis for one of their products as 14SiO2·Al2O3·Na2O·3H2O,(Na2Al2Si14O32·3H2O).
As of April 1, 2012, the US FDA has approved sodium aluminosilicate (sodium silicoaluminate) for direct contact with consumable items under 21 CFR 182.2727. Sodium aluminosilicate is used as a molecular sieve in medicinal containers to keep the contents dry.
Sodium aluminosilicate may also be listed as:
aluminium sodium salt
sodium silicoaluminate
aluminosilicic acid, sodium salt
sodium aluminium silicate
aluminum sodium silicate
sodium silico aluminate
sasil
As a problem in industrial processes
The formation of sodium aluminosilicate makes the Bayer process uneconomical for bauxites high in silica.
Minerals sometimes called sodium aluminosilicate
Naturally occurring minerals that are sometimes given the chemical name, sodium aluminosilicate include albite (NaAlSi3O8, an end-member of the plagioclase series) and jadeite (NaAlSi2O6).
Synthetic zeolites sometimes called sodium aluminosilicate
Synthetic zeolites have complex structures and examples (with structural formulae) are:
Na12Al12Si12O48·27H2O, zeolite A (Linde type A sodium form, NaA), used in laundry detergents
Na16Al16Si32O96·16H2O, Analcime, IUPAC code ANA
Na12Al12Si12O48·q H2O, Losod
Na384Al384Si384O1536·518H2O, Lind |
https://en.wikipedia.org/wiki/Virtual%20telecine | A virtual telecine is a piece of video equipment that can play back data files in real time. The colorist-video operator controls the virtual telecine like a normal telecine, although without controls like focus and framing. The data files can be from a Spirit DataCine, motion picture film scanner (like a Cineon), CGI animation computer, or an Acquisition professional video camera. The normal input data file standard is DPX. The output of data files are often used in digital intermediate post-production using a film recorder for film-out. The control room for the virtual telecine is called the color suite.
The 2000 movie O Brother, Where Art Thou? was scanned with Spirit DataCine, color corrected with a VDC-2000 and a Pandora Int. Pogle Color Corrector with MegaDEF. A Kodak Lightning II film recorder was used to output the data back on to film.
Virtual telecines are also used in film restoration.
Another advantage of a virtual telecine is that once the film is on the storage array, the frames may be played over and over again without damage or dirt to the film. This would be the case for outputting to different TV standards (NTSC or PAL) or formats (pan and scan, letterboxed, or other aspect ratios). Restoration, special effects, color grading, and other changes can be applied to the data-file frames before playout.
A virtual telecine is like a "tape to tape" color correction process, but with the differences of higher resolution (2K or 4K) and the use of film restoration tools along with standards and aspect-ratio tools.
2k virtual DataCine products
First virtual telecine by Philips, now Grass Valley a Thomson SA Brand:
VDC-2000 Virtual DataCine
Specter FS Virtual DataCine
These are able to play out 2k data files in non-linear real time. Size, rotation and color correction-color grading are all able to be done in real time controlled by a telecine color corrector. A Silicon Graphics-SGI computer, an Origin 2000, is used to play the data files to "Spirit DataCine hardware". The Vi |
https://en.wikipedia.org/wiki/Minimum%20k-cut | In mathematics, the minimum -cut is a combinatorial optimization problem that requires finding a set of edges whose removal would partition the graph to at least connected components. These edges are referred to as -cut. The goal is to find the minimum-weight -cut. This partitioning can have applications in VLSI design, data-mining, finite elements and communication in parallel computing.
Formal definition
Given an undirected graph $G = (V, E)$ with an assignment of weights to the edges $w : E \to \mathbb{N}$ and an integer $k \in \{2, 3, \ldots, |V|\}$, partition $V$ into $k$ disjoint sets $F = \{C_1, C_2, \ldots, C_k\}$ while minimizing
$$\sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \;\sum_{\substack{v \in C_i,\; u \in C_j \\ \{v, u\} \in E}} w(\{v, u\}).$$
For a fixed k, the problem is polynomial-time solvable in $O(|V|^{k^2})$. However, the problem is NP-complete if k is part of the input. It is also NP-complete if we specify k vertices and ask for the minimum k-cut which separates these vertices among each of the sets.
Approximations
Several approximation algorithms exist with an approximation factor of 2 − 2/k. A simple greedy algorithm that achieves this approximation factor computes a minimum cut in each of the connected components and removes the lightest one; this requires a number of maximum flow computations. Another algorithm achieving the same guarantee uses the Gomory–Hu tree representation of minimum cuts; constructing the Gomory–Hu tree itself requires |V| − 1 maximum flow computations. Yet, it is easier to analyze the approximation factor of the second algorithm. Moreover, under the small set expansion hypothesis (a conjecture closely related to the unique games conjecture), the problem is NP-hard to approximate to within a factor of (2 − ε) for every constant ε > 0, meaning that the aforementioned approximation algorithms are essentially tight for large k.
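A heuristic sketch of the greedy splitting idea, assuming the networkx library is available (an illustration, not the exact algorithm analysed above): repeatedly find the cheapest global minimum cut among the current components and remove its edges until k components remain.
```python
import networkx as nx

def greedy_k_cut(G, k):
    """Greedy splitting sketch: cut the cheapest component each round until k components exist."""
    H = G.copy()
    while nx.number_connected_components(H) < k:
        best = None
        for comp in nx.connected_components(H):
            sub = H.subgraph(comp)
            if sub.number_of_edges() == 0:
                continue  # an isolated vertex cannot be split further
            value, (side_a, _side_b) = nx.stoer_wagner(sub, weight="weight")
            if best is None or value < best[0]:
                best = (value, set(side_a))
        if best is None:
            break  # not enough edges left to create more components
        _, in_a = best
        cut_edges = [(u, v) for u, v in H.edges() if (u in in_a) != (v in in_a)]
        H.remove_edges_from(cut_edges)
    return H  # the connected components of H form the k-way partition
```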
A variant of the problem asks for a minimum weight k-cut where the output partitions have pre-specified sizes. This problem variant is approximable to within a factor of 3 for any fixed k if one restricts the graph to a metric space, meaning a complete graph that satisfies the triangle inequality. More recently |
https://en.wikipedia.org/wiki/Lamb%E2%80%93M%C3%B6ssbauer%20factor | In physics, the Lamb–Mössbauer factor (LMF, after Willis Lamb and Rudolf Mössbauer) or elastic incoherent structure factor (EISF) is the ratio of elastic to total incoherent neutron scattering, or the ratio of recoil-free to total nuclear resonant absorption in Mössbauer spectroscopy. The corresponding factor for coherent neutron or X-ray scattering is the Debye–Waller factor; often, that term is used in a more generic way to include the incoherent case as well.
When first reporting on recoil-free resonance absorption, Mössbauer (1959) cited relevant theoretical work by Lamb (1939). The first use of the term "Mössbauer–Lamb factor" seems to be by Tzara (1961); from 1962 on, the form "Lamb–Mössbauer factor" came into widespread use.
Singwi and Sjölander (1960) pointed out the close relation to incoherent neutron scattering. With the invention of backscattering spectrometers, it became possible to measure the Lamb–Mössbauer factor as a function of the wavenumber (whereas Mössbauer spectroscopy operates at a fixed wavenumber). Subsequently, the term elastic incoherent structure factor became more frequent. |
https://en.wikipedia.org/wiki/No-broadcasting%20theorem | In physics, the no-broadcasting theorem is a result of quantum information theory. In the case of pure quantum states, it is a corollary of the no-cloning theorem. The no-cloning theorem for pure states says that it is impossible to create two copies of an unknown state given a single copy of the state. Since quantum states cannot be copied in general, they cannot be broadcast. Here, the word "broadcast" is used in the sense of conveying the state to two or more recipients. For multiple recipients to each receive the state, there must be, in some sense, a way of duplicating the state. The no-broadcast theorem generalizes the no-cloning theorem for mixed states.
The theorem also includes a converse: if two quantum states do commute, there is a method for broadcasting them: they must have a common basis of eigenstates diagonalizing them simultaneously, and the map that clones every state of this basis is a legitimate quantum operation, requiring only physical resources independent of the input state to implement—a completely positive map. A corollary is that there is a physical process capable of broadcasting every state in some set of quantum states if, and only if, every pair of states in the set commutes. This broadcasting map, which works in the commuting case, produces an overall state in which the two copies are perfectly correlated in their eigenbasis.
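Stated symbolically (a standard formulation consistent with the description above, with Λ denoting the candidate broadcasting channel):
```latex
% A channel \Lambda : A \to A \otimes B broadcasts a state \rho_i if both marginals
% of its output reproduce \rho_i:
\operatorname{Tr}_B\!\bigl[\Lambda(\rho_i)\bigr] = \rho_i
\qquad\text{and}\qquad
\operatorname{Tr}_A\!\bigl[\Lambda(\rho_i)\bigr] = \rho_i .
% Such a \Lambda exists for every state in a set \{\rho_i\} if and only if the
% states commute pairwise: [\rho_i, \rho_j] = 0 for all i, j.
```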
Remarkably, the theorem does not hold if more than one copy of the initial state is provided: for example, broadcasting six copies starting from four copies of the original state is allowed, even if the states are drawn from a non-commuting set. The purity of the state can even be increased in the process, a phenomenon known as superbroadcasting.
Generalized No-Broadcast Theorem
The generalized quantum no-broadcasting theorem, originally proven by Barnum, Caves, Fuchs, Jozsa and Schumacher for mixed states of finite-dimensional quantum systems, says that given a pair of quantum states which do not commu |
https://en.wikipedia.org/wiki/Haworthiopsis%20%C3%97%20lisbonensis | Haworthiopsis × lisbonensis, formerly Haworthia lisbonensis, is an ornamental succulent plant, considered a hybrid of unknown parentage.
History
Haworthiopsis × lisbonensis is named after Lisbon, as the original plant was discovered in the collection of the Botanical Garden of the University of Lisbon by António Gomes Amaral. Gomes Amaral brought it to the attention of Flávio Resende, a Portuguese botanist, who described it as Haworthia lisbonensis in 1946. It has since been moved to the genus Haworthiopsis, and it is currently considered a hybrid, rather than a true species.
Flávio Resende shared living material with several of his correspondents around the world, and the plant is still found in cultivation. |
https://en.wikipedia.org/wiki/Endomyocardial%20biopsy | Endomyocardial biopsy (EMB) is an invasive procedure used routinely to obtain small samples of heart muscle, primarily for detecting rejection of a donor heart following heart transplantation. It is also used as a diagnostic tool in some heart diseases.
A bioptome is used to gain access to the heart via a sheath inserted into the right internal jugular or less commonly the femoral vein. Monitoring during the procedure consists of performing ECGs and blood pressures. Guidance and confirmation of correct positioning of the bioptome is made by echocardiography or fluoroscopy.
The risk of complications is less than 1% when performed by an experienced physician in a specialist centre. Serious complications include perforation of the heart with pericardial tamponade, haemopericardium, AV block, tricuspid regurgitation and pneumothorax.
EMB, sampling myocardium was first pioneered in Japan by S. Sakakibra and S. Konno in 1962.
Indications
The main reason for performing an EMB is to assess allograft rejection following heart transplantation and sometimes to evaluate cardiomyopathy, some heart disease research and ventricular arrhythmias, or unexplained ventricular dysfunction.
Transplant monitoring
Visualising the microscopic appearance of the heart muscle allows the detection of cell-mediated or antibody-mediated rejection and is recommended episodically during the first year after heart transplantation. Occasionally, monitoring continues beyond one year.
The use of EMB in heart transplant rejection surveillance remains the gold-standard test, although pre-test predictors of rejection, such as cardiac magnetic resonance imaging (CMR) and gene expression profiling, are increasingly used.
Myocardial diseases
EMB has a role in the diagnosis of viral myocarditis and inflammatory myocarditis.
Procedure
EMB of the right ventricle via the internal jugular vein is standard after heart transplant. A bioptome is used to gain access to the heart via a sheath inserted into the rig |
https://en.wikipedia.org/wiki/Sudan%20%28rhinoceros%29 | Sudan (1973 – 19 March 2018) was a captive northern white rhinoceros (Ceratotherium simum cottoni) that lived at the Safari Park Dvůr Králové in the Czech Republic from 1975 to 2009 and the rest of his life at the Ol Pejeta Conservancy in Laikipia, Kenya. At the time of his death, he was one of only three living northern white rhinoceroses in the world, and the last known male of his subspecies. Sudan was euthanised on 19 March 2018, after suffering from "age-related complications".
Capture in Africa
A group of six northern white rhinoceros, including the two-year-old Sudan, were captured in Shambe, South Sudan by animal trappers employed by Chipperfield's Circus in February 1975 working under agreement with Josef Vágner, the then-director of the Dvůr Králové Zoo in Czechoslovakia (now the Czech Republic). The captured group comprised two males (Sudan and Saut) and four females (Nola, Nuri, Nadi and Nesari).
The number of northern white rhinos was already considered to be only around 700 animals in the wild. For many environmentalists, leaving the animals in nature was the only acceptable way of preserving the already rare subspecies. The Dvůr Králové Zoo and their Chipperfield partners were then criticized for the capture. The zoo was specializing in African fauna and already had the largest collection outside of Africa.
Life in the Czech Republic
In 1975 the group, including Sudan, was shipped to the Dvůr Králové Zoo for their northern white rhinoceros display. The zoo was the only one in the world where northern white rhinos had successfully given birth, with the last calf being born in 2000.
Two years later they were joined by Nasima, who originated from Uganda but came from Knowsley Safari Park near Prescot, England, and Saut was later lent to San Diego Zoo in the United States.
Breeding
After 1980, the northern white rhinos were wiped out in Uganda and Sudan, and 13 were left in Garamba National Park in Zaire (now Democratic Republic of the Congo). The Co |
https://en.wikipedia.org/wiki/TOPS | Total Operations Processing System (TOPS) is a computer system for managing railway locomotives and rolling stock, known for many years of use in the United Kingdom.
TOPS was originally developed between the Southern Pacific Railroad (SP), Stanford University and IBM as a replacement for paper-based systems for managing rail logistics. A jointly-owned consultancy company, TOPS On-Line Inc., was established in 1960 with the goal of implementing TOPS, as well as selling it to third parties. Development was protracted, requiring around 660 man-years of effort to produce a releasable build. During mid-1968, the first phase of the system was introduced on the SP, and quickly proved its advantages over the traditional methods practiced prior to its availability.
In addition to SP, TOPS was widely adopted throughout North America and beyond. While it was at one point in widespread use across many of the United States railroads, the system has been perhaps most prominently used in the United Kingdom. During 1971, the country's nationalised rail operation, British Rail (BR), opted to procure and integrate TOPS into its operations. The acquisition of an existing system rather than develop an indigenous programme was reasoned to be both cheaper and quicker to implement; it was noted, however, that TOPS was not capable of performing all desired functions. Since its implementation during the mid 1970s, both BR and its successors have continued to operate the system. SP itself has developed a newer system called the Terminal Information Processing System (TIPS), which replaced TOPS entirely during 1980.
Early development
During the 1950s and 1960s, it was increasingly recognised that the adoption of computer-based management systems could provide substantial benefits in many operations, particularly those involving logistics. Consequently, by the 1960s, railways in several countries, including Japan, Canada, and the United States, had begun to develop and introduce |
https://en.wikipedia.org/wiki/Wine%20Museum%20and%20Enoteca | The Wine Museum and Enoteca (in Portuguese, Museu do Vinho e Enoteca) is a Brazilian museum and enoteca, located in Porto Alegre in the old building of the plant's gas tank. The museum carries approximately 250 varieties of wines produced by 32 wineries in Rio Grande do Sul, with descriptive guidance products. The Enoteca is the only wine museum in Brazil, with international standards comparable to the French, and the second in Latin America in the public domain. The collection of the museum also keeps parts and equipment used in the initial period of industrialization of wine. |
https://en.wikipedia.org/wiki/Fragaria%20%C3%97%20Comarum%20hybrids | There are several commercially important hybrids between Fragaria and Comarum species in existence. A name for Fragaria × Comarum is available as × Comagaria Büscher & G.H. Loos [Veröff. Bochumer Bot. Ver. 2(1): 6. 2010], along with the combination × Comagaria rosea (Mabb.) Büscher & G.H. Loos.
The first-generation hybrids have been recorded as heptaploid, i.e. with seven sets of chromosomes; four sets of chromosomes came from their octoploid strawberry parent, and three from their hexaploid Comarum parent.
Commercial cultivars
All commercial cultivars resemble strawberries more closely than they do Comarum. They are all vigorous, and produce runners profusely.
'Frel', also known as , is a patented hybrid strawberry that is the result of crossing the garden strawberry Fragaria × ananassa subsp. cuneifolia (syn. Fragaria grandiflora) with Marsh Cinquefoil, Comarum palustre (formerly Potentilla palustris), followed by backcrossing to strawberry. The plant is grown for ornamental purposes. It has bright pink flowers (in contrast to the white flowers of naturally occurring strawberry species) and it produces a small number of strawberries.
'Franor' (marketed as ) developed as a sport of 'Frel', and features a more intense red color in the flowers.
'Gerald Straley' is a seedling of 'Frel', selected at Heronswood in Washington for its bright red flowers. It was named after the former curator of the University of British Columbia Botanical Gardens.
'Lipstick' is a variety developed in 1966 from a cross between the Marsh Cinquefoil, Comarum palustre and the Garden Strawberry, Fragaria × ananassa. It has deep pink to red flowers, and slightly larger, more flavorful berries than 'Frel'. It, too, is grown for ornamental purposes. |
https://en.wikipedia.org/wiki/Bartlett%27s%20method | In time series analysis, Bartlett's method (also known as the method of averaged periodograms) is used for estimating power spectra. It provides a way to reduce the variance of the periodogram in exchange for a reduction of resolution, compared to standard periodograms. A final estimate of the spectrum at a given frequency is obtained by averaging the estimates from the periodograms (at the same frequency) derived from non-overlapping portions of the original series.
The method is used in physics, engineering, and applied mathematics. Common applications of Bartlett's method are frequency response measurements and general spectrum analysis.
The method is named after M. S. Bartlett who first proposed it.
Definition and procedure
Bartlett’s method consists of the following steps:
The original N point data segment is split up into K (non-overlapping) data segments, each of length M
For each segment, compute the periodogram by computing the discrete Fourier transform (DFT version which does not divide by M), then computing the squared magnitude of the result and dividing this by M.
Average the result of the periodograms above for the K data segments.
The averaging reduces the variance, compared to the original N point data segment.
The end result is an array of power measurements vs. frequency "bin".
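A minimal NumPy sketch of the procedure above (the segment count K and the scaling follow the steps as described, but details such as normalisation vary between texts and libraries):

import numpy as np

def bartlett_psd(x, num_segments):
    # Bartlett's method: average the periodograms of K non-overlapping segments.
    x = np.asarray(x, dtype=float)
    K = num_segments
    M = len(x) // K                      # length of each segment
    segments = x[:K * M].reshape(K, M)   # split into K non-overlapping segments
    # Periodogram of each segment: squared magnitude of the DFT, divided by M
    periodograms = np.abs(np.fft.rfft(segments, axis=1)) ** 2 / M
    # Averaging over the K segments reduces the variance of the estimate
    psd = periodograms.mean(axis=0)
    freqs = np.fft.rfftfreq(M)           # frequency "bins" in cycles per sample
    return freqs, psd

# Example: for white noise the averaged estimate should be roughly flat
rng = np.random.default_rng(0)
freqs, psd = bartlett_psd(rng.standard_normal(4096), num_segments=8)
print(psd.mean())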
Related methods
The Welch method: this is a method that uses a modified version of Bartlett’s method in which the portions of the series contributing to each periodogram are allowed to overlap.
Periodogram smoothing. |
https://en.wikipedia.org/wiki/Scientistic%20materialism | Scientistic materialism is a term used mainly by proponents of creationism and intelligent design to describe scientists who have a materialist worldview. The stance has been attributed to philosopher George Santayana.
History
The "Wedge Document" produced by the Discovery Institute, described materialism as denial of "the proposition that human beings are created in the image of God," and that humans are instead "animals or machines who inhabited a universe ruled by purely impersonal forces and whose behavior and very thoughts were dictated by the unbending forces of biology, chemistry and environment." The document states that materialism leads inevitably to "moral relativism" and denounces its "stifling dominance" in modern culture. By this definition, scientific materialism is linked to the more general version of materialism, which declares that the physical world is the only thing that exists and that nothing supernatural exists.
See also
Conflict thesis
Faith and rationality
Mechanistic materialism
Relationship between religion and science
Scientific mythology
Scientism |
https://en.wikipedia.org/wiki/Motor%20nerve | A motor nerve is a nerve that transmits motor signals from the central nervous system (CNS) to the muscles of the body. This is different from the motor neuron, which includes a cell body and branching of dendrites, while the nerve is made up of a bundle of axons. Motor nerves act as efferent nerves which carry information out from the CNS to muscles, as opposed to afferent nerves (also called sensory nerves), which transfer signals from sensory receptors in the periphery to the CNS. Efferent nerves can also connect to glands or other organs/tissues instead of muscles (and so motor nerves are not equivalent to efferent nerves). In addition, there are nerves that serve as both sensory and motor nerves called mixed nerves.
Structure and function
Motor nerve fibers transduce signals from the CNS to peripheral neurons of proximal muscle tissue. Motor nerve axon terminals innervate skeletal and smooth muscle, as they are heavily involved in muscle control. Motor nerves tend to be rich in acetylcholine vesicles, because a motor nerve is a bundle of motor axons that deliver motor signals and signal for movement and motor control. Calcium vesicles reside in the axon terminals of the motor nerve bundles. The high calcium concentration outside of presynaptic motor nerves increases the size of end-plate potentials (EPPs).
Protective tissues
Within motor nerves, each axon is wrapped by the endoneurium, which is a layer of connective tissue that surrounds the myelin sheath. Bundles of axons are called fascicles, which are wrapped in perineurium. All of the fascicles wrapped in the perineurium are wound together and wrapped by a final layer of connective tissue known as the epineurium. These protective tissues defend nerves from injury and pathogens, and help to maintain nerve function. Layers of connective tissue maintain the rate at which nerves conduct action potentials.
Spinal cord exit
Most motor pathways originate in the motor cortex of the brain. Signals run down th |
https://en.wikipedia.org/wiki/Vaccine%20wastage | Vaccine wastage is the number of vaccines that have not been administered during vaccine deployment in an immunization program. The wastage can occur at multiple stages of the deployment process, and can take place in both unopened and opened vials, or in oral admission. It is an expected part of vaccination deployment and is factored into the manufacturing process.
Prevalence
A 2018 study into Cambodia's national immunization program found wastage rates of 0% to 60% depending on location and vaccination type.
A study from India which collected Universal Immunisation Programme data from two different locations (Kangra and Pune districts) between January 2016 and December 2017 found wastage rates that differed according to vaccine type, reuse type, vial size, the transition from IPV (inactivated polio vaccine) dosage to fIPV (fractional inactivated polio vaccine), and geographical location. In both districts wastage increased as vial size increased from 5 to 10 dose vials. In Kangra, wastage observed in oral polio vaccine was 50.8%, while in Pune it was 14.3%. Wastage for a number of other vaccinations in the program was higher than what had been factored into the initial programme forecasting.
Parts of the United States have vaccine wastage tracking factored into the deployment process. Reasons for vaccine wastage are categorised as broken vial/syringe, lost or unaccounted for, open but not all doses administered, or drawn into a syringe but not administered. Other reasons for wastage include contamination, expiration and temperature issues. Vaccine wastage in the United States during its 2021 COVID-19 vaccination program was less than 1%, and was reported as low as 0.1%. In India, COVID-19 vaccine wastage was 6.5%, while in Scotland and Wales it was 1.8%.
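The percentages above are wastage rates; a commonly used definition (an assumption here, not quoted from the cited studies) divides the doses not administered by the doses issued. A minimal sketch:

def wastage_rate(doses_issued, doses_administered):
    # Assumed formula: (issued - administered) / issued * 100
    if doses_issued <= 0:
        raise ValueError("doses_issued must be positive")
    return (doses_issued - doses_administered) / doses_issued * 100.0

# Example: a 10-dose vial opened for a session in which 7 doses are given
print(wastage_rate(10, 7))  # 30.0 (percent)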
Reduction
Improving requirement estimates, transportation and logistics, wastage reporting, optimal session sizes and usage of syringes and needles with low dead volume are important factors in reducing was |
https://en.wikipedia.org/wiki/Discrete%20symmetry | In mathematics and geometry, a discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges. In mathematics and theoretical physics, a discrete symmetry is a symmetry under the transformations of a discrete group—e.g. a topological group with a discrete topology whose elements form a finite or a countable set.
One of the most prominent discrete symmetries in physics is parity symmetry. It manifests itself in various elementary physical quantum systems, such as the quantum harmonic oscillator and the electron orbitals of hydrogen-like atoms, by forcing wavefunctions to be even or odd. This in turn gives rise to selection rules that determine which transition lines are visible in atomic absorption spectra. |
https://en.wikipedia.org/wiki/Dnsmasq | dnsmasq is free software providing Domain Name System (DNS) caching, a Dynamic Host Configuration Protocol (DHCP) server, router advertisement and network boot features, intended for small computer networks.
dnsmasq has low requirements for system resources, can run on Linux, BSDs, Android and macOS, and is included in most Linux distributions. Consequently, it "is present in a lot of home routers and certain Internet of Things gadgets" and is included in Android.
Details
dnsmasq is a lightweight, easy to configure DNS forwarder, designed to provide DNS (and optionally DHCP and TFTP) services to a small-scale network. It can serve the names of local machines which are not in the global DNS.
dnsmasq's DHCP server supports static and dynamic DHCP leases, multiple networks and IP address ranges. The DHCP server integrates with the DNS server and allows local machines with DHCP-allocated addresses to appear in the DNS. dnsmasq caches DNS records, reducing the load on upstream nameservers and improving performance, and can be configured to automatically pick up the addresses of its upstream servers.
dnsmasq accepts DNS queries and either answers them from a small, local cache or forwards them to a real, recursive DNS server. It loads the contents of /etc/hosts, so that local host names which do not appear in the global DNS can be resolved. This also means that records added to your local /etc/hosts file with the format "0.0.0.0 annoyingsite.com" can be used to prevent references to "annoyingsite.com" from being resolved by your browser. This can quickly evolve into a local ad blocker when combined with adblocking site list providers. If done on a router, one can efficiently remove advertising content for an entire household or company.
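As a rough illustration of that hosts-file blocking, the sketch below writes entries in the "0.0.0.0 domain" format to a separate file (dnsmasq can read extra hosts files through its addn-hosts configuration option, if memory serves); the file name and domain list are placeholders.

from pathlib import Path

# Hypothetical domains to block; dnsmasq will answer queries for them with 0.0.0.0
blocked_domains = ["annoyingsite.com", "ads.example.net"]

# Write a separate blocklist rather than editing the system /etc/hosts directly
blocklist = Path("blocked-hosts")
blocklist.write_text("\n".join(f"0.0.0.0 {domain}" for domain in blocked_domains) + "\n")
print(blocklist.read_text())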
dnsmasq supports modern Internet standards such as IPv6 and DNSSEC, network booting with support for BOOTP, PXE and TFTP and also Lua scripting.
Some Internet service-providers rewrite the NXDOMAIN (domain does not exist) responses |
https://en.wikipedia.org/wiki/Replit | Replit (), formerly Repl.it, is an American start-up and an online integrated development environment (IDE). As software as a service (SaaS), Replit allows users to create online projects (called Repls, not to be confused with REPLs) and write code.
Replit has a global community for programmers to interact and offers Teams for Education, a product to assist in teaching programming in the classroom.
History
Replit was co-founded by programmers Amjad Masad, Faris Masad, and designer Haya Odeh in 2016. Once listed as a co-founder alongside Masad, Max Shawabkeh left the venture early on. Its name comes from the acronym REPL, which stands for "read–evaluate–print loop".
Before creating Replit, Amjad Masad worked in engineering roles at Yahoo and Facebook, where he built development tools. He also helped found Codecademy. Masad had come up with the idea for Replit over a decade before its creation.
In 2009, Amjad Masad sought to write implementations of other programming languages in JavaScript, but realized it was not practically feasible. He saw great leaps in browser and web technologies and was inspired by the web capabilities of Google Docs. He thought of the idea of being able to write and share code all in a web browser. He spent two years working with Haya Odeh to create an open-source product called "JSRepl", which allowed him to compile languages into JavaScript. It powered Udacity's and Codecademy's tutorials. After Masad became an early employee of Codecademy, the project was put off until years later, when he and Odeh decided to revive the project of a programming environment in a browser.
As Replit was taking shape, Masad and Odeh wanted to have "a real environment and not something emulated in the browser." The focus was first directed at the education market, and then later towards professional developers.
Since March 2021, "replit.com" has been the default domain name for the web service replacing the older "repl.it". This change was attributed to Masad' |
https://en.wikipedia.org/wiki/Mass-independent%20fractionation | Mass-independent isotope fractionation, or non-mass-dependent fractionation (NMD), refers to any chemical or physical process that acts to separate isotopes, where the amount of separation does not scale in proportion with the difference in the masses of the isotopes. Most isotopic fractionations (including typical kinetic fractionations and equilibrium fractionations) are caused by the effects of the mass of an isotope on atomic or molecular velocities, diffusivities or bond strengths. Mass-independent fractionation processes are less common, occurring mainly in photochemical and spin-forbidden reactions. Observation of mass-independently fractionated materials can therefore be used to trace these types of reactions in nature and in laboratory experiments.
Mass-independent fractionation in nature
The most notable examples of mass-independent fractionation in nature are found in the isotopes of oxygen and sulfur. The first example was discovered by Robert N. Clayton, Toshiko Mayeda, and Lawrence Grossman in 1973, in the oxygen isotopic composition of refractory calcium–aluminium-rich inclusions in the Allende meteorite. The inclusions, thought to be among the oldest solid materials in the Solar System, show a pattern of low 18O/16O and 17O/16O relative to samples from the Earth and Moon. Both ratios vary by the same amount in the inclusions, although the mass difference between 18O and 16O is almost twice as large as the difference between 17O and 16O. Originally this was interpreted as evidence of incomplete mixing of 16O-rich material (created and distributed by a large star in a supernova) into the Solar nebula. However, recent measurement of the oxygen-isotope composition of the Solar wind, using samples collected by the Genesis spacecraft, shows that the most 16O-rich inclusions are close to the bulk composition of the solar system. This implies that Earth, the Moon, Mars, and asteroids all formed from 18O- and 17O-enriched material. Photodissociation of carb |
https://en.wikipedia.org/wiki/ZRTP | ZRTP (composed of Z and Real-time Transport Protocol) is a cryptographic key-agreement protocol to negotiate the keys for encryption between two end points in a Voice over IP (VoIP) phone telephony call based on the Real-time Transport Protocol. It uses Diffie–Hellman key exchange and the Secure Real-time Transport Protocol (SRTP) for encryption. ZRTP was developed by Phil Zimmermann, with help from Bryce Wilcox-O'Hearn, Colin Plumb, Jon Callas and Alan Johnston and was submitted to the Internet Engineering Task Force (IETF) by Zimmermann, Callas and Johnston on March 5, 2006 and published on April 11, 2011 as .
Overview
ZRTP ("Z" is a reference to its inventor, Zimmermann; "RTP" stands for Real-time Transport Protocol) is described in the Internet Draft as a "key agreement protocol which performs Diffie–Hellman key exchange during call setup in-band in the Real-time Transport Protocol (RTP) media stream which has been established using some other signaling protocol such as Session Initiation Protocol (SIP). This generates a shared secret which is then used to generate keys and salt for a Secure RTP (SRTP) session." One of ZRTP's features is that it does not rely on SIP signaling for the key management, or on any servers at all. It supports opportunistic encryption by auto-sensing if the other VoIP client supports ZRTP.
This protocol does not require prior shared secrets or rely on a Public key infrastructure (PKI) or on certification authorities; in fact, ephemeral Diffie–Hellman keys are generated on each session establishment: this allows the complexity of creating and maintaining a trusted third-party to be bypassed.
These keys contribute to the generation of the session secret, from which the session key and parameters for SRTP sessions are derived, along with previously shared secrets (if any): this gives protection against man-in-the-middle (MiTM) attacks, so long as the attacker was not present in the first session between the two endpoints.
ZRTP can be u |
https://en.wikipedia.org/wiki/Zech%27s%20logarithm | Zech logarithms are used to implement addition in finite fields when elements are represented as powers of a generator α.
Zech logarithms are named after Julius Zech, and are also called Jacobi logarithms, after Carl G. J. Jacobi who used them for number theoretic investigations.
Definition
Given a primitive element α of a finite field, the Zech logarithm relative to the base α is defined by the equation
α^(Z_α(n)) = 1 + α^n,
which is often rewritten as
Z_α(n) = log_α(1 + α^n).
The choice of the base α is usually dropped from the notation when it is clear from the context.
To be more precise, Z_α is a function on the integers modulo the multiplicative order of α, and takes values in the same set. In order to describe every element, it is convenient to formally add a new symbol −∞, along with the definitions
α^(−∞) = 0, Z_α(−∞) = 0 and Z_α(e) = −∞,
where e is an integer satisfying α^e = −1, that is e = 0 for a field of characteristic 2, and e = (q − 1)/2 for a field of odd characteristic with q elements.
Using the Zech logarithm, finite field arithmetic can be done in the exponential representation:
α^m + α^n = α^m · (1 + α^(n − m)) = α^(m + Z_α(n − m)),
−α^n = α^(e + n),
α^m − α^n = α^(m + Z_α(e + n − m)),
α^m · α^n = α^(m + n),
(α^m)^(−1) = α^(−m).
These formulas remain true with our conventions with the symbol −∞, with the caveat that subtraction of −∞ is undefined. In particular, the addition and subtraction formulas need to treat −∞ as a special case.
This can be extended to arithmetic of the projective line by introducing another symbol satisfying and other rules as appropriate.
For fields of characteristic two,
Z_α(n) = m if and only if Z_α(m) = n.
Uses
For sufficiently small finite fields, a table of Zech logarithms allows an especially efficient implementation of all finite field arithmetic in terms of a small number of integer addition/subtractions and table look-ups.
The utility of this method diminishes for large fields where one cannot efficiently store the table. This method is also inefficient when doing very few operations in the finite field, because one spends more time computing the table than one does in actual calculation.
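A small Python sketch of such a table for GF(8), taking x^3 + x + 1 as an assumed example of a primitive polynomial (not necessarily the polynomial used in the example below). None stands for the symbol −∞, i.e. the zero element, and addition of two elements given as powers of α reduces to one table lookup plus exponent arithmetic.

# Zech-logarithm table for GF(2^3) generated by a root a of x^3 + x + 1
# (an assumed example polynomial). Field elements are held as exponents of a;
# None represents the extra symbol -infinity, i.e. the zero element.
order = 7                                 # multiplicative order of a in GF(8)

log_table = {}                            # bit-mask polynomial value -> exponent of a
value = 1                                 # a^0
for n in range(order):
    log_table[value] = n
    value <<= 1                           # multiply by a
    if value & 0b1000:                    # reduce modulo x^3 + x + 1 (bit mask 0b1011)
        value ^= 0b1011

exp_table = {n: v for v, n in log_table.items()}
# Zech logarithm: Z(n) with a^Z(n) = 1 + a^n; in characteristic 2, "+" is XOR
zech = {n: log_table.get(exp_table[n] ^ 1) for n in range(order)}

def add_powers(m, n):
    # Exponent of a^m + a^n, using a^m + a^n = a^m * (1 + a^(n-m)) = a^(m + Z(n-m))
    if m is None:
        return n
    if n is None:
        return m
    z = zech[(n - m) % order]
    return None if z is None else (m + z) % order

print(zech)               # Z(0) is None, since 1 + a^0 = 0 in characteristic 2
print(add_powers(1, 2))   # 4, because a + a^2 = a^4 with this polynomial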
Examples
Let be a root of the primitive polynomial . The traditional representation of elements of this field is as polynom |
https://en.wikipedia.org/wiki/Earth%20Gravitational%20Model | The Earth Gravitational Models (EGM) are a series of geopotential models of the Earth published by the National Geospatial-Intelligence Agency (NGA). They are used as the geoid reference in the World Geodetic System.
The NGA provides the models in two formats: as the series of numerical coefficients to the spherical harmonics which define the model, or a dataset giving the geoid height at each coordinate at a given resolution.
Three model versions have been published: EGM84 with n=m=180, EGM96 with n=m=360, and EGM2008 with n=m=2160. n and m are the degree and order of the harmonic coefficients; the higher they are, the more parameters the models have, and the more precise they are. EGM2008 also contains expansions to n=2190. Developmental versions of the EGM are referred to as Preliminary Gravitational Models (PGMs).
Each version of EGM has its own EPSG code as a vertical datum.
History
EGM84
The first EGM, EGM84, was defined as a part of WGS84. WGS84 combines the old GRS 80 with the then-latest data, namely available Doppler, satellite laser ranging, and Very Long Baseline Interferometry (VLBI) observations, and a new least squares method called collocation. This allowed a model with n=m=180 to be defined, providing a raster value for every half degree (30 arc-minutes) of latitude and longitude of the world. NIMA also computed and made available 30′×30′ mean altimeter-derived gravity anomalies from the GEOSAT Geodetic Mission. A 15′×15′ version is also available.
EGM96
EGM96 from 1996 is the result of a collaboration between the National Imagery and Mapping Agency (NIMA), the NASA Goddard Space Flight Center (GSFC), and the Ohio State University. It took advantage of new surface gravity data from many different regions of the globe, including data newly released from the NIMA archives. Major terrestrial gravity acquisitions by NIMA since 1990 include airborne gravity surveys over Greenland and parts of the Arctic and the Antarctic, surveyed by the Naval Research Lab (NRL) |
https://en.wikipedia.org/wiki/Wind%20direction | Wind direction is generally reported by the direction from which the wind originates. For example, a north or northerly wind blows from the north to the south; the exceptions are onshore winds (blowing onto the shore from the water) and offshore winds (blowing off the shore to the water). Wind direction is usually reported in cardinal (or compass) direction, or in degrees. Consequently, a wind blowing from the north has a wind direction referred to as 0° (360°); a wind blowing from the east has a wind direction referred to as 90°, etc.
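As a small illustration of the degree convention just described, the sketch below maps a wind direction in degrees (the direction the wind blows from) onto the usual 16-point compass, with each point spanning 22.5°:

# Map a meteorological wind direction in degrees (direction the wind blows FROM)
# to a 16-point compass label: 0/360 -> N, 90 -> E, 180 -> S, 270 -> W.
POINTS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
          "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def compass_point(degrees):
    # Each point covers 360/16 = 22.5 degrees, centred on the point itself
    index = int((degrees % 360) / 22.5 + 0.5) % 16
    return POINTS[index]

print(compass_point(0))    # N  (a northerly wind)
print(compass_point(90))   # E
print(compass_point(200))  # SSW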
Weather forecasts typically give the direction of the wind along with its speed, for example a "northerly wind at 15 km/h" is a wind blowing from the north at a speed of 15 km/h. If wind gusts are present, their speed may also be reported.
Measurement techniques
A variety of instruments can be used to measure wind direction, such as the anemoscope, windsock, and wind vane. All these instruments work by moving to minimize air resistance. The way a weather vane is pointed by prevailing winds indicates the direction from which the wind is blowing. The larger opening of a windsock faces the direction that the wind is blowing from; its tail, with the smaller opening, points in the same direction as the wind is blowing.
Modern instruments used to measure wind speed and direction are called anemoscopes, anemometers and wind vanes. These types of instruments are used by the wind energy industry, both for wind resource assessment and turbine control.
When a high measurement frequency is needed (such as in research applications), wind can be measured by the propagation speed of ultrasound signals or by the effect of ventilation on the resistance of a heated wire. Another type of anemometer uses pitot tubes that take advantage of the pressure differential between an inner tube and an outer tube that is exposed to the wind to determine the dynamic pressure, which is then used to compute the wind speed.
In situations where modern instrumen |
https://en.wikipedia.org/wiki/Spectral%20centroid | The spectral centroid is a measure used in digital signal processing to characterise a spectrum. It indicates where the center of mass of the spectrum is located. Perceptually, it has a robust connection with the impression of brightness of a sound. It is sometimes called center of spectral mass.
Calculation
It is calculated as the weighted mean of the frequencies present in the signal, determined using a Fourier transform, with their magnitudes as the weights:
Centroid = (Σ_n f(n)·x(n)) / (Σ_n x(n)),
where x(n) represents the weighted frequency value, or magnitude, of bin number n, and f(n) represents the center frequency of that bin.
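A minimal NumPy sketch of this weighted mean; using the magnitude spectrum from a real FFT and the bin center frequencies is one common choice among several (power weights are also used in practice):

import numpy as np

def spectral_centroid(signal, sample_rate):
    # Weighted mean of bin center frequencies f(n), with magnitudes x(n) as weights
    magnitudes = np.abs(np.fft.rfft(signal))                   # x(n)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)  # f(n), in Hz
    return np.sum(freqs * magnitudes) / np.sum(magnitudes)

# Example: a 1 kHz sine sampled at 44.1 kHz should give a centroid near 1000 Hz
sr = 44100
t = np.arange(2048) / sr
print(spectral_centroid(np.sin(2 * np.pi * 1000.0 * t), sr))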
Alternative usage
Some people use "spectral centroid" to refer to the median of the spectrum. This is a different statistic, the difference being essentially the same as the difference between the unweighted median and mean statistics. Since both are measures of central tendency, in some situations they will exhibit some similarity of behaviour. But since typical audio spectra are not normally distributed, the two measures will often give strongly different values. Grey and Gordon in 1978 found the mean a better fit than the median.
Applications
Because the spectral centroid is a good predictor of the "brightness" of a sound, it is widely used in digital audio and music processing as an automatic measure of musical timbre. |
https://en.wikipedia.org/wiki/Chris%20Elliott%20%28food%20scientist%29 | Christopher Elliott is a professor of food safety and microbiology at Queen's University Belfast and founder of the university's Institute for Global Food Security. He led the independent review of the UK food system after the 2013 horse meat scandal.
Early life and education
Christopher Trevor Elliott grew up in Northern Ireland, initially in Belfast and then moved with his family to his grandparents’ farm where he enjoyed working and gained experience with farm animals. He left school at 16 with few qualifications. He got a job as a cleaner at a government research institute that encouraged staff to improve their knowledge. After several years of part-time and evening study he was able to enroll for a degree in medical biological sciences at Ulster University. He later gained a PhD for his work on developing tests for the illegal use of growth promoting drugs in farm animals, and implementing a monitoring programme.
Career
Elliott initially worked in a government institute that undertook research into animal health and behaviour. Starting at a very low level, he was promoted into roles involving virology and later veterinary testing. He reached the position of Principal Scientific Officer when he was 35. His doctoral work on testing and monitoring for the illegal use of clenbuterol as a growth promoter in animal husbandry developed into a system that was used worldwide. In around 2000, he moved to Queen's University Belfast to develop a food and agricultural science department. By 2013 he had persuaded the university to form an Institute of Global Food Security, the first with this name in the British Isles, and became its first director.
After the 2008 Irish pork crisis, where dioxin contamination of pig meat led to an international product recall, he led the Food Fortress project, at the request of the animal feed industry, to evaluate future risks and develop testing and monitoring so that it would not be repeated. This developed into a sampling programme o |
https://en.wikipedia.org/wiki/Ermelinda%20DeLaVi%C3%B1a | Ermelinda DeLaViña is an American mathematician specializing in graph theory.
She is a professor in the Computer and Mathematical Sciences Department of the University of Houston–Downtown, where she is also Associate Dean of the College of Science and Technology.
Education
DeLaViña grew up in a working-class family in Texas, with roots stretching back for five generations there. Her parents came from Bishop, Texas, but raised her in Houston. Inspired by a 9th-grade algebra teacher, she aimed for a college education despite the discouragement of her school counselors. She started her undergraduate studies at the University of Houston, but dropped out after one term, and after working for two years began again at the University of Texas–Pan American, where she graduated with a bachelor's degree in mathematics and a minor in computer science in 1989, becoming the first in her family with a college degree.
She returned to graduate school at the University of Houston and completed a Ph.D. in mathematics there in 1997. Her doctoral supervisor was Siemion Fajtlowicz, with whom she worked on the Graffiti computer program for automatically formulating conjectures in graph theory.
Career
After completing her doctorate, DeLaViña became an assistant professor at the University of Houston–Downtown. She was promoted to full professor there in 2010, and became associate dean in 2012.
Contributions
One of DeLaViña's results in graph theory is related to an inequality showing that every connected undirected graph has an independent set that is at least as large as its radius; DeLaViña showed that the graphs with no larger independent set always contain a Hamiltonian path. |
https://en.wikipedia.org/wiki/Arthur%20Lynch%20%28politician%29 | Arthur Alfred Lynch (16 October 1861 – 25 March 1934) was an Irish Australian civil engineer, physician, journalist, author, soldier, anti-imperialist and polymath. He served as MP in the UK House of Commons as member of the Irish Parliamentary Party, representing Galway Borough from 1901 to 1902, and later West Clare from 1909 to 1918. Lynch fought on the Boer side during the Boer War in South Africa, for which he was sentenced to death but later pardoned. He supported the British war effort in the First World War, raising his own Irish battalion in Munster towards the end of the war.
Australian years
Lynch was born at Smythesdale near Ballarat, Victoria, the fourth of 14 children. His father, John Lynch, was an Irish Catholic surveyor and civil engineer and his mother Isabella (née MacGregor) was Scottish. John Lynch was a founder and first president of the Ballarat School of Mines and a captain under Peter Lalor at the Eureka Stockade rebellion (1854); he wrote a book, Austral Light (1893–94), about it, later republished as The Story of the Eureka Stockade.
Arthur Lynch was educated at Grenville College, Ballarat, (where he was "entranced" by differential calculus) and the University of Melbourne, where he took the degrees of BA in 1885 and MA in 1887. Lynch qualified as a civil engineer and practised this profession for a short period in Melbourne.
Europe and Ireland
Lynch left Australia and went to Berlin, where he studied physics, physiology and psychology at the University of Berlin in 1888–1889. He had a particular respect for Hermann von Helmholtz. Moving to London, Lynch took up journalism. In 1892, he contested Galway as a Parnellite candidate, but was defeated.
Lynch met Annie Powell (daughter of the Rev. John D. Powell) in Berlin and they were married in 1895. They were to have no children. In Lynch's words, the marriage "never lost its happiness" (My Life Story, p. 85).
In 1898, he was Paris correspondent for the London Daily Mail.
Boer |
https://en.wikipedia.org/wiki/Proof%20that%20e%20is%20irrational | The number e was introduced by Jacob Bernoulli in 1683. More than half a century later, Euler, who had been a student of Jacob's younger brother Johann, proved that e is irrational; that is, that it cannot be expressed as the quotient of two integers.
Euler's proof
Euler wrote the first proof of the fact that e is irrational in 1737 (but the text was only published seven years later). He computed the representation of e as a simple continued fraction, which is
e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...].
Since this continued fraction is infinite and every rational number has a terminating continued fraction, e is irrational. A short proof of the previous equality is known. Since the simple continued fraction of e is not periodic, this also proves that e is not a root of a quadratic polynomial with rational coefficients; in particular, e² is irrational.
Fourier's proof
The most well-known proof is Joseph Fourier's proof by contradiction, which is based upon the equality
e = Σ_{n=0}^{∞} 1/n!.
Initially e is assumed to be a rational number of the form . The idea is to then analyze the scaled-up difference (here denoted x) between the series representation of e and its strictly smaller partial sum, which approximates the limiting value e. By choosing the scale factor to be the factorial of b, the fraction and the partial sum are turned into integers, hence x must be a positive integer. However, the fast convergence of the series representation implies that x is still strictly smaller than 1. From this contradiction we deduce that e is irrational.
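A quick numerical illustration of this outline (a sketch, not part of the proof): for every b, the scaled-up difference b!·(e − Σ_{n=0}^{b} 1/n!) lands strictly between 0 and 1, so it cannot be the positive integer that the rationality assumption would force it to be.

import math

# x = b! * (e - sum_{n=0}^{b} 1/n!) for several b; each value lies in (0, 1)
for b in range(2, 10):
    partial_sum = sum(1 / math.factorial(n) for n in range(b + 1))
    x = math.factorial(b) * (math.e - partial_sum)
    print(b, x)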
Now for the details. If e is a rational number, there exist positive integers a and b such that e = a/b. Define the number
x = b!·(e − Σ_{n=0}^{b} 1/n!).
Use the assumption that e = a/b to obtain
x = a·(b − 1)! − Σ_{n=0}^{b} b!/n!.
The first term is an integer, and every fraction in the sum is actually an integer because n ≤ b for each term. Therefore, under the assumption that e is rational, x is an integer.
We now prove that 0 < x < 1. First, to prove that x is strictly positive, we insert the above series representation of e into the definition of x and obtain |
https://en.wikipedia.org/wiki/META%20II | META II is a domain-specific programming language for writing compilers. It was created in 1963–1964 by Dewey Val Schorre at UCLA. META II uses what Schorre called syntax equations. Its operation is simply explained as:
Each syntax equation is translated into a recursive subroutine which tests the input string for a particular phrase structure, and deletes it if found.
Meta II programs are compiled into an interpreted byte code language. VALGOL and SMALGOL compilers illustrating its capabilities were written in the META II language. VALGOL is a simple algebraic language designed for the purpose of illustrating META II; SMALGOL was a fairly large subset of ALGOL 60.
Notation
META II was first written in META I, a hand-compiled version of META II. The history is unclear as to whether META I was a full implementation of META II or the subset of the META II language required to compile the full META II compiler.
In its documentation, META II is described as resembling BNF, which today is explained as a production grammar. META II is an analytical grammar. In the TREE-META document these languages were described as reductive grammars.
For example, in BNF, an arithmetic expression may be defined as:
<expr> := <term> | <expr> <addop> <term>
BNF rules are today production rules describing how constituent parts may be assembled to form only valid language constructs. A parser does the opposite, taking language constructs apart. META II is a stack-based functional parser programming language that includes output directives. In META II, the order of testing is specified by the equation. META II, like other recursive-descent systems, would overflow its stack when attempting left recursion. META II uses a $ (zero or more) sequence operator. The expr parsing equation written in META II is a conditional expression evaluated left to right:
expr = term
$( '+' term .OUT('ADD')
/ '-' term .OUT('SUB'));
Above the expr equation is defined by the expression to the right o |
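To make the idea that each syntax equation becomes a recursive subroutine concrete, here is a rough Python analogue of what the expr equation above corresponds to: a routine that parses a term, then loops over '+'/'-' suffixes in the spirit of the $ operator, emitting ADD/SUB as the .OUT directives would. This is an illustrative sketch, not META II's actual generated code.

# Illustrative recursive-descent analogue of the META II equation:
#   expr = term $( '+' term .OUT('ADD') / '-' term .OUT('SUB'));
# Tokens are single characters; output "instructions" are simply printed.

def parse_expr(tokens, pos=0):
    pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in "+-":   # the $ (zero or more) loop
        op = tokens[pos]
        pos = parse_term(tokens, pos + 1)
        print("ADD" if op == "+" else "SUB")           # the .OUT directive
    return pos

def parse_term(tokens, pos):
    # A stand-in for the real 'term' equation: accept a single digit and emit a load.
    if pos >= len(tokens) or not tokens[pos].isdigit():
        raise SyntaxError(f"expected a digit at position {pos}")
    print(f"LOAD {tokens[pos]}")
    return pos + 1

parse_expr("1+2-3")   # prints LOAD 1, LOAD 2, ADD, LOAD 3, SUB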
https://en.wikipedia.org/wiki/Egocentric%20vision | Egocentric vision or first-person vision is a sub-field of computer vision that entails analyzing images and videos captured by a wearable camera, which is typically worn on the head or on the chest and naturally approximates the visual field of the camera wearer. Consequently, visual data capture the part of the scene on which the user focuses to carry out the task at hand and offer a valuable perspective to understand the user's activities and their context in a naturalistic setting.
The wearable camera looking forwards is often supplemented with a camera looking inward at the user's eye and able to measure a user's eye gaze, which is useful to reveal attention and to better understand the
user's activity and intentions.
History
The idea of using a wearable camera to gather visual data from a first-person perspective dates back to the 1970s, when Steve Mann invented "Digital Eye Glass", a device that, when worn, causes the human eye itself to effectively become both an electronic camera and a television display.
Subsequently, wearable cameras were used for health-related applications in the context of Humanistic Intelligence and Wearable AI. Egocentric vision is best done from the point-of-eye, but may also be done by way of a neck-worn camera when eyeglasses would be in-the-way. This neck-worn variant was popularized by way of the Microsoft SenseCam in 2006 for experimental health research works. Interest in the egocentric paradigm from the computer vision community arose slowly entering the 2010s and has grown rapidly in recent years, boosted by both the impressive advances in the field of wearable technology and by the increasing number of potential applications.
The prototypical first-person vision system described by Kanade and Hebert, in 2012 is composed by three basic components: a localization component able to estimate the surrounding, a recognition component able to identify object and people, and an activity recognition componen |
https://en.wikipedia.org/wiki/Freifunk | Freifunk (German for: "free radio") is a non-commercial open grassroots initiative to support free computer networks in the German region. Freifunk is part of the international movement for a wireless community network. The initiative counts about 400 local communities with over 41,000 access points. Among them, Münster, Aachen, Munich, Hanover, Stuttgart, and Uelzen are the biggest communities, with more than 1,000 access points each.
Aim
The main goals of Freifunk are to build a large-scale free wireless Wi-Fi network that is decentralized, owned by those who run it and to support local communication. The initiative is based on the Picopeering Agreement. In this agreement, participants agree upon a network that is free from discrimination, in the sense of net neutrality. Similar grassroots initiatives in Austria and in Switzerland are FunkFeuer and Openwireless.
Technology
Like many other free community-driven networks, Freifunk uses mesh technology to bring up ad hoc networks by interconnecting multiple Wireless LANs. In a Wi-Fi mobile ad hoc network, all routers connect to each other using special routing software. When a router fails, this software automatically calculates a new route to the destination. This software, the Freifunk firmware, is based on OpenWrt and other free software.
There are several different implementations of the firmware depending on the hardware and protocols local communities use. The first Wi-Fi ad hoc network was demonstrated by Toh in Georgia, USA, in 1999. It was a six-node implementation running the Associativity-Based Routing protocol on the Linux kernel and WAVELAN WiFi. But ABR was a patented protocol. Following that experience, Freifunk worked on standard IETF protocols; the two common standard proposals are OLSR and B.A.T.M.A.N. The development of B.A.T.M.A.N. is driven by Freifunk activists on a volunteering basis.
History
One of the results of the BerLon workshop in October 2002 on free wireless community networ |
https://en.wikipedia.org/wiki/Henrique%20de%20Carvalho%20Santos | Henrique de Carvalho Santos (5 May 1940 – 15 October 2023), also known as Henrique Onambwé, was an Angolan nationalist who served as the Minister of Industry of Angola. He created the flag of Angola after it was decolonized and gained independence.
Santos was born in Porto Amboim, in the province of Cuanza Sul on 5 May 1940. Santos studied at the University of Heidelberg in Germany. MPLA members elected him to the MPLA Central Committee during the Inter-Regional Conference of September 1974 and the political bureau in December 1977. He served in the Directorate of Information and Security of Angola (DISA) before becoming the Minister of Industry. Santos died on 15 October 2023, at the age of 83. |
https://en.wikipedia.org/wiki/History%20of%20hard%20disk%20drives | In 1953, IBM recognized the immediate application for what it termed a "Random Access File" having high capacity and rapid random access at a relatively low cost. After considering technologies such as wire matrices, rod arrays, drums, drum arrays, etc., the engineers at IBM's San Jose California laboratory invented the hard disk drive. The disk drive created a new level in the computer data hierarchy, then termed Random Access Storage but today known as secondary storage, less expensive and slower than main memory (then typically drums and later core memory) but faster and more expensive than tape drives.
The commercial usage of hard disk drives (HDD) began in 1957, with the shipment of a production IBM 305 RAMAC system including IBM Model 350 disk storage. US Patent 3,503,060 issued March 24, 1970, and arising from the IBM RAMAC program is generally considered to be the fundamental patent for disk drives.
Each generation of disk drives replaced larger, more sensitive and more cumbersome devices. The earliest drives were usable only in the protected environment of a data center. Later generations progressively reached factories, offices and homes, eventually becoming ubiquitous.
Disk media diameter was initially 24 inches, but over time it has been reduced to today's 3.5-inch and 2.5-inch standard sizes. Drives with the larger 24-inch- and 14-inch-diameter media were typically mounted in standalone boxes (resembling washing machines) or large equipment rack enclosures. Individual drives often required high-current AC power due to the large motors required to spin the large disks. Drives with smaller media generally conformed to de facto standard form factors.
The capacity of hard drives has grown exponentially over time. When hard drives became available for personal computers, they offered 5-megabyte capacity. During the mid-1990s the typical hard disk drive for a PC had a capacity in the range of 500 megabytes to 1 gigabyte. Hard disk drives up to 22 TB were |
https://en.wikipedia.org/wiki/Vaginitis%20emphysematosa | Vaginitis emphysematosa is a rare condition of women's reproductive health; the term was coined by Zweifel in 1877. Women rarely consult a doctor specifically for vaginitis emphysematosa; it is usually found incidentally when they present with some other reproductive health issue, and gynaecologists are often unfamiliar with it. The condition is characterised by gas-filled cysts in the mucosa of the vagina. It is usually a self-limited cystic disorder of the vagina and has few specific features to arouse clinical suspicion.
Although the term vaginitis emphysematosa contains "vaginitis", it has been observed that inflammation is generally mild or absent. The condition is characterised by gas-filled cysts in the vaginal wall and does not imply a life-threatening infection.
Symptoms and signs
Vaginitis emphysematosa occurs primarily in pregnant women, but cases have also been reported in non-pregnant women. It is a rare, benign vaginal cystic condition that has been identified in 173 reported cases. Affected women have ranged from 42 to 65 years old. The cysts appear grouped but defined from one another, smooth, and can be as large as 2 cm. Symptoms included: vaginal discharge, itching, sensation of pressure, appearance of nodules, and sometimes a "popping sound".
Causes
The cause is unknown. Histological examination showed the cysts contained pink hyaline-like material, foreign body-type giant cells in the cyst's wall, with chronic inflammatory cell fluid. The gas-filled cysts are identified with CT imaging. The gas contained in the cysts has been analysed and consists of nitrogen, oxygen, argon, carbon dioxide, and sulphur dioxide. Treatment may not be required and no complications follow the resolution of the cysts. It may be associated with immunosuppression, trichomonas, or Haemophilus vaginalis infection. Va |
https://en.wikipedia.org/wiki/Elisha%20Netanyahu | Elisha Netanyahu (; December 21, 1912 – April 3, 1986) was an Israeli mathematician specializing in complex analysis. Over the course of his work at the Technion he was the Dean of the Faculty of Sciences and established the separate Department of Mathematics. Historian Benzion Netanyahu was his brother, while current Israeli Prime Minister Benjamin Netanyahu is his nephew.
Biography
Netanyahu was born in Warsaw, Poland, to the writer and Zionist activist Nathan Mileikowsky. He was the third of nine children. In 1920 the family made aliyah to the Land of Israel. The family eventually settled in Jerusalem and adopted the Hebrew name Netanyahu.
Netanyahu went to the Reali School in Haifa, from which he graduated in 1930. He later returned to Reali in 1935 to teach mathematics there. He studied at the Hebrew University of Jerusalem, from which he received his BS, MA and PhD (1942). His advisors were Michael Fekete and Binyamin Amirà. After graduation he joined the British Army as a volunteer, serving in Egypt and then in Italy as an officer in a unit of the Royal Engineering Corps. He specialized in the preparation of maps, which he continued to do during the 1948 Arab–Israeli War.
After he was demobilized in 1946, he became a lecturer at the Technion. He rose to full professor in 1958, later became the head of the Mathematics Section, and then Dean of the Faculty of Sciences. His administrative efforts also played an important role in the establishment of the Ben-Gurion University of the Negev.
He had long term visits at Stanford University (1953–54), NYU (1961), the University of New Mexico (1969), the University of Maryland, College Park (1973), and ETH Zürich (1979). In 1980, Netanyahu retired from the Technion and moved to Jerusalem, where he died of cancer in 1986.
Throughout his long career, Netanyahu collaborated with Paul Erdős, Charles Loewner and other leading mathematicians, continuing and expanding the analytical traditions at the Technion.
Personal lif |
https://en.wikipedia.org/wiki/Chermoula | Chermoula (Berber: tacermult or tacermilt, ) or charmoula is a marinade and relish used in Algerian, Libyan, Moroccan and Tunisian cooking. It is traditionally used to flavor fish or seafood, but it can be used on other meats or vegetables. It is somewhat similar to the Latin American chimichurri.
Ingredients
Common ingredients include garlic, cumin, coriander, oil, lemon juice, and salt. Regional variations may also include preserved lemons, onion, ground chili peppers, black pepper, saffron, and other herbs.
Varieties
Chermoula recipes vary widely by region. In Sfax, Tunisia, chermoula is often served with cured salted fish during Eid al-Fitr. This regional variety is composed of dried dark raisin purée mixed with onions cooked in olive oil and spices such as cloves, cumin, chili, black pepper, and cinnamon.
A Moroccan version comprises dried parsley, cumin, paprika, and salt and pepper. The Libyan version of charmoula is served as a side dish in the summer; it contains olives, tuna and a variety of green herbs. |
See also
List of Middle Eastern dishes
Harissa
Tunisian cuisine
Moroccan cuisine
North African cuisine
List of African dishes |
https://en.wikipedia.org/wiki/Lise%20Meitner%20Prize | The Lise Meitner Prize for nuclear physics, established in 2000, is awarded every two years by the European Physical Society for outstanding work in the fields of experimental, theoretical or applied nuclear science. It is named after Lise Meitner to honour her fundamental contributions to nuclear physics and her courageous and exemplary life.
Recipients
2020 Klaus Blaum, Björn Jonson,
2018 ,
2016
2014 Johanna Stachel, , Paolo Giubellino,
2012 , Friedrich-Karl Thielemann
2010
2008 Reinhard Stock, Walter Greiner
2006 , David M. Brink
2004 , Peter J. Twin
2002 Phil Elliott, Francesco Iachello
2000 Peter Armbruster, Gottfried Münzenberg, Yuri Oganessian
See also
List of physics awards |
https://en.wikipedia.org/wiki/Aeroacoustics | Aeroacoustics is a branch of acoustics that studies noise generation via either turbulent fluid motion or aerodynamic forces interacting with surfaces. Noise generation can also be associated with periodically varying flows. A notable example of this phenomenon is the Aeolian tones produced by wind blowing over fixed objects.
Although no complete scientific theory of the generation of noise by aerodynamic flows has been established, most practical aeroacoustic analysis relies upon the so-called aeroacoustic analogy, proposed by Sir James Lighthill in the 1950s while at the University of Manchester, whereby the governing equations of motion of the fluid are coerced into a form reminiscent of the wave equation of "classical" (i.e. linear) acoustics on the left-hand side, with the remaining terms appearing as sources on the right-hand side.
History
The modern discipline of aeroacoustics can be said to have originated with the first publication of Lighthill in the early 1950s, when noise generation associated with the jet engine was beginning to be placed under scientific scrutiny.
Lighthill's equation
Lighthill rearranged the Navier–Stokes equations, which govern the flow of a compressible viscous fluid, into an inhomogeneous wave equation, thereby making a connection between fluid mechanics and acoustics. This is often called "Lighthill's analogy" because it presents a model for the acoustic field that is not, strictly speaking, based on the physics of flow-induced/generated noise, but rather on the analogy of how they might be represented through the governing equations of a compressible fluid.
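For reference, the usual statement of the resulting inhomogeneous wave equation (a sketch of the standard form; the conservation equations from which it is derived are recalled below) is:

\[
  \frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
    = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
  \qquad
  T_{ij} = \rho v_i v_j + \bigl[(p - p_0) - c_0^2 (\rho - \rho_0)\bigr]\delta_{ij} - \tau_{ij},
\]

where \(\rho' = \rho - \rho_0\) is the density fluctuation about the ambient value, \(c_0\) is the ambient speed of sound, and \(\tau_{ij}\) is the viscous stress tensor.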
The first equation of interest is the conservation of mass equation, which reads
Dρ/Dt + ρ ∇·v = 0,
where ρ and v represent the density and velocity of the fluid, which depend on space and time, and D/Dt is the substantial derivative.
Next is the conservation of momentum equation, which is given by
ρ Dv/Dt = −∇p + ∇·τ,
where p is the thermodynamic pressure, and τ is the viscous (or traceless) part of the stress tensor from th |
https://en.wikipedia.org/wiki/Nitrogen%20dioxide%20poisoning | Nitrogen dioxide poisoning is the illness resulting from the toxic effect of nitrogen dioxide (). It usually occurs after the inhalation of the gas beyond the threshold limit value.
Nitrogen dioxide is reddish-brown with a very harsh smell at high concentrations; at lower concentrations it is colorless but may still have a harsh odour. Nitrogen dioxide poisoning depends on the duration, frequency, and intensity of exposure.
Nitrogen dioxide is an irritant of the mucous membranes and, together with other air pollutants, is linked to pulmonary diseases such as obstructive lung disease, asthma and chronic obstructive pulmonary disease, sometimes acute exacerbations of COPD and, in fatal cases, death.
Its poor solubility in water enhances its ability to pass through the moist oral mucosa of the respiratory tract.
As with most toxic gases, the dose inhaled determines the toxicity to the respiratory tract. Occupational exposures constitute the highest risk of toxicity and domestic exposure is uncommon. Prolonged exposure to low concentrations of the gas may have lethal effects, as can short-term exposure to high concentrations, as in chlorine gas poisoning. It is one of the major air pollutants capable of causing severe health hazards such as coronary artery disease as well as stroke.
Nitrogen dioxide is often released into the environment as a byproduct of fuel combustion but rarely released by spontaneous combustion. Known sources of nitrogen dioxide gas poisoning include automobile exhaust and power stations.
The toxicity may also result from non-combustible sources such as the one released from anaerobic fermentation of food grains and anaerobic digestion of biodegradable waste.
The World Health Organization (WHO) developed a global recommendation limiting exposure to less than 20 parts per billion for chronic exposure and to less than 100 ppb over one hour for acute exposure, using nitrogen dioxide as a marker for other pollutants from fuel combustion.
There is a |
https://en.wikipedia.org/wiki/SIP%20extensions%20for%20the%20IP%20Multimedia%20Subsystem | The Session Initiation Protocol (SIP) is the signaling protocol selected by the 3rd Generation Partnership Project (3GPP) to create and control multimedia sessions with multiple participants in the IP Multimedia Subsystem (IMS). It is therefore a key element in the IMS framework.
SIP was developed by the Internet Engineering Task Force (IETF) as a standard for controlling multimedia communication sessions in Internet Protocol (IP) networks. It is characterized by its position in the application layer of the Internet Protocol Suite. Several SIP extensions, published in Request for Comments (RFC) protocol recommendations, have been added to the basic protocol to extend its functionality.
The 3GPP, which is a collaboration between groups of telecommunications associations aimed at developing and maintaining the IMS, stated a series of requirements for SIP to be successfully used in the IMS. Some of them could be addressed by using existing capabilities and extensions in SIP while, in other cases, the 3GPP had to collaborate with the IETF to standardize new SIP extensions to meet the new requirements. The IETF develops SIP on a generic basis, so that the use of extensions is not restricted to the IMS framework.
3GPP requirements for SIP
The 3GPP has stated several general requirements for operation of the IMS. These include an efficient use of the radio interface by minimizing the exchange of signaling messages between the mobile terminal and the network, a minimum session setup time by performing tasks prior to session establishment instead of during session establishment, a minimum support required in the terminal, the support for roaming and non-roaming scenarios with terminal mobility management (supported by the access network, not SIP), and support for IPv6 addressing.
Other requirements involve protocol extensions, such as SIP header fields to exchange user or server information, and SIP methods to support new network functionality: requirement for regis |