source stringlengths 31 227 | text stringlengths 9 2k |
|---|---|
https://en.wikipedia.org/wiki/Stochastic%20grammar | A stochastic grammar (statistical grammar) is a grammar framework with a probabilistic notion of grammaticality:
Stochastic context-free grammar
Statistical parsing
Data-oriented parsing
Hidden Markov model
Estimation theory
The grammar is realized as a language model. Allowed sentences are stored in a database, together with a frequency indicating how common each sentence is. Statistical natural language processing uses stochastic, probabilistic and statistical methods, especially to resolve difficulties that arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of corpora and Markov models. "A probabilistic model consists of a non-probabilistic model plus some numerical quantities; it is not true that probabilistic models are inherently simpler or less structural than non-probabilistic models."
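The frequency-based language model described above can be sketched in a few lines (the sentences and counts here are invented for illustration):

```python
from collections import Counter

# Hypothetical sentence database with frequencies; grammaticality is graded
# by relative frequency rather than a binary in/out judgment.
corpus = Counter({
    "the dog barks": 50,
    "the cat sleeps": 30,
    "dog the barks": 1,   # attested, but so rare it is near-ungrammatical
})
total = sum(corpus.values())

def grammaticality(sentence: str) -> float:
    """Probability-as-grammaticality: relative frequency in the database."""
    return corpus[sentence] / total
```

A sentence absent from the database gets probability 0, which recovers the hard grammatical/ungrammatical judgment of a non-probabilistic grammar as a limiting case.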
Examples
A probabilistic method for rhyme detection was implemented by Hirjee & Brown in their 2013 study to find internal and imperfect rhyme pairs in rap lyrics. The concept is adapted from a sequence alignment technique using BLOSUM (BLOcks SUbstitution Matrix). They were able to detect rhymes undetectable by non-probabilistic models.
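The BLOSUM-style scoring can be illustrated with a log-odds score over phoneme pairs; all counts and background frequencies below are invented (Hirjee & Brown estimated theirs from corpora of rhyming lyrics):

```python
import math

# Hypothetical co-occurrence counts of vowel phonemes in known rhyme pairs.
pair_counts = {("IY", "IY"): 90, ("IY", "EY"): 8, ("IY", "AA"): 2}
total_pairs = sum(pair_counts.values())

# Hypothetical background frequencies of each phoneme.
freq = {"IY": 0.5, "EY": 0.3, "AA": 0.2}

def log_odds(a: str, b: str) -> float:
    """BLOSUM-style score: log2 of the observed pair rate over chance."""
    observed = pair_counts.get((a, b), pair_counts.get((b, a), 1)) / total_pairs
    expected = freq[a] * freq[b]
    return math.log2(observed / expected)
```

Identical vowels score positively, while imperfect pairs such as "IY"/"EY" still score higher than unrelated vowels, which is what lets an aligner reward near-rhymes instead of demanding exact matches.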
See also
Colorless green ideas sleep furiously
Computational linguistics
L-system#Stochastic grammars
Stochastic context-free grammar
Statistical language acquisition |
https://en.wikipedia.org/wiki/Non-evaporable%20getter | Non-evaporable getters (NEG), based on the principle of metallic surface sorption of gas molecules, are mostly porous alloys or powder mixtures of Al, Zr, Ti, V and Fe. They help to establish and maintain vacuums by soaking up or bonding to gas molecules that remain within a partial vacuum, through the use of materials that readily form stable compounds with active gases. They are important tools for improving the performance of many vacuum systems. Sintered onto the inner surface of high-vacuum vessels, the NEG coating can be applied even to spaces that are narrow and hard to pump out, which makes it very popular in particle accelerators, where this is an issue. The main sorption parameters of such NEGs, pumping speed and sorption capacity, are however rather limited.
A different type of NEG, which is not coated, is the Tubegetter. These getters are activated mechanically or at temperatures from 550 K, and they operate over a temperature range of 0 to 800 K under HV/UHV conditions.
The NEG acts as a getter or getter pump that is able to reduce the pressure to less than 10⁻¹² mbar. |
https://en.wikipedia.org/wiki/Afterdepolarization | Afterdepolarizations are abnormal depolarizations of cardiac myocytes that interrupt phase 2, phase 3, or phase 4 of the cardiac action potential in the electrical conduction system of the heart. Afterdepolarizations may lead to cardiac arrhythmias. Afterdepolarization is commonly a consequence of myocardial infarction, cardiac hypertrophy, or heart failure. It may also result from congenital mutations associated with calcium channels and sequestration.
Early afterdepolarizations
Early afterdepolarizations (EADs) occur with abnormal depolarization during phase 2 or phase 3, and are caused by an increase in the frequency of abortive action potentials before normal repolarization is completed. EADs most commonly originate in mid-myocardial cells and Purkinje fibers, but can develop in other cardiac cells that carry an action potential. Phase 2 may be interrupted due to augmented opening of calcium channels, while phase 3 interruptions are due to the opening of sodium channels. Early afterdepolarizations can result in torsades de pointes, tachycardia, and other arrhythmias. EADs can be triggered by hypokalemia and drugs that prolong the QT interval, including class Ia and III antiarrhythmic agents, as well as catecholamines.
Afterhyperpolarizations can also occur in cortical pyramidal neurons. There, they typically follow an action potential and are mediated by voltage-gated sodium or chloride channels. This phenomenon requires potassium channels to close quickly to limit repolarization. It is responsible for the difference between regular-spiking and intrinsically bursting pyramidal neurons.
Delayed afterdepolarizations
Delayed afterdepolarizations (DADs) begin during phase 4, after repolarization is completed but before another action potential would normally occur via the normal conduction systems of the heart. They are due to elevated cytosolic calcium concentrations, classically seen with digoxin toxicity. The overload of the sarcoplasmic reticulum may cau |
https://en.wikipedia.org/wiki/Southeastern%20Iberian%20script | The southeastern Iberian script, also known as Meridional Iberian, was one of the means of written expression of the Iberian language, which was written mainly in the northeastern Iberian script and residually in the Greco-Iberian alphabet. Concerning the relation between the northeastern and southeastern Iberian scripts, it must be pointed out that they are two different scripts with different values for the same signs; however, it is clear that they had a common origin, and the most accepted hypothesis is that the northeastern Iberian script derives from the southeastern one. In fact, the southeastern Iberian script is very similar, both in the shape of the signs and in their values, to the southwestern script used to represent an unknown language usually named Tartessian. The main difference is that the southeastern Iberian script does not show the vocalic redundancy of the syllabic signs. Unlike the northeastern Iberian script, the decipherment of the southeastern Iberian script is not yet complete, because there is a significant number of signs on which scholars have not yet reached a consensus. Although it is believed that the southeastern Iberian script does not show any system to differentiate between voiced and unvoiced occlusives, unlike the northeastern Iberian script, a recent paper (Ferrer i Jané 2010) defends the existence of a dual system in the southeastern Iberian script as well.
Typology and variants
All the paleohispanic scripts, with the exception of the Greco-Iberian alphabet, share a common distinctive typological characteristic: they represent syllabic value for the occlusives, and monophonemic value for the rest of the consonants and vowels. From the writing systems point of view they are neither alphabets nor syllabaries; rather, they are mixed scripts that normally are identified as semi-syllabaries. There is no agreement about how the paleohispanic semi-syllabaries originated; some researchers conclude that their origin is linked onl |
https://en.wikipedia.org/wiki/European%20Committee%20for%20Interoperable%20Systems | The European Committee for Interoperable Systems (ECIS) is an international non-profit association founded in 1989 to promote interoperability and market conditions in the Information and Communications Technology (ICT) sector that allow vigorous competition on the merits and a diversity of consumer choice. ECIS has represented its members on many issues related to interoperability and competition before European, national and international bodies, including the European Union institutions and the World Intellectual Property Organization (WIPO). ECIS members include large and smaller information and communications technology hardware and software providers such as Adobe Systems, Corel Corporation, IBM, Linspire, Nokia, Opera Software, Oracle Corporation, RealNetworks, Red Hat, and Sun Microsystems.
Involvement against Microsoft dominance
ECIS has been actively involved in the European Commission's antitrust case against Microsoft, in which the Commission's condemnation of Microsoft was upheld by the European Court of First Instance in September 2007.
Ziff-Davis' eWeek comments that -
Other complaints about Microsoft include:
That bundling XAML in Vista is an attempt to replace HTML by a Microsoft-specific technology
That Office Open XML is a Microsoft-dependent document format
See also
Free Software Foundation Europe
OpenForum Europe
Statements about the European Commission's antitrust case against Microsoft
ECIS statement from Oct. 22 on the EU-MS agreement
ECIS statement on the European Court First Instance (CFI) Microsoft judgment
March 26, 2004 ECIS welcomes Commission Decision finding Microsoft infringed Article 82 |
https://en.wikipedia.org/wiki/Ringworm%20affair | The ringworm affair refers to the treatment, between 1948 and 1960, of an alleged 20,000 to 200,000 Jews in Israel for tinea capitis (ringworm) with ionizing radiation to the head and neck area. The population suffering from the disease in Israel at the time was composed primarily of newly arrived immigrants and populations who were expected to emigrate, mostly from North Africa as well as some from the Middle East and elsewhere, but many Jewish children were irradiated in their home countries regardless of their intent to emigrate.
The irradiation of Mizrahi children is viewed by activists in Israel as the most salient example of injustices encountered in the 1950s as a result of shortcomings or irresponsibility on the part of authorities in the absorption in Israeli society of new immigrants.
Ringworm in Israel and Jewish communities
Scalp ringworm, also known as tinea capitis, mycosis, trichophytia, and favus, was one of the most common fungal diseases among children in Jewish communities in Israel and abroad since the 19th century. X-ray treatment for ringworm had been used around the world as early as 1897. An estimated 200,000 children worldwide received X-ray treatment for tinea capitis in accordance with the standard Adamson–Kienbock procedure between 1910 and 1959, until griseofulvin, the first effective anti-fungal agent for ringworm, was introduced.
At the beginning of the 20th century, Hadassah Medical Center in Jerusalem treated ringworm disease among the religious Jewish community in Jerusalem using irradiation and the disease almost disappeared.
With mass immigration from Arab and Muslim countries in the 1940s and 1950s, many new cases of ringworm surfaced, primarily among immigrant children from Asia and North Africa, due to crowded living conditions with deficient hygiene facilities. At this time, ringworm was still treated with irradiation, under the supervision of Hadassah Medical Center (Prof. Dostr |
https://en.wikipedia.org/wiki/Botany%202000 | Botany 2000 is the name of a scientific program organized under the auspices of UNESCO. While a similar UNESCO program, ASOMPS, focuses on promoting collaboration and co-operation between scientists in Asia, Botany 2000 comprises activities in both Asia and Africa. Activities in Europe are confined to support of herbaria in countries in transition and reconstruction, e.g. Georgia.
The Botany 2000 program was accepted by the UNESCO General Conference in 1989.
Activities in Africa and Asia subsume activities in predecessor networks such as:
Botany 2000-Asia
Botany 2000-Africa
The Botany 2000-Asia program is the new name for the former Southeast Asian Cooperative Program on Descriptive Botany (SEABOP) and of the work being carried out by the majority of the other organizations involved in the Asian Coordinating Group for Chemistry (ACGC).
Botany 2000-Asia is implemented by UNESCO's New Delhi Office and focuses on the taxonomy, and the biological and cultural diversity, of medicinal and ornamental plants; their protection against environmental pollution; ethnobiological classification and biological systematics; and the correct documentation of useful plants through the collection of voucher specimens. Within the framework of the project Botany 2000-Asia, UNESCO also supports the computerization of holdings and locations in herbaria throughout Asia.
Under the Botany 2000-Asia program some conferences were held:
Botany 2000-Asia: Herbarium Curation Workshop (15-20 October 1990, Point Walter, Bicton, Western Australia)
Botany 2000-Asia: Zingiberaceae Workshop (15-18 October 1991, Hat Yai, Thailand)
Botany 2000-Asia: Rutaceae Workshop (2-7 February 1992, in conjunction with ASOMPS VII, Manila, Philippines)
Botany 2000-Asia: International Botanical Congress (28 August-3 September 1993, Tokyo, Japan)
Botany 2000-Asia: International Seminar and Workshop : taxonomy and phytochemistry of the Asclepiadaceae in tropical Asia (June 1994, Malacca, Malaysi |
https://en.wikipedia.org/wiki/TV%20Links | TV Links was a user-contributed online video directory for television programmes, films, and music videos. In a similar style to BitTorrent trackers such as The Pirate Bay, video content was not hosted by TV Links itself; instead, videos were hosted by third-party video sharing websites. The website was operated as a hobby by David Rock of Cheltenham, England.
On 18 October 2007, the website's servers, located in the Netherlands, were raided and shut down by Gloucestershire police in cooperation with the Federation Against Copyright Theft (FACT) in response to complaints received from major US film studios about TV Links. No official clarification has been made to date as to why the website was shut down. Simultaneously, David Rock was arrested and later released pending further investigation, without being charged with a crime. Although FACT initially stated that the raid was performed because of allegations of copyright infringement, it later stated that Rock was arrested for trademark infringement.
History
TV Links launched in October 2006, following a surge in services of its kind. At that time, it provided hyperlinks to videos on video sharing websites.
On about 20 March 2007, the website was updated to use a streaming web-based video player, and direct external links were no longer made available.
Shutdown and arrests by UK government
On 18 October 2007, owner David Rock was arrested by the Gloucestershire police in cooperation with the Federation Against Copyright Theft (FACT) and the website was shut down.
Initial claims by FACT indicated the arrest was made due to offences of facilitating copyright infringement. However, it was later made clear that the arrest was over a matter of possible trademark infringement.
In a statement issued by FACT following the events, local trading standards head Roger Marles implied that the website's update from hyperlinks to a streaming video player might have affected the shutdown. He stated that the arrest and shutdown wer |
https://en.wikipedia.org/wiki/Jens%20Oliver%20Lisberg | Jens Oliver Lisberg (24 December 1896 – 31 August 1920) (Jens Olivur Lisberg in modern Faroese) was one of the designers of the Merkið, the flag of the Faroe Islands.
While a law student in Copenhagen, he devised the flag in 1919 with two other Faroese students, Janus Øssursson from Tórshavn and Paul Dahl from Vágur. Lisberg raised the flag for the first time on Faroese soil on 22 June 1919, on returning to his home town of Fámjin. It would not, however, receive official status until 25 April 1940, when the British occupation government approved its use as the civil ensign of the islands.
Lisberg died of pneumonia on 31 August 1920. He is buried in Fámjin, where the church now holds the original copy of the Merkið. |
https://en.wikipedia.org/wiki/Kervaire%20invariant | In mathematics, the Kervaire invariant is an invariant of a framed (4k+2)-dimensional manifold that measures whether the manifold could be surgically converted into a sphere. This invariant evaluates to 0 if the manifold can be converted to a sphere, and 1 otherwise. This invariant was named after Michel Kervaire, who built on work of Cahit Arf.
The Kervaire invariant is defined as the Arf invariant of the skew-quadratic form on the middle-dimensional homology group. It can be thought of as the simply-connected quadratic L-group L_{4k+2}, and thus analogous to the other invariants from L-theory: the signature, a 4k-dimensional invariant (either symmetric or quadratic, L^{4k} ≅ L_{4k}), and the De Rham invariant, a (4k+1)-dimensional symmetric invariant L^{4k+1}.
In any given dimension, there are only two possibilities: either all manifolds have Arf–Kervaire invariant equal to 0, or half have Arf–Kervaire invariant 0 and the other half have Arf–Kervaire invariant 1.
The Kervaire invariant problem is the problem of determining in which dimensions the Kervaire invariant can be nonzero. For differentiable manifolds, this can happen in dimensions 2, 6, 14, 30, 62, and possibly 126, and in no other dimensions. The final case of dimension 126 remains open.
Definition
The Kervaire invariant is the Arf invariant of the quadratic form determined by the framing on the middle-dimensional Z/2-coefficient homology group H_{2k+1}(M; Z/2) of a framed (4k+2)-dimensional manifold M,
and is thus sometimes called the Arf–Kervaire invariant. The quadratic form (properly, skew-quadratic form) is a quadratic refinement of the usual ε-symmetric form on the middle dimensional homology of an (unframed) even-dimensional manifold; the framing yields the quadratic refinement.
The quadratic form q can be defined by algebraic topology using functional Steenrod squares, and geometrically via the self-intersections of immersions S^{2k+1} → M^{4k+2} determined by the framing, or by the triviality/non-triviality of the normal bundles of embeddings S^{2k+1} ↪ M^{4k+2} (for k ≠ 0, 1, 3) and the mod 2 Hopf invariant of the associated maps (for k = 0, 1, 3).
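The Arf invariant at the heart of the definition is easy to compute for a concrete quadratic form over Z/2. The sketch below (an illustration, not tied to any particular manifold) uses the "democratic" characterization: the Arf invariant is the value that q takes on the majority of vectors:

```python
from itertools import product

def arf(q_basis, B):
    """Arf invariant of the Z/2 quadratic form with values q_basis on a
    basis and bilinear form B (given as an upper-triangular 0/1 matrix)."""
    n = len(q_basis)

    def q(v):
        # Expand over Z/2 using q(x + y) = q(x) + q(y) + B(x, y).
        support = [i for i in range(n) if v[i]]
        total = sum(q_basis[i] for i in support)
        total += sum(B[i][j] for a, i in enumerate(support)
                     for j in support[a + 1:])
        return total % 2

    values = [q(v) for v in product((0, 1), repeat=n)]
    # "Democratic" definition: the invariant is the majority value of q.
    return int(sum(values) > len(values) // 2)

# Hyperbolic plane q(e) = q(f) = 0, B(e, f) = 1: Arf invariant 0.
# Taking q(e) = q(f) = 1 instead gives the other class, Arf invariant 1.
hyperbolic = [[0, 1], [0, 0]]
```

The dichotomy in the text (all 0, or half 0 and half 1) is about which of these two form classes framed manifolds in a given dimension actually realize.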
History
The K |
https://en.wikipedia.org/wiki/RTI-121 | (–)-2β-Carboisopropoxy-3β-(4-iodophenyl)tropane (RTI-4229-121, IPCIT) is a stimulant drug used in scientific research, which was developed in the early 1990s. RTI-121 is a phenyltropane-based, highly selective dopamine reuptake inhibitor and is derived from methylecgonidine. RTI-121 is a potent and long-lasting stimulant, producing stimulant effects for more than 10 hours after a single dose in mice, which would limit its potential uses in humans, as it might have significant abuse potential if used outside a medical setting. However, RTI-121 occupies the dopamine transporter more slowly than cocaine, and so might have lower abuse potential than cocaine itself.
Uses
RTI-121 is mainly used in scientific research into the dopamine reuptake transporter. It is more selective for the dopamine transporter than other DAT radioligands such as β-CIT, and so has less nonspecific binding and produces "cleaner" images. Various radiolabelled forms of RTI-121 (with different radioactive isotopes of iodine used depending on the application) are used in both humans and animals to map the distribution of dopamine transporters in the brain.
Legal status
RTI-121 is not specified as a controlled substance in any country as of 2007. Some jurisdictions, such as the United States, Australia, and New Zealand, might however consider RTI-121 to be a controlled substance analogue of cocaine on the grounds of its related chemical structure.
See also
RTI-55
List of cocaine analogues
List of Phenyltropanes |
https://en.wikipedia.org/wiki/Coincidence%20rangefinder | A coincidence rangefinder or coincidence telemeter is a type of rangefinder that uses mechanical and optical principles to allow an operator to determine the distance to a visible object. There are subtypes, the split-image telemeter, the inverted-image telemeter, and the double-image telemeter, which differ in the principle by which the two images in a single ocular are compared. Coincidence rangefinders were important elements of fire control systems for long-range naval guns and land-based coastal artillery circa 1890–1960. They were also used in rangefinder cameras.
A stereoscopic telemeter looks similar, but has two eyepieces and uses a different principle, based on binocular vision. The two can normally be distinguished at a glance by the number of eyepieces.
Design
The device consists of a long tube with a forward-facing lens at each end and an operator eyepiece in the center. Two prism wedges, which when aligned result in no deviation of the light, are inserted into the light path of one of the two lenses. By rotating the prisms in opposite directions using a differential gear, a degree of horizontal displacement of the image can be achieved.
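The geometry behind this is plain triangulation: the baseline between the two end windows and the parallax angle measured by the prisms determine the range. A sketch of the principle (not of any particular instrument):

```python
import math

def range_from_parallax(baseline_m: float, theta_rad: float) -> float:
    """Range at which the two sight lines converge: R = b / tan(theta)."""
    return baseline_m / math.tan(theta_rad)
```

Because the parallax angle shrinks roughly as 1/R, a longer baseline keeps the angle measurable at long range, which is why naval rangefinders were built many metres across while camera rangefinders span only a few centimetres.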
Applications
Optical rangefinders using this principle, while applicable to several purposes, were widely used for military purposes—determining the range of a target—and for photographic use, determining the distance of a subject to photograph to allow focusing on it. Photographic rangefinders were initially accessories, from which the distance read off could be transferred to the camera's focusing mechanism; later they were built into rangefinder cameras, so that the image was in focus when the images were made to coincide.
Usage
The coincidence rangefinder uses a single eyepiece. Light from the target enters the rangefinder through two windows located at either end of the instrument. At either side the incident beam is reflected to the center of the optical bar by a pentaprism. The optical bar is ideally made from a material with a low coefficient |
https://en.wikipedia.org/wiki/Molecular%20wire | Molecular wires (sometimes called molecular nanowires) are molecular chains that conduct electric current. They are the proposed building blocks for molecular electronic devices. Their typical diameters are less than three nanometers, while their lengths may be macroscopic, extending to centimeters or more.
Examples
Most types of molecular wires are derived from organic molecules. One naturally occurring molecular wire is DNA. Prominent inorganic examples include polymeric materials such as Li2Mo6Se6 and Mo6S9−xIx, [Pd4(CO)4(OAc)4Pd(acac)2], and single-molecule extended metal atom chains (EMACs) which comprise strings of transition metal atoms directly bonded to each other. Molecular wires containing paramagnetic inorganic moieties can exhibit Kondo peaks.
Conduction of electrons
Molecular wires conduct electricity. They typically have non-linear current-voltage characteristics, and do not behave as simple ohmic conductors. The conductance follows typical power-law behavior as a function of temperature or electric field, whichever is the greater, arising from their strong one-dimensional character. Numerous theoretical ideas have been used in an attempt to understand the conductivity of one-dimensional systems, where strong interactions between electrons lead to departures from normal metallic (Fermi liquid) behavior. Important concepts are those introduced by Tomonaga, Luttinger and Wigner. Effects caused by classical Coulomb repulsion (called Coulomb blockade), interactions with vibrational degrees of freedom (called phonons) and quantum decoherence have also been found to be important in determining the properties of molecular wires.
Synthesis
Methods have been developed for the synthesis of diverse types of molecular wires (e.g. organic molecular wires and inorganic molecular wires). The basic principle is to assemble repeating modules. Organic molecular wires are usually synthesized via transition metal-mediated cross-coupling reactions.
Organic molec |
https://en.wikipedia.org/wiki/Mother%20Ukraine | Mother Ukraine is a monumental Soviet-era statue in Kyiv, the capital of Ukraine. The sculpture is a part of the National Museum of the History of Ukraine in the Second World War. In 2023, the Soviet heraldry was removed from the monument's shield and replaced with Ukraine's coat of arms, the tryzub.
Name
The monument's initial name was the Mother Motherland, which derives from the Russian Mother Motherland, a name for the national personification used by both Russia and the Soviet Union, since the monument was meant to symbolize the Soviet victory in the Second World War along with other monumental structures across the USSR (e.g. The Motherland Calls in Volgograd, Russia). On 29 July 2023, amidst the removal of the Soviet heraldry from the monument, a future renaming to Mother Ukraine was announced by the director of the memorial complex, Yuri Savchuk.
Description
The titanium statue stands atop the museum's main building; the overall structure, including its base, weighs 560 tonnes. The sword in the statue's right hand weighs 9 tonnes, while the left hand holds up a shield originally emblazoned with the hammer and sickle emblem of the Soviet Union. The statue was initially drawn by the sculptor Yevgeny Vuchetich, who based it on the Ukrainian painter Nina Danyleiko.
When Vuchetich died in 1974, the project was continued by Vasyl Borodai, who used Ukrainian sculptor Halyna Kalchenko, a daughter of the Chairman of the Council of Ministers of the Ukrainian SSR Nikifor Kalchenko, as the model.
The memorial hall of the museum displays marble plaques with carved names of more than 11,600 soldiers and over 200 workers of the home-front honoured during the war with the title of the Hero of the Soviet Union and the Hero of Socialist Labour. On the hill beneath the museum, traditional flower shows are held. The sword of the statue was shortened by four meters from its project height, some sources say that the rea |
https://en.wikipedia.org/wiki/Smart%20label | Smart Label, also called Smart Tag, is an extremely flat transponder under a conventional print-coded label, which includes chip, antenna and bonding wires as a so-called inlay. The labels, made of paper, fabric or plastics, are prepared as a paper roll, with the inlays laminated between the rolled carrier and the label media, for use in specially designed printer units.
In many logistics and transportation processes, the barcode, or the 2D barcode, is well established as the key means of identification at short distances. The automation of such optical coding is limited by the distance required for a successful read, and usually demands either manual operation to find the code or scanner gates that scan the whole surface of a coded object; the RFID inlay, by contrast, allows better tolerance for fully automated reading from a certain specified distance. However, the RFID inlay is mechanically more vulnerable than the ordinary label, whose own weakness is its poor resistance to scratching.
Thus, the smart label earns its smartness by compensating for these typical weaknesses through the combination of three technologies: plain text, optical character recognition and radio code.
Smart Label Processing
The processing of these labels is basically as with ordinary labels in all stages of production and application, except the inlay is inserted in an automated processing step to ensure identical positioning for each label and careful processing to prevent any damage to the bonding.
The printing is processed in several steps, including
normal ink-jet printing, except over the space with the bonded chip, with clearly intelligible text and
either a barcode or a 2D barcode for later semi-automatic reading with handheld readers or fix-mounted scanners
writing coherently concatenated information to the RFID chip
reading the written information back from the RFID chip in the printer for control purposes (read after write)
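The read-after-write control step is essentially a verify loop. A minimal sketch, with a dictionary standing in for the real chip and all names hypothetical:

```python
def encode_label(write, read, payload: bytes) -> bool:
    """Write the payload to the RFID chip, then read it back to verify."""
    write(payload)
    return read() == payload

# Simulated chip memory in place of the actual RFID air interface.
chip = {}
ok = encode_label(lambda p: chip.update(epc=p),
                  lambda: chip.get("epc"),
                  b"\x30\x08\x33\xb2")
# A label that fails this check would be marked void and reprinted.
```

The design point is that verification happens while the label is still in the printer, so a damaged inlay is caught before the label ever reaches the goods.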
Classification
Chip Labels
Customisation of smart l |
https://en.wikipedia.org/wiki/Remote%20control%20locomotive | A remote control locomotive (also called an RCL) is a railway locomotive that can be operated with a remote control. It differs from a conventional locomotive in that a remote control system has been installed in one or more locomotives within the consist, which uses either a mechanical or radio transmitter and receiver system. The locomotive is operated by a person not physically at the controls within the locomotive cab. They have been in use for many years in the railroad industry, including industrial applications such as bulk material load-out, manufacturing, process and industrial switching. The systems are designed to be fail-safe so that if communication is lost the locomotive is brought to a stop automatically.
History
United Kingdom
One of the earliest remote control locomotives was the GWR Autocoach, which replaced the GWR steam rail motors on both operational cost and maintenance grounds. When running 'autocoach first', the regulator is operated by a linkage to a rotating shaft running the length of the locomotive, passing below the cab floor. This engages (via a telescopic coupling) with another shaft running the full length below the floor of the autocoach. This shaft is turned by a second regulator lever in the cab of the autocoach. The driver can operate the regulator, brakes and whistle from the far (cab) end of the autocoach; the fireman remains on the locomotive and (in addition to firing) also controls the valve gear settings. The driver can also warn of the train's approach using a large mechanical gong, prominently mounted high on the cab end of the autocoach, which is operated by stamping on a pedal on the floor of the cab. The driver, guard and fireman communicate with each other by an electric bell system.
North America
In North America remote controlled locomotives have been in use since the 1980s. In 1988, the US Occupational Safety and Health Administration issued a hazard information bulletin regarding their use. By 1999 Canadian Nat |
https://en.wikipedia.org/wiki/Krener%27s%20theorem | In mathematics, Krener's theorem is a result attributed to Arthur J. Krener in geometric control theory about the topological properties of attainable sets of finite-dimensional control systems. It states that any attainable set of a bracket-generating system has nonempty interior or, equivalently, that any attainable set has nonempty interior in the topology of the corresponding orbit. Heuristically, Krener's theorem prohibits attainable sets from being hairy.
Theorem
Let

q̇ = f(q, u)

be a smooth control system, where the state q belongs to a finite-dimensional manifold M and the control u belongs to a control set U. Consider the family of vector fields F = { f(·, u) : u ∈ U }.
Let Lie(F) be the Lie algebra generated by F with respect to the Lie bracket of vector fields.
Given q ∈ M, if the vector space Lie_q(F) = { g(q) : g ∈ Lie(F) } is equal to the tangent space T_qM,
then q belongs to the closure of the interior of the attainable set from q.
Remarks and consequences
Even if Lie_q(F) is different from T_qM,
the attainable set from q has nonempty interior in the orbit topology,
as follows from Krener's theorem applied to the control system restricted to the orbit through q.
When all the vector fields in F are analytic, Lie_q(F) = T_qM if and only if q belongs to the closure of the interior of the attainable set from q. This is a consequence of Krener's theorem and of the orbit theorem.
As a corollary of Krener's theorem one can prove that if the system is bracket-generating and if the attainable set from q is dense in M, then the attainable set from q
is actually equal to M. |
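The bracket-generating condition is easy to check numerically for a concrete system. The example below (a standard unicycle-type system, chosen for illustration and not taken from the article) verifies that f1, f2 and their Lie bracket span the tangent space of R^3 at every point, so Krener's theorem applies:

```python
import math

def jacobian(f, q, h=1e-6):
    """Central-difference Jacobian of the vector field f at the point q."""
    n = len(q)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        qp, qm = list(q), list(q)
        qp[j] += h
        qm[j] -= h
        fp, fm = f(qp), f(qm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def lie_bracket(f, g, q):
    """[f, g](q) = Dg(q) f(q) - Df(q) g(q)."""
    Jf, Jg = jacobian(f, q), jacobian(g, q)
    fv, gv = f(q), g(q)
    n = len(q)
    return [sum(Jg[i][k] * fv[k] - Jf[i][k] * gv[k] for k in range(n))
            for i in range(n)]

# Unicycle: state (x, y, theta), forward drive f1 and steering f2.
f1 = lambda q: [math.cos(q[2]), math.sin(q[2]), 0.0]
f2 = lambda q: [0.0, 0.0, 1.0]

def spans_tangent_space(q):
    a, b, c = f1(q), f2(q), lie_bracket(f1, f2, q)
    # Determinant of the 3x3 matrix with columns a, b, c.
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(det) > 1e-3
```

Here [f1, f2] points in the sideways direction the unicycle cannot drive in directly but can reach by maneuvering, which is exactly the kind of motion that gives the attainable set its nonempty interior.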
https://en.wikipedia.org/wiki/Instant%20rice | Instant rice is a white rice that is partly precooked and then is dehydrated and packed in a dried form similar in appearance to that of regular white rice. That process allows the product to be later cooked as if it were normal rice but with a typical cooking time of 5 minutes, not the 20–30 minutes needed by white rice (or the still greater time required by brown rice). This process was invented by Ataullah K. Ozai‐Durrani in 1939 and mass-marketed by General Foods starting in 1946 as Minute Rice, which is still made.
Instant rice is not the "microwave-ready" rice that is pre-cooked but not dehydrated; such rice is fully cooked and ready to eat, normally after cooking in its sealed package in a microwave oven for as little as 1 minute for a portion. Another distinct product is parboiled rice (also called "converted" rice, a trademark for what was long sold as Uncle Ben's converted rice); brown rice is parboiled to preserve nutrients that are lost in the preparation of white rice, not to reduce cooking time.
Preparation process
Instant rice is made using several methods. The most common method is similar to the home cooking process. The rice is blanched in hot water, steamed, and rinsed. It is then placed in large ovens for dehydration until the moisture content reaches approximately twelve percent or less. The basic principle involves using hot water or steam to form cracks or holes in the kernels before dehydrating. In the subsequent cooking, water can more easily penetrate into the cracked grain, allowing for a short cooking time.
Advantages and disadvantages
The notable advantage of instant rice is the rapid cooking time: some brands can be ready in as little as three minutes. Currently, several companies, Asian as well as American, have developed brands which only require 90 seconds to cook, much like a cup of instant noodles.
However, instant rice is more expensive than regular white rice due to the cost of the processing. The "cracking" process can |
https://en.wikipedia.org/wiki/Fermentation%20in%20winemaking | The process of fermentation in winemaking turns grape juice into an alcoholic beverage. During fermentation, yeasts transform sugars present in the juice into ethanol and carbon dioxide (as a by-product). In winemaking, the temperature and speed of fermentation are important considerations as well as the levels of oxygen present in the must at the start of the fermentation. The risk of stuck fermentation and the development of several wine faults can also occur during this stage, which can last anywhere from 5 to 14 days for primary fermentation and potentially another 5 to 10 days for a secondary fermentation. Fermentation may be done in stainless steel tanks, which is common with many white wines like Riesling, in an open wooden vat, inside a wine barrel and inside the wine bottle itself as in the production of many sparkling wines.
History
The natural occurrence of fermentation means it was probably first observed long ago by humans. The earliest uses of the word "fermentation" in relation to winemaking were in reference to the apparent "boiling" within the must that came from the anaerobic reaction of the yeast to the sugars in the grape juice and the release of carbon dioxide. The Latin fervere means, literally, to boil. In the mid-19th century, Louis Pasteur noted the connection between yeast and the process of fermentation, in which the yeast acts as catalyst and mediator through a series of reactions that convert sugar into alcohol. The discovery of the Embden–Meyerhof–Parnas pathway by Gustav Embden, Otto Fritz Meyerhof and Jakub Karol Parnas in the early 20th century contributed more to the understanding of the complex chemical processes involved in the conversion of sugar to alcohol. In the early 2010s, the New Jersey-based wine tech company GOfermentor invented an automated winemaking device that ferments in single-use liners similar to a single-use bioreactor.
Process
In winemaking, there are distinctions made between ambient yeasts which are natur |
https://en.wikipedia.org/wiki/Serial%20computer | A serial computer is a computer typified by a bit-serial architecture, i.e., one that internally operates on one bit or digit per clock cycle. Machines with serial main storage devices, such as acoustic or magnetostrictive delay lines and rotating magnetic devices, were usually serial computers.
Serial computers require much less hardware than their parallel counterparts, but are much slower. Modern variants of the serial computer are available as soft microprocessors, which can serve niche purposes where the size of the CPU is the main constraint.
The first computer that was not serial and used a parallel bus was the Whirlwind in 1951.
A serial computer is not necessarily the same as a computer with a 1-bit architecture, which is a subset of the serial computer class. 1-bit computer instructions operate on data consisting of single bits, whereas a serial computer can operate on N-bit data widths, but does so a single bit at a time.
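The bit-at-a-time operation described above can be made concrete with a bit-serial adder: one full adder and a one-bit carry register process an N-bit addition in N clock cycles. The sketch below is illustrative and models no particular machine:

```python
def serial_add(a, b, width):
    """Add two width-bit integers one bit per 'clock cycle', least
    significant bit first, using a single full adder and a 1-bit
    carry register -- the way a bit-serial ALU would."""
    carry = 0
    result = 0
    for cycle in range(width):               # one bit position per cycle
        x = (a >> cycle) & 1                 # next bit of each operand
        y = (b >> cycle) & 1
        s = x ^ y ^ carry                    # full-adder sum bit
        carry = (x & y) | (carry & (x ^ y))  # full-adder carry out
        result |= s << cycle                 # shift the sum bit into the result
    return result                            # wraps modulo 2**width
```

A parallel adder computes all the bits in one cycle using width full adders; the serial version trades those gates for time, which is exactly the hardware-versus-speed trade-off noted above.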
Serial machines
EDVAC (1949)
BINAC (1949)
SEAC (1950)
UNIVAC I (1951)
Elliott Brothers Elliott 152 (1954)
Bendix G-15 (1956)
LGP-30 (1956)
Elliott Brothers Elliott 803 (1958)
ZEBRA (1958)
D-17B guidance computer (1962)
PDP-8/S (1966)
General Electric GE-PAC 4040 process control computer
F14 CADC (1970) transferred all data serially, but internally operated on many bits in parallel
Kenbak-1 (1971)
Datapoint 2200 (1971)
HP-35 (1972)
Digit-serial HP Saturn-based calculators from the HP-71B (1984) to the HP 50g (2006–2015)
National Semiconductor SC/MP (1976)
Massively parallel
Most of the early massively parallel processing machines were built out of individual serial processors, including:
ICL Distributed Array Processor (1979)
Goodyear MPP (1983)
Connection Machine CM-1 (1985)
Connection Machine CM-2 (1987)
MasPar MP-1 (1990) 32-bit architecture, internally processed 4 bits at a time
VIRAM1 computational RAM (2003)
See also
1-bit computing
BKM algorithm
CORDIC algorithm |
https://en.wikipedia.org/wiki/Chip%20Authentication%20Program | The Chip Authentication Program (CAP) is a MasterCard initiative and technical specification for using EMV banking smartcards for authenticating users and transactions in online and telephone banking. It was also adopted by Visa as Dynamic Passcode Authentication (DPA). The CAP specification defines a handheld device (CAP reader) with a smartcard slot, a numeric keypad, and a display capable of displaying at least 12 characters (e.g., a starburst display). Banking customers who have been issued a CAP reader by their bank can insert their Chip and PIN (EMV) card into the CAP reader in order to participate in one of several supported authentication protocols. CAP is a form of two-factor authentication as both a smartcard and a valid PIN must be present for a transaction to succeed. Banks hope that the system will reduce the risk of unsuspecting customers entering their details into fraudulent websites after reading so-called phishing emails.
Operating principle
The CAP specification supports several authentication methods. The user first inserts their smartcard into the CAP reader and enables it by entering the PIN. A button is then pressed to select the transaction type. Most readers have two or three transaction types available to the user under a variety of names. Some known implementations are:
Code/identify: Without requiring any further input, the CAP reader interacts with the smartcard to produce a decimal one-time password, which can be used, for example, to log into a banking website.
Response: This mode implements challenge–response authentication, where the bank's website asks the customer to enter a "challenge" number into the CAP reader, and then copy the "response" number displayed by the CAP reader into the web site.
Sign: This mode is an extension of the previous one, where not only a random "challenge" value, but also crucial transaction details such as the transferred value, the currency, and the recipient's account number have to be typed into the CAP reader |
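In the real protocol the one-time codes are derived from an EMV application cryptogram computed inside the card under its secret key. The sketch below only mimics the shape of the challenge–response exchange, substituting HMAC-SHA-256 for the card's cryptogram; all names and parameters are illustrative assumptions:

```python
import hmac
import hashlib

def response_code(card_key: bytes, challenge: str, digits: int = 8) -> str:
    """Toy CAP-style response: MAC the challenge under the card's secret
    key and truncate to a short decimal code a human can retype.
    Real CAP uses an EMV cryptogram generated by the smartcard, not HMAC."""
    mac = hmac.new(card_key, challenge.encode(), hashlib.sha256).digest()
    return str(int.from_bytes(mac[:8], "big") % 10**digits).zfill(digits)

def verify(card_key: bytes, challenge: str, code: str) -> bool:
    """The bank, holding the same key, recomputes the code and compares."""
    return hmac.compare_digest(response_code(card_key, challenge), code)
```

Because the code depends on both the secret key and the bank-chosen challenge, a code phished for one challenge is useless for any other.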
https://en.wikipedia.org/wiki/Locust%20bean%20gum | Locust bean gum (LBG, carob gum, carob bean gum, carobin, E410) is a galactomannan vegetable gum extracted from the seeds of the carob tree (Ceratonia siliqua) and used as a thickening agent (gelling agent) in food technology.
Production
Locust bean gum is extracted from the seeds of the carob tree, which is native to the Mediterranean region. In 2016, nearly 75% of global production came from Portugal, Italy, Spain and Morocco. The seeds are contained within long pods that grow on the tree. First, the pods are kibbled to separate the seed from the pulp. Then, the seeds have their skins removed by an acid or heat treatment. Acid treatment yields a lighter coloured gum than heat treatment. The deskinned seed is then split and gently milled. This causes the brittle germ to break up while not affecting the more robust endosperm. The two are separated by sieving. The separated endosperm can then be milled by a roller operation to produce the final locust bean gum powder. Alternatively, the gum can be extracted from the seeds with water, precipitated with alcohol, filtered, dried and milled, to give a very pure "clarified" locust bean gum.
Chemistry
Locust bean gum occurs as a white to yellow-white powder. It consists chiefly of high-molecular-weight hydrocolloidal polysaccharides, composed of galactose and mannose units combined through glycosidic linkages, which may be described chemically as galactomannan. It is dispersible in either hot or cold water, forming a sol having a pH between 5.4 and 7.0, which may be converted to a gel by the addition of small amounts of sodium borate. Locust bean gum is composed of a straight backbone chain of D-mannopyranose units with a side-branching unit of D-galactopyranose having an average of one D-galactopyranose unit branch on every fourth D-mannopyranose unit.
Food science
The bean, when made into powder, is sweet—with a flavor similar to chocolate—and is used to sweeten foods and as a chocolate substitute, although t |
https://en.wikipedia.org/wiki/Comparison%20of%20machine%20translation%20applications | Machine translation uses computer algorithms to translate text or speech from one natural language to another.
General information
Basic general information for popular machine translation applications.
Languages features comparison
The following table compares the number of languages which the following machine translation programs can translate between.
(Moses and Moses for Mere Mortals allow you to train translation models for any language pair, though collections of translated texts (a parallel corpus) need to be provided by the user. The Moses site provides links to training corpora.)
This is not an all-encompassing list. Some applications have many more language pairs than those listed below. This is a general comparison of key languages only. A full and accurate list of language pairs supported by each product should be found on each of the product's websites.
See also
Machine translation
Machine translation software usability
Computer-assisted translation
Comparison of computer-assisted translation tools
External links
Apertium wiki (list of language pairs and licence information)
Xerox Easy Translator Service (list of language pairs)
Bing Translator Language List
Haitian Creole support in Bing/Microsoft Translator
Microsoft Research: Syntactically Informed Phrasal SMT
List of supported languages in Google Translate |
https://en.wikipedia.org/wiki/Jump-and-Walk%20algorithm | Jump-and-Walk is an algorithm for point location in triangulations (though most of the theoretical analysis has been performed for 2D and 3D random Delaunay triangulations). Notably, the algorithm needs no preprocessing or complex data structures beyond some simple representation of the triangulation itself. Its predecessors, due to Lawson (1977) and Green and Sibson (1978), pick a random starting point S and then walk from S toward the query point Q one triangle at a time; no theoretical analysis of these predecessors was known until the mid-1990s.
Jump-and-Walk picks a small group of sample points and starts the walk from the sample point closest to Q, continuing until the simplex containing Q is found. The algorithm was folklore in practice for some time; the formal presentation of the algorithm and the analysis of its performance on 2D random Delaunay triangulations were done by Devroye, Mucke and Zhu in the mid-1990s (the paper appeared in Algorithmica, 1998). The analysis on 3D random Delaunay triangulations was done by Mucke, Saias and Zhu (ACM Symposium on Computational Geometry, 1996). In both cases, a boundary condition was assumed, namely, Q must be slightly away from the boundary of the convex domain from which the vertices of the random Delaunay triangulation are drawn. In 2004, Devroye, Lemaire and Moreau showed that in 2D the boundary condition can be removed (the paper appeared in Computational Geometry: Theory and Applications, 2004).
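The jump and the walk can be sketched as follows. This is a minimal 2D illustration under assumed data structures (a counter-clockwise triangle list with neighbor pointers and a vertex-to-incident-triangle map); the names and the edge-crossing rule are illustrative, not taken from the cited papers:

```python
import random

def orient(a, b, c):
    """Twice the signed area of triangle abc; > 0 iff c lies left of a->b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def walk(pts, tris, neighbors, start, q):
    """Walk from triangle index `start` toward q, one triangle at a time.
    neighbors[t][i] is the triangle across the edge opposite vertex slot i."""
    t = start
    while True:
        i0, i1, i2 = tris[t]
        a, b, c = pts[i0], pts[i1], pts[i2]
        # each directed edge is paired with the slot of its opposite vertex
        for u, v, opp in ((a, b, 2), (b, c, 0), (c, a, 1)):
            if orient(u, v, q) < 0:        # q lies outside this edge
                t = neighbors[t][opp]      # step into the adjacent triangle
                if t is None:
                    return None            # q is outside the triangulation
                break
        else:
            return t                       # q passed all three edge tests

def jump_and_walk(pts, tris, neighbors, vert_tri, q, samples=3, rng=random):
    """Jump: sample a few vertices, then walk from the one closest to q."""
    picks = rng.sample(range(len(pts)), min(samples, len(pts)))
    v = min(picks, key=lambda i: (pts[i][0] - q[0]) ** 2 + (pts[i][1] - q[1]) ** 2)
    return walk(pts, tris, neighbors, vert_tri[v], q)
```

For a unit square split along one diagonal into triangles (0,1,2) and (0,2,3), a query in the upper-left half is located in the second triangle, and a query outside the square returns None.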
Jump-and-Walk has been used in many famous software packages, e.g., QHULL, Triangle and CGAL. |
https://en.wikipedia.org/wiki/Murray%20Batchelor | Murray Thomas Batchelor (born 27 August 1961) is an Australian mathematical physicist. He is best known for his work in mathematical physics and theoretical physics.
Academic career
Batchelor was educated at Chatham Public School and Chatham High School (Taree, New South Wales). He completed an Honours degree in Theoretical Physics at the University of New South Wales in 1983, graduating with 1st class honours and a University Medal. Batchelor completed a PhD in Mathematics at the Australian National University in 1987.
His first postdoctoral research position was at the Lorentz Institute in Leiden. After a time as a postdoctoral research fellow in mathematics at the University of Melbourne he took up an Australian Research Council QEII Fellowship at the Australian National University. He then was awarded two successive ARC Senior Research Fellowships, followed by an ARC Professorial Fellowship in 2003.
Batchelor served as Head of the Department of Theoretical Physics from mid-2005 to March 2013. He has held visiting positions at a number of universities, including the University of Oxford, the University of Tokyo and Institut Henri Poincaré. He held a Visiting Fellowship at All Souls College, Oxford during Michaelmas Term 2013.
During his career, Batchelor has published over 150 peer-reviewed papers. He is a Fellow of the Australian Mathematical Society, the Australian Institute of Physics and the Institute of Physics (UK).
Batchelor was Editor-in-Chief of Journal of Physics A: Mathematical and Theoretical. Prior to this he served as Mathematical Physics Section Editor (2007–2008) and as a member of the Editorial Board (2005–2006). He is currently Topical Reviews Editor (2014-).
In 2008 Batchelor was awarded an Honorary Professorship at Chongqing University, China. He took up a full-time position there in 2013 under the 1000 Talents Plan. He is a General Council Member of the Asia-Pacific Center for Theoretical Physics.
He holds a part-time position at the A |
https://en.wikipedia.org/wiki/Boris%20Chertok | Boris Yevseyevich Chertok (1 March 1912 – 14 December 2011) was a Russian engineer in the former Soviet space program, mainly working in control systems, who later found employment in Roscosmos.
His primary responsibility was the development of computerized control systems for Soviet missiles and rockets, and he authored the four-volume book Rockets and People, the definitive source of information about the history of the Soviet space program.
From 1974, he was the deputy chief designer of the Korolev design bureau, the space aircraft designer bureau which he started working for in 1946. He retired in 1992.
Personal life
Born in Łódź (in modern Poland), he moved with his family to Moscow at the age of three. From 1930 he worked as an electrician in a Moscow suburb, and from 1934 he designed military aircraft at the Bolkhovitinov design bureau. In 1946, he joined the rocket-pioneering NII-88 as head of its control systems department, working alongside Sergei Korolev, whose deputy he became after OKB-1 spun off from NII-88 in 1956.
He was married to Yekaterina Semyonovna Golubkina. He was an atheist.
Rockets and People
Between 1994 and 1999 Boris Chertok, with support from his wife Yekaterina Golubkina, created the four-volume book series about the history of the Soviet space industry. The series was originally published in Russian, in 1999.
Черток Б.Е. Ракеты и люди — М.: Машиностроение, 1999. (B. Chertok, Rockets and People)
Черток Б.Е. Ракеты и люди. Фили — Подлипки — Тюратам — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Fili — Podlipki — Tyuratam)
Черток Б.Е. Ракеты и люди. Горячие дни холодной войны — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. Hot Days of the Cold War)
Черток Б.Е. Ракеты и люди. Лунная гонка — М.: Машиностроение, 1999. (B. Chertok, Rockets and People. The Moon Race)
Translation into English
NASA's History Division published four translated and somewhat edited volumes of the s |
https://en.wikipedia.org/wiki/Aspartame-acesulfame%20salt | Aspartame-acesulfame salt is an artificial sweetener marketed under the name Twinsweet. It is produced by soaking a 2:1 mixture of aspartame and acesulfame potassium in an acidic solution and allowing it to crystallize; moisture and potassium are removed during this process. It is approximately 350 times as sweet as sucrose. It has been given the E number E962.
History
Aspartame-acesulfame salt was invented in 1995 by sweetener expert Dr John Fry while working for The Holland Sweetener Company (HSC), a subsidiary of DSM. HSC marketed it under the name Twinsweet. It was approved for use as an artificial sweetener in European Parliament and Council Directive 94/35/EC, as amended by Directive 2003/115/EC in 2003. In North America it falls under the same regulations as aspartame and acesulfame-K. It is also approved for use in China, Russia, Hong Kong, Australia and New Zealand.
In December 2006 HSC ceased all of its aspartame operations, citing a glut in the market driving prices below profitable values. The rights to aspartame-acesulfame are now owned by The NutraSweet Company Inc who have continued to market the sweetener successfully in the USA and EU. |
https://en.wikipedia.org/wiki/Nonnegative%20rank%20%28linear%20algebra%29 | In linear algebra, the nonnegative rank of a nonnegative matrix is a concept similar to the usual linear rank of a real matrix, but adding the requirement that certain coefficients and entries of vectors/matrices have to be nonnegative.
For example, the linear rank of a matrix is the smallest number of vectors, such that every column of the matrix can be written as a linear combination of those vectors. For the nonnegative rank, it is required that the vectors must have nonnegative entries, and also that the coefficients in the linear combinations are nonnegative.
Formal definition
There are several equivalent definitions, all modifying the definition of the linear rank slightly. Apart from the definition given above, there is the following: The nonnegative rank of a nonnegative m×n-matrix A is equal to the smallest number q such that there exists a nonnegative m×q-matrix B and a nonnegative q×n-matrix C such that A = BC (the usual matrix product). To obtain the linear rank, drop the condition that B and C must be nonnegative.
Further, the nonnegative rank is the smallest number q of nonnegative rank-one matrices into which the matrix can be decomposed additively:

A = R1 + R2 + ⋯ + Rq,

where Rj ≥ 0 stands for "Rj is nonnegative". (To obtain the usual linear rank, drop the condition that the Rj have to be nonnegative.)
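Computing the nonnegative rank exactly is NP-hard in general, but for a chosen inner dimension q the factorization A ≈ BC can be approximated numerically. The sketch below uses the classical Lee–Seung multiplicative updates; the function name and defaults are illustrative, and the result is an approximate rank-q nonnegative factorization, not a nonnegative-rank computation:

```python
import numpy as np

def nmf(A, q, iters=5000, seed=0):
    """Approximate a nonnegative matrix A as B @ C with nonnegative
    factors B (m x q) and C (q x n), via Lee-Seung multiplicative
    updates. Only approximates a factorization for a *given* q; it
    does not determine the nonnegative rank itself."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    B = rng.random((m, q)) + 0.1   # strictly positive init
    C = rng.random((q, n)) + 0.1
    eps = 1e-12                    # guard against division by zero
    for _ in range(iters):
        C *= (B.T @ A) / (B.T @ B @ C + eps)   # update C, stays nonnegative
        B *= (A @ C.T) / (B @ C @ C.T + eps)   # update B, stays nonnegative
    return B, C
```

Multiplicative updates preserve nonnegativity by construction, since each factor is only ever multiplied entrywise by nonnegative ratios.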
Given a nonnegative m×n-matrix A, the nonnegative rank of A satisfies

rank(A) ≤ rank+(A) ≤ min(m, n),

where rank(A) denotes the usual linear rank of A.
A Fallacy
The rank of the matrix A is the largest number of columns which are linearly independent, i.e., none of the selected columns can be written as a linear combination of the other selected columns. It is not true that adding nonnegativity to this characterization gives the nonnegative rank: The nonnegative rank is in general less than or equal to the largest number of columns such that no selected column can be written as a nonnegative linear combination of the other selected columns.
Connection with the linear rank
It is always true that rank(A) ≤ rank+(A). In fact rank+(A) = rank(A) ho |
https://en.wikipedia.org/wiki/Skin%20biopsy | Skin biopsy is a biopsy technique in which a skin lesion is removed and sent to a pathologist to render a microscopic diagnosis. It is usually done under local anesthetic in a physician's office, and results are often available in 4 to 10 days. It is commonly performed by dermatologists. Skin biopsies are also done by family physicians, internists, surgeons, and other specialists. However, if a biopsy is performed incorrectly, or without appropriate clinical information, a pathologist's interpretation of it can be severely limited, and therefore doctors and patients may forgo traditional biopsy techniques and instead choose Mohs surgery. There are four main types of skin biopsy: shave biopsy, punch biopsy, excisional biopsy, and incisional biopsy. The choice among them depends on the suspected diagnosis of the skin lesion. Like most biopsies, patient consent and anesthesia (usually lidocaine injected into the skin) are prerequisites.
Types
Shave biopsy
A shave biopsy is done with either a small scalpel blade or a curved razor blade. The technique is very much user skill dependent, as some surgeons can remove a small fragment of skin with minimal blemish using any one of the above tools, while others have great difficulty securing the devices. Ideally, the razor will shave only a small fragment of protruding tumor and leave the skin relatively flat after the procedure. Hemostasis is obtained using light electrocautery, Monsel's solution, or aluminum chloride. This is the ideal method of diagnosis for basal cell cancer. It can be used to diagnose squamous cell carcinoma and melanoma-in-situ, however, the doctor's understanding of the growth of these last two cancers should be considered before one uses the shave method. The punch or incisional method is better for the latter two cancers as a false negative is less likely to occur (i.e. calling a squamous cell cancer an actinic keratosis or keratinous debris). Hemostasis for the |
https://en.wikipedia.org/wiki/Anatomic%20space | In anatomy, a spatium or anatomic space is a space (cavity or gap). Anatomic spaces are often landmarks to find other important structures. When they fill with gases (such as air) or liquids (such as blood) in pathological ways, they can suffer conditions such as pneumothorax, edema, or pericardial effusion. Many anatomic spaces are potential spaces, which means that they are potential rather than realized (with their realization being dynamic according to physiologic or pathophysiologic events). In other words, they are like an empty plastic bag that has not been opened (two walls collapsed against each other; no interior volume until opened) or a balloon that has not been inflated.
Examples of anatomic spaces (or potential spaces) include:
Axillary space
Buccal space
Canine space
Cystohepatic triangle
Deep perineal space
Deep temporal space
Epidural space
Extraperitoneal space
Fascial spaces of the head and neck
Infratemporal space
Intercostal space
Intermembrane space
Interstitial spaces
Mental space
Pericardial space
Intraperitoneal space
Pleural space
Potential space
Pterygomandibular space
Quadrangular space
Retroperitoneal space
Retropharyngeal space
Retropubic space
Subarachnoid space
Subdural space
Sublingual space
Submandibular space
Submasseteric space
Traube's space
See also
Body cavity
Anatomy |
https://en.wikipedia.org/wiki/Loop-mediated%20isothermal%20amplification | Loop-mediated isothermal amplification (LAMP) is a single-tube technique for the amplification of DNA and a low-cost alternative to detect certain diseases. Reverse transcription loop-mediated isothermal amplification (RT-LAMP) combines LAMP with a reverse transcription step to allow the detection of RNA.
LAMP is an isothermal nucleic acid amplification technique. In contrast to the polymerase chain reaction (PCR) technology, in which the reaction is carried out with a series of alternating temperature steps or cycles, isothermal amplification is carried out at a constant temperature, and does not require a thermal cycler.
Technique
In LAMP, the target sequence is amplified at a constant temperature of 60–65 °C (140–149 °F) using either two or three sets of primers and a polymerase with high strand displacement activity in addition to a replication activity. Typically, 4 different primers are used to amplify 6 distinct regions on the target gene, which increases specificity. An additional pair of "loop primers" can further accelerate the reaction.
The amount of DNA produced in LAMP is considerably higher than in PCR-based amplification. Primer design can be performed using several programs, such as PrimerExplorer, MorphoCatcher, and the NEB LAMP Primer Design Tool. For screening conserved and species-specific nucleotide polymorphisms in most diagnostic applications, a combination of PrimerExplorer and MorphoCatcher is useful, because it allows species-specific nucleotides to be placed at the 3'-ends of primers, enhancing the specificity of the reaction.
The amplification product can be detected via photometry, measuring the turbidity caused by magnesium pyrophosphate precipitate in solution as a byproduct of amplification. This allows easy visualization by the naked eye or via simple photometric detection approaches for small volumes.
The reaction can be followed in real-time either by measuring the turbidity
or by fluorescence using intercalating d |
https://en.wikipedia.org/wiki/Cryptocoryne%20%C3%97%20willisii | Cryptocoryne × willisii is a plant in the family Araceae.
Synonyms
Cryptocoryne nevillii (a valid but different species); C. lucens is now considered a hybrid in the C. × willisii complex
Taxonomy
In 1976 Niels Jacobsen proposed the change of name of C. nevillii to C. × willisii. This plant is a natural hybrid, so the name is spelt with a cross: C. × willisii.
Distribution
Sri Lanka (Kandy area)
Description
The leaves and even the spathe of this plant are variable (no doubt due to its hybrid nature). The Crypts page (see link below) illustrates a number of different forms, which makes identification difficult. C. × willisii is one of the smaller members of the genus, along with Cryptocoryne parva, and may only reach up to 5 cm in size.
Cultivation
It is not difficult to grow and is common in aquariums, but some forms do not seem to flower. |
https://en.wikipedia.org/wiki/Elpistostege | Elpistostege is an extinct genus of finned tetrapodomorphs that lived during the Frasnian age of the Late Devonian epoch. Its only known species, E. watsoni, was first described in 1938 by the British palaeontologist Thomas Stanley Westoll, based on a single partial skull roof discovered at the Escuminac Formation in Quebec, Canada.
In 2010, a complete specimen was found in the same formation, which was described by Richard Cloutier and colleagues in 2020. It reveals that the paired fins of Elpistostege contained bones homologous to the phalanges (digit bones) of modern tetrapods; it is the most basal tetrapodomorph known to possess these bones. At the same time, the fins were covered in scales and lepidotrichia (fin rays), which indicates that the origin of phalanges preceded the loss of fin rays, rather than the other way around.
Relationships
An analysis conducted by Swartz in 2012 found Elpistostege to be the sister taxon of Tiktaalik. Both were found to be primitive members of the group Stegocephalia, along with other advanced stem-tetrapods.
The 2020 study by Cloutier et al. instead recovers Elpistostege as the sister taxon of all limbed vertebrates, crownward of Tiktaalik. |
https://en.wikipedia.org/wiki/Southwest%20Approaches | The Southwest Approaches is the name given to the offshore waters to the southwest of Great Britain and Ireland. The area includes the Celtic Sea, the Bristol Channel and sea areas off southwest Ireland. The area is bordered on the north by the St. George's Channel, on the southeast by the English Channel, and on the west by the Atlantic Ocean. |
https://en.wikipedia.org/wiki/Packing%20dimension | In mathematics, the packing dimension is one of a number of concepts that can be used to define the dimension of a subset of a metric space. Packing dimension is in some sense dual to Hausdorff dimension, since packing dimension is constructed by "packing" small open balls inside the given subset, whereas Hausdorff dimension is constructed by covering the given subset by such small open balls. The packing dimension was introduced by C. Tricot Jr. in 1982.
Definitions
Let (X, d) be a metric space with a subset S ⊆ X and let s ≥ 0 be a real number. The s-dimensional packing pre-measure of S is defined to be

P_0^s(S) = limsup_{δ → 0} sup { Σ_i diam(B_i)^s : {B_i} is a countable collection of pairwise disjoint closed balls with centers in S and diameters at most δ }.

Unfortunately, this is just a pre-measure and not a true measure on subsets of X, as can be seen by considering dense, countable subsets. However, the pre-measure leads to a bona fide measure: the s-dimensional packing measure of S is defined to be

P^s(S) = inf { Σ_j P_0^s(S_j) : S ⊆ ⋃_j S_j },

i.e., the packing measure of S is the infimum of the packing pre-measures of countable covers of S.

Having done this, the packing dimension dim_P(S) of S is defined analogously to the Hausdorff dimension:

dim_P(S) = sup { s ≥ 0 : P^s(S) = +∞ } = inf { s ≥ 0 : P^s(S) = 0 }.
An example
The following example is the simplest situation where Hausdorff and packing dimensions may differ.
Fix a sequence (a_n) of positive reals such that a_0 = 1 and 0 < 2a_{n+1} < a_n. Define inductively a nested sequence C_0 ⊇ C_1 ⊇ C_2 ⊇ … of compact subsets of the real line as follows: Let C_0 = [0, 1]. For each connected component of C_n (which will necessarily be an interval of length a_n), delete the middle interval of length a_n − 2a_{n+1}, obtaining two intervals of length a_{n+1}, which will be taken as connected components of C_{n+1}. Next, define C = ⋂_n C_n. Then C is topologically a Cantor set (i.e., a compact totally disconnected perfect space). For example, C will be the usual middle-thirds Cantor set if a_n = 3^{−n}.
It is possible to show that the Hausdorff and the packing dimensions of the set C are given respectively by:

dim_H(C) = liminf_{n → ∞} (n log 2) / (−log a_n),   dim_P(C) = limsup_{n → ∞} (n log 2) / (−log a_n).
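For the middle-thirds choice a_n = 3^{−n}, both dimensions reduce to the same familiar value:

```latex
\dim_H(C) \;=\; \dim_P(C)
  \;=\; \lim_{n \to \infty} \frac{n \log 2}{-\log a_n}
  \;=\; \lim_{n \to \infty} \frac{n \log 2}{n \log 3}
  \;=\; \frac{\log 2}{\log 3} \approx 0.6309.
```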
It follows easily that given numbers 0 ≤ d_1 ≤ d_2 ≤ 1, one can choose a sequence (a_n) as above such that the associated (topological) Cantor set has Hausdorff dimension d_1 and packing dimension d_2.
Generalizations
One can co |
https://en.wikipedia.org/wiki/Brazilian%20Health%20Regulatory%20Agency | The Brazilian Health Regulatory Agency (, Anvisa, literally National Health Surveillance Agency) is a regulatory body of the Brazilian government, created in 1999 during President Fernando Henrique Cardoso's term of office. It is responsible for the regulation and approval of pharmaceutical drugs, sanitary standards and regulation of the food industry.
The agency bills itself as "an independently administered, financially autonomous" regulatory body. It is administered by a five-member collegiate board of directors, who oversee five thematic directorates, assisted by a five-tier oversight structure. Since September 2018 the agency has been headed by Antonio Barra Torres.
Pesticide approvals and monitoring
Brazil is the world's largest consumer of pesticides. They are primarily used in the production of soy and corn. The number of approved pesticides increased "rapidly" between 2015 and 2019. Tereza Cristina, the agriculture minister, noted that "there is no general liberation" of new pesticide registrations and no reason for concern when pesticides are used as instructed.
The agency also runs a program for checking pesticide levels in food crops sold in supermarkets. However, by May 2022 the agency had gone three years without publishing its results, citing the COVID-19 pandemic. It also declined to publish partial results from the last tests, performed in 2018 and 2019.
See also
Regulation of therapeutic goods
Brazilian Nonproprietary Name
Epidemic Intelligence Service
World Health Organization |
https://en.wikipedia.org/wiki/Check%20%28pattern%29 | Check (also checker, Brit: chequer, or dicing) is a pattern of modified stripes consisting of crossed horizontal and vertical lines which form squares. The pattern typically contains two colours where a single checker (that is a single square within the check pattern) is surrounded on all four sides by a checker of a different colour.
The pattern is commonly placed onto garments and is, in certain social contexts, applied to clothing worn to signify cultural or political affiliations, as with check in ska and on the keffiyeh. The pattern's pervasiveness and simple layout have lent it to practical use in scientific experimentation and observation, optometry, technology (hardware and software), and as a symbol to which responders attach meaning.
Etymology
The word is derived from the ancient Persian word shah, meaning "king", via the Sasanian game of shatranj, an old form of chess played on a squared board of alternating coloured squares. More specifically, it derives from the expression shah mat, "the king is dead", which in modern chess parlance became "checkmate". The word entered the French language in the eleventh century, and thence passed into English.
History
The incorporation of the checkerboard pattern in man-made objects has no definitive origin as the pattern has existed in assorted forms with multiple variations across continents and time periods. There are few known instances of its import into the regions and cultures in which it is featured. Its design and incorporation by humans into pattern-making and weaving precedes its common etymological characterisation and derivation from the word shah in chess; the language conventions from which the contemporary English word 'check' is extracted are younger than some appearances of the pattern or its variations. Human uses for check predate its notable usage on the checkerboard in the board game chess, which was developed in its chaturanga iteration in the la |
https://en.wikipedia.org/wiki/Archive%20for%20Rational%20Mechanics%20and%20Analysis | The Archive for Rational Mechanics and Analysis is a scientific journal that is devoted to research in mechanics as a deductive, mathematical science. The current editors in chief of the journal are Felix Otto and Vladimir Sverak. It was founded in 1956 by Clifford Truesdell when he moved from Indiana University to Johns Hopkins and lost control of a similar journal he had founded a few years previously, the Journal of Rational Mechanics and Analysis (now the Indiana University Mathematics Journal).
Gianfranco Capriz writes that Truesdell's ideals of mathematical and typesetting rigor gave the new journal a high reputation:
James Serrin, a later editor of the Archive, adds that it became the center of a revival of mechanics as an academic discipline, and that by the time of Truesdell's retirement as editor in 1989 subscribing to it was "necessary for every fine scientific library". |
https://en.wikipedia.org/wiki/Sponge%20and%20dough | The sponge and dough method is a two-step bread making process: in the first step a sponge is made and allowed to ferment for a period of time, and in the second step the sponge is added to the final dough's ingredients, creating the total formula. In this usage, synonyms for sponge are yeast starter or yeast pre-ferment. In French baking the sponge and dough method is known as levain-levure. The method is reminiscent of the sourdough or levain methods; however, the sponge is made from all fresh ingredients prior to being used in the final dough.
Method
Making a sponge ferment is usually a sticky process; it uses part of the flour, part or all of the water, and part or all of the yeast of a total- or straight-dough formula. Highly liquid sponges of batter consistency are mixed with a whip, spoon, or fork. Lower-hydration, stiffer sponges are lightly mixed or kneaded just until the dough begins to develop. The sponge is allowed to rest and ferment for a period of time in an environment of a desired temperature and humidity. When the sponge's fermentation time has elapsed or it has reached a desired volumetric growth characteristic, the final dough's ingredients are added. The gluten is developed in the mixing or kneading process, and it may then be processed through further work and rest cycles before being proofed, then baked.
The sum of the sponge and final dough's ingredients represents the total formula. A generic 65% pre-fermented flour sponge-and-dough formula using bakers' percentages follows:
{| style="text-align:center;"
|-
! ||Sponge %||&||(Final) Dough %||=||Total Formula
|-
| align=left|Flour||65||+||35||=||100.00%
|-
| align=left|Water||40||+||25||=||65.00%
|-
| align=left|Sugar||0||+||6||=||6.00%
|-
| align=left|Milk solids||0||+||3||=||3.00%
|-
| align=left|Fat||0||+||3||=||3.00%
|-
| align=left|Yeast||2.4||+||0||=||2.40%
|-
| align=left|Salt||0||+||2.3||=||2.30%
|-
| align=left colspan=6 style="font-size:80%;line-height:3em"|adapted from Young and Cauv |
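The total-formula column in a table like the one above is just element-wise addition of the sponge and final-dough columns, with every ingredient expressed in bakers' percentages (total flour = 100). A minimal sketch using the figures from the table:

```python
# Bakers' percentages: every ingredient is stated relative to total flour = 100.
sponge = {"flour": 65.0, "water": 40.0, "yeast": 2.4}
dough = {"flour": 35.0, "water": 25.0, "sugar": 6.0,
         "milk solids": 3.0, "fat": 3.0, "salt": 2.3}

# Total formula = sponge + final dough, ingredient by ingredient.
total = {k: sponge.get(k, 0.0) + dough.get(k, 0.0)
         for k in sponge.keys() | dough.keys()}

# The "65% pre-fermented flour" label: share of all flour fermented in the sponge.
pre_fermented = sponge["flour"] / total["flour"]  # 0.65
```

Note that hydration, salt, and yeast percentages always refer to the total flour, which is why the two columns can simply be summed.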
https://en.wikipedia.org/wiki/Demographic%20gravitation | Demographic gravitation is a concept of "social physics", introduced by Princeton University astrophysicist John Quincy Stewart in 1947. It is an attempt to use equations and notions of classical physics, such as gravity, to seek simplified insights and even laws of demographic behaviour for large numbers of human beings. A basic conception within it is that large numbers of people, in a city for example, actually behave as an attractive force for other people to migrate there. It has been related to W. J. Reilly's law of retail gravitation, George Kingsley Zipf's Demographic Energy, and to the theory of trip distribution through gravity models.
Writing in the journal Sociometry, Stewart set out an "agenda for social physics." Comparing the microscopic versus macroscopic viewpoints in the methodology of formulating physical laws, he made an analogy with the social sciences:
Fortunately for physics, the macroscopic approach was the commonsense one, and the early investigators Boyle, Charles, Gay-Lussac were able to establish the laws of gases. The situation with respect to "social physics" is reversed...
If Robert Boyle had taken the attitude of many social scientists, he would not have been willing to measure the pressure and volume of a sample of air until an encyclopedic history of its molecules had been compiled. Boyle did not even know that air contained argon and helium but he found a very important law.
Stewart proceeded to apply Newtonian formulae of gravitation to that of "the average interrelations of people" on a wide geographic scale, elucidating such notions as "the demographic force of attraction," demographic energy, force, potential and gradient.
Key equations
The following are some of the key equations (with plain-English paraphrases) from his article in Sociometry:
(Demographic force F = (N1 × N2) / d²: population 1 multiplied by population 2, divided by the distance squared)
(Demographic energy E = (N1 × N2) / d: population 1 multiplied by population 2, divided by the distance) |
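In symbols, with N1 and N2 the two population counts and d the distance between them, the paraphrased quantities are F = N1·N2/d² and E = N1·N2/d. A toy computation follows; the populations and distance are made-up illustrative figures, not Stewart's data:

```python
def demographic_force(n1: float, n2: float, d: float) -> float:
    """Stewart's demographic force: F = (N1 * N2) / d**2."""
    return n1 * n2 / d ** 2

def demographic_energy(n1: float, n2: float, d: float) -> float:
    """Stewart's demographic energy: E = (N1 * N2) / d."""
    return n1 * n2 / d

# Two hypothetical cities of 1.0 and 2.5 million people, 100 km apart.
f = demographic_force(1.0e6, 2.5e6, 100.0)   # 2.5e8
e = demographic_energy(1.0e6, 2.5e6, 100.0)  # 2.5e10
```

As with physical gravitation, energy is force times distance, so E = F·d for the same pair of populations.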
https://en.wikipedia.org/wiki/Asulam | Asulam is a herbicide invented by May & Baker Ltd (internally called M&B9057) that is used in horticulture and agriculture to kill bracken and docks. It is also used as an antiviral agent. It is currently marketed by United Phosphorus Ltd (UPL) as "Asulox", which contains 400 g/L of asulam sodium salt.
Asulam was declared not approved by Commission Implementing Regulation (EU) No 1045/2011 of 19 October 2011 concerning the non-approval of the active substance asulam. Concerns included a lack of evidence concerning the fate of the toxic metabolite sulfanilamide and other metabolites; the poorly characterised nature of the impurities potentially present in the technical-grade product; and toxicity to birds. This decision was made in accordance with Regulation (EC) No 1107/2009 of the European Parliament and of the Council concerning the placing of plant protection products on the market, and amending Commission Decision 2008/934/EC (http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2011:275:0023:0024:EN:PDF). |
https://en.wikipedia.org/wiki/Fragmented%20object | In computing, fragmented objects are truly distributed objects. The concept is a design principle extending the traditional notion of stub-based distribution.
In contrast to distributed objects, they are physically distributed and encapsulate the distribution in the object itself. Parts of the object - named fragments - may exist on different nodes and provide the object's interface. Each client accessing a fragmented object by its unique object identity presumes a local fragment. Fragmented objects may act like an RPC-based infrastructure or a (caching) smart proxy as well. Therefore, clients cannot distinguish between access to a local object, a local stub, or a local fragment. Full transparency is gained by the following characteristics of fragmented objects.
Arbitrary internal communication
Arbitrary protocols may be chosen for the internal communication between the fragments. For instance, this makes it possible to hide real-time protocols (e.g., RTP for media streaming) behind a standard CORBA interface.
Arbitrary internal structure
The internal structure of a fragmented object is arranged by the object developer/deployer. It may be client–server, hierarchical, peer-to-peer and others. Thus, a downward compatibility to stub based distribution is ensured.
Arbitrary internal configuration
As both the distribution of state and of functionality are hidden behind the object interface, their respective distribution over the fragments is also arbitrary. In addition, an application using a fragmented object can tolerate a change in distribution, which is achieved by exchanging the fragment at one or multiple hosts. This procedure can be triggered either by a user who changes object properties or by the fragmented object itself (that is, the collectivity of its fragments), e.g., when some fragment is considered to have failed. Of course, an exchange request may trigger one or more other internal changes. The object developer can migrate the state and the functionality ove |
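The transparency property described above — a client cannot tell whether it holds a plain local object, a stub, or a fragment that forwards elsewhere — can be sketched with a toy interface in Python. All class names here are invented for illustration and stand in for real middleware machinery:

```python
from abc import ABC, abstractmethod

class Counter(ABC):
    """The object's public interface, identical for every fragment."""
    @abstractmethod
    def increment(self) -> int: ...

class LocalFragment(Counter):
    """A fragment that holds the object's state on this node."""
    def __init__(self) -> None:
        self.value = 0

    def increment(self) -> int:
        self.value += 1
        return self.value

class ForwardingFragment(Counter):
    """A fragment that forwards calls to a peer fragment over some internal
    protocol (a direct method call here stands in for the network)."""
    def __init__(self, peer: Counter) -> None:
        self.peer = peer

    def increment(self) -> int:
        return self.peer.increment()

# Client code is written once, against the interface only: it cannot
# distinguish a local fragment from a forwarding one.
def client(c: Counter) -> int:
    return c.increment()
```

Exchanging a fragment at a host then amounts to swapping which `Counter` implementation the client holds, without touching client code.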
https://en.wikipedia.org/wiki/Composite%20epoxy%20material | Composite epoxy materials (CEM) are a group of composite materials typically made from woven glass fabric surfaces and non-woven glass core combined with epoxy synthetic resin. They are typically used in printed circuit boards.
There are different types of CEMs:
CEM-1 is a low-cost, flame-retardant, cellulose-paper-based laminate with only one layer of woven glass fabric.
CEM-2 has a cellulose-paper core and a woven glass fabric surface.
CEM-3 is very similar to the most commonly used PCB material, FR-4. Its color is white, and it is flame-retardant.
CEM-4 is quite similar to CEM-3 but is not flame-retardant.
CEM-5 (also called CRM-5) has a polyester woven glass core.
See also |
https://en.wikipedia.org/wiki/Twisted%3A%20The%20Distorted%20Mathematics%20of%20Greenhouse%20Denial | Twisted: The Distorted Mathematics of Greenhouse Denial is a 2007 book by Ian G. Enting, who is the Professorial Research Fellow in the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems (MASCOS) based at the University of Melbourne. The book analyses the arguments of climate change deniers and the use and presentation of statistics. Enting contends there are contradictions in their various arguments. The author also presents calculations of the actual emission levels that would be required to stabilise CO2 concentrations. This is an update of calculations that he contributed to the pre-Kyoto IPCC report on Radiative Forcing of Climate.
See also
Climate change
Greenhouse effect
Radiative forcing |
https://en.wikipedia.org/wiki/Jer%C3%B4nimo%20de%20Sousa%20Monteiro | Jerônimo de Sousa Monteiro (June 4, 1870 – October 23, 1933) was a Brazilian politician. He was a representative (in the local Espírito Santo Chamber of Representatives and in the Brazilian Federal Chamber of Representatives), a senator, and the 13th president (governor) of the state of Espírito Santo, a position he held from May 23, 1908, to May 23, 1912.
Monteiro, who was born in Cachoeiro do Itapemirim, ES, is traditionally considered the author of the flag of the state of Espírito Santo, which is formed of three horizontal bands—a cyan band on the top, a white band in the middle and a pink band on the bottom—and has the motto "Trabalha e Confia" ("Work and trust") written on the white band. "Work and trust" is a shortened version of a quote from the Spanish Jesuit father José de Anchieta: "Work as if everything depended on you, and trust as if everything depended on God".
Jerônimo Monteiro helped improve the urbanization of the capital city, Vitória, providing it with better potable-water, sewage and electric-power services; he also renovated the port of Vitória and the public hospital of the Holy House of Mercy. He also tried to establish an industrial zone in the south of the state, around Cachoeiro do Itapemirim, but the industrialists preferred to settle in Vitória instead.
However, Jerônimo Monteiro was the leader of a strong political group in southern Espírito Santo, composed basically of landowners and opposed to the socially more heterogeneous political group from the central region of the state. Toward the end of his brother Bernardino de Sousa Monteiro's term, Jerônimo Monteiro tried to prevent Nestor Gomez, legally elected governor of Espírito Santo in 1924, from being officially inaugurated, though he did not succeed. |
https://en.wikipedia.org/wiki/History%20of%20IBM%20mainframe%20operating%20systems | The history of IBM mainframe operating systems is significant within the history of mainframe operating systems, because of IBM's long-standing position as the world's largest hardware supplier of mainframe computers. IBM mainframes run operating systems supplied by IBM and by third parties.
The operating systems on early IBM mainframes have seldom been very innovative, except for TSS/360 and the virtual machine systems beginning with CP-67. But the company's well-known reputation for preferring proven technology has generally given potential users the confidence to adopt new IBM systems fairly quickly. IBM's current mainframe operating systems, z/OS, z/VM, z/VSE, and z/TPF, are backward compatible successors to those introduced in the 1960s.
Before System/360
IBM was slow to introduce operating systems. General Motors produced General Motors OS in 1955 and GM-NAA I/O in 1956 for use on its own IBM computers; and in 1962 Burroughs Corporation released MCP and General Electric introduced GECOS, in both cases for use by their customers.
The first operating systems for IBM computers were written in the mid-1950s by IBM customers whose very expensive machines had sat idle while operators set up jobs manually, and who therefore wanted a mechanism for maintaining a queue of jobs.
These operating systems run only on a few processor models and are suitable only for scientific and engineering calculations. Other IBM computers or other applications function without operating systems. But one of IBM's smaller computers, the IBM 650, introduced a feature which later became part of OS/360, where if processing is interrupted by a "random processing error" (hardware glitch), the machine automatically resumes from the last checkpoint instead of requiring the operators to restart the job manually from the beginning.
From General Motors GM-NAA I/O to IBSYS
General Motors Research division produced GM-NAA I/O for its IBM 701 in 1956 (from a prototype, GM Operating System, dev |
https://en.wikipedia.org/wiki/Eutely | Eutelic organisms have a fixed number of somatic cells when they reach maturity, the exact number being relatively constant for any one species. This phenomenon is also referred to as cell constancy. Development proceeds by cell division until maturity; further growth occurs via cell enlargement only. This growth is known as auxetic growth. It is shown by members of phylum Aschelminthes. In some cases, individual organs show eutelic properties while the organism itself does not.
Background
In 1909, Eric Martini coined the term eutely to describe the idea of cell constancy and to introduce a term that literature sources could use to identify organisms with a fixed number and arrangement of cells and tissues. Since the introduction of eutely in the early 1900s, textbooks and theories of cytology and ontogeny have not used the term consistently. Advancements in the field of eutely have been made by morphologists.
Studying of eutelic organisms has proved challenging, as most eutelic organisms are microscopic. Additionally, there is potential for mistakes in cell counting (often completed via an automated cell counter) and observation when larger organisms have numerous cells. In organisms of small size, errors in the examination and explanation of units may entirely negate reconstructions and deductions. Therefore, investigation of most eutelic organisms is done with intense scrutiny and review.
There are two distinct classes of organisms which display eutely:
Eutelic organisms whose somatic cells show a fixed, or complete pattern of cell and tissue number and arrangement
Eutelic organisms whose somatic cells show a limited, or incomplete pattern of cell and tissue number and arrangement
Examples
Eutely has been confirmed to certain degrees in various forms of diversity and sections of the tree of life. Examples include rotifers, many species of nematodes (including ascaris and the organism Caenorhabditis elegans whose male individuals have 1,033 cell |
https://en.wikipedia.org/wiki/Valentin%20Vornicu | Valentin Vornicu is a mathematician, professional midstakes poker player, and former software engineer, with 12 World Series of Poker circuit rings. Valentin is from Romania and now resides in San Diego, California. Vornicu is the founder of MathLinks, an educational resource company. Before founding MathLinks, he worked as a full-stack engineer for Art of Problem Solving.
Mathematics and education
Valentin Vornicu was a part of the Romanian team for the International Mathematical Olympiad (IMO) in 2001 and 2002. In 2002 he earned a bronze medal at the IMO. He graduated from the University of Bucharest in 2006 and earned his master's degree in algebra and number theory at the same university in 2008. In 2007, Vornicu discovered a generalized form of Schur's inequality, usually cited on online forums as the "Vornicu-Schur inequality", which he published in a problem-solving book titled Olimpiada de Matematica.
MathLinks.ro and Art of Problem Solving
In 2002, Vornicu founded an educational resource company known as MathLinks.ro. In 2004, he merged the company with Art of Problem Solving, a company in which he was previously the webmaster. In 2010, he left the company, but is still involved by teaching for the online school.
International Mathematical Olympiad
Vornicu was a two-time IMO participant, having won an Honorable Mention in 2001 and a Bronze Medal in 2002, but his involvement with the IMO did not stop there. He was Observer A with the Romanian delegation at the IMO 2003 in Japan, Observer B at the IMO 2004 in Greece, a Coordinator at the IMO 2005 in Mexico as well as the IMO 2006 in Slovenia, Observer B at the IMO 2007 in Vietnam, and again a Coordinator for the IMO 2008 in Spain. He co-wrote one of the problems used in the IMO 2004 test. Currently, he tutors rising students for mathematical olympiads part-time.
MathLinks Summer Program
Vornicu founded the MathLinks Summer Program in 2011. This is a newly created three-week residential summer math progra |
https://en.wikipedia.org/wiki/Uniform%20number%20%28Major%20League%20Baseball%29 | In baseball, the uniform number is a number worn on the uniform of each player and coach. Numbers are used for the purpose of easily identifying each person on the field as no two people from the same team can wear the same number. Although designed for identification purposes only, numbers have become the source of superstition, emotional attachment, and honor (in the form of a number retirement). In Major League Baseball, player and manager numbers are always located on the back of the jersey. A smaller number is often found on the front of the jersey, while umpires wear their numbers on the uniform shirt sleeve.
According to common tradition, single-digit numbers are worn by position players but rarely by pitchers, and numbers 60 and higher are rarely worn at all. Higher numbers are worn during spring training by players whose eventual place on the team is uncertain; they are also sometimes worn during the regular season by players recently called up from the minor leagues. However, such players usually change to a more traditional number once it becomes clear that they will stay with the team. These traditions are not enforced by any rule, and exceptions are common. Examples of star players wearing numbers higher than 60 include Carlton Fisk (72), Kenley Jansen (74), and Aaron Judge (99). In 2018, Blake Snell became the first pitcher wearing a single-digit number (4) to appear in the All-Star Game and the first to win the Cy Young Award.
History
The idea of assigning numbers to players was first proposed as a means of allowing spectators to more easily identify each player on the field. The practice of numbering competitors in other sports was already decades old when, in 1894, an unnamed individual suggested to James Hart, president of the Chicago Colts, that his players should wear uniform numbers. On December 29, 1894, the St. Louis Post-Dispatch wrote of the National League club: "Every one who has attended a ball game knows how puzzled one occasionally |
https://en.wikipedia.org/wiki/Pangamic%20acid | Pangamic acid, also called pangamate, is the name given to a chemical compound discovered by Ernst T. Krebs Sr. His son, Ernst T. Krebs Jr., promoted it as a medicinal compound for use in treatment of a wide range of diseases. They also termed this chemical "Vitamin B15", though it is not a true vitamin, has no nutritional value, has no known use in the treatment of any disease and has been called a "quack remedy". Although a number of compounds labelled "pangamic acid" have been studied or sold (including the 1951 d-gluconodimethylamino acetic acid), no chemical compound, including those claimed by the Krebses to be pangamic acid, has been scientifically verified to have the characteristics that defined the original description of the compound.
The Krebses derived the term "pangamic" to describe this compound which they asserted to be ubiquitous and highly concentrated in seeds (pan meaning "universal" and gamic meaning "seed").
Chemistry
Pangamic acid is the name given to the chemical compound with the empirical formula C10H19O8N and a molecular weight of 281 which appeared to be an ester derived from d-gluconic acid and dimethylglycine. In 1943, the Krebses applied for a patent for a process for extracting this chemical compound which they reported had been previously isolated from apricot seeds, and received the patent in 1949 (US2464240). A 1951 paper by the Krebses (PMID 14840945) reported the first isolation of this compound using this patented process, but did not include enough information to confirm that this compound was actually isolated. In 1955, the Krebses received a patent for another synthesizing process for "N-substituted glycine esters of gluconic acid" (US2710876), but the patent contained no supporting data to confirm the process was able to synthesize compounds described by the patent, including pangamic acid.
Subsequent attempts at synthesizing this ester by other researchers found Krebs' purported methods of producing pangamic acid w |
https://en.wikipedia.org/wiki/Ballistospore | A ballistospore or ballistoconidium is a spore that is discharged into the air from a living organism, usually a species of fungus. With fungi, most types of basidiospores formed on basidia are discharged into the air from the tips of sterigmata. At least 30 thousand species of mushrooms, basidiomycete yeasts, and other fungal groups may discharge ballistospores, sometimes at initial accelerations exceeding 10 thousand times g. |
https://en.wikipedia.org/wiki/Delaunay%20tessellation%20field%20estimator | The Delaunay tessellation field estimator (DTFE), (or Delone tessellation field estimator (DTFE)) is a mathematical tool for reconstructing a volume-covering and continuous density or intensity field from a discrete point set. The DTFE has various astrophysical applications, such as the analysis of numerical simulations of cosmic structure formation, the mapping of the large-scale structure of the universe and improving computer simulation programs of cosmic structure formation. It has been developed by Willem Schaap and Rien van de Weijgaert. The main advantage of the DTFE is that it automatically adapts to (strong) variations in density and geometry. It is therefore very well suited for studies of the large scale galaxy distribution.
Method
The DTFE consists of three main steps:
Step 1
The starting point is a given discrete point distribution. In the upper left-hand frame of the figure, a point distribution is plotted in which an object is located at the center of the frame whose density diminishes radially outwards. In the first step of the DTFE, the Delaunay tessellation of the point distribution is constructed. This is a volume-covering division of space into triangles (tetrahedra in three dimensions), whose vertices are formed by the point distribution (see figure, upper right-hand frame). The Delaunay tessellation is defined such that inside the interior of the circumcircle of each Delaunay triangle no other points from the defining point distribution are present.
Step 2
The Delaunay tessellation forms the heart of the DTFE. In the figure it is clearly visible that the tessellation automatically adapts to both the local density and geometry of the point distribution: where the density is high, the triangles are small, and vice versa. The size of the triangles is therefore a measure of the local density of the point distribution. This property of the Delaunay tessellation is exploited in step 2 of the DTFE, in which the local density is estimated at |
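The point-wise density estimate of step 2 can be prototyped in a few lines with SciPy's Delaunay routines: each point's density is taken inversely proportional to the total volume of the simplices touching it. This is a sketch on synthetic 2-D data, not the authors' code:

```python
from math import factorial

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
# Synthetic 2-D sample: a dense central clump on a sparse uniform background.
pts = np.vstack([rng.normal(0.0, 0.3, (400, 2)),
                 rng.uniform(-2.0, 2.0, (100, 2))])

tri = Delaunay(pts)                     # step 1: the Delaunay tessellation
D = pts.shape[1]

# Volume (area in 2-D) of every simplex: |det(edge matrix)| / D!
verts = pts[tri.simplices]              # shape (n_simplices, D+1, D)
edges = verts[:, 1:, :] - verts[:, :1, :]
vol = np.abs(np.linalg.det(edges)) / factorial(D)

# Step 2: for each point, accumulate the volumes of its adjacent simplices;
# the DTFE estimate is (D + 1) divided by that surrounding volume.
vol_around = np.zeros(len(pts))
np.add.at(vol_around, tri.simplices, vol[:, None])
density = (D + 1) / vol_around          # high where the triangles are small
```

Because small triangles appear exactly where points crowd together, the estimate adapts automatically to local density, which is the property the text highlights.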
https://en.wikipedia.org/wiki/Sugar%20cube | Sugar cubes are white sugar granules pressed into small cubes, usually used to sweeten drinks. There are two main ways of using sugar cubes: dissolving the cube directly in the drink, or placing the cube in the mouth while drinking.
Size and packaging
The typical cube corresponds to a weight of approximately 3–5 grams. However, cube sizes and shapes vary greatly; for example, pieces shaped like playing-card suits are produced under the name "bridge cube sugar".
The typical retail packaging weight is 0.5 kilogram (about 1 pound) or 1 kilogram (about 2 pounds).
In 1923 German wholesaler Karl Hellmann started packaging pairs of cubes in individual wrappings with advertisements or collectible pictures on the sleeves. Originally very popular in cafés, they were quickly replaced at the beginning of the 21st century by granulated sugar in packets and sticks.
Manufacturing
When making the cubes, the granulated sugar is slightly (2–3%) moistened, placed into a mold and heated so that the moisture can escape. The firmness, density, and speed of dissolution of the cube are controlled via the crystal size of the granulated sugar, amount of water/steam added, molding pressure, and speed of drying. The dissolution speed is important, as the consumers that place the sugar into their mouths prefer denser, slower-dissolving sugar.
The input material usually requires a wide distribution of sizes (from 500 microns and up) for the cube stability.
The cubes are made on the highly automated lines capable of processing up to 50 tons of sugar per day. Typically, one of the three common processes is used to produce the more popular soft cubes:
Vibro process of Swedish Sugar Corporation (from the late 1950s) utilizes vibration to fill the molds and to get the formed cubes out. A radiant-heat oven is used for drying;
Chambon process was invented in France in 1949 and uses a rotating molding unit and a vertical dryer;
Elba process |
https://en.wikipedia.org/wiki/Whippletree%20%28mechanism%29 | A whippletree, or whiffletree, is a mechanism to distribute force evenly through linkages. It is also referred to as an equalizer, leader bar, or double tree. It consists of a bar pivoted at or near the centre, with force applied from one direction to the pivot and from the other direction to the tips. Several whippletrees may be used in series to distribute the force further, such as to simulate pressure over an area when applying test loading to airplane wings. Whippletrees may be used either in compression or tension. They were also used for subtraction and addition calculations in mechanical computers. Tension whippletrees are used in artfully hung mobiles, such as those by artist Alexander Calder.
Draught whippletrees
Whippletrees are used in tension to distribute forces from a point load to the traces of draught animals (the traces are the chains or straps on each side of the harness, on which the animal pulls). For these, the whippletree consists of a loose horizontal bar between the draught animal and its load. The centre of the bar is connected to the load, and the traces attach to its ends. Whippletrees are used especially when pulling a dragged load such as a plough, harrow, log or canal boat or for pulling a vehicle (by the leaders in a team with more than one row of animals).
A swingletree, or singletree, is a special kind of whippletree used for a horse-drawn vehicle. The term swingletree is sometimes used for draught whippletrees.
A whippletree balances the pull from each side of the animal, preventing the load from tugging alternately on each side. It also keeps a point load from pulling the traces in onto the sides of the animal.
If several animals are used abreast, further whippletrees may be used behind the first. Thus, with two animals, each has its own whippletree, and a further one balances the loads from their two whippletrees—an arrangement sometimes known as a double-tree, or for the leaders in a larger team, leader-bars. With three or |
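The balancing described above is a plain moment (lever) equation: with trace forces F1 and F2 at the two tips of a bar of length L, equilibrium requires F1·a = F2·(L − a), so the pivot divides the bar inversely to the forces. A sketch with made-up figures:

```python
def pivot_position(f1: float, f2: float, length: float) -> float:
    """Distance of the pivot from the F1 end of a whippletree such that the
    moments about the pivot balance: f1 * a == f2 * (length - a)."""
    return length * f2 / (f1 + f2)

# Equal pull from both sides: the pivot sits at the centre of the bar.
centre = pivot_position(1.0, 1.0, 1.2)   # 0.6 m from either end

# A double-tree between a two-animal side and a single leader: the pivot
# sits a third of the way along, giving the single animal twice the leverage.
offset = pivot_position(2.0, 1.0, 0.9)   # ~0.3 m from the two-animal end
```

The same equation explains the unequal "double-tree" arrangements used for teams of three: the pivot offset compensates for the different pulls so each animal carries its intended share.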
https://en.wikipedia.org/wiki/Cowdry%20bodies | Cowdry bodies are eosinophilic or basophilic nuclear inclusions composed of nucleic acid and protein seen in cells infected with Herpes simplex virus, Varicella-zoster virus, and Cytomegalovirus. They are named after Edmund Cowdry.
There are two types of intranuclear Cowdry bodies:
Type A (as seen in herpes simplex, VZV and measles)
Type B (as seen in infection with poliovirus, CMV and adenovirus), though this type may be antiquated and perhaps illusory.
Light microscopy is used for detection of Cowdry bodies. |
https://en.wikipedia.org/wiki/Svelte | Svelte is a free and open-source front-end component framework and language created by Rich Harris and maintained by the Svelte core team members. Svelte is not a monolithic JavaScript library imported by applications: instead, Svelte compiles HTML templates to specialized code that manipulates the DOM directly, which may reduce the size of transferred files and give better client performance. Application code is also processed by the compiler, inserting calls to automatically recompute data and re-render UI elements when the data they depend on is modified. This also avoids the overhead associated with runtime intermediate representations, such as virtual DOM, unlike traditional frameworks (such as React and Vue) which carry out the bulk of their work at runtime, i.e. in the browser.
The compiler itself is written in TypeScript. Its source code is licensed under the MIT License and hosted on GitHub. Among comparable frontend libraries, Svelte has one of the smallest bundle footprints, at merely 2 KB.
History
The predecessor of Svelte is Ractive.js, which Rich Harris had developed earlier.
Version 1 of Svelte was written in JavaScript and was released on 29 November 2016. It was basically Ractive with a compiler. The name Svelte was chosen by Rich Harris and his coworkers at The Guardian.
Version 2 of Svelte was released on 19 April 2018. It set out to correct what the maintainers viewed as mistakes in the earlier version such as replacing double curly braces with single curly braces.
Version 3 of Svelte is written in TypeScript and was released on 21 April 2019. It rethought reactivity by using the compiler to instrument assignments behind the scenes.
The SvelteKit web framework was announced in October 2020 and entered beta in March 2021.
Version 4 of Svelte was released on 22 June 2023. It is a maintenance release, smaller and faster than version 3.
Key early contributors became involved with Conduitry joining with the release of Svelte 1, Tan Li Hau joining in |
https://en.wikipedia.org/wiki/Disease%20vector | In epidemiology, a disease vector is any living agent that carries and transmits an infectious pathogen to another living organism; agents regarded as vectors are organisms, such as parasites or microbes. The first major discovery of a disease vector came from Ronald Ross in 1897, who discovered the malaria pathogen when he dissected a mosquito.
Arthropods
Arthropods form a major group of pathogen vectors, with mosquitoes, flies, sand flies, lice, fleas, ticks, and mites transmitting a huge number of pathogens. Many such vectors are haematophagous, feeding on blood at some or all stages of their lives. When the insects feed on blood, the pathogen enters the blood stream of the host. This can happen in different ways.
The Anopheles mosquito, a vector for malaria, filariasis, and various arthropod-borne-viruses (arboviruses), inserts its delicate mouthpart under the skin and feeds on its host's blood. The parasites the mosquito carries are usually located in its salivary glands (used by mosquitoes to anaesthetise the host). Therefore, the parasites are transmitted directly into the host's blood stream. Pool feeders such as the sand fly and black fly, vectors for pathogens causing leishmaniasis and onchocerciasis respectively, will chew a well in the host's skin, forming a small pool of blood from which they feed. Leishmania parasites then infect the host through the saliva of the sand fly. Onchocerca force their own way out of the insect's head into the pool of blood.
Triatomine bugs are responsible for the transmission of a trypanosome, Trypanosoma cruzi, which causes Chagas disease. The Triatomine bugs defecate during feeding and the excrement contains the parasites, which are accidentally smeared into the open wound by the host responding to pain and irritation from the bite.
There are several species of Thrips that act as vectors for over 20 viruses, especially Tospoviruses, and cause all sorts of plant diseases.
Plants and fungi
Some plants and fungi act |
https://en.wikipedia.org/wiki/Virtopsy | Virtopsy is a virtual alternative to a traditional autopsy, conducted with scanning and imaging technology. The name is a portmanteau of "virtual" and "autopsy" and is a trademark registered to Richard Dirnhofer (de), the former head of the Institute of Forensic Medicine of the University of Bern, Switzerland.
The term “virtual” in this context is apparently meant in both the modern and original senses. Virtual's Latin root word “virtus” (virtue) implies the qualities of capability, efficiency, effectiveness and objectivity. However, some proponents propose replacing traditional autopsy with this approach. "Virtual" also has the sense of "digital", or refers to virtual reality, respectively.
According to Dirnhofer's claim, virtopsy fully satisfies the requirement that medical forensic findings provide “a complete and true picture of the object examined”. Furthermore, virtopsy also achieves the objective “that the pathologist’s report should ‘photograph’ with words so that the reader is able to follow his thoughts visually”.
Concept
Forensic pathology is a field in which physicians are mainly concerned with examining victims of possible, suspected or obvious violence who ultimately die. Clinical forensic medicine essentially does the same but with living victims; traffic medicine and age determination are applications that are not, strictly speaking, restricted to clinical forensic medicine, in that general practitioners, pediatricians, and other specialists also provide services for such requests.
As examinations typically are performed under the legal and task restraints of investigative authorities such as courts, prosecutors, district attorneys or police, there are constraints as to cost, time, objectivity and task specification depending on local law.
The most relevant step is adequately documenting findings. Virtopsy employs imaging methods that are also used in clinical medicine such as computed tomography (CT), magnetic resonance ima |
https://en.wikipedia.org/wiki/Biology%20of%20the%20Cell | Biology of the Cell is a peer-reviewed scientific journal in the field of cell biology, cell physiology, and molecular biology of animal and plant cells, microorganisms and protists. Topics covered include development, neurobiology, and immunology, as well as theoretical or biophysical modelling.
The journal is currently published monthly by Wiley-Blackwell on behalf of the Société Française des Microscopies and the Société de Biologie Cellulaire de France.
History
The journal first appeared in 1962 and was originally titled Journal de Microscopie (1962–1974). In 1975 the journal was retitled Journal de Microscopie et de Biologie Cellulaire (; 1975–1976). It was later retitled Biologie Cellulaire (; 1977–1980), becoming Biology of the Cell in 1981.
Articles were originally published in either English or French, with summaries in both languages.
Modern journal
Content from 1988 is available online in PDF format, with papers from 2005 also being available in HTML, and from 2006 in an enhanced full-text format.
The journal's 2014 impact factor was 3.506. Biology of the Cell is indexed by BIOBASE, BIOSIS, CAB International, Cambridge Scientific Abstracts, Chemical Abstracts Service, Current Contents/Life Sciences, EMBASE/Excerpta Medica, MEDLINE/Index Medicus, and ProQuest Information and Learning.
Articles are primarily research papers and reviews. Themed series on specific topics are scheduled periodically; past series include: Stem Cells (2005), RNA localization (2005), Aquaporins (2005), Synapses (2007), Cell Cycle and Cancer (2008), Microtubules, RNA regulation (2008), Microbiology and Cell Biology (2010), Cilia (2011), Endoplasmic Reticulum (2012), Epigenetics (2012), Post-Translational Modification and Virus Intracellular Trafficking (2012), Optogenetics (2014), Microvesicles and Exosomes (2015), Systems Cell Biology (2015), Translating Canceromics into function (2015).
The editor-in-chief of this journal is René-Marc Mège, a team leader at the Institut Jacques Monod. He was preceded |
https://en.wikipedia.org/wiki/Urdu%20localization%20of%20open-source%20software | Open-source software Urdu localization was initiated by the Center for Research in Urdu Language Processing (CRULP) at the National University of Computer and Emerging Sciences, through its PAN Localization Project, funded by IDRC in Canada.
The localization of the following open source software is in progress:
SeaMonkey – an Internet suite
OpenOffice.org – an office suite
Psi – a chat client
NVu – a web development tool
Drupal – a content management system
SeaMonkey Urdu localization
SeaMonkey is an open-source, multi-platform, complete Internet suite including a browser, an email client, an IRC chat client and a simple HTML editor. It is available in a number of languages, and the SeaMonkey Urdu localization is in progress at CRULP. The localization of the SeaMonkey browser, email client and HTML editor is complete and is available in the form of an Urdu language pack. Translation of SeaMonkey help files is in progress.
External links
Center for Research in Urdu Language Processing
PAN Localization Project
IDRC, Canada
SeaMonkey Localized Language Packs
Free software projects
Urdu-language computing
Internationalization and localization |
https://en.wikipedia.org/wiki/Plasma-immersion%20ion%20implantation | Plasma-immersion ion implantation (PIII) or pulsed-plasma doping (pulsed PIII) is a surface modification technique in which accelerated ions are extracted from a plasma by applying a high-voltage pulsed DC or pure DC power supply and directed into a suitable substrate or electrode with a semiconductor wafer placed over it, so as to implant it with suitable dopants. The electrode is a cathode for an electropositive plasma and an anode for an electronegative plasma. Plasma can be generated in a suitably designed vacuum chamber with the help of various plasma sources, such as an electron cyclotron resonance plasma source (which yields plasma with the highest ion density and lowest contamination level), a helicon plasma source, a capacitively coupled plasma source, an inductively coupled plasma source, a DC glow discharge, or a metal vapor arc (for metallic species). The vacuum chamber can be of two types, diode and triode, depending on whether the power supply is applied to the substrate (the former) or to a perforated grid (the latter).
Working
In a conventional immersion type of PIII system, also called the diode-type configuration, the wafer is kept at a negative potential, since it is the positively charged ions of the electropositive plasma that are extracted and implanted. The wafer sample to be treated is placed on a sample holder in a vacuum chamber. The sample holder is connected to a high-voltage power supply and is electrically insulated from the chamber wall. By means of pumping and gas feed systems, an atmosphere of a working gas at a suitable pressure is created.
When the substrate is biased to a negative voltage (a few kV), the resultant electric field drives electrons away from the substrate on the time scale of the inverse electron plasma frequency ωₑ⁻¹ (~10⁻⁹ s). Thus an ion matrix Debye sheath, depleted of electrons, forms around it. The negatively biased substrate will accelerate the ions within a time sca
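The electron response time quoted above can be checked from the standard expression for the electron plasma frequency, ωₑ = √(nₑe²/ε₀mₑ). The sketch below is illustrative only; the assumed electron density of 10¹⁰ cm⁻³ (10¹⁶ m⁻³) is a typical order of magnitude for such plasmas, not a value from this article.

```python
import math

def electron_plasma_frequency(n_e):
    """Electron plasma frequency (rad/s) for electron density n_e in m^-3."""
    e = 1.602176634e-19      # elementary charge, C
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    m_e = 9.1093837015e-31   # electron mass, kg
    return math.sqrt(n_e * e**2 / (eps0 * m_e))

# Assumed density of ~1e10 cm^-3 = 1e16 m^-3 (typical order for such plasmas).
omega_e = electron_plasma_frequency(1e16)
timescale = 1.0 / omega_e  # inverse electron plasma frequency, seconds
print(f"omega_e = {omega_e:.2e} rad/s, 1/omega_e = {timescale:.1e} s")
```

For this density the inverse plasma frequency comes out around 10⁻¹⁰ s, consistent with the sub-nanosecond electron response described in the text.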
https://en.wikipedia.org/wiki/Sourcefire | Sourcefire, Inc was a technology company that developed network security hardware and software. The company's Firepower network security appliances were based on Snort, an open-source intrusion detection system (IDS). Sourcefire was acquired by Cisco for $2.7 billion in July 2013.
Background
Sourcefire was founded in 2001 by Martin Roesch, the creator of Snort. The company created a commercial version of the Snort software, the Sourcefire 3D System, which evolved into the company's Firepower line of network security products. The company's headquarters was in Columbia, Maryland in the United States, with offices abroad.
Financial
The company's initial growth was funded through four separate rounds of financing raising a total of $56.5 million from venture investors such as Sierra Ventures, New Enterprise Associates, Sequoia Capital, Core Capital Partners, Inflection Point Ventures, Meritech Capital Partners, and Cross Creek Capital, L.P.
In 2005, Check Point Software attempted to acquire Sourcefire for $225 million, but later withdrew its offer after it became clear US authorities would attempt to block the acquisition. The company completed an initial public offering in March 2007, raising $86.3 million. In August of the same year, Sourcefire acquired Clam AntiVirus. Sourcefire rejected an offer of $187 million in May 2008 from security appliance vendor Barracuda Networks, who had offered to pay US$7.50 per share, amounting to a 13% premium of their then-current stock price. Sourcefire announced its acquisition of the cloud-based antivirus firm Immunet in January 2011.
Revenue for the fourth quarter of 2012 was $67.4 million compared to $53.2 million in the fourth quarter of 2011, an increase of 27%. Revenue for the year ending December 31, 2012 was $223.1 million compared to $165.6 million for 2011, an increase of 35%. International revenues were $74.4 million, up 77% over 2011. As of December 31, 2012, the company's cash, cash equivalents, and investments to |
https://en.wikipedia.org/wiki/Prevertebral%20plexus | A prevertebral plexus is a nerve plexus which branches from a prevertebral ganglion. |
https://en.wikipedia.org/wiki/Linear%20space%20%28geometry%29 | A linear space is a basic structure in incidence geometry. A linear space consists of a set of elements called points, and a set of elements called lines. Each line is a distinct subset of the points. The points in a line are said to be incident with the line. Each two points are in a line, and any two lines may have no more than one point in common. Intuitively, this rule can be visualized as the property that two straight lines never intersect more than once.
Linear spaces can be seen as a generalization of projective and affine planes and, more broadly, of 2-designs, where the requirement that every block contains the same number of points is dropped and the essential structural characteristic is that two points are incident with exactly one line.
The term linear space was coined by Paul Libois in 1964, though many results about linear spaces are much older.
Definition
Let L = (P, G, I) be an incidence structure, for which the elements of P are called points and the elements of G are called lines. L is a linear space if the following three axioms hold:
(L1) two distinct points are incident with exactly one line.
(L2) every line is incident with at least two distinct points.
(L3) L contains at least two distinct lines.
Some authors drop (L3) when defining linear spaces. In such a situation, the linear spaces complying with (L3) are considered nontrivial, and those that do not are trivial.
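The three axioms can be checked mechanically for a small incidence structure. The sketch below is illustrative; the point and line sets (a "near-pencil" on five points) are invented for the example.

```python
from itertools import combinations

def is_linear_space(points, lines):
    """Check axioms (L1)-(L3) for an incidence structure given as
    a set of points and a list of lines (each a frozenset of points)."""
    # (L1) two distinct points are incident with exactly one line
    l1 = all(sum(1 for l in lines if p in l and q in l) == 1
             for p, q in combinations(points, 2))
    # (L2) every line is incident with at least two distinct points
    l2 = all(len(l) >= 2 for l in lines)
    # (L3) there are at least two distinct lines
    l3 = len(set(lines)) >= 2
    return l1 and l2 and l3

# Near-pencil on 5 points: one long line plus lines through the 5th point.
points = {1, 2, 3, 4, 5}
lines = [frozenset({1, 2, 3, 4}), frozenset({1, 5}),
         frozenset({2, 5}), frozenset({3, 5}), frozenset({4, 5})]
print(is_linear_space(points, lines))  # True
```

Dropping any line (say {4, 5}) leaves the pair (4, 5) on no line, so (L1) fails and the check returns False.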
Examples
The regular Euclidean plane with its points and lines constitutes a linear space, moreover all affine and projective spaces are linear spaces as well.
The table below shows all possible nontrivial linear spaces of five points. Because any two points are always incident with one line, the lines being incident with only two points are not drawn, by convention. The trivial case is simply a line through five points.
In the first illustration, the ten lines connecting the ten pairs of points are not drawn. In the second illustration, seven lines connecting |
https://en.wikipedia.org/wiki/Panel-reactive%20antibody | A panel-reactive antibody (PRA) is a group of antibodies in a test serum that are reactive against any of several known specific antigens in a panel of test leukocytes or purified HLA antigens from cells. It is an immunologic metric routinely performed by clinical laboratories on the blood of people awaiting organ transplantation.
Traditionally serum is exposed to panel lymphocytes and to an extent other leukocytes in a complement-dependent cytotoxicity test. From the extent and pattern of cytotoxicity, an estimate of what percentage of the possible donor population the patient has antibodies against is calculated. The PRA score is expressed as a percentage representing the proportion of the population to which the person being tested will react via pre-existing antibodies against human cell surface antigens, which include human leukocyte antigens (HLA) and other polymorphic antigen systems. It is a test of the degree of alloimmunity in a graft recipient and thus a test that quantifies the risk of transplant rejection. Each population has a different demographic prevalence of particular antigens, so the PRA test panel constituents differ from country to country.
Since the late 1990s, a purified HLA antigen panel affixed to latex beads coated in fluorochrome, a kind of so-called solid-phase assay, has been used to replace or complement the cell-based assay. This test will miss non-HLA antibodies as well as antibodies directed against HLA not included in the assay, but removes the need for panel cells.
A high PRA value usually means that the individual is primed to react immunologically against a large proportion of the population. Individuals with a high PRA value are often termed "sensitized", which indicates that they have been exposed to "foreign" (or "non-self") proteins in the past and have developed antibodies to them. These antibodies typically develop following previous transplants, blood transfusions and pregnancy. Transplanting organs into recipients with p |
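The percentage described above can be sketched as a simple computation over a donor panel: the fraction of panel members carrying at least one antigen that the recipient has antibodies against. The antibody specificities and five-member panel below are hypothetical; real PRA/cPRA calculation uses much larger panels and population frequency data.

```python
def pra_percent(recipient_antibodies, panel):
    """Percentage of panel members carrying at least one antigen
    against which the recipient has pre-existing antibodies."""
    ab = set(recipient_antibodies)
    reactive = sum(1 for donor_antigens in panel if ab & set(donor_antigens))
    return 100.0 * reactive / len(panel)

# Hypothetical antibody specificities and a toy 5-member donor panel.
antibodies = {"A2", "B7"}
panel = [{"A1", "B8"}, {"A2", "B44"}, {"A3", "B7"},
         {"A2", "B7"}, {"A24", "B35"}]
print(pra_percent(antibodies, panel))  # 60.0
```

Here three of the five hypothetical donors carry A2 or B7, so the recipient would be expected to react against 60% of this panel.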
https://en.wikipedia.org/wiki/Sugarcane | Sugarcane or sugar cane is a species of (often hybrid) tall, perennial grass (in the genus Saccharum, tribe Andropogoneae) that is used for sugar production. The plants are 2–6 m (6–20 ft) tall with stout, jointed, fibrous stalks that are rich in sucrose, which accumulates in the stalk internodes. Sugarcanes belong to the grass family, Poaceae, an economically important flowering plant family that includes maize, wheat, rice, and sorghum, and many forage crops. It is native to the warm temperate and tropical regions of India, Southeast Asia, and New Guinea.
Grown in tropical and subtropical regions, sugarcane is the world's largest crop by production quantity, totaling 1.9 billion tonnes in 2020, with Brazil accounting for 40% of the world total. Sugarcane accounts for 79% of sugar produced globally (most of the rest is made from sugar beets). About 70% of the sugar produced comes from Saccharum officinarum and its hybrids. All sugarcane species can interbreed, and the major commercial cultivars are complex hybrids.
Sucrose (table sugar) is extracted from sugarcane in specialized mill factories. It is consumed directly in confectionery, used to sweeten beverages, as a preservative in jams and conserves, as a decorative finish for cakes and pâtisserie, and as a raw material in the food industry. It can be fermented to produce ethanol, which is used to make alcoholic drinks like falernum, rum, and cachaça, but also to make biofuel. Sugarcane reeds are used to make pens, mats, screens, and thatch. The young, unexpanded flower head of Saccharum edule (duruka) is eaten raw, steamed, or toasted, and prepared in various ways in Southeast Asia, such as certain island communities of Indonesia as well as in Oceanic countries like Fiji.
Sugarcane was an ancient crop of the Austronesian and Papuan people. It was introduced to Polynesia, Island Melanesia, and Madagascar in prehistoric times via Austronesian sailors. It was also introduced to southern China and India by Austron |
https://en.wikipedia.org/wiki/Zeta%20potential%20titration | Zeta potential titration is a titration of heterogeneous systems, for example colloids and emulsions. Solids in such systems have very high surface area. This type of titration is used to study the zeta potential of these surfaces under different conditions. Details of zeta potential definition and measuring techniques can be found in the International Standard.
Iso-electric Point
The iso-electric point is one such property. The iso-electric point is the pH value at which the zeta potential is approximately zero. At a pH near the iso-electric point (± 2 pH units), colloids are usually unstable; the particles tend to coagulate or flocculate. Such titrations use acids or bases as titration reagents. Tables of iso-electric points for different materials are available. The attached figure illustrates the results of such titrations for concentrated dispersions of alumina (4% v/v) and rutile (7% v/v). It is seen that the iso-electric point of alumina is around pH 9.3, whereas for rutile it is around pH 4. Alumina is unstable in the pH range from 7 to 11; rutile is unstable in the pH range from 2 to 6.
Surfactants and Stabilization
Another purpose of this titration is determination of the optimum dose of surfactant for achieving stabilization or flocculation of a heterogeneous system.
Measurement
In a zeta-potential titration, the zeta potential is the indicator. Measurement of the zeta potential can be performed using microelectrophoresis, electrophoretic light scattering, or electroacoustic phenomena. The last method makes it possible to perform titrations in concentrated systems, with no dilution.
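The iso-electric point can be estimated from titration data by locating the zero crossing of the zeta potential. The sketch below assumes simple linear interpolation between consecutive measurements; the data points are invented to loosely resemble the alumina curve described above.

```python
def isoelectric_point(ph_values, zeta_values):
    """Estimate the pH of zero zeta potential by linear interpolation
    at the first sign change between consecutive measurements."""
    for (ph1, z1), (ph2, z2) in zip(zip(ph_values, zeta_values),
                                    zip(ph_values[1:], zeta_values[1:])):
        if z1 == 0:
            return ph1
        if z1 * z2 < 0:  # sign change: zero crossing lies between ph1 and ph2
            return ph1 + (ph2 - ph1) * z1 / (z1 - z2)
    return None  # no crossing found in the measured range

# Illustrative titration data loosely resembling an alumina dispersion.
ph = [6, 7, 8, 9, 10, 11]
zeta_mv = [45, 38, 24, 8, -18, -35]
print(round(isoelectric_point(ph, zeta_mv), 2))  # 9.31
```

The interpolated crossing at about pH 9.3 matches the iso-electric point quoted for alumina.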
https://en.wikipedia.org/wiki/Plumbagin | Plumbagin or 5-hydroxy-2-methyl-1,4-naphthoquinone is an organic compound. It is regarded as a toxin, and it is genotoxic and mutagenic.
Plumbagin is a yellow dye, formally derived from naphthoquinone.
It is named after the plant genus Plumbago, from which it was originally isolated.
It is also commonly found in the carnivorous plant genera Drosera and Nepenthes. It is also a component of the black walnut drupe.
See also
Juglone |
https://en.wikipedia.org/wiki/Immune%20disorder | An immune disorder is a dysfunction of the immune system. These disorders can be characterized in several different ways:
By the component(s) of the immune system affected
By whether the immune system is overactive or underactive
By whether the condition is congenital or acquired
According to the International Union of Immunological Societies, more than 150 primary immunodeficiency diseases (PIDs) have been characterized. However, the number of acquired immunodeficiencies exceeds the number of PIDs.
It has been suggested that most people have at least one primary immunodeficiency. Due to redundancies in the immune system, though, many of these are never detected.
Autoimmune diseases
An autoimmune disease is a condition arising from an abnormal immune response to a normal body part. There are at least 80 types of autoimmune diseases. Nearly any body part can be involved. Common symptoms include low-grade fever and feeling tired. Often symptoms come and go.
List of some autoimmune disorders
Lupus
Scleroderma
Certain types of hemolytic anemia
Vasculitis
Type 1 diabetes
Graves' disease
Rheumatoid arthritis
Multiple sclerosis (although it is thought to be an immune-mediated process)
Goodpasture syndrome
Pernicious anemia
Some types of myopathy
Lyme disease (Late)
Celiac disease
Alopecia Areata
Immunodeficiencies
Primary immune deficiency diseases are those caused by inherited genetic mutations. Secondary or acquired immune deficiencies are caused by something outside the body such as a virus or immune suppressing drugs.
People with primary immune deficiencies are at increased susceptibility to, and often suffer recurrent, ear infections, pneumonia, bronchitis, sinusitis or skin infections. Less frequently, immunodeficient patients may develop abscesses of their internal organs, or autoimmune, rheumatologic and gastrointestinal problems.
Primary immune deficiencies
Severe combined immunodeficiency (SCID)
DiGeorge syndrome
Hyperimmunoglobulin E syndrome ( |
https://en.wikipedia.org/wiki/Inference%20attack | An inference attack is a data mining technique performed by analyzing data in order to illegitimately gain knowledge about a subject or database. A subject's sensitive information can be considered leaked if an adversary can infer its real value with high confidence. This is an example of breached information security. An inference attack occurs when a user is able to infer, from trivial information, more robust information about a database without directly accessing it. The object of inference attacks is to piece together information at one security level to determine a fact that should be protected at a higher security level.
While inference attacks were originally discovered as a threat in statistical databases, today they also pose a major privacy threat in the domain of mobile and IoT sensor data. Data from accelerometers, which can be accessed by third-party apps without user permission in many mobile devices, has been used to infer rich information about users based on the recorded motion patterns (e.g., driving behavior, level of intoxication, age, gender, touchscreen inputs, geographic location).
Highly sensitive inferences can also be derived, for example, from eye tracking data, smart meter data and voice recordings (e.g., smart speaker voice commands). |
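The classic statistical-database case can be sketched with two aggregate queries whose answer sets differ by exactly one individual. The table and names below are invented for illustration; each query looks harmless on its own, but their difference reveals a protected value.

```python
def sum_query(db, predicate):
    """Aggregate query: sum of salaries for rows matching the predicate."""
    return sum(row["salary"] for row in db if predicate(row))

# A toy employee table; individual salaries are meant to stay private.
db = [
    {"name": "Ada",   "dept": "eng", "salary": 90},
    {"name": "Ben",   "dept": "eng", "salary": 70},
    {"name": "Carol", "dept": "ops", "salary": 60},
]

# Two "harmless" aggregates whose answer sets differ by one person:
all_eng = sum_query(db, lambda r: r["dept"] == "eng")
eng_not_ada = sum_query(db, lambda r: r["dept"] == "eng"
                        and r["name"] != "Ada")
print("Inferred salary of Ada:", all_eng - eng_not_ada)  # 90
```

This is why statistical databases enforce query-set-size and overlap restrictions, or add noise, rather than answering arbitrary aggregates exactly.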
https://en.wikipedia.org/wiki/Line%20field | In mathematics, a line field on a manifold is an assignment of a line tangent to the manifold at each point, i.e. a section of the projectivized tangent bundle over the manifold. Line fields are of particular interest in the study of complex dynamical systems, where it is conventional to modify the definition slightly.
Definitions
In general, let M be a manifold. A line field on M is a function μ that assigns to each point p of M a line μ(p) through the origin in the tangent space Tp(M). Equivalently, one may say that μ(p) is an element of the projective tangent space PTp(M), or that μ is a section of the projective tangent bundle PT(M).
In the study of complex dynamical systems, the manifold M is taken to be a Riemann surface. A line field on a subset A of M (where A is required to have positive two-dimensional Lebesgue measure) is a line field on A in the general sense above that is defined almost everywhere in A and is also a measurable function.
Dynamical systems
Fiber bundles |
https://en.wikipedia.org/wiki/ISO/IEC%2014651 | ISO/IEC 14651:2016, Information technology -- International string ordering and comparison -- Method for comparing character strings and description of the common template tailorable ordering, is an ISO/IEC standard specifying an algorithm that can be used when comparing two strings. This comparison can be used when collating a set of strings. The standard also specifies a datafile specifying the comparison order, the Common Tailorable Template, CTT. The comparison order is supposed to be tailored for different languages (hence the CTT is regarded as a template and not a default, though the empty tailoring, not changing any weighting, is appropriate in many cases), since different languages have incompatible ordering requirements. One such tailoring is European ordering rules (EOR), which in turn is supposed to be tailored for different European languages.
The Common Tailorable Template (CTT) data file of this ISO/IEC standard is aligned with the Default Unicode Collation Entity Table (DUCET) datafile of the Unicode collation algorithm (UCA) specified in Unicode Technical Standard #10.
This is the fourth edition of the standard and was published on 2016-02-15, corrected on 2016-05-01 and covers up to and including Unicode 8.0. One additional amendment Amd.1:2017 was published in September 2017 and covers up to and including Unicode 9.0.
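The idea of a tailorable comparison order can be sketched with a toy weight table: a base (template) ordering is overridden by a per-language tailoring. This is only an illustration of the concept; the real CTT/DUCET assigns multi-level weights to collation elements, not the single-level code-point-based weights invented here.

```python
def make_sort_key(tailoring=None):
    """Build a collation key function: each character is weighted by its
    code point, with a tailoring dict that overrides selected weights."""
    weights = dict(tailoring or {})
    def key(s):
        return [weights.get(ch, ord(ch) * 100) for ch in s]
    return key

words = ["zebra", "öl", "oasis"]

# Untailored template: 'ö' (U+00F6) sorts after 'z' by raw code point.
print(sorted(words, key=make_sort_key()))      # ['oasis', 'zebra', 'öl']

# A tailoring (illustrative, not the real CTT) placing 'ö' just after 'o':
tailored = make_sort_key({"ö": ord("o") * 100 + 1})
print(sorted(words, key=tailored))             # ['oasis', 'öl', 'zebra']
```

The "empty tailoring" mentioned in the text corresponds to passing no overrides at all, which leaves the template order unchanged.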
See also
Collation
European ordering rules
ISO/IEC JTC 1/SC 2
Unicode
External links and references
ISO site, "ISO/IEC 14651:2016". ISO/IEC 14651:2016 and Amd.1:2017 are freely available from the ISO website
"What are the differences between the UCA and ISO 14651?"
String collation algorithms
14651
Unicode algorithms
Collation |
https://en.wikipedia.org/wiki/SpursEngine | SpursEngine is a microprocessor from Toshiba built as a media-oriented coprocessor, designed for 3D and video processing in consumer electronics such as set-top boxes and computers. The SpursEngine processor is also known as the Quad Core HD processor. It was announced on 20 September 2007.
The SpursEngine is a stream processor powered by four Synergistic Processing Elements (SPEs), as also used in the Cell processor featured in the Sony PlayStation 3. These processing elements are fed by on-chip H.264 and MPEG-2 codecs and controlled by an off-die host CPU, connected via an on-chip PCIe controller (in contrast to the Cell processor, which has an on-chip CPU, the PPE, doing similar work). To enable smoother interaction between the host and the SpursEngine, Toshiba also integrated a simple proprietary 32-bit control core. The SpursEngine employs dedicated XDR DRAM as its working memory.
The SpursEngine is designed to work at much lower frequencies than the Cell and Toshiba has also optimized the circuit layout of the SPEs to reduce the size by 30%. The resulting chip consumes 10-20 W of power.
The SpursEngine is accessible to the developer from a device driver developed for Windows and Linux systems. Software supporting the SpursEngine is limited and is primarily in the realm of video editing and encoding.
Technical specification
The first generation of SpursEngine processors are specified as follows:
Built with a 65 nm bulk CMOS fabrication process with 7 layers of copper interconnect
9.98 mm × 10.31 mm (102.89 mm²) large die
239.1 million transistors (Logic: 134 M, SRAM:104.8 M)
Thermal design power: <20 W
Max frequency: 1.5 GHz
Packaged in a 624 pin FC-BGA (Flip Chip-Ball Grid Array)
48 GFLOPS peak performance (12 GFLOPS per SPU @ 1.5 GHz)
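The 48 GFLOPS figure follows from simple arithmetic, assuming each SPE issues a 4-wide single-precision fused multiply-add (8 floating-point operations) per cycle, as in the Cell's SPEs:

```python
spes = 4
clock_hz = 1.5e9     # max frequency from the specification above
simd_width = 4       # 4-wide single-precision SIMD (assumed, as in Cell SPEs)
flops_per_lane = 2   # a fused multiply-add counts as two flops

per_spe_gflops = simd_width * flops_per_lane * clock_hz / 1e9
total_gflops = spes * per_spe_gflops
print(per_spe_gflops, total_gflops)  # 12.0 48.0
```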
Commercialization
In April 2008 Toshiba shipped samples of the SpursEngine SE1000 device, a PCIe-based reference board.
The accelerator card connects to a ×1 PCI Express bus and has 128 MB of XDR DRAM with 12.8 GB/s bandwidth.
Leadtek is |
https://en.wikipedia.org/wiki/Proper%20zero-signal%20collector%20current | Consider an NPN transistor circuit. During the positive half-cycle of the signal, the base is positive with respect to the emitter and hence the base-emitter junction is forward biased. This causes a base current and much larger collector current to flow. The positive half-cycle of the signal is amplified in the collector. During the negative half-cycle, the base-emitter junction is reverse biased and hence no current flows. No output flows during the negative half-cycle of the signal. Thus the positive-only amplified output is unfaithful.
A sufficient battery source in the base circuit keeps the input circuit forward biased even during the peak of the negative half-cycle. When no signal is applied, a DC current I_C will flow in the collector circuit due to the battery. This is known as the zero-signal collector current.
The value of the zero-signal collector current should be at least equal to the maximum collector current due to the AC signal alone.
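The effect of the zero-signal collector current can be sketched numerically: without bias the negative half-cycle is clipped away, while a bias equal to the signal peak preserves the full waveform. The sample values below are invented for illustration.

```python
def collector_current(signal, i_c_zero, gain=1.0):
    """Total collector current for each sample of an AC input signal;
    the transistor cuts off (current clamps to 0) when the sum goes negative."""
    return [max(0.0, i_c_zero + gain * v) for v in signal]

# One cycle of a +/-1 (arbitrary units) signal: a crude 8-sample sine.
signal = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]

unbiased = collector_current(signal, i_c_zero=0.0)  # negative half-cycle lost
biased = collector_current(signal, i_c_zero=1.0)    # bias equals signal peak

print([round(x, 2) for x in unbiased])  # [0.0, 0.7, 1.0, 0.7, 0.0, 0.0, 0.0, 0.0]
print([round(x, 2) for x in biased])    # [1.0, 1.7, 2.0, 1.7, 1.0, 0.3, 0.0, 0.3]
```

With the bias, the collector current reproduces the whole input waveform shifted upward; without it, the output is the unfaithful positive-only signal described above.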
https://en.wikipedia.org/wiki/Cone%20%28formal%20languages%29 | In formal language theory, a cone is a set of formal languages that has some desirable closure properties enjoyed by some well-known sets of languages, in particular by the families of regular languages, context-free languages and the recursively enumerable languages. The concept of a cone is a more abstract notion that subsumes all of these families. A similar notion is the faithful cone, having somewhat relaxed conditions. For example, the context-sensitive languages do not form a cone, but still have the required properties to form a faithful cone.
The term cone has a French origin; in the American-oriented literature one usually speaks of a full trio. The trio corresponds to the faithful cone.
Definition
A cone is a family S of languages such that S contains at least one non-empty language, and for any language L in S over some alphabet Σ,
if h is a homomorphism from Σ* to some Δ*, the language h(L) is in S;
if h is a homomorphism from some Δ* to Σ*, the language h⁻¹(L) is in S;
if R is any regular language over Σ, then L ∩ R is in S.
The family of all regular languages is contained in any cone.
If one restricts the definition to homomorphisms that do not introduce the empty word then one speaks of a faithful cone; the inverse homomorphisms are not restricted. Within the Chomsky hierarchy, the regular languages, the context-free languages, and the recursively enumerable languages are all cones, whereas the context-sensitive languages and the recursive languages are only faithful cones.
Relation to Transducers
A finite state transducer is a finite state automaton that has both input and output. It defines a transduction , mapping a language over the input alphabet into another language over the output alphabet. Each of the cone operations (homomorphism, inverse homomorphism, intersection with a regular language) can be implemented using a finite state transducer. And, since finite state transducers are closed under composition, every sequence of cone operations can be performed by a finite s |
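Restricted to finite languages, the three cone operations can be sketched directly. The homomorphism h and the languages below are invented for illustration, and the intersection uses a finite set standing in for a regular language.

```python
def homomorphic_image(L, h):
    """h(L): apply a string homomorphism h (a dict mapping each
    input symbol to an output string) to every word of L."""
    return {"".join(h[c] for c in w) for w in L}

def inverse_image(L, h, candidates):
    """h^-1(L), restricted to a finite candidate set of input strings."""
    return {w for w in candidates if "".join(h[c] for c in w) in L}

h = {"a": "0", "b": "01"}   # homomorphism: a -> 0, b -> 01
L = {"ab", "ba"}

print(homomorphic_image(L, h))                       # {'001', '010'}
cands = {"aa", "ab", "ba", "bb"}
print(inverse_image({"001", "010"}, h, cands))       # {'ab', 'ba'}
# Intersection with a "regular" language, here given as a finite set:
print(homomorphic_image(L, h) & {"001", "111"})      # {'001'}
```

Each of these set operations corresponds to one of the three closure conditions in the definition, and each can equally be realized by a finite state transducer.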
https://en.wikipedia.org/wiki/Statistical%20epidemiology | Statistical epidemiology is an emerging branch of the disciplines of epidemiology and biostatistics that aims to:
Bring more statistical rigour to bear in the field of epidemiology
Recognise the importance of applied statistics, especially with respect to the context in which statistical methods are appropriate and inappropriate
Aid and improve our interpretation of observations
Introduction
The science of epidemiology has seen enormous growth, particularly with charity and government funding. Many researchers have been trained to conduct studies, requiring multiple skills ranging from liaising with clinical staff to the statistical analysis of complex data, such as using Bayesian methods. The role of a statistical epidemiologist is to bring the most appropriate methods available to bear on observational studies in medical research, requiring a broad appreciation of the underpinning methods and their contexts of applicability and interpretation.
The earliest mention of this phrase was in an article by EB Wilson, taking a critical look at the way in which statistical methods were developing and being applied in the science of epidemiology.
Academic recognition
There are two Professors of Statistical Epidemiology in the United Kingdom (University of Leeds and Imperial College, London) and a Statistical Epidemiology group (Oxford University).
Related fields
Statistical epidemiology draws upon quantitative methods from fields such as: statistics, operations research, computer science, economics, biology, and mathematics.
See also
Epidemiology
Biostatistics |
https://en.wikipedia.org/wiki/Federal%20Networking%20Council | Informally established in the early 1990s, the Federal Networking Council (FNC) was later chartered by the US National Science and Technology Council's Committee on Computing, Information and Communications (CCIC) to continue to act as a forum for networking collaborations among US federal agencies to meet their research, education, and operational mission goals and to bridge the gap between the advanced networking technologies being developed by research FNC agencies and the ultimate acquisition of mature version of these technologies from the commercial sector. The FNC consisted of a group made up of representatives from the United States Department of Defense (DoD), the National Science Foundation, the Department of Energy, and the National Aeronautics and Space Administration (NASA), among others.
By October 1997, the FNC advisory committee was de-chartered and many of the FNC activities were transferred to the Large Scale Networking group of the Computing, Information, and Communications (CIC) R&D subcommittee of the Networking and Information Technology Research and Development program, or the Applications Council.
On October 24, 1995, the Federal Networking Council passed a resolution defining the term Internet:
Resolution: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that - (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.
Some notable members of the council advisory committee i |
https://en.wikipedia.org/wiki/PTC%20Therapeutics | PTC Therapeutics is a US pharmaceutical company focused on the development of orally administered small molecule drugs and gene therapy which regulate gene expression by targeting post-transcriptional control (PTC) mechanisms in orphan diseases.
In September 2009, PTC entered into an agreement with Roche for the development of orally bioavailable small molecules for central nervous system diseases. PTC acquired the Bio-e platform in 2019.
Products
In 2017, PTC acquired Emflaza (deflazacort) from Marathon Pharmaceuticals. PTC also owns Translarna (ataluren), marketed for nonsense-mutation Duchenne muscular dystrophy. Together, the two products generated revenues of $174 million in 2017 and $260 million in 2018.
PTC has the commercialization rights for WAYLIVRA (volanesorsen) in Latin America.
Pipeline
In 2018, PTC acquired Agilis Biotherapeutics and its gene therapy candidate, GT-AADC, which had shown promising clinical data in treating aromatic L-amino acid decarboxylase (AADC) deficiency. AADC deficiency is a rare CNS disorder arising from reductions in the enzyme AADC caused by mutations in the dopa decarboxylase (DDC) gene.
In 2020, PTC acquired Censa Pharmaceuticals, Inc., a biopharmaceutical company focused on the development of CNSA-001 (sepiapterin), a clinical-stage investigational therapy for orphan metabolic diseases, including phenylketonuria (PKU) and other diseases associated with defects in the tetrahydrobiopterin (BH4) biochemical pathways diagnosed at birth.
In 2020, PTC announced the FDA approval of Evrysdi (risdiplam) for the treatment of spinal muscular atrophy (SMA) in adults and children 2 months and older. |
https://en.wikipedia.org/wiki/Lithium%20burning | Lithium burning is a nucleosynthetic process in which lithium is depleted in a star. Lithium is generally present in brown dwarfs and not in older low-mass stars. Stars, which by definition must achieve the high temperature (2.5 × 106 K) necessary for fusing hydrogen, rapidly deplete their lithium.
7Li
Burning of the most abundant isotope of lithium, lithium-7, occurs by a collision of 7Li and a proton producing beryllium-8, which promptly decays into two helium-4 nuclei. The temperature necessary for this reaction is just below the temperature necessary for hydrogen fusion. Convection in low-mass stars ensures that lithium in the whole volume of the star is depleted. Therefore, the presence of the lithium line in a candidate brown dwarf's spectrum is a strong indicator that it is indeed substellar.
6Li
From a study of lithium abundances in 53 T Tauri stars, it has been found that lithium depletion varies strongly with size, suggesting that lithium burning by the P-P chain, during the last highly convective and unstable stages of the later pre-main-sequence phase of the Hayashi contraction, may be one of the main sources of energy for T Tauri stars. Rapid rotation tends to improve mixing and increase the transport of lithium into deeper layers where it is destroyed. T Tauri stars generally increase their rotation rates as they age, through contraction and spin-up, as they conserve angular momentum. This causes an increased rate of lithium loss with age. Lithium burning will also increase with higher temperatures and mass, and will last for at most a little over 100 million years.
The P-P chain for lithium burning is as follows:
{| border="0"
|- style="height:2em;"
| 1H || + || 6Li || → || 7Be || || || ||
|- style="height:2em;"
| || || 7Be || + || e− || → || 7Li || + || ν
|- style="height:2em;"
| 1H || + || 7Li || → || 8Be || || || ||
|- style="height:2em;"
| || || 8Be || || || → 2× || 4He || + || energy
|}
It will not occur in stars less than |
https://en.wikipedia.org/wiki/Crop%20coefficient | Crop coefficients are properties of plants used in predicting evapotranspiration (ET). The most basic crop coefficient, Kc, is simply the ratio of ET observed for the crop studied over that observed for the well calibrated reference crop under the same conditions.
Potential evapotranspiration (PET) is the evaporation and transpiration that could potentially occur if a field of the crop had an ideal, unlimited water supply. RET is the reference ET, often denoted as ET0.
Even in agricultural crops, where ideal conditions are approximated as much as is practical, plants are not always growing (and therefore transpiring) at their theoretical potential. Plants have growth stages and states of health induced by a variety of environmental conditions.
RET usually represents the PET of the reference crop's most active growth. Kc then becomes a function or series of values specific to the crop of interest through its growing season. These can be quite elaborate in the case of certain maize varieties, but tend to use a trapezoidal or leaf area index (LAI) curve for common crop or vegetation canopies.
Stress coefficients, Ks, account for diminished ET due to specific stress factors. These are often assumed to combine by multiplication.
Water stress is the most ubiquitous stress factor, often denoted as Kw. Stress coefficients tend to be functions ranging between 0 and 1. The simplest are linear, but thresholds are appropriate for some toxicity responses. Crop coefficients can exceed 1 when the crop evapotranspiration exceeds that of RET. |
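The combination described above (a seasonal Kc curve, multiplicative stress coefficients, and the reference ET) can be sketched in code. All stage lengths, coefficient values, and the linear stress threshold below are illustrative FAO-56-style assumptions, not values from this text.

```python
# Hypothetical sketch: crop ET as ETc = Ks * Kc * ET0 (all values illustrative).

def kc_trapezoid(day, kc_ini=0.3, kc_mid=1.15, kc_end=0.4,
                 d_ini=25, d_dev=30, d_mid=40, d_late=30):
    """Trapezoidal crop-coefficient curve over the growing season."""
    if day <= d_ini:
        return kc_ini
    if day <= d_ini + d_dev:                      # linear development ramp
        f = (day - d_ini) / d_dev
        return kc_ini + f * (kc_mid - kc_ini)
    if day <= d_ini + d_dev + d_mid:              # mid-season plateau (Kc > 1 allowed)
        return kc_mid
    f = min(1.0, (day - d_ini - d_dev - d_mid) / d_late)
    return kc_mid + f * (kc_end - kc_mid)         # senescence decline

def ks_linear(soil_water_fraction, threshold=0.5):
    """Linear water-stress coefficient: 1 above the threshold, ramping to 0."""
    if soil_water_fraction >= threshold:
        return 1.0
    return max(0.0, soil_water_fraction / threshold)

def crop_et(day, et0, soil_water_fraction=1.0):
    """Stress coefficients combine with Kc and ET0 by multiplication."""
    return ks_linear(soil_water_fraction) * kc_trapezoid(day) * et0

print(crop_et(60, et0=5.0))                            # → 5.75 (mid-season, unstressed)
print(crop_et(60, et0=5.0, soil_water_fraction=0.25))  # → 2.875 (water-stressed)
```

Note how the mid-season Kc of 1.15 makes ETc exceed ET0, matching the observation that crop coefficients can exceed 1.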
https://en.wikipedia.org/wiki/Computron%20tube | The Computron was an electron tube designed to perform the parallel addition and multiplication of digital numbers. It was conceived by Richard L. Snyder, Jr., Jan A. Rajchman, Paul Rudnick and the digital computer group at the laboratories of the Radio Corporation of America under the direction of Vladimir Zworykin. Development began in 1941 under contract OEM-sr-591 to Division 7 of the National Defense Research Committee of the United States Office of Research and Development.
The numerical function of the Computron was to solve the equation S = A × B + C + D, where A, B, C, and D are 14-bit inputs and S is a 28-bit output. This function was key to the RCA attempt to produce a non-analog, computer-based fire-control system for use in artillery aiming during WWII.
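Assuming the computed function was S = A × B + C + D (an assumption consistent with the stated 14-bit inputs, 28-bit output, and the tube's role in parallel addition and multiplication), a quick check shows that 28 output bits are exactly sufficient:

```python
# Width check under the assumed function S = A*B + C + D with 14-bit operands:
# the maximum value (2^14-1)^2 + 2*(2^14-1) = (2^14-1)(2^14+1) = 2^28 - 1 is
# exactly the largest 28-bit number, so the 28-bit output never overflows.
M = (1 << 14) - 1            # largest 14-bit input value
s_max = M * M + M + M        # A*B + C + D with all inputs at maximum
assert s_max == (1 << 28) - 1
print(s_max.bit_length())    # → 28
```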
A simple way to describe the physically complex Computron is to begin with a cathode ray tube structure in the form of a right-circular cylinder with a central vertical cathode structure. The cylinder is composed of 14 discrete planes, each plane having 14 individual radial outward projecting beams. Each of the 196 individual beams is steered by multiple deflection plates toward its two targets. Some deflection plates are connected to circuitry external to the Computron and are the data inputs. The balance of the plates are connected to internal targets and are the partial sums and products from other stages within the tube. Some of the targets are connected to circuitry outside the tube and represent the result.
The electronic function of the Computron design incorporated steered, rather than gated, multiple electron beams. Additionally, the Computron was based on the ability of a secondary electron emission target, under electron bombardment, to assume the potential of the nearest collector electrode. The Additron Tube design by Josef Kates gated electron beams of a fixed trajectory with several control grids which either passed or blocked a current. The Computron was a complex cathode ray tube while t |
https://en.wikipedia.org/wiki/Precession%20%28mechanical%29 | Precession is the process of a round part in a round hole, rotating with respect to each other, wherein the inner part begins rolling around the circumference of the outer bore, in a direction opposite of rotation. This is caused by too much clearance between them and a radial force on the part that constantly changes direction. The direction of rotation of the inner part is opposite to the direction of rotation of the radial force.
In a rotating machine, such as motor, engine, gear train, etc., precession can occur when too much clearance exists between a shaft and a bushing, or between the races and rolling elements in roller and ball bearings. Often a result of wear, inadequate lubrication (too little or too thin), or lack of precision engineering, such precession is usually accompanied by excess vibration and an audible rubbing or buzzing noise. This tends to accelerate the wear process, possibly leading to spalling, galling, or false brinelling (fretting wear) of the contact surfaces.
For stationary parts in contact with a rotating object, such as a bolt threaded into a hole, the sideways (radial) load constantly shifts position during use, and this lateral force translates into a rolling force that moves opposite to the direction of rotation. This can cause threaded parts to either tighten or loosen under a load, depending on the direction of rotation, typically with a force that can far exceed the typical torque of a wrench. This is a common problem in bicycle pedals, for example; thus, on nearly all bikes built after the 1930s, the left-side pedal is equipped with left-hand (backwards) threads to prevent it from unscrewing itself while riding.
This precession is a process purely due to contact forces and does not depend on inertia and is not inversely proportional to spin rate. It is completely unrelated to torque-free and torque-induced precession.
Examples
Precession caused by fretting can cause fastenings under large torque loads to unscrew themselves.
A |
https://en.wikipedia.org/wiki/Oscar%20Auerbach | Oscar Auerbach (January 1, 1905 – January 15, 1997) was an American pathologist and medical educator who significantly helped tie cigarette smoking to cancer.
Early life and education
Auerbach was born in Manhattan, New York City. He was the first child of European Jewish immigrants, Max and Jennie Auerbach. He attended Staten Island Academy but never completed high school or college. He entered New York University based on exams, then left without a degree to enter New York Medical College, receiving his MD in 1929. He later studied pathology in Vienna, where he met his wife.
Career
Auerbach worked at Staten Island's Sea View Hospital and Halloran Hospital in the 1930s and 1940s. Beginning in 1952, he worked for the Veterans Administration, holding the title senior medical investigator at his death. He also taught medicine at New York Medical College for 12 years and New Jersey Medical School for about 30 years.
Auerbach studied the link between smoking and cancer, and was called a "tireless" researcher. His studies were cited prominently in the 1964 Surgeon General's report on smoking, taking the evidence against smoking beyond statistical studies.
A resident of the Short Hills section of Millburn, New Jersey, Auerbach died at the age of 92 on January 15, 1997, at St. Barnabas Medical Center in Livingston, New Jersey.
See also
Health effects of tobacco smoking |
https://en.wikipedia.org/wiki/HPO%20formalism | The history projection operator (HPO) formalism is an approach to temporal quantum logic developed by Chris Isham. It deals with the logical structure of quantum mechanical propositions asserted at different points in time.
Introduction
In standard quantum mechanics a physical system is associated with a Hilbert space H. States of the system at a fixed time are represented by normalised vectors in the space and physical observables are represented by Hermitian operators on H.
A physical proposition about the system at a fixed time can be represented by an orthogonal projection operator P on H (see quantum logic). This representation links together the lattice operations in the lattice of logical propositions and the lattice of projection operators on a Hilbert space (see quantum logic).
The HPO formalism is a natural extension of these ideas to propositions about the system that are concerned with more than one time.
History propositions
Homogeneous histories
A homogeneous history proposition is a sequence of single-time propositions α1, α2, …, αn specified at different times t1 < t2 < … < tn. These times are called the temporal support of the history. We shall denote the proposition as (α1, α2, …, αn) and read it as
"α1 at time t1 is true and then α2 at time t2 is true and then … and then αn at time tn is true"
Inhomogeneous histories
Not all history propositions can be represented by a sequence of single-time propositions at different times. These are called inhomogeneous history propositions. An example is the proposition α OR β for two homogeneous histories α and β.
History projection operators
The key observation of the HPO formalism is to represent history propositions by projection operators on a history Hilbert space. This is where the name "History Projection Operator" (HPO) comes from.
For a homogeneous history (α1, α2, …, αn) we can use the tensor product to define a projector
Pα := Pα1 ⊗ Pα2 ⊗ … ⊗ Pαn
where Pαi is the projection operator on H that represents the proposition αi at time ti.
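As an illustrative two-qubit sketch (the specific projectors below are invented for the example, not taken from the source), the tensor-product construction can be checked numerically with NumPy: the Kronecker product of single-time projectors is again a projector on the tensor-product history space.

```python
import numpy as np

# Toy single-time propositions as projectors on a 2-dimensional Hilbert space:
P_up = np.array([[1.0, 0.0], [0.0, 0.0]])    # "spin up at t1" (illustrative)
P_plus = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # "spin +x at t2" (illustrative)

# Homogeneous-history projector on H ⊗ H via the Kronecker (tensor) product.
P_history = np.kron(P_up, P_plus)

# A tensor product of projectors is again a projector: P^2 = P and P = P^T.
assert np.allclose(P_history @ P_history, P_history)
assert np.allclose(P_history, P_history.T)
```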
This is a projection operator on the tensor product "history |
https://en.wikipedia.org/wiki/Dynamic%20frequency%20scaling | Dynamic frequency scaling (also known as CPU throttling) is a power management technique in computer architecture whereby the frequency of a microprocessor can be automatically adjusted "on the fly" depending on the actual needs, to conserve power and reduce the amount of heat generated by the chip. Dynamic frequency scaling helps preserve battery on mobile devices and decrease cooling cost and noise on quiet computing settings, or can be useful as a security measure for overheated systems (e.g. after poor overclocking).
Dynamic frequency scaling almost always appears in conjunction with dynamic voltage scaling, since higher frequencies require higher supply voltages for the digital circuit to yield correct results. The combined topic is known as dynamic voltage and frequency scaling (DVFS).
Processor throttling is also known as "automatic underclocking". Automatic overclocking (boosting) is also technically a form of dynamic frequency scaling, but it's relatively new and usually not discussed with throttling.
Operation
The dynamic power (switching power) dissipated by a chip is P = C·V²·A·f, where C is the capacitance being switched per clock cycle, V is the supply voltage, A is the activity factor indicating the average number of switching events per clock cycle by the transistors in the chip (a unitless quantity), and f is the clock frequency.
Voltage is therefore the main determinant of power usage and heating. The voltage required for stable operation is determined by the frequency at which the circuit is clocked, and can be reduced if the frequency is also reduced. Dynamic power alone does not account for the total power of the chip, however, as there is also static power, which is primarily because of various leakage currents. Due to static power consumption and asymptotic execution time it has been shown that the energy consumption of software shows convex energy behavior, i.e., there exists an optimal CPU frequency at which energy consumption is minimized.
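The dynamic-power relation can be sketched numerically (all component values below are purely illustrative): halving the frequency alone halves dynamic power, while scaling the voltage down together with the frequency gives a roughly cubic reduction.

```python
# Sketch of the dynamic-power relation P = C * V^2 * A * f (illustrative values).
def dynamic_power(c_farads, v_volts, activity, f_hz):
    return c_farads * v_volts**2 * activity * f_hz

base = dynamic_power(1e-9, 1.2, 0.1, 2.0e9)      # ~0.288 W for these numbers

# Halving f alone halves dynamic power...
assert abs(dynamic_power(1e-9, 1.2, 0.1, 1.0e9) - base / 2) < 1e-12

# ...but DVFS also lowers V with f, so power falls roughly with the cube of
# the scaling factor: half frequency at half voltage gives one eighth the power.
scaled = dynamic_power(1e-9, 0.6, 0.1, 1.0e9)
assert abs(scaled - base / 8) < 1e-12
```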
Leakage |
https://en.wikipedia.org/wiki/Bloch%20spectrum | The Bloch spectrum is a concept in quantum mechanics in the field of theoretical physics; this concept addresses certain energy spectrum considerations. Let H be the one-dimensional Schrödinger equation operator
where Uα is a periodic function of period α. The Bloch spectrum of H is defined as the set of values E for which all the solutions of (H − E)φ = 0 are bounded on the whole real axis. The Bloch spectrum consists of the half-line E0 < E from which certain closed intervals [E2j−1, E2j] (j = 1, 2, ...) are omitted. These are forbidden bands (or gaps) so the (E2j−2, E2j−1) are allowed bands. |
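The band/gap structure can be probed numerically with the classical Floquet (Hill) discriminant: E lies in the Bloch spectrum exactly when the trace of the monodromy matrix over one period has absolute value at most 2. The sketch below (function names and the RK4 discretization are my own, not from the source) integrates two fundamental solutions over one period.

```python
import math

# For -phi'' + U(x) phi = E phi with period alpha, E is in the Bloch spectrum
# iff |D(E)| <= 2, where D(E) = phi1(alpha) + phi2'(alpha) and phi1, phi2 are
# the solutions with initial data (1, 0) and (0, 1).
def discriminant(E, U, alpha, n=2000):
    h = alpha / n
    y = [1.0, 0.0, 0.0, 1.0]            # phi1, phi1', phi2, phi2'

    def deriv(x, s):
        p1, d1, p2, d2 = s
        return [d1, (U(x) - E) * p1, d2, (U(x) - E) * p2]

    x = 0.0
    for _ in range(n):                  # classic RK4 step
        k1 = deriv(x, y)
        k2 = deriv(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = deriv(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = deriv(x + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y[0] + y[3]                  # trace of the monodromy matrix

free = lambda x: 0.0                    # U = 0: the spectrum is E >= 0
assert abs(discriminant(1.0, free, alpha=1.0)) <= 2.0   # allowed band
assert discriminant(-1.0, free, alpha=1.0) > 2.0        # in the gap E < 0
```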
https://en.wikipedia.org/wiki/Speed%20dial | Speed dial is a function available on many telephone systems allowing the user to place a call by pressing a reduced number of keys. This function is particularly useful for phone users who dial certain numbers on a regular basis.
In most cases, the user stores these numbers in the phone's memory for future use. The speed dial numbers are usually accessed by pressing a pre-determined key or keys on the phone, followed by a one or two-digit code which the user assigns to each number; however for ease of use, on many systems a call may be placed by pressing and holding one key on the numeric keypad.
Speed dialing is also available via custom calling features from the telephone company's central office. The numbers are programmed by the subscriber through the standard telephone dial. On a standard rotary dial phone or an older 10-key Touch-Tone phone, a speed dial call is placed by dialing the digit and waiting a few seconds; on a modern 12-key Touch-Tone phone, dialing the digit followed by the # key connects the call instantly.
Most mobile phones have a contact list feature which provides similar abilities although most have an instant call button which only requires one click.
History
The capability for speed dial historically dates at least as far back as the Number One Electronic Switching System (1ESS) in 1965. Other early "instant dialers" dating back to 1972 also included punched card machines and magnetic tape machines.
Metal–oxide–semiconductor (MOS) integrated circuit (IC) telephone technology enabled speed dialing on push-button telephones in the early 1970s. MOS memory chips were used to store phone numbers, which could then be used for speed dialing at the push of a button. This was demonstrated by the British companies Pye TMC, Marconi-Elliott and GEC in 1970. Between 1971 and 1973, the American company Bell Laboratories developed a push-button MOS telephone called the "Touch-O-Matic" phone, which could store up to 32 phone numbers. This
https://en.wikipedia.org/wiki/Sven%20Erik%20J%C3%B8rgensen | Sven Erik Jørgensen (29 August 1934 – 5 March 2016) was an ecologist and chemist from Denmark.
Biography
He was also well known in biathlon.
Academic degrees and honors
In 1958, he was awarded a Master of Science in chemical engineering from the Technical University of Denmark, then a Doctor of Environmental Engineering (Karlsruhe Institute of Technology) and a Doctor of Science in ecological modelling (University of Copenhagen). He taught courses in ecological modelling in 32 countries. After his retirement, he became professor emeritus in environmental chemistry at the University of Copenhagen.
He was an honorary doctor at Coimbra University, Portugal, and at the University of Dar es Salaam, Tanzania.
He received several awards: the Ruđer Bošković award, the Prigogine Prize, the Blaise Pascal Medal, an Einstein professorship at the Chinese Academy of Sciences, and the Santa Chiara Prize for multidisciplinary teaching. In 2004, together with William J. Mitsch, he was awarded the Stockholm Water Prize.
Works
In 1975 he founded a journal, Ecological Modelling, and in 1978 he founded ISEM, the International Society of Ecological Modelling.
He published 366 papers of which 275 were in peer-reviewed international journals, and edited or authored 76 books, of which several have been translated into other languages (Chinese, Russian, Spanish, and Portuguese).
In 2011, he authored a textbook in ecological modelling, "Fundamentals of Ecological Modelling", which was published as a fourth edition together with Brian D. Fath of the Department of Biological Sciences, Towson University. It has been translated into Chinese and Russian (third edition). He was co-editor-in-chief of the "Encyclopedia of Ecology", published in 2008, and of the "Encyclopedia of Environmental Management", published in December 2012. He co-authored the textbook "Introduction to Systems Ecology", published in English in 2012 and in Chinese in 2013.
He was the editorial board member of 18 international journals in the |
https://en.wikipedia.org/wiki/Psychrometric%20constant | The psychrometric constant relates the partial pressure of water in air to the air temperature. This lets one interpolate actual vapor pressure from paired dry and wet thermometer bulb temperature readings.
γ = (cp · P) / (ε · λ)
where:
γ = psychrometric constant [kPa °C−1],
P = atmospheric pressure [kPa],
λ = latent heat of water vaporization, 2.45 [MJ kg−1],
cp = specific heat of air at constant pressure, 1.013 × 10−3 [MJ kg−1 °C−1],
ε = ratio of the molecular weight of water vapor to that of dry air = 0.622.
Both λ and cp are treated as constants.
Since atmospheric pressure, P, depends upon altitude, so does γ.
At higher altitude water evaporates and boils at lower temperature.
Although cp is treated as constant, varied air composition results in a varied cp in practice.
Thus on average, at a given location or altitude, the psychrometric constant is approximately constant. Still, it is worth remembering that weather impacts both atmospheric pressure and composition.
Vapor Pressure Estimation
Saturated vapor pressure, es = e[Tdry].
Actual vapor pressure, ea = e[Tdew] = e[Twet] − γ (Tdry − Twet).
Here e[T] is the saturation vapor pressure as a function of temperature, T.
Tdew = the dewpoint temperature at which water condenses.
Twet = the temperature of a wet thermometer bulb from which water can evaporate to air.
Tdry = the temperature of a dry thermometer bulb in air. |
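These relations can be sketched numerically. The sketch below follows FAO-56-style forms (the value of cp, the γ formula, and the Tetens formula for e[T] are standard assumptions rather than values quoted in this text).

```python
import math

# gamma = cp * P / (eps * lam), and the psychrometer relation
# e_a = e_s(T_wet) - gamma * (T_dry - T_wet).

def gamma(P_kpa, cp=1.013e-3, lam=2.45, eps=0.622):
    """Psychrometric constant [kPa / degC] at atmospheric pressure P [kPa]."""
    return cp * P_kpa / (eps * lam)

def e_sat(T):
    """Tetens saturation vapor pressure [kPa] at temperature T [degC]."""
    return 0.6108 * math.exp(17.27 * T / (T + 237.3))

def actual_vapor_pressure(t_dry, t_wet, P_kpa=101.3):
    """Actual vapor pressure from paired dry- and wet-bulb readings."""
    return e_sat(t_wet) - gamma(P_kpa) * (t_dry - t_wet)

print(round(gamma(101.3), 4))                 # → 0.0673 at sea-level pressure
print(round(actual_vapor_pressure(25.0, 18.0), 3))
```

At a higher altitude, say P = 80 kPa, γ drops proportionally, which is why the "constant" is only constant for a given location.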
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Semiconductor%20Manufacturing | The IEEE Transactions on Semiconductor Manufacturing is a quarterly peer-reviewed scientific journal published by the IEEE. It covers research on semiconductor device fabrication, including simulation and modeling from the factory to the detailed process level, defect control, yield analysis and optimization, production planning and scheduling, environmental issues in semiconductor manufacturing, and manufacturability improvement. The editor-in-chief is Reha Uzsoy (North Carolina State University). According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.874.
The journal is a joint publication of the IEEE Solid-State Circuits Society, IEEE Components, Packaging & Manufacturing Technology Society, IEEE Electron Devices Society, and the IEEE Reliability Society. |
https://en.wikipedia.org/wiki/Sensors%20%28journal%29 | Sensors is a monthly peer-reviewed, open access, scientific journal that is published by MDPI. It was established in June 2001. The editors-in-chief are Vittorio M.N. Passaro, Assefa M. Melesse, Alexander Star, Eduard Llobet, Guillermo Villanueva and Davide Brunelli. Sensors covers research on all aspects of sensors and biosensors. The journal publishes original research articles, short notes, review articles, book reviews, product reviews, and announcements related to academia.
Abstracts and indices
The following databases offer indexing and abstracting services:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.576. |
https://en.wikipedia.org/wiki/Comparison%20of%20screencasting%20software | This page provides a comparison of notable screencasting software, used to record activities on the computer screen. This software is commonly used for desktop recording, gameplay recording and video editing. Screencasting software is typically limited to streaming and recording desktop activity alone, in contrast with a software vision mixer, which has the capacity to mix and switch the output between various input streams.
Comparison by specification
Comparison by features
The following table compares features of screencasting software. The table has seven fields, as follows:
Product name: Product's name; sometimes includes edition if a certain edition is targeted
Audio: Specifies whether the product supports recording audio commentary on the video
Entire desktop: Specifies whether product supports recording the entire desktop
OpenGL: Specifies whether the product supports recording from video games and software that employ OpenGL to render digital images
Direct3D: Specifies whether the product supports recording from video games or software that employ Direct3D to render digital images
Editing: Specifies whether the product supports editing recorded video at least to some small extent, such as cropping, trimming or splitting
Output: Specifies the file format in which the software saves the final video (audio output types are omitted)
See also
Comparison of webcam software |
https://en.wikipedia.org/wiki/List%20of%20performance%20analysis%20tools | This is a list of performance analysis tools for use in software development.
General purpose, language independent
The following tools work based on log files that can be generated from various systems.
time (Unix) - can be used to determine the run time of a program, separately counting user time vs. system time, and CPU time vs. clock time.
timem (Unix) - can be used to determine the wall-clock time, CPU time, and CPU utilization similar to time (Unix) but supports numerous extensions.
Supports reporting peak resident set size, major and minor page faults, priority and voluntary context switches via getrusage.
Supports sampling procfs on supporting systems to report metrics such as page-based resident set size, virtual memory size, read-bytes, and write-bytes, etc.
Supports collecting hardware counters when built with PAPI support.
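The user/system CPU split and peak resident set size that these tools report are exposed programmatically through the POSIX getrusage interface. A minimal Python sketch (POSIX-only; note that ru_maxrss is in kilobytes on Linux but bytes on macOS):

```python
import resource

# Read the same user/system CPU split and peak RSS that time(1)/timem report,
# via getrusage for the current process.
def cpu_report():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return {"user_s": ru.ru_utime,      # CPU time spent in user mode
            "system_s": ru.ru_stime,    # CPU time spent in kernel mode
            "peak_rss": ru.ru_maxrss}   # KiB on Linux, bytes on macOS

sum(i * i for i in range(10**6))        # burn some user-mode CPU
stats = cpu_report()
assert stats["user_s"] >= 0.0 and stats["peak_rss"] > 0
print(stats)
```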
Multiple languages
The following tools work for multiple languages or binaries.
C and C++
Arm MAP, a performance profiler supporting Linux platforms.
AppDynamics, an application performance management solution for C/C++ applications via SDK.
AQtime Pro, a performance profiler and memory allocation debugger that can be integrated into Microsoft Visual Studio and Embarcadero RAD Studio, or can run as a stand-alone application.
IBM Rational Purify was a memory debugger allowing performance analysis.
Instruments (bundled with Xcode) is used to profile an executable's memory allocations, time usage, filesystem activity, GPU activity etc.
Intel Parallel Studio contains Intel VTune Amplifier, which tunes both serial and parallel programs. It also includes Intel Advisor and Intel Inspector. Intel Advisor optimizes vectorization (use of SIMD instructions) and prototypes threading implementations. Intel Inspector detects and debugs races, deadlocks and memory errors.
Parasoft Insure++ provides a graphical tool that displays and animates memory allocations in real time to expose memory blowout, fragmentation, overuse, bottlenec |
https://en.wikipedia.org/wiki/Wolfram%27s%202-state%203-symbol%20Turing%20machine | In his book A New Kind of Science, Stephen Wolfram described a universal 2-state 5-symbol Turing machine, and conjectured that a particular 2-state 3-symbol Turing machine (hereinafter (2,3) Turing machine) might be universal as well.
On May 14, 2007, Wolfram announced a $25,000 prize to be won by the first person to prove or disprove the universality of the (2,3) Turing machine. On 24 October 2007, it was announced that the prize had been won by Alex Smith, a student in electronics and computing at the University of Birmingham, for his proof that it was universal. Since the proof applies to a non-standard Turing machine model which allows infinite, non-periodic initial configurations, it is categorized by some as "weak-universal".
Background
Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols.
The following table indicates the actions to be performed by the Turing machine depending on whether its current state is A or B, and the symbol currently being read is 0, 1 or 2. The table entries indicate the symbol to be printed, the direction in which the tape head is to move, and the subsequent state of the machine.
{| class="wikitable"
! style="width:20px;"| !! A !! B
|-
! 0
| P1,R,B || P2,L,A
|-
! 1
| P2,L,A || P2,R,B
|-
! 2
| P1,L,A || P0,R,A
|}
The (2,3) Turing machine:
Has no halt state;
Is trivially related to 23 other machines by interchange of states, symbols and directions.
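The transition table above transcribes directly into a short simulator; because the machine has no halt state, it is simply run for a fixed number of steps (a sketch, with the R/L moves encoded as +1/−1 and the blank tape holding 0s).

```python
from collections import defaultdict

# Transcription of the table: (state, symbol read) -> (symbol to print,
# head move, next state). "P1,R,B" becomes (1, +1, "B"), and so on.
RULES = {
    ("A", 0): (1, +1, "B"), ("A", 1): (2, -1, "A"), ("A", 2): (1, -1, "A"),
    ("B", 0): (2, -1, "A"), ("B", 1): (2, +1, "B"), ("B", 2): (0, +1, "A"),
}

def run(steps, tape=None):
    tape = defaultdict(int, tape or {})   # unvisited cells read as 0
    state, head = "A", 0
    for _ in range(steps):                # no halt state: run a fixed count
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return state, head, dict(tape)

state, head, tape = run(100)
assert state in ("A", "B") and set(tape.values()) <= {0, 1, 2}
```

From a blank tape the first step reads 0 in state A, prints 1, moves right, and enters state B, after which the head wanders both ways over an ever-growing marked region.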
Minsky (1967) briefly argued that standard (2,2) machines cannot be universal and M. Margenstern (2010) provided a mathematical proof based on a result by L. Pavlotskaya in 1973 (not published but mentioned in Margenstern article); thus, it might seem that the (2,3) Turing machine would be the smallest possible universal Turing machine (in terms of number of stat |
https://en.wikipedia.org/wiki/Coomber%27s%20relationship | Coomber's relationship can be used to describe how the internal pressure and dielectric constant of a non-polar liquid are related.
Starting from π = (∂U/∂V)T, which defines the internal pressure of a liquid, it can be found that:
where
is equal to the number of molecules
is the ionization potential of the liquid
is a temperature dependent relation based on numerical constants of the pair summation from inter-particle geometry
is the polarizability
is the volume of the liquid
where for most non-polar liquids |
https://en.wikipedia.org/wiki/Galactitol-1-phosphate%205-dehydrogenase | In enzymology, a galactitol-1-phosphate 5-dehydrogenase () is an enzyme that catalyzes the chemical reaction
galactitol-1-phosphate + NAD+ ⇌ L-tagatose 6-phosphate + NADH + H+
Thus, the two substrates of this enzyme are galactitol-1-phosphate and NAD+, whereas its 3 products are L-tagatose 6-phosphate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is galactitol-1-phosphate:NAD+ oxidoreductase. This enzyme participates in galactose metabolism. It employs one cofactor, zinc. |
https://en.wikipedia.org/wiki/Homoserine%20dehydrogenase | In enzymology, a homoserine dehydrogenase is an enzyme that catalyzes the chemical reaction
L-homoserine + NAD(P)+ ⇌ L-aspartate 4-semialdehyde + NAD(P)H + H+
The two substrates of this enzyme are L-homoserine and NAD+ (or NADP+), whereas its three products are L-aspartate 4-semialdehyde, NADH (or NADPH), and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is L-homoserine:NAD(P)+ oxidoreductase. Other names in common use include HSDH and HSD.
Homoserine dehydrogenase catalyses the third step in the aspartate pathway; the NAD(P)-dependent reduction of aspartate beta-semialdehyde into homoserine. Homoserine is an intermediate in the biosynthesis of threonine, isoleucine, and methionine.
Enzyme structure
The enzyme can be found in a monofunctional form, in some bacteria and yeast. Structural analysis of the yeast monofunctional enzyme indicates that the enzyme is a dimer composed of three distinct regions; an N-terminal nucleotide-binding domain, a short central dimerisation region, and a C-terminal catalytic domain. The N-terminal domain forms a modified Rossmann fold, while the catalytic domain forms a novel alpha-beta mixed sheet.
The enzyme can also be found in a bifunctional form consisting of an N-terminal aspartokinase domain and a C-terminal homoserine dehydrogenase domain, as found in bacteria such as Escherichia coli and in plants.
The bifunctional aspartokinase-homoserine dehydrogenase (AK-HSD) enzyme has a regulatory domain that consists of two subdomains with a common loop-alpha helix-loop-beta strand-loop-beta strand motif. Each subdomain contains an ACT domain that allows for complex regulation of several different protein functions. The AK-HSD gene codes for aspartate kinase, an intermediate domain (coding for the linker region between the two enzymes in the bifunctional form), and finally the coding sequence for |
https://en.wikipedia.org/wiki/Histidinol%20dehydrogenase | In enzymology, histidinol dehydrogenase (HIS4) (HDH) is an enzyme that catalyzes the chemical reaction
L-histidinol + 2 NAD+ ⇌ L-histidine + 2 NADH + 2 H+
Thus, the two substrates of this enzyme are L-histidinol and NAD+, whereas its three products are L-histidine, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is L-histidinol:NAD+ oxidoreductase. This enzyme is also called L-histidinol dehydrogenase.
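The 2:1 NAD+ stoichiometry of the reaction above (two sequential hydride transfers in a four-electron oxidation of the alcohol) can be made concrete with a short bookkeeping sketch. This is an illustrative helper, not anything from the source; the function name and return format are this sketch's own choices.

```python
def histidinol_oxidation(mol_histidinol, mol_nad):
    """Moles of products formed by
        L-histidinol + 2 NAD+ -> L-histidine + 2 NADH + 2 H+
    limited by whichever reactant runs out first."""
    extent = min(mol_histidinol, mol_nad / 2)  # 2 NAD+ consumed per histidinol
    return {"L-histidine": extent, "NADH": 2 * extent, "H+": 2 * extent}

# Example: 1 mol histidinol with excess NAD+ yields 2 mol NADH.
products = histidinol_oxidation(1.0, 4.0)
```

With NAD+ limiting instead (say 2 mol histidinol but only 2 mol NAD+), only 1 mol of histidine can form, which is why the intermediate aldehyde step described below requires exchanging the first NADH for a second NAD+ molecule.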
Structure
In bacteria, HDH is a single chain polypeptide; in fungi it is the C-terminal domain of a multifunctional enzyme which catalyses three different steps of histidine biosynthesis; and in plants it is expressed as a nuclear encoded protein precursor which is exported to the chloroplast.
Active site
Histidinol is held inside the active site by a zinc ion, although the zinc ion does not otherwise participate in the catalysis. The zinc ion is held in place by His262, Gln259, Asp360 and His419 (which, in homodimeric histidinol dehydrogenases, comes from the other monomer). Histidinol itself is held in place by His327 and His367 from one monomer unit and Glu414 from the other monomer unit.
A Cys residue has been implicated in the catalytic mechanism of the second oxidative step. However, according to newer studies with histidinol dehydrogenase from E. coli, the mechanism is catalyzed by four bases, B1-B4. His327 acts as the first base, deprotonating histidinol's hydroxyl group. Concomitantly, hydride is abstracted from histidinol by NAD+, which is then exchanged for a second NAD+ molecule. Glu325 acts as the second base, deprotonating a molecule of water, which then attacks histidinol. At the same time, His327 (now protonated) donates a proton to the aldehydic oxygen, which results in a gem-diol. Then His327 again deprotonates one of the hydroxyl groups and NAD+ abstracts a proton from the reactive carbon a |
https://en.wikipedia.org/wiki/Little%20brown%20bat | The little brown bat or little brown myotis (Myotis lucifugus) is an endangered species of mouse-eared microbat found in North America. It has a small body size and glossy brown fur. It is similar in appearance to several other mouse-eared bats, including the Indiana bat, northern long-eared bat, and Arizona myotis, to which it is closely related. Despite its name, the little brown bat is not closely related to the big brown bat, which belongs to a different genus.
Its mating system is polygynandrous, or promiscuous, and females give birth to one offspring annually. The offspring, called pups, are quickly weaned and reach adult size in some dimensions by three weeks old. The little brown bat has a mean lifespan of 6.5 years, though one individual in the wild reached 34 years old. It is nocturnal, foraging for its insect prey at night and roosting in hollow trees or buildings during the day, among less common roost types. It navigates and locates prey with echolocation.
It has few natural predators, but may be killed by raptors such as owls, as well as terrestrial predators such as raccoons. Other sources of mortality include diseases such as rabies and white-nose syndrome. White-nose syndrome has been a significant cause of mortality since 2006, killing over one million little brown bats by 2011. In the Northeastern United States, population loss has been extreme, with surveyed hibernacula (caves used for hibernation) averaging a population loss of 90%.
Humans frequently encounter the little brown bat due to its habit of roosting in buildings. Colonies in buildings are often considered pests because of the production of waste or the concern of rabies transmission. Little brown bats rarely test positive for rabies, however. Some people attempt to attract little brown bats to their property, but not their houses, by installing bat houses.
Taxonomy
[Cladogram: relationships of Nearctic Myotis species] |
https://en.wikipedia.org/wiki/3-hydroxyisobutyrate%20dehydrogenase | In enzymology, a 3-hydroxyisobutyrate dehydrogenase, also known as β-hydroxyisobutyrate dehydrogenase or 3-hydroxyisobutyrate dehydrogenase, mitochondrial (HIBADH), is an enzyme that in humans is encoded by the HIBADH gene.
3-Hydroxyisobutyrate dehydrogenase catalyzes the chemical reaction:
3-hydroxy-2-methylpropanoate + NAD+ ⇌ 2-methyl-3-oxopropanoate + NADH + H+
Thus, the two substrates of this enzyme are 3-hydroxy-2-methylpropanoate and NAD+, whereas its three products are 2-methyl-3-oxopropanoate, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 3-hydroxy-2-methylpropanoate:NAD+ oxidoreductase. This enzyme participates in valine, leucine and isoleucine degradation.
Function
3-hydroxyisobutyrate dehydrogenase is a tetrameric mitochondrial enzyme that catalyzes the NAD+-dependent, reversible oxidation of 3-hydroxyisobutyrate, an intermediate of valine catabolism, to methylmalonate semialdehyde.
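Because the oxidation is reversible and NAD+-linked, the direction of net flux is set by the mass-action ratio: at equilibrium, [methylmalonate semialdehyde]/[3-hydroxyisobutyrate] = Keq · [NAD+]/([NADH]·[H+]). The sketch below just evaluates that ratio; the Keq value in the example is an illustrative placeholder, not a measured constant for this enzyme.

```python
def product_substrate_ratio(keq, nad, nadh, h=1e-7):
    """Equilibrium ratio [MMS]/[3-HIB] for the reversible reaction
        3-hydroxyisobutyrate + NAD+ <=> methylmalonate semialdehyde + NADH + H+
    given the equilibrium constant keq and concentrations in mol/L
    (h defaults to pH 7)."""
    return keq * nad / (nadh * h)

# Illustrative only: a high NAD+/NADH ratio pulls the reaction toward
# the semialdehyde, a low ratio pulls it back toward the alcohol.
ratio_oxidizing = product_substrate_ratio(1e-9, nad=1.0, nadh=0.01)
ratio_reducing = product_substrate_ratio(1e-9, nad=0.01, nadh=1.0)
```

This is the usual reason NAD+-linked dehydrogenases of this family can run in either direction in vivo: the cell's redox poise, not the enzyme, fixes the net flux.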
Structural studies
As of late 2007, five structures have been solved for this class of enzymes. |