| source | text |
|---|---|
https://en.wikipedia.org/wiki/Multicistronic%20message | Multicistronic message is an archaic term for Polycistronic. Monocistronic, bicistronic and tricistronic are also used to describe mRNA with single, double and triple coding areas (exons).
Note that the base word cistron is no longer used in genetics; in eukaryotic mRNA it has been superseded by the terms intron and exon. The mRNA found in bacteria, however, is mainly polycistronic, meaning that a single bacterial mRNA strand can be translated into several different proteins. This occurs when the coding sequences are separated by spacers, each of which carries a Shine-Dalgarno sequence upstream of the next start codon.
RNA |
https://en.wikipedia.org/wiki/Element%20%28category%20theory%29 | In category theory, the concept of an element, or a point, generalizes the more usual set theoretic concept of an element of a set to an object of any category. This idea often allows restating of definitions or properties of morphisms (such as monomorphism or product) given by a universal property in more familiar terms, by stating their relation to elements. Some very general theorems, such as Yoneda's lemma and the Mitchell embedding theorem, are of great utility for this, by allowing one to work in a context where these translations are valid. This approach to category theory – in particular the use of the Yoneda lemma in this way – is due to Grothendieck, and is often called the method of the functor of points.
Definition
Suppose C is any category and A, T are two objects of C. A T-valued point of A is simply a morphism $f \colon T \to A$. The set of all T-valued points of A varies functorially with T, giving rise to the "functor of points" of A; according to the Yoneda lemma, this completely determines A as an object of C.
Properties of morphisms
Many properties of morphisms can be restated in terms of points. For example, a map $f \colon B \to C$ is said to be a monomorphism if
For all maps $g, h \colon A \to B$, if $f \circ g = f \circ h$ then $g = h$.
Suppose $f \colon B \to C$ and $g, h \colon A \to B$ in C. Then g and h are A-valued points of B, and therefore monomorphism is equivalent to the more familiar statement
f is a monomorphism if it is an injective function on points of B.
Some care is necessary. f is an epimorphism if the dual condition holds:
For all maps g, h (of some suitable type), $g \circ f = h \circ f$ implies $g = h$.
In set theory, the term "epimorphism" is synonymous with "surjection", i.e.
Every point of C is the image, under f, of some point of B.
This is clearly not the translation of the first statement into the language of points, and in fact these statements are not equivalent in general. However, in some contexts, such as abelian categories, "monomorphism" and "epimorphism" are backed by sufficiently strong conditions that in fact they do allow such a reinterpretation. |
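As a concrete illustration of the failure in general (a standard textbook example, not taken from the excerpt above): in the category of monoids, the inclusion $\mathbb{N} \hookrightarrow \mathbb{Z}$ is an epimorphism even though it is not surjective. If two monoid homomorphisms $g, h \colon \mathbb{Z} \to M$ agree on $\mathbb{N}$, then

$$g(-1)\,g(1) = g(0) = e_M = h(0) = h(-1)\,h(1),$$

so $g(-1)$ and $h(-1)$ are both two-sided inverses of the common element $g(1) = h(1)$; since two-sided inverses in a monoid are unique, $g(-1) = h(-1)$, and hence $g = h$ on all of $\mathbb{Z}$.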
https://en.wikipedia.org/wiki/NBR2 | NBR2 is a gene best known for its location near the breast cancer associated gene BRCA1. Like BRCA1, NBR2 has been a subject of research, but links to breast cancer are currently inconclusive.
NBR2 was recently identified as a glucose starvation-induced long non-coding RNA. NBR2 interacts with AMP-activated protein kinase (AMPK), a critical energy sensor in most eukaryotic cells, and promotes AMPK function to mediate the energy stress response. Knockdown of NBR2 attenuates energy stress-induced AMPK activation, resulting in unchecked cell cycling, an altered apoptosis/autophagy response, and increased tumour development in vivo. It is now appreciated that NBR2, once dismissed as a junk gene, plays critical roles in tumor suppression. |
https://en.wikipedia.org/wiki/Secondary%20polynomials | In mathematics, the secondary polynomials $\{q_n(x)\}$ associated with a sequence of polynomials $\{p_n(x)\}$ orthogonal with respect to a density $\rho(x)$ are defined by

$$q_n(x) = \int_{\mathbb{R}} \frac{p_n(t) - p_n(x)}{t - x}\, \rho(t)\, dt.$$

To see that the functions $q_n(x)$ are indeed polynomials, consider the simple example of $p_0(x) = x^3$. Then,

$$q_0(x) = \int_{\mathbb{R}} \frac{t^3 - x^3}{t - x}\, \rho(t)\, dt = \int_{\mathbb{R}} t^2 \rho(t)\, dt + x \int_{\mathbb{R}} t\, \rho(t)\, dt + x^2 \int_{\mathbb{R}} \rho(t)\, dt,$$

which is a polynomial provided that the three integrals in $t$ (the moments of the density $\rho$) are convergent.
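A quick computational check of this example (a sketch using SymPy; the Gaussian density $\rho(t) = e^{-t^2}/\sqrt{\pi}$ is an assumption chosen for convenience, with moments $m_0 = 1$, $m_1 = 0$, $m_2 = 1/2$):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
rho = sp.exp(-t**2) / sp.sqrt(sp.pi)       # assumed density on the real line

# (t**3 - x**3)/(t - x) cancels to the polynomial t**2 + t*x + x**2
kernel = sp.cancel((t**3 - x**3) / (t - x))
q0 = sp.integrate(kernel * rho, (t, -sp.oo, sp.oo))
print(sp.expand(q0))                       # x**2 + 1/2, a polynomial in x
```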
See also
Secondary measure
Polynomials |
https://en.wikipedia.org/wiki/Internet%20radio%20audience%20measurement | Internet radio audience measurement is any method used to determine the number of people listening to an Internet radio broadcast. This information is usually obtained from the broadcaster's audio streaming server. Icecast, Nicecast, and SHOUTcast are examples of audio streaming servers that can provide listener statistics for audience measurement. These numbers often include information such as listeners' IP addresses, the media player they are using, how long they listened, and their computer's operating system.
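As an illustration, Icecast (version 2.4 and later) publishes aggregate listener statistics as JSON at a standard endpoint; a minimal polling sketch follows (the server URL is hypothetical, and the exact JSON layout can vary with server configuration):

```python
import json
from urllib.request import urlopen

STATS_URL = "http://radio.example.com:8000/status-json.xsl"  # hypothetical server

with urlopen(STATS_URL) as resp:
    stats = json.load(resp)

# "source" holds one dict per mountpoint (or a single dict if there is only one)
sources = stats["icestats"].get("source", [])
if isinstance(sources, dict):
    sources = [sources]

for s in sources:
    print(s.get("listenurl"), "- current listeners:", s.get("listeners"))
```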
This approach differs greatly from terrestrial radio audience measurement. Demographic and psychographic information cannot be easily collected due to the geographically diverse nature of typical Internet radio audiences. Arbitron, a research company in the United States which collects listener data on terrestrial radio audiences, has begun collecting listener data for Internet radio stations based on a panel of 200,000 users. The statistics collected from those users are then projected against the estimated 52 million actual Internet radio listeners.
DigitalRadioTracker.com (DRT) has developed a proprietary system that monitors Internet radio, as well as select terrestrial FM, college, and non-commercial radio, compiling the airplay of songs around the globe. It monitors more than 5,000 radio stations. DRT Reports provides users with detailed information on when, where, and how often songs are played on the radio, as well as which version of a song was played (vital for remixes and mash-ups). The company offers free weekly charts for the Top 200, Top 125 Independent, Top International, Top 50 Pop, Top 50 R&B/HipHop, Top 50 Country, Top 50 Christian/Gospel, Top 50 Adult Contemporary, and Top 50 Rock.
Triton Digital, a software company in the United States, also measures the worldwide Internet radio audience. It uses actual data collected from streaming servers rather than estimated data.
StreamAnalyst is a web-based service (SaaS) that generates audience statistics re |
https://en.wikipedia.org/wiki/TC%20Works%20Spark | TC Works Spark was a 2-track audio editing application for Mac OS 9 and Mac OS X, developed from 1999 to 2003 by TC Works, the former computer recording subsidiary of TC Electronic. Spark was discontinued in 2003.
Features
2 track audio editing.
CD burning.
Audio processing with included or third party VST or AU plug-ins.
Audio analysis tools.
Batch conversion.
Noise reduction tools.
Variants
Spark was available in these versions:
Spark ME - a free version available for download from the TC Works website.
Spark LE - a version bundled with early TC PowerCore cards.
Spark LE Plus - a version only available for purchase from the TC webshop.
Spark XL - the flagship application, bundled with several audio plug-ins.
Spark - the predecessor to Spark XL.
Spark Modular - a collection of software modules for building one's own modular synthesizer.
Spark FX Machine - a matrix similar to the one found in the TC Electronic FireworX hardware unit. |
https://en.wikipedia.org/wiki/Biometeorology | Biometeorology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or shorter (in contrast with bioclimatology).
Examples of relevant processes
Weather events influence biological processes on short time scales. For instance, as the Sun rises above the horizon in the morning, light levels become sufficient for the process of photosynthesis to take place in plant leaves. Later on, during the day, air temperature and humidity may induce the partial or total closure of the stomata, a typical response of many plants to limit the loss of water through transpiration. More generally, the daily evolution of meteorological variables controls the circadian rhythm of plants and animals alike.
Living organisms, for their part, can collectively affect weather patterns. The rate of evapotranspiration of forests, or of any large vegetated area for that matter, contributes to the release of water vapor in the atmosphere. This local, relatively fast and continuous process may contribute significantly to the persistence of precipitations in a given area. As another example, the wilting of plants results in definite changes in leaf angle distribution and therefore modifies the rates of reflection, transmission and absorption of solar light in these plants. That, in turn, changes the albedo of the ecosystem as well as the relative importance of the sensible and latent heat fluxes from the surface to the atmosphere. For an example in oceanography, consider the release of dimethyl sulfide by biological activity in sea water and its impact on atmospheric aerosols.
Human biometeorology
The methods and measurements traditionally used in biometeorology are not different when applied to study the interactions between human bodies and the atmosphere, but some aspects or applications may have been explored more extensively. For instance, wind chill has been investigated to determine th |
https://en.wikipedia.org/wiki/Surface%20computer | A surface computer is a computer that interacts with the user through the surface of an ordinary object, rather than through a monitor, keyboard, mouse, or other physical hardware.
The term "surface computer" was first adopted by Microsoft for its PixelSense (codenamed Milan) interactive platform, which was publicly announced on 30 May 2007. Featuring a horizontally-mounted 30-inch display in a coffee table-like enclosure, users can interact with the machine's graphical user interface by touching or dragging their fingertips and other physical objects such as paintbrushes across the screen, or by setting real-world items tagged with special bar-code labels on top of it. As an example, uploading digital files only requires each object (e.g. a Bluetooth-enabled digital camera) to be placed on the unit's display. The resulting pictures can then be moved across the screen, or their sizes and orientation can be adjusted as well.
PixelSense's internal hardware includes a 2.0 GHz Core 2 Duo processor, 2 GB of memory, an off-the-shelf graphics card, a scratch-proof, spill-proof surface, a DLP projector, and five infrared cameras to detect touch, unlike the iPhone, which uses a capacitive display. These expensive components resulted in a price tag of between $12,500 and $15,000 for the hardware.
The first PixelSense units were used as information kiosks in the Harrah's family of casinos. Other customers were T-Mobile, for comparing several cell phones side by side, and Sheraton Hotels and Resorts, to service lobby customers in numerous ways. These products were originally branded as "Microsoft Surface", but the platform was renamed "Microsoft PixelSense" on June 18, 2012, after the manufacturer adopted the "Surface" name for its new series of tablet PCs.
See also
Surface computing
Table computer
TouchLight
Jeff Han FTIR |
https://en.wikipedia.org/wiki/Tecomanthe%20speciosa | Tecomanthe speciosa (also known as the Three Kings vine or akapukaea) is a species of subtropical forest lianes. A single specimen was first discovered on Manawatāwhi / Three Kings Islands, 55 km off the northern tip of New Zealand, during a scientific survey in 1945. No other specimens have ever been found in the wild. Tecomanthe is a tropical genus not otherwise represented in New Zealand. Four other species of Tecomanthe occur in Queensland, Indonesia, New Guinea, and the Solomon Islands.
Description
Tecomanthe speciosa is a vigorous twining climber growing up to 10 m in height. The glossy, thick compound leaves consist of up to five leaflets. In autumn or early winter it bears long cream-coloured tubular flowers that emerge directly from the stem in large clusters. The flowers appear to be adapted to pollination by bats, despite the fact that bats are not part of the present-day fauna of the Three Kings Islands (though they may once have been present). Nevertheless, the flowers of plants growing in cultivation are readily pollinated by a large number of native and exotic birds. Although a subtropical plant, Tecomanthe speciosa is able to survive temperatures as low as −2 °C, and cultivated plants as far south as Dunedin have been noted as surviving.
It has not yet been formally assessed for the IUCN Red List, but a preliminary assessment of the conservation status of all New Zealand vascular plants found T. speciosa to be "Nationally Critical".
Discovery and cultivation
Tecomanthe speciosa may once have been common on the Three Kings. By the time of its discovery, goats that had been introduced to the islands had reduced the entire population to a single specimen on Great Island, making it one of the world's most endangered plants. The remaining specimen grew on a cliff that was too steep for the goats to reach. The original specimen still grows in the wild, and has developed more vines through the natural process of layering in the years since its discovery. |
https://en.wikipedia.org/wiki/Plate%20trick | In mathematics and physics, the plate trick, also known as Dirac's string trick, the belt trick, or the Balinese cup trick, is any of several demonstrations of the idea that rotating an object with strings attached to it by 360 degrees does not return the system to its original state, while a second rotation of 360 degrees, a total rotation of 720 degrees, does. Mathematically, it is a demonstration of the theorem that SU(2) (which double-covers SO(3)) is simply connected. To say that SU(2) double-covers SO(3) essentially means that the unit quaternions represent the group of rotations twice over. A detailed, intuitive, yet semi-formal articulation can be found in the article on tangloids.
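The double cover can be verified numerically with unit quaternions (a minimal sketch of the underlying algebra; the helper functions are our own, not from the article):

```python
import numpy as np

def rot_quaternion(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation by `angle` about `axis`.
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def qmul(a, b):
    # Hamilton product of two quaternions.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

full_turn = rot_quaternion([0, 0, 1], 2 * np.pi)
print(np.round(full_turn, 6))                   # [-1. 0. 0. 0.]: not the identity
print(np.round(qmul(full_turn, full_turn), 6))  # [ 1. 0. 0. 0.]: 720 degrees is
```

A 360° rotation maps to the quaternion −1 rather than +1, which is exactly the statement that SU(2) covers SO(3) twice.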
Demonstrations
Resting a small plate flat on the palm, it is possible to perform two rotations of one's hand while keeping the plate upright. After the first rotation of the hand, the arm will be twisted, but after the second rotation it will end in the original position. To do this, the hand makes one rotation passing over the elbow, twisting the arm, and then another rotation passing under the elbow untwists it.
In mathematical physics, the trick illustrates the quaternionic mathematics behind the spin of spinors. As with the plate trick, these particles' spins return to their original state only after two full rotations, not after one.
The belt trick
The same phenomenon can be demonstrated using a leather belt with an ordinary frame buckle, whose prong serves as a pointer. The end opposite the buckle is clamped so it cannot move. The belt is extended without a twist and the buckle is kept horizontal while being turned clockwise one complete turn (360°), as evidenced by watching the prong. The belt will then appear twisted, and no maneuvering of the buckle that keeps it horizontal and pointed in the same direction can undo the twist. Obviously a 360° turn counterclockwise would undo the twist. The surprise element of the trick is that a second 360° turn in the clockwise direction, while apparently making matters worse, produces a double twist that can be undone entirely by maneuvering the belt, without rotating the buckle at all. |
https://en.wikipedia.org/wiki/Jane%20S.%20Richardson | Jane Shelby Richardson (born January 25, 1941) is an American biophysicist best known for developing the Richardson diagram, or ribbon diagram, a method of representing the 3D structure of proteins. Ribbon diagrams have become a standard representation of protein structures that has facilitated further investigation of protein structure and function globally. With interests in astronomy, math, physics, botany, and philosophy, Richardson took an unconventional route to establishing a science career. Today Richardson is a professor of biochemistry at Duke University.
Biography
Richardson was born on January 25, 1941, and grew up in Teaneck, New Jersey. Her father was an electrical engineer and her mother was an English teacher. Her parents encouraged an interest in science and she was a member of local astronomy clubs as early as elementary school. She attended Teaneck High School and in 1958 won third place in the Westinghouse Science Talent Search, the most prestigious science fair in the United States, with calculations of the satellite Sputnik's orbit from her own observations.
She continued her education intending to study mathematics, astronomy and physics at Swarthmore College. However, Richardson instead graduated Phi Beta Kappa with a bachelor's degree in philosophy and a minor in physics in 1962 before she pursued graduate work in philosophy at Harvard University. Meanwhile, she was able to enroll in plant taxonomy and evolution courses at Harvard that would later contribute to her big-picture approach to studying protein structure. Since Harvard's philosophy department focused on modern philosophy instead of Richardson's interest, classical philosophy, Richardson left with her master's degree from Harvard in 1966. Post-graduation, Richardson tried teaching high school, but soon realized that this career path was not for her. She subsequently rejoined the scientific world, working as a technician at Massachusetts Institute of Technology in the same laboratory as her husband, David Richardson. |
https://en.wikipedia.org/wiki/Ribbon%20diagram | Ribbon diagrams, also known as Richardson diagrams, are 3D schematic representations of protein structure and are one of the most common methods of protein depiction used today. The ribbon depicts the general course and organisation of the protein backbone in 3D and serves as a visual framework for hanging details of the entire atomic structure, such as the balls for the oxygen atoms attached to myoglobin's active site in the adjacent figure. Ribbon diagrams are generated by interpolating a smooth curve through the polypeptide backbone. α-helices are shown as coiled ribbons or thick tubes, β-strands as arrows, and non-repetitive coils or loops as lines or thin tubes. The direction of the polypeptide chain is shown locally by the arrows, and may be indicated overall by a colour ramp along the length of the ribbon.
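A minimal sketch of that interpolation step (the Cα coordinates below are toy values and the SciPy spline routines are our choice; real diagrams start from coordinates read out of a structure file):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Toy stand-in for C-alpha backbone coordinates
ca = np.array([[0.0, 0.0, 0.0], [1.5, 2.0, 0.5], [3.0, 2.5, 1.8],
               [4.5, 1.0, 2.5], [6.0, 0.5, 4.0], [7.5, 2.0, 4.5]])

# Fit a smooth parametric spline through the backbone trace...
tck, _ = splprep(ca.T, s=0)
# ...and resample it densely; a renderer would sweep the ribbon along this curve
u = np.linspace(0, 1, 200)
curve = np.array(splev(u, tck)).T
print(curve.shape)   # (200, 3)
```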
Ribbon diagrams are simple yet powerful, expressing the visual basics of a molecular structure (twist, fold and unfold). This method has successfully portrayed the overall organization of protein structures, reflecting their three-dimensional nature and allowing better understanding of these complex objects both by expert structural biologists and by other scientists, students, and the general public.
History
The first ribbon diagrams, hand-drawn by Jane S. Richardson in 1980 (influenced by earlier individual illustrations), were the first schematics of 3D protein structure to be produced systematically. They were created to illustrate a classification of protein structures for an article in Advances in Protein Chemistry (now available in annotated form on-line at Anatax). These drawings were outlined in pen on tracing paper over a printout of a Cα trace of the atomic coordinates, and shaded with colored pencil or pastels; they preserved positions, smoothed the backbone path, and incorporated small local shifts to disambiguate the visual appearance. As well as the triose isomerase ribbon drawing at the right, other hand-drawn examples depicted prealbumin. |
https://en.wikipedia.org/wiki/Combinatorial%20commutative%20algebra | Combinatorial commutative algebra is a relatively new, rapidly developing mathematical discipline. As the name implies, it lies at the intersection of two more established fields, commutative algebra and combinatorics, and frequently uses methods of one to address problems arising in the other. Less obviously, polyhedral geometry plays a significant role.
One of the milestones in the development of the subject was Richard Stanley's 1975 proof of the Upper Bound Conjecture for simplicial spheres, which was based on earlier work of Melvin Hochster and Gerald Reisner. While the problem can be formulated purely in geometric terms, the methods of the proof drew on commutative algebra techniques.
A signature theorem in combinatorial commutative algebra is the characterization of h-vectors of simplicial polytopes conjectured in 1970 by Peter McMullen. Known as the g-theorem, it was proved in 1979 by Stanley (necessity of the conditions, algebraic argument) and by Louis Billera and Carl W. Lee (sufficiency, combinatorial and geometric construction). A major open question was the extension of this characterization from simplicial polytopes to simplicial spheres, the g-conjecture, which was resolved in 2018 by Karim Adiprasito.
Important notions of combinatorial commutative algebra
Square-free monomial ideal in a polynomial ring and Stanley–Reisner ring of a simplicial complex (see the sketch after this list).
Cohen–Macaulay ring.
Monomial ring, closely related to an affine semigroup ring and to the coordinate ring of an affine toric variety.
Algebra with a straightening law. There are several versions of those, including Hodge algebras of Corrado de Concini, David Eisenbud, and Claudio Procesi.
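A small illustration of the first notion in the list (a sketch; the two-triangle complex is hypothetical): the Stanley–Reisner ideal of a simplicial complex is generated by the square-free monomials corresponding to its minimal non-faces, which can be enumerated directly:

```python
from itertools import combinations

vertices = [1, 2, 3, 4]
facets = [{1, 2, 3}, {2, 3, 4}]   # hypothetical complex: two triangles sharing an edge

def is_face(s):
    # A set is a face exactly when it is contained in some facet
    return any(s <= f for f in facets)

nonfaces = [set(c) for r in range(1, len(vertices) + 1)
            for c in combinations(vertices, r) if not is_face(set(c))]
minimal = [s for s in nonfaces if not any(t < s for t in nonfaces)]
print(minimal)   # [{1, 4}]: the ideal is generated by x1*x4 in k[x1, ..., x4]
```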
See also
Algebraic combinatorics
Polyhedral combinatorics
Zero-divisor graph |
https://en.wikipedia.org/wiki/Immunolabeling | Immunolabeling is a biochemical process that enables the detection and localization of an antigen to a particular site within a cell, tissue, or organ. Antigens are organic molecules, usually proteins, capable of binding to an antibody. These antigens can be visualized using a combination of antigen-specific antibody as well as a means of detection, called a tag, that is covalently linked to the antibody. If the immunolabeling process is meant to reveal information about a cell or its substructures, the process is called immunocytochemistry. Immunolabeling of larger structures is called immunohistochemistry.
There are two complex steps in the manufacture of antibody for immunolabeling. The first is producing the antibody that binds specifically to the antigen of interest and the second is fusing the tag to the antibody. Since it is impractical to fuse a tag to every conceivable antigen-specific antibody, most immunolabeling processes use an indirect method of detection. This indirect method employs a primary antibody that is antigen-specific and a secondary antibody fused to a tag that specifically binds the primary antibody. This indirect approach permits mass production of secondary antibody that can be bought off the shelf. Pursuant to this indirect method, the primary antibody is added to the test system. The primary antibody seeks out and binds to the target antigen. The tagged secondary antibody, designed to attach exclusively to the primary antibody, is subsequently added.
Typical tags include: a fluorescent compound, gold beads, a particular epitope tag, or an enzyme that produces a colored compound. The association of the tags to the target via the antibodies provides for the identification and visualization of the antigen of interest in its native location in the tissue, such as the cell membrane, cytoplasm, or nuclear membrane. Under certain conditions the method can be adapted to provide quantitative information.
Immunolabeling can be use |
https://en.wikipedia.org/wiki/H-vector | In algebraic combinatorics, the h-vector of a simplicial polytope is a fundamental invariant of the polytope which encodes the number of faces of different dimensions and allows one to express the Dehn–Sommerville equations in a particularly simple form. A characterization of the set of h-vectors of simplicial polytopes was conjectured by Peter McMullen and proved by Lou Billera and Carl W. Lee and Richard Stanley (g-theorem). The definition of h-vector applies to arbitrary abstract simplicial complexes. The g-conjecture stated that for simplicial spheres, all possible h-vectors occur already among the h-vectors of the boundaries of convex simplicial polytopes. It was proven in December 2018 by Karim Adiprasito.
Stanley introduced a generalization of the h-vector, the toric h-vector, which is defined for an arbitrary ranked poset, and proved that for the class of Eulerian posets, the Dehn–Sommerville equations continue to hold. A different, more combinatorial, generalization of the h-vector that has been extensively studied is the flag h-vector of a ranked poset. For Eulerian posets, it can be more concisely expressed by means of a noncommutative polynomial in two variables called the cd-index.
Definition
Let Δ be an abstract simplicial complex of dimension d − 1 with $f_i$ i-dimensional faces and $f_{-1} = 1$. These numbers are arranged into the f-vector of Δ,

$$f(\Delta) = (f_{-1}, f_0, \ldots, f_{d-1}).$$

An important special case occurs when Δ is the boundary of a d-dimensional convex polytope.

For k = 0, 1, …, d, let

$$h_k = \sum_{i=0}^{k} (-1)^{k-i} \binom{d-i}{k-i} f_{i-1}.$$

The tuple

$$h(\Delta) = (h_0, h_1, \ldots, h_d)$$

is called the h-vector of Δ. In particular, $h_0 = 1$, $h_1 = f_0 - d$, and $h_d = (-1)^{d-1}\tilde{\chi}(\Delta)$, where $\tilde{\chi}(\Delta)$ is the reduced Euler characteristic of Δ. The f-vector and the h-vector uniquely determine each other through the linear relation

$$\sum_{i=0}^{d} f_{i-1}(t-1)^{d-i} = \sum_{k=0}^{d} h_k t^{d-k},$$

from which it follows that, for $0 \le k \le d$,

$$f_{k-1} = \sum_{i=0}^{k} \binom{d-i}{k-i} h_i.$$

In particular, $f_{d-1} = h_0 + h_1 + \cdots + h_d$. Let R = k[Δ] be the Stanley–Reisner ring of Δ. Then its Hilbert–Poincaré series can be expressed as

$$P_R(t) = \sum_{i=0}^{d} \frac{f_{i-1}\, t^i}{(1-t)^i} = \frac{h_0 + h_1 t + \cdots + h_d t^d}{(1-t)^d}.$$

This motivates the definition of the h-vector of a finitely generated positively graded algebra of Krull dimension d as the numerator of its Hilbert–Poincaré series written with the denominator (1 − t)^d. |
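A direct computation of the f-to-h transformation above (a minimal sketch; the octahedron boundary is our choice of example):

```python
from math import comb

def h_vector(f, d):
    # f = (f_{-1}, f_0, ..., f_{d-1});  h_k = sum_i (-1)^(k-i) C(d-i, k-i) f_{i-1}
    return [sum((-1) ** (k - i) * comb(d - i, k - i) * f[i] for i in range(k + 1))
            for k in range(d + 1)]

# Boundary of the octahedron: 6 vertices, 12 edges, 8 triangles, d = 3
print(h_vector([1, 6, 12, 8], 3))   # [1, 3, 3, 1], symmetric per Dehn-Sommerville
```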
https://en.wikipedia.org/wiki/Stieltjes%20transformation | In mathematics, the Stieltjes transformation $S_\rho(z)$ of a measure of density $\rho$ on a real interval $I$ is the function of the complex variable $z$ defined outside $I$ by the formula

$$S_\rho(z) = \int_I \frac{\rho(t)\,dt}{z - t}.$$

Under certain conditions we can reconstitute the density function $\rho$ starting from its Stieltjes transformation thanks to the inverse formula of Stieltjes-Perron. For example, if the density $\rho$ is continuous throughout $I$, one will have inside this interval

$$\rho(x) = \lim_{\varepsilon \to 0^+} \frac{S_\rho(x - i\varepsilon) - S_\rho(x + i\varepsilon)}{2 i \pi}.$$
Connections with moments of measures
If the measure of density $\rho$ has moments of any order, defined for each integer $n$ by the equality

$$m_n = \int_I t^n \rho(t)\,dt,$$

then the Stieltjes transformation of $\rho$ admits for each integer $n$ the asymptotic expansion in the neighbourhood of infinity given by

$$S_\rho(z) = \sum_{k=0}^{n} \frac{m_k}{z^{k+1}} + o\!\left(\frac{1}{z^{n+1}}\right).$$

Under certain conditions the complete expansion as a Laurent series can be obtained:

$$S_\rho(z) = \sum_{n=0}^{\infty} \frac{m_n}{z^{n+1}}.$$
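A quick numeric check of the definition and the moment expansion (a sketch; the uniform density on [0, 1], for which $S_\rho(z) = \log\frac{z}{z-1}$ and $m_k = \frac{1}{k+1}$, is our choice):

```python
import numpy as np

def stieltjes(z, n=200_000):
    # simple quadrature of the defining integral, with rho = 1 on [0, 1]
    t = np.linspace(0.0, 1.0, n)
    return np.mean(1.0 / (z - t))

z = 5.0
moments = [1.0 / (k + 1) for k in range(8)]
series = sum(m / z ** (k + 1) for k, m in enumerate(moments))
print(stieltjes(z), np.log(z / (z - 1)), series)   # all approximately 0.2231
```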
Relationships to orthogonal polynomials
The correspondence $(f, g) \mapsto \langle f, g \rangle = \int_I f(t)\, g(t)\, \rho(t)\,dt$ defines an inner product on the space of continuous functions on the interval $I$.
If $\{p_n\}$ is a sequence of orthogonal polynomials for this product, we can create the sequence of associated secondary polynomials by the formula

$$q_n(x) = \int_I \frac{p_n(t) - p_n(x)}{t - x}\, \rho(t)\,dt.$$

It appears that $\frac{q_n(z)}{p_n(z)}$ is a Padé approximation of $S_\rho(z)$ in a neighbourhood of infinity, in the sense that

$$S_\rho(z) - \frac{q_n(z)}{p_n(z)} = O\!\left(\frac{1}{z^{2n+1}}\right).$$

Since these two sequences of polynomials satisfy the same three-term recurrence relation, we can develop a continued fraction for the Stieltjes transformation whose successive convergents are the fractions $\frac{q_n(z)}{p_n(z)}$.
The Stieltjes transformation can also be used to construct from the density an effective measure for transforming the secondary polynomials into an orthogonal system. (For more details see the article secondary measure.)
See also
Orthogonal polynomials
Secondary polynomials
Secondary measure |
https://en.wikipedia.org/wiki/Chronic%20multifocal%20Langerhans%20cell%20histiocytosis | Chronic multifocal Langerhans cell histiocytosis, previously known as Hand–Schüller–Christian disease, is a type of Langerhans cell histiocytosis (LCH), which can affect multiple organs. The condition is traditionally associated with a combination of three features: bulging eyes, breakdown of bone (lytic bone lesions, often in the skull), and diabetes insipidus (excessive thirst and excessive urination), although around 75% of cases do not have all three features. Other features may include a fever and weight loss, and depending on the organs involved there may be rashes, asymmetry of the face, ear infections, signs in the mouth and the appearance of advanced gum disease. Features relating to lung and liver disease may occur.
It is due to a genetic mutation in the MAPKinase pathway that occurs during early development. The diagnosis may be suspected based on symptoms and MRI and confirmed by tissue biopsy. Blood tests may show anaemia, and less commonly a low white blood cell count and low platelet count.
Treatment may involve surgery, chemotherapy, radiation therapy, and certain medicines.
Hand–Schüller–Christian disease was named for the American pediatrician Alfred Hand Jr., the Austrian neuroradiologist Arthur Schüller, and the American internist Henry Asbury Christian, who described it in 1893, 1915 and 1919, respectively. Before the Histiocyte Society classified histiocytoses in the 1980s, the condition was also known as "Histiocytosis X", where "X" denoted the then unknown cause. It is now known as chronic multifocal Langerhans cell histiocytosis, a subtype of LCH.
The disease is rare. Most cases present between the ages of two and six. The outlook depends on how many organs are affected and how severely. In some people the condition is life-threatening.
Signs and symptoms
The traditional combination of three features are seen in 25% of people with the condition, which usually presents between the ages of two and six; one or both bulging eyes, breakdown of bone (lytic bo |
https://en.wikipedia.org/wiki/Second%20wind | Second wind is a phenomenon in endurance sports, such as marathons or road running (as well as other sports), whereby an athlete who is out of breath and too tired to continue (known as "hitting the wall") finds the strength to press on at top performance with less exertion. The feeling may be similar to that of a "runner's high", the most obvious difference being that the runner's high occurs after the race is over. In muscle glycogenoses (muscle GSDs), an inborn error of carbohydrate metabolism impairs either the formation or utilization of muscle glycogen. As such, those with muscle glycogenoses do not need to do prolonged exercise to experience "hitting the wall". Instead, signs of exercise intolerance, such as an inappropriately rapid heart rate response to exercise, are experienced from the beginning of an activity, and people with some muscle GSDs can achieve a second wind within about 10 minutes from the beginning of an aerobic activity, such as walking. (See below in pathology).
In experienced athletes, "hitting the wall" is conventionally believed to be due to the body's glycogen stores being depleted, with the "second wind" occurring when fatty acids become the predominant source of energy. The delay between "hitting the wall" and the "second wind" reflects how slowly fatty acid metabolism begins producing sufficient ATP (energy): fatty acids take approximately 10 minutes to do so, whereas muscle glycogen delivers ATP considerably faster, in about 30 seconds. Some scientists believe the second wind to be a result of the body finding the proper balance of oxygen to counteract the buildup of lactic acid in the muscles. Others claim second winds are due to endorphin production.
Heavy breathing during exercise also provides cooling for the body. After some time the veins and capillaries dilate and cooling takes place more through the skin, so less heavy breathing is needed. The increase in the temperature of the skin can be felt at the same time as the "second wind" takes place. |
https://en.wikipedia.org/wiki/Comparison%20of%20packet%20analyzers | The following tables compare general and technical information for several packet analyzer software utilities, also known as network analyzers or packet sniffers. Please see the individual products' articles for further information.
General information
Basic general information about the software—creator/company, license/price, etc.
Operating system support
The utilities can run on these operating systems. |
https://en.wikipedia.org/wiki/Rayleigh%20Medal | The Rayleigh Medal is a prize awarded annually by the Institute of Acoustics for "outstanding contributions to acoustics". The prize is named after John Strutt, 3rd Baron Rayleigh. It should not be confused with the medal of the same name awarded by the Institute of Physics.
List of recipients
Source: Institute of Acoustics
See also
List of physics awards |
https://en.wikipedia.org/wiki/A%20B%20Wood%20Medal | The A B Wood Medal is a prize awarded annually by the Institute of Acoustics for "distinguished contributions to the application of underwater acoustics". The prize, named after Albert Beaumont Wood, is presented in alternate years to European and North American scientists.
Recipients
Source: Institute of Acoustics
See also
List of physics awards |
https://en.wikipedia.org/wiki/Journal%20of%20Symbolic%20Logic | The Journal of Symbolic Logic is a peer-reviewed mathematics journal published quarterly by the Association for Symbolic Logic. It was established in 1936 and covers mathematical logic. The journal is indexed by Mathematical Reviews, Zentralblatt MATH, and Scopus. Its 2009 MCQ was 0.28, and its 2009 impact factor was 0.631.
Mathematics journals
Academic journals established in 1936
Multilingual journals
Quarterly journals
Association for Symbolic Logic academic journals
Logic journals
Cambridge University Press academic journals |
https://en.wikipedia.org/wiki/Volatile%20acid | In chemistry, the terms volatile acid (or volatile fatty acid (VFA)) and volatile acidity (VA) are used somewhat differently in various application areas.
Wine
In wine chemistry, the volatile acids are those that can be separated from wine through steam distillation. Many factors influence the level of VA, but the growth of spoilage bacteria and yeasts is the primary source, and consequently VA is often used to quantify the degree of wine oxidation and spoilage.
Acetic acid is the primary volatile acid in wine, but smaller amounts of lactic, formic, butyric, propionic, carbonic (from carbon dioxide), and sulfurous (from sulfur dioxide) acids may be present and contribute to VA; in analysis, measures may be taken to exclude or correct for the VA due to carbonic, sulfurous, and sorbic acids.
Other acids present in wine, including malic and tartaric acid, are considered non-volatile or fixed acids. Together, volatile and non-volatile acidity comprise total acidity.
Classical analysis for VA involves distillation in a Cash or Markham still, followed by titration with standardized sodium hydroxide, and reporting of the results as acetic acid.
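A worked example of the final reporting step (all numbers below are assumed sample values for illustration, not figures from the text):

```python
# Convert a VA titration result to g/L expressed as acetic acid (1:1 with NaOH)
v_naoh_ml = 3.2        # assumed titrant volume used (mL)
c_naoh = 0.1           # assumed NaOH concentration (mol/L)
v_sample_ml = 10.0     # assumed volume of distillate titrated (mL)
mw_acetic = 60.05      # molar mass of acetic acid (g/mol)

mol_acid = c_naoh * v_naoh_ml / 1000.0               # moles of acid neutralized
va_g_per_l = mol_acid * mw_acetic * 1000.0 / v_sample_ml
print(f"VA = {va_g_per_l:.2f} g/L as acetic acid")   # VA = 1.92 g/L
```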
Several alternatives to the classical analysis have been developed.
While VA is typically considered a wine flaw or fault, winemakers may intentionally allow a small amount of VA in their product for its contribution to the wine's sensory complexity. Excess VA is difficult for winemakers to correct. In some countries, including the United States, the European Union, and Australia, the law sets a limit on the allowable level of VA.
Wastewater
In wastewater treatment, the volatile acids are the short-chain fatty acids (1-6 carbon atoms) that are water-soluble and can be steam-distilled at atmospheric pressure - primarily acetic, propionic, and butyric acid. These acids are produced during anaerobic digestion. In a well-functioning digester, the volatile acids will be consumed by the methane-forming bacteria. Volatile |
https://en.wikipedia.org/wiki/Streaming%20data | Streaming data is data that is continuously generated by different sources. Such data should be processed incrementally using stream processing techniques without having access to all of the data. In addition, it should be considered that concept drift may happen in the data which means that the properties of the stream may change over time.
Streaming data is usually discussed in the context of big data, where it is generated by many different sources at high speed.
Data streaming can also be explained as a technology used to deliver content to devices over the internet, and it allows users to access the content immediately, rather than having to wait for it to be downloaded.
Big data is forcing many organizations to focus on storage costs, which brings interest to data lakes and data streams. A data lake refers to the storage of a large amount of unstructured and semi-structured data, and is useful due to the increase of big data, as it can be stored in such a way that firms can dive into the data lake and pull out what they need at the moment they need it. A data stream, by contrast, can perform real-time analysis on streaming data, and it differs from data lakes in the speed and continuous nature of the analysis, without the data having to be stored first.
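A minimal sketch of the incremental, one-pass processing described above (the running-mean example and all names are ours):

```python
import random

def running_mean(stream):
    # One-pass update: never stores the stream, only the current state
    count, mean = 0, 0.0
    for x in stream:
        count += 1
        mean += (x - mean) / count
        yield mean

stream = (random.gauss(0, 1) for _ in range(10_000))   # simulated unbounded source
for m in running_mean(stream):
    pass
print(round(m, 3))   # close to 0, computed without ever holding the data
```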
Characteristics and consequences
In digital innovation management theories, five characteristics of digital innovative technologies are mentioned; homogenization and decoupling, modularity, connectivity, digital traces and programmability. Before these characteristics are explained and further elaborated with different examples of data streaming, it is important to understand the difference between digitalization and digitizing. The latter describes encoding from analog information to a digital format, such as light that enters the lens of a camera and transforms to a digital format/image (Yoo et al. 2012). Where digitalization refers to a more socio-technical process, where digitized techniques are applied to broader social and institutional context |
https://en.wikipedia.org/wiki/Palm%20Foleo | The Palm Foleo was a planned subnotebook computer that was announced by mobile device manufacturer Palm Inc. on May 30, 2007, and canceled three months later. It intended to serve as a companion for smartphones including Palm's own Treo line. The device ran on the Linux operating system and featured 256 MB of flash memory and an immediate boot-up feature.
The Foleo featured wireless access via Bluetooth and Wi-Fi. Integrated software included an e-mail client which was to be capable of syncing with the Treo E-Mail client, the Opera web browser and the Documents To Go office suite. The client did not send and retrieve mail over the Wi-Fi connection, instead transmitting via synchronization with the companion smartphone.
The device was slated to launch in the U.S. in the third quarter of 2007 for a price expected by Palm to be $499 after an introductory $100 rebate. Palm canceled Foleo development on September 4, 2007, with Palm CEO Ed Colligan announcing that the company would return its focus to its core product of smartphones and handheld computers. Soon after the device was canceled, a branch of subnotebooks called netbooks, similar to the Foleo in size and functionality, reached the market. Had it been released, the Foleo would have been the founding device in the category. At the time, Palm was performing poorly in face of heavy competition in the smartphone market. The company's sales did not recover, and it was purchased by information technology giant Hewlett-Packard in April 2010.
Software
The Foleo was initially reported to run a modified Linux kernel. The kernel was reported as being version 2.6.14-rmk1-pxa1-intc2 ("rmk1" indicates this is the ARM architectural version, "pxa1" indicates it is of the PXA family of Intel/Marvell Technology Group XScale processors, "intc2" is possibly an IRQ handler). On August 7, 2007, Palm announced that it had chosen Wind River Systems to help it customize the standard Linux kernel to make it more suitable for this devi |
https://en.wikipedia.org/wiki/Discrepancy%20of%20hypergraphs | Discrepancy of hypergraphs is an area of discrepancy theory.
Definitions
In the classical setting, we aim at partitioning the vertices of a hypergraph $\mathcal{H} = (V, \mathcal{E})$ into two classes in such a way that ideally each hyperedge contains the same number of vertices in both classes. A partition into two classes can be represented by a coloring $\chi \colon V \to \{-1, +1\}$. We call −1 and +1 colors. The color-classes $\chi^{-1}(-1)$ and $\chi^{-1}(+1)$ form the corresponding partition. For a hyperedge $E \in \mathcal{E}$, set

$$\chi(E) := \sum_{v \in E} \chi(v).$$

The discrepancy of $\mathcal{H}$ with respect to $\chi$ and the discrepancy of $\mathcal{H}$ are defined by

$$\operatorname{disc}(\mathcal{H}, \chi) := \max_{E \in \mathcal{E}} |\chi(E)|, \qquad \operatorname{disc}(\mathcal{H}) := \min_{\chi \colon V \to \{-1, +1\}} \operatorname{disc}(\mathcal{H}, \chi).$$
These notions, as well as the term 'discrepancy', seem to have appeared for the first time in a paper of Beck. Earlier results on this problem include the famous lower bound on the discrepancy of arithmetic progressions by Roth, and upper bounds for this problem and other results by Erdős and Spencer and by Sárközy (described on p. 39). At that time, discrepancy problems were called quasi-Ramsey problems.
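The definitions can be checked by brute force on small instances (a sketch; the example hypergraph is our choice):

```python
from itertools import product

def discrepancy(vertices, edges):
    # disc(H) = min over colorings chi of max over edges |sum of colors in edge|
    best = float("inf")
    for colors in product((-1, 1), repeat=len(vertices)):
        chi = dict(zip(vertices, colors))
        best = min(best, max(abs(sum(chi[v] for v in e)) for e in edges))
    return best

V = [1, 2, 3, 4]
E = [{1, 2}, {2, 3}, {3, 4}, {1, 2, 3, 4}]
print(discrepancy(V, E))   # 0: color the vertices alternately along the path
```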
Examples
To get some intuition for this concept, let's have a look at a few examples.
If all edges of $\mathcal{H}$ intersect trivially, i.e. $E_1 \cap E_2 = \varnothing$ for any two distinct edges $E_1, E_2 \in \mathcal{E}$, then the discrepancy is zero if all edges have even cardinality, and one if there is an odd-cardinality edge.
The other extreme is marked by the complete hypergraph $([n], 2^{[n]})$. In this case the discrepancy is $\lceil n/2 \rceil$. Any 2-coloring will have a color class of at least this size, and this set is also an edge. On the other hand, any coloring $\chi$ with color classes of size $\lceil n/2 \rceil$ and $\lfloor n/2 \rfloor$ proves that the discrepancy is not larger than $\lceil n/2 \rceil$. It seems that the discrepancy reflects how chaotically the hyperedges of $\mathcal{H}$ intersect. Things are not that easy, however, as the following example shows.
Set $n = 4k$, $V = \{1, \ldots, 4k\}$, and $\mathcal{E} = \{E \subseteq V : |E \cap \{1, \ldots, 2k\}| = |E \cap \{2k+1, \ldots, 4k\}|\}$. In words, $\mathcal{H}$ is the hypergraph on 4k vertices {1,...,4k}, whose edges are all subsets that have the same number of elements in {1,...,2k} as in {2k+1,...,4k}. Now $\mathcal{H}$ has many complicatedly intersecting edges. However, its discrepancy is zero, since we can color {1,...,2k} in one color and {2k+1,...,4k} in another color.
The last exam |
https://en.wikipedia.org/wiki/Evolution%20of%20biological%20complexity | The evolution of biological complexity is one important outcome of the process of evolution. Evolution has produced some remarkably complex organisms – although the actual level of complexity is very hard to define or measure accurately in biology, with properties such as gene content, the number of cell types or morphology all proposed as possible metrics.
Many biologists used to believe that evolution was progressive (orthogenesis) and had a direction that led towards so-called "higher organisms", despite a lack of evidence for this viewpoint. This idea of "progression" introduced the terms "higher animals" and "lower animals" into evolution. Many now regard this as misleading, with natural selection having no intrinsic direction and organisms being selected for either increased or decreased complexity in response to local environmental conditions. Although there has been an increase in the maximum level of complexity over the history of life, there has always been a large majority of small and simple organisms, and the most common level of complexity appears to have remained relatively constant.
Selection for simplicity and complexity
Usually organisms that have a higher rate of reproduction than their competitors have an evolutionary advantage. Consequently, organisms can evolve to become simpler and thus multiply faster and produce more offspring, as they require fewer resources to reproduce. Good examples are parasites such as Plasmodium – the parasite responsible for malaria – and mycoplasma; these organisms often dispense with traits that are made unnecessary through parasitism on a host.
A lineage can also dispense with complexity when a particular complex trait merely provides no selective advantage in a particular environment. Loss of this trait need not necessarily confer a selective advantage, but may be lost due to the accumulation of mutations if its loss does not confer an immediate selective disadvantage. For example, a parasitic organism may dispense |
https://en.wikipedia.org/wiki/Cholinergic%20anti-inflammatory%20pathway | The cholinergic anti-inflammatory pathway regulates the innate immune response to injury, pathogens, and tissue ischemia. It is the efferent, or motor arm of the inflammatory reflex, the neural circuit that responds to and regulates the inflammatory response.
Regulating the immune response
In 1987, a study showed that administration of armin, an irreversible inhibitor of acetylcholinesterase, by injection 24 hours before sepsis modelling substantially reduced the lethality of experimental infection in mice. These data were later confirmed (in 1995) using cholinergic stimulation with other cholinomimetics. Inhibitors of acetylcholinesterase can increase the availability of acetylcholine and thereby activate the cholinergic anti-inflammatory pathway as well.
Tumor necrosis factors (TNF) (and other cytokines) are produced by cells of the innate immune system during local injury and infection. These contribute to initiating a cascade of mediator release and to recruiting inflammatory cells to the site of infection to contain it, referred to as "innate immunity". TNF amplifies and prolongs the inflammatory response by activating other cells to release interleukin-1 (IL-1), high mobility group B1 (HMGB1) and other cytokines. These inflammatory cytokine responses confer protective advantages to the host at the site of bacterial infection. A “beneficial” inflammatory response is limited, resolves in 48–72 hours, and does not spread systemically. The cholinergic anti-inflammatory pathway provides a braking effect on the innate immune response, which protects the body against the damage that can occur if a localized inflammatory response spreads beyond the local tissues, resulting in toxicity or damage to the kidney, liver, lungs, and other organs.
Neurophysiological and immunological mechanism
The vagus nerve is the tenth cranial nerve. It regulates heart rate, broncho-constriction, digestion, and the innate immune response. The vagus nerve in |
https://en.wikipedia.org/wiki/Common%20misunderstandings%20of%20genetics | During the latter half of the 20th century, the fields of genetics and molecular biology matured greatly, significantly increasing understanding of biological heredity. As with other complex and evolving fields of knowledge, the public awareness of these advances has primarily been through the mass media, and a number of common misunderstandings of genetics have arisen.
Genetic determinism
It is a popular misconception that all patterns of an animal's behaviour, and more generally its phenotype, are rigidly determined by its genes. Although many examples of animals exist that display certain well-defined behaviour that is genetically programmed, these examples cannot be extrapolated to all animal behaviour. There is good evidence that some basic aspects of human behaviour, such as circadian rhythms, are genetically based, but it is clear that many other aspects are not.
In the first place, much phenotypic variability does not stem from genes themselves. For example:
Epigenetic inheritance. In the widest definition this includes all biological inheritance mechanisms that do not change the DNA sequence of the genome. In a narrower definition it excludes biological phenomena such as the effects of prions and maternal antibodies which are also inherited and have clear survival implications.
Learning from experience. This feature is obviously important for humans, but there is considerable evidence of learned behaviour in other animal species (vertebrates and invertebrates). There are even reports of learned behaviour in Drosophila larvae.
A gene for X
In the early years of genetics it was suggested that there might be "a gene for" a wide range of particular characteristics. This was partly because the examples studied from Mendel onwards inevitably focused on genes whose effects could be readily identified; partly that it was easier to teach science that way; and partly because the mathematics of evolutionary dynamics is simpler if there is a simple mapping between |
https://en.wikipedia.org/wiki/Edmonds%27%20algorithm | In graph theory, Edmonds' algorithm or Chu–Liu/Edmonds' algorithm is an algorithm for finding a spanning arborescence of minimum weight (sometimes called an optimum branching).
It is the directed analog of the minimum spanning tree problem.
The algorithm was proposed independently first by Yoeng-Jin Chu and Tseng-Hong Liu (1965) and then by Jack Edmonds (1967).
Algorithm
Description
The algorithm takes as input a directed graph $D = \langle V, E \rangle$ where $V$ is the set of nodes and $E$ is the set of directed edges, a distinguished vertex $r \in V$ called the root, and a real-valued weight $w(e)$ for each edge $e \in E$.
It returns a spanning arborescence $A$ rooted at $r$ of minimum weight, where the weight of an arborescence is defined to be the sum of its edge weights, $w(A) = \sum_{e \in A} w(e)$.
The algorithm has a recursive description.
Let $f(D, r, w)$ denote the function which returns a spanning arborescence rooted at $r$ of minimum weight.
We first remove any edge from $E$ whose destination is $r$.
We may also replace any set of parallel edges (edges between the same pair of vertices in the same direction) by a single edge with weight equal to the minimum of the weights of these parallel edges.
Now, for each node $v$ other than the root, find the edge incoming to $v$ of lowest weight (with ties broken arbitrarily).
Denote the source of this edge by $\pi(v)$.
If the set of edges $P = \{(\pi(v), v) \mid v \in V \setminus \{r\}\}$ does not contain any cycles, then $f(D, r, w) = P$.
Otherwise, $P$ contains at least one cycle.
Arbitrarily choose one of these cycles and call it $C$.
We now define a new weighted directed graph $D' = \langle V', E' \rangle$ in which the cycle $C$ is "contracted" into one node as follows:
The nodes of $V'$ are the nodes of $V$ not in $C$ plus a new node denoted $v_C$.
If $(u, v)$ is an edge in $E$ with $u \notin C$ and $v \in C$ (an edge coming into the cycle), then include in $E'$ a new edge $e = (u, v_C)$, and define $w'(e) = w(u, v) - w(\pi(v), v)$.
If $(u, v)$ is an edge in $E$ with $u \in C$ and $v \notin C$ (an edge going away from the cycle), then include in $E'$ a new edge $e = (v_C, v)$, and define $w'(e) = w(u, v)$.
If $(u, v)$ is an edge in $E$ with $u \notin C$ and $v \notin C$ (an edge unrelated to the cycle), then include in $E'$ a new edge $e = (u, v)$, and define $w'(e) = w(u, v)$.
For each edge in $E'$, we remember which edge in $E$ it corresponds to.
Now |
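Below is a compact recursive sketch of the procedure just described (our own Python rendering, not from the article; it assumes edge triples (u, v, w) and that an arborescence rooted at the given root exists):

```python
def min_arborescence(nodes, edges, root):
    """Chu-Liu/Edmonds. `edges` is a list of (u, v, w) triples for u -> v.
    Returns the edge set of a minimum-weight spanning arborescence rooted
    at `root`. Sketch only: assumes every node is reachable from the root."""
    # Cheapest incoming edge per non-root node (edges into the root are ignored).
    best = {}
    for u, v, w in edges:
        if v != root and (v not in best or w < best[v][2]):
            best[v] = (u, v, w)

    # Look for a cycle among the chosen edges.
    cycle = None
    for start in best:
        path, v = [], start
        while v in best and v not in path:
            path.append(v)
            v = best[v][0]
        if v in path:
            cycle = set(path[path.index(v):])
            break
    if cycle is None:                 # no cycle: the chosen edges are optimal
        return list(best.values())

    # Contract the cycle into a super-node vc, keeping only the cheapest
    # parallel edge and remembering which edge each contracted edge came from.
    vc = ("vc", frozenset(cycle))
    contracted = {}                   # (u', v') -> (reduced weight, source edge)
    for u, v, w in edges:
        if u in cycle and v in cycle:
            continue
        nu, nv = (vc if u in cycle else u), (vc if v in cycle else v)
        nw = w - best[v][2] if v in cycle else w   # reduced weight into the cycle
        if (nu, nv) not in contracted or nw < contracted[(nu, nv)][0]:
            contracted[(nu, nv)] = (nw, (u, v, w))

    new_nodes = [n for n in nodes if n not in cycle] + [vc]
    new_edges = [(nu, nv, nw) for (nu, nv), (nw, _) in contracted.items()]
    sub = min_arborescence(new_nodes, new_edges, root)

    # Expand vc: keep every cycle edge except the one displaced by the
    # unique edge of the sub-solution that enters the cycle.
    result = [contracted[(nu, nv)][1] for nu, nv, _ in sub]
    entering = next(e for e in result if e[1] in cycle)
    return result + [best[v] for v in cycle if v != entering[1]]

edges = [("r", "a", 1), ("a", "b", 1), ("b", "a", 0.5), ("r", "b", 3)]
print(min_arborescence(["r", "a", "b"], edges, "r"))
# [('r', 'a', 1), ('a', 'b', 1)]: total weight 2, breaking the a-b cycle
```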
https://en.wikipedia.org/wiki/Semmle | Semmle Inc is a code-analysis platform; Semmle was acquired by GitHub (itself owned by Microsoft) on 18 September 2019 for an undisclosed amount. Semmle's LGTM technology automates code review, tracks developer contributions, and flags software security issues. The LGTM platform leverages the CodeQL query engine (formerly QL) to perform semantic analysis on software code bases. GitHub aims to integrate Semmle technology to provide continuous vulnerability detection services. In November 2019, use of CodeQL was made free for research and open source. CodeQL either shares a direct pedigree with .QL (dot-que-ell), which derives from the Datalog family tree, or is an evolution of similar technology.
SemmleCode is an object-oriented query language for deductive databases developed by Semmle. It is distinguished within this class by its support for recursive queries.
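To see why recursive queries matter for code analysis (an illustrative sketch in Python, not QL syntax; the call graph is hypothetical), here is a Datalog-style fixpoint computing the transitive call relation:

```python
# Base facts: direct calls u -> v
calls = {("main", "parse"), ("parse", "lex"), ("lex", "read")}

# Recursive rule: reach(a, c) :- reach(a, b), calls(b, c).  Iterate to fixpoint.
reach = set(calls)
while True:
    new = {(a, c) for a, b in reach for b2, c in calls if b == b2} - reach
    if not new:
        break
    reach |= new

print(sorted(reach))   # includes ('main', 'read'): main transitively calls read
```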
Corporate background
The company was headquartered in San Francisco, with its development operations based in Blue Boar Court, Alfred Street, central Oxford, England. Semmle's customers included Credit Suisse, NASA, and Dell.
SemmleCode background
Academic
SemmleCode builds on academic research on querying the source of software programs. The first such system was Linton's Omega system, where queries were phrased in QUEL. QUEL did not allow for recursion in queries, making it difficult to inspect hierarchical program structures such as the call graph. The next significant development was therefore the use of logic programming, which does allow such recursive queries, in the XL C++ Browser. The disadvantage of using a full logic programming language is however that it is very difficult to attain acceptable efficiency. The CodeQuest system, developed at the University of Oxford, was the first to exploit the observation that Datalog, a very restrictive version of logic programming, is in the sweet spot between expressive power and efficiency. The QL query language is an object-oriented version of Datalog.
In |
https://en.wikipedia.org/wiki/Psychogenic%20disease | Classified as a "conversion disorder" by the DSM-IV, a psychogenic disease is a disease in which mental stressors cause physical symptoms of different diseases. The manifestation of physical symptoms without biologically identifiable causes results from disruptions of processes in the brain from psychological stress. During a psychogenic disease, neuroimaging has shown that neural circuits affecting functions such as emotion, executive functioning, perception, movement, and volition are inhibited. These disruptions become strong enough to prevent the brain from voluntarily allowing certain actions (e.g. moving a limb). When the brain is unable to signal to the body to perform an action voluntarily, physical symptoms of a disease are presented even though there is no biological identifiable cause. Examples of diseases that are believed by many to be psychogenic include psychogenic seizures, psychogenic polydipsia, psychogenic tremor, and psychogenic pain.
The term psychogenic disease is often used in a similar way to psychosomatic disease. However, the term psychogenic usually implies that psychological factors played a key causal role in the development of the illness. The term psychosomatic is often used in a broader way to describe illnesses with a known medical cause where psychological factors may nonetheless play a role (e.g., asthma can be exacerbated by anxiety).
Diagnosis
With the advent of medical screening technologies, such as electroencephalography (EEG) monitoring, psychogenic diseases are becoming much more common as medical professionals have increasingly precise tools to monitor patients. When a patient does not display typical markers of a disorder that could show up from medical exams, physicians typically diagnose a patient's symptoms as being psychogenic. Research into understanding psychogenic disorders has led to the development of both electronic diagnostic tests for ruling out the usual biological markers of a disorder and new clinical obs |
https://en.wikipedia.org/wiki/Split%20%28phylogenetics%29 | A split in phylogenetics is a bipartition of a set of taxa, and the smallest unit of information in unrooted phylogenetic trees: each edge of an unrooted phylogenetic tree represents one split, and the tree can be efficiently reconstructed from its set of splits. Moreover, when given several trees, the splits occurring in more than half of these trees give rise to a consensus tree, and the splits occurring in a smaller fraction of the trees generally give rise to a consensus Split Network.
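A small sketch of the edge-to-split correspondence (a toy unrooted tree with leaves A-D and internal nodes u, v; all names are hypothetical):

```python
from collections import defaultdict

adj = defaultdict(set)
for a, b in [("A", "u"), ("B", "u"), ("u", "v"), ("C", "v"), ("D", "v")]:
    adj[a].add(b)
    adj[b].add(a)
leaves = {n for n in adj if len(adj[n]) == 1}

def split_side(start, removed):
    # Leaves still reachable from `start` once the edge `removed` is deleted
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(m for m in adj[n] if {n, m} != set(removed))
    return seen & leaves

for a in adj:
    for b in adj[a]:
        if a < b:   # visit each undirected edge once
            s = split_side(a, (a, b))
            print(sorted(s), "|", sorted(leaves - s))
# The internal edge (u, v) yields the split ['A', 'B'] | ['C', 'D']
```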
See also
SplitsTree, a program for inferring phylogenetic (split) networks. |
https://en.wikipedia.org/wiki/Thymus%20citriodorus | Thymus citriodorus, the lemon thyme or citrus thyme, is a lemon-scented evergreen mat-forming perennial plant in the family Lamiaceae. There has been a great amount of confusion over the plant's correct name and origin. Recent DNA analysis suggests that it is not a hybrid or cross but a distinct species, as it was first described in 1811; yet an analysis in a different study clustered Thymus citriodorus together with Thymus vulgaris, which is considered one of its parent species (see below).
T. citriodorus is an evergreen sub-shrub, growing to in height by in spread. It prefers full sun and well draining soil. The bloom period is mid to late summer, with pink to lavender flowers that are a nectar source for bees and butterflies.
Uses
Thymus citriodorus and its cultivars are grown as ornamentals, culinary herbs, and medicinal plants. In landscaping, the plants are often used as groundcovers or for planting in beds, between stepping stones, and in containers. In xeriscaping it is useful in hot, arid regions. The plant is drought-tolerant once established. As nectar-producing plants, they are cultivated in bee and butterfly gardens.
The leaves are eaten raw in salads or used as a fresh or dried flavoring herb in cooking and for herbal teas. Other uses include essential oil, folk remedies, antiseptics, respiratory aids, aromatherapy, deodorants, perfumes, skincare and cosmetics.
Distribution
There is also no unanimity regarding its origin, as some authors say that Thymus citriodorus has no natural distribution, while others refer to it as native to Southern Europe and that it is widely cultivated in the Mediterranean region.
Taxonomy and synonyms
Thymus citriodorus has had many different names over time, including Thymus × citriodorus, Thymus fragrantissimus, Thymus serpyllum citratus, Thymus serpyllum citriodorum, and more. It was also believed at one time that the plant was a hybrid of European garden origin, between Thymus pulegioides and Thymus vulgaris. |
https://en.wikipedia.org/wiki/N%27-Formylkynurenine | {{DISPLAYTITLE:N-Formylkynurenine}}-Formylkynurenine''' is an intermediate in the catabolism of tryptophan. It is a formylated derivative of kynurenine. The formation of ''-formylkynurenine is catalyzed by heme dioxygenases.
See also
Indoleamine 2,3-dioxygenase |
https://en.wikipedia.org/wiki/Polariton%20superfluid | Polariton superfluid is predicted to be a state of the exciton-polaritons system that combines the characteristics of lasers with those of excellent electrical conductors. Researchers look for this state in a solid state optical microcavity coupled with quantum well excitons. The idea is to create an ensemble of particles known as exciton-polaritons and trap them.
Wave behavior in this state results in a light beam similar to that from a laser but possibly more energy efficient.
Unlike traditional superfluids that need temperatures of approximately 4 K, the polariton superfluid could in principle be stable at much higher temperatures, and might soon be demonstrable at room temperature. Evidence for polariton superfluidity was reported by Alberto Amo and coworkers, based on the suppressed scattering of the polaritons during their motion.
Although several other researchers are working in the same field, the terminology and conclusions are not completely shared by the different groups. In particular, important properties of superfluids, such as zero viscosity, and of lasers, such as perfect optical coherence, are a matter of debate. There is, however, a clear indication of quantized vortices when the pump beam has orbital angular momentum.
Furthermore, clear evidence has been demonstrated also for superfluid motion of polaritons, in terms of the Landau criterion and the suppression of scattering from defects when the flow velocity is slower than the speed of sound in the fluid.
The same phenomena have been demonstrated in an organic exciton polariton fluid, representing the first achievement of room-temperature superfluidity of a hybrid fluid of photons and excitons.
See also
Bose–Einstein condensation of polaritons |
https://en.wikipedia.org/wiki/Usa%20Marine%20Biological%20Institute | The Usa Marine Biological Institute (UMBI) (sometimes referred to as MBI-Japan, Japanese Marine Biological Institute, Usa Kaiyo Center or just Usa) is one of the oldest and largest centers for phycology, marine biology research, graduate training, and public service in Japan. It is devoted to scientific research leading to MS and PhD degrees in phycology, marine biology and related fields. It grants degrees jointly with Kochi University.
UMBI is located in the village of Usa cho, Kōchi Prefecture, Japan.
History
The Usa Marine Biological Station was founded in 1953 as an independent research institute by the Japanese Government. In 1978, its name was changed to Usa Marine Biological Institute.
Under the directorship of Professor Masao Ohno, the institute established a Japan International Cooperation Agency (JICA) training program in marine biology; since then, a large number of foreign researchers have come to the institute to pursue short-term research projects. The current director, Professor Izumi Kinoshita, supervises and coordinates the JICA training program.
In 2004, UMBI started a new graduate program, Kuroshio Sciences, jointly with Kochi University, to study the Kuroshio Current from an interdisciplinary perspective.
UMBI graduate students are supported by various financial aid schemes, especially the Monbukagakusho MEXT International PhD Program.
Vessels
UMBI operates several manned research vessels and vehicles, owned by Kochi University or the Japanese Government:
R/V Yutaka Hata Maru
R/V Neptune
R/V Hamayu
R/V Triton
Laboratories
Early life-history of fishes
Zooplankton Ecology
Crustacean Ecology
Marine Phycology
Phycological research
Usa Marine Biological Institute is renowned for marine phycological research. Emeritus Professor Masao Ohno was the first person in Japan to use an artificial seeding method for the commercial cultivation of green algae. The institute is one of the pioneer research institutes in the world for the study of U |
https://en.wikipedia.org/wiki/Activation%20product | An activation product is a material that has been made radioactive by the process of neutron activation.
Fission products and actinides produced by neutron absorption of nuclear fuel itself are normally referred to by those specific names, and activation product is reserved for products of neutron capture by other materials, such as structural components of the nuclear reactor or nuclear bomb, the reactor coolant, control rods or other neutron poisons, or materials in the environment. All of these, however, need to be handled as radioactive waste. Some nuclides originate in more than one way, as activation products or fission products.
Activation products in a reactor's primary coolant loop are a main reason reactors use a chain of two or even three coolant loops linked by heat exchangers.
Fusion reactors will not produce radioactive waste from the fusion product nuclei themselves, which are normally just helium-4, but generate high neutron fluxes, so activation products are a particular concern.
https://en.wikipedia.org/wiki/Temasek%20Life%20Sciences%20Laboratory | Temasek Life Sciences Laboratory (TLL) was established in August 2002 as a Singapore non-profit philanthropic research organisation focusing primarily on understanding the cellular mechanisms that underlie the development and physiology of plants, fungi and animals, which provides a foundation for biotechnology innovation.
It is affiliated with the National University of Singapore and the Nanyang Technological University and is located within the campus of the National University of Singapore.
TLL has 230 researchers of about 20 different nationalities engaged in biomolecular science research and applications.
History
Temasek Life Sciences Laboratory (TLL) is a beneficiary of Temasek Trust, which oversees the initial endowment of S$500 million by Temasek to support corporate social responsibility and philanthropic efforts in developing and delivering community programmes.
Temasek Life Sciences Laboratory and Temasek
Temasek Life Sciences Laboratory (TLL) was founded in 2002 and funded by Temasek Trust, the philanthropic arm of Temasek.
Academic Programmes
TLL offers various academic programmes at the tertiary level and is affiliated with the National University of Singapore and the Nanyang Technological University.
PhD/Graduate Programme
Temasek Life Sciences Laboratory (TLL) offers an intensive PhD programme in Singapore that fosters productive scientific interactions between students, postdoctoral fellows, and PIs. Past candidates have had their work published in prestigious research journals and travelled widely to present their findings at international conferences.
Internship Programmes
Research Attachment Programme (REAP)
The Research Attachment Programme (REAP) is jointly organised by the Ministry of Education (MOE), National University of Singapore (NUS) and TLL to groom local life sciences research talents.
The eight-week programme is designed for first-year Biology and Chemistry students in local junior colleges (JCs) to encourage these b |
https://en.wikipedia.org/wiki/Phytotope | Phytotope is the total habitat available for colonisation within any certain ecotope or biotope by plants and fungi. The community of plants and fungi so established constitutes the phytocoenosis of that ecotope.
All these words (ecotope, biotope, phytotope and others) describe environmental niches at very small scales of consideration. A suburban garden or village park or wilderness ravine would each be deserving of the label. |
https://en.wikipedia.org/wiki/Zootope | Zootope is the total habitat available for colonisation within any certain ecotope or biotope by animal life. The community of animals so established constitutes the zoocoenosis of that ecotope.
All these words (ecotope, biotope, zootope and others) describe environmental niches at very small scales of consideration. The rabbits and squirrels and mosquitoes of any suburban garden or village park, or the deer and wolves and birds of a wilderness ravine would each be deserving of the label. |
https://en.wikipedia.org/wiki/Physiotope | Physiotope is the total abiotic matrix of habitat present within any certain ecotope. The physiotope is the landform, the rocks and the soils, the climate and the hydrology, and the geologic processes which marshalled all these resources together in a certain way and in this time and place.
See also
Ecological land classification |
https://en.wikipedia.org/wiki/Geotope | A geotope is the geological component of the abiotic matrix present in an ecotope. Example geotopes might be an exposed outcrop of rocks, an erratic boulder, a grotto or ravine, a cave, an old stone wall marking a property boundary, and so forth.
It is a loanword from German (Geotop) in the study of ecology and might be the model for many other similar words coined by analogy. As the prototype, it has enjoyed wider currency than many of the other words modelled on it, including physiotope, with which it is used synonymously. But the geotope is properly the rocks and not the whole lay of the land (which would be the physiotope).
For usage in the context of geoheritage, as in Friedrich Wiedenbein's contributions and in the German discussion on geoheritage, the more adequate term (and translation from the German) is geosite.
See also
Ecological land classification |
https://en.wikipedia.org/wiki/Linnar%20Viik | Linnar Viik (born 26 February 1965 in Tallinn) is an Estonian information technology scientist, entrepreneur and IT visionary.
Currently he is a visiting lecturer at University of Tartu, Estonian Academy of Arts and Tallinn University, Partner and Member of the Board of Mobi Solutions and Chairman of the Supervisory Board of EIT Digital.
As founder and Programme Director at the Estonian e-Governance Academy, he has advised more than 40 governments on their digital strategy, digital capacities and digital transformation roadmaps. He was a member of the Research and Development Council of Estonia from 2001 to 2017 and a member of the e-Estonia Council from 1996 to 2021.
He is also a member of the Supervisory Board of SEI Tallinn and a member of the Advisory Board of the Lisbon Council.
Linnar Viik has been a member of the board of, and a lecturer at, the Estonian IT College since 2000, and was appointed its Acting Rector in 2010. He was a founding member of the European Institute of Innovation and Technology Governing Board, a member of the Advisory Board of the Nordic Investment Bank, and Chairman of the Board of the Open Estonia Foundation.
He is a founder and board member of several mobile communications, broadband and software companies, and a former advisor to the Prime Minister of Estonia on ICT, innovation, R&D and civil society issues.
Earlier positions include advisor at the United Nations Development Programme and councillor at the Stockholm Environment Institute.
Linnar Viik has written over 120 articles and 10 reports, mostly on the topics of the knowledge-based economy and the implications of the information society. He was also instrumental in the rapid development of Estonia's computer and network infrastructure and in the Estonian Internet voting and eSignature projects. |
https://en.wikipedia.org/wiki/Intelligence%20Advanced%20Research%20Projects%20Activity | The Intelligence Advanced Research Projects Activity (IARPA) is an organization within the Office of the Director of National Intelligence responsible for leading research to overcome difficult challenges relevant to the United States Intelligence Community. IARPA characterizes its mission as follows: "To envision and lead high-risk, high-payoff research that delivers innovative technology for future overwhelming intelligence advantage."
IARPA funds academic and industry research across a broad range of technical areas, including mathematics, computer science, physics, chemistry, biology, neuroscience, linguistics, political science, and cognitive psychology. Most IARPA research is unclassified and openly published. IARPA transfers successful research results and technologies to other government agencies. Notable IARPA investments include quantum computing, superconducting computing, machine learning, and forecasting tournaments.
Mission
IARPA characterizes its mission as "to envision and lead high-risk, high-payoff research that delivers innovative technology for future overwhelming intelligence advantage".
History
In 1958, the first Advanced Research Projects Agency, or ARPA, was created in response to an unanticipated surprise—the Soviet Union's successful launch of Sputnik on October 4, 1957. The ARPA model was designed to anticipate and pre-empt technological surprise. As then-Secretary of Defense Neil McElroy said, "I want an agency that makes sure no important thing remains undone because it doesn’t fit somebody's mission." The ARPA model has been characterized by ambitious technical goals, competitively awarded research led by term-limited staff, and independent testing and evaluation.
Authorized by the ODNI in 2006, IARPA was modeled after DARPA but focused on national intelligence needs, rather than military needs. The agency was a consolidation of the National Security Agency's Disruptive Technology Office, the National Geospatial-Intelligence Age |
https://en.wikipedia.org/wiki/Cell%20lists | Cell lists (also sometimes referred to as cell linked-lists) is a data structure in molecular dynamics simulations to find all atom pairs within a given cut-off distance of each other. These pairs are needed to compute the short-range non-bonded interactions in a system, such as Van der Waals forces or the short-range part of the electrostatic interaction when using Ewald summation.
Algorithm
Cell lists work by subdividing the simulation domain into cells with an edge length greater than or equal to the cut-off radius of the interaction to be computed. The particles are sorted into these cells and the interactions are computed between particles in the same or neighbouring cells.
In its most basic form, the non-bonded interactions for a cut-off distance r_c are computed as follows:
for all neighbouring cell pairs (C_a, C_b) do
   for all p_a ∈ C_a do
      for all p_b ∈ C_b do
         if ‖p_a − p_b‖ ≤ r_c then
            Compute the interaction between p_a and p_b.
         end if
      end for
   end for
end for
Since the cell edge length is at least r_c in all dimensions, no particles within r_c of each other can be missed.
Given a simulation with N particles with a homogeneous particle density, the number of cells m is proportional to N and inversely proportional to the cut-off radius (i.e. if N increases, so does the number of cells). The average number of particles per cell, N/m, therefore does not depend on the total number of particles. The cost of interacting two cells is then in O(1). The number of cell pairs is proportional to the number of cells, which is again proportional to the number of particles N. The total cost of finding all pairwise distances within a given cut-off is in O(N), which is significantly better than the O(N²) cost of computing the pairwise distances naively.
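As a rough illustration of the algorithm above, the following Python sketch (assuming a cubic box, no periodic boundary conditions, and illustrative parameters) sorts particles into cells and then scans only neighbouring cells:

```python
import itertools
import numpy as np

def cell_list_pairs(positions, box, r_cut):
    """Find all particle pairs closer than r_cut using a cell list.

    positions : (N, 3) array of coordinates in [0, box)
    box       : edge length of the cubic simulation domain
    r_cut     : interaction cut-off radius
    """
    n_side = max(1, int(box // r_cut))      # cells per dimension
    cell_len = box / n_side                 # cell edge >= r_cut
    cells = {}
    for i, p in enumerate(positions):       # sort particles into cells
        idx = tuple((p // cell_len).astype(int))
        cells.setdefault(idx, []).append(i)

    pairs = []
    for idx, members in cells.items():
        # scan this cell and its (up to) 26 neighbours; no wrapping,
        # so this sketch ignores periodic boundaries
        for off in itertools.product((-1, 0, 1), repeat=3):
            nb = tuple(i + o for i, o in zip(idx, off))
            for a in members:
                for b in cells.get(nb, []):
                    # a < b counts each qualifying pair exactly once
                    if a < b and np.linalg.norm(positions[a] - positions[b]) < r_cut:
                        pairs.append((a, b))
    return pairs

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(100, 3))
print(len(cell_list_pairs(pts, box=10.0, r_cut=2.5)), "pairs within cut-off")
```

Because each particle is binned in O(1) and only a constant number of neighbouring cells is scanned per cell, the total work grows linearly with N, as argued above.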
Periodic boundary conditions
In most simulations, periodic boundary conditions are used to avoid imposing artificial boundary conditions. Using cell lists, these boundaries can be implemented in two ways.
Ghost cells
In the ghost cells approach, the simulation box is wrapped in an additional layer of cells. These cells c |
https://en.wikipedia.org/wiki/Verlet%20list | A Verlet list (named after Loup Verlet) is a data structure in molecular dynamics simulations to efficiently maintain a list of all particles within a given cut-off distance of each other.
This method may easily be applied to Monte Carlo simulations. For short-range interactions, a cut-off radius is typically used, beyond which particle interactions are considered "close enough" to zero to be safely ignored. For each particle, a Verlet list is constructed that lists all other particles within the potential cut-off distance, plus some extra distance so that the list may be used for several consecutive Monte Carlo "sweeps" (set of Monte Carlo steps or moves) before being updated. If we wish to use the same Verlet list n times before updating, then the cut-off distance for inclusion in the Verlet list should be r_c + 2nd, where r_c is the cut-off distance of the potential, and d is the maximum Monte Carlo step (move) of a single particle. Thus, we will spend of order N² time to compute the Verlet lists (N is the total number of particles), but are rewarded with n Monte Carlo "sweeps" of order N instead of N². By optimizing our choice of n, it can be shown that Verlet lists allow converting the O(N²) problem of Monte Carlo sweeps to an O(N^(3/2)) problem.
Using cell lists to identify the nearest neighbors in O(N) further reduces the computational cost.
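A minimal Python sketch of the list construction (the skin parameter plays the role of the 2nd margin above; the plain O(N²) build loop and all values are illustrative):

```python
import numpy as np

def build_verlet_lists(positions, r_cut, skin):
    """Build Verlet neighbour lists with a plain O(N^2) distance scan.

    positions : (N, d) array of particle coordinates
    r_cut     : cut-off radius of the potential
    skin      : extra margin, playing the role of 2*n*d in the text
    """
    r_list = r_cut + skin
    n = len(positions)
    neighbours = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < r_list:
                neighbours[i].append(j)
                neighbours[j].append(i)
    return neighbours

pts = np.random.default_rng(1).uniform(0.0, 5.0, size=(50, 3))
nl = build_verlet_lists(pts, r_cut=1.0, skin=0.3)
print(sum(map(len, nl)) // 2, "pairs currently in the lists")
```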
See also
Verlet integration
Fast multipole method
Molecular mechanics
Software for molecular mechanics modeling |
https://en.wikipedia.org/wiki/Connection%20string | In computing, a connection string is a string that specifies information about a data source and the means of connecting to it. It is passed in code to an underlying driver or provider in order to initiate the connection. Whilst commonly used for a database connection, the data source could also be a spreadsheet or text file.
The connection string may include attributes such as the name of the driver, server and database, as well as security information such as user name and password.
Examples
This example shows a Postgres connection string for connecting to wikipedia.com with SSL and a connection timeout of 180 seconds:
DRIVER={PostgreSQL Unicode};SERVER=www.wikipedia.com;SSL=true;SSLMode=require;DATABASE=wiki;UID=wikiuser;Connect Timeout=180;PWD=ashiknoor
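Connection strings of the key=value;... form shown above are straightforward to assemble programmatically. A hedged Python sketch (the brace-quoting rule mirrors a common ODBC convention; the keys and values are illustrative):

```python
def build_connection_string(**params):
    """Assemble a key=value;... connection string from keyword arguments.

    Values containing ';' or '=' are wrapped in braces, mirroring a
    common ODBC quoting convention; keys and values are illustrative.
    """
    parts = []
    for key, value in params.items():
        text = str(value)
        if any(ch in text for ch in ";="):
            text = "{" + text + "}"
        parts.append(f"{key}={text}")
    return ";".join(parts)

print(build_connection_string(
    DRIVER="{PostgreSQL Unicode}",
    SERVER="www.wikipedia.com",
    DATABASE="wiki",
    UID="wikiuser",
    PWD="ashiknoor",
))
```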
Users of Oracle databases can specify connection strings:
on the command line (as in: sqlplus scott/tiger@connection_string )
via environment variables ($TWO_TASK in Unix-like environments; %TWO_TASK% in Microsoft Windows environments)
in local configuration files (such as the default $ORACLE_HOME/network/admin/tnsnames.ora)
in LDAP-capable directory services |
https://en.wikipedia.org/wiki/Probabilistic%20automaton | In mathematics and computer science, the probabilistic automaton (PA) is a generalization of the nondeterministic finite automaton; it includes the probability of a given transition into the transition function, turning it into a transition matrix. Thus, the probabilistic automaton also generalizes the concepts of a Markov chain and of a subshift of finite type. The languages recognized by probabilistic automata are called stochastic languages; these include the regular languages as a subset. The number of stochastic languages is uncountable.
The concept was introduced by Michael O. Rabin in 1963; a certain special case is sometimes known as the Rabin automaton (not to be confused with the subclass of ω-automata also referred to as Rabin automata). In recent years, a variant has been formulated in terms of quantum probabilities, the quantum finite automaton.
Informal Description
For a given initial state and input character, a deterministic finite automaton (DFA) has exactly one next state, and a nondeterministic finite automaton (NFA) has a set of next states. A probabilistic automaton (PA) instead has a weighted set (or vector) of next states, where the weights must sum to 1 and therefore can be interpreted as probabilities (making it a stochastic vector). The notions of state and acceptance must also be modified to reflect the introduction of these weights. The state of the machine at a given step must now be represented by a stochastic vector of states, and a word is accepted if the machine's total probability of being in an acceptance state exceeds some cut-off.
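A small Python sketch may help: one row-stochastic matrix per input symbol, a stochastic start vector, and a cut-point acceptance test (the matrices, vectors and cut-point below are invented for illustration):

```python
import numpy as np

# One row-stochastic matrix per input symbol; values are illustrative.
transitions = {
    "a": np.array([[0.5, 0.5],
                   [0.0, 1.0]]),
    "b": np.array([[1.0, 0.0],
                   [0.3, 0.7]]),
}
initial = np.array([1.0, 0.0])    # start surely in state 0
accepting = np.array([0.0, 1.0])  # state 1 is the accepting state

def acceptance_probability(word):
    """Propagate the state distribution through one matrix per symbol."""
    state = initial
    for symbol in word:
        state = state @ transitions[symbol]
    return float(state @ accepting)

cut_point = 0.5
for w in ["a", "ab", "aab"]:
    p = acceptance_probability(w)
    print(w, round(p, 3), "accepted" if p > cut_point else "rejected")
```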
A PA is in some sense a half-way step from deterministic to non-deterministic, as it allows a set of next states but with restrictions on their weights. However, this is somewhat misleading, as the PA utilizes the notion of the real numbers to define the weights, which is absent in the definition of both DFAs and NFAs. This additional freedom enables them to decide languages that are not regular, such as the |
https://en.wikipedia.org/wiki/V-model%20%28software%20development%29 | In software development, the V-model represents a development process that may be considered an extension of the waterfall model, and is an example of the more general V-model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.
Project definition phases
Requirements analysis
In the requirements analysis phase, the first step in the verification process, the requirements of the system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform. However it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated.
The user requirements document will typically describe the system's functional, interface, performance, data, security, etc. requirements as expected by the user. It is used by business analysts to communicate their understanding of the system to the users. The users carefully review this document as this document would serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase. See also Functional requirements.
There are different methods for gathering requirements from both soft and hard methodologies, including: interviews, questionnaires, document analysis, observation, throw-away prototypes, use cases, and static and dynamic views with users.
System design
Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and |
https://en.wikipedia.org/wiki/Paraneoplastic%20syndrome | A paraneoplastic syndrome is a syndrome (a set of signs and symptoms) that is the consequence of a tumor in the body (usually a cancerous one). It is specifically due to the production of chemical signaling molecules (such as hormones or cytokines) by tumor cells or by an immune response against the tumor. Unlike a mass effect, it is not due to the local presence of cancer cells.
Paraneoplastic syndromes are typical among middle-aged to older people, and they most commonly occur with cancers of the lung, breast, ovaries or lymphatic system (a lymphoma). Sometimes, the symptoms of paraneoplastic syndromes show before the diagnosis of a malignancy, which has been hypothesized to relate to the disease pathogenesis. In this paradigm, tumor cells express tissue-restricted antigens (e.g., neuronal proteins), triggering an anti-tumor immune response which may be partially or, rarely, completely effective in suppressing tumor growth and symptoms. Patients then come to clinical attention when this tumor immune response breaks immune tolerance and begins to attack the normal tissue expressing that (e.g., neuronal) protein.
The abbreviation PNS is sometimes used for paraneoplastic syndrome, although it is used more often to refer to the peripheral nervous system.
Signs and symptoms
Symptomatic features of paraneoplastic syndrome develop in four ways: endocrine, neurological, mucocutaneous, and hematological. The most common presentation is a fever (release of endogenous pyrogens often related to lymphokines or tissue pyrogens), but the overall clinical picture may mimic that of more common benign conditions.
Endocrine
The following diseases manifest by means of endocrine dysfunction: Cushing syndrome, syndrome of inappropriate antidiuretic hormone, hypercalcemia, hypoglycemia, carcinoid syndrome, and hyperaldosteronism.
Neurological
The following diseases manifest by means of neurological dysfunction: Lambert–Ea |
https://en.wikipedia.org/wiki/Lema%C3%AEtre%E2%80%93Tolman%20metric | In physics, the Lemaître–Tolman metric, also known as the Lemaître–Tolman–Bondi metric or the Tolman metric, is a Lorentzian metric based on an exact solution of Einstein's field equations; it describes an isotropic and expanding (or contracting) universe which is not homogeneous, and is thus used in cosmology as an alternative to the standard Friedmann–Lemaître–Robertson–Walker metric to model the expansion of the universe. It has also been used to model a universe which has a fractal distribution of matter to explain the accelerating expansion of the universe. It was first found by Georges Lemaître in 1933 and Richard Tolman in 1934 and later investigated by Hermann Bondi in 1947.
Details
In a synchronous reference system where g₀₀ = 1 and g₀α = 0, the time coordinate x⁰ = τ (we set G = c = 1) is also the proper time and clocks at all points can be synchronized. For a dust-like medium where the pressure is zero, dust particles move freely, i.e., along the geodesics, and thus the synchronous frame is also a comoving frame wherein the components of four-velocity are uⁱ = (1, 0, 0, 0). The solution of the field equations yields

ds² = dτ² − [r′²/(1 + f(R))] dR² − r²(τ, R)(dθ² + sin²θ dφ²),

where r = r(τ, R) is the radius or luminosity distance in the sense that the surface area of a sphere with radius r is 4πr², R is just interpreted as the Lagrangian coordinate, a prime denotes ∂/∂R, and

(∂r/∂τ)² = f(R) + F(R)/r,   4πρ = F′(R)/(2 r′ r²),

subjected to the conditions 1 + f > 0 and F > 0, where f(R) and F(R) are arbitrary functions and ρ is the matter density. We can also assume F′ > 0 and r′ > 0, which excludes cases resulting in crossing of material particles during their motion. To each particle there corresponds a value of R; the function r(τ, R) and its time derivative respectively provide its law of motion and radial velocity. An interesting property of the solution described above is that when f(R) and F(R) are plotted as functions of R, the form of these functions plotted for the range 0 ≤ R ≤ R₀ is independent of how these functions will be plotted for R > R₀. This prediction is evidently similar to the Newtonian theory. The total mass within the sphere is given by

m = 4π ∫₀ᴿ ρ r² r′ dR = F(R)/2,
which implies that Schwarzschild radius is given b |
https://en.wikipedia.org/wiki/Navarro%E2%80%93Frenk%E2%80%93White%20profile | The Navarro–Frenk–White (NFW) profile is a spatial mass distribution of dark matter fitted to dark matter halos identified in N-body simulations by Julio Navarro, Carlos Frenk and Simon White. The NFW profile is one of the most commonly used model profiles for dark matter halos.
Density distribution
In the NFW profile, the density of dark matter as a function of radius is given by:

ρ(r) = ρ₀ / [ (r/Rs)(1 + r/Rs)² ]
where ρ0 and the "scale radius", Rs, are parameters which vary from halo to halo.
The integrated mass within some radius Rmax is

M = 4π ∫₀^Rmax ρ(r) r² dr = 4πρ₀Rs³ [ ln((Rs + Rmax)/Rs) − Rmax/(Rs + Rmax) ].
The total mass is divergent, but it is often useful to take the edge of the halo to be the virial radius, Rvir, which is related to the "concentration parameter", c, and scale radius via

c = Rvir / Rs.
(Alternatively, one can define a radius at which the average density within this radius is Δ times the critical or mean density of the universe, resulting in a similar relation: R_Δ = c_Δ Rs. The virial radius will lie around R₂₀₀, though values of Δ = 500 are often used in X-ray astronomy, for example, due to higher concentrations.)
The total mass in the halo within Rvir is

M = 4πρ₀Rs³ [ ln(1 + c) − c/(1 + c) ].
The specific value of c is roughly 10 or 15 for the Milky Way, and may range from 4 to 40 for halos of various sizes.
This can then be used to define a dark matter halo in terms of its mean density, solving the above equation for ρ₀ and substituting it into the original equation. This gives

ρ(r) = (ρ_halo / 3) · c³ / [ ln(1 + c) − c/(1 + c) ] · 1 / [ (cx)(1 + cx)² ],

where
ρ_halo = 3M / (4πRvir³) is the mean density of the halo,
ln(1 + c) − c/(1 + c) is from the mass calculation, and
x = r/Rvir is the fractional distance to the virial radius.
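For a numerical feel, here is a short Python sketch evaluating the density and enclosed-mass formulas above (the parameter values are placeholders, not fitted to any real halo):

```python
import numpy as np

def nfw_density(r, rho0, r_s):
    """NFW density profile rho(r) = rho0 / ((r/r_s) * (1 + r/r_s)**2)."""
    x = r / r_s
    return rho0 / (x * (1.0 + x) ** 2)

def nfw_enclosed_mass(r, rho0, r_s):
    """Mass inside radius r, from the analytic integral of the profile."""
    x = r / r_s
    return 4.0 * np.pi * rho0 * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

# Illustrative numbers only (rough Milky-Way-like scales, c ~ 10):
r_s, c = 20.0, 10.0          # kpc, concentration
r_vir = c * r_s              # kpc
rho0 = 1.0e7                 # Msun / kpc^3, placeholder normalisation
print(nfw_enclosed_mass(r_vir, rho0, r_s) / 1e12, "x 1e12 Msun inside R_vir")
```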
Higher order moments
The integral of the squared density is

∫₀^Rmax ρ²(r) 4πr² dr = (4π/3) ρ₀² Rs³ [ 1 − (1 + Rmax/Rs)⁻³ ],

so that the mean squared density inside of Rmax is

⟨ρ²⟩ = ρ₀² (Rs/Rmax)³ [ 1 − (1 + Rmax/Rs)⁻³ ],

which for the virial radius simplifies to

⟨ρ²⟩ = (ρ₀²/c³) [ 1 − (1 + c)⁻³ ],

and the mean squared density inside the scale radius is simply

⟨ρ²⟩ = (7/8) ρ₀².
Gravitational potential
Solving Poisson's equation gives the gravitational potential

Φ(r) = − (4πGρ₀Rs³ / r) ln(1 + r/Rs),

with the limits lim_(r→∞) Φ = 0 and lim_(r→0) Φ = −4πGρ₀Rs².
The acceleration due to the NFW potential is:

a_r = − dΦ/dr = (4πGρ₀Rs³ / r²) [ x/(1 + x) − ln(1 + x) ],

where x = r/Rs.
Radius of the maximum circular velocity
The radius of the maximum circular velocity (confusingly sometime |
https://en.wikipedia.org/wiki/ISO/IEC%2042010 | ISO/IEC/IEEE 42010 Systems and software engineering — Architecture description is an international standard for architecture descriptions of systems and software.
Overview
ISO/IEC/IEEE 42010:2011 defines requirements on the description of system, software and enterprise architectures. It aims to standardise the practice of architecture description by defining standard terms, presenting a conceptual foundation for expressing, communicating and reviewing architectures and specifying requirements that apply to architecture descriptions, architecture frameworks and architecture description languages.
Following its predecessor, IEEE 1471, the standard makes a strict distinction between architectures and architecture descriptions.
The description of ISO/IEC/IEEE 42010 in this article is based upon the standard published in 2011. That standard was withdrawn and replaced by ISO/IEC/IEEE 42010:2022 in November 2022 (https://www.iso.org/standard/74393.html).
History of ISO/IEC/IEEE 42010
The origin of the standard was the fast track international standardization of IEEE 1471:2000. The standard was originally balloted as ISO/IEC DIS 25961. It was subsequently adopted and published as ISO/IEC 42010:2007 which was identical with IEEE 1471:2000.
In 2006, ISO/IEC JTC1/SC7 WG 42 and IEEE Computer Society launched a coordinated revision of this standard to address: harmonization with ISO/IEC 12207 and ISO/IEC 15288; alignment with other ISO/IEC architecture standards (e.g. ISO/IEC 10746 Reference Model Open Distributed Processing); the specification of architecture frameworks and architecture description languages; architecture decision capture; and correspondences for model and view consistency.
In July 2011, the Final Draft International Standard was balloted and approved (21-0) by ISO member bodies. The corresponding IEEE version, P42010/D9, was approved as a revised standard by the IEEE-SA Standards Board on 31 October 2011. ISO/IEC/IEEE 42010:2011 was published by ISO |
https://en.wikipedia.org/wiki/Adaptor%20hypothesis | The adaptor hypothesis is a theoretical scheme in molecular biology to explain how information encoded in the nucleic acid sequences of messenger RNA (mRNA) is used to specify the amino acids that make up proteins during the process of translation. It was formulated by Francis Crick in 1955 in an informal publication of the RNA Tie Club, and later elaborated in 1957 along with the central dogma of molecular biology and the sequence hypothesis. It was formally published as an article "On protein synthesis" in 1958. The name "adaptor hypothesis" was given by Sydney Brenner.
Crick postulated that there must exist a small molecule to precisely recognise and bind the mRNA sequences while amino acids are being synthesised. The hypothetical adaptor molecule was later established to be a hitherto unknown nucleic acid, transfer RNA (tRNA).
Development
In 1953, English biophysicist Francis Crick and American biologist James Watson, working together at the Cavendish Laboratory of the University of Cambridge, worked out the correct description of the structure of DNA, one of the major genetic materials. In their follow-up paper the same year, they introduced the concept of genetic information alongside the notion that DNA and protein could be related. By 1954, it was coming to be understood that DNA, RNA (only messenger RNA was involved at the time, understood merely as a vague nucleic acid and identified as such only in 1960) and proteins were related as components of the same genetic information pathway. However, the structure of RNA and the details of how these biological molecules relate to and interact with each other were still a mystery, especially how proteins could be synthesised from nucleic acids. Watson called this problem "the mysteries of life" in his letter to Crick. Watson and Alexander Rich wrote in PNAS, "We shall not be able to check a structural relationship between RNA and protein synthesis or between RNA and DNA until we know the structure of R |
https://en.wikipedia.org/wiki/YProxy | yProxy is a Network News Transfer Protocol (NNTP) proxy server for the Windows operating system. yProxy's main function is to convert yEnc-encoded attachments to UUE-encoded attachments on the fly. The main purpose of this is to add functionality to NNTP newsreaders that do not have native support for yEnc.
The inventor of yEnc recommends yProxy for use by Windows users whose newsreaders do not support yEnc decoding.
yProxy comes in two varieties:
yProxy
yProxy Pro
yProxy
The latest free version of yProxy is version 1.3.
History of yProxy
yEnc, an 8-bit encoding of binary data, was released in 2001, and almost immediately the most popular utility for decoding it became yEnc32. yEnc32 was an early provider of yEnc decoding but, while flexible through its user interface, requires manual steps to decode yEnc attachments.
In the spring of 2002, shortly after yEnc gained popularity in binary newsgroups, yProxy was released as freeware. yProxy was designed to convert yEnc attachments as they are downloaded, without user intervention. Because yProxy is a proxy server, once it is configured, the user must only ensure that yProxy is running in order to use it.
Due to the design of yProxy as a generic NNTP proxy server, yProxy can be used by any NNTP newsreader. There are many free and commercial NNTP newsreader clients that do not natively support yEnc. yProxy was designed to let the user continue to use his or her existing newsreader.
As of May 31, 2007, the following, popular, free newsreaders do not support yEnc:
Outlook Express
Windows Mail
Windows Live Mail
Mozilla Thunderbird
The free version of yProxy is not supported on Windows Vista or Windows 7 due to yProxy's dependency on WinHelp for the help file. In addition, the free version of yProxy only includes instructions for configuring Outlook Express, which does not apply to Windows Vista's free email and NNTP client, Windows Mail or Windows Live Mail for Windows 7.
The f |
https://en.wikipedia.org/wiki/Visible%20Language | Visible Language is an American journal presenting visual communication research. Founded in 1967 as The Journal of Typographical Research by Merald Wrolstad, occasional Visible Language issues are co-edited with a guest editor-author.
The journal was founded on the primary tenet that reading and writing together form a new, separate, and autonomous language system. The journal has evolved to focus on research in visual communication, and has covered subjects including concrete poetry, the Fluxus art movement, painted text, textual criticism, the abstraction of symbols, articulatory synthesis and text, and the evolution of the page from print to on-screen display. Guest editor-authors have included Colin Banks, John Cage, Adrian Frutiger, Dick Higgins, Richard Kostelanetz, Craig Saper, and George Steiner.
The journal was edited for 26 years (1987–2012) by Sharon Poggenpohl of the Illinois Institute of Technology's Institute of Design, with administrative offices at the Rhode Island School of Design. It is currently edited by Mike Zender of the University of Cincinnati, which publishes and provides administrative offices for the journal. |
https://en.wikipedia.org/wiki/Bohuslav%20Balcar | Bohuslav Balcar (; 1943 – 2017) was a Czech mathematician. He was a senior researcher at the Center for Theoretical Study (CTS), and a professor at Charles University in Prague. His research interests were mainly related to foundations of mathematics.
Balcar received his Ph.D. in 1966 from Charles University. His Ph.D. supervisor was Petr Vopěnka. |
https://en.wikipedia.org/wiki/Fungal%20Genetics%20Stock%20Center | Established in 1960, the Fungal Genetics Stock Center is the main open repository for genetically characterized fungi. The FGSC is a member of the World Federation for Culture Collections and is a leading collection in the US Culture Collection Network Research Coordination Network.
Holdings
The FGSC distributes strains of Neurospora and Aspergillus, as well as limited numbers of Fusarium, Magnaporthe and many strains from current fungal genome projects.
In the 1980s and 1990s the FGSC added molecular materials including cloned genes, cloning vectors and gene libraries to the collection.
As more fungal genomes have been sequenced, the FGSC has re-evaluated the definition of a genetic system. This has led them to expand the collection, with additional materials including strains from genome programs and mutant collections for organisms such as Neurospora crassa, Aspergillus nidulans, Cryptococcus neoformans, and Candida albicans.
As a genetic repository, the FGSC has always endeavored to represent the diversity of genetic materials available. To that end, it holds strains from 76 different species representing 23 different genera, though large numbers of strains are held for only a few of these: more than ten strains are held for only nineteen different species.
These strains have been deposited by 310 different individuals, 64 of whom have deposited only one strain. The FGSC also holds a number of non-accessioned strains, including the wild-type strain collection of Dr. David Perkins as well as Neurospora strains from a number of other researchers who have retired. These are held with the understanding that they will be kept as long as space is available; they are not curated and are available on an as-is basis. Other strain collections include Allomyces, Aspergillus niger, Ustilago maydis, and Neurospora strains from the historical Tatum lab collection.
Distribution
In the period from January 1998 to December 2018, the FGSC distribut |
https://en.wikipedia.org/wiki/Conceptual%20physics | Conceptual physics is an approach to teaching physics that focuses on the ideas of physics rather than the mathematics. It is believed that with a strong conceptual foundation in physics, students are better equipped to understand the equations and formulas of physics, and to make connections between the concepts of physics and their everyday life. Early versions used almost no equations or math-based problems.
Paul G. Hewitt popularized this approach with his textbook Conceptual Physics: A New Introduction to your Environment in 1971. In his review at the time, Kenneth W. Ford noted the emphasis on logical reasoning and said "Hewitt's excellent book can be called physics without equations, or physics without computation, but not physics without mathematics." Hewitt's wasn't the first book to take this approach. Conceptual Physics: Matter in Motion by Jae R. Ballif and William E. Dibble was published in 1969. But Hewitt's book became very successful. As of 2022, it is in its 13th edition. In 1987 Hewitt wrote a version for high school students.
The spread of the conceptual approach to teaching physics broadened the range of students taking physics in high school. Enrollment in conceptual physics courses in high school grew from 25,000 students in 1987 to over 400,000 in 2009. In 2009, 37% of students took high school physics, and 31% of them were in Physics First, conceptual physics courses, or regular physics courses using a conceptual textbook.
This approach to teaching physics has also inspired books for science literacy courses, such as From Atoms to Galaxies: A Conceptual Physics Approach to Scientific Awareness by Sadri Hassani. |
https://en.wikipedia.org/wiki/Inhomogeneous%20cosmology | An inhomogeneous cosmology is a physical cosmological theory (an astronomical model of the physical universe's origin and evolution) which, unlike the currently widely accepted cosmological concordance model, assumes that inhomogeneities in the distribution of matter across the universe affect local gravitational forces (i.e., at the galactic level) enough to skew our view of the Universe. When the universe began, matter was distributed homogeneously, but over billions of years, galaxies, clusters of galaxies, and superclusters have coalesced, and must, according to Einstein's theory of general relativity, warp the space-time around them. While the concordance model acknowledges this fact, it assumes that such inhomogeneities are not sufficient to affect large-scale averages of gravity in our observations. When two separate studies claimed in 1998-1999 that high redshift supernovae were further away than our calculations showed they should be, it was suggested that the expansion of the universe is accelerating, and dark energy, a repulsive energy inherent in space, was proposed to explain the acceleration. Dark energy has since become widely accepted, but it remains unexplained. Accordingly, some scientists continue to work on models that might not require dark energy. Inhomogeneous cosmology falls into this class.
Inhomogeneous cosmologies assume that the backreactions of denser structures, as well as those of very empty voids, on space-time are significant enough that when not taken into account, they distort our understanding of time and our observations of distant objects. Following Thomas Buchert's publication of equations in 1997 and 2000 that derive from general relativity but also allow for the inclusion of local gravitational variations, a number of cosmological models were proposed under which the acceleration of the universe is in fact a misinterpretation of our astronomical observations and in which dark energy is unnecessary to explain them. For examp |
https://en.wikipedia.org/wiki/Control%20%28management%29 | Control is a function of management that helps to check errors and take corrective actions. This is done to minimize deviation from standards and ensure that the stated goals of the organization are achieved in a desired manner.
According to modern concepts, control is a foreseeing action; earlier concepts of control were only used when errors were detected. Control in management includes setting standards, measuring actual performance, and taking corrective action in decision making.
Definition
In 1916, Henri Fayol formulated one of the first definitions of control as it pertains to management: Control of an undertaking consists of seeing that everything is being carried out in accordance with the plan which has been adopted, the orders which have been given, and the principles which have been laid down. Its objective is to point out mistakes so that they may be rectified and prevented from recurring.
According to EFL Brech: "Control is checking current performance against pre-determined standards contained in the plans, with a view to ensuring adequate progress and satisfactory performance."
According to Harold Koontz: "Controlling is the measurement and correction of performance to make sure that enterprise objectives and the plans devised to attain them are accomplished."
According to Stafford Beer: "Management is the profession of control."
Robert J. Mockler presented a more comprehensive definition of managerial control:
Management control can be defined as a systematic effort by business management to compare performance to predetermined standards, plans, or objectives to determine whether performance is in line with these standards and presumably to take any remedial action required to see that human and other corporate resources are being used in the most effective and efficient way possible in achieving corporate objectives.
Also, control can be defined as "that function of the system that adjusts operations as needed to achieve the plan, or to maintain varia |
https://en.wikipedia.org/wiki/Spinplasmonics | Spinplasmonics is a field of nanotechnology combining spintronics and plasmonics. The field was pioneered by Professor Abdulhakem Elezzabi at the University of Alberta in Canada. In a simple spinplasmonic device, light waves couple to electron spin states in a metallic structure. The most elementary spinplasmonic device consists of a bilayer structure made from magnetic and nonmagnetic metals. It is the nanometer-scale interface between such metals that gives rise to an electron spin phenomenon. The plasmonic current is generated by optical excitation and its properties are manipulated by applying a weak magnetic field. Electrons with a specific spin state can cross the interfacial barrier, but those with a different spin state are impeded. Essentially, switching operations are performed with the electron's spin and then sent out as a light signal.
Spinplasmonic devices potentially have the advantages of high speed, miniaturization, low power consumption, and multifunctionality. On a length scale that is less than a single magnetic domain size, the interaction between atomic spins realigns the magnetic moments. Unlike semiconductor-based devices, smaller spinplasmonics devices are expected to be more efficient in transporting the spin-polarized electron current.
See also
Plasmon
Spintronics
Spin pumping
Spin transfer
List of emerging technologies |
https://en.wikipedia.org/wiki/Andrzej%20Schinzel | Andrzej Bobola Maria Schinzel (5 April 1937 – 21 August 2021) was a Polish mathematician studying mainly number theory.
Education
Schinzel received an MSc in 1958 from Warsaw University and a Ph.D. in 1960 from the Institute of Mathematics of the Polish Academy of Sciences, where he studied under Wacław Sierpiński; he obtained his habilitation in 1962. He was a member of the Polish Academy of Sciences.
Career
Schinzel was a professor at the Institute of Mathematics of the Polish Academy of Sciences (IM PAN). His principal interest was the theory of polynomials. His 1958 conjecture on the prime values of polynomials, known as Schinzel's hypothesis H, both extends the Bunyakovsky conjecture and broadly generalizes the twin prime conjecture. He also proved Schinzel's theorem on the existence of circles through any given number of integer points.
Schinzel was the author of over 200 research articles in various branches of number theory, including elementary, analytic and algebraic number theory. He was the editor of Acta Arithmetica for over four decades.
Private life
Andrzej Schinzel was the older brother of the Polish chess master Władysław Schinzel (born 1943). |
https://en.wikipedia.org/wiki/Immunodiffusion | Immunodiffusion is a diagnostic test in which antibodies or antigens diffuse through a substance, generally soft agar gel (2%) or agarose (2%), and is used for the detection of antibodies or antigens.
The commonly known types are:
Single diffusion in one dimension (Oudin procedure)
Double diffusion in one dimension (Oakley Fulthorpe procedure)
Single diffusion in two dimensions (radial immunodiffusion or Mancini method)
Double diffusion in two dimensions (Ouchterlony double immunodiffusion)
Notes
External links
Biological techniques and tools
Diagnostic virology
Immunologic tests |
https://en.wikipedia.org/wiki/Power%20gain | In electrical engineering, the power gain of an electrical network is the ratio of an output power to an input power. Unlike other signal gains, such as voltage and current gain, "power gain" may be ambiguous as the meaning of the terms "input power" and "output power" is not always clear. Three important power gains are operating power gain, transducer power gain and available power gain. Note that all these definitions of power gain employ average (as opposed to instantaneous) power quantities, and therefore the term "average" is often suppressed, which can be confusing on occasion.
Operating power gain
The operating power gain of a two-port network, G_P, is defined as:

G_P = P_L / P_I

where
P_L is the maximum time-averaged power delivered to the load, where the maximization is over the load impedance, i.e., we desire the load impedance which maximizes the time-averaged power delivered to the load.
P_I is the time-averaged input power to the network.
If the time-averaged input power depends on the load impedance, one must take the maximum of the ratio, not just the maximum of the numerator.
Transducer power gain
The transducer power gain of a two-port network, G_T, is defined as:

G_T = P_L / P_S,max

where
P_L is the average power delivered to the load
P_S,max is the maximum available average power at the source
In terms of y-parameters this definition can be used to derive:

G_T = 4 |y₂₁|² Re(Y_L) Re(Y_S) / |(y₁₁ + Y_S)(y₂₂ + Y_L) − y₁₂y₂₁|²

where
Y_L is the load admittance
Y_S is the source admittance
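A short Python sketch of this formula (the two-port values below are invented for illustration):

```python
import numpy as np

def transducer_gain(y, y_source, y_load):
    """Transducer power gain G_T of a two-port from its y-parameters.

    y        : 2x2 complex matrix [[y11, y12], [y21, y22]]
    y_source : source admittance Y_S
    y_load   : load admittance Y_L
    """
    num = 4.0 * abs(y[1, 0]) ** 2 * y_load.real * y_source.real
    den = abs((y[0, 0] + y_source) * (y[1, 1] + y_load) - y[0, 1] * y[1, 0]) ** 2
    return num / den

# Invented transistor-like two-port values, in siemens:
y = np.array([[0.002 + 0.001j, -1e-5j],
              [0.05 - 0.02j, 1e-4 + 5e-4j]])
print(transducer_gain(y, y_source=0.02 + 0j, y_load=0.02 + 0j))
```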
This result can be generalized to z, h, g and y-parameters as:

G_T = 4 |k₂₁|² Re(M_L) Re(M_S) / |(k₁₁ + M_S)(k₂₂ + M_L) − k₁₂k₂₁|²

where
k_ij is a z, h, g or y-parameter
M_L is the load value in the corresponding parameter set
M_S is the source value in the corresponding parameter set
The maximum available average power P_S,max may only be obtained from the source when the load impedance connected to it (i.e. the equivalent input impedance of the two-port network) is the complex conjugate of the source impedance, a consequence of the maximum power theorem.
Available power gain
The available power gain of a two-port network, G_A, is defined as:

G_A = P_out,max / P_S,max
where
is the maximum available average power at the |
https://en.wikipedia.org/wiki/Cronobacter%20sakazakii | Cronobacter sakazakii, which before 2007 was named Enterobacter sakazakii, is an opportunistic Gram-negative, rod-shaped, pathogenic bacterium that can live in very dry places, an ability known as xerotolerance. C. sakazakii utilizes a number of genes to survive desiccation, and this xerotolerance may be strain specific. The majority of C. sakazakii cases are adults, but low-birth-weight preterm neonates and older infants are at the highest risk. The pathogen is a rare cause of invasive infection in infants, with historically high case fatality rates (40–80%).
In infants it can cause bacteraemia, meningitis and necrotizing enterocolitis. Most neonatal C. sakazakii infection cases have been associated with the use of powdered infant formula, with some strains able to survive in a desiccated state for more than two years. However, not all cases have been linked to contaminated infant formula. In November 2011, several shipments of Kotex tampons were recalled due to a Cronobacter (E. sakazakii) contamination. In one study, the pathogen was found in 12% of field vegetables and 13% of hydroponic vegetables.
All Cronobacter species, except C. condimenti, have been linked retrospectively to clinical cases of infection in either adults or infants. However, multilocus sequence typing has shown that the majority of neonatal meningitis cases in the past 30 years, across 6 countries, have been associated with only one genetic lineage of the species Cronobacter sakazakii, called 'Sequence Type 4' or 'ST4', and therefore this clone appears to be of greatest concern with infant infections.
The bacterium is ubiquitous, being isolated from a range of environments and foods; the majority of Cronobacter cases occur in the adult population. However it is the association with intrinsically or extrinsically contaminated powdered formula which has attracted the main attention. According to multilocus sequence analysis (MLSA) the genus originated ~40 MYA, and the most clinically significant |
https://en.wikipedia.org/wiki/Dental%20bonding | Adhesive dentistry is a branch of dentistry which deals with adhesion or bonding to the natural substance of teeth, enamel and dentin. It studies the nature and strength of adhesion to dental hard tissues, properties of adhesive materials, causes and mechanisms of failure of the bonds, clinical techniques for bonding and newer applications for bonding such as bonding to the soft tissue. There is also direct composite bonding which uses tooth-colored direct dental composites to repair various tooth damages such as cracks or gaps.
Dental bonding is a dental procedure in which a dentist applies a tooth-colored resin material (a durable plastic material) and cures it with visible, blue light. This ultimately "bonds" the material to the tooth and improves the overall appearance of teeth. Tooth bonding techniques have various clinical applications including operative dentistry and preventive dentistry as well as cosmetic and pediatric dentistry, prosthodontics, and orthodontics.
History
Adhesive dentistry began in 1955 with a paper by Dr. Michael Buonocore on the benefits of acid etching. Technologies have changed multiple times since then, with generally recognized generations established in the literature. Dental bonding agents have evolved from no-etch to total-etch (4th- and 5th-generation) to self-etch (6th- and 7th-generation) systems, which improved convenience and reduced sensitivity to operator error. However, the best bonding and longevity were achieved with 4th-generation agents (having separate etch, prime, and bond steps).
Irwin Smigel, founder and current president of the American Society for Dental Aesthetics and diplomate of the American Board of Aesthetic Dentistry, was one of the first to broaden the usage of bonding by using it to close gaps between teeth, lengthen teeth and re-contour the entire mouth rather than using crowns. Having done more extensive work on the process than any other dentist, Dr. Smigel lectures worldwide on aesthetic dentist |
https://en.wikipedia.org/wiki/Cornelia%20de%20Lange%20syndrome | Cornelia de Lange syndrome (CdLS) is a genetic disorder. People with Cornelia de Lange syndrome experience a range of physical, cognitive, and medical challenges ranging from mild to severe. Cornelia de Lange syndrome has a widely varied phenotype, meaning people with the syndrome have varied features and challenges. The typical features of CdLS include thick or long eyebrows, a small nose, small stature, developmental delay, long or smooth philtrum, thin upper lip and downturned mouth.
The syndrome is named after Dutch pediatrician Cornelia Catharina de Lange, who described it in 1933.
It is often termed Brachmann de Lange syndrome or Bushy syndrome and is also known as Amsterdam dwarfism. Its exact incidence is unknown, but it is estimated at 1 in 10,000 to 30,000.
Signs and symptoms
The phenotype of CdLS is highly varied and is described as a spectrum; from Classic CdLS (with a greater number of key features) to mild variations with only a few features. Some people will have a small number of features but do not have CdLS.
Key features:
Long and/or thick eyebrows
Short nose
Concave nasal ridge and/or upturned nasal tip
Long and/or smooth philtrum
Thin upper lip vermilion and/or downturned corners of mouth
Missing fingers or toes
Congenital diaphragmatic hernia
Other suggestive features:
Developmental delay or intellectual disability
Small prenatal and birth size or weight
Small stature
Microcephaly (prenatally or postnatally)
Small hands or feet
Short fifth finger
Hirsutism
The following health conditions are more common in people with CdLS than in the general population.
Respiratory illness
Heart defects (e.g., pulmonary stenosis, VSD, ASD, coarctation of the aorta)
Hearing impairment
Vision abnormalities (e.g., ptosis, nystagmus, high myopia, hypertropia)
Partial joining of the second and third toes
Incurved fifth fingers (clinodactyly)
Gastroesophageal reflux
Gastrointestinal abnormalities
Musculoskeletal problems
Scoliosis
Soci |
https://en.wikipedia.org/wiki/STO-nG%20basis%20sets | STO-nG basis sets are minimal basis sets, where n primitive Gaussian orbitals are fitted to a single Slater-type orbital (STO). n originally took the values 2–6. They were first proposed by John Pople. A minimum basis set is one in which only sufficient orbitals are used to contain all the electrons in the neutral atom. Thus for the hydrogen atom, only a single 1s orbital is needed, while for a carbon atom, 1s, 2s and three 2p orbitals are needed. The core and valence orbitals are represented by the same number of primitive Gaussian functions. For example, the STO-3G basis functions for the 1s, 2s and 2p orbitals of the carbon atom are each a linear combination of 3 primitive Gaussian functions. For example, an STO-3G s orbital is given by:

ψ(STO-3G, s) = c₁φ₁ + c₂φ₂ + c₃φ₃,

where

φᵢ(r) = (2αᵢ/π)^(3/4) exp(−αᵢr²).
The values of c₁, c₂, c₃, α₁, α₂ and α₃ have to be determined. For the STO-nG basis sets, they are obtained by making a least squares fit of the three Gaussian orbitals to the single Slater-type orbital. (Extensive tables of parameters have been calculated for STO-1G through STO-5G for s orbitals through g orbitals.) This differs from the more common procedure, where the criterion often used is to choose the coefficients (c's) and exponents (α's) to give the lowest energy with some appropriate method for some appropriate molecule. A special feature of this basis set is that common exponents are used for orbitals in the same shell (e.g. 2s and 2p), as this allows more efficient computation.
The fit between the Gaussian orbitals and the Slater orbital is good for all values of r, except for very small values near the nucleus. The Slater orbital has a cusp at the nucleus, while Gaussian orbitals are flat at the nucleus.
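As an illustration of the fitting idea, the following Python sketch performs a pointwise least-squares fit of a single normalised Gaussian to the 1s Slater orbital with ζ = 1 (an STO-1G-style fit; note the published STO-nG parameters come from a different fit criterion, so the resulting exponent is only indicative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Pointwise least-squares fit of one normalised Gaussian to the
# normalised 1s Slater orbital with zeta = 1 (an STO-1G-style fit).
r = np.linspace(0.01, 6.0, 400)
slater = np.pi ** -0.5 * np.exp(-r)        # (zeta^3/pi)^(1/2) e^(-zeta r)

def gaussian(r, alpha):
    # normalised s-type primitive Gaussian
    return (2.0 * alpha / np.pi) ** 0.75 * np.exp(-alpha * r * r)

(alpha,), _ = curve_fit(gaussian, r, slater, p0=[0.3])
print(f"best single-Gaussian exponent: alpha = {alpha:.4f}")
# The tabulated STO-1G exponent comes from a different (overlap-based)
# criterion, so this value is indicative only.
```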
Use of STO-nG basis sets
The most widely used basis set of this group is STO-3G, which is used for large systems and for preliminary geometry determinations. This basis set is available for all atoms from hydrogen up to xenon.
STO-2G basis set
The STO-2G basis set is a linear combination of 2 primitive Gaussian functi |
https://en.wikipedia.org/wiki/NTU%20method | The number of transfer units (NTU) method is used to calculate the rate of heat transfer in heat exchangers (especially counter current exchangers) when there is insufficient information to calculate the log mean temperature difference (LMTD). In heat exchanger analysis, if the fluid inlet and outlet temperatures are specified or can be determined by simple energy balance, the LMTD method can be used; but when these temperatures are not available either the NTU or the effectiveness NTU method is used.
The effectiveness-NTU method is especially useful for flow arrangements other than parallel flow and counterflow, because for all other types the effectiveness must be obtained by numerical solution of the governing partial differential equations and there is no analytical expression for the LMTD or the effectiveness; as a function of two variables, however, the effectiveness of each arrangement can be presented in a single diagram.
To define the effectiveness of a heat exchanger we need to find the maximum possible heat transfer that can be hypothetically achieved in a counter-flow heat exchanger of infinite length. There, one fluid would experience the maximum possible temperature difference, ΔT_max = T_hot,in − T_cold,in (the difference between the inlet temperature of the hot stream and the inlet temperature of the cold stream). The method proceeds by calculating the heat capacity rates (i.e. mass flow rate multiplied by specific heat) C_hot and C_cold for the hot and cold fluids respectively, and denoting the smaller one as C_min = min(C_hot, C_cold),
where C = ṁ c_p, ṁ is the mass flow rate and c_p is the fluid's specific heat capacity at constant pressure.
A quantity
q_max = C_min (T_hot,in − T_cold,in)
is then found, where q_max is the maximum heat that could be transferred between the fluids per unit time. C_min must be used, as it is the fluid with the lowest heat capacity rate that would, in this hypothetical infinite-length exchanger, actually undergo the maximum possible temperature change. The other fluid would change temperature more slowly along the heat exch |
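To make the bookkeeping concrete, here is a sketch in Python (our own example with made-up numbers; the counterflow effectiveness relation used is the standard textbook formula):

import math

def effectiveness_counterflow(ntu, cr):
    # Standard counterflow relation; cr = C_min / C_max.
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

def heat_transfer_rate(m_hot, cp_hot, t_hot_in, m_cold, cp_cold, t_cold_in, ua):
    c_hot, c_cold = m_hot * cp_hot, m_cold * cp_cold
    c_min, c_max = min(c_hot, c_cold), max(c_hot, c_cold)
    q_max = c_min * (t_hot_in - t_cold_in)   # hypothetical maximum duty
    eps = effectiveness_counterflow(ua / c_min, c_min / c_max)
    return eps * q_max                       # actual heat transfer rate, W

# Hypothetical water/water counterflow exchanger.
print(heat_transfer_rate(0.5, 4180, 90.0, 0.8, 4180, 20.0, ua=3000.0))  # ~9.6e4 W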
https://en.wikipedia.org/wiki/SAP%20NetWeaver%20Visual%20Composer | SAP NetWeaver Visual Composer is SAP’s web-based software modelling tool. It enables business process specialists and developers to create business application components, without coding.
Visual Composer produces applications in a declarative form, enabling code-free execution mode for multiple runtime environments. It provides application lifecycle support by maintaining the connection between an application and its model throughout its lifecycle. Visual Composer is designed with an open architecture, which enables developers to extend its design-time environment and modelling language, as well as to integrate external data services.
The tool aims to increase productivity by reducing development effort and time, and to narrow the gap between application definition and implementation.
Starting with a blank canvas, the Visual Composer user, typically a business process specialist, draws the application in Visual Composer Storyboard (workspace), without writing code, to prototype, design and produce applications.
A typical workflow for creating, deploying and running an application using Visual Composer is:
Create a model
Discover data services and add them to the model
Select necessary UI elements and add them to the model
Connect model elements to define the model logic and data flow
Edit the layout
Arranging the UI elements and the controls of the application on forms and tables.
Deploy the model
This step includes compilation, validation and deployment to a selected environment.
Run the application
The application can run using different runtime environments (such as Adobe Flex and HTML). In 2014 a runtime environment was introduced that utilizes the HTML5 capabilities of SAPUI5.
See also
SAP AG
NetWeaver
Modelling language |
https://en.wikipedia.org/wiki/Dew%20point%20depression | The dew point depression (T-Td) is the difference between the temperature and dew point temperature at a certain height in the atmosphere.
For a constant temperature, the smaller the difference, the more moisture there is, and the higher the relative humidity. In the lower troposphere, more moisture (small dew point depression) results in lower cloud bases and lifted condensation levels (LCL). LCL height is an important factor modulating severe thunderstorms. One example concerns tornadogenesis, with tornadoes most likely if the dew point depression is 20 °F (11 °C) or less, and the likelihood of large, intense tornadoes increasing as dew point depression decreases. LCL height also factors in downburst and microburst activity. Conversely, instability is increased when there is a mid-level dry layer (large dew point depression) known as a "dry punch", which is favorable for convection if the lower layer is buoyant.
As it measures moisture content in the atmosphere, the dew point depression is also an important indicator in agricultural and forest meteorology, particularly in predicting wildfires.
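As a minimal worked example (our own, using Espy's common rule of thumb of roughly 125 m of LCL height per degree Celsius of dew point depression):

def lcl_height_m(temp_c, dewpoint_c):
    # Approximate lifted condensation level from the dew point depression.
    return 125.0 * (temp_c - dewpoint_c)

# Hypothetical surface observation: T = 30 C, Td = 24 C.
print(lcl_height_m(30.0, 24.0))  # ~750 m: a low cloud base, moist boundary layer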
See also
Wet-bulb depression
Atmospheric thermodynamics
Severe weather and convection
Meteorological data and networks
https://en.wikipedia.org/wiki/Rebiana | Rebiana is the trade name for high-purity rebaudioside A, a steviol glycoside that is 200 times as sweet as sugar. It is derived from stevia leaves by steeping them in water and purifying the resultant extract to obtain the rebaudioside A. The Coca-Cola Company filed patents on rebiana, and in 2007 it licensed the rights to the patents for food products to Cargill; Coca-Cola retained the exclusive rights to use the patents for beverage products. Truvia and PureVia are each made from rebiana and were each recognized as GRAS food ingredients by the US FDA in 2008. |
https://en.wikipedia.org/wiki/Latex%20fixation%20test | A latex fixation test, also called a latex agglutination assay or test (LA assay or test), is an assay used clinically in the identification and typing of many important microorganisms. These tests use the patient's antigen-antibody immune response. This response occurs when the body detects a pathogen and forms an antibody specific to an identified antigen (a protein configuration) present on the surface of the pathogen.
Agglutination tests, specific to a variety of pathogens, can be designed and manufactured for clinicians by coating microbeads of latex with pathogen-specific antigens or antibodies. In performing a test, laboratory clinicians will mix a patient's cerebrospinal fluid, serum or urine with the coated latex particles in serial dilutions with normal saline (important to avoid the prozone effect) and observe for agglutination (clumping). Agglutination of the beads in any of the dilutions is considered a positive result, confirming either that the patient's body has produced the pathogen-specific antibody (if the test supplied the antigen) or that the specimen contains the pathogen's antigen (if the test supplied the antibody). Instances of cross-reactivity (where the antibody sticks to another antigen besides the antigen of interest) can lead to confusing results.
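As a toy sketch of how such a dilution series might be reduced to a reported titer (entirely hypothetical bookkeeping, not a clinical protocol):

def titer(results):
    # results maps reciprocal dilution (2, 4, 8, ...) to True if
    # agglutination was observed at that dilution, False otherwise.
    positive = [d for d, agglutinated in sorted(results.items()) if agglutinated]
    return max(positive) if positive else None

# Hypothetical two-fold serial dilution series read by a technician.
series = {2: True, 4: True, 8: True, 16: True, 32: False, 64: False}
print(titer(series))  # 16, reported as a titer of 1:16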
Agglutination techniques are used to detect antibodies produced in response to a variety of viruses and bacteria, as well as autoantibodies, which are produced against the self in autoimmune diseases. For example, assays exist for rubella virus, rotavirus, and rheumatoid factor, and an excellent LA test is available for cryptococcus. Agglutination techniques are also used in definitive diagnosis of group A streptococcal infection.
See also |
https://en.wikipedia.org/wiki/Software%20measurement | Software measurement is a quantified attribute (see also: measurement) of a characteristic of a software product or the software process. It is a discipline within software engineering. The process of software measurement is defined and governed by ISO Standard ISO 15939 (software measurement process).
Software metrics
Software size, functional measurement
The primary measurement of software is size, specifically functional size. The generic principles of functional size are described in ISO/IEC 14143. Software size is principally measured in function points. It can also be measured in lines of code or, more specifically, source lines of code (SLOC), which counts functional code excluding comments. Whilst measuring SLOC is interesting, it is more an indication of effort than of functionality: two developers could approach a functional challenge using different techniques, and one might need to write only a few lines of code while the other might need many times more to achieve the same functionality. The most reliable method of measuring software size is therefore code-agnostic and taken from the user's point of view: function points.
Measuring code
One method of software measurement is to compute metrics from the code itself. These are called software metrics and include simple measures such as the number of lines in a single file, the number of files in an application, and the number of functions in a file. Such measurements have become a common software development practice.
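A minimal sketch of such counting metrics for a Python source file (our own illustration; real metrics tools are considerably more sophisticated):

import re
from pathlib import Path

def simple_metrics(path):
    # Toy metrics: physical lines, source lines (non-blank, non-comment),
    # and a def-style function count for a Python file.
    text = Path(path).read_text(encoding="utf-8")
    lines = text.splitlines()
    sloc = [l for l in lines if l.strip() and not l.lstrip().startswith("#")]
    functions = re.findall(r"^\s*def\s+\w+", text, flags=re.MULTILINE)
    return {"lines": len(lines), "sloc": len(sloc), "functions": len(functions)}

print(simple_metrics(__file__))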
Measuring software complexity, cohesion and coupling
There are also more detailed metrics that measure properties such as software complexity, Halstead measures, cohesion, and coupling.
See also
History of software engineering
Software engineer
Software metrics
Function point
COSMIC functional size measurement |
https://en.wikipedia.org/wiki/Rising%20Sun%20Flag | The Rising Sun Flag (旭日旗, Kyokujitsu-ki) is a Japanese flag that consists of a red disc and sixteen red rays emanating from the disc. Like the Japanese national flag, the Rising Sun Flag symbolizes the sun.
The flag was originally used by feudal warlords in Japan during the Edo period (1603–1868 CE). On May 15, 1870, as a policy of the Meiji government, it was adopted as the war flag of the Imperial Japanese Army, and on October 7, 1889, it was adopted as the naval ensign of the Imperial Japanese Navy.
At present, the flag is flown by the Japan Maritime Self-Defense Force, and an eight-ray version is flown by the Japan Self-Defense Forces and the Japan Ground Self-Defense Force. The rising sun design is also seen in numerous scenes in daily life in Japan, such as in fishermen's banners hoisted to signify large catches of fish, flags to celebrate childbirth, and in flags for seasonal festivities.
The flag is controversial in Korea and China, where it is associated with Japanese militarism and imperialism.
History and design
The flag of Japan and the symbolism of the rising Sun has held symbolic meaning in Japan since the Asuka period (538–710 CE). The Japanese archipelago is east of the Asian mainland, and is thus where the Sun "rises". In 607 CE, an official correspondence that began with "from the Emperor of the rising sun" was sent to Chinese Emperor Yang of Sui. Japan is often referred to as "the land of the rising sun". In the 12th century work The Tale of the Heike, it was written that different samurai carried drawings of the Sun on their fans.
The Japanese word for Japan is 日本, which is pronounced Nihon or Nippon, and literally means "the origin of the sun". The character 日 means "sun" or "day"; 本 means "base" or "origin". The compound therefore means "origin of the sun" and is the source of the popular Western epithet "Land of the Rising Sun". The red disc symbolizes the Sun and the red lines are light rays shining from the rising sun.
The design of the Rising Sun Flag (Asahi) has been widely used si |
https://en.wikipedia.org/wiki/.QL | .QL (pronounced "dot-cue-el") is an object-oriented query language used to retrieve data from relational database management systems. It is reminiscent of the standard query language SQL and the object-oriented programming language Java. .QL is an object-oriented variant of a logical query language called Datalog. Hierarchical data can therefore be naturally queried in .QL in a recursive manner.
Queries written in .QL are optimised, compiled into SQL and can then be executed on any major relational database management system. The .QL query language is used in SemmleCode to query a relational representation of Java programs.
.QL is developed at Semmle Limited and is based on the company's proprietary technology.
Language features
.QL has several language features to make queries concise, intuitive and reusable:
Extensible type hierarchy
Methods and predicates
Definition before use
Example query
The sample query below illustrates use of .QL to query a Java program. This is how one would select all classes that contain more than ten public methods:
from Class c, int numOfMethods
where numOfMethods = count(Method m |
          m.getDeclaringType() = c and
          m.hasModifier("public"))
  and numOfMethods > 10
select c.getPackage(), c, numOfMethods
In fact, this query selects not only all classes with more than ten public methods, but also their corresponding packages and the number of methods each class has.
See also
SQL - Structured Query Language
OQL - Object Query Language
Datalog - logic programming language
SemmleCode - Software testing tool that uses .QL language |
https://en.wikipedia.org/wiki/Civil%20inattention | Civil inattention is the process whereby strangers who are in close proximity demonstrate that they are aware of one another, without imposing on each other – a recognition of the claims of others to a public space, and of their own personal boundaries.
In practice
Civil inattention is the term introduced by Erving Goffman to describe the care taken to maintain public order among strangers and thus to make anonymised life in cities possible. Rather than either ignoring or staring at others, civil inattention involves the unobtrusive and peaceful scanning of others so as to allow for neutral interaction. Through brief eye contact with an approaching stranger, a person both acknowledges their presence and forecloses the possibility of more personal contact or of conversation.
Civil inattention is thus a means of making privacy possible within a crowd through culturally accepted forms of self-distancing. Seemingly (though not in reality) effortless, such civility is a way of shielding others from personal claims in public – an essential feature of the abstract, impersonal relationships demanded by the open society.
Negative aspects
Civil inattention can lead to feelings of loneliness or invisibility, and it reduces the tendency to feel responsibility for the well-being of others. Newcomers to urban areas are often struck by the impersonality of such routines, which they may see as callous and uncaring, rather than as necessary for the peaceful co-existence of close-packed millions.
Insanity of place
Goffman saw many classic indications of madness as violations of the norm of civil inattention: speaking to strangers, or shying away from every passing glance.
See also |
https://en.wikipedia.org/wiki/OZ7IGY | OZ7IGY is a Danish amateur radio beacon, the world's oldest VHF and UHF amateur radio beacon, active since the International Geophysical Year in 1957. It is located near Jystrup, in Maidenhead locator JO55WM54, and transmits on a number of frequencies across several amateur bands.
Since 30 October 2012, when the Next Generation Beacons platform came into use, the 2 m and 6 m beacons have been frequency and time locked to GPS.
Since 30 March 2013 all the beacons using the Next Generation Beacons platform transmit PI4 (a specialized digital modulation system), CW and unmodulated carrier in a one-minute cycle. The frequency precision of the Next Generation Beacons is typically better than 5 mHz. Over time all the OZ7IGY beacons will use the Next Generation Beacons platform. |
https://en.wikipedia.org/wiki/Jiffy%20mix | Jiffy is a brand of baking mixes marketed by the Chelsea Milling Company in Chelsea, Michigan, that has been producing mixes since 1930. The company was previously named Chelsea Roller Mill. They are known for their products being packaged in a recognizable, small box with the brand's logo in blue. Jiffy was created as the first prepared baking mix in the United States by Mabel White Holmes.
The company is now run and managed by her grandson, Howdy Holmes, a former Indianapolis 500 and CART driver, who became the company's CEO in 1995. In March 2013, the company had around 350 employees; as of 2015 it employed about 300 workers and produced 1.6 million boxes of its products each day. Its corn muffin mix accounts for 91 percent of the company's retail sales, and the company's retail market in October 2013 was $550 million.
History
Chelsea Milling Company is a family-operated company with roots in the flour milling business dating back to 1802. Originally a commercial operation that sold only to other businesses, its first baking mix designed for sale to consumers was created in the spring of 1930 by then-owner Mabel White Holmes. At the time, it was marketed as a way to make biscuits that was "so easy even a man could do it."
Operations
Most of the company's products are handled, processed and produced in-house, which includes grain storage, the grinding of grains into flour, product mixing and box manufacturing. Equipment repair is typically performed by company personnel. A significant amount of product ingredients are sourced from Michigan-raised crops, including "most of the wheat and some of the sugar." Some sugar and shortening is imported from the states of Illinois and Indiana. The company began offering free tours of its facilities and operations to the public in the 1960s, and continues to do so today.
Expansion
In 2008, the company began expansion into the food service and institutional industries due to a decline in the home-baking products market |
https://en.wikipedia.org/wiki/Diacylglycerol%20oil | Diacylglycerol oil (DAG oil) is a cooking oil in which the ratio of triglycerides, also known as Triacylglycerols (TAGs), to diacylglycerols (DAGs) is shifted to contain mostly DAG, unlike conventional cooking oils, which are rich in TAGs. Vegetable DAG oil, for example, contains 80% DAG and is used as a 1:1 replacement for liquid vegetable oils in all applications.
How it works
DAGs and TAGs are natural components in all vegetable oils. Through an enzymatic process, the DAG content of a combination of soy and canola oils is significantly increased. Unlike TAG, which is stored as body fat, DAG is immediately burned as energy. With DAG-rich oil containing more than 80% DAG, less of the oil is stored as body fat than with traditional oils, which are rich in TAG. Excess calories consumed by the body are converted into fat and stored, however, regardless of whether they are consumed as DAG or TAG.
Study
According to a 2007 study, diacylglycerol (DAG) oil is naturally present in vegetable oil. A study in 2004 indicated that DAG oil is effective for both fasting and postprandial hyperlipidemia; according to the same study, it helped prevent excess adiposity.
FDA designation
DAG oil was designated as generally recognized as safe (GRAS) by an outside panel of scientific experts, and their conclusion has been reviewed and accepted by the US Food and Drug Administration (FDA). This GRAS determination is for use in vegetable oil spreads and home cooking oil. In Japan, the Ministry of Health, Labor and Welfare has approved DAG oil to manage serum triglycerides after a meal, which leads to less build-up of body fat.
Side effects
Because DAG oil is digested the same way as conventional vegetable oils, the potential side effects are no different than those of conventional oil. In addition, studies with animals and human subjects have shown no adverse effects from single-dose or long-term consumption of DAG-rich oil. It has also been found that fat-soluble vitamins' status is not affected by the |
https://en.wikipedia.org/wiki/Throwim%20Way%20Leg | Throwim Way Leg is a 1998 book written by Australian scientist Tim Flannery. It documents Flannery's experiences conducting scientific research in the highlands of Papua New Guinea and Indonesian Western New Guinea. The book describes the flora and fauna of the island and the cultures of its various peoples. The title is an anglicised spelling of the New Guinean Pidgin "tromoi lek", meaning to go on a journey.
Flannery recounts his 15 trips to New Guinea beginning in 1981, when he was aged 26. He identifies at least 17 previously undescribed species during this period.
See also |
https://en.wikipedia.org/wiki/Glutamate%20formimidoyltransferase | Glutamate formimidoyltransferase is a transferase enzyme which uses tetrahydrofolate as part of histidine catabolism. It catalyses two reactions:
5-formimidoyltetrahydrofolate + L-glutamate <=> tetrahydrofolate + N-formimidoyl-L-glutamate
5-formyltetrahydrofolate + L-glutamate <=> tetrahydrofolate + N-formyl-L-glutamate
It is classified under EC 2.1.2.5 and, in mammals, is found as part of a bifunctional enzyme that also has formimidoyltetrahydrofolate cyclodeaminase activity.
Structure
The formiminotransferase (FT) domain of formiminotransferase-cyclodeaminase (FTCD) forms a homodimer, with each protomer comprising two subdomains. The formiminotransferase domain has an N-terminal subdomain that is made up of a six-stranded mixed beta-pleated sheet and five alpha helices, which are arranged on the external surface of the beta sheet. This, in turn, faces the beta-sheet of the C-terminal subdomain to form a double beta-sheet layer. The two subdomains are separated by a short linker sequence, which is not thought to be any more flexible than the remainder of the molecule. The substrate is predicted to form a number of contacts with residues found in both the N-terminal and C-terminal subdomains. In humans, deficiency of this enzyme results in a disease phenotype. |
https://en.wikipedia.org/wiki/Dolby%20E | Dolby E is a lossy audio compression and decoding technology developed by Dolby Laboratories that allows 6 to 8 channels of audio to be compressed into an AES3 digital audio stream that can be stored as a standard stereo pair of digital audio tracks.
Up to six channels, such as a 5.1 mix, can be recorded as 16-bit Dolby E data. However, if more than six channels are required, such as 5.1 plus a stereo LtRt, the AES3 data must be formatted as 20-bit audio. This increases capacity to eight channels.
Dolby E should never reach home viewers, as it is intended for use during post-production when moving multichannel material between production facilities or broadcasters. It is decoded prior to transmission.
It is very important to ensure that a Dolby E stream is never played through monitors or headphones without decoding. Undecoded Dolby E data will be converted to analog as full scale (0 dBFS) digital noise that can easily damage loudspeakers or hearing. Unambiguous media labeling is essential to avoid this.
Products
Dolby E encoding and decoding is implemented using commercially available hardware or software.
Hardware
Dolby DP571
Dolby DP572
Dolby DP568
Dolby DP580
Dolby DP591
Dolby DP600
Dolby DP600C
Software
FFmpeg (decoding only)
Avisynth (decoding only)
Emotion Systems 'eNGINE'
Minnetonka Audio 'AudioTools Server'
Minnetonka Audio SurCode for Dolby E
Neyrinck SoundCode For Dolby E |
https://en.wikipedia.org/wiki/Hylomorphism%20%28computer%20science%29 | In computer science, and in particular functional programming, a hylomorphism is a recursive function, corresponding to the composition of an anamorphism (which first builds a set of results; also known as 'unfolding') followed by a catamorphism (which then folds these results into a final return value). Fusion of these two recursive computations into a single recursive pattern then avoids building the intermediate data structure. This is an example of deforestation, a program optimization strategy. A related type of function is a metamorphism, which is a catamorphism followed by an anamorphism.
Formal definition
A hylomorphism h : A → C can be defined in terms of its separate anamorphic and catamorphic parts.
The anamorphic part can be defined in terms of a unary function g : A → B × A defining the list of elements in B by repeated application ("unfolding"), and a predicate p : A → Boolean providing the terminating condition.
The catamorphic part can be defined as a combination of an initial value c ∈ C for the fold and a binary operator ⊕ : B × C → C used to perform the fold.
Thus a hylomorphism
h(a) = c if p(a) holds, and h(a) = b ⊕ h(a′) where (b, a′) = g(a) otherwise,
may be defined (assuming appropriate definitions of g, p and ⊕).
Notation
An abbreviated notation for the above hylomorphism is h = [[(c, ⊕), (g, p)]].
Hylomorphisms in practice
Lists
Lists are common data structures as they naturally reflect linear computational processes. These processes arise in repeated (iterative) function calls. Therefore, it is sometimes necessary to generate a temporary list of intermediate results before reducing this list to a single result.
One example of a commonly encountered hylomorphism is the canonical factorial function.
factorial :: Integer -> Integer
factorial n
| n == 0 = 1
| n > 0 = n * factorial (n - 1)
In the previous example (written in Haskell, a purely functional programming language) it can be seen that this function, applied to any given valid input, will generate a linear call tree isomorphic to a list. For example, given n = 5 it will produce the following:
factorial 5 = 5 * (factorial 4) = |
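For comparison (our own rendering, in Python rather than the article's Haskell), the same function split into an explicit unfold-then-fold, next to the fused hylomorphic version that never materializes the intermediate list:

from functools import reduce

def unfold(g, p, a):
    # Anamorphism: apply g repeatedly until the predicate p holds.
    out = []
    while not p(a):
        b, a = g(a)
        out.append(b)
    return out

def factorial_ana_cata(n):
    # Build the intermediate list [n, n-1, ..., 1], then fold it with (*).
    xs = unfold(lambda a: (a, a - 1), lambda a: a == 0, n)
    return reduce(lambda acc, b: acc * b, xs, 1)

def factorial_hylo(n):
    # Fused hylomorphism: unfold and fold in one recursion, no list built.
    return 1 if n == 0 else n * factorial_hylo(n - 1)

print(factorial_ana_cata(5), factorial_hylo(5))  # 120 120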
https://en.wikipedia.org/wiki/Dennis%20Gabor%20Medal%20and%20Prize | The Dennis Gabor Medal and Prize (previously the Duddell Medal and Prize until 2008) is a prize awarded every two years by the Institute of Physics for distinguished contributions to the application of physics in an industrial, commercial or business context. The medal is made of silver and is accompanied by a prize and a certificate.
The original Duddell award was instituted by the Council of The Physical Society in 1923 in memory of William du Bois Duddell, the inventor of the electromagnetic oscillograph. Between 1961 and 1975 it was awarded in alternate odd-numbered years, and thereafter annually.
In 2008 the award was renamed in honour of Dennis Gabor, the Hungarian-British physicist who developed holography, for which he received the 1971 Nobel Prize in Physics. The prize also switched to being awarded in alternate even-numbered years.
Gabor Medallists
The following have been awarded the Gabor Medal and Prize:
Duddell Medallists
The following have been awarded the Duddell Medal and Prize:
See also
Institute of Physics Awards
List of physics awards
List of awards named after people |
https://en.wikipedia.org/wiki/ATSC-M/H | ATSC-M/H (Advanced Television Systems Committee - Mobile/Handheld) is a U.S. standard for mobile digital TV that allows TV broadcasts to be received by mobile devices.
ATSC-M/H is a mobile TV extension to the preexisting terrestrial TV broadcasting standard ATSC A/53. It corresponds to the European DVB-H and Japanese 1seg extensions of the DVB-T and ISDB-T terrestrial digital TV standards respectively. ATSC is optimized for fixed reception in the typical North American environment and uses 8VSB modulation. The ATSC transmission method is not robust enough against Doppler shift and multipath radio interference in mobile environments, and is designed for highly directional fixed antennas. To overcome these issues, additional channel coding mechanisms are introduced in ATSC-M/H to protect the signal. As of 2021, ATSC-M/H is considered to have been a commercial failure.
Evolution of mobile TV standard
Requirements
Several requirements of the new standard were fixed right from the beginning:
Completely backward compatible with ATSC (A/53)
Broadcasters can use their available license without additional restrictions
Available legacy ATSC receivers can be used to receive the ATSC (A/53) standard without any modification.
Proposals
Ten systems from different companies were proposed, and two remaining systems were presented with transmitter and receiver prototypes:
MPH (an acronym for mobile/pedestrian/handheld, suggesting miles per hour), was developed by LG Electronics and Harris Broadcast. (Zenith, a subsidiary of LG, developed much of the original ATSC system.)
A-VSB (Advanced-VSB) was developed by Samsung and Rohde & Schwarz.
To find the best solution, the Advanced Television Systems Committee assigned the Open Mobile Video Coalition (OMVC) to test both systems. The test report was presented on May 15, 2008. As a result of this detailed work by the OMVC, a final standard draft was designed by the Advanced Television Systems Committee, specialist group S-4. ATSC-M/H will be a |
https://en.wikipedia.org/wiki/Speech%20sound%20disorder | A speech sound disorder (SSD) is a speech disorder in which some sounds (phonemes) are not produced or used correctly. The term "protracted phonological development" is sometimes preferred when describing children's speech, to emphasize the continuing development while acknowledging the delay.
Classification
Speech sound disorders may be subdivided into two primary types, articulation disorders (also called phonetic disorders) and phonemic disorders (also called phonological disorders). However, some may have a mixed disorder in which both articulation and phonological problems exist. Though speech sound disorders are associated with childhood, some residual errors may persist into adulthood.
Articulation disorders
Articulation disorders (also called phonetic disorders, or simply "artic disorders" for short) are based on difficulty learning to physically produce the intended phonemes. They involve the main articulators: the lips, teeth, alveolar ridge, hard palate, velum, glottis, and tongue. There are usually fewer errors than with a phonemic disorder, and distortions are more likely (though omissions, additions, and substitutions may also be present). Articulation disorders are often treated by teaching the child how to physically produce the sound and having them practice its production until it (hopefully) becomes natural. They should not be confused with motor speech disorders, such as dysarthria (in which there is actual paralysis of the speech musculature) or developmental verbal dyspraxia (in which motor planning is severely impaired).
List
Deltacism (from the Greek letter Δ) is a difficulty in producing the /d/ sound.
Etacism is a difficulty in producing the /e/ sound.
Gamacism is a difficulty in producing the /g/ sound.
Hitism is a difficulty in producing the /h/ sound.
Iotacism is a difficulty in producing the /j/ sound.
Ka |
https://en.wikipedia.org/wiki/Desert%20fungi | The desert fungi are a variety of terricolous fungi inhabiting the biological soil crust of arid regions. Those exposed to the sun typically contain melanin and are resistant to high temperatures, dryness and low nutrition. Species that are common elsewhere (e.g. Penicillium spp. and common soil Aspergillus spp.) do not thrive in these conditions. Producing large dark unicellular spores also helps survival. Sexually reproducing ascomycetes, especially Chaetomium spp., have developed resilience by growing thick, dark perithecia. Under desert shrubs, however, more sensitive species such as Gymnoascus reesii prevail.
Species
Agaricus columellatus
Agaricus deserticola
Agaricus evertens
Battarreoides diguetii
Chlamydopus meyenianus
Coccidioides
Disciseda sp.
Montagnea arenaria
Podaxis longii
Podaxis pistillaris
Tulostoma sp. |
https://en.wikipedia.org/wiki/Transcription%20factor%20II%20A | Transcription factor TFIIA is a nuclear protein involved in the RNA polymerase II-dependent transcription of DNA. TFIIA is one of several general (basal) transcription factors (GTFs) that are required for all transcription events that use RNA polymerase II. Other GTFs include TFIID, a complex composed of the TATA binding protein TBP and TBP-associated factors (TAFs), as well as the factors TFIIB, TFIIE, TFIIF, and TFIIH. Together, these factors are responsible for promoter recognition and the formation of a transcription preinitiation complex (PIC) capable of initiating RNA synthesis from a DNA template.
Functions
TFIIA interacts with the TBP subunit of TFIID and aids in the binding of TBP to TATA-box containing promoter DNA. Interaction of TFIIA with TBP facilitates formation of and stabilizes the preinitiation complex. Interaction of TFIIA with TBP also results in the exclusion of negative (repressive) factors that might otherwise bind to TBP and interfere with PIC formation. TFIIA also acts as a coactivator for some transcriptional activators, assisting with their ability to increase, or activate, transcription. The requirement for TFIIA in in vitro transcription systems has been variable, and it can be considered as a GTF and/or a loosely associated TAF-like coactivator. Genetic analysis in yeast has shown that TFIIA is essential for viability.
Structure
TFIIA is a heterodimer with two subunits: one large unprocessed (subunit 1, or alpha/beta; gene name GTF2A1) and one small (subunit 2, or gamma; gene name GTF2A2). It was originally believed to be a heterotrimer of an alpha (p35), a beta (p19) and a gamma subunit (p12). In humans, the sizes of the encoded proteins are approximately 55 kD and 12 kD. Both genes are present in species ranging from humans to yeast, and their protein products interact to form a complex composed of a beta barrel domain and an alpha helical bundle domain. It is the N-terminal and C-terminal regions of the large subunit that particip |
https://en.wikipedia.org/wiki/Hyperconnectivity | Hyperconnectivity is a term invented by Canadian social scientists Anabel Quan-Haase and Barry Wellman, arising from their studies of person-to-person and person-to-machine communication in networked organizations and networked societies. The term refers to the use of multiple means of communication, such as email, instant messaging, telephone, face-to-face contact and Web 2.0 information services.
Hyperconnectivity is also a trend in computer networking in which all things that can or should communicate through the network will communicate through the network. This encompasses person-to-person, person-to-machine and machine-to-machine communication. The trend is fueling large increases in bandwidth demand and changes in communications because of the complexity, diversity and integration of new applications and devices using the network.
The communications equipment maker Nortel has recognized hyperconnectivity as a pervasive and growing market condition that is at the core of their business strategy. CEO Mike Zafirovski and other executives have been quoted extensively in the press referring to the hyperconnected era.
Apart from network-connected devices such as landline telephones, mobile phones and computers, newly connectable devices range from mobile devices such as PDAs, MP3 players, GPS receivers and cameras to an ever wider collection of machines, including cars, refrigerators and coffee makers, all equipped with embedded wireline or wireless networking capabilities. IP-enabling every device exceeds the address space of IP version 4, and IPv6 is the enabling technology to support this massive explosion in addresses.
There are other, independent, uses of the term:
The U.S. Army describes hyperconnectivity as a digitization of the battlefield where all military elements are connected.
Hyperconnectivity is used in medical terminology to describe billions and billions of neurons creating excessive connections within the brain, associated with schi |
https://en.wikipedia.org/wiki/Evan%20O%27Dorney | Evan Michael O'Dorney (born September 16, 1993) is an American mathematician who is a postdoctoral associate at Carnegie Mellon University. As a home-schooled high school student and college student, he won many contests in mathematics and other subjects, including the 2007 Scripps National Spelling Bee, 2011 Intel Science Talent Search, four International Math Olympiad medals, and three Putnam Fellowships. A 2013 report by the National Research Council called him "as famous for academic excellence as any student can be".
Education and competitions
As a home-schooled high school student, O'Dorney attended classes at the University of California, Berkeley from 2007 to 2011. He was the winner of the 2007 Scripps National Spelling Bee,
and an interview that O'Dorney did on CNN with Kiran Chetry after winning the bee later became a viral video in which he misspelled the word "scombridae". During this time he was a four-time International Math Olympiad medalist, with two gold and two silver medals.
In 2010, he won $10,000 (half for himself and half for the Berkeley Mathematics Circle) in a national "Who Wants to Be a Mathematician" contest, held at that year's Joint Mathematics Meetings in San Francisco. In 2011, he won the Intel Science Talent Search for a project entitled "continued fraction convergents and linear fractional transformations".
O'Dorney started attending Harvard College in 2011, where he studied mathematics. He jumped straight into graduate classes in mathematics, avoiding the undergraduate-level classes. While at Harvard, he was a three-time Putnam fellow. (His first Putnam was as a high school student.) In 2015–16, he studied Part III of the Mathematical Tripos at Cambridge, on a Churchill Scholarship. In 2016 he received honorable mention for the Morgan Prize in mathematics.
In 2021, he received a PhD in mathematics from Princeton University.
Other interests
Although his primary interest is mathematics, O'Dorney has had a strong inte |
https://en.wikipedia.org/wiki/Grain%20cradle | A grain cradle or cradle, is a modification to a standard scythe to keep the cut grain stems aligned. The cradle scythe has an additional arrangement of fingers attached to the snaith (snath or snathe) to catch the cut grain so that it can be cleanly laid down in a row with the grain heads aligned for collection and efficient threshing.
History
As the cultivation of grain developed, the seasonal harvest became a major agricultural event. Grain could be pulled or, later, cut with a sickle and tied into sheaves to be threshed. The scythe improved on the sickle by giving the mower a more ergonomic stance and permitting a larger blade. However, keeping the grain stems aligned in the windrow required great skill, and where these skills were less available the addition of a cradle helped to manage the grain heads, reducing the sheaver's workload and improving efficiency at threshing. Lesser-skilled mowers could harvest significantly more grain by using the cradle. Although the grain cradle was in previous use in parts of Europe, it was not generally used there because skilled labour was traditionally available. Between 1800 and 1840 the cradle was widely adopted in the expanding grain-growing area of the Midwestern United States, undergoing some refinement there and resulting in the American-pattern cradle. Fifty American patents were issued between 1823 and 1930: the first in 1823 in western New York state and the last in 1924 in West Virginia, with activity peaking between 1875 and 1900.
Hay does not require aligning and the scythe is more efficient without a cradle, so it was removed for haymaking.
Decline
The cradle was commonly used throughout the 1800s and into the beginning of the 20th century, in part because many of the smaller farms were not designed for mechanical reaping and in part because there were still a great number of smaller farms where the mechanical reaper was not economical. However, by the end of the 19th century the cradle had been generally replaced by the mechanical |
https://en.wikipedia.org/wiki/Carath%C3%A9odory%20metric | In mathematics, the Carathéodory metric is a metric defined on the open unit ball of a complex Banach space that has many similar properties to the Poincaré metric of hyperbolic geometry. It is named after the Greek mathematician Constantin Carathéodory.
Definition
Let (X, || ||) be a complex Banach space and let B be the open unit ball in X. Let Δ denote the open unit disc in the complex plane C, thought of as the Poincaré disc model for 2-dimensional real/1-dimensional complex hyperbolic geometry. Let the Poincaré metric ρ on Δ be given by
ρ(z, w) = arctanh |(z − w)/(1 − z̄w)|
(thus fixing the curvature to be −4). Then the Carathéodory metric d on B is defined by
d(x, y) = sup { ρ(f(x), f(y)) : f : B → Δ holomorphic }.
What it means for a function on a Banach space to be holomorphic is defined in the article on Infinite dimensional holomorphy.
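For orientation, a worked special case (ours, not from the article): when B is the disc Δ itself, the Schwarz–Pick lemma makes every holomorphic f : Δ → Δ nonexpansive for ρ, so the supremum in the definition is attained by the identity map and the Carathéodory metric recovers the Poincaré metric:
d(z, w) = sup { ρ(f(z), f(w)) : f : Δ → Δ holomorphic } = ρ(z, w).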
Properties
For any point x in B, d(0, x) = ρ(0, ||x||) = arctanh ||x||.
d can also be given by the following formula, which Carathéodory attributed to Erhard Schmidt:
For all a and b in B,
with equality if and only if either a = b or there exists a bounded linear functional ℓ ∈ X∗ such that ||ℓ|| = 1, ℓ(a + b) = 0 and
Moreover, any ℓ satisfying these three conditions has |ℓ(a − b)| = ||a − b||.
Also, there is equality in (1) if ||a|| = ||b|| and ||a − b|| = ||a|| + ||b||. One way to do this is to take b = −a.
If there exists a unit vector u in X that is not an extreme point of the closed unit ball in X, then there exist points a and b in B such that there is equality in (1) but b ≠ ±a.
Carathéodory length of a tangent vector
There is an associated notion of Carathéodory length for tangent vectors to the ball B. Let x be a point of B and let v be a tangent vector to B at x; since B is the open unit ball in the vector space X, the tangent space TxB can be identified with X in a natural way, and v can be thought of as an element of X. Then the Carathéodory length of v at x, denoted α(x, v), is defined by
α(x, v) = sup { |Df(x)v| : f : B → Δ holomorphic, f(x) = 0 }.
One can show that α(x, v) ≥ ||v||, with equality when x = 0.
See also
Earle–Hamilton fixed point theorem |
https://en.wikipedia.org/wiki/Depolarizer | A depolarizer or depolariser, in electrochemistry, according to an IUPAC definition, is a synonym of electroactive substance, i.e., a substance which changes its oxidation state, or partakes in a formation or breaking of chemical bonds, in a charge-transfer step of an electrochemical reaction.
In the battery industry, the term "depolarizer" has been used to denote a substance used in a primary cell to prevent buildup of hydrogen gas bubbles. A battery depolarizer takes up electrons during discharge of the cell; therefore, it is always an oxidizing agent. The term "depolarizer" can be considered as outdated or misleading, since it is based on the concept of "polarization" which is hardly realistic in many cases.
Polarization
Under certain conditions for some electrochemical cells, especially if they use an aqueous electrolyte, hydrogen ions can be converted into hydrogen atoms and H2 molecules. In the extreme case, bubbles of hydrogen gas might appear at one of the electrodes. If such a layer of hydrogen or even H2 gas bubbles appear on the positive plate of a battery, they interfere with the chemical action of the cell. An electrode covered with gases is said to be polarized. Polarization in galvanic cells causes the voltage and thus current to be reduced, especially if the bubbles cover a large fraction of a plate. Depolarizers are substances which are intended to remove the hydrogen, and therefore, they help to keep the voltage at a high level. However, this concept is outdated, since if enough depolarizer is present, it will react directly in most cases by getting electrons from the positive plate of the galvanic cell, i.e. there will be no relevant amount of hydrogen gas present. Therefore, the original concept of polarization does not apply to most batteries, and the depolarizer does not react with hydrogen as H2. Still, the term is used today, however, in most cases, it might be replaced with oxidizing agent.
Many different substances have been used as depo |
https://en.wikipedia.org/wiki/Huber%27s%20equation | Huber's equation, first derived by the Polish engineer Tytus Maksymilian Huber, is a basic formula in elastic material tension calculations, an equivalent of the equation of state, but applying to solids. In its simplest and most commonly used form it reads:
σ_red = √(σ² + 3τ²),
where σ is the tensile stress and τ is the shear stress, measured in newtons per square meter (N/m², also called pascals, Pa), while σ_red, called the reduced tension, is the resultant tension of the material.
It finds application in calculating the span width of bridges, their beam cross-sections, etc.
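As a quick worked example (our own, with hypothetical stress values):

import math

def reduced_tension(sigma, tau):
    # Huber's reduced tension for combined tensile and shear stress (Pa).
    return math.sqrt(sigma**2 + 3 * tau**2)

# Hypothetical beam element: 120 MPa tensile stress with 40 MPa shear.
print(reduced_tension(120e6, 40e6) / 1e6)  # ~138.6 MPa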
See also
Yield surface
Stress–energy tensor
Tensile stress
von Mises yield criterion |
https://en.wikipedia.org/wiki/Oregon%20Institute%20of%20Marine%20Biology | The Oregon Institute of Marine Biology (or OIMB) is the marine station of the University of Oregon. This marine station is located in Charleston, Oregon at the mouth of Coos Bay. Currently, OIMB is home to several permanent faculty members and a number of graduate students. OIMB is a member of the National Association of Marine Laboratories (NAML). In addition to graduate research, undergraduate classes are offered year round, including marine birds and mammals, estuarine biology, marine ecology, invertebrate zoology, molecular biology, biology of fishes, biological oceanography, and embryology.
The Loyd and Dorothy Rippey Library, one of eight branches of the UO Libraries, was added to the campus in 1999. The Rippey Library is open to the public by appointment, and the Oregon Card Program allows Oregon residents 16 years old and over to borrow from the collection.
The Charleston Marine Life Center (or CMLC) is a public museum and aquarium on the edge of the harbor in Charleston, OR, across the street from the OIMB campus. Displays aimed at visitors of all ages emphasize the diversity of animal and plant life in local marine ecosystems. Visitors learn where to interact with marine organisms in their natural environments and how local scientists study the life histories, evolution and ecology of underwater plants and animals.
History
The University of Oregon first established OIMB as a summer research and education program in 1924, operating out of tents along the beach of Sunset Bay. OIMB settled into its current location in 1931, when 100 acres of the Coos Head Military Reserve, including several buildings from the Army Corps of Engineers, was deeded to the University of Oregon. In 1937, OIMB was transferred to Oregon State College (now Oregon State University), and remained theirs until the federal government required the property during World War II. Following the war, OIMB was initially returned to Oregon State University, but the University of Oregon r |
https://en.wikipedia.org/wiki/Incunabula%20Short%20Title%20Catalogue | The Incunabula Short Title Catalogue (ISTC) is an electronic bibliographic database maintained by the British Library which seeks to catalogue all known incunabula. The database lists books by individual editions, recording standard bibliographic details for each edition as well as giving a brief census of known copies, organised by location. It currently holds records of over 30,000 editions.
History
Previous efforts to comprehensively catalog 15th century printing include Georg Wolfgang Panzer's Annales Typographici ab Artis Inventae Origine ad Annum MD (1793–97) and Ludwig Hain's Repertorium Bibliographicum (1822). Hain's work was later supplemented by Copinger's Supplement and Reichling's Appendices, which would pave the way for the Gesamtkatalog der Wiegendrucke (1925). The Gesamtkatalog der Wiegendrucke (GW) was the most comprehensive catalog of incunables to date (and still offers more in-depth information than ISTC), but in recent decades work on the catalog has slowed to such a degree that the goal of cataloging all extant incunables under the GW's system is indefinitely far off.
The ISTC was created to establish a system of incunable cataloging that was simple enough to be expanded quickly, bringing the goal of a complete incunable catalog back into focus. Furthermore, the ISTC would use standardized entries that could be entered into a machine-searchable database.
Work on the ISTC began in 1980 under the leadership of the British Library's Lotte Hellinga. Frederick R. Goff's Incunabula in American Libraries (1973) was the first pre-existing catalog to be keyed into ISTC's database. Besides providing the catalog's first 12,900 entries, Goff's system for classifying information about incunables formed the basis for the structure of ISTC's records. Entries for all of the incunables in British Library and the Italian union catalog (IGI) were added next, followed by other national incunable catalogs.
Records
ISTC records retain many characteristics |
https://en.wikipedia.org/wiki/Larrabee%20%28microarchitecture%29 | Larrabee is the codename for a cancelled GPGPU chip that Intel was developing separately from its current line of integrated graphics accelerators. It is named after either Mount Larrabee or Larrabee State Park in Whatcom County, Washington, near the town of Bellingham. The chip was to be released in 2010 as the core of a consumer 3D graphics card, but these plans were cancelled due to delays and disappointing early performance figures. The project to produce a GPU retail product directly from the Larrabee research project was terminated in May 2010 and its technology was passed on to the Xeon Phi. The Intel MIC multiprocessor architecture announced in 2010 inherited many design elements from the Larrabee project, but does not function as a graphics processing unit; the product is intended as a co-processor for high performance computing.
Almost a decade later, on June 12, 2018, the idea of an Intel dedicated GPU was revived with Intel's stated desire to create a discrete GPU by 2020. This effort would eventually become the Intel Xe and Intel Arc series, released in September 2020 and March 2022 respectively, but both were unconnected to the work on the Larrabee project.
Project status
On December 4, 2009, Intel officially announced that the first-generation Larrabee would not be released as a consumer GPU product. Instead, it was to be released as a development platform for graphics and high-performance computing. The official reason for the strategic reset was attributed to delays in hardware and software development. On May 25, 2010, the Technology@Intel blog announced that Larrabee would not be released as a GPU, but instead would be released as a product for high-performance computing competing with the Nvidia Tesla.
The project to produce a GPU retail product directly from the Larrabee research project was terminated in May 2010. The Intel MIC multiprocessor architecture announced in 2010 inherited many design elements from the Larrabee project, but does |