Columns: id (int64, 580 to 79M) · url (string, 31 to 175 chars) · text (string, 9 to 245k chars) · source (string, 1 to 109 chars) · categories (string, 160 classes) · token_count (int64, 3 to 51.8k)
4,033,192
https://en.wikipedia.org/wiki/Space%20dock
A space dock is a hypothesised type of space station that is able to repair or build spacecraft, similar to maritime shipyards on Earth. Space docks remove the need for new spacecraft to perform a space launch to reach space, and for existing spacecraft to make an atmospheric entry and landing for repair work. They currently exist only in fiction; however, concept work has been undertaken on real space dock facilities that could be built with current technology.

Real world

Space docks, as part of a wider space logistics infrastructure, are considered a relevant part of a true space-faring society. Scientists of the American Institute of Aeronautics and Astronautics have proposed that future, near-term LEO space facilities should include "a large space dock making possible the on-orbit assembly and maintenance of large space facilities, space platforms, and spacecraft" (see image for design concept). A space dock or hangar could also allow enclosed (and possibly pressurized) maintenance of smaller spacecraft and space planes, though the construction of non-atmospheric spacecraft and other space facilities is envisaged as its main use. The structural strength of such an advanced hangar would primarily be determined by the internal atmospheric pressure that would have to be sustained for shirt-sleeve operations, thus enabling routine servicing and assembly in space.

The use for orbital maintenance could be especially critical for damaged atmospheric spacecraft, which are at great risk during reentry into the atmosphere, as was shown during the Columbia disaster. In the wake of the disaster, NASA improvised repairs to shuttles while in flight, a procedure which would have been much easier with a dedicated orbital facility. A major space dock would also be required as a construction facility for an interstellar colonization starship built with current or near-term technology. Future Ares V missions, for example, could serve to cost-effectively transport construction materials for future spacecraft and space exploration missions, delivering raw materials to a Moon-based space dock positioned as a counterweight to a Moon-based space elevator.

Science fiction

Space docks in science fiction play an important role in the construction and maintenance of space vessels. They add a depth of realism to the fictional worlds they appear in and continue the nautical parallels that most space-based science fiction uses. Space docks serve the same purpose as their non-fictional terrestrial dry dock counterparts, being used for construction, repairs, refits and restorations of spacecraft. Some play significant plot roles, while others remain in the background of many sci-fi media. Science fiction settings such as Star Wars, Babylon 5, the Honorverse and the Foundation series mention or allude substantially to such facilities.

Star Trek

Space docks of varying styles and sizes have made a number of appearances in the Star Trek science fiction universe. Often they were shown as open, metal-framed structures in which a vessel could be docked. The first such dry dock was seen in Star Trek: The Motion Picture, with the refit USS Enterprise (NCC-1701) contained within such an "orbital dockyard" before being sent to intercept an alien vessel on course for Earth. Chronologically speaking in the storyline, an earlier example (set in 2151) also housed the first Enterprise of Capt. Jonathan Archer at the start of the Star Trek: Enterprise series.
A larger facility, known as Earth Spacedock, was seen for the first time in Star Trek III: The Search for Spock. It was designed by David Carson and Nilo Rodis of Industrial Light and Magic and praised as "one of the more stunning visuals in all of Star Trek". These were huge orbital command installations incorporating internal space docks that could be completely enclosed; starships could enter through bay doors to receive supplies or maintenance. One feature of the Spacedock design was its interior set, which included an area with large windows outside which the Enterprise could be seen, thus allowing the Enterprise to be shown in scale compared to people, all inside the Spacedock space station. The model was intended to be retired after The Search for Spock, and ILM dismantled it after that film; it had to be reassembled when it was wanted again for the next movie, Star Trek IV: The Voyage Home. The re-use of the model from the previous movie, along with the re-use of interior sets depicting the station, helped economize on the budget for Star Trek IV, which debuted in 1986. The Earth Spacedock would go on to make appearances in later movies and in the three seven-season shows of The Next Generation era (The Next Generation, Deep Space Nine, and Voyager). It has been described as one of the franchise's "enduring spacecraft designs".

A third type of space dock was seen occasionally in The Next Generation and following series. This type of dock had a large command pod at the top, with arms underneath that could house a starship. The Enterprise-D was refitted and repaired in such a dock following combat with the Borg in 2367.

Babylon 5

Dock facilities were occasionally seen in the Babylon 5 television series and movies. In the Babylon 5 universe, the space docks were structures deployed outside the station when larger ships were in need of repair. The Babylon 5 station itself effectively served as a space dock, with internal docking facilities for freighters, personal transport vessels, and its own complement of fighter craft designated to protect the station. During the events of the movie A Call to Arms, the Excalibur and the Victory were shown in the dry dock facilities in which they were constructed. The dock was destroyed by the Drakh following their attack on Earth, halting the construction of further Victory-class destroyers until the facilities could be rebuilt.

Star Wars

Large space dock facilities were common above major shipbuilding worlds, such as Sullust and Corellia. Most notably, the massive Kuat Drive Yards corporation owned many facilities in the Kuat system's extensive moon system and even a massive ringworld dry dock around the planet Kuat itself.

References

Science fiction themes Megastructures Fictional space stations Spaceflight
Space dock
Astronomy,Technology
1,207
18,662,971
https://en.wikipedia.org/wiki/DCrk
DCrk (Drosophila ortholog of Crk) is the Drosophila melanogaster ortholog of the mammalian proteins Crk and CrkL.

Structure and function

DCrk includes one SH2 domain followed by two SH3 domains and interacts with the Myoblast city (MBC) protein. The DCrk protein is expressed during embryogenesis, declines during the larval stages, and reappears during pupation, which suggests that DCrk plays an important role in development.

References

Drosophila melanogaster genes
DCrk
Chemistry
117
33,970,953
https://en.wikipedia.org/wiki/Film%20blowing%20machine
A film blowing machine performs one process used to make plastic film. Extruded tubular processing is most often used with polyethylene films but can be used with other polymers. The film may be laminating film, shrink film, agricultural covering film, bags or film for textiles and clothing, and other packaging materials.

Technical information

Parts include: screw and barrel, motor, inverter, heaters, die head, winder, and tower. The main motor may have frequency control of motor speed to improve speed regulation and save electricity. The screw and material barrel may be made from a nitrogen-treated chromium-molybdenum-aluminum alloy.

Process

At the beginning of the process, the polymer comes in the form of pellets. It is heated and melted into a viscous liquid between the rotating screw and the barrel of the extruder. This allows the polymer to be fed through a die that shapes it into a tube. This tube is then carefully inflated into a bubble by injecting it with air, taking care that there is no risk of tearing. The bubble is cooled simultaneously in its interior, via a cooling system, and on its exterior surface, through the use of an air ring, to solidify the material. A set of collapsing frames or guides then collapses the bubble, bringing its sides together as two more defined layers in close proximity. A series of nip rollers then flattens the layers together to form a two-layered plastic film that is wound onto a cylindrical roll for packaging purposes. This process may vary depending upon the specifications and models of the machines.

Bubble instabilities

If the bubble formed by air injection is not handled with caution, it may become unstable and deform in a number of different ways.

Draw resonance exists when the film velocity at which solidification occurs is much higher than the velocity of the melted liquid as it exits the die. This causes the melt to stretch too quickly, and the bubble diameter starts to vary along its surface. One way to fix this situation is to increase the speed of the melt through the die.

Helical instability is noticeable when one side of the bubble is cooled more than the other by the air ring. The bubble then starts to form a helical shape as it reaches the collapsing frames. This can be avoided by either lowering the melt temperature or increasing extruder output.

Freezeline height instability results in a variation of the thickness of the bubble. This is caused by extruder motor amps and back pressure. To prevent this variation in thickness, improvements to the feeding and melting of the material must be implemented.

Heavy-bubble instability refers to the bubble sagging towards the bottom, which means that it is not being cooled enough. Lowering the freezeline height or lowering the melt temperature will assist the bubble in its cooling phase, causing less sag.

Bubble flutter appears below the freezeline when cool air impinges on the surface of the bubble. A higher freezeline height, a lower melt temperature, and a narrower die gap can solve this problem.

Bubble breathing occurs when the volume of the air in the bubble keeps changing periodically, causing a variation in film thickness. Solutions include controlling the cooling system and sensors, reducing melt temperatures, or decreasing extruder output.
Bubble tear appears as a tear in the bubble, which happens when the force needed to draw the bubble up into the nip rollers is higher than the tensile strength of the melted film. A simple solution is to reduce extruder output or increase the die and melt temperature.

References

Books and general references
Hawkins, William E., The Plastic Film and Foil Web Handling Guide, CRC Press, 2003.
Jenkins, W. A., and Osborn, K. R., Plastic Films: Technology and Packaging Applications, CRC Press, 1992.
Yam, K. L., Encyclopedia of Packaging Technology, John Wiley & Sons, 2009.

Packaging machinery Industrial equipment
Film blowing machine
Engineering
824
9,069,542
https://en.wikipedia.org/wiki/Napakivi
Napakivi (pole/navel stone) or tonttukivi (elf stone) is a traditional Finnish name for a standing stone in the middle of a field or another central spot. Generally speaking, napakivi are unhewn stones that people have set upright. Some such stones may instead have been left upright by the withdrawal of the receding ice masses after the ice age, in which case they are not napakivi proper. Napakivi are usually longish and erect, and frequently have a round head, which some have interpreted as perhaps indicating a symbolic omphalos or penile reference. A napakivi can be located in the middle of a field, or at the heart of an adjacent pile of stones compiled of stones that had to be removed from the field to make it cultivable by a plough. It can also be the central stone of a burial mound. Napakivi may have been considered facilitators of fertility or protectors of a domain, or they may have been legal indicators of ownership. It is plausible they may have been considered some kind of magical centres of force or energy accumulators, perhaps the seat of a tutelary spirit's power. The name tonttukivi refers to the elves known as tonttu and also to the Finnish word for a plot of land, "tontti". Some stones equivalent to napakivi have been referred to with the term or Jumin kurikka, in which case they will have been connected to the mysterious spirit known as , who served as the basis for the Finnish word for god. Napakivi may have some cultural connection with Sámi seids or with the megaliths of central Europe and Great Britain, although this has not been demonstrated with any scientific rigour. Megaliths, too, are stones erected by ancient peoples: giant stones, usually over man-height, standing alone or in groups. Most megaliths are likewise considered to have a connection to the penis and fertility. A stone in the centre of a graveyard, set up at the end of a battle to inter the combatants, is often called a napakivi, in which case the attendant mythology described above will not be attached to it.

References

External links

Tonttukivet Merikarvian Trolssin kylässä. kansanperinne.net

Stones Archaeological sites in Finland Finnish mythology
Napakivi
Physics
484
68,377,408
https://en.wikipedia.org/wiki/1%2C2-Bis%28diphenylphosphino%29benzene
1,2-Bis(diphenylphosphino)benzene (dppbz) is an organophosphorus compound with the formula C6H4(PPh2)2 (Ph = C6H5). Classified as a diphosphine ligand, it is a common bidentate ligand in coordination chemistry. It is a white, air-stable solid. As a chelating ligand, dppbz is very similar to 1,2-bis(diphenylphosphino)ethylene.

References

Chelating agents Diphosphines Phenyl compounds
1,2-Bis(diphenylphosphino)benzene
Chemistry
128
12,814,339
https://en.wikipedia.org/wiki/Cambridge%20Crystallographic%20Data%20Centre
The Cambridge Crystallographic Data Centre (CCDC) is a non-profit organisation based in Cambridge, England. Its primary activity is the compilation and maintenance of the Cambridge Structural Database, a database of small-molecule crystal structures. It also performs analysis on the database for the benefit of the scientific community, and writes and distributes computer software to allow others to do the same.

History

In 1962, Dr. Olga Kennard OBE FRS set up a chemical crystallography group within the Department of Chemistry, University of Cambridge. In 1965 she founded the CCDC and established the associated Cambridge Structural Database. At that time, there were only about 3,000 published X-ray structures, and the work involved converting these into a machine-readable form. Kennard invited Frank Allen to join the group, which he did in 1970, becoming Scientific Director and then Executive Director before retiring in 2008. In 1992, the CCDC moved into its own building adjacent to the Cambridge chemistry department. This new headquarters was designed by the Danish architect Professor Erik Christian Sørensen and won The Sunday Times Building of the Year Award in 1993. The CCDC still retains very close links to the university as a University Partner Institution that trains students for postgraduate research degrees, but from 1987 it became an independent company. By 2019 the database had grown to over a million structures.

Current research

The staff at the CCDC curate the database of small-molecule organic and metal-organic crystal structures and make these available for download by the public. They also create and maintain a suite of cheminformatics software that may be used to apply the data to applications in the life sciences, including crystal engineering and materials science.

Programs developed

CCDC developed programs such as ConQuest and Mercury that run under Windows and various types of Unix, including Linux. ConQuest is a search interface to the Cambridge Structural Database (CSD). Mercury is a crystal structure visualizer; versions released from 2015 onward provide the functionality to generate 3D prints.

See also

List of chemical databases CrystalExplorer

References

External links

The Cambridge Crystallographic Data Centre

1965 establishments in England Chemical industry in the United Kingdom Crystallography organizations Partner institutions of the University of Cambridge Research institutes established in 1965 Research institutes in Cambridge Science and technology in Cambridgeshire Research organisations in England
Cambridge Crystallographic Data Centre
Chemistry,Materials_science
458
32,112,885
https://en.wikipedia.org/wiki/AMMECR1
In molecular biology, the AMMECR1 protein (Alport syndrome, intellectual disability, midface hypoplasia and elliptocytosis chromosomal region gene 1 protein) is a protein encoded by the AMMECR1 gene on human chromosome Xq22.3. The contiguous gene deletion syndrome is characterised by Alport syndrome (A), intellectual disability (M), midface hypoplasia (M), and elliptocytosis (E), as well as generalized hypoplasia and cardiac abnormalities. It is caused by a deletion in Xq22.3 comprising several genes, including AMME chromosomal region gene 1 (AMMECR1), which encodes a protein with a nuclear location and a presently unknown function.

The C-terminal region of AMMECR1 (from residue 122 to 333) is well conserved, and homologues appear in species ranging from bacteria and archaea to eukaryotes. The high level of conservation of the AMMECR1 domain points to a basic cellular function, potentially in the transcription, replication, repair or translation machinery. The AMMECR1 domain contains a six-amino-acid motif (LRGCIG) that might be functionally important, since it is strikingly conserved throughout evolution. The AMMECR1 domain consists of two distinct subdomains of different sizes. The large subdomain, which contains both the N- and C-terminal regions, consists of five alpha-helices and five beta-strands; these five beta-strands form an antiparallel beta-sheet. The small subdomain consists of four alpha-helices and three beta-strands, and these beta-strands also form an antiparallel beta-sheet. The conserved LRGCIG motif is located at the beta-2 strand and its N-terminal loop, and most of the side chains of these residues point toward the interface of the two subdomains. The two subdomains are connected by only two loops, and the interaction between them is not strong; thus, the subdomains may move dynamically when a substrate enters the cleft between them. The size of the cleft suggests that the substrate is large, e.g., a nucleic acid or protein. However, the inner side of the cleft is not filled with positively charged residues, and therefore it is unlikely that negatively charged nucleic acids such as DNA or RNA interact at this site.

References

Protein families
AMMECR1
Biology
526
24,057,347
https://en.wikipedia.org/wiki/WASP-16
WASP-16 is a magnitude-11 yellow dwarf main-sequence star with characteristics similar to the Sun, located in the constellation Virgo.

Planetary system

In 2009, a planet orbiting the star was announced by the SuperWASP project; it appears to be another hot Jupiter type of exoplanet. In 2024, a candidate mini-Neptune was detected, also using the transit method; further observations are needed to confirm its existence. This candidate planet takes ten days to fully orbit WASP-16 and has an equilibrium temperature of .

See also

SuperWASP List of extrasolar planets

References

External links

SuperWASP Homepage

Virgo (constellation) G-type main-sequence stars Planetary systems with one confirmed planet Planetary transit variables J14184392-2016317 16
WASP-16
Astronomy
156
593,338
https://en.wikipedia.org/wiki/Bimetal
Bimetal refers to an object that is composed of two separate metals joined together. Instead of being a mixture of two or more metals, like alloys, bimetallic objects consist of layers of different metals. Trimetal and tetrametal refer to objects composed of three and four separate metals, respectively. A bimetal bar is usually made of brass and iron.

Bimetallic strips and disks, which convert a temperature change into mechanical displacement, are the most recognized bimetallic objects due to their name. However, there are other common bimetallic objects. For example, tin cans consist of steel covered with tin; the tin prevents the can from rusting. To cut costs and prevent people from melting them down for their metal, coins are often composed of a cheap metal covered with a more expensive metal. For example, the United States penny was changed from 95% copper to 97.5% zinc, with a thin copper plating to retain its appearance. A common type of trimetallic object (before the all-aluminium can) was a tin-plated steel can with an aluminium lid with a pull tab. Making the lid out of aluminium allowed it to be pulled off by hand instead of using a can opener, but these cans proved difficult to recycle owing to their mix of metals.

Blades for bandsaws and reciprocating saws are often made with bimetal construction. The teeth, made of high-speed steel, are bonded (by various methods, for example electron beam welding or laser beam welding) to the softer high-carbon steel base. Such construction makes for blades with a better combination of cutting speed and durability than non-bimetal blades, because the advantages and disadvantages of each metal are applied in the best locations: the teeth are harder (and thus cut better) but therefore also more brittle, while the body of the band is softer (which would make for poorer teeth) but less brittle, and thus more resistant to cracking and breaking, which is desirable in the body area.

See also

Bimetallic strip Bimetallism Bi-metallic coin Thermocouple (electric) Copper-clad steel

References

Further reading

Thermal imaging with tapping mode using a bimetal oscillator formed at the end of a cantilever
Bimetal: Definition, Properties, and Applications
Kanthal Thermostatic Bimetal Guide.pdf
How Thermostatic Bimetal Works

Metallurgy Composite materials
Bimetal
Physics,Chemistry,Materials_science,Engineering
521
36,875,991
https://en.wikipedia.org/wiki/Muhammad%20Jamil%20Ahmad%20Mulla
Muhammad Jamil Ahmad Mulla (; born 12 December 1946) is a Saudi engineer who was minister of communications and information technology from 2003 to 2014.

Early life and education

Mulla was born in Madinah on 12 December 1946. He is a graduate of King Saud University, where he received a Bachelor of Science degree in electrical engineering in 1972. He then obtained a Master of Science degree in telecommunications from the University of Colorado at Boulder in 1979. He also participated in various training programs in different countries.

Career

Mulla began his career at the ministry of post, telegraph and telephone in 1972, working initially as a radio engineer and then as an electrical engineer. Later he served as Riyadh province manager and general manager of the central region for telecommunications. In June 2001, Mulla was named governor of the Saudi Telecom Authority. Next, he was appointed assistant deputy minister and then deputy minister of post, telegraph and telephone in charge of operation and maintenance affairs. He then became governor of the Saudi Communications Commission. Mulla was appointed minister of communications and information technology when the office was established on 1 May 2003. His term ended on 8 December 2014, when Fahd bin Matad bin Shafaq Al Hamad was appointed to the post.

References

External links

20th-century Saudi Arabian engineers 21st-century Saudi Arabian engineers 21st-century Saudi Arabian politicians 1946 births Living people Electrical engineers Communication ministers of Saudi Arabia Information ministers of Saudi Arabia King Saud University alumni University of Colorado Boulder alumni
Muhammad Jamil Ahmad Mulla
Engineering
301
70,076,218
https://en.wikipedia.org/wiki/HM%20Sagittae
HM Sagittae is a dusty-type symbiotic nova in the northern constellation of Sagitta. It was discovered by O. D. Dokuchaeva and colleagues in 1975 when it increased in brightness by six magnitudes (a factor of around 250). The object displays an emission-line spectrum similar to a planetary nebula and was detected in the radio band in 1977. Unlike a classical nova, the optical brightness of this system did not rapidly decrease with time, although it showed some variation. It displays activity in every band of the electromagnetic spectrum from X-ray to radio.

Observations in the infrared during 1978 showed this to be a very strong source with a spectrum that is consistent with a binary symbiotic system similar to V1016 Cyg. The cooler stellar component is emitting material that is then ionized by a hot component, with the emission spectrum coming from heated dust generated by the cooler star. By 1983, the infrared emission of the system was shown to vary by 1.5 magnitudes in the K band on a time scale of about 500 days. High-resolution spectral examination of the system in 1984 showed a bipolar outflow of matter with a velocity of . A series of knots extends outward on both sides of the central star to an angular distance of . The nebula surrounding the system shows a bipolar, S-shaped morphology, similar to R Aqr.

The features of the system are consistent with a central red giant star being orbited by a compact object that is accreting matter from the giant. The pair have an angular separation of , with the axis aligned along a position angle of . Their physical separation is estimated at . The giant component is most likely a Mira variable, and measurements up to 1989 found a period of 527 days. It is surrounded by a dusty shell that is mostly composed of silicates. The compact object is a hot white dwarf with 70% of the mass of the Sun, which is orbited by an accretion disk. The nova-like outburst of 1975 may have been generated by a burst of mass transfer from the giant to the white dwarf during the periastron passage of an eccentric orbit, leading to a thermonuclear outburst. Winds from both stars are colliding to produce a shock region that is a source of ultraviolet emission. By 1985, a fading of the brightness and an increase in redness were observed, caused by dust obscuration. The hot component may be inhibiting dust formation around the giant except in the shadow region behind the star, which could explain the observed individual dust obscuration events.

References

Further reading

Cataclysmic variable stars Mira variables M-type giants White dwarfs Binary stars Sagitta Sagittae, HM
HM Sagittae
Astronomy
562
3,878
https://en.wikipedia.org/wiki/Biostatistics
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments, and the interpretation of the results.

History

Biostatistics and genetics

Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies have, since their beginning, used statistical concepts to understand observed experimental results; some geneticists even contributed statistical advances through the development of methods and tools. Gregor Mendel started genetic studies by investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's work on Mendelian inheritance, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model, with fractions of the heredity coming from each ancestor and composing an infinite series; he called this the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance comes exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis. Solving these differences also allowed the definition of the concept of population genetics and brought genetics and evolution together.

The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology. Ronald Fisher worked alongside statistician Betty Allan developing several basic statistical methods in support of his work studying the crop experiments at Rothamsted Research, published in Fisher's books Statistical Methods for Research Workers (1925) and The Genetical Theory of Natural Selection (1930), as well as in Allan's scientific papers. Fisher went on to make many contributions to genetics and statistics, including the ANOVA, the p-value concept, Fisher's exact test and Fisher's equation for population dynamics. He is credited with the sentence "Natural selection is a mechanism for generating an exceedingly high degree of improbability". Sewall G. Wright developed F-statistics and methods of computing them, and defined the inbreeding coefficient. J. B. S. Haldane's book The Causes of Evolution reestablished natural selection as the premier mechanism of evolution by explaining it in terms of the mathematical consequences of Mendelian genetics; he also developed the theory of the primordial soup. These and other biostatisticians, mathematical biologists, and statistically inclined geneticists helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be quantitatively modeled.
In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also helped to add quantitative discipline to biological study. Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining."

Research planning

Any research in the life sciences is proposed to answer a scientific question we might have. To answer this question with high certainty, we need accurate results. The correct definition of the main hypothesis and the research plan will reduce errors when making decisions in understanding a phenomenon. The research plan might include the research question, the hypothesis to be tested, the experimental design, the data collection methods, the data analysis perspectives and the costs involved. It is essential to carry out the study based on the three basic principles of experimental statistics: randomization, replication, and local control.

Research question

The research question will define the objective of a study. The research will be guided by the question, so it needs to be concise, while at the same time focusing on interesting and novel topics that may improve science and knowledge in that field. To define the way to ask the scientific question, an exhaustive literature review might be necessary, so that the research can add value to the scientific community.

Hypothesis definition

Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposition is called the null hypothesis (H0) and is usually based on established knowledge about the topic or on an obvious occurrence of the phenomenon, sustained by a deep literature review; we can say it is the standard expected answer for the data under the situation in test. In general, H0 assumes no association between treatments. On the other hand, the alternative hypothesis (H1) is the denial of H0: it assumes some degree of association between the treatment and the outcome. In any case, the hypothesis is grounded in the research question and its expected and unexpected answers. As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mice metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects on the animals' metabolism (H1: μ1 ≠ μ2). The hypothesis is defined by the researcher, according to his or her interests in answering the main question. Besides that, there can be more than one alternative hypothesis: it can assume not only differences across observed parameters, but also their degree of difference (i.e., higher or lower). A minimal sketch of testing such a hypothesis is shown below.
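As an illustration only, here is a minimal Python sketch of the two-diet example above. The group sizes, means, standard deviations and significance level are invented assumptions, and the two-sample t-test is just one common choice for comparing two group means, not a method prescribed by the text.

```python
# Minimal sketch of the two-diet example: simulate weight-gain data for
# two groups of mice and test H0: mu1 == mu2 against H1: mu1 != mu2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
diet_a = rng.normal(loc=20.0, scale=3.0, size=30)  # grams gained, diet A
diet_b = rng.normal(loc=22.0, scale=3.0, size=30)  # grams gained, diet B

alpha = 0.05                            # significance level, chosen in advance
t_stat, p_value = stats.ttest_ind(diet_a, diet_b)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the diets appear to differ.")
else:
    print("Fail to reject H0: not enough evidence of a difference.")
```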
Sampling

Usually, a study aims to understand the effect of a phenomenon over a population. In biology, a population is defined as all the individuals of a given species in a specific area at a given time. In biostatistics, this concept is extended to a variety of possible collections of study: a population is not only the individuals, but can also be the total of one specific component of their organisms, such as the whole genome, all the sperm cells of an animal, or the total leaf area of a plant.

It is not possible to take measures from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population in order to make posterior inferences about it; the sample should capture as much as possible of the variability across the population. The sample size is determined by several things, from the scope of the research to the resources available. In clinical research, the trial type (inferiority, equivalence, or superiority) is a key factor in determining sample size.

Experimental design

Experimental designs sustain the basic principles of experimental statistics. There are three basic experimental designs for randomly allocating treatments to all plots of an experiment: the completely randomized design, the randomized block design, and factorial designs. Treatments can be arranged in many ways inside the experiment. In agriculture, the correct experimental design is the root of a good study, and the arrangement of treatments within the study is essential because the environment largely affects the plots (plants, livestock, microorganisms). These main arrangements can be found in the literature under the names of "lattices", "incomplete blocks", "split plot", "augmented blocks", and many others. All of the designs might include control plots, determined by the researcher, to provide an error estimation during inference. In clinical studies, the samples are usually smaller than in other biological studies, and in most cases the environment effect can be controlled or measured. It is common to use randomized controlled clinical trials, whose results are usually compared with observational study designs such as case–control or cohort studies.

Data collection

Data collection methods must be considered in research planning, because they highly influence the sample size and the experimental design. Data collection varies according to the type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering the presence or intensity of disease and using score criteria to categorize levels of occurrence. For quantitative data, collection is done by measuring numerical information using instruments. In agriculture and biology studies, yield data and its components can be obtained by metric measures; however, pest and disease injuries in plants are obtained by observation, using score scales for levels of damage. Especially in genetic studies, modern methods for data collection in the field and the laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments and make it possible to evaluate many plots in less time than human-only methods of data collection. Finally, all collected data of interest must be stored in an organized data frame for further analysis. A sketch of the randomization step of a completely randomized design follows.
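As a small, hedged illustration of the completely randomized design mentioned above, the following Python sketch randomly allocates treatments to plots. The treatment names, replicate count and seed are invented for the example.

```python
# Randomization in a completely randomized design: allocate t treatments
# with r replicates each to t*r plots, entirely at random.
import random

treatments = ["control", "diet_A", "diet_B"]
replicates = 4
plots = treatments * replicates          # one entry per plot (t*r entries)
random.seed(7)                           # reproducible allocation
random.shuffle(plots)                    # the randomization step

for plot_id, treatment in enumerate(plots, start=1):
    print(f"plot {plot_id:2d}: {treatment}")
```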
Analysis and data interpretation

Descriptive tools

Data can be represented through tables or graphical representations, such as line charts, bar charts, histograms, and scatter plots. Measures of central tendency and variability can also be very useful to describe an overview of the data. Some examples follow.

Frequency tables: One type of table is the frequency table, which consists of data arranged in rows and columns, where the frequency is the number of occurrences or repetitions of the data. Frequency can be absolute, representing the number of times a given value appears, or relative, obtained by dividing the absolute frequency by the total number of observations. As an example, one could tabulate the number of genes in ten operons of the same organism.

Line graph: Line graphs represent the variation of a value over another metric, such as time. In general, values are represented on the vertical axis, while the time variation is represented on the horizontal axis.

Bar chart: A bar chart is a graph that shows categorical data as bars with heights (vertical bars) or widths (horizontal bars) proportional to the represented values. Bar charts provide an image that could also be represented in a tabular format. In the bar chart example, we have the birth rate in Brazil for the December months from 2010 to 2016; the sharp fall in December 2016 reflects the effect of the Zika virus outbreak on the birth rate in Brazil.

Histogram: The histogram (or frequency distribution) is a graphical representation of a dataset tabulated and divided into uniform or non-uniform classes. It was first introduced by Karl Pearson.

Scatter plot: A scatter plot is a mathematical diagram that uses Cartesian coordinates to display values of a dataset. It shows the data as a set of points, each of which has the value of one variable determining its position on the horizontal axis and another variable on the vertical axis. It is also called a scatter graph, scatter chart, scattergram, or scatter diagram.

Mean: The arithmetic mean is the sum of a collection of values ($x_1, x_2, \ldots, x_n$) divided by the number of items in the collection ($n$): $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$.

Median: The median is the value in the middle of a dataset.

Mode: The mode is the value of a set of data that appears most often.

Box plot: The box plot is a method for graphically depicting groups of numerical data. The maximum and minimum values are represented by the lines, and the interquartile range (IQR) represents 25–75% of the data. Outliers may be plotted as circles.

Correlation coefficients

Although correlations between two different kinds of data can be suggested by graphs such as the scatter plot, it is necessary to validate this through numerical information; for this reason, correlation coefficients are required. They provide a numerical value that reflects the strength of an association.

Pearson correlation coefficient: The Pearson correlation coefficient is a measure of association between two variables, X and Y. This coefficient, usually represented by ρ (rho) for the population and r for the sample, assumes values between −1 and 1, where ρ = 1 represents a perfect positive correlation, ρ = −1 represents a perfect negative correlation, and ρ = 0 means no linear correlation.

Inferential statistics

Inferential statistics are used to make inferences about an unknown population, by estimation and/or hypothesis testing. In other words, it is desirable to obtain parameters to describe the population of interest, but since the data are limited, it is necessary to make use of a representative sample in order to estimate them. With that, it is possible to test previously defined hypotheses and apply the conclusions to the entire population. The standard error of the mean is a measure of variability that is crucial for making inferences. A short sketch of some of the descriptive quantities above follows.
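The following Python sketch computes a few of the descriptive quantities just described: mean, median, mode, and the sample Pearson correlation r. The two data vectors are invented for illustration.

```python
# Descriptive statistics and the sample Pearson correlation coefficient.
import statistics
import numpy as np

x = [1.2, 2.4, 2.4, 3.1, 4.8, 5.0, 6.3]   # arbitrary illustrative data
y = [1.0, 2.0, 2.5, 3.0, 4.5, 5.5, 6.0]   # a second, correlated variable

print("mean  :", statistics.mean(x))
print("median:", statistics.median(x))
print("mode  :", statistics.mode(x))       # most frequent value (2.4)

r = np.corrcoef(x, y)[0, 1]                # Pearson r, between -1 and 1
print(f"Pearson r = {r:.3f}")
```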
Hypothesis testing

Hypothesis testing is essential for making inferences about populations, aiming to answer research questions, as set out in the "Research planning" section. Authors have defined four steps:

The hypothesis to be tested: as stated earlier, we have to work with the definition of a null hypothesis (H0), which is going to be tested, and an alternative hypothesis. Both must be defined before the experiment is implemented.

Significance level and decision rule: a decision rule depends on the level of significance, or, in other words, the acceptable error rate (α). It is easier to think of it as a critical value that determines statistical significance when a test statistic is compared with it. So, α also has to be predefined, before the experiment.

Experiment and statistical analysis: this is when the experiment is really implemented following the appropriate experimental design, data are collected, and the most suitable statistical tests are evaluated.

Inference: made when the null hypothesis is rejected or not rejected, based on the evidence brought by the comparison of the p-value with α. Note that failure to reject H0 just means that there is not enough evidence to support its rejection, not that the hypothesis is true.

Confidence intervals

A confidence interval is a range of values that can contain the true parameter value at a given level of confidence. The first step is to calculate the best unbiased estimate of the population parameter. The upper value of the interval is obtained by adding to this estimate the product of the standard error of the mean and the critical value associated with the confidence level. The calculation of the lower value is similar, but a subtraction must be applied instead of a sum.

Statistical considerations

Power and statistical error

When testing a hypothesis, two types of statistical error are possible: type I error and type II error. The type I error, or false positive, is the incorrect rejection of a true null hypothesis; the type II error, or false negative, is the failure to reject a false null hypothesis. The significance level, denoted by α, is the type I error rate and should be chosen before performing the test. The type II error rate is denoted by β, and the statistical power of the test is 1 − β.

p-value

The p-value is the probability of obtaining results as extreme as or more extreme than those observed, assuming the null hypothesis (H0) is true. It is also called the calculated probability. It is common to confuse the p-value with the significance level (α), but α is a predefined threshold for calling results significant: if p is less than α, the null hypothesis (H0) is rejected.

Multiple testing

In multiple tests of the same hypothesis, the probability of the occurrence of false positives (the familywise error rate) increases, and strategies are used to control this occurrence, commonly by using a more stringent threshold to reject null hypotheses. The Bonferroni correction defines an acceptable global significance level, denoted by α*, and each test is individually compared with a value of α = α*/m. This ensures that the familywise error rate across all m tests is less than or equal to α*. When m is large, the Bonferroni correction may be overly conservative. A sketch of this correction follows.
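Here is a minimal Python sketch of the Bonferroni correction just described: each of m tests is compared against α*/m. The p-values and the global level α* are made up for illustration.

```python
# Bonferroni correction: compare each of m p-values against alpha_star / m.
m_pvalues = [0.001, 0.012, 0.021, 0.040, 0.300]
alpha_star = 0.05                          # desired familywise error rate
alpha_per_test = alpha_star / len(m_pvalues)

for i, p in enumerate(m_pvalues, start=1):
    verdict = "reject H0" if p < alpha_per_test else "fail to reject H0"
    print(f"test {i}: p = {p:.3f} vs {alpha_per_test:.3f} -> {verdict}")
```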
An alternative to the Bonferroni correction is to control the false discovery rate (FDR). The FDR controls the expected proportion of the rejected null hypotheses (the so-called discoveries) that are false (incorrect rejections). For independent tests, the Benjamini–Hochberg procedure ensures that the false discovery rate is at most q*. Thus, the FDR is less conservative than the Bonferroni correction and has more power, at the cost of more false positives.

Mis-specification and robustness checks

The main hypothesis being tested (e.g., no association between treatments and outcomes) is often accompanied by other technical assumptions (e.g., about the form of the probability distribution of the outcomes) that are also part of the null hypothesis. When the technical assumptions are violated in practice, the null may be frequently rejected even if the main hypothesis is true; such rejections are said to be due to model mis-specification. Verifying that the outcome of a statistical test does not change when the technical assumptions are slightly altered (so-called robustness checks) is the main way of combating mis-specification.

Model selection criteria

Model selection criteria are used to select the model that best approximates the true model. Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) are examples of asymptotically efficient criteria.

Developments and big data

Recent developments have had a large impact on biostatistics. Two important changes have been the ability to collect data on a high-throughput scale and the ability to perform much more complex analyses using computational techniques. This comes from developments in areas such as sequencing technologies, bioinformatics and machine learning (machine learning in bioinformatics).

Use in high-throughput data

New biomedical technologies like microarrays, next-generation sequencers (for genomics) and mass spectrometry (for proteomics) generate enormous amounts of data, allowing many tests to be performed simultaneously. Careful analysis with biostatistical methods is required to separate the signal from the noise. For example, a microarray could be used to measure many thousands of genes simultaneously, determining which of them have different expression in diseased cells compared to normal cells; however, only a fraction of the genes will be differentially expressed.

Multicollinearity often occurs in high-throughput biostatistical settings. Due to high intercorrelation between the predictors (such as gene expression levels), the information of one predictor might be contained in another one: it could be that only 5% of the predictors are responsible for 90% of the variability of the response. In such a case, one could apply the biostatistical technique of dimension reduction (for example via principal component analysis). Classical statistical techniques like linear or logistic regression and linear discriminant analysis do not work well for high-dimensional data (i.e., when the number of observations n is smaller than the number of features or predictors p: n < p). As a matter of fact, one can get quite high R2 values despite very low predictive power of the statistical model. These classical statistical techniques (especially least-squares linear regression) were developed for low-dimensional data (i.e., where the number of observations n is much larger than the number of predictors p: n >> p). In cases of high dimensionality, one should always consider an independent validation test set and the corresponding residual sum of squares (RSS) and R2 of the validation test set, not those of the training set. Two sketches follow: one of the FDR-controlling procedure above, and one of dimension reduction via PCA.
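The first sketch implements the Benjamini–Hochberg step-up procedure for FDR control discussed above: sort the m p-values, find the largest k with p(k) ≤ (k/m)·q*, and reject the nulls of the k smallest p-values. The p-values here are invented.

```python
# Benjamini-Hochberg FDR procedure (step-up) for independent tests.
import numpy as np

def benjamini_hochberg(pvalues, q_star=0.05):
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)                           # sort p-values ascending
    thresholds = (np.arange(1, m + 1) / m) * q_star # (k/m) * q* for each rank k
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                      # reject k smallest p-values
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.300, 0.900]
print(benjamini_hochberg(pvals))                    # True = discovery
```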
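The second sketch illustrates dimension reduction via principal component analysis for the n < p setting just described. The 20-by-100 matrix with 3 underlying factors is simulated; shapes and parameters are illustrative assumptions.

```python
# Dimension reduction via PCA when n < p: 20 samples, 100 correlated predictors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(20, 3))                  # 3 true underlying factors
loadings = rng.normal(size=(3, 100))
X = latent @ loadings + 0.1 * rng.normal(size=(20, 100))  # n=20, p=100

pca = PCA(n_components=3)
scores = pca.fit_transform(X)                      # 20 x 3 reduced representation
print(pca.explained_variance_ratio_)               # most variance in few components
```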
Often, it is useful to pool information from multiple predictors together. For example, Gene Set Enrichment Analysis (GSEA) considers the perturbation of whole (functionally related) gene sets rather than of single genes. These gene sets might be known biochemical pathways or otherwise functionally related genes. The advantage of this approach is that it is more robust: it is more likely that a single gene is found to be falsely perturbed than it is that a whole pathway is falsely perturbed. Furthermore, one can integrate the accumulated knowledge about biochemical pathways (like the JAK-STAT signaling pathway) using this approach.

Bioinformatics advances in databases, data mining, and biological interpretation

The development of biological databases enables the storage and management of biological data, with the possibility of ensuring access for users around the world. Such databases are useful for researchers depositing data, retrieving information and files (raw or processed) originating from other experiments, or indexing scientific articles, as in PubMed. Another possibility is to search for a desired term (a gene, a protein, a disease, an organism, and so on) and check all results related to this search. There are databases dedicated to SNPs (dbSNP), to knowledge on gene characterization and pathways (KEGG), and to the description of gene function, classified by cellular component, molecular function and biological process (Gene Ontology). In addition to databases that contain specific molecular information, there are others that are ample in the sense that they store information about an organism or group of organisms. As an example of a database directed towards just one organism, but one that contains much data about it, there is the Arabidopsis thaliana genetic and molecular database, TAIR. Phytozome, in turn, stores the assemblies and annotation files of dozens of plant genomes, also containing visualization and analysis tools. Moreover, some databases are interconnected for information exchange and sharing; a major initiative was the International Nucleotide Sequence Database Collaboration (INSDC), which relates data from DDBJ, EMBL-EBI, and NCBI.

Nowadays, the increasing size and complexity of molecular datasets leads to the use of powerful statistical methods provided by computer-science algorithms developed in the machine learning area. Data mining and machine learning allow the detection of patterns in data with a complex structure, such as biological data, by using methods of supervised and unsupervised learning, regression, cluster detection and association-rule mining, among others. To indicate some of them: self-organizing maps and k-means are examples of cluster algorithms; neural network implementations and support vector machine models are examples of common machine learning algorithms. Collaborative work among molecular biologists, bioinformaticians, statisticians and computer scientists is important to perform an experiment correctly, going from planning, passing through data generation and analysis, and ending with the biological interpretation of the results.

Use of computationally intensive methods

On the other hand, the advent of modern computer technology and relatively cheap computing resources have enabled computer-intensive biostatistical methods like bootstrapping and re-sampling methods (a bootstrap sketch is shown below). In recent times, random forests have gained popularity as a method for performing statistical classification. Random forest techniques generate a panel of decision trees. Decision trees have the advantage that you can draw them and interpret them (even with a basic understanding of mathematics and statistics); random forests have thus been used for clinical decision support systems.
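As a minimal sketch of the bootstrap mentioned above, the following resamples the data with replacement to estimate a 95% confidence interval for the mean. The sample values and the number of resamples are illustrative assumptions.

```python
# Bootstrap: resample with replacement to estimate a 95% CI for the mean.
import numpy as np

rng = np.random.default_rng(1)
sample = np.array([4.1, 5.3, 5.8, 6.0, 6.2, 7.4, 8.0, 9.1])

boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(10_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap 95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```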
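And here is a hedged sketch of a random-forest classifier in the spirit of the clinical decision support use just mentioned. The "clinical" dataset is simulated, and the feature count and labels are invented assumptions, not a real decision-support system.

```python
# Random forest classification: a panel of decision trees voting together.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 8))                  # 300 patients, 8 measurements
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # simulated diagnosis label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_tr, y_tr)                         # fit the panel of trees
print("held-out accuracy:", forest.score(X_te, y_te))
```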
Applications

Public health

Applications include public health, covering epidemiology, health services research, nutrition, environmental health and health care policy and management. In these medical contexts, it is important to consider the design and analysis of clinical trials. One example is the assessment of the severity state of a patient with a prognosis of the outcome of a disease. With new technologies and genetics knowledge, biostatistics is now also used for systems medicine, which consists of a more personalized medicine. For this, data from different sources are integrated, including conventional patient data, clinico-pathological parameters, molecular and genetic data, as well as data generated by additional new-omics technologies.

Quantitative genetics

The study of population genetics and statistical genetics aims to link variation in genotype with variation in phenotype. In other words, it is desirable to discover the genetic basis of a measurable trait, a quantitative trait, that is under polygenic control. A genome region that is responsible for a continuous trait is called a quantitative trait locus (QTL). The study of QTLs became feasible by using molecular markers and measuring traits in populations, but their mapping requires a population obtained from an experimental cross, like an F2 or recombinant inbred strains/lines (RILs). To scan for QTL regions in a genome, a gene map based on linkage has to be built. Some of the best-known QTL mapping algorithms are Interval Mapping, Composite Interval Mapping, and Multiple Interval Mapping.

However, QTL mapping resolution is impaired by the amount of recombination assayed, a problem for species in which it is difficult to obtain large numbers of offspring. Furthermore, allele diversity is restricted to individuals originating from contrasting parents, which limits studies of allele diversity when we have a panel of individuals representing a natural population. For this reason, the genome-wide association study (GWAS) was proposed, in order to identify QTLs based on linkage disequilibrium, that is, the non-random association between traits and molecular markers. It was leveraged by the development of high-throughput SNP genotyping.

In animal and plant breeding, the use of markers in selection, mainly molecular ones, contributed to the development of marker-assisted selection. While QTL mapping is limited by resolution, GWAS does not have enough power for rare variants of small effect that are also influenced by the environment. So, the concept of genomic selection (GS) arose, in order to use all molecular markers in selection and to allow the prediction of the performance of candidates in this selection. The proposal is to genotype and phenotype a training population and develop a model that can obtain the genomic estimated breeding values (GEBVs) of individuals belonging to a population that has been genotyped but not phenotyped, called the testing population. This kind of study could also include a validation population, following the concept of cross-validation, in which the real phenotype results measured in this population are compared with the phenotype results based on the prediction, which is used to check the accuracy of the model.

As a summary, quantitative genetics has been applied in agriculture to improve crops (plant breeding) and livestock (animal breeding), and in biomedical research it can assist in finding candidate gene alleles that can cause or influence predisposition to diseases in human genetics. A toy sketch of genomic prediction follows.
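The following is a toy sketch in the spirit of the genomic selection scheme described above: a ridge-regression model (an RR-BLUP-like shrinkage approach, one of several methods used in practice) is trained on a genotyped-and-phenotyped training population and then predicts GEBV-like values for a genotyped-only testing population. All marker data, effect sizes and the ridge penalty are simulated assumptions.

```python
# Toy genomic prediction: shrink many marker effects with ridge regression,
# then predict breeding values for individuals with genotypes only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_train, n_test, n_markers = 200, 50, 500
effects = rng.normal(scale=0.1, size=n_markers)       # true marker effects

X_train = rng.integers(0, 3, size=(n_train, n_markers)).astype(float)
X_test = rng.integers(0, 3, size=(n_test, n_markers)).astype(float)
y_train = X_train @ effects + rng.normal(scale=1.0, size=n_train)
y_test = X_test @ effects + rng.normal(scale=1.0, size=n_test)

model = Ridge(alpha=10.0).fit(X_train, y_train)       # shrink marker effects
gebv = model.predict(X_test)                          # predicted breeding values
accuracy = np.corrcoef(gebv, y_test)[0, 1]            # predictive accuracy
print(f"prediction accuracy r = {accuracy:.2f}")
```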
Expression data

Studies of the differential expression of genes from RNA-Seq data, as for RT-qPCR and microarrays, demand comparisons between conditions. The goal is to identify genes which have a significant change in abundance between different conditions. Experiments are then designed appropriately, with replicates for each condition/treatment, and with randomization and blocking when necessary. In RNA-Seq, the quantification of expression uses the information of mapped reads that are summarized in some genetic unit, such as the exons that are part of a gene sequence. While microarray results can be approximated by a normal distribution, RNA-Seq count data are better explained by other distributions. The first distribution used was the Poisson, but it underestimates the sample error, leading to false positives. Currently, biological variation is considered by methods that estimate a dispersion parameter of a negative binomial distribution. Generalized linear models are used to perform the tests for statistical significance, and as the number of genes is high, multiple-test corrections have to be considered (a hedged sketch follows). Some examples of other analyses on genomics data come from microarray or proteomics experiments, often concerning diseases or disease stages.
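Here is a hedged sketch of the count-based testing just described: a negative binomial GLM is fitted for one gene's counts across two conditions and the condition effect is tested. The counts and the fixed dispersion are illustrative assumptions; real RNA-Seq tools estimate dispersion by sharing information across genes, which this sketch does not attempt.

```python
# Negative binomial GLM for one gene: test a two-condition expression effect.
import numpy as np
import statsmodels.api as sm

counts = np.array([57, 63, 49, 98, 105, 88])     # 3 replicates per condition
condition = np.array([0, 0, 0, 1, 1, 1])         # 0 = control, 1 = treated
design = sm.add_constant(condition)              # intercept + condition effect

model = sm.GLM(counts, design,
               family=sm.families.NegativeBinomial(alpha=0.1))
result = model.fit()
print(result.summary().tables[1])                # z-test for the condition term
```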
In the case of bioinformatics, for example, there are packages located in the main repository (CRAN) and in others, such as Bioconductor. It is also possible to use packages under development that are shared on hosting services such as GitHub. SAS: A widely used data analysis software suite, found throughout universities, services and industry. Developed by a company of the same name (SAS Institute), it uses the SAS language for programming. PLA 3.0: A biostatistical analysis software package for regulated environments (e.g. drug testing) which supports Quantitative Response Assays (Parallel-Line, Parallel-Logistics, Slope-Ratio) and Dichotomous Assays (Quantal Response, Binary Assays). It also supports weighting methods for combination calculations and the automatic data aggregation of independent assay data. Weka: A Java software package for machine learning and data mining, including tools and methods for visualization, clustering, regression, association rules, and classification. There are tools for cross-validation and bootstrapping, and a module for algorithm comparison. Weka can also be run from other programming languages such as Perl or R. Python (programming language): image analysis, deep learning, machine learning SQL databases NoSQL NumPy: numerical Python SciPy SageMath LAPACK: linear algebra MATLAB Apache Hadoop Apache Spark Amazon Web Services Scope and training programs Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of public health, affiliated with schools of medicine, forestry, or agriculture, or as a focus of application in departments of statistics. In the United States, where several universities have dedicated biostatistics departments, many other top-tier universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus, departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas older departments, typically affiliated with schools of public health, have more traditional lines of research involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities around the world, where both a statistics and a biostatistics department exist, the degree of integration between the two departments may range from the bare minimum to very close collaboration. In general, the difference between a statistics program and a biostatistics program is twofold: (i) statistics departments will often host theoretical/methodological research which is less common in biostatistics programs and (ii) statistics departments have lines of research that may include biomedical applications but also other areas such as industry (quality control), business and economics, and biological areas other than medicine.
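As promised above, here is a minimal sketch of the RNA-Seq differential-expression workflow: a per-gene negative binomial GLM followed by Benjamini–Hochberg correction. The simulated count matrix, the fixed dispersion value and the two-condition design are illustrative assumptions; real analyses estimate the dispersion from the data (as edgeR or DESeq2 do) rather than fixing it.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)

# Hypothetical counts: 100 genes x 6 samples (3 control, 3 treated)
counts = rng.negative_binomial(n=10, p=0.1, size=(100, 6))
condition = np.array([0, 0, 0, 1, 1, 1], dtype=float)
design = sm.add_constant(condition)            # intercept + condition indicator

pvalues = []
for gene_counts in counts:
    # Negative binomial GLM with a fixed (assumed) dispersion alpha; the
    # p-value of the condition coefficient tests for differential expression
    model = sm.GLM(gene_counts, design,
                   family=sm.families.NegativeBinomial(alpha=0.1))
    pvalues.append(model.fit().pvalues[1])

# Multiple-testing correction across all genes (Benjamini-Hochberg FDR)
rejected, qvalues, _, _ = multipletests(pvalues, alpha=0.05, method="fdr_bh")
print(f"{rejected.sum()} genes called differentially expressed at FDR 0.05")
```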
Specialized journals Biostatistics International Journal of Biostatistics Journal of Epidemiology and Biostatistics Biostatistics and Public Health Biometrics Biometrika Biometrical Journal Communications in Biometry and Crop Science Statistical Applications in Genetics and Molecular Biology Statistical Methods in Medical Research Pharmaceutical Statistics Statistics in Medicine See also Bioinformatics Epidemiological method Epidemiology Group size measures Health indicator Mathematical and theoretical biology References External links The International Biometric Society The Collection of Biostatistics Research Archive Guide to Biostatistics (MedPageToday.com) Biomedical Statistics Bioinformatics
Biostatistics
Engineering,Biology
6,812
1,005,746
https://en.wikipedia.org/wiki/Supercritical%20flow
A supercritical flow is a flow whose velocity is larger than the wave velocity. The analogous condition in gas dynamics is supersonic speed. According to the website Civil Engineering Terms, supercritical flow is defined as follows: The flow at which depth of the channel is less than critical depth, velocity of flow is greater than critical velocity and slope of the channel is also greater than the critical slope is known as supercritical flow. Information travels at the wave velocity. This is the velocity at which waves travel outwards from a pebble thrown into a lake. The flow velocity is the velocity at which a leaf in the flow travels. If a pebble is thrown into a supercritical flow then the ripples will all move downstream, whereas in a subcritical flow some would travel upstream and some would travel downstream. It is only in supercritical flows that hydraulic jumps (bores) can occur. In fluid dynamics, the change from one behaviour to the other is often described by a dimensionless quantity, where the transition occurs whenever this number becomes less or more than one. One of these numbers is the Froude number: $\mathrm{Fr} = U / \sqrt{g h}$, where U = velocity of the flow, g = acceleration due to gravity (9.81 m/s² or 32.2 ft/s²), and h = depth of flow relative to the channel bottom. If $\mathrm{Fr} < 1$, we call the flow subcritical; if $\mathrm{Fr} > 1$, we call the flow supercritical. If $\mathrm{Fr} = 1$, it is critical. See also Supercritical fluid Supercritical vs. subcritical flow Supersonic Hypersonic Sonic black hole References Chanson, Hubert (1999). The Hydraulics of Open Channel Flow: An Introduction. Physical Modelling of Hydraulics. Fluid dynamics
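A minimal sketch of the classification above, computing the Froude number for a given flow velocity and depth; the numbers in the example call are illustrative.

```python
import math

def froude_number(velocity_m_s: float, depth_m: float, g: float = 9.81) -> float:
    """Froude number Fr = U / sqrt(g * h) for open-channel flow."""
    return velocity_m_s / math.sqrt(g * depth_m)

def classify_flow(fr: float, tol: float = 1e-9) -> str:
    # Fr < 1: subcritical, Fr > 1: supercritical, Fr == 1: critical
    if abs(fr - 1.0) < tol:
        return "critical"
    return "supercritical" if fr > 1.0 else "subcritical"

fr = froude_number(velocity_m_s=3.0, depth_m=0.5)   # illustrative values
print(f"Fr = {fr:.2f} -> {classify_flow(fr)}")      # Fr ~ 1.35 -> supercritical
```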
Supercritical flow
Chemistry,Engineering
348
28,093,559
https://en.wikipedia.org/wiki/Rate%20Based%20Satellite%20Control%20Protocol
In computer networking, Rate Based Satellite Control Protocol (RBSCP) is a tunneling method proposed by Cisco to improve the performance of satellite network links with high latency and error rates. The problem RBSCP addresses is that the long RTT on the link keeps TCP virtual circuits in slow start for a long time. This, in addition to the high loss rate, yields very low throughput on the channel. Since satellite links may be high-throughput, the overall link utilization may be below what is optimal from a technical and economic view. Means of operation RBSCP works by tunneling the usual IP packets within IP packets. The transport protocol identifier is 199. On each end of the tunnel, routers buffer packets to utilize the link better. In addition to this, RBSCP tunnel routers: modify TCP options at connection setup; implement a Performance Enhancing Proxy (PEP) that resends lost packets on behalf of the client, so loss is not interpreted as congestion. External links https://web.archive.org/web/20110706144353/http://cisco.biz/en/US/docs/ios/12_3t/12_3t7/feature/guide/gt_rbscp.html Computer networking
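To make the encapsulation concrete, here is a small sketch that builds an IP-in-IP packet carrying the RBSCP protocol number 199, using the Scapy packet library. The addresses are placeholders and this only illustrates the framing; it is not an implementation of RBSCP's buffering or PEP behaviour.

```python
from scapy.all import IP, TCP, Raw

# Hypothetical inner packet: the ordinary TCP/IP traffic to be tunneled
inner = IP(src="10.0.0.1", dst="10.0.1.1") / TCP(sport=12345, dport=80)

# Outer header between the two tunnel routers; proto=199 marks RBSCP
outer = IP(src="192.0.2.1", dst="192.0.2.2", proto=199) / Raw(bytes(inner))

print(outer.summary())      # shows the outer IP header carrying protocol 199
print(len(bytes(outer)))    # total size of the encapsulated packet
```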
Rate Based Satellite Control Protocol
Technology,Engineering
270
5,844,960
https://en.wikipedia.org/wiki/Oprelvekin
Oprelvekin is recombinant interleukin eleven (IL-11), a thrombopoietic growth factor that directly stimulates the proliferation of hematopoietic stem cells and megakaryocyte progenitor cells and induces megakaryocyte maturation, resulting in increased platelet production. It is marketed under the trade name Neumega. Chemical, pharmacological and marketing data IL-11 is a member of a family of human growth factors and is produced in the bone marrow of healthy adults. Synonyms are: AGIF, adipogenesis inhibitory factor, interleukin-11 precursor. Oprelvekin is produced in Escherichia coli (E. coli) by recombinant DNA technology. The protein has a molecular mass of approximately 19,000 g/mol and is non-glycosylated. The polypeptide is 177 amino acids in length (the natural IL-11 has 178). This alteration has not resulted in measurable differences in bioactivity either in vitro or in vivo. The primary hematopoietic activity of Neumega is stimulation of megakaryocytopoiesis and thrombopoiesis. In mouse and nonhuman primate studies, Neumega has shown potent thrombopoietic activity in compromised hematopoiesis, including moderately to severely myelosuppressed animals. In these studies, Neumega improved platelet nadirs and accelerated platelet recoveries compared to controls. In animal studies oprelvekin also has non-hematopoietic activities. These include the regulation of intestinal epithelium growth (enhanced healing of gastrointestinal lesions), the inhibition of adipogenesis, the induction of acute phase protein synthesis (e.g., fibrinogen), and the inhibition of macrophage-released pro-inflammatory cytokines. However, pathologic changes, some also seen in humans, have been noticed: papilledema, fibrosis of tendons and joint capsules, periosteal thickening, and embryotoxicity (see under pregnancy). In preclinical human trials, mature megakaryocytes which developed during in vivo treatment with Neumega were ultrastructurally, morphologically, and functionally normal. They also showed a normal life span. In a study in which a single 50 μg/kg subcutaneous dose was administered to eighteen healthy men, the peak serum concentration (Cmax) of 17.4 ± 5.4 ng/mL was reached at 3.2 ± 2.4 h (Tmax) following dosing. The terminal half-life was 6.9 ± 1.7 hours. In a second study in which single 75 μg/kg subcutaneous and intravenous doses were administered to twenty-four healthy subjects, the pharmacokinetic profiles were similar between men and women. The absolute bioavailability of Neumega was >80%. In a study in which multiple subcutaneous doses of both 25 and 50 μg/kg were administered to cancer patients receiving chemotherapy, Neumega did not accumulate and clearance of Neumega was not altered following multiple doses. Pediatric cancer patients treated with aggressive chemotherapy showed similar pharmacokinetic characteristics. In humans treated with oprelvekin on a daily basis, a twofold increase in fibrinogen levels occurred. Healthy volunteers displayed an increase in von Willebrand factor (vWF) activity. Isolated molecules formed under oprelvekin were found to have exactly the same multimeric structure as the 'normal' factor and were therefore fully functional. These increases in coagulation factors may contribute to the development of stroke (see below), but a precise association cannot be made at this stage.
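The pharmacokinetic figures quoted above (Cmax ≈ 17.4 ng/mL at Tmax ≈ 3.2 h, terminal half-life ≈ 6.9 h) can be turned into a back-of-the-envelope concentration estimate. The sketch below assumes a simple mono-exponential decline after Tmax, which is a deliberate simplification of the real absorption/elimination profile.

```python
import math

C_MAX_NG_ML = 17.4   # peak serum concentration from the 50 ug/kg study
T_MAX_H = 3.2        # time of peak concentration (hours)
T_HALF_H = 6.9       # terminal half-life (hours)

def concentration(t_hours: float) -> float:
    """Serum concentration (ng/mL), assuming mono-exponential decay after Tmax."""
    if t_hours <= T_MAX_H:
        raise ValueError("model only valid after Tmax")
    k_el = math.log(2) / T_HALF_H               # elimination rate constant
    return C_MAX_NG_ML * math.exp(-k_el * (t_hours - T_MAX_H))

for t in (6, 12, 24):
    print(f"t = {t:2d} h: ~{concentration(t):5.2f} ng/mL")
# roughly 13.1, 7.2, and 2.2 ng/mL under this simplified model
```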
In a variety of clinical studies upon which FDA approval is based, Neumega proved effective in reducing thrombocytopenia in oncologic patients treated with myelosuppressant chemotherapeutic drugs, as measured by a significantly decreased need for platelet transfusions. Neumega is manufactured and sold by Wyeth. The drug is formulated in single-use vials containing 5 mg of oprelvekin (specific activity approximately 8 × 10⁶ units/mg) as a sterile, lyophilized powder. The FDA approved the drug in 1997. Indications Neumega is indicated for the prevention of severe thrombocytopenia and the reduction of the need for platelet transfusions following myelosuppressive chemotherapy in adult patients with nonmyeloid malignancies who are at high risk of severe thrombocytopenia. Efficacy was demonstrated in patients who had experienced severe thrombocytopenia following the previous chemotherapy cycle. Contraindications and precautions Patients with known hypersensitivity to oprelvekin itself or any other ingredient. Patients with severe or decompensated heart failure should not be treated, because oprelvekin may cause excessive fluid retention with edema and cardiac decompensation. Patients with compensated heart disease should be treated with caution and under permanent clinical supervision. Neumega is not indicated following myeloablative chemotherapy (increased likelihood of severe side effects) or in pediatric patients. Renal impairment: Neumega is excreted renally. No differences in pharmacokinetic parameters or clinical outcome have been seen in mild to moderate impairment. Severe impairment has led to an increased number of patients with reduced hemoglobin due to dilutional anemia. Patients with severely disturbed renal function should be monitored very closely. The efficacy of oprelvekin has not been systematically studied in patients receiving chemotherapy regimens of more than 5 days' duration per cycle or in regimens containing agents that induce delayed thrombocytopenia (e.g., nitrosoureas, mitomycin C). Neumega should not be given in these cases. Pregnancy In studies with rats and rabbits treated chronically, oprelvekin showed embryo- and fetotoxicity (early death of embryos, reduction of the number of fetuses, fetal malformations, etc.). There are no sufficient human data available. Pregnant women should only be treated if the benefit to the mother outweighs the potential risk to the unborn child. Lactation No human data are available on whether the drug is distributed into human milk. Nursing women should either discontinue breast-feeding or discontinue Neumega; the decision should take into account the importance of the drug to the mother. Side effects Neumega has caused allergic reactions which at times have been very serious. Symptoms have included edema of the face, tongue, or larynx; shortness of breath; wheezing; chest pain; hypotension (including shock); dysarthria; loss of consciousness; rash; urticaria; flushing; and fever. These reactions can occur after the first dose or after any later application. Neumega should be permanently discontinued in patients with any sign of allergy. Treatment is largely symptomatic. Oprelvekin has also quite often caused fluid retention, ranging from peripheral edema (approximately 40% of patients) to dyspnea and fully developed pulmonary edema with or without cardiac decompensation (see contraindications and precautions). These symptoms have led to some deaths. Fluid retention may also lead to dilutional anemia (in 10 to 15% of patients). Hypokalemia may also result.
Symptoms of fluid retention have been observed more often in patients following myeloablative chemotherapy (see contraindications). Severe arrhythmias (atrial flutter and atrial fibrillation) as well as fatal cardiac arrest have also been seen, which may or may not be attributable to fluid retention/increased volume. Isolated cases of stroke have been noted; patients with previous transient ischemic attacks or partial/minor strokes may be at particular risk. Papilledema of the eyes has been observed (2%) and may lead to disturbed visual acuity and even temporary or permanent blindness. Patients with preexisting papilledema or with involvement of the central nervous system may be at higher risk. In postmarketing studies, isolated cases of severe ventricular arrhythmias and renal failure have been seen. Injection site reactions (dermatitis, pain, and discoloration) have also been observed, but are usually mild. Interactions The concomitant application of colony-stimulating factors such as filgrastim (G-CSF) or sargramostim (GM-CSF) showed no potential interactions. Additionally, no other interactions are known. Interactions with drugs undergoing P450 enzyme metabolism are not likely to occur. Necessary examinations during treatment Complete blood counts should be obtained before starting chemotherapy and at short intervals afterwards. Platelet counts should be taken at the time of the expected nadir (lowest number of platelets) and at least until recovery starts (platelet counts greater than 50,000). The patient should be watched for signs of allergy, fluid retention and anemia during and after therapy with Neumega. Preexisting ascites and pericardial effusions should be monitored closely for signs of worsening. Dosage regimen The dosage in patients without severe renal impairment is 50 μg/kg subcutaneously once a day, administered in the abdomen, thigh, or hip. Most patients will be able to self-administer the drug after appropriate training. Patients with severe renal impairment should receive only 25 μg/kg daily. The first dose should be given 6 to 24 hours after completion of chemotherapy. Dosing should be continued until platelet counts reach at least 50,000 cells. Usually, one course of Neumega encompasses 10 to 21 days. The drug should be discontinued at least 2 days before starting the next chemotherapy cycle. Additional information Neumega vials must be stored in a refrigerator. Protect from light. Do not freeze. References http://www.rxlist.com/cgi/generic3/oprelvek.htm http://www.wyeth.com/products_hcp?product=/wyeth_html/home/products/prescription/Neumega®%20(oprelvekin)/Neumega®%20(oprelvekin)_overview.html (Drug Information provided by Wyeth) Cytokines Immunostimulants Embryotoxicants Drugs developed by Pfizer Fetotoxicants
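As an illustration of the dosage arithmetic in the regimen above (50 μg/kg daily, reduced to 25 μg/kg in severe renal impairment, supplied in 5 mg single-use vials), here is a small sketch. It is a toy calculation for illustration only, not clinical guidance.

```python
VIAL_CONTENT_UG = 5000.0   # one single-use vial contains 5 mg = 5000 ug

def daily_dose_ug(weight_kg: float, severe_renal_impairment: bool = False) -> float:
    """Daily oprelvekin dose: 50 ug/kg, or 25 ug/kg with severe renal impairment."""
    per_kg = 25.0 if severe_renal_impairment else 50.0
    return per_kg * weight_kg

for weight, renal in ((70, False), (70, True)):
    dose = daily_dose_ug(weight, renal)
    print(f"{weight} kg, renal impairment={renal}: {dose:.0f} ug/day "
          f"({dose / VIAL_CONTENT_UG:.0%} of one 5 mg vial)")
# 70 kg patient: 3500 ug/day (70% of a vial); 1750 ug/day (35%) with impairment
```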
Oprelvekin
Chemistry
2,196
4,322
https://en.wikipedia.org/wiki/Borel%20measure
In mathematics, specifically in measure theory, a Borel measure on a topological space is a measure that is defined on all open sets (and thus on all Borel sets). Some authors require additional restrictions on the measure, as described below. Formal definition Let $X$ be a locally compact Hausdorff space, and let $\mathfrak{B}(X)$ be the smallest σ-algebra that contains the open sets of $X$; this is known as the σ-algebra of Borel sets. A Borel measure is any measure $\mu$ defined on the σ-algebra of Borel sets. A few authors require in addition that $\mu$ is locally finite, meaning that every point has an open neighborhood with finite measure. For Hausdorff spaces, this implies that $\mu(C) < \infty$ for every compact set $C$; and for locally compact Hausdorff spaces, the two conditions are equivalent. If a Borel measure $\mu$ is both inner regular and outer regular, it is called a regular Borel measure. If $\mu$ is both inner regular, outer regular, and locally finite, it is called a Radon measure. Alternatively, if a regular Borel measure $\mu$ is tight, it is a Radon measure. If $X$ is a separable complete metric space, then every Borel measure on $X$ is a Radon measure. On the real line The real line $\mathbb{R}$ with its usual topology is a locally compact Hausdorff space; hence we can define a Borel measure on it. In this case, $\mathfrak{B}(\mathbb{R})$ is the smallest σ-algebra that contains the open intervals of $\mathbb{R}$. While there are many Borel measures μ, the choice of Borel measure that assigns $\mu((a, b]) = b - a$ for every half-open interval $(a, b]$ is sometimes called "the" Borel measure on $\mathbb{R}$. This measure turns out to be the restriction to the Borel σ-algebra of the Lebesgue measure $\lambda$, which is a complete measure and is defined on the Lebesgue σ-algebra. The Lebesgue σ-algebra is actually the completion of the Borel σ-algebra, which means that it is the smallest σ-algebra that contains all the Borel sets and can be equipped with a complete measure. Also, the Borel measure $\mu$ and the Lebesgue measure $\lambda$ coincide on the Borel sets (i.e., $\lambda(E) = \mu(E)$ for every Borel measurable set $E$, where $\mu$ is the Borel measure described above). This idea extends to finite-dimensional spaces (the Cramér–Wold theorem, below) but does not hold, in general, for infinite-dimensional spaces. Infinite-dimensional Lebesgue measures do not exist. Product spaces If X and Y are second-countable, Hausdorff topological spaces, then the set of Borel subsets of their product coincides with the product of the sets of Borel subsets of X and Y. That is, the Borel functor from the category of second-countable Hausdorff spaces to the category of measurable spaces preserves finite products. Applications Lebesgue–Stieltjes integral The Lebesgue–Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue–Stieltjes measure, which may be associated to any function of bounded variation on the real line. The Lebesgue–Stieltjes measure is a regular Borel measure, and conversely every regular Borel measure on the real line is of this kind. Laplace transform One can define the Laplace transform of a finite Borel measure μ on the real line by the Lebesgue integral $(\mathcal{L}\mu)(s) = \int_{[0,\infty)} e^{-st} \, d\mu(t)$. An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes $(\mathcal{L}f)(s) = \int_{0^-}^{\infty} e^{-st} f(t) \, dt$, where the lower limit of 0− is shorthand notation for $\lim_{\varepsilon \downarrow 0} \int_{-\varepsilon}^{\infty}$. This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform.
Although with the Lebesgue integral it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform. Moment problem One can define the moments of a finite Borel measure μ on the real line by the integral $m_n = \int_{-\infty}^{\infty} x^n \, d\mu(x)$. For support $(-\infty, \infty)$, $[0, \infty)$, and $[0, 1]$, these correspond to the Hamburger moment problem, the Stieltjes moment problem and the Hausdorff moment problem, respectively. The question or problem to be solved is: given a collection of such moments, is there a corresponding measure? For the Hausdorff moment problem, the corresponding measure is unique. For the other variants, in general, there are an infinite number of distinct measures that give the same moments. Hausdorff dimension and Frostman's lemma Given a Borel measure μ on a metric space X such that μ(X) > 0 and $\mu(B(x, r)) \le r^s$ holds for some constant s > 0 and for every ball B(x, r) in X, then the Hausdorff dimension dimHaus(X) ≥ s. A partial converse is provided by the Frostman lemma: Lemma: Let A be a Borel subset of Rn, and let s > 0. Then the following are equivalent: Hs(A) > 0, where Hs denotes the s-dimensional Hausdorff measure. There is an (unsigned) Borel measure μ satisfying μ(A) > 0, and such that $\mu(B(x, r)) \le r^s$ holds for all x ∈ Rn and r > 0. Cramér–Wold theorem The Cramér–Wold theorem in measure theory states that a Borel probability measure on $\mathbb{R}^k$ is uniquely determined by the totality of its one-dimensional projections. It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold. See also Jacobi operator References Further reading Gaussian measure, a finite-dimensional Borel measure. Wiener's lemma (related). External links Borel measure at Encyclopedia of Mathematics Measures (measure theory)
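For concreteness, one standard way to state the Cramér–Wold device in its convergence form is sketched below; the random-vector notation is supplied here as an illustration, not taken from the article.

```latex
% Cramér–Wold device (convergence form): for random vectors X_n, X in R^k,
% X_n converges in distribution to X iff every one-dimensional projection does.
\[
X_n \xrightarrow{d} X
\quad\Longleftrightarrow\quad
\langle t, X_n \rangle \xrightarrow{d} \langle t, X \rangle
\ \text{for every } t \in \mathbb{R}^k .
\]
```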
Borel measure
Physics,Mathematics
1,197
41,825,367
https://en.wikipedia.org/wiki/List%20of%20email%20archive%20software
This article provides a list of software products and cloud-based services used for email archiving. Email archiving has several objectives: long-term preservation of knowledge, regulatory compliance, legal protection, etc. Those different goals may call for different solutions. For example, the preservation of historical records may require a one-time migration (change of format) and storage, while regulatory compliance generally calls for the systematic and continuous archival of messages, using a solution that provides storage and retrieval of email over extended time periods. Different products have been developed to meet those needs. They may be offered as a standalone computer appliance (possibly in the form of a virtual machine), as installable software that can be deployed on the user's premises, or as a cloud-based service. Those products may perform compression, de-duplication, encryption, indexing and advanced searching. They may work with a variety of email data sources (email systems and email storage file formats) and they may also support other types of messaging systems such as social media or instant messaging. Additionally, several collaborative software products (which commonly have messaging components) can also archive the data that they manage; however, those systems are not listed here since they generally do not archive third-party email data. Components In addition to a backup system, several other components are necessary for a useful email archiving system: Message header/metadata. Message body and related document attachments. Search and retrieval. Efficient storage. Reliable gathering of email flow. Policy enforcement, such as retention policy. Compliance certification. Notable products and services See also Comparison of mail servers Electronic discovery Electronic message journaling Email archiving File archive Message transfer agent References Computer archives Message transfer agents Mail servers Records management technology
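The components listed above (header metadata, message body and attachments, search, storage) can be illustrated with a toy archiver built on Python's standard-library mailbox and email modules. The mbox path and the JSON-lines storage format are illustrative choices, not a description of any listed product.

```python
import json
import mailbox

def archive_mbox(mbox_path: str, archive_path: str) -> int:
    """Archive each message's header metadata and body to a JSON-lines file."""
    count = 0
    with open(archive_path, "w", encoding="utf-8") as out:
        for msg in mailbox.mbox(mbox_path):
            body = msg.get_payload(decode=True) if not msg.is_multipart() else None
            record = {
                # Header/metadata component
                "from": msg.get("From"),
                "to": msg.get("To"),
                "subject": msg.get("Subject"),
                "date": msg.get("Date"),
                # Body component (attachments would need multipart handling)
                "body": body.decode("utf-8", errors="replace") if body else None,
            }
            out.write(json.dumps(record) + "\n")
            count += 1
    return count

# Usage (hypothetical file names):
# n = archive_mbox("inbox.mbox", "archive.jsonl")
# print(f"archived {n} messages")
```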
List of email archive software
Technology
350
35,858,696
https://en.wikipedia.org/wiki/Langdon%20%26%20Seah
Langdon & Seah is an international construction consultancy firm in Asia, operating independently in 13 countries from 39 offices, with staff resources of nearly 3,000. History The firm traces its roots to the quantity surveying practice in the United Kingdom of "Horace W Langdon & Every", founded in 1919, which bought the Singaporean firm of "Waters & Watson" to form "Horace W Langdon & Every incorporating Waters & Watson" in 1946. "Waters & Watson" itself was established in 1933 but ceased operations when Japan invaded Singapore in 1942. Following the Japanese surrender in 1945, original partner Eric Watson restarted "Waters & Watson" in early 1946, roping in the young quantity surveyor Seah Mong Hee, who had joined "Waters & Watson" in 1936 and helped maintain the office in the months prior to the Japanese invasion. The practice soon flourished, and branch offices were established in Kuala Lumpur (1947) and Hong Kong (1949), encouraged by extensive post-war reconstruction work in the region. Also in 1949, Seah Mong Hee, who had operated the office during the war, was made a Partner. The firm underwent further name changes to reflect changes in status, becoming "Langdon & Every (Far East)" in 1954. By 1967, the words "Far East" were dropped to reflect the firm's expansion beyond the region, and the practice later became "Langdon Every & Seah" in 1969. In 1988, "Langdon & Every" in the United Kingdom and the Gulf countries amalgamated with "Davis, Belfield & Everest" (a firm formed in 1931 by Owen Davis, joined in 1935 by John Belfield and in 1944 by Bobbie Everest) to become "Davis Langdon & Everest". Following the swapping of shares, "Langdon, Every & Seah" thus became "Davis Langdon & Seah" in 1990. In April 2012, the group was acquired by Arcadis, an international consultancy, design, engineering and management services company. On May 18, 2012, Davis Langdon & Seah officially changed its name to "Langdon & Seah". For more details, refer to Quantifying Asia (compiled by John Peacock). Services The group offers consultancy services including the core services of cost management and quantity surveying, cost engineering, legal support, project management and monitoring, management consultancy, due diligence, research studies, insurance valuations, sustainable construction and capital tax allowances. The group's work in the construction sector ranges across infrastructure, retail, residential, industrial and commercial projects, as well as work in the oil and gas industry, specifically by the branch in Brunei. List of notable projects The firm has worked on the following high-profile projects: Beijing International Airport, China Jin Mao Building, Shanghai, China World Expo 2010, Shanghai, China Goldin Financial Global Centre, Hong Kong Hong Kong Stadium, Hong Kong Hong Kong Disneyland, Hong Kong Industrial & Commercial Bank of China, Hong Kong Mandarin Oriental, Kuala Lumpur, Malaysia Suvarnabhumi International Airport, Bangkok, Thailand Central World, Bangkok, Thailand Bitexco Financial Tower, Ho Chi Minh City, Vietnam Zuellig Building, Philippines Resorts World at Sentosa, Singapore Marina Bay Sands, Singapore Gardens by the Bay, Singapore References Further reading Meikle, Jim, Thinking Big: The History of Davis Langdon, Black Dog Publishing, 2009 External links Construction and civil engineering companies established in 1919 International engineering consulting firms Consulting firms established in 1919 1919 establishments in England
Langdon & Seah
Engineering
692
15,603,775
https://en.wikipedia.org/wiki/Nitrogen%E2%80%93phosphorus%20detector
The nitrogen–phosphorus detector (NPD), also known as the thermionic specific detector (TSD), is a detector commonly used with gas chromatography, in which thermal energy is used to ionize an analyte. It is a type of flame thermionic detector (FTD), the other being the alkali flame-ionization detector (AFID, also known as AFD). With this method, nitrogen and phosphorus can be selectively detected with a sensitivity that is 10⁴ times greater than that for carbon. NP-Mode A concentration of hydrogen gas is used such that it is just below the minimum required for ignition. A rubidium or cesium bead, which is mounted over the nozzle, ignites the hydrogen (by acting catalytically) and forms a cold plasma. Excitation of the alkali metal results in ejection of electrons, which in turn are detected as a current flow between an anode and a cathode in the chamber. As nitrogen or phosphorus analytes exit the column, they cause a reduction in the work function of the metal bead, resulting in an increase in current. Since the alkali metal bead is consumed over time, it must be replaced regularly. See also Gas chromatography External links Gas chromatography
Nitrogen–phosphorus detector
Chemistry
268
38,921,516
https://en.wikipedia.org/wiki/CloudBees
CloudBees is an enterprise software delivery company. Sacha Labourey and Francois Dechery co-founded the company in early 2010, and investors include Matrix Partners, Lightspeed Venture Partners, HSBC, Verizon Ventures, Golub Capital, Goldman Sachs, Morgan Stanley, and Bridgepoint Group. CloudBees is headquartered in San Jose, CA, with additional offices in Raleigh, NC, Lewes, DE, Richmond, VA, Berlin, London, and Neuchâtel, Switzerland. CloudBees' software originally included a Platform as a Service offering, which let developers use Jenkins in the cloud, along with an on-premises version of Jenkins with additional functions for enterprise companies. In 2020, CloudBees also introduced a Software Delivery Automation platform. History CloudBees was founded in 2010 by Sacha Labourey and Francois Dechery. Later that year, CloudBees acquired InfraDNA, a company run by Kohsuke Kawaguchi, the creator of Jenkins. Since 2010, CloudBees has raised a total of over $250 million in venture financing from investors. CloudBees customers include Salesforce, Capital One, the United States Air Force, and HSBC. In September 2014, CloudBees stopped offering runtime PaaS services and began to focus on its enterprise Jenkins for on-premises and cloud-based continuous delivery. Also in 2014, Kohsuke Kawaguchi, the lead developer and founder of Jenkins, became CloudBees' CTO. In 2016, the company added a Software as a Service (SaaS) version of its continuous delivery software. In February 2018, CloudBees acquired the cloud-based continuous delivery company Codeship. In 2019, CloudBees acquired Electric Cloud and Rollout. In 2020, Kawaguchi left his role as CTO of CloudBees to found a new company, Launchable. In 2021, CloudBees announced CloudBees Compliance, a compliance and risk analysis platform for software delivery. CloudBees raised $150 million in a series F funding round in December 2021. In 2022, CloudBees announced the acquisition of ReleaseIQ, a SaaS-based offering, to expand the company's DevSecOps capabilities. References Technology companies established in 2010 American companies established in 2010 Companies based in San Jose, California Technology companies of the United States Java (programming language) Cloud platforms Cloud infrastructure Cloud computing providers Cloud storage 2010 establishments in California
CloudBees
Technology
500
12,101,061
https://en.wikipedia.org/wiki/Anil%20Aggrawal%27s%20Internet%20Journal%20of%20Forensic%20Medicine%20and%20Toxicology
Anil Aggrawal's Internet Journal of Forensic Medicine and Toxicology is an online scientific journal covering forensic medicine, toxicology and allied subjects such as criminology, police science, and deviant behavior. It is one of the most widely read peer-reviewed forensic medicine journals in the world. The journal is published semiannually and is indexed by EMBASE, Chemical Abstracts Service, Locatorplus, EBSCO, Indianjournals.com, Scopus and the Emerging Sources Citation Index (ESCI) by Clarivate. It was established by Anil Aggrawal (Maulana Azad Medical College, New Delhi) in 2000. Thematic issues The journal has produced several thematic issues: on forensic entomology, edited by Mark Benecke of Germany; on crime scene investigation, edited by Daryl Clemens; and on toxicology, edited by V. V. Pillay of India. References External links Anil Aggrawal's Internet Journal of Book Reviews (sister publication) Academic works about forensics Open access journals Academic journals established in 2000 Biannual journals Toxicology journals English-language journals Criminology journals
Anil Aggrawal's Internet Journal of Forensic Medicine and Toxicology
Environmental_science
234
40,951,771
https://en.wikipedia.org/wiki/Barium%20iodate
Barium iodate is an inorganic chemical compound with the chemical formula Ba(IO3)2. It is a white, granular substance. Derivation Barium iodate can be derived either as a product of a reaction of iodine and barium hydroxide or by combining barium chlorate with potassium iodate. Chemical properties The compound is stable up to a certain temperature. Above that temperature, the following decomposition, known as Rammelsberg's reaction, occurs: 5 Ba(IO3)2 → Ba5(IO6)2 + 4 I2 + 9 O2. References External links Definition of Insoluble salts (precipitates); Solubility product Inorganic compounds Iodates Barium compounds
Barium iodate
Chemistry
132
11,871,683
https://en.wikipedia.org/wiki/Epsilon%20Eridani%20b
Epsilon Eridani b, also known as AEgir, is an exoplanet approximately 10.5 light-years away orbiting the star Epsilon Eridani, in the constellation of Eridanus (the River). The planet was discovered in 2000, and as of 2024 remains the only confirmed planet in its planetary system. It orbits at around 3.5 AU with a period of around 7.6 years, and has a mass around 0.6 times that of Jupiter. Both the Extrasolar Planets Encyclopaedia and the NASA Exoplanet Archive list the planet as 'confirmed'. Name The planet and its host star are one of the planetary systems selected by the International Astronomical Union as part of NameExoWorlds, their public process for giving proper names to exoplanets and their host star (where no proper name already exists). The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning names were AEgir for the planet (an Anglicized approximation of the Old Norse Ægir) and Ran for the star. James Ott, age 14, submitted the names for the IAU contest and won. The moon Aegir of Saturn is also named after the mythological Ægir, and differs in spelling only by capitalization. Discovery The planet's existence was suspected by a Canadian team led by Bruce Campbell and Gordon Walker in the early 1990s, but their observations were not definitive enough to make a solid discovery. Its formal discovery was announced on August 7, 2000, by a team led by Artie Hatzes. The discoverers gave its mass as 1.2 ± 0.33 times that of Jupiter, with a mean distance of 3.4 AU from the star. Observers, including Geoffrey Marcy, suggested that more information on the star's Doppler noise behaviour created by its large and varying magnetic field was needed before the planet could be confirmed. In 2006, the Hubble Space Telescope made astrometric measurements and confirmed the existence of the planet. These observations indicated that the planet has a mass 1.5 times that of Jupiter and shares the same plane as the outer dust disk observed around the star. The derived orbit from these measurements is eccentric: either 0.25 or 0.7. Meanwhile, the Spitzer Space Telescope detected an asteroid belt at roughly 3 AU from the star. In 2009 one team of astronomers claimed that the proposed planet's eccentricity and this belt were inconsistent: the planet would pass through the asteroid belt and rapidly clear it of material. The planet and the inner belt may be reconciled if that belt's material had migrated in from the outer comet belt (also known to exist). Astronomers continue to collect and analyse radial velocity data on Epsilon Eridani b, while also refining existing upper limits from non-detection via direct imaging. A paper published in January 2019 found an orbital eccentricity an order of magnitude smaller than earlier estimates, at around 0.07, consistent with a nearly circular orbit and very similar to Jupiter's orbital eccentricity of 0.05. This resolved the stability issue with the inner asteroid belt. The updated measurements, amongst other things, also included new estimates for the mass and inclination of the planet, at 0.78 times that of Jupiter, but because the inclination was poorly constrained at 89 degrees this was only a rough estimate of the absolute mass. If the planet instead orbited at the same inclination as the debris disc (34 degrees), as supported by Benedict et al. 2006, then its mass would be greater, at 1.19 times that of Jupiter. Using astrometric data taken from the U.S.
Naval Observatory Robotic Astrometric Telescope (URAT) combined with previously collected data from the Hipparcos mission and the newer Gaia EDR3 data release, a group of scientists at the United States Naval Observatory believe they have, with high formal confidence levels, confirmed the presence of a long-period exoplanet orbiting Epsilon Eridani. A paper published in October 2021, using absolute astrometry from Hipparcos and Gaia DR2, new radial velocity measurements, and Keck/NIRC2 Ms-band vortex coronagraph images, determines a lower absolute mass of 0.65 times that of Jupiter, an eccentricity close to 0.055, a semi-major axis of around 3.53 AU, and an inclination of 78 degrees. Similar updated findings were published in a paper in July 2021, determining a minimum mass of 0.651 times that of Jupiter, with the planet's semi-major axis at 3.5 AU and an orbital eccentricity of 0.044. A March 2022 paper finds an inclination of 45 degrees, closer to earlier estimates, a mass 0.63 times that of Jupiter, and an eccentricity of 0.16. Direct imaging of Epsilon Eridani b with the James Webb Space Telescope is planned. See also 47 Ursae Majoris b 51 Pegasi b List of nearest exoplanets Notes References External links Epsilon Eridani b at The Extrasolar Planets Encyclopaedia. Retrieved 2020-05-04. Epsilon Eridani b at The NASA Exoplanet Archive. Retrieved 2020-05-04. Eridanus (constellation) Exoplanets detected by radial velocity Exoplanets detected by astrometry Exoplanets discovered in 2000 Giant planets Exoplanets with proper names
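The quoted orbit can be sanity-checked with Kepler's third law. The sketch below assumes a stellar mass of about 0.82 solar masses for Epsilon Eridani (a commonly cited value, not stated in this article) and the roughly 3.5 AU semi-major axis given above.

```python
# Kepler's third law in solar units: P^2 [yr^2] = a^3 [AU^3] / M [M_sun]
STELLAR_MASS_MSUN = 0.82   # assumed mass of Epsilon Eridani
a_au = 3.5                 # semi-major axis from the article

period_yr = (a_au ** 3 / STELLAR_MASS_MSUN) ** 0.5
print(f"predicted orbital period: {period_yr:.1f} yr")  # ~7.2 yr, near the ~7.6 yr quoted
```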
Epsilon Eridani b
Astronomy
1,129
37,249,254
https://en.wikipedia.org/wiki/NABERS
NABERS, the National Australian Built Environment Rating System, is an Australian national initiative, managed by the Government of New South Wales' Department of Climate Change, Energy, the Environment and Water (New South Wales) on behalf of the Australian Government, that measures and compares the environmental performance of Australian buildings and tenancies. There are NABERS rating tools for commercial office buildings to measure greenhouse gas emissions, energy efficiency, water efficiency, waste efficiency and indoor environment quality. There are also energy/greenhouse and water rating tools for hotels, shopping centres and data centres. Accredited Assessors A key feature of the initiative is the use of independent 'Accredited Assessors' to conduct ratings. Assessors are required to attend training, pass an exam and complete two supervised assessments before they receive full accreditation. While there are no formal prerequisites to attend the training, most Assessors have experience in the building services, property or energy management industries. Building owners and tenants can use the online 'self-assessment' tool; however, they cannot promote these results. Only ratings that have been certified by the NABERS National Administrator can be promoted using the NABERS trademark. Rating System A NABERS rating helps building owners and tenants accurately measure, understand, and communicate the environmental performance of a building while identifying areas for cost savings and future improvements. It provides a rating, valid for twelve months, of one to six stars for building efficiency across these measures: Energy Water Waste Indoor environment Ratings can be applied to the following built environment sectors: Office Buildings and Tenancies Shopping Centres Apartment Buildings Hospitals (public) Hotels Data Centres Residential Aged Care Retirement Living Warehouses and Cold Stores Schools Retail Stores The vision statement of NABERS is 'To support a more sustainable built environment through a relevant, reliable and practical measure of building performance'. Offices The NABERS tools attempt to provide an accurate measurement of how efficiently building owners and tenants are providing their services, without penalising them for factors that are beyond their control. For example, if the primary service that an office building owner provides is safe, lit and comfortable office space, the NABERS Energy for offices tool considers how much space is being used and how much energy is being used to supply services to that space, and then statistically adjusts for factors like the climate, which will influence how much energy is used for heating and cooling. Energy To obtain a NABERS Energy for offices rating, consumption data for the building (such as electricity and gas bills) is collected by Accredited Assessors, along with data about a number of other aspects of the building such as its size, hours of occupation, climate location and density of occupation. Data requirements are set out in a document called 'The NABERS Energy and Water for Offices Rules for Collecting and Using Data v.3.0'. This data is then input into the NABERS rating calculator, which statistically adjusts for these factors so that the building can have its consumption fairly benchmarked against its peers. The result of this calculation is a star rating on a six-star scale, where zero is very poor performance and six is market-leading.
Water The procedure for an office water rating is similar to conducting an office energy rating. The main differences are that water rather than energy bills are used, and some data, such as the hours of operation, are not required. Unlike office energy ratings, which can be for the base building, tenancies or the whole building, office water ratings are only available for whole buildings. Data centres Energy Like NABERS for offices, NABERS Energy for data centres has three distinct rating types to reflect the different interests and responsibilities of data centre owners, operators and tenants: Infrastructure (co-location owner), Whole Facility (data centre owner) and IT Equipment (data centre tenant) ratings. The tool is designed to rate the majority of data centres in Australia, provide a direct comparison with other rateable data centres, and allow an individual data centre to measure and compare performance over time. The NABERS Data Centre IT Equipment Rating is designed for organisations that control and manage their own IT equipment (servers, storage and networking devices). The IT Equipment rating measures features that are closely related to the primary functions of a data centre (processing, storage and networking) and that all data centres provide, regardless of how they provide them. NABERS uses two IT equipment metrics: processing capacity (number of server cores × clock speed in gigahertz (GHz)) and storage capacity (total unformatted storage capacity in terabytes). The NABERS performance benchmark model predicts the industry median greenhouse gas emissions for a given amount of data centre processing and storage capacity. This means that if a data centre consumes more energy than the benchmark model predicts, the site is less energy efficient than the industry median (set at 3 stars), while if it consumes less energy it is more efficient than the median. To obtain a NABERS Energy for data centres IT Equipment rating, energy consumption data for the IT equipment over a 28- to 40-day period is collected by Accredited Assessors, along with data about the total unformatted storage capacity and total processing capacity as above. The Infrastructure Rating measures the energy efficiency in delivering support services to the IT equipment, using the widely accepted industry Power Usage Effectiveness (PUE) ratio, which is converted into kilograms of emissions with some modification for climate and shared cooling services. To obtain an Infrastructure rating, 12 months of energy consumption data for IT equipment and infrastructure services is collected by Accredited Assessors, along with the climate location of the data centre. The Whole Facility rating measures the energy efficiency of the whole data centre by assessing the processing and storage capacity and the industry median energy efficiency for infrastructure services, compared with the overall energy consumption of the data centre. It is a combination of both the IT Equipment and Infrastructure rating benchmarks. To obtain a NABERS Energy for data centres Whole Facility rating, 12 months of energy consumption data for the data centre is collected by Accredited Assessors, along with the processing and storage capacity and climate location of the data centre.
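The IT-equipment metrics and the PUE ratio described above lend themselves to a short worked example. The input values in this sketch are entirely hypothetical; only the metric definitions (cores × GHz, terabytes, and PUE = total facility energy over IT energy) follow the text.

```python
def processing_capacity_ghz(server_cores: int, clock_speed_ghz: float) -> float:
    """NABERS processing-capacity metric: number of cores x clock speed (GHz)."""
    return server_cores * clock_speed_ghz

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical data centre over a 12-month rating period
capacity = processing_capacity_ghz(server_cores=4096, clock_speed_ghz=2.5)
storage_tb = 800.0                      # total unformatted storage capacity
ratio = pue(total_facility_kwh=5_200_000, it_equipment_kwh=3_400_000)

print(f"processing capacity: {capacity:.0f} GHz, storage: {storage_tb:.0f} TB")
print(f"PUE: {ratio:.2f}")              # lower PUE means more efficient support services
```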
The key features of NABERS as a system are that it is based on performance rather than design, assessments are carried out by third-party 'Accredited Assessors', it is based on third-party verifiable data (such as utility bills), ratings undergo government quality assurance checks, and it distinguishes between the environmental impact of a building's shared services and its tenancies. While other rating systems across the world share some of these features, none share all of them. Adoption Use in Australian energy programs & policy While NABERS Energy is a voluntary rating scheme for buildings, its success has been at least partly driven by its extensive use in energy initiatives by government and industry throughout Australia. Some programs include: The NSW Government Resource Efficiency Policy (GREP): The most recent iteration of a series of NSW government procurement policies that set out NABERS targets in government leasing criteria. Under the GREP, government tenants require a building to have a 4.5-star NABERS rating. It also states a 4.5-star energy rating as a minimum criterion for government data centres. Similar policies are in place in other states and territories, as well as at the Australian government level (the 'Energy Efficiency in Government Operations' policy). Emissions Reduction Fund: the centrepiece of Australia's carbon abatement strategy, which began operating in early 2015. NABERS Energy is used in the commercial buildings methodology to calculate, and to ensure the integrity of, the carbon abatement achieved by project proponents. Energy Savings Scheme (ESS): a New South Wales state energy program where commercial buildings can obtain Energy Saving Certificates (ESCs) for energy efficiency projects, which can be sold to the market. NABERS Energy ratings are used to demonstrate the energy savings achieved by the project. Green Building Fund: a former Australian Government program, under which commercial buildings could obtain up to 50% of capital funding for energy efficiency projects. The program used NABERS Energy ratings to ensure the savings effectively occurred, as well as to calculate the total amount of energy and emissions saved. City Switch: an initiative that supports commercial office tenants to improve energy efficiency, run by a coalition of local councils throughout Australia. City Switch uses NABERS Energy as its key indicator of energy performance and provides assistance to its members to achieve a rating of 4 stars or higher. Use in Australian legislation The Building Energy Efficiency Disclosure Act 2010: Australian government legislation that requires owners of office buildings to disclose the energy efficiency of the building to prospective tenants or buyers. Known operationally as the Commercial Building Disclosure (CBD) program, a certified NABERS Energy rating is the main energy efficiency indicator required of building owners. Use internationally NABERSNZ: The Energy Efficiency and Conservation Authority (EECA) in New Zealand licensed NABERS in 2013 to create NABERSNZ. The Global Real Estate Sustainability Benchmark (GRESB): The GRESB is a global standard for portfolio-level sustainability assessment in real estate. The GRESB benchmark addresses issues including corporate sustainability strategy, policies and objectives, environmental performance monitoring, and the use of high-quality voluntary rating tools such as NABERS.
The Climate Bonds Initiative (CBI): The CBI creates Climate Bonds Standards, which provide a Fair Trade-like labelling system for bonds, designed to make it easier for investors to work out what sorts of investments genuinely contribute to addressing climate change. Data from NABERS Energy rating reports can be used in Climate Bond reporting under the Climate Bonds Standard for Low Carbon Commercial Buildings. NABERS IE in India: One NABERS Indoor Environment rating has been conducted in India, at the Paharpur Business Centre. The rating was certified in May 2015. Program success NABERS Energy for offices is considered by many to have been successful, as over 82% of the Australian national office market has now been rated with either a base building or whole building rating. The success of the tool is largely attributed to its ability to differentiate between the base building's and tenants' energy end uses, and to strong government support. Far fewer tenancy energy ratings have been conducted, however, and there has also been far less uptake of the other tools. References Environmental design Sustainable building Environmental law in Australia Environmental monitoring Environmental impact assessment Sustainability organizations Sustainable design Technology assessment Environmental impact in Australia
NABERS
Technology,Engineering
2,172
80,322
https://en.wikipedia.org/wiki/Lev%20Landau
Lev Davidovich Landau (22 January 1908 – 1 April 1968) was a Soviet physicist who made fundamental contributions to many areas of theoretical physics. He was considered one of the last scientists who were universally well-versed in physics and made seminal contributions to all of its branches. He is credited with laying the foundations of twentieth-century condensed matter physics, and is also considered arguably the greatest Soviet theoretical physicist. His accomplishments include the independent co-discovery of the density matrix method in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, the invention of the order-parameter technique, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquids, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, and Landau's equations for S-matrix singularities. He received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at a temperature below 2.17 K (−270.98 °C). Life Early years Landau was born on 22 January 1908 to Jewish parents in Baku, the Russian Empire, in what is now Azerbaijan. Landau's father, David Lvovich Landau, was an engineer with the local oil industry, and his mother, Lyubov Veniaminovna Garkavi-Landau, was a doctor. Both came to Baku from Mogilev and both graduated from the Mogilev gymnasium. He learned differential calculus at age 12 and integral calculus at age 13. Landau graduated from the gymnasium in 1920 at age 13. His parents considered him too young to attend university, so for a year he attended the Baku Economical Technical School. In 1922, at age 14, he matriculated at the Baku State University, studying in two departments simultaneously: the Departments of Physics and Mathematics, and the Department of Chemistry. Subsequently, he ceased studying chemistry, but remained interested in the field throughout his life. Leningrad and Europe In 1924, he moved to the main centre of Soviet physics at the time: the Physics Department of Leningrad State University, where he dedicated himself to the study of theoretical physics, graduating in 1927. Landau subsequently enrolled for post-graduate studies at the Leningrad Physico-Technical Institute, where he eventually received a doctorate in Physical and Mathematical Sciences in 1934. Landau got his first chance to travel abroad during the period 1929–1931, on a Soviet government travelling fellowship from the People's Commissariat for Education, supplemented by a Rockefeller Foundation fellowship. By that time he was fluent in German and French and could communicate in English. He later improved his English and learned Danish. After brief stays in Göttingen and Leipzig, he went to Copenhagen on 8 April 1930 to work at Niels Bohr's Institute for Theoretical Physics. He stayed there until 3 May of the same year. After the visit, Landau always considered himself a pupil of Niels Bohr, and Landau's approach to physics was greatly influenced by Bohr. After his stay in Copenhagen, he visited Cambridge (mid-1930), where he worked with Paul Dirac, Copenhagen (September to November 1930), and Zürich (December 1930 to January 1931), where he worked with Wolfgang Pauli.
From Zürich Landau went back to Copenhagen for the third time and stayed there from 25 February until 19 March 1931 before returning to Leningrad the same year. National Scientific Center Kharkiv Institute of Physics and Technology, Kharkiv Between 1932 and 1937, Landau headed the Department of Theoretical Physics at the National Scientific Center Kharkiv Institute of Physics and Technology, and he lectured at the University of Kharkiv and the Kharkiv Polytechnic Institute. Apart from his theoretical accomplishments, Landau was the principal founder of a great tradition of theoretical physics in Kharkiv, Ukraine, sometimes referred to as the "Landau school". In Kharkiv, he and his friend and former student, Evgeny Lifshitz, began writing the Course of Theoretical Physics, ten volumes that together span the whole of the subject and are still widely used as graduate-level physics texts. During the Great Purge, Landau was investigated within the UPTI Affair in Kharkiv, but he managed to leave for Moscow to take up a new post. Landau developed a famous comprehensive exam called the "Theoretical Minimum" which students were expected to pass before admission to the school. The exam covered all aspects of theoretical physics, and between 1934 and 1961 only 43 candidates passed, but those who did later became quite notable theoretical physicists. In 1932, Landau computed the Chandrasekhar limit; however, he did not apply it to white dwarf stars. Institute for Physical Problems, Moscow From 1937 until 1962, Landau was the head of the Theoretical Division at the Institute for Physical Problems. On 27 April 1938, Landau was arrested for a leaflet which compared Stalinism to German Nazism and Italian Fascism. He was held in the NKVD's Lubyanka prison until his release, on 29 April 1939, after Pyotr Kapitsa (an experimental low-temperature physicist and the founder and head of the institute) and Bohr wrote letters to Joseph Stalin. Kapitsa personally vouched for Landau's behaviour and threatened to quit the institute if Landau was not released. After his release, Landau discovered how to explain Kapitsa's superfluidity using sound waves, or phonons, and a new excitation called a roton. Landau led a team of mathematicians supporting Soviet atomic and hydrogen bomb development. He calculated the dynamics of the first Soviet thermonuclear bomb, including predicting the yield. For this work Landau received the Stalin Prize in 1949 and 1953, and was awarded the title "Hero of Socialist Labour" in 1954. Landau's students included Lev Pitaevskii, Alexei Abrikosov, Aleksandr Akhiezer, Igor Dzyaloshinskii, Evgeny Lifshitz, Lev Gor'kov, Isaak Khalatnikov, Roald Sagdeev and Isaak Pomeranchuk. Scientific achievements Landau's accomplishments include the independent co-discovery of the density matrix method in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquids, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, the explanation of flame instability (the Darrieus–Landau instability), and Landau's equations for S-matrix singularities. Landau received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at a temperature below 2.17 K (−270.98 °C).
Personal life and views In 1937, Landau married Kora T. Drobanzeva from Kharkiv. Their son Igor (1946–2011) became a theoretical physicist. Lev Landau believed in "free love" rather than monogamy and encouraged his wife and his students to practise "free love". However, his wife was not enthusiastic. Landau is generally described as an atheist, although when Soviet filmmaker Andrei Tarkovsky asked Landau whether he believed in the existence of God, Landau pondered the matter in silence for three minutes before responding, "I think so." In 1957, a lengthy report to the CPSU Central Committee by the KGB recorded Landau's views on the 1956 Hungarian Uprising, Vladimir Lenin and what he termed "red fascism". Hendrik Casimir recalled him as a passionate communist, emboldened by his revolutionary ideology. Landau's drive in establishing Soviet science was in part due to his devotion to socialism. In 1935 he published a piece titled "Bourgeoisie and Contemporary Physics" in the Soviet newspaper Izvestia, in which he criticized religious superstition and the dominance of capital, which he saw as bourgeois tendencies, citing "unprecedented opportunities for the development of physics in our country, provided by the Party and the government." Last years On 7 January 1962, Landau's car collided with an oncoming truck. He was severely injured and spent two months in a coma. Although Landau recovered in many ways, his scientific creativity was destroyed, and he never fully returned to scientific work. His injuries prevented him from accepting the 1962 Nobel Prize in Physics in person. Throughout his life Landau was known for his sharp humour, as illustrated by the following dialogue with the psychologist Alexander Luria, who tried to test for possible brain damage while Landau was recovering from the car crash: Luria: "Please draw me a circle." Landau draws a cross. Luria: "Hm, now draw me a cross." Landau draws a circle. Luria: "Landau, why don't you do what I ask?" Landau: "If I did, you might come to think I've become mentally retarded." In 1965 former students and co-workers of Landau founded the Landau Institute for Theoretical Physics, located in the town of Chernogolovka near Moscow, and led for the following three decades by Isaak Khalatnikov. In June 1965, Lev Landau and Evsei Liberman published a letter in the New York Times, stating that as Soviet Jews they opposed U.S. intervention on behalf of the Student Struggle for Soviet Jewry. However, there are doubts that Landau authored this letter. Death Landau died on 1 April 1968, aged 60, from complications of the injuries sustained in the car accident six years earlier. He was buried at the Novodevichy Cemetery. Fields of contribution DLVO theory Fermi liquid theory Quasiparticle Ivanenko–Landau–Kähler equation Landau damping Landau distribution Landau gauge Landau kinetic equation Landau pole Landau susceptibility Landau potential Landau quantization Landau theory Landau–Squire jet Landau–Levich problem Landau–Hopf theory of turbulence Ginzburg–Landau theory Darrieus–Landau instability Landau–Lifshitz aeroacoustic equation Landau–Raychaudhuri equation Landau–Zener formula Landau–Lifshitz model Landau–Lifshitz pseudotensor Landau–Lifshitz–Gilbert equation Landau–Pomeranchuk–Migdal effect Landau–Yang theorem Landau principle Stuart–Landau equation Superfluidity Superconductivity Pedagogy Course of Theoretical Physics Legacy Two celestial objects are named in his honour: the minor planet 2142 Landau and the lunar crater Landau. 
The highest prize in theoretical physics awarded by the Russian Academy of Sciences, the Landau Gold Medal, is named in his honour. On 22 January 2019, Google celebrated what would have been Landau's 111th birthday with a Google Doodle. The Landau-Spitzer Award (American Physical Society), which recognizes outstanding contributions to plasma physics and European-United States collaboration, is named in part in his honor. Landau's ranking of physicists Landau kept a list of names of physicists which he ranked on a logarithmic scale of productivity and genius (creativity and innate talent), ranging from 0 to 5. The highest ranking, 0, was assigned to Isaac Newton. Albert Einstein was ranked 0.5. A rank of 1 was awarded to the founding fathers of quantum mechanics, Niels Bohr, Werner Heisenberg, Satyendra Nath Bose, Paul Dirac and Erwin Schrödinger, among others, while those of rank 5 were deemed "pathologists". Landau ranked himself as a 2.5, but later promoted himself to a 2. N. David Mermin, writing about Landau, referred to the scale and ranked himself in the fourth division, in the article "My Life with Landau: Homage of a 4.5 to a 2". Landau also had a lesser-known scale that measured the genius of a scientist using diagrams instead. He had four classes of diagrams, with the first, a simple triangle, reserved for those who were the most original and brilliant, such as Dirac and Einstein. The diagrams were formed by two parallel lines, with the bottom representing tenacity and the top measuring genius and originality. In popular culture The Russian television film My Husband — the Genius (translation of the Russian title Мой муж — гений), released in 2008, tells the biography of Landau (played by Daniil Spivakovsky), mostly focusing on his private life. It was generally panned by critics. People who had personally met Landau, including the famous Russian scientist Vitaly Ginzburg, said that the film was not only terrible but also historically inaccurate. Another film about Landau, Dau, was directed by Ilya Khrzhanovsky with the non-professional actor Teodor Currentzis (an orchestra conductor) as Landau. Dau was a common nickname of Lev Landau. The film was part of the multidisciplinary art project DAU. Works Landau wrote his first paper, On the derivation of Klein–Fock equation, co-authored with Dmitri Ivanenko, in 1926, when he was 18 years old. His last paper, titled Fundamental problems, appeared in 1960 in an edited volume of tributes to Wolfgang Pauli. A complete list of Landau's works appeared in 1998 in the Russian journal Physics-Uspekhi. Landau would allow himself to be listed as a co-author of a journal article on two conditions: 1) he brought up the idea of the work, partly or entirely, and 2) he performed at least some calculations presented in the article. Consequently, he removed his name from numerous publications of his students where his contribution was less significant. Course of Theoretical Physics, 2nd ed. (1965), at archive.org Landau and Lifshitz suggested in the third volume of the Course of Theoretical Physics that the then-standard periodic table had a mistake in it, and that lutetium should be regarded as a d-block rather than an f-block element. Their suggestion was fully vindicated by later findings, and in 1988 was endorsed by a report of the International Union of Pure and Applied Chemistry (IUPAC). Other works, in 4 volumes: volume 1, Physical Bodies; volume 2, Molecules; volume 3, Electrons; and volume 4, Photons and Nuclei; vols. 
3 & 4 by Kitaigorodsky alone. See also List of Jewish Nobel laureates List of things named after Lev Landau References Further reading Books (After Landau's 1962 car accident, the physics community around him rallied to attempt to save his life. They managed to prolong his life until 1968.) Articles Karl Hufbauer, "Landau's youthful sallies into stellar theory: Their origins, claims, and receptions", Historical Studies in the Physical and Biological Sciences, 37 (2007), 337–354. "As a student, Landau dared to correct Einstein in a lecture". Global Talent News. Lev Davidovich Landau. Nobel-Winners. Landau's Theoretical Minimum, Landau's Seminar, ITEP in the Beginning of the 1950s by Boris L. Ioffe, concluding talk at the workshop QCD at the Threshold of the Fourth Decade/Ioffefest. EJTP Landau Issue 2008. Ammar Sakaji and Ignazio Licata (eds), Lev Davidovich Landau and his Impact on Contemporary Theoretical Physics, Nova Science Publishers, New York, 2009. Gennady Gorelik, "The Top Secret Life of Lev Landau", Scientific American, Aug. 1997, vol. 277(2), 53–57, JSTOR link. Maya Bessarab, "Landau's Life Pages" (in Russian). External links Lev Landau 1908 births 1968 deaths Soviet Nobel laureates Azerbaijani Jews Scientists from Baku Burials at Novodevichy Cemetery Fluid dynamicists Foreign associates of the National Academy of Sciences Foreign members of the Royal Society Full Members of the USSR Academy of Sciences Nobel laureates in Physics Heroes of Socialist Labour Recipients of the Stalin Prize Recipients of the Lenin Prize Recipients of the Order of the Badge of Honour Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Winners of the Max Planck Medal Jewish atheists Jewish physicists Members of the German National Academy of Sciences Leopoldina Academic staff of Moscow State University Academic staff of the Moscow Institute of Physics and Technology People from Baku Governorate Saint Petersburg State University alumni Soviet atheists Soviet inventors Soviet Jews Soviet physicists Theoretical physicists Academic staff of the National University of Kharkiv Superfluidity People involved with the periodic table Russian scientists
Lev Landau
Physics,Chemistry,Materials_science
3,416
59,471,688
https://en.wikipedia.org/wiki/Thermoelectric%20acclimatization
Thermoelectric acclimatization relies on the ability of a Peltier cell to absorb heat on one side and reject heat on the other. Consequently, it is possible to use Peltier cells for heating on one side and cooling on the other, and hence as a temperature control system. Peltier cell heat pump A typical Peltier cell based heat pump can be realised by coupling the thermoelectric generators with photovoltaic air-cooled panels, as defined in the PhD thesis of Alexandra Thedeby. The system is coupled with an air plant so that it heats on one side and cools on the other; by changing the configuration, it allows both winter and summer acclimatization. These elements are expected to be an effective component of zero-energy buildings if coupled with solar thermal energy and photovoltaics, with particular reference to creating radiant heat pumps on the walls of a building. It must be remarked that this acclimatization method reaches its best efficiency during summer cooling when coupled with a photovoltaic generator. The air circulation could also be used to cool the PV modules. The most important engineering requirement is the accurate design of heat sinks to optimize the heat exchange and minimize the fluid-dynamic losses. Thermodynamic parameters The ideal efficiency is bounded by the Carnot limit for cooling, ε_ideal = T_c / (T_h − T_c), where T_c is the temperature of the cooling surface and T_h is the temperature of the heating surface. The key energy phenomena, and the reason for using thermoelectric elements as heat pumps, reside in the energy fluxes that those elements realise: the conductive power, Q_cond = (k·S/d)·(T_h − T_c); the heat flux on the cold side, Q_c = α·I·T_c − (1/2)·I²·R − (k·S/d)·(T_h − T_c); the heat flux on the hot side, Q_h = α·I·T_h + (1/2)·I²·R − (k·S/d)·(T_h − T_c); and the electric power, P = α·I·(T_h − T_c) + I²·R. Here the following terms are used: I, electric current; α, Seebeck coefficient; R, electric resistance; S, surface area; d, cell thickness; and k, thermal conductivity. The efficiencies of the system are the cooling efficiency, ε_c = Q_c / P, and the heating efficiency, ε_h = Q_h / P (by energy balance, Q_h = Q_c + P). COP can be calculated according to Cannistraro. Final uses Thermoelectric heat pumps can be used both for local acclimatization, removing local discomfort (for example, thermoelectric ceilings, now at an advanced research stage, aim at improving indoor comfort conditions according to Fanger, such as the discomfort that may arise in the presence of large glazed surfaces), and for small-building acclimatization if coupled with solar systems. Such systems are of key importance for new zero-emission passive buildings because of their very high COP and the high performance achievable through an accurate exergy optimization of the system. At the industrial level, thermoelectric acclimatization appliances are currently under development. References Heating Thermodynamics
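As a rough numerical check of the relations above, the following C sketch evaluates the heat fluxes and the resulting coefficients of performance at one operating point. All parameter values are illustrative assumptions chosen for the example, not data from the text or from Thedeby's thesis.

#include <stdio.h>

/* Illustrative Peltier-module parameters (assumed values, not from the text) */
#define ALPHA 0.05   /* Seebeck coefficient, V/K */
#define RES   1.5    /* electric resistance, ohm */
#define KSD   0.02   /* k*S/d, thermal conductance of the cell, W/K */

int main(void) {
    double I  = 2.0;     /* drive current, A (assumed) */
    double Tc = 293.0;   /* cold-side temperature, K (20 degC) */
    double Th = 313.0;   /* hot-side temperature, K (40 degC) */

    double q_cond = KSD * (Th - Tc);                          /* conductive power */
    double q_c = ALPHA * I * Tc - 0.5 * I * I * RES - q_cond; /* heat flux, cold side */
    double q_h = ALPHA * I * Th + 0.5 * I * I * RES - q_cond; /* heat flux, hot side */
    double p   = ALPHA * I * (Th - Tc) + I * I * RES;         /* electric power */

    /* Energy balance check: q_h equals q_c + p */
    printf("Qc = %.2f W, Qh = %.2f W, P = %.2f W\n", q_c, q_h, p);
    printf("cooling COP = %.2f, heating COP = %.2f\n", q_c / p, q_h / p);
    return 0;
}

With these assumed numbers the cooling COP comes out above 3, which illustrates why the text can speak of a very high COP for well-designed systems.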
Thermoelectric acclimatization
Physics,Chemistry,Mathematics
554
65,781,618
https://en.wikipedia.org/wiki/Iodine%20nitrate
Iodine nitrate is a chemical compound with formula INO3. It is a covalent molecule with the structure I–O–NO2. Preparation The compound was first produced by the reaction of mercury(II) nitrate and iodine in ether. Other nitrate salts and solvents can also be used. As a gas it is slightly unstable, decaying with a rate constant of 3.2×10−2 s−1. The possible formation of this chemical in the atmosphere and its ability to destroy ozone have been studied. Potential reactions in this context are: IONO2 → IO + NO2 IONO2 → I + NO3 I + O3 → IO + O2 References Nitrates Iodine compounds
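Assuming the gas-phase decay is first order, the rate constant quoted above implies a half-life of roughly t1/2 = ln 2 / k ≈ 0.693 / (3.2×10−2 s−1) ≈ 22 s; this back-of-envelope figure is offered only as an illustration of the scale of the instability, not as a value reported in the literature.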
Iodine nitrate
Chemistry
146
4,252,869
https://en.wikipedia.org/wiki/Q%20%28number%20format%29
The Q notation is a way to specify the parameters of a binary fixed point number format. For example, in Q notation, the number format denoted by Q8.8 means that the fixed point numbers in this format have 8 bits for the integer part and 8 bits for the fraction part. A number of other notations have been used for the same purpose. Definition Texas Instruments version The Q notation, as defined by Texas Instruments, consists of the letter Q followed by a pair of numbers m.n, where m is the number of bits used for the integer part of the value, and n is the number of fraction bits. By default, the notation describes a signed binary fixed point format, with the unscaled integer being stored in two's complement format, used in most binary processors. The first bit always gives the sign of the value (1 = negative, 0 = non-negative), and it is not counted in the m parameter. Thus, the total number w of bits used is 1 + m + n. For example, the specification Q3.12 describes a signed binary fixed-point number with w = 16 bits in total, comprising the sign bit, three bits for the integer part, and 12 bits that are the fraction. That is, a 16-bit signed (two's complement) integer that is implicitly multiplied by the scaling factor 2^−12. In particular, when n is zero, the numbers are just integers. If m is zero, all bits except the sign bit are fraction bits; then the range of the stored number is from −1.0 (inclusive) to +1.0 (exclusive). The m and the dot may be omitted, in which case they are inferred from the size of the variable or register where the value is stored. Thus, Q12 means a signed integer with any number of bits, implicitly multiplied by 2^−12. The letter U can be prefixed to the Q to denote an unsigned binary fixed-point format. For example, UQ1.15 describes values represented as unsigned 16-bit integers with an implicit scaling factor of 2^−15, which range from 0.0 to (2^16 − 1)/2^15 = +1.999969482421875. ARM version A variant of the Q notation has been in use by ARM. In this variant, the m number includes the sign bit. For example, a 16-bit signed integer would be denoted Q15.0 in the TI variant, but Q16.0 in the ARM variant. Characteristics The resolution (difference between successive values) of a Qm.n or UQm.n format is always 2^−n. The range of representable values depends on the notation used: in the TI convention a signed Qm.n covers −2^m to +2^m − 2^−n, in the ARM convention it covers −2^(m−1) to +2^(m−1) − 2^−n, and an unsigned UQm.n covers 0 to 2^m − 2^−n. For example, a Q15.1 format number (ARM convention) requires 15 + 1 = 16 bits, has resolution 2^−1 = 0.5, and the representable values range from −2^14 = −16384.0 to +2^14 − 2^−1 = +16383.5. In hexadecimal, the negative values range from 0x8000 to 0xFFFF, followed by the non-negative ones from 0x0000 to 0x7FFF. Math operations Q numbers are a ratio of two integers: the numerator is kept in storage, the denominator is equal to 2^n. Consider the following example: the Q8 denominator equals 2^8 = 256; 1.5 equals 384/256; 384 is stored, 256 is inferred because it is a Q8 number. If the Q number's base is to be maintained (n remains constant), the Q number math operations must keep the denominator constant. Writing two Q numbers with the same fraction length n as x = X/2^n and y = Y/2^n, where X and Y are the stored integers (in the example above, the stored numerator is 384 and the denominator is 256), the operations keep the denominator constant as follows: addition and subtraction act directly on the stored values, x ± y = (X ± Y)/2^n; multiplication requires a rescaling step, x·y = ((X·Y)/2^n)/2^n, so the stored product must be divided by 2^n; and division requires pre-scaling, x/y = ((X·2^n)/Y)/2^n. Because the denominator is a power of two, the multiplication can be implemented as an arithmetic shift to the left and the division as an arithmetic shift to the right; on many processors shifts are faster than multiplication and division. 
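To make the ratio-of-integers view concrete, here is a minimal C sketch of converting between floating point and the Q8 format used in the example above; the function names and the rounding choice are illustrative, not part of any standard API.

#include <stdint.h>

#define QFRAC 8  /* number of fraction bits: Q8 */

/* float -> Q8: scale by 2^QFRAC and round to nearest */
static int16_t to_q8(float x) {
    return (int16_t)(x * (1 << QFRAC) + (x >= 0 ? 0.5f : -0.5f));
}

/* Q8 -> float: divide by 2^QFRAC */
static float from_q8(int16_t q) {
    return (float)q / (1 << QFRAC);
}

/* to_q8(1.5f) stores 384, matching the 384/256 example above. */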
To maintain accuracy, the intermediate multiplication and division results must be double precision, and care must be taken in rounding the intermediate result before converting back to the desired Q number. Using C, the operations are (note that here, Q refers to the fractional part's number of bits):

Addition

int16_t q_add(int16_t a, int16_t b)
{
    return a + b;
}

With saturation:

int16_t q_add_sat(int16_t a, int16_t b)
{
    int16_t result;
    int32_t tmp;

    tmp = (int32_t)a + (int32_t)b;
    if (tmp > 0x7FFF)
        tmp = 0x7FFF;
    if (tmp < -1 * 0x8000)
        tmp = -1 * 0x8000;
    result = (int16_t)tmp;

    return result;
}

Unlike floating point ±Inf, saturated results are not sticky and will unsaturate on adding a negative value to a positive saturated value (0x7FFF), and vice versa, in the implementation shown. In assembly language, the signed overflow flag can be used to avoid the typecasts needed for this C implementation.

Subtraction

int16_t q_sub(int16_t a, int16_t b)
{
    return a - b;
}

Multiplication

// precomputed rounding constant:
#define K (1 << (Q - 1))

// saturate to the range of int16_t
int16_t sat16(int32_t x)
{
    if (x > 0x7FFF)
        return 0x7FFF;
    else if (x < -0x8000)
        return -0x8000;
    else
        return (int16_t)x;
}

int16_t q_mul(int16_t a, int16_t b)
{
    int16_t result;
    int32_t temp;

    temp = (int32_t)a * (int32_t)b; // result type is operand's type
    // Rounding; mid values are rounded up
    temp += K;
    // Correct by dividing by the base and saturate the result
    result = sat16(temp >> Q);

    return result;
}

Division

int16_t q_div(int16_t a, int16_t b)
{
    /* pre-multiply by the base (upscale to Q16 so that the result will be in Q8 format) */
    int32_t temp = (int32_t)a << Q;
    /* Rounding: mid values are rounded up (down for negative values). */
    /* OR compare most significant bits, i.e. if (((temp >> 31) & 1) == ((b >> 15) & 1)) */
    if ((temp >= 0 && b >= 0) || (temp < 0 && b < 0)) {
        temp += b / 2;    /* OR shift 1 bit, i.e. temp += (b >> 1); */
    } else {
        temp -= b / 2;    /* OR shift 1 bit, i.e. temp -= (b >> 1); */
    }
    return (int16_t)(temp / b);
}

See also Fixed-point arithmetic Floating-point arithmetic References Further reading External links Computer arithmetic
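As a small usage sketch of the routines above (assuming Q is defined as 8, so all values are in Q8 format; the chosen inputs are illustrative):

#include <stdio.h>
#include <stdint.h>

#define Q 8  /* fraction bits assumed for this example */

/* ... q_add, q_mul and q_div exactly as defined above ... */

int main(void) {
    int16_t a = 384;  /* 1.5  in Q8 */
    int16_t b = 64;   /* 0.25 in Q8 */

    int16_t sum  = q_add(a, b);  /* 448  -> 1.75  */
    int16_t prod = q_mul(a, b);  /* 96   -> 0.375 */
    int16_t quot = q_div(a, b);  /* 1536 -> 6.0   */

    /* convert back by dividing by 2^Q = 256 */
    printf("%.4f %.4f %.4f\n", sum / 256.0, prod / 256.0, quot / 256.0);
    return 0;
}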
Q (number format)
Mathematics
1,589
3,610,259
https://en.wikipedia.org/wiki/Chip%20art
Chip art, also known as silicon art, chip graffiti or silicon doodling, refers to microscopic artwork built into integrated circuits, also called chips or ICs. Since ICs are printed by photolithography, not constructed a component at a time, there is no additional cost to include features in otherwise unused space on the chip. Designers have used this freedom to put all sorts of artwork on the chips themselves, from designers' simple initials to rather complex drawings. Given the small size of chips, these figures cannot be seen without a microscope. Chip graffiti is sometimes called the hardware version of software easter eggs. Prior to 1984, these doodles also served a practical purpose. If a competitor produced a similar chip, and examination showed it contained the same doodles, then this was strong evidence that the design was copied (a copyright violation) and not independently derived. A 1984 revision of US copyright law (the Semiconductor Chip Protection Act of 1984) made all chip masks automatically copyrighted, with exclusive rights to the creator, and similar rules apply in most other countries that manufacture ICs. Since an exact copy is now automatically a copyright violation, the doodles no longer serve a useful purpose as hardware watermarks. Creating chip art Integrated circuits are constructed from multiple layers of material, typically silicon, silicon dioxide (glass), and aluminum. The composition and thickness of these layers give them their distinctive color and appearance, creating an irresistible palette for IC design and layout engineers. The creative process involved in the design of these chips, a strong sense of pride in their work, and an artistic temperament combine to compel people to mark their work as their own. It is very common to find initials, or groups of initials, on chips; this is the design engineer's way of "signing" their work. Often this artistic instinct extends to the inclusion of small pictures or icons. These may be images of significance to the designers, comments related to the chip's function, inside jokes, or even satirical references. Because of the difficulty in verifying their existence, chip art has also been the subject of online hoaxes (e.g. the never-seen "bill sux" comment on a Pentium chip; the reputed "photo" showing the inscription is a hoax). The mass production of these works of art on the body of a commercial IC goes unnoticed by most observers and is discouraged by semiconductor corporations, primarily from the fear that the presence of the artwork (which is clearly unneeded) may interfere with some necessary function or design flow in the chip. Some laboratories have started collaborating with artists or directly producing books and exhibits with the micrographs of these chips. Such is the case of Harvard chemist George Whitesides, who collaborated with pioneer photographer Felice Frankel to publish On the Surface of Things, a highly praised photography book on experiments from (mostly) the Whitesides lab. Also, the laboratory of Albert Folch (who, perhaps not coincidentally, works in BioMEMS, the same field as George Whitesides) at the University of Washington's Bioengineering Dept. has a highly popular online gallery with more than 1,700 free BioMEMS-related chip art micrographs and has already produced three art exhibits in the Seattle area, with online sales. 
Notes References The Silicon Zoo - A portion of the Molecular Expressions web site from Florida State University, containing pictures of hundreds of discovered chip artworks. See also Watermarking Digital watermarking External links Yahoo directory of chip art pages and articles. Chipworks An entire silicon art gallery found on the chips analysed by Chipworks. Chip graffiti from the Smithsonian Museum of American History Art on the Head of a Microchip, Bruce Headlam, New York Times, 4 March 1999. Integrated circuits Visual arts genres Easter egg (media)
Chip art
Technology,Engineering
788
11,793,306
https://en.wikipedia.org/wiki/Septoria%20lycopersici
Septoria lycopersici is a fungal pathogen that is most commonly found infecting tomatoes. It causes one of the most destructive diseases of tomatoes and attacks tomatoes during any stage of development. Host and symptoms Septoria lycopersici infects tomato leaves via the stomata and also by direct penetration of epidermal cells. Symptoms generally include circular or angular lesions, most commonly found on the older, lower leaves of the plant. The lesions are generally 2–5 mm in diameter and have a greyish center with brown margins; they are a distinctive characteristic of S. lycopersici and contain pycnidia, the fruiting bodies of the fungus, in their centers, which aid in identifying the pathogen. When the lesions become numerous, the leaves often turn yellow, then brown, shrivelling up and eventually dropping off the plant altogether. Environment Septoria lycopersici prefers warm, wet, and humid conditions. Disease development occurs within a wide range of temperatures; however, the optimal temperatures lie between 20 and 25 degrees Celsius. High humidity and leaf wetness are also ideal for disease development. The initial source of inoculum for S. lycopersici results from overwintered resting structures, such as mycelium and conidia within pycnidia, which can be found on and in infected seed and within infected tomato debris left in the field. Spores spread to healthy tomato leaves by windblown water, splashing rain, irrigation, mechanical transmission, and through the activities of insects such as beetles, tomato worms, and aphids. Provided the environment is conducive for disease development, lesions usually develop within 5 days of infection. Management The effects of Septoria lycopersici can often be reduced through the implementation of a variety of management techniques. First and foremost, each season should begin as pathogen-free as possible. This can be accomplished by burning or destroying all infected plant tissues to prevent the spread of the primary inoculum. Crop rotation is also encouraged to avoid the re-infection of new foliage from overwintered inoculum. Improving air circulation around the plants, through separation of rows and use of cages, can also promote faster drying and reduce splashing, thus reducing the spread of fungal spores. Drip irrigation and mulching also help reduce splashing, thus decreasing further inoculum dispersal. Fungicidal sprays should also be considered; though they do not cure already infected leaves, they protect uninfected leaves from becoming infected. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Tomato diseases lycopersici Fungi described in 1881 Fungus species
Septoria lycopersici
Biology
571
27,397,118
https://en.wikipedia.org/wiki/Endocochlear%20potential
The endocochlear potential (EP; also called endolymphatic potential) is the positive voltage of 80–100 mV seen in the cochlear endolymphatic spaces. Within the cochlea, the EP varies in magnitude along its length. When a sound is presented, the endocochlear potential in the endolymph shifts in either the positive or the negative direction, depending on the stimulus; the change in the potential is called the summating potential. With the movement of the basilar membrane, a shear force is created and a small potential is generated, owing to the difference in potential between the endolymph (scala media, +80 mV) and the perilymph (vestibular and tympanic ducts, 0 mV). The EP is highest in the basal turn of the cochlea (95 mV in mice) and decreases in magnitude towards the apex (87 mV). In the saccule and utricle, the endolymphatic potential is about +9 mV, and about +3 mV in the semicircular canals. The EP is highly dependent on metabolism and ionic transport. An acoustic stimulus produces a simultaneous change in conductance at the membrane of the receptor cell. Because there is a steep gradient (150 mV), changes in membrane conductance are accompanied by a rapid influx and efflux of ions, which in turn produce the receptor potential. This is known as the Battery Hypothesis. The receptor potential for each hair cell causes a release of neurotransmitters at its basal pole, which elicits excitation of the afferent nerve fibres. Anatomy
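The 150 mV gradient mentioned above is conventionally understood as the endocochlear potential acting in series with the hair cell's negative intracellular resting potential. As a rough illustration, taking a typical textbook resting potential of about −70 mV (an assumed value, not stated in this article), the driving force across the apical membrane of the hair cell is approximately +80 mV − (−70 mV) = 150 mV.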
Endocochlear potential
Biology
339
27,258,886
https://en.wikipedia.org/wiki/Caterpillar%20tree
In graph theory, a caterpillar or caterpillar tree is a tree in which all the vertices are within distance 1 of a central path. Caterpillars were first studied in a series of papers by Harary and Schwenk. The name was suggested by Arthur Hobbs. As Harary and Schwenk colorfully write, "A caterpillar is a tree which metamorphoses into a path when its cocoon of endpoints is removed." Equivalent characterizations The following characterizations all describe the caterpillar trees: They are the trees for which removing the leaves and incident edges produces a path graph. They are the trees in which there exists a path that contains every vertex of degree two or more. They are the trees in which every vertex of degree at least three has at most two non-leaf neighbors. They are the trees that do not contain as a subgraph the graph formed by replacing every edge in the star graph K1,3 by a path of length two. They are the connected graphs that can be drawn with their vertices on two parallel lines, with edges represented as non-crossing line segments that have one endpoint on each line. They are the trees whose square is a Hamiltonian graph. That is, in a caterpillar, there exists a cyclic sequence of all the vertices in which each adjacent pair of vertices in the sequence is at distance one or two from each other, and trees that are not caterpillars do not have such a sequence. A cycle of this type may be obtained by drawing the caterpillar on two parallel lines and concatenating the sequence of vertices on one line with the reverse of the sequence on the other line. They are the trees whose line graphs contain a Hamiltonian path; such a path may be obtained by the ordering of the edges in a two-line drawing of the tree. More generally, the number of edges that need to be added to the line graph of an arbitrary tree so that it contains a Hamiltonian path (the size of its Hamiltonian completion) equals the minimum number of edge-disjoint caterpillars that the edges of the tree can be decomposed into. They are the connected graphs of pathwidth one. They are the connected triangle-free interval graphs. They are the n-vertex graphs whose adjacency matrices can be written in such a way that the ones of the upper triangular part form a path of length n − 1 beginning at the upper right corner and going down or left. Generalizations A k-tree is a chordal graph with exactly n − k maximal cliques, each containing k + 1 vertices; in a k-tree that is not itself a (k + 1)-clique, each maximal clique either separates the graph into two or more components, or it contains a single leaf vertex, a vertex that belongs to only a single maximal clique. A k-path is a k-tree with at most two leaves, and a k-caterpillar is a k-tree that can be partitioned into a k-path and some k-leaves, each adjacent to a separator k-clique of the k-path. In this terminology, a 1-caterpillar is the same thing as a caterpillar tree, and k-caterpillars are the edge-maximal graphs with pathwidth k. A lobster graph is a tree in which all the vertices are within distance 2 of a central path. Enumeration Caterpillars provide one of the rare graph enumeration problems for which a precise formula can be given: when n ≥ 3, the number of caterpillars with n unlabeled vertices is 2^(n−4) + 2^⌊(n−4)/2⌋. For n = 1, 2, 3, ... the numbers of n-vertex caterpillars are 1, 1, 1, 2, 3, 6, 10, 20, 36, 72, 136, 272, 528, 1056, 2080, 4160, ... . Computational complexity Finding a spanning caterpillar in a graph is NP-complete. 
A related optimization problem is the Minimum Spanning Caterpillar Problem (MSCP), in which each edge of a graph carries two costs, and the goal is to find a caterpillar tree that spans the input graph and has the smallest overall cost. Here the cost of the caterpillar is defined as the sum of the costs of its edges, where each edge takes one of its two costs based on its role as a leaf edge or an internal one. There is no f(n)-approximation algorithm for the MSCP unless P = NP, where f(n) is any polynomial-time computable function of n, the number of vertices of the graph. There is a parameterized algorithm that finds an optimal solution for the MSCP in bounded-treewidth graphs, so both the Spanning Caterpillar Problem and the MSCP have linear time algorithms when the graph is outerplanar, series-parallel, or a Halin graph. Applications Caterpillar trees have been used in chemical graph theory to represent the structure of benzenoid hydrocarbon molecules. In this representation, one forms a caterpillar in which each edge corresponds to a 6-carbon ring in the molecular structure, and two edges are incident at a vertex whenever the corresponding rings belong to a sequence of rings connected end-to-end in the structure. As one author in this area writes, "It is amazing that nearly all graphs that played an important role in what is now called "chemical graph theory" may be related to caterpillar trees." In this context, caterpillar trees are also known as benzenoid trees and Gutman trees, after the work of Ivan Gutman in this area. References External links Trees (graph theory) Mathematical chemistry
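As a quick check on the enumeration formula above, the following C sketch (illustrative, not taken from the cited literature) evaluates 2^(n−4) + 2^⌊(n−4)/2⌋ with integer shifts for n ≥ 4 and reproduces the counts listed above; for n = 1, 2, 3 there is a single caterpillar, which the code handles as a special case since the closed form would need fractional powers of two there.

#include <stdio.h>

/* Number of caterpillars on n unlabeled vertices:
   2^(n-4) + 2^floor((n-4)/2) for n >= 4; one caterpillar each for n = 1..3. */
static unsigned long caterpillars(int n) {
    if (n <= 3)
        return 1UL;
    return (1UL << (n - 4)) + (1UL << ((n - 4) / 2));
}

int main(void) {
    /* Prints: 1 1 1 2 3 6 10 20 36 72 136 272 528 1056 2080 4160 */
    for (int n = 1; n <= 16; n++)
        printf("%lu ", caterpillars(n));
    printf("\n");
    return 0;
}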
Caterpillar tree
Chemistry,Mathematics
1,175
256,322
https://en.wikipedia.org/wiki/Peter%20Guthrie%20Tait
Peter Guthrie Tait (28 April 1831 – 4 July 1901) was a Scottish mathematical physicist and early pioneer in thermodynamics. He is best known for the mathematical physics textbook Treatise on Natural Philosophy, which he co-wrote with Lord Kelvin, and for his early investigations into knot theory. His work on knot theory contributed to the eventual formation of topology as a mathematical discipline. His name is known in graph theory mainly for Tait's conjecture on cubic graphs. He is also one of the namesakes of the Tait–Kneser theorem on osculating circles. Early life Tait was born in Dalkeith on 28 April 1831, the only son of Mary Ronaldson and John Tait, secretary to the 5th Duke of Buccleuch. He was educated at Dalkeith Grammar School and then Edinburgh Academy, where he began his lifelong friendship with James Clerk Maxwell. He studied mathematics and physics at the University of Edinburgh, and then went to Peterhouse, Cambridge, graduating as senior wrangler and first Smith's prizeman in 1852. As a fellow and lecturer of his college he remained at the University for a further two years, before leaving to take up the professorship of mathematics at Queen's College, Belfast; there he made the acquaintance of Thomas Andrews, whom he joined in researches on the density of ozone and the action of the electric discharge on oxygen and other gases. Andrews also introduced him to Sir William Rowan Hamilton and quaternions. Middle years In 1860, Tait succeeded his old master, James D. Forbes, as professor of natural philosophy at the University of Edinburgh, and he occupied the Chair until shortly before his death. The first scientific paper under Tait's name alone was published in 1860. His earliest work dealt mainly with mathematical subjects, and especially with quaternions, of which he was the leading exponent after their originator, William Rowan Hamilton. He was the author of two textbooks on them: one an Elementary Treatise on Quaternions (1867), written with the advice of Hamilton, though not published till after Hamilton's death, and the other an Introduction to Quaternions (1873), in which he was aided by Philip Kelland (1808–1879), one of his teachers and colleagues at the University of Edinburgh. Quaternions were also one of the themes of his address as president of the mathematical and physical section of the British Association for the Advancement of Science in 1871. Tait also collaborated with Lord Kelvin on the Treatise on Natural Philosophy, published in 1867. Tait also produced original work in mathematical and experimental physics. In 1864, he published a short paper on thermodynamics, and from that time his contributions to that and kindred departments of science became frequent and important. In 1871, he emphasised the significance and future importance of the principle of the dissipation of energy (the second law of thermodynamics). In 1873 he took thermoelectricity for the subject of his discourse as Rede lecturer at Cambridge, and in the same year he presented the first sketch of his well-known thermoelectric diagram before the Royal Society of Edinburgh. Two years later, researches on "charcoal vacua" with James Dewar led him to see the true dynamical explanation of the Crookes radiometer in the large mean free path of the molecules of the highly rarefied air. From 1879 to 1888, he engaged in difficult experimental investigations. These began with an inquiry into what corrections were required for thermometers operating at great pressure. 
This was for the benefit of thermometers employed by the Challenger expedition for observing deep-sea temperatures, and the work was extended to include the compressibility of water, glass, and mercury. It led to the first formulation of the Tait equation, which is widely used to fit liquid density to pressure. Between 1886 and 1892 he published a series of papers on the foundations of the kinetic theory of gases, the fourth of which contained what was, according to Lord Kelvin, the first proof ever given of the Waterston–Maxwell theorem (the equipartition theorem) of the average equal partition of energy in a mixture of two gases. About the same time he carried out investigations into impact and its duration. Many other inquiries conducted by him might be mentioned, and some idea may be gained of his scientific activity from the fact that a selection from his papers, published by the Cambridge University Press, fills three large volumes. This mass of work was done in the time he could spare from his professorial teaching in the university. For example, in 1880 he worked on the four color theorem and proved that it is true if and only if no snarks are planar. Later years In addition, he was the author of a number of books and articles. Of the former, the first, published in 1856, was on the dynamics of a particle; there afterwards followed a number of concise treatises on thermodynamics, heat, light, properties of matter and dynamics, together with an admirably lucid volume of popular lectures on Recent Advances in Physical Science. With Lord Kelvin, he collaborated in writing the well-known Treatise on Natural Philosophy. "Thomson and Tait", as it is familiarly called ("T and T′" was the authors' own formula), was planned soon after Lord Kelvin became acquainted with Tait, on the latter's appointment to his professorship in Edinburgh, and it was intended to be an all-comprehensive treatise on physical science, the foundations being laid in kinematics and dynamics, and the structure completed with the properties of matter, heat, light, electricity and magnetism. But the literary partnership ceased after about eighteen years, when only the first portion of the plan had been completed, because each of the members felt he could work to better advantage separately than jointly. The friendship, however, endured for the remaining twenty-three years of Tait's life. Tait collaborated with Balfour Stewart on The Unseen Universe, which was followed by Paradoxical Philosophy. It was in his 1875 review of The Unseen Universe that William James first put forth his Will to Believe doctrine. Tait's articles include those he wrote for the ninth edition of the Encyclopædia Britannica on light, mechanics, quaternions, radiation, and thermodynamics, and the biographical notices of Hamilton and James Clerk Maxwell. Death He died in Edinburgh on 4 July 1901, aged 70. He is buried in the second terrace down from Princes Street in the burial ground of St John's Episcopal Church, Edinburgh. Topology The Tait conjectures are three conjectures made by Tait in his study of knots. They involve concepts in knot theory such as alternating knots, chirality, and writhe. All of the Tait conjectures have been solved, the most recent being the flyping conjecture, proved by Morwen Thistlethwaite and William Menasco in 1991. Publications Dynamics of a Particle (1856) Treatise on Natural Philosophy (1867); v. 1 and v. 2 (PDF/DjVu at the Internet Archive). 
An elementary treatise on quaternions (1867); PDF/DjVu copy of the 1st ed. at the Internet Archive and PDF/DjVu copy of the 3rd ed. at the Internet Archive. Elements of Natural Philosophy (1872); (PDF/DjVu at the Internet Archive); a "non-mathematical portion of Treatise on Natural Philosophy". Sketch of Thermodynamics (1877); PDF/DjVu copy at the Internet Archive. Recent Advances in Physical Science (1876); PDF/DjVu copy at the Internet Archive. Heat (1884); PDF/DjVu copy at the Internet Archive. Light (1884); PDF/DjVu copy at the Internet Archive. Properties of Matter (1885); PDF/DjVu copy at the Internet Archive. Dynamics (1895); PDF/DjVu copy at the Internet Archive. The Unseen Universe (1875; new edition, 1901) Scientific papers vol. 1 (1898–1900); PDF/DjVu copy at the Internet Archive. Scientific papers vol. 2 (1898–1900); PDF/DjVu copy at the Internet Archive. Private life In 1857 Tait married Margaret Archer Porter (1839–1926). She was the sister of (1) William Archer Porter, a lawyer and educationist who served as the Principal of Government Arts College, Kumbakonam, and as tutor and secretary to the Maharaja of Mysore, (2) James Porter, Master of Peterhouse, Cambridge, and (3) Jane Bailie Porter, who married Alexander Crum Brown, the Scottish organic chemist. Tait was an enthusiastic golfer and, of his seven children, two, Frederick Guthrie Tait (1870–1900) and John Guthrie Tait (1861–1945), went on to become gifted amateur golf champions. (In 1891, Tait invoked the Magnus effect to explain the influence of spin on the flight of a golf ball.) John Guthrie Tait was an all-round sportsman and represented Scotland at international level in rugby union. Tait's daughter, Edith, married Rev. Harry Reid, who later became Bishop of Edinburgh. Another son, William, was a civil engineer. Recognition Tait was a lifelong friend of James Clerk Maxwell, and a portrait of Tait by Harrington Mann is held in the James Clerk Maxwell Foundation museum in Edinburgh. There are several portraits of Tait by Sir George Reid. One, painted about 1883, is owned by the National Galleries of Scotland, to which it was given by the artist in 1902. Another portrait was unveiled at Peterhouse, Cambridge in October 1902, paid for by the Master and Fellows of Peterhouse, where Tait had been an Honorary Fellow. One of the chairs in the Department of Physics at the University of Edinburgh is the Tait professorship. Peter Guthrie Tait Road at the University of Edinburgh King's Buildings complex is named in his honour. He was also given the following honours: Fellow of the Royal Society of Edinburgh; General Secretary of the Royal Society of Edinburgh, 1879 until 1901; Gunning Victoria Jubilee Prize; Keith Prize (twice); Royal Medal of the Royal Society of London, in 1886; honorary degrees from the University of Glasgow and the University of Ireland; and honorary membership of the academies of Denmark, Holland, Sweden and Ireland. See also Dowker-Thistlethwaite notation Four color theorem Homoeoid Medial graph Nabla symbol References Further reading External links Pritchard, Chris. "Provisional Bibliography of Peter Guthrie Tait". British Society for the History of Mathematics. An Elementary Treatise on Quaternions, 1890, Cambridge University Press. Scanned PDF, HTML version (in progress) "Knot Theory" Website of Andrew Ranicki in Edinburgh. 
University of Edinburgh website, Life and Scientific Work of Peter Guthrie Tait, online book by Cargill Gilston Knott (1898) Scottish physicists Scottish Episcopalians Thermodynamicists Fellows of the Royal Society of Edinburgh Alumni of the University of Edinburgh Alumni of Peterhouse, Cambridge Fellows of Peterhouse, Cambridge People educated at Edinburgh Academy 1831 births 1901 deaths Royal Medal winners Senior Wranglers People from Dalkeith Mathematical physicists Academics of Queen's University Belfast Academics of the University of Edinburgh 19th-century Scottish mathematicians 20th-century Scottish mathematicians
Peter Guthrie Tait
Physics,Chemistry
2,318
69,264,706
https://en.wikipedia.org/wiki/Micronekton
Micronekton are a group of organisms of 2 to 20 cm in size which are able to swim independently of ocean currents. The word 'nekton', derived from the Greek νήκτον (translit. nekton), meaning "to swim", was coined by Ernst Haeckel in 1890. Overview Micronekton organisms are ubiquitous in the world's oceans and can be divided into broad taxonomic groups. The distinction between micronekton and micro-, meso- and macro-zooplankton is based on size: micronekton typically ranges in size from 2 to 20 cm, macro-zooplankton from 2 mm to 2 cm, meso-zooplankton from 0.2 to 2 mm, and micro-zooplankton from 20 μm to 0.2 mm. Micronekton represents 3.8–11.8 billion tons of mesopelagic fishes worldwide, approximately 380 million tons of Antarctic krill in the Southern Ocean, and a global estimated biomass of at least 55 million tons for a single group of ommastrephid squid. This diverse assemblage is distributed between the sea surface and approximately 1000 m depth (the mesopelagic zone). Micronekton shows a diverse range of migration patterns, including diel vertical migration over several hundred metres, from below 400 m (deeper layers) to the top 200 m (shallower layers) of the water column at dusk and inversely at dawn; reverse migration (organisms stay in the shallow layer during the day); mid-water migration (organisms stay in the intermediate layer, i.e. between 200 and 400 m); or non-migration (organisms stay in the deep layer at night and the shallow layer during the day). Micronekton plays a key role in the oceanic biological pump by transporting organic carbon from the euphotic zone to deeper parts of the oceans. It is also preyed upon by various predators such as tunas, billfishes, sharks, marine birds and marine mammals. Taxonomic groups Generally, the taxonomy of existing micronekton worldwide is not yet complete, owing to the paucity of faunal surveys, net avoidance (organisms sensing the approach of the net and swimming out of its path), escapement (animals escaping through the meshes after entering the net), and the limitations of sampling gear. New species are continually being discovered and described in new regions of the world's oceans. Crustaceans are highly diverse, with a single group, the decapods, consisting of 15,000 species in around 2,700 genera. Euphausiids consist of 10 genera with a total of 85 species. Hyperiids are also widely distributed in the world's oceans, with approximately 233 species across 72 genera. Cephalopods comprise fewer than 1000 species distributed across 43 families. They occur in all marine habitats: benthic (burrowing on coral reefs, grass flats, sand, mud and rocks), epibenthic, pelagic and epipelagic, in bays, seas and the open ocean. Bristlemouths (Gonostomatidae), largely Cyclothone, account for more than 50% of the total vertebrate abundance between 100 and 1000 m. Twenty-one species of bristlemouths have been described globally. Lanternfishes are the second most abundant marine vertebrates, having diversified into 252 species. Hatchetfishes (Sternoptychidae) and dragonfishes (Stomiidae) are other common mesopelagic taxa in the deep-sea environment. Anatomy and physiology Crustaceans The crustacean body is divided into three sections: head, thorax and abdomen. They typically have two pairs of antennae and a varying number of pairs of thoracic legs called pereiopods (or thoracopods). Crustacean species such as Systelaspis debilis and Oplophorus spinosus have specific visual pigments thought to facilitate congener recognition. 
The oplophorid genera Systellaspis, Acanthephyra and Oplophorus secrete luminous fluids as part of their distress response. Cephalopods Cephalopods are soft-bodied animals with a cranium and, in most forms, a mantle/fin (cuttlebone or gladius) as primary skeletal features. They have highly developed central nervous systems with well-organized eyes. Cephalopods can be divided into four main groups: squids, cuttlefishes, octopuses and chambered nautiluses, which have distinguishable morphological features. Squids can have chromatic vision through the presence of various visual pigments. Mesopelagic fishes Few anatomical and physiological studies of mesopelagic fishes have been conducted, except for research on the swimbladder of these organisms. The deepest-living mesopelagic fishes have no swimbladder. Most species inhabiting the upper mesopelagic zone have gas-filled swimbladders (which aid in buoyancy). Other species have a gas-filled swimbladder when young, which becomes filled with fat with age. Polyunsaturated wax esters are common in the muscle or adipose tissue of lanternfishes, posing an obstacle to human consumption. Lanternfishes possess retinas with a single pigment capable of absorbing bioluminescent light ranging from 480 to 492 nm at a distance of up to 30 m in the deep ocean. Bioluminescence Bioluminescence is the production and emission of light by a living organism as the result of a natural chemical reaction, typically the oxidation of luciferin substrates catalysed by the luciferase enzyme in the presence of oxygen. Bioluminescence in animals is used to communicate, defend against predation, and find or attract prey. Within teleosts, it is mainly generated endogenously (e.g. the photophores of lanternfishes) or through bacterially mediated symbiosis (e.g. most anglerfish lures, flashlightfish subocular organs). It is common in micronekton (including many types of planktonic crustaceans, mesopelagic fishes such as myctophids/lanternfishes and stomiiformes, and squids). Many mesopelagic species (midwater squids, fish and shrimps) have countershading ventral bioluminescent photophores which serve to match the intensity of downwelling light so as to hide from predators lurking below. To conceal itself with bioluminescence, the animal must precisely match its luminescence to the intensity, angular distribution and color of the downwelling light. Stomiiformes have barbels, ventral arrays, and red and blue suborbital photophores. Lanternfishes have also developed lateral photophores on the sides of their bodies (for species recognition) and sexually dimorphic luminescent organs on the tail or head. The sexual dimorphism of bioluminescent signalling and sensory systems may help facilitate sexual encounters in the deep ocean. At the onset of sexual maturity, secondary light organs develop in some of the arms of certain female squids, e.g. cranchiids (Liocranchia and Leachia pacifica), for use in sexual recognition. Females of the octopod Japetella develop a ring of bioluminescent tissue around their mouth just prior to mating, and this tissue atrophies once the eggs are spent. In the squid Ctenopteryx siculus, males develop a large photophore within the posterior region of their body at sexual maturity. Bioluminescent signaling by micronekton also carries some degree of risk, for it may expose the organism to a predator. 
Ecology Foraging patterns Crustaceans show omnivorous feeding patterns, since they prey on zooplankton such as euphausiids and copepods and are also known for occasional herbivory. All squids have carnivorous foraging patterns. Most mesopelagic fishes are carnivores; some, for example Ceratoscopelus warmingii, have partly herbivorous feeding strategies and can thus be classified as omnivores. Mesopelagic fishes mostly feed at night or dusk, with a few species being acyclic. Role in food webs Micronekton plays an important role in oceanic food webs by connecting top predators, such as tunas and billfishes, to lower-trophic-level zooplankton. Crustaceans, cephalopods and mesopelagic fishes generally have overlapping isotopic niche widths, suggesting some degree of similarity in their diets, with a low level of resource partitioning and a high level of competition among these broad categories. In low-productivity environments, predators such as swordfish were shown to forage on larger-sized squids, since micronekton prey density is reduced and the costs associated with finding prey are higher than the energy intake when consuming smaller-sized micronekton. Crustaceans and mesopelagic fishes generally occupy trophic level 3, smaller-sized squids occupy trophic levels 3 to 4, and large nektonic squids such as Ommastrephes bartramii occupy trophic level 5. Behaviour Swarming Crustaceans, such as krill, may form several aggregation types, from high to low densities, distributed throughout the water column and influenced by current velocities, direction, mean depth, and predator foraging. Cephalopods may form large schools of neritic and oceanic species with millions of individuals, or small schools with a few dozen individuals, or may be found as isolated territorial individuals. Some mesopelagic fishes form schools or are aggregated in scattering layers, while others are dispersed. Swimming Krill individuals of 45.4 mm in length can maintain horizontal sustained swimming speeds of 0.2 cm s−1 and are able to swim into currents for several hours at speeds of 0.17 cm s−1. Krill are able to dart rapidly backwards to escape predators. Cephalopods such as Illex illecebrosus are able to swim continuously. During daytime, mesopelagic fish often hang motionless in the water column, with head up or down, in a state of torpor. Myctophids have sustained swimming speeds of approximately 75 cm s−1, with larger individuals swimming faster than smaller ones. At night, fishes in the upper layers of the water column are active and swim horizontally, while those which stayed at depth are immobile and vertically oriented. Mesopelagic fishes are capable of rapid evasive movements to escape predators. However, crustaceans, cephalopods and mesopelagic fishes can all adapt their swimming speeds, swimming fastest during escape, at intermediate speed during foraging, and slowest during migration. Reproduction and growth rate Sexual differences in the gonads of krill first occur in subadults (> 24 mm), and secondary sexual (external) characteristics develop progressively in the late subadult stage (35 mm for females and 43 mm or larger for males). The reproductive cycle of krill usually spans from December to April. 
Cephalopods have a wide range of reproductive strategies and may spawn once or more than once, the latter including: (1) polycyclic spawning, with eggs laid in separate batches during the spawning season and growth between the production of egg batches; (2) multiple spawning, with group-synchronous ovulation, monocyclic spawning and growth between egg batches; (3) intermittent terminal spawning, with group-synchronous ovulation, monocyclic spawning and no growth between egg batches; and (4) continuous spawning, with asynchronous ovulation, monocyclic spawning and growth between egg batches. Cephalopods typically grow fast and mature rapidly, with their life cycle generally terminating with reproduction. The age of mesopelagic fishes can be determined from their otoliths, and their growth rate can be calculated from the von Bertalanffy growth equation. Most mesopelagic fishes become sexually mature one year after hatching in highly productive areas, and after more than two years in areas of low productivity. Most tropical myctophids and smaller gonostomatids are believed to have a one-year life cycle, compared to mesopelagic fishes from colder waters, which have a longer life cycle. In temperate and subtropical regions, myctophids spawn mainly from late winter to summer. The spawning season of gonostomatids differs among species, with Sigmops elongatus spawning in spring and summer, Gonostoma ebelingi in early fall, Gonostoma atlanticum during all seasons in the subtropical central Pacific, and Gonostoma gracile in fall and winter in the western Pacific. Other mesopelagic fishes such as Maurolicus muelleri, Vinciguerria nimbaria and Vinciguerria poweriae spawn mainly in spring and summer. Vertical and horizontal distributions Vertical migration The vertical migration patterns of micronekton are species dependent. Most micronekton show an extensive diel vertical migration, whereby they are concentrated below 400 m of the water column during the day, migrate to the top 200 m at dusk, and migrate in the opposite direction, back below 400 m, at dawn. Diel vertical migration of the mesopelagic community represents one of the Earth's largest daily animal migrations. The change in light intensity is believed to be the stimulus triggering this vertical movement, with the main biological reasons being enhanced foraging opportunities at the surface and a lower predation risk at night than in daytime. Migrant micronekton may be following the movements of their main prey, which undergo diel vertical migration at dusk. Upward and downward migrations seem to occur as a series of events by different micronekton groups, with, for example, smaller fishes, which swim at lower speeds, leaving their location earlier than larger fishes. Other micronekton species, however, are non-migrating or weakly migrating and hence stay below 400 m depth at dusk; examples include members of the genus Cyclothone and some sternoptychids. Mid-water migration, i.e. migration to the lower limit of the shallow scattering layer (at approximately 200 m depth) at nighttime and back to 400 m before daytime, is also seen in some taxa. Ontogenic vertical migration Almost all mesopelagic species are believed to change their vertical distribution range during their life history, with younger individuals generally inhabiting shallower depths than older ones. Horizontal distribution The distributional patterns of micronekton generally seem to coincide with water mass distribution, mesoscale oceanographic processes such as eddies, and the presence of seamounts. 
In a cyclonic mesoscale eddy in the South West Indian Ocean, micronekton showed a reverse migration pattern, being located in the top 200 m of the water column during daytime. Cyclonic eddies have also shown greater micronekton densities than anti-cyclonic eddies. Mesoscale cyclonic eddies may hence create favorable conditions, such as enhanced foraging opportunities, for micronekton. Most micronekton species are oceanic, but neritic patterns have also been observed.
Some micronekton taxa, such as Diaphus suborbitalis, preferentially associate with seamounts. Large populations of D. suborbitalis have been reported off the slopes of the Equator, La Pérouse and MAD-Ridge seamounts in the Indian Ocean. They are located at depths around the seamounts' flanks during the day, and ascend in dense schools to the upper portion of the flanks and over the summits at dusk. Fishes may interact with seamounts in different ways:
diurnal vertical migrants that rise to the surface layer at dusk and are advected over the seamount summit by surface currents,
weakly migrant or non-migrant fishes that are not able to counter strong currents and are hence advected over the benthopelagic zone around seamounts,
adults of meso- and bathypelagic species that live over seamount summits to increase their feeding efficiency and reduce predation risks,
"pseudo-oceanic" or "nerito-pelagic" species that preferentially associate with seamounts and resist advection off the pinnacles.
Some micronekton taxa may conform to the "feed-rest" hypothesis, whereby they rest in the quiescent shelter offered by the seamount topography and sense the environment around the seamount to take advantage of flow-advected prey, while avoiding advective loss to strong currents. Some cephalopod species may use seamounts as spawning and foraging grounds.
Nutritional value
The high protein and low fat content of cephalopods makes them interesting components of human diets. Mesopelagic fishes are good sources of the "omega-3" n-3 PUFAs (polyunsaturated fatty acids) EPA (eicosapentaenoic acid) and DHA (docosahexaenoic acid), making them attractive candidates as dietary supplements for human consumption, as fishmeal in aquaculture farms, or for use as nutraceuticals.
Trace element concentrations
Compared to pelagic species such as tuna, sharks, and marine mammals, trace element concentrations in micronekton have been poorly studied. Trace elements are defined as those occurring in trace amounts (typically < 0.01% of the organism), excluding the macronutrients calcium, magnesium, potassium and sodium. Some trace elements, such as iron, manganese, selenium, and zinc, are essential to the normal functioning of an organism. Cadmium, lead, and mercury, however, are non-essential elements (i.e., with no known biological function). Other elements, such as copper, zinc and selenium, are important in metabolic processes but toxic in high doses. Trace elements, such as mercury, can bioaccumulate to harmful levels when they are stored in the tissues of organisms faster than they can be detoxified and/or excreted. Marine vertebrates have specific proteins, metallothioneins, which bind trace elements such as cadmium, copper and zinc when in excess. The trace element selenium may reduce the availability of methylmercury by sequestering mercury, thus decreasing its toxicity. Trace element concentrations vary between micronekton broad categories and between metals, with crustaceans having higher levels of arsenic, copper, and zinc compared to mesopelagic fishes.
Copper and zinc are both known to associate with the respiratory pigment hemocyanin in crustaceans. Cephalopods are known to bioaccumulate higher cadmium, copper and zinc concentrations in their digestive glands compared to fishes. Myctophids sampled in the Indian Ocean and Gulf of California were enriched in iron, zinc and cobalt. The mesopelagic fishes Chauliodus sloani, Sigmops elongatus, and Ceratoscopelus warmingii of the South West Indian Ocean, and of the Sulu, Celebes and Philippine Seas (South China), show a similar range of values of arsenic, cadmium, cobalt, copper, chromium, manganese, lead, selenium, silver, and zinc, suggesting that these organisms have similar biochemical processes, irrespective of their location. Some micronekton organisms showed trace element concentrations above the permitted levels determined by European and worldwide legislation, and will hence have to be regularly monitored for their trace element content so that they do not pose a threat to human consumers.
Commercial interests
There is growing interest in the commercial exploitation of micronekton for human consumption, as fishmeal in aquaculture farms and for nutraceutical products. Cephalopod fisheries already exist, targeting a wide range of species, with more than half of the total catch taken in the northeast and northwest Pacific and the northeast and northwest Atlantic. The fisheries target neritic and oceanic squids (e.g., Todarodes, Loligo, Illex, etc.), cuttlefish (e.g., Sepia, Sepiella, and allied genera), and octopuses (Octopus and Eledone), using several principal types of fishing methods and gear.
Interest in mesopelagic fish exploitation is also rapidly growing due to their sheer number and ubiquitous nature. The mesopelagic fish stock in the Arabian Sea has been estimated at 20-100 million tons, with a potential yield of approximately 200,000 tons per year, and the total global mesopelagic fish biomass at 2-19.5 gigatons between 70°N and 70°S. Catches of mesopelagic fishes for scientific surveys are made using various types of trawls (Isaacs-Kidd midwater trawl, Cobb trawl, rectangular midwater trawl, Hokkaido University Frame Trawl, International Young Gadoid Pelagic Trawl, etc.), with mouth areas of 1–10 m2. Experiments have been conducted with commercial trawls having large mouth openings (100–1000 m2) and large meshes (e.g., 20 cm) in the front part, gradually decreasing towards the codend. These commercial-sized trawls catch larger mesopelagic fishes but poorly sample small Cyclothone species.
References
External links
Plankton Crustaceans Cephalopods Bioluminescence
Micronekton
Chemistry,Biology
4,478
22,821,969
https://en.wikipedia.org/wiki/Spherical%20code
In geometry and coding theory, a spherical code with parameters (n,N,t) is a set of N points on the unit hypersphere in n dimensions for which the dot product of unit vectors from the origin to any two points is less than or equal to t. The kissing number problem may be stated as the problem of finding the maximal N for a given n for which a spherical code with parameters (n,N,1/2) exists. The Tammes problem may be stated as the problem of finding a spherical code with minimal t for given n and N. External links A library of putatively optimal spherical codes Coding theory
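The defining condition is easy to test numerically. The following Python sketch (our illustration, not part of the source; all function names are ours) checks the (n, N, t) condition for the six unit vectors ±e_i in R3, which form a (3, 6, 0) spherical code:

```python
import itertools

def is_spherical_code(points, t, tol=1e-9):
    """Check the (n, N, t) condition: every pair of distinct unit
    vectors has dot product at most t."""
    for p, q in itertools.combinations(points, 2):
        if sum(a * b for a, b in zip(p, q)) > t + tol:
            return False
    return True

# The six unit vectors +/-e_i in R^3, the vertices of a regular octahedron:
octahedron = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(is_spherical_code(octahedron, 0.0))   # True: a (3, 6, 0) code
print(is_spherical_code(octahedron, -0.5))  # False: orthogonal pairs have dot product 0
```

Since a (3, N, 1/2) code corresponds to an arrangement of touching spheres, the same six points also witness that the kissing number in 3 dimensions is at least 6.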
Spherical code
Mathematics
131
27,887,460
https://en.wikipedia.org/wiki/Cloud%20storage%20gateway
A cloud storage gateway is a hybrid cloud storage device, implemented in hardware or software, which resides at the customer premises and translates cloud storage APIs such as SOAP or REST to block-based storage protocols such as iSCSI or Fibre Channel, or to file-based interfaces such as NFS or SMB. According to a 2011 report by Gartner Group, cloud gateways were expected to increase the use of cloud storage by lowering monthly charges and easing concerns about data security.
Technology
Features
Modern applications (aka "cloud-native applications") use network-attached storage by means of REST and SOAP over Hypertext Transfer Protocol. The related storage is provided by arrays that offer these interfaces as object storage. Classic applications use network-attached storage by means of Network File System (NFS), iSCSI or Server Message Block (SMB). To make use of all the advantages of object storage, existing applications would need to be rewritten, and new applications must be made object-storage aware, which is not the case by default. This problem is addressed by cloud storage gateways: they expose object storage via classic storage protocols like NFS or SMB (and a very few offer iSCSI as well). As a rule of thumb, classic applications can thus use cloud-native object storage through a cloud storage gateway.
Functionality
In enterprise infrastructures, NFS is mainly used by Linux systems, whereas Windows systems use SMB. Object storage holds data in the form of objects rather than files. All cloud storage gateways must therefore cache incoming files and destage them to object storage in a later step. The time of destaging is subject to the gateway, and a policy engine allows functions like:
pinning = bind specific files to the cache and destage them only for mirroring purposes
content-based destaging = move only files with specific characteristics to object storage, e.g. all MP3 files
multi-cloud mirroring = mirror all files to two different object stores
least recently used = fill the local cache to its maximum, then move files to object storage and delete them from the cache following an LRU algorithm
encrypt prior to destage = files are encrypted on the cloud storage gateway and destaged to object storage in encrypted form
compress and/or deduplicate prior to destage = files are deduplicated and/or compressed before destaging
backup data in a native backup format
Combinations of these functions are usual.
Extensions
Nearly all object storage gateways support the Amazon S3 protocol as a quasi-standard. Some also offer Microsoft Azure Blob, Google Storage, or OpenStack Swift. Most gateways support public cloud storage, e.g. from Amazon or Microsoft, as an object store and Dropbox as a file drive store; many vendors also support private cloud storage, both off-premises and on-premises.
Deployment methods
There are multiple ways to deploy such gateways, and some vendors support several variants across their product line:
bare-metal hardware appliance
software appliance supporting different hypervisors
software on top of an operating system, aka FUSE-based
Software appliances as well as FUSE-based gateways can be installed on public cloud infrastructures.
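As a hedged sketch of the caching behaviour described above (not a vendor implementation; the class name, method names, and the dict standing in for an object store are our own inventions), the following Python fragment shows how a policy engine might combine pinning with least-recently-used destaging:

```python
from collections import OrderedDict

class GatewayCache:
    """Minimal, illustrative sketch of a gateway cache with a policy engine."""

    def __init__(self, capacity, object_store):
        self.capacity = capacity        # max number of cached files
        self.cache = OrderedDict()      # path -> bytes, kept in LRU order
        self.pinned = set()             # "pinning": keep these in the cache
        self.store = object_store      # stand-in for an S3-style object store

    def write(self, path, data):
        self.cache[path] = data
        self.cache.move_to_end(path)    # mark as most recently used
        self._destage_if_needed()

    def pin(self, path):
        self.pinned.add(path)

    def _destage_if_needed(self):
        # Least recently used: move cold files to the object store
        # once the cache is full; pinned files are only mirrored.
        for path in list(self.cache):
            if len(self.cache) <= self.capacity:
                break
            if path in self.pinned:
                self.store[path] = self.cache[path]       # mirror, keep cached
                continue
            self.store[path] = self.cache.pop(path)       # destage and evict

store = {}
gw = GatewayCache(capacity=2, object_store=store)
gw.write("/a.mp3", b"...")
gw.write("/b.doc", b"...")
gw.write("/c.doc", b"...")   # /a.mp3, the coldest file, is destaged
print(sorted(store), sorted(gw.cache))
```

A real gateway would of course add the other policies (mirroring, encryption, compression) as further steps in the same destage path.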
Advantages
Cloud storage gateways avoid the need to change existing applications by providing a standard interface. Additionally, IT users are accustomed to existing protocols like SMB or NFS, so they can make use of cloud storage while keeping their existing infrastructure (including e.g. Active Directory, LDAP integration, file share functions, etc.).
While cloud storage gateways initially covered only a niche, they gained more traction with the rise of multi-cloud technologies. As an example, a cloud storage gateway can run as a software appliance on top of a public or private cloud infrastructure and offer Docker volume drivers, enabling automatic provisioning of the storage used by containers in a consistent form. Such gateways use the hypervisor's disks as a cache only, and destage data to the underlying cloud storage following a least-recently-used algorithm.
The de facto standard for object storage is Amazon S3; it has the most popularity and the largest installed capacity among object stores. Almost every object storage vendor can (and most of them do) offer Amazon S3 storage, even though there is no real "standard" S3 API: every vendor implements the S3 API slightly differently (as seen from the different cloud storage gateway vendors supporting the "specific" APIs of the different object storage vendors). Since 2018, an increasing number of cloud storage gateways hide this complexity by offering S3 on the northbound side (as in networking terminology: southbound refers to the storage used by a gateway, whereas northbound refers to the storage provided by the gateway). As such, one may utilize a richer S3 implementation on the northbound side than the southbound storage supports.
Disadvantages
By using cloud storage gateways, the complexity of using object storage is hidden, but so are some of the advantages of object storage:
the ability to scale horizontally
the ability to add highly efficient metadata to the data content
the ability to use the extended WORM and archiving capabilities of object storage
As applications change to cloud-aware applications (aka cloud-native applications), cloud storage gateways will change from multiprotocol gateways to multi-cloud gateways, providing access to multiple cloud providers as well as multiple southbound protocols, and will act as relays between different clouds.
Market
The cloud storage gateway market was valued at over USD 2 billion and was predicted to reach USD 11 billion by 2026, based on a report by the market research firm Mordor Intelligence.
See also
Cloud computing
CTERA Networks
Panzura
Nasuni
Oracle Zero Data Loss Recovery Appliance
References
Cloud clients Cloud storage gateways Data security Data protection
Cloud storage gateway
Engineering
1,217
884,352
https://en.wikipedia.org/wiki/Friedrichs%20extension
In functional analysis, the Friedrichs extension is a canonical self-adjoint extension of a non-negative densely defined symmetric operator. It is named after the mathematician Kurt Friedrichs. This extension is particularly useful in situations where an operator may fail to be essentially self-adjoint or whose essential self-adjointness is difficult to show.
An operator T is non-negative if ⟨Tξ, ξ⟩ ≥ 0 for all ξ in dom T.
Examples
Example. Multiplication by a non-negative function on an L2 space is a non-negative self-adjoint operator.
Example. Let U be an open set in Rn. On L2(U) we consider differential operators of the form
[Tφ](x) = −Σi,j ∂/∂xi [ai j(x) ∂φ/∂xj(x)],  x ∈ U,
where the functions ai j are infinitely differentiable real-valued functions on U. We consider T acting on the dense subspace of infinitely differentiable complex-valued functions of compact support, in symbols Cc∞(U) ⊆ L2(U). If for each x ∈ U the n × n matrix [ai j(x)] is non-negative semi-definite, then T is a non-negative operator. This means (a) that the matrix is hermitian and (b) that Σi,j ai j(x) ci c*j ≥ 0 for every choice of complex numbers c1, ..., cn. This is proved using integration by parts.
These operators are elliptic, although in general elliptic operators may not be non-negative. They are however bounded from below.
Definition of Friedrichs extension
The definition of the Friedrichs extension is based on the theory of closed positive forms on Hilbert spaces. If T is non-negative, then
Q(ξ, η) = ⟨Tξ, η⟩ + ⟨ξ, η⟩
is a sesquilinear form on dom T and
Q(ξ, ξ) = ⟨Tξ, ξ⟩ + ⟨ξ, ξ⟩ ≥ ‖ξ‖².
Thus Q defines an inner product on dom T. Let H1 be the completion of dom T with respect to Q. H1 is an abstractly defined space; for instance its elements can be represented as equivalence classes of Cauchy sequences of elements of dom T. It is not obvious that all elements in H1 can be identified with elements of H. However, the following can be proved:
The canonical inclusion dom T → H extends to an injective continuous map H1 → H. We regard H1 as a subspace of H.
Define an operator A by
dom A = {ξ ∈ H1 : φξ : η ↦ Q(ξ, η) is bounded}.
In the above formula, bounded is relative to the topology on H1 inherited from H. By the Riesz representation theorem applied to the linear functional φξ extended to H, there is a unique Aξ ∈ H such that
Q(ξ, η) = ⟨Aξ, η⟩ for all η ∈ H1.
Theorem. A is a non-negative self-adjoint operator such that T1 = A − I extends T.
T1 is the Friedrichs extension of T.
Another way to obtain this extension is as follows. Let L : H1 → H be the bounded inclusion operator. The inclusion is a bounded injective map with dense image. Hence LL* : H → H is a bounded injective operator with dense image, where L* is the adjoint of L as an operator between abstract Hilbert spaces. Therefore the operator A = (LL*)^(−1) is a non-negative self-adjoint operator whose domain is the image of LL*. Then A − I extends T.
Krein's theorem on non-negative self-adjoint extensions
M. G. Krein has given an elegant characterization of all non-negative self-adjoint extensions of a non-negative symmetric operator T. If T, S are non-negative self-adjoint operators, write T ≤ S if, and only if, (S + I)^(−1) ≤ (T + I)^(−1).
Theorem. There are unique self-adjoint extensions Tmin and Tmax of any non-negative symmetric operator T such that Tmin ≤ Tmax, and every non-negative self-adjoint extension S of T is between Tmin and Tmax, i.e. Tmin ≤ S ≤ Tmax.
See also
Energetic extension
Extensions of symmetric operators
Notes
References
N. I. Akhiezer and I. M. Glazman, Theory of Linear Operators in Hilbert Space, Pitman, 1981.
Operator theory Linear operators
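The inequality Q(ξ, ξ) ≥ ‖ξ‖², which makes Q an inner product, can be illustrated in a finite-dimensional toy setting, where every symmetric operator is already self-adjoint and no genuine extension is needed. This Python/NumPy sketch (ours, purely illustrative) checks the bound for a random positive semidefinite matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

B = rng.standard_normal((4, 4))
T = B.T @ B                        # T = B^T B is positive semidefinite

def Q(x, y):
    # The form from the construction: Q(x, y) = <Tx, y> + <x, y>.
    return (T @ x) @ y + x @ y

x = rng.standard_normal(4)
print((T @ x) @ x >= -1e-12)       # True: T is non-negative
print(Q(x, x) >= x @ x - 1e-12)    # True: Q(x, x) >= ||x||^2
```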
Friedrichs extension
Mathematics
741
75,103,460
https://en.wikipedia.org/wiki/Corrosion%20Engineering%2C%20Science%20and%20Technology
Corrosion Engineering, Science and Technology (CEST) is a peer-reviewed scientific journal published by Taylor & Francis on behalf of IOM3, covering corrosion engineering, corrosion science, and corrosion control.
History
The journal was founded in 1965 as the British Corrosion Journal (BCJ). It was launched as a publication of the British Joint Corrosion Group, which represented the interests of a number of professional organisations, including the Institute of Metals (later known as the Metals Society and subsequently the Institute of Materials), to promote corrosion as an independent area of expertise. In this way, BCJ contrasted with existing journals in this field, namely Corrosion Science, which represented a more academic background. In 1979, the Metals Society established the annual Guy Bengough Medal and Prize, awarded to the best paper published in BCJ over the previous two years.
In 2001, the Institute of Materials (IoM) outsourced publication of 13 journals, including BCJ, to Maney Publishing. The next year, IoM merged into the Institute of Materials, Minerals, and Mining (IOM3). BCJ had initially sourced the majority of its papers from the United Kingdom and the rest of the Commonwealth, although it increasingly drew from more international sources over time. In 2003, the journal was renamed Corrosion Engineering, Science and Technology to reflect its international nature. In 2015, Maney was acquired by Taylor & Francis Group, which continues to publish CEST. The journal is currently edited by Stuart B. Lyon.
Abstracting and indexing
Corrosion Engineering, Science and Technology is abstracted and indexed in:
Chemical Abstracts Service
Science Citation Index Expanded
Essential Science Indicators
Inspec
Scopus
According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.8.
Notes
References
External links
Taylor & Francis academic journals Academic journals established in 1965 Materials science journals Hybrid open access journals
Corrosion Engineering, Science and Technology
Materials_science,Engineering
377
22,899,297
https://en.wikipedia.org/wiki/Rate%20of%20return%20on%20a%20portfolio
The rate of return on a portfolio is the ratio of the net gain or loss (which is the total of net income, foreign currency appreciation and capital gain, whether realized or not) which a portfolio generates, relative to the size of the portfolio. It is measured over a period of time, commonly a year.
Calculation
The rate of return on a portfolio can be calculated either directly or indirectly, depending on the particular type of data available.
Direct historical measurement
Direct historical measurement of the rate of return on a portfolio applies one of several alternative methods, such as for example the time-weighted return or the modified Dietz method. It requires knowledge of the value of the portfolio at the start and end of the period of time under measurement, together with the external flows of value into and out of the portfolio at various times within the time period. For the time-weighted method, it is also necessary to know the value of the portfolio when these flows occur (i.e. either immediately after, or immediately before).
Indirect calculation
The rate of return on a portfolio can be calculated indirectly as the weighted average rate of return on the various assets within the portfolio. The weights are proportional to the value of the assets within the portfolio, to take into account what portion of the portfolio each individual return represents in calculating the contribution of that asset to the return on the portfolio. This method is particularly useful for projecting into the future the rate of return on a portfolio, given projections of the rates of return on the constituents of the portfolio. The indirect calculation of the rate of return on a portfolio can be expressed by the formula
r = Σi Ai ri
which is the sum of the contributions Ai ri, where:
r equals the rate of return on the portfolio,
Ai equals the weight of asset i in the portfolio, and
ri equals the rate of return on asset i in the portfolio.
Example
Rate of return rm on a mining stock equals 10%
Rate of return rc on a child care centre equals 8%
Rate of return rf on a fishing company equals 12%
Now suppose that 40% of the portfolio is in the mining stock (weighting for this stock Am = 40%), 40% is in the child care centre (weighting for this stock Ac = 40%) and the remaining 20% is in the fishing company (weighting for this stock Af = 20%). To determine the rate of return on this portfolio, first calculate the contribution of each asset to the return on the portfolio, by multiplying the weighting of each asset by its rate of return, and then add these contributions together:
For the mining stock, its weighting is 40% and its rate of return is 10%, so its contribution equals 40% x 10% = .04 = 4%
For the child care centre, its weighting is 40% and its rate of return is 8%, so its contribution equals 40% x 8% = .032 = 3.2%
For the fishing company, its weighting is 20% and its rate of return is 12%, so its contribution equals 20% x 12% = .024 = 2.4%
Adding together these percentage contributions gives 4% + 3.2% + 2.4% = 9.6%, resulting in a rate of return on this portfolio of 9.6%.
Negative weights
The weight of a particular asset in a portfolio can be negative, as in the case of a liability such as a loan or a short position, inside a portfolio with positive overall value. In such a case, the contribution to the portfolio return will have the opposite sign to the return.
Example
A portfolio contains a cash account holding US$2,000 at the beginning of the period.
The same portfolio also contains a US$1,000 loan at the start of the period. The net value of the portfolio at the beginning of the period is 2,000 - 1,000 = US$1,000. At the end of the period, 1 percent interest has accrued on the cash account, and 5 percent has accrued on the loan. There have been no transactions over the period. The weight of the cash account in the portfolio is 200 percent, and the weight of the loan is -100 percent. The contribution from the cash account is therefore 2 × 1 percent, and the contribution from the loan is -1 × 5 percent. Although the loan liability has grown, so it has a positive return, its contribution is negative. The total portfolio return is 2 - 5 = -3 percent. Negative net assets In cases where the overall net value of the portfolio is greater than zero, then the weight of a liability within the portfolio, such as a borrowing or a short position, is negative. Conversely, in cases where the overall net asset value of the portfolio is less than zero, i.e. the liabilities outweigh the assets, the weights are turned on their heads, and the weights of the liabilities are positive, and the weights of the assets are negative. Example The owner of an investment portfolio borrows US$200,000 from the bank to invest in securities. The portfolio suffers losses, and the owner sells all its holdings. These trades, plus interest paid on the loan, leave US$100,000 cash. The net asset value of the portfolio is 100,000 - 200,000 = -100,000 USD. Going forward into the next period, the weight of the loan is -200,000/-100,000 = +200 percent, and the weight of the cash remaining is +100,000/-100,000 = -100 percent. Returns in the case of negative net assets If a portfolio has negative net assets, i.e. it is a net liability, then a positive return on the portfolio net assets indicates the growth of the net liability, i.e. a further loss. Example US$10,000 interest is accrued on a US$200,000 loan borrowed from a bank. The liability has grown 10,000/200,000 = 5 percent. The return is positive, even though the borrower has lost US$10,000, instead of gained. Contributions in the case of negative net assets A positive contribution to return on negative net assets indicates a loss. It will be associated either with a positive weight combined with a positive return, indicating a loss on a liability, or a negative weight combined with a negative return, indicating a loss on an asset. Discrepancies If there are any external flows or other transactions on the assets in the portfolio during the period of measurement, and also depending on the methodology used for calculating the returns and weights, discrepancies may arise between the direct measurement of the rate of return on a portfolio, and indirect measurement (described above). See also Investment management Modified Dietz method Profit (accounting) Return on capital Risk-adjusted return on capital Time-weighted return References Financial ratios Investment Mathematical finance
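Both worked examples above can be checked with a few lines of Python (our illustration of the indirect formula, not from the source):

```python
def portfolio_return(weights, returns):
    """Indirect calculation: the sum of the contributions A_i * r_i."""
    return sum(a * r for a, r in zip(weights, returns))

# Mining stock, child care centre, fishing company (weights sum to 100%):
r = portfolio_return([0.40, 0.40, 0.20], [0.10, 0.08, 0.12])
print(round(r, 4))    # 0.096, i.e. 9.6%

# Negative weight: US$2,000 cash at 1% and a US$1,000 loan at 5%
# in a portfolio with net value US$1,000:
r = portfolio_return([2000 / 1000, -1000 / 1000], [0.01, 0.05])
print(round(r, 4))    # -0.03, i.e. -3 percent
```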
Rate of return on a portfolio
Mathematics
1,396
375,503
https://en.wikipedia.org/wiki/Frieze%20group
In mathematics, a frieze or frieze pattern is a two-dimensional design that repeats in one direction. The term is derived from architecture and decorative arts, where such repeating patterns are often used. (See frieze.) Frieze patterns can be classified into seven types according to their symmetries. The set of symmetries of a frieze pattern is called a frieze group.
Frieze groups are two-dimensional line groups, having repetition in only one direction. They are related to the more complex wallpaper groups, which classify patterns that are repetitive in two directions, and crystallographic groups, which classify patterns that are repetitive in three directions.
General
Formally, a frieze group is a class of infinite discrete symmetry groups of patterns on a strip (infinitely wide rectangle), hence a class of groups of isometries of the plane, or of a strip. A symmetry group of a frieze group necessarily contains translations and may contain glide reflections, reflections along the long axis of the strip, reflections along the narrow axis of the strip, and 180° rotations. There are seven frieze groups, listed in the summary table. Many authors present the frieze groups in a different order.
The actual symmetry groups within a frieze group are characterized by the smallest translation distance, and, for the frieze groups with vertical line reflection or 180° rotation (groups 2, 5, 6, and 7), by a shift parameter locating the reflection axis or point of rotation. In the case of symmetry groups in the plane, additional parameters are the direction of the translation vector, and, for the frieze groups with horizontal line reflection, glide reflection, or 180° rotation (groups 3–7), the position of the reflection axis or rotation point in the direction perpendicular to the translation vector. Thus there are two degrees of freedom for group 1, three for groups 2, 3, and 4, and four for groups 5, 6, and 7.
For two of the seven frieze groups (groups 1 and 4) the symmetry groups are singly generated, for four (groups 2, 3, 5, and 6) they have a pair of generators, and for group 7 the symmetry groups require three generators. A symmetry group in frieze group 1, 2, 3, or 5 is a subgroup of a symmetry group in the last frieze group with the same translational distance. A symmetry group in frieze group 4 or 6 is a subgroup of a symmetry group in the last frieze group with half the translational distance. This last frieze group contains the symmetry groups of the simplest periodic patterns in the strip (or the plane), a row of dots. Any transformation of the plane leaving this pattern invariant can be decomposed into a translation, (x, y) ↦ (x + n, y) for some integer n (taking the spacing between dots as the unit), optionally followed by a reflection in either the horizontal axis, (x, y) ↦ (x, −y), or the vertical axis, (x, y) ↦ (−x, y), provided that this axis is chosen through or midway between two dots, or a rotation by 180°, (x, y) ↦ (−x, −y) (ditto).
The inclusion of the discrete condition is to exclude the group containing all translations, and groups containing arbitrarily small translations (e.g. the group of horizontal translations by rational distances). Even apart from scaling and shifting, there are infinitely many cases, e.g. by considering rational numbers whose denominators are powers of a given prime number.
The inclusion of the infinite condition is to exclude groups that have no translations:
the group with the identity only (isomorphic to C1, the trivial group of order 1).
the group consisting of the identity and reflection in the horizontal axis (isomorphic to C2, the cyclic group of order 2).
the groups each consisting of the identity and reflection in a vertical axis (ditto)
the groups each consisting of the identity and 180° rotation about a point on the horizontal axis (ditto)
the groups each consisting of the identity, reflection in a vertical axis, reflection in the horizontal axis, and 180° rotation about the point of intersection (isomorphic to the Klein four-group)
Descriptions of the seven frieze groups
There are seven distinct subgroups (up to scaling and shifting of patterns) in the discrete frieze group generated by a translation, reflection (along the same axis) and a 180° rotation. Each of these subgroups is the symmetry group of a frieze pattern, and sample patterns are shown in Fig. 1. The seven different groups correspond to the 7 infinite series of axial point groups in three dimensions, with n = ∞. They are identified in the table below using Hermann–Mauguin notation, Coxeter notation, Schönflies notation, orbifold notation, nicknames created by mathematician John H. Conway, and finally a description in terms of translation, reflections and rotations.
Of the seven frieze groups, there are only four up to isomorphism. Two are singly generated and isomorphic to Z; four of them are doubly generated, among which one is abelian and three are nonabelian and isomorphic to Dih∞, the infinite dihedral group; and one of them has three generators.
Lattice types: Oblique and rectangular
The groups can be classified by their type of two-dimensional grid or lattice. The lattice being oblique means that the second direction need not be orthogonal to the direction of repeat.
See also
Symmetry groups in one dimension
Line group
Rod group
Wallpaper group
Space group
Web demo and software
There exist software graphic tools that create 2D patterns using frieze groups. Usually, the entire pattern is updated automatically in response to edits of the original strip.
EscherSketch – a free online program for drawing, saving, and exporting tessellations; supports all wallpaper groups.
Kali – a free and open source software application for wallpaper, frieze and other patterns.
Kali – free downloadable Kali for Windows and Mac Classic.
Tess – a nagware tessellation program for multiple platforms; supports all wallpaper, frieze, and rosette groups, as well as Heesch tilings.
FriezingWorkz – a freeware HyperCard stack for the Classic Mac platform that supports all frieze groups.
References
External links
Frieze Patterns at cut-the-knot
Illuminations: Frieze Patterns
Euclidean symmetries Discrete groups Patterns
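To make the generators concrete, here is a small Python sketch (ours; the coordinates and the translation distance a are arbitrary choices) implementing the four kinds of isometries named in the General section and checking two standard relations: the two reflections compose to the 180° rotation, and a glide reflection squares to a pure translation.

```python
def translation(a):
    return lambda p: (p[0] + a, p[1])

def reflect_horizontal(p):   # reflection along the long axis of the strip
    return (p[0], -p[1])

def reflect_vertical(p):     # reflection along the narrow axis, here x = 0
    return (-p[0], p[1])

def rotate_180(p):           # 180-degree rotation about the origin
    return (-p[0], -p[1])

a = 0.5                      # smallest translation distance (arbitrary)
glide = lambda p: reflect_horizontal(translation(a)(p))   # glide reflection

p = (0.25, 0.5)
print(reflect_vertical(reflect_horizontal(p)) == rotate_180(p))  # True
print(glide(glide(p)) == translation(2 * a)(p))                  # True
```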
Frieze group
Physics,Mathematics
1,325
540,904
https://en.wikipedia.org/wiki/Category%20of%20preordered%20sets
In mathematics, the category Ord has preordered sets as objects and order-preserving functions as morphisms. This is a category because the composition of two order-preserving functions is order-preserving and the identity map is order-preserving. (A category of preordered groups, denoted OrdGrp, can be defined analogously, although it presents a more complex picture.)
The monomorphisms in Ord are the injective order-preserving functions.
The empty set (considered as a preordered set) is the initial object of Ord, and the terminal objects are precisely the singleton preordered sets. There are thus no zero objects in Ord.
The categorical product in Ord is given by the product order on the cartesian product.
We have a forgetful functor Ord → Set that assigns to each preordered set the underlying set, and to each order-preserving function the underlying function. This functor is faithful, and therefore Ord is a concrete category. This functor has a left adjoint (sending every set to that set equipped with the equality relation) and a right adjoint (sending every set to that set equipped with the total relation).
2-category structure
The set of morphisms (order-preserving functions) between two preorders actually has more structure than that of a set. It can be made into a preordered set itself by the pointwise relation:
(f ≤ g) ⇔ (∀x f(x) ≤ g(x))
This preordered set can in turn be considered as a category, which makes Ord a 2-category (the additional axioms of a 2-category trivially hold because any equation of parallel morphisms is true in a posetal category).
With this 2-category structure, a pseudofunctor F from a category C to Ord is given by the same data as a 2-functor, but has the relaxed properties:
∀x ∈ F(A), F(idA)(x) ≃ x,
∀x ∈ F(A), F(g∘f)(x) ≃ F(g)(F(f)(x)),
where x ≃ y means x ≤ y and y ≤ x.
See also
FinOrd
Simplex category
References
Preordered sets
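The two definitions above (order-preserving maps as morphisms, and the pointwise preorder on hom-sets) can be checked mechanically on a small example. The following Python sketch (ours, purely illustrative; maps are encoded as lists of images) does so for the chain 0 ≤ 1 ≤ 2 ≤ 3:

```python
import itertools

X = range(4)   # the chain 0 <= 1 <= 2 <= 3 as a preordered set

def monotone(f):
    """Is f : X -> X order-preserving? (f encoded as a list of images)"""
    return all(f[x] <= f[y] for x, y in itertools.combinations(X, 2))

def pointwise_leq(f, g):
    """The hom-preorder of the 2-category: f <= g iff f(x) <= g(x) for all x."""
    return all(f[x] <= g[x] for x in X)

f = [0, 0, 1, 2]
g = [1, 1, 2, 3]
print(monotone(f), monotone(g))   # True True: both are morphisms of Ord
print(pointwise_leq(f, g))        # True
print(pointwise_leq(g, f))        # False
```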
Category of preordered sets
Mathematics
488
22,309,585
https://en.wikipedia.org/wiki/Names%20for%20the%20number%200
There are several names for the number 0 in different languages. References 0 (number) Integers 0
Names for the number 0
Mathematics
21
5,574,761
https://en.wikipedia.org/wiki/Microsoft%20Interface%20Definition%20Language
Microsoft Interface Definition Language (MIDL) is a text-based interface description language from Microsoft, based on the DCE/RPC IDL, which it extends for use with the Microsoft Component Object Model. Its compiler is also called MIDL.
Version history
MIDL 1.0 is a standard DCE/RPC IDL with enhancements made for defining COM coclasses and interfaces.
MIDL 2.0 (also known as MIDLRT) is an updated version of the syntax, developed in-house by Microsoft for use on the Windows platform, that allows declaring Windows Runtime APIs. Various built-in Windows Runtime APIs are written in MIDL 2.0 syntax and are available in the Windows SDK folder.
The most recent version of MIDL is MIDL 3.0, released on December 30, 2021. Version 3.0 is a more streamlined version of MIDL 2.0, using a more modern and simplified syntax familiar to users of C, C++, C#, or Java. MIDL 3.0 is also more concise than the previous versions: by relying on reasonable built-in defaults for attributes, programs can be reduced by almost two thirds in length.
References
stevewhims (2021-10-21). "Microsoft Interface Definition Language - Win32 apps". learn.microsoft.com. Retrieved 2024-10-29.
stevewhims (2022-07-12). "Introduction to Microsoft Interface Definition Language 3.0 - Windows UWP applications". learn.microsoft.com. Retrieved 2024-10-29.
stevewhims (2021-12-30). "Microsoft Interface Definition Language 3.0 reference - Windows UWP applications". learn.microsoft.com. Retrieved 2024-10-29.
See also
Object Description Language
External links
Microsoft Docs reference
Interface Definition Language Component-based software engineering Microsoft application programming interfaces Object-oriented programming Object models
Microsoft Interface Definition Language
Technology
406
24,261,777
https://en.wikipedia.org/wiki/Bu%20Tinah
Bu Tinah (Arabic: Būṭīnah) is a tiny archipelago amid extensive coral formations and seagrass beds, some 25 km south of Zirku and 35 km north of Marawah, in the United Arab Emirates. Found in the waters of Abu Dhabi, it is protected as a private nature reserve. Bu Tinah Island, rich in biodiversity, lies within the Marawah Marine Biosphere Reserve, with a territory of more than 4,000 km2. The biosphere reserve is the region's first and largest UNESCO-designated marine biosphere reserve, and has been a recognized UNESCO site since 2001. The island is closed to visitors, and fishing and the collection of turtle eggs are prohibited, with the ban enforced by patrols. An Environment Agency-Abu Dhabi ranger station is located on the island.
Archipelago
Bu Tinah is a cluster of islands and shoals, joined or almost so at low water, with nowhere greater than two or three metres above sea level. The main island has a sheltered lagoon opening to the south, with the low-energy environment permitting stands of mature mangrove to flourish. Birds such as the Socotra cormorant are found here. There are also healthy coral reef habitats, with as many as 16 species of coral recorded in the area. The reefs survive in conditions that would kill coral species in other parts of the world: the waters of the Persian Gulf are among the most saline in the world, as well as among the warmest. Corals typically live in water that is between 23 °C and 28 °C, but in the UAE water temperatures go as high as 35 °C in summer. Bu Tinah Island was one of the 28 official finalists for the "New 7 Wonders of Nature".
Flora and fauna
Bu Tinah's thriving habitat is a unique living laboratory, with key significance for climate change research. This distinctive natural habitat, with its shallow waters, seagrass beds and tall mangroves, set amid extensive coral reefs, hosts rare and globally endangered marine life. Seabirds such as the flamingo and the osprey, diverse species of dolphins, including the endangered Indian Ocean humpback dolphin, and the rare hawksbill turtle are found in Bu Tinah. The island's waters are also home to the planet's second-largest population of dugong, a large marine mammal that is globally threatened. Some 600 out of the estimated 3,000 dugongs in the country live in the waters around Bu Tinah, and the creatures are listed as a species vulnerable to extinction by the IUCN. This precious natural resource is part of the largest protected area in Abu Dhabi. Its significant coral community, and the health of its habitats and species despite high temperature and salinity levels, make the island of keen scientific interest.
Green turtles
In 2018, a team of Environment Agency - Abu Dhabi and Emirates Nature - WWF marine scientists satellite-tagged a number of green turtles at Bu Tinah. In a first for science in the region, three of the turtles (two named Wisdom and Respect after the values of Sheikh Zayed bin Sultan Al Nahyan, and one called Yas Mall after a sponsor) were mapped swimming over 6,000 km over the course of 7 months, travelling to Oman to nest and back to Bu Tinah. The study shows previously unknown linkages between green turtles in the United Arab Emirates and Oman. Bu Tinah features in the Environment Agency - Abu Dhabi documentary Wild Abu Dhabi: The Turtles of Al Dhafra, which won a finalist award at the New York Festivals TV and Film Awards in the Nature & Wildlife documentary category.
Important Bird Area The archipelago has been designated an Important Bird Area (IBA) by BirdLife International because it supports breeding Socotra cormorants and wintering Siberian sand plovers. References External links Butinah.ae - official Bu Tinah website by Environment Agency of Abu Dhabi Visitabudabi.ae - general info about Bu Tinah Shoals Archipelagoes of the United Arab Emirates Biosphere reserves of the United Arab Emirates Mangroves Coral reefs Important Bird Areas of Persian Gulf islands Important Bird Areas of the United Arab Emirates Nature reserves in the United Arab Emirates Lagoons of the United Arab Emirates
Bu Tinah
Biology
833
37,838,550
https://en.wikipedia.org/wiki/2%2C6-Diacetylpyridine
2,6-Diacetylpyridine is an organic compound with the formula C5H3N(C(O)CH3)2. It is a white solid that is soluble in organic solvents. It is a disubstituted pyridine and a precursor to ligands in coordination chemistry.
Synthesis
The synthesis of 2,6-diacetylpyridine begins with oxidation of the methyl groups in 2,6-lutidine to form dipicolinic acid; this process is well established with potassium permanganate and selenium dioxide. The diketone can then be formed from the diester of dipicolinic acid through a Claisen condensation, and the resulting adduct can be decarboxylated to give diacetylpyridine. Treating 2,6-pyridinedicarbonitrile with methylmagnesium bromide provides an alternative synthesis of the diketone.
Precursor to Schiff base ligands
Diacetylpyridine is a popular starting material for ligands in coordination chemistry, often via template reactions. The diiminopyridine (DIP) class of ligands can be formed from diacetylpyridine through Schiff base condensation with substituted anilines. Diiminopyridine ligands have been the focus of much interest due to their ability to traverse a wide range of oxidation states. In azamacrocycle chemistry, diacetylpyridines can undergo the same Schiff base condensation with N1-(3-aminopropyl)propane-1,3-diamines; the product of the condensation can be hydrogenated to yield macrocyclic tetradentate ligands. Similar penta- and hexadentate ligands have been synthesized by varying the polyamine chain.
See also
2,6-Diformylpyridine
References
Pyridines Ligands Acetyl compounds
2,6-Diacetylpyridine
Chemistry
421
1,444,067
https://en.wikipedia.org/wiki/IO.SYS
IO.SYS is an essential part of MS-DOS and Windows 9x. It contains the default MS-DOS device drivers (hardware interfacing routines) and the DOS initialization program.
Boot sequence
In the PC bootup sequence, the first sector of the boot disk is loaded into memory and executed. If this is the DOS boot sector, it loads the first three sectors of IO.SYS into memory and transfers control to it. IO.SYS then:
Loads the rest of itself into memory.
Initializes each default device driver in turn (console, disk, serial port, etc.). At this point, the default devices are available.
Loads the DOS kernel and calls its initialization routine. The kernel is stored in MSDOS.SYS with MS-DOS and in IO.SYS itself with Windows 9x. At this point, "normal" file access is available.
Processes the MSDOS.SYS configuration file with Windows 9x.
Processes the CONFIG.SYS file, in MS-DOS 2.0 and higher and Windows 9x.
Loads COMMAND.COM (or another operating system shell if specified).
Displays the bootsplash in Windows 9x. If LOGO.SYS is present, it is used as the bootsplash; otherwise, the bootsplash embedded in IO.SYS is used.
The filename IO.SYS was also used by DCP, an MS-DOS derivative by the former East German VEB Robotron. IBM PC DOS and DR DOS use the file IBMBIO.COM for the same purpose; it in turn loads IBMDOS.COM.
In Windows 9x, IO.SYS not only contains the DOS BIOS, but also holds the DOS kernel, which previously resided in MSDOS.SYS. Under some conditions, Windows 9x uses the alternative filename WINBOOT.SYS instead. When Windows 9x is installed over a preexisting DOS install, the Windows IO.SYS file may be temporarily named WINBOOT.SYS for as long as Windows' dual-boot feature has booted the previous OS. Likewise, the IO.SYS of the older system is named IO.DOS for as long as Windows 9x is active. DR-DOS 7.06 (only this version) also follows this scheme and the IO.SYS filename in order to become bootable via MS-DOS boot sectors. Similarly, FreeDOS uses a combined system file as well, but names it KERNEL.SYS.
Disk layout requirements
The first two entries of the root directory must be allocated by IO.SYS and MSDOS.SYS, in that order. IO.SYS must be the first file stored in the FAT directory table for files. The files IO.SYS and MSDOS.SYS must be contiguous. However, MS-DOS version 3.3 allows sector 4 and higher of IO.SYS to be fragmented; version 5.0 allows the first 3 sectors of IO.SYS to be allocated anywhere (as long as they are contiguous). MSDOS.SYS can be treated like any ordinary file.
See also
MSDOS.SYS
IBMBIO.COM
DRBIOS.SYS
COMMAND.COM
List of DOS system files
Hardware abstraction layer (HAL)
Remote Program Load
Architecture of Windows 9x
Notes
References
DOS files
IO.SYS
Technology
563
12,800,909
https://en.wikipedia.org/wiki/Gross%20merchandise%20volume
Gross merchandise volume (alternatively gross merchandise value or GMV) is a term used in online retailing to indicate the total sales monetary value (e.g. in U.S. dollars or euros) of merchandise sold through a particular marketplace over a certain time frame. GMV includes any fees or other deductions, which a seller might calculate separately. Site revenue comes from fees and is different from the monetary value of the items sold. For e-commerce retail companies, GMV means the average sale price per item charged to the customer multiplied by the number of items sold. For example, if a company sells 10 books at $100, the GMV is $1,000. This is also considered "gross revenue". In this case, the business model is based on a retail model, where the company purchases the items, maintains inventory (if need be) and finally sells or delivers the items to customers. GMV does not reflect net sales, since it does not account for the costs involved or for returned products.
See also
Contribution margin
References
Online retailers Revenue E-commerce
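The definition amounts to a one-line computation; the following Python sketch (ours, with a hypothetical 5% marketplace fee purely for illustration) reproduces the book example and contrasts GMV with fee-based site revenue:

```python
def gmv(items):
    """Gross merchandise volume: total sales value of the items sold.
    items is a list of (sale_price, quantity) pairs."""
    return sum(price * qty for price, qty in items)

sales = [(100, 10)]          # ten books at $100 each
print(gmv(sales))            # 1000: the $1,000 GMV from the example
print(0.05 * gmv(sales))     # 50: site revenue under a hypothetical 5% fee
```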
Gross merchandise volume
Technology
221
449,166
https://en.wikipedia.org/wiki/Cyclic%20permutation
In mathematics, and in particular in group theory, a cyclic permutation is a permutation consisting of a single cycle. In some cases, cyclic permutations are referred to as cycles; if a cyclic permutation has k elements, it may be called a k-cycle. Some authors widen this definition to include permutations with fixed points in addition to at most one non-trivial cycle.
In cycle notation, cyclic permutations are denoted by the list of their elements enclosed with parentheses, in the order to which they are permuted. For example, the permutation (1 3 2 4) that sends 1 to 3, 3 to 2, 2 to 4 and 4 to 1 is a 4-cycle, and the permutation (1 3 2)(4) that sends 1 to 3, 3 to 2, 2 to 1 and 4 to 4 is considered a 3-cycle by some authors. On the other hand, the permutation (1 3)(2 4) that sends 1 to 3, 3 to 1, 2 to 4 and 4 to 2 is not a cyclic permutation because it separately permutes the pairs {1, 3} and {2, 4}.
For the wider definition of a cyclic permutation, allowing fixed points, these fixed points each constitute trivial orbits of the permutation, and there is a single non-trivial orbit containing all the remaining points. This can be used as a definition: a cyclic permutation (allowing fixed points) is a permutation that has a single non-trivial orbit. Every permutation on finitely many elements can be decomposed into cyclic permutations whose non-trivial orbits are disjoint.
The individual cyclic parts of a permutation are also called cycles; thus the second example is composed of a 3-cycle and a 1-cycle (or fixed point) and the third is composed of two 2-cycles.
Definition
There is not widespread consensus about the precise definition of a cyclic permutation. Some authors define a permutation σ of a set X to be cyclic if "successive application would take each object of the permuted set successively through the positions of all the other objects", or, equivalently, if its representation in cycle notation consists of a single cycle. Others provide a more permissive definition which allows fixed points.
A nonempty subset S of X is a cycle of σ if the restriction of σ to S is a cyclic permutation of S. If X is finite, its cycles are disjoint, and their union is X. That is, they form a partition, called the cycle decomposition of σ. So, according to the more permissive definition, a permutation σ of X is cyclic if and only if X is its unique cycle.
For example, a permutation of eight elements consisting of one 6-cycle and two 1-cycles can be written in cycle notation and in two-line notation (in two ways). Some authors consider such a permutation cyclic while others do not.
With the enlarged definition, there are cyclic permutations that do not consist of a single cycle.
More formally, for the enlarged definition, a permutation σ of a set X, viewed as a bijective function σ : X → X, is called a cycle if the action on X of the subgroup generated by σ has at most one orbit with more than a single element. This notion is most commonly used when X is a finite set; then the largest orbit, S, is also finite. Let s0 be any element of S, and put si = σ^i(s0) for any integer i. If S is finite, there is a minimal number k ≥ 1 for which sk = s0. Then S = {s0, s1, ..., sk−1}, and σ is the permutation defined by σ(si) = si+1 for 0 ≤ i < k and σ(x) = x for any element x of X ∖ S. The elements not fixed by σ can be pictured as s0 ↦ s1 ↦ s2 ↦ ⋯ ↦ sk−1 ↦ s0.
A cyclic permutation can be written using the compact cycle notation σ = (s0 s1 ... sk−1) (there are no commas between elements in this notation, to avoid confusion with a k-tuple). The length of a cycle is the number of elements of its largest orbit. A cycle of length k is also called a k-cycle.
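Cycle decomposition as described above is straightforward to compute by following each element until it returns to its starting point. A short Python sketch (ours; permutations are encoded 0-based as lists of images) reproduces the examples from the opening paragraph:

```python
def cycles(perm):
    """Cycle decomposition of a permutation of {0, ..., n-1},
    given as a list of images."""
    seen, result = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        result.append(tuple(cycle))
    return result

# 0-based version of (1 3 2 4): a single 4-cycle.
print(cycles([2, 3, 1, 0]))   # [(0, 2, 1, 3)]
# 0-based version of (1 3 2)(4): a 3-cycle and a fixed point.
print(cycles([2, 0, 1, 3]))   # [(0, 2, 1), (3,)]
```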
The orbit of a 1-cycle is called a fixed point of the permutation, but as a permutation every 1-cycle is the identity permutation. When cycle notation is used, the 1-cycles are often omitted when no confusion will result.
Basic properties
One of the basic results on symmetric groups is that any permutation can be expressed as the product of disjoint cycles (more precisely: cycles with disjoint orbits); such cycles commute with each other, and the expression of the permutation is unique up to the order of the cycles. The multiset of lengths of the cycles in this expression (the cycle type) is therefore uniquely determined by the permutation, and both the signature and the conjugacy class of the permutation in the symmetric group are determined by it.
The number of k-cycles in the symmetric group Sn is given, for 2 ≤ k ≤ n, by the following equivalent formulas: C(n, k)·(k − 1)! = n(n − 1)⋯(n − k + 1)/k = n!/((n − k)!·k).
A k-cycle has signature (−1)^(k − 1).
The inverse of a cycle σ = (s0 s1 ... sk−1) is given by reversing the order of the entries: σ^(−1) = (sk−1 ... s1 s0). In particular, since reversing a two-element cycle gives the same cycle, every two-cycle is its own inverse. Since disjoint cycles commute, the inverse of a product of disjoint cycles is the result of reversing each of the cycles separately.
Transpositions
A cycle with only two elements is called a transposition. For example, the permutation that swaps 2 and 4 is a transposition; since it is a 2-cycle, it can be written as (2 4).
Properties
Any permutation can be expressed as the composition (product) of transpositions—formally, they are generators for the group. In fact, when the set being permuted is {1, 2, ..., n} for some integer n, then any permutation can be expressed as a product of the adjacent transpositions (1 2), (2 3), (3 4), and so on. This follows because an arbitrary transposition can be expressed as the product of adjacent transpositions. Concretely, one can express the transposition (k l), where k < l, by moving k to l one step at a time, then moving l back to where k was, which interchanges these two and makes no other changes: (k l) = (k k+1)·(k+1 k+2)⋯(l−1 l)·(l−2 l−1)⋯(k k+1).
The decomposition of a permutation into a product of transpositions is obtained for example by writing the permutation as a product of disjoint cycles, and then splitting iteratively each of the cycles of length 3 and longer into a product of a transposition and a cycle of length one less: (a b c d ... y z) = (a b)·(b c d ... y z). This means the initial request is to move a to b, b to c, and so on, and finally z to a. Instead one may roll the elements keeping a where it is by executing the right factor first (as usual in operator notation, and following the convention in the article Permutation). This has moved z to the position of b, so after the first permutation, the elements a and z are not yet at their final positions. The transposition (a b), executed thereafter, then addresses z by the index of b to swap what initially were a and z.
In fact, the symmetric group is a Coxeter group, meaning that it is generated by elements of order 2 (the adjacent transpositions), and all relations are of a certain form. One of the main results on symmetric groups states that either all of the decompositions of a given permutation into transpositions have an even number of transpositions, or they all have an odd number of transpositions. This permits the parity of a permutation to be a well-defined concept.
See also
Cycle sort – a sorting algorithm that is based on the idea that the permutation to be sorted can be factored into cycles, which can individually be rotated to give a sorted result
Cycles and fixed points
Cyclic permutation of integer
Cycle notation
Circular permutation in proteins
Fisher–Yates shuffle
Notes
References
Sources
Anderson, Marlow and Feil, Todd (2005), A First Course in Abstract Algebra, Chapman & Hall/CRC; 2nd edition.
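The counting formula and the decomposition into adjacent transpositions can both be verified by brute force for small n. The following Python sketch (ours, illustrative) checks that S5 contains C(5, 3)·2! = 20 three-cycles, and that (1 2)(2 3)(1 2) = (1 3), with the right factor applied first as in the text:

```python
from itertools import permutations
from math import comb, factorial

def cycle_lengths(p):
    """Sorted cycle lengths of a permutation given as a tuple of images."""
    seen, lengths = set(), []
    for s in range(len(p)):
        if s in seen:
            continue
        n, x = 0, s
        while x not in seen:
            seen.add(x)
            x = p[x]
            n += 1
        lengths.append(n)
    return sorted(lengths)

n, k = 5, 3
formula = comb(n, k) * factorial(k - 1)           # C(n, k) * (k - 1)!
brute = sum(1 for p in permutations(range(n))
            if cycle_lengths(p) == [1] * (n - k) + [k])
print(formula, brute)                             # 20 20

def compose(p, q):
    """(p o q)(x) = p(q(x)): the right factor q is applied first."""
    return tuple(p[q[x]] for x in range(len(p)))

t12, t23 = (1, 0, 2), (0, 2, 1)                   # 0-based (1 2) and (2 3)
print(compose(t12, compose(t23, t12)))            # (2, 1, 0): the transposition (1 3)
```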
External links
Cycle Notation of Permutations – a video explaining cyclic decomposition.
Permutations
Cyclic permutation
Mathematics
1,642
699,772
https://en.wikipedia.org/wiki/Star%20number
In mathematics, a star number is a centered figurate number, a centered hexagram (six-pointed star), such as the Star of David, or the board Chinese checkers is played on. The nth star number is given by the formula Sn = 6n(n − 1) + 1. The first 45 star numbers are 1, 13, 37, 73, 121, 181, 253, 337, 433, 541, 661, 793, 937, 1093, 1261, 1441, 1633, 1837, 2053, 2281, 2521, 2773, 3037, 3313, 3601, 3901, 4213, 4537, 4873, 5221, 5581, 5953, 6337, 6733, 7141, 7561, 7993, 8437, 8893, 9361, 9841, 10333, 10837, 11353, and 11881.
The digital root of a star number is always 1 or 4, and progresses in the sequence 1, 4, 1. The last two digits of a star number in base 10 are always 01, 13, 21, 33, 37, 41, 53, 61, 73, 81, or 93. Unique among the star numbers is 35113, since its prime factors (i.e., 13, 37 and 73) are also consecutive star numbers.
Relationships to other kinds of numbers
Geometrically, the nth star number is made up of a central point and 12 copies of the (n−1)th triangular number — making it numerically equal to the nth centered dodecagonal number, but differently arranged. As such, the formula for the nth star number can be written as Sn = 1 + 12Tn−1, where Tn = n(n + 1)/2 is the nth triangular number.
Infinitely many star numbers are also triangular numbers, the first four being S1 = 1 = T1, S7 = 253 = T22, S91 = 49141 = T313, and S1261 = 9533161 = T4366.
Infinitely many star numbers are also square numbers, the first four being S1 = 1 = 1^2, S5 = 121 = 11^2, S45 = 11881 = 109^2, and S441 = 1164241 = 1079^2, giving the sequence of square stars.
A star prime is a star number that is prime. The first few star primes are 13, 37, 73, 181, 337, 433, 541, 661, 937. A superstar prime is a star prime whose prime index is also a star number. The first two such numbers are 661 and 1750255921. A reverse superstar prime is a star number whose index is a star prime. The first few such numbers are 937, 7993, 31537, 195481, 679393, 1122337, 1752841, 2617561, 5262193.
The term "star number" or "stellate number" is occasionally used to refer to octagonal numbers.
Other properties
The harmonic series of unit fractions with the star numbers as denominators converges; so does the alternating series of unit fractions with the star numbers as denominators.
See also
Centered hexagonal number
References
Figurate numbers
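The closed formula and the properties listed above are easy to check computationally. This Python sketch (ours) regenerates the start of the sequence, the 1, 4, 1 digital-root pattern, the square star S45 = 109^2, and the factorization 35113 = 13 × 37 × 73:

```python
def star(n):
    """The n-th star number, S_n = 6n(n - 1) + 1."""
    return 6 * n * (n - 1) + 1

def digital_root(m):
    return 1 + (m - 1) % 9

stars = [star(n) for n in range(1, 46)]
print(stars[:6])                             # [1, 13, 37, 73, 121, 181]
print([digital_root(s) for s in stars[:9]])  # [1, 4, 1, 1, 4, 1, 1, 4, 1]
print(star(45) == 109 ** 2)                  # True: a square star number
print(star(2) * star(3) * star(4))           # 35113 = 13 * 37 * 73
```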
Star number
Mathematics
700
7,074,871
https://en.wikipedia.org/wiki/Lighting%20ratio
Lighting ratio in photography refers to the comparison of key light (the main source of light from which shadows fall) to the total fill light (the light that fills in the shadow areas). The higher the lighting ratio, the higher the contrast of the image; the lower the ratio, the lower the contrast. The lighting ratio is the ratio of the light levels on the brightest-lit to the least-lit parts of the subject; the brightest-lit areas are lit by both key (K) and fill (F). The American Society of Cinematographers (ASC) defines lighting ratio as (key+fill):fill, or (key+Σfill):Σfill, where Σfill is the sum of all fill lights.
Light can be measured in footcandles. A key light of 200 footcandles and a fill light of 100 footcandles have a 3:1 ratio (a ratio of three to one): (200 + 100):100. A key light of 800 footcandles and a fill light of 200 footcandles have a 5:1 ratio according to the lighting ratio formula: (800 + 200):200 = 1000 / 200 = 5:1.
The ratio can also be determined in relation to f-stops, since each one-stop increase doubles the amount of light: 2 raised to the power of the difference in f-stops between key and fill equals the first factor of the ratio. For example, a difference of two f-stops between key and fill is 2 squared, or a 4:1 ratio. A difference of 3 stops is 2 cubed, or an 8:1 ratio. No difference is equal to 2 to the power of 0, for a 1:1 ratio.
See also
High-key lighting
Low-key lighting
Silhouette
References
Science of photography Engineering ratios
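Both ways of computing the ratio reduce to one-line formulas, as in this Python sketch (ours, illustrative):

```python
def lighting_ratio(key, fill):
    """ASC definition: (key + fill) : fill, with light levels in footcandles."""
    return (key + fill) / fill

print(lighting_ratio(200, 100))   # 3.0, i.e. a 3:1 ratio
print(lighting_ratio(800, 200))   # 5.0, i.e. a 5:1 ratio

def key_to_fill(stop_difference):
    """Each f-stop doubles the light, so the key:fill factor is 2 ** stops."""
    return 2 ** stop_difference

print(key_to_fill(2), key_to_fill(3), key_to_fill(0))   # 4 8 1
```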
Lighting ratio
Mathematics,Engineering
373
3,305,892
https://en.wikipedia.org/wiki/CrypTool
CrypTool is an open-source project that provides free e-learning software for illustrating cryptographic and cryptanalytic concepts.
History
The development of CrypTool started in 1998. Originally developed by German companies and universities, it has been an open-source project since 2001.
Currently, 4 versions of CrypTool are maintained and developed: the CrypTool 1 (CT1) software is available in 6 languages (English, German, Polish, Spanish, Serbian, and French), while CrypTool 2 (CT2), JCrypTool (JCT), and CrypTool-Online (CTO) are available in English and German.
The goal of the CrypTool project is to make users aware of how cryptography can help against network security threats and to explain the underlying concepts of cryptology.
CrypTool 1 (CT1) is written in C++ and designed for the Microsoft Windows operating system. In 2007, development began on two additional projects, both based on a pure-plugin architecture, to serve as successors to the original CrypTool program. Both successors regularly publish new stable versions:
CrypTool 2 (built with C#/.NET/WPF; abbreviated CT2) uses the concept of visual programming to clarify cryptographic processes. Currently, CT2 contains more than 150 crypto functions.
JCrypTool 1.0 (built with Java/Eclipse/RCP/SWT; abbreviated JCT) runs on Windows, macOS, and Linux, and offers both a document-centric and a function-centric perspective. Currently, JCT contains more than 100 crypto functions. One of its focal points is modern digital signatures (like Merkle trees and SPHINCS).
The CrypTool project is now being developed at the research institute CODE at the Bundeswehr University Munich. CrypTool is used in schools, universities, companies and agencies for education and awareness training.
Merger with CrypTools
In early 2020, the CrypTool project decided to merge with a similar project of the same name, CrypTools, founded in 2017 in Australia by Luka Lafaye de Micheaux, Arthur Guiot, and Lucas Gruwez. CrypTool, being much older and better known, thus completely "absorbs" the project under its name.
See also
Asymmetric key algorithm
Topics in cryptography
Cryptosystem
References
External links
CrypTool-Online
International Cipher Contest "MysteryTwister" (MTC3)
presentation-en.pdf – Presentation about the CrypTool-1 program with more than 100 slides and many screenshots
1998 software Free educational software Cryptographic software Free software programmed in C++ Free software programmed in Java (programming language) Free software programmed in C Sharp Windows-only free software Cryptography contests Cryptologic education
CrypTool
Mathematics
565
11,127,998
https://en.wikipedia.org/wiki/Linochora%20graminis
Linochora graminis is a fungal plant pathogen. References External links Index Fungorum USDA ARS Fungal Database Phyllachorales Fungal plant pathogens and diseases Fungus species
Linochora graminis
Biology
40
8,887
https://en.wikipedia.org/wiki/Direct%20product
In mathematics, one can often define a direct product of objects already known, giving a new one. This induces a structure on the Cartesian product of the underlying sets from that of the contributing objects. More abstractly, one talks about the product in category theory, which formalizes these notions. Examples are the product of sets, groups (described below), rings, and other algebraic structures. The product of topological spaces is another instance. There is also the direct sum – in some areas this is used interchangeably, while in others it is a different concept. Examples If we think of ℝ as the set of real numbers without further structure, then the direct product ℝ × ℝ is just the Cartesian product {(x, y) : x, y ∈ ℝ}. If we think of ℝ as the group of real numbers under addition, then the direct product ℝ × ℝ still has {(x, y) : x, y ∈ ℝ} as its underlying set. The difference between this and the preceding example is that ℝ × ℝ is now a group, and so we have to also say how to add their elements. This is done by defining (a, b) + (c, d) = (a + c, b + d). If we think of ℝ as the ring of real numbers, then the direct product ℝ × ℝ again has {(x, y) : x, y ∈ ℝ} as its underlying set. The ring structure consists of addition defined by (a, b) + (c, d) = (a + c, b + d) and multiplication defined by (a, b)(c, d) = (ac, bd). Although the ring ℝ is a field, ℝ × ℝ is not, because the nonzero element (1, 0) does not have a multiplicative inverse. In a similar manner, we can talk about the direct product of finitely many algebraic structures, for example, ℝ × ℝ × ℝ × ℝ. This relies on the direct product being associative up to isomorphism. That is, (A × B) × C ≅ A × (B × C) for any algebraic structures A, B, and C of the same kind. The direct product is also commutative up to isomorphism, that is, A × B ≅ B × A for any algebraic structures A and B of the same kind. We can even talk about the direct product of infinitely many algebraic structures; for example we can take the direct product of countably many copies of ℝ, which we write as ℝ × ℝ × ℝ × ⋯. Direct product of groups In group theory one can define the direct product of two groups (G, ∘) and (H, ·), denoted by G × H. For abelian groups that are written additively, it may also be called the direct sum of two groups, denoted by G ⊕ H. It is defined as follows: the set of the elements of the new group is the Cartesian product of the sets of elements of G and H, that is {(g, h) : g ∈ G, h ∈ H}; on these elements put an operation, defined element-wise: (g, h) × (g′, h′) = (g ∘ g′, h · h′). Note that (G, ∘) may be the same as (H, ·). This construction gives a new group. It has a normal subgroup isomorphic to G (given by the elements of the form (g, 1)), and one isomorphic to H (comprising the elements (1, h)). The reverse also holds. There is the following recognition theorem: If a group K contains two normal subgroups G and H such that K = GH and the intersection of G and H contains only the identity, then K is isomorphic to G × H. A relaxation of these conditions, requiring only one subgroup to be normal, gives the semidirect product. As an example, take as G and H two copies of the unique (up to isomorphisms) group of order 2, say C₂ = {1, a}. Then C₂ × C₂ = {(1, 1), (1, a), (a, 1), (a, a)}, with the operation element by element. For instance, (1, a) · (a, 1) = (1 · a, a · 1) = (a, a), and (1, a) · (1, a) = (1, a²) = (1, 1). With a direct product, we get some natural group homomorphisms for free: the projection maps defined by π₁(g, h) = g and π₂(g, h) = h are called the coordinate functions. Also, every homomorphism f to the direct product is totally determined by its component functions fᵢ = πᵢ ∘ f. For any group (G, ∘) and any integer n ≥ 0, repeated application of the direct product gives the group of all n-tuples Gⁿ (for n = 0, this is the trivial group), for example ℤⁿ and ℝⁿ. Direct product of modules The direct product for modules (not to be confused with the tensor product) is very similar to the one defined for groups above, using the Cartesian product with the operation of addition being componentwise, and the scalar multiplication just distributing over all the components.
Starting from ℝ we get Euclidean space ℝⁿ, the prototypical example of a real n-dimensional vector space. The direct product of ℝᵐ and ℝⁿ is ℝ^(m+n). Note that a direct product for a finite index set is canonically isomorphic to the direct sum: ∏ᵢ Xᵢ ≅ ⊕ᵢ Xᵢ for i = 1, …, n. The direct sum and direct product are not isomorphic for infinite indices, where the elements of a direct sum are zero for all but a finite number of entries. They are dual in the sense of category theory: the direct sum is the coproduct, while the direct product is the product. For example, consider the infinite direct product X = ℝ × ℝ × ℝ × ⋯ and the infinite direct sum Y = ℝ ⊕ ℝ ⊕ ℝ ⊕ ⋯ of the real numbers. Only sequences with a finite number of non-zero elements are in Y. For example, (1, 0, 0, 0, …) is in Y, but (1, 1, 1, 1, …) is not. Both of these sequences are in the direct product X; in fact, Y is a proper subset of X (that is, Y ⊂ X). Topological space direct product The direct product for a collection of topological spaces Xᵢ, for i in some index set I, once again makes use of the Cartesian product ∏ᵢ Xᵢ. Defining the topology is a little tricky. For finitely many factors, this is the obvious and natural thing to do: simply take as a basis of open sets the collection of all Cartesian products of open subsets from each factor, that is, all sets U₁ × ⋯ × Uₙ with each Uᵢ open in Xᵢ. This topology is called the product topology. For example, directly defining the product topology on ℝ² by the open sets of ℝ (disjoint unions of open intervals), the basis for this topology would consist of all disjoint unions of open rectangles in the plane (as it turns out, it coincides with the usual metric topology). The product topology for infinite products has a twist, and this has to do with being able to make all the projection maps continuous and to make all functions into the product continuous if and only if all its component functions are continuous (that is, to satisfy the categorical definition of product: the morphisms here are continuous functions): we take as a basis of open sets the collection of all Cartesian products of open subsets from each factor, as before, with the proviso that all but finitely many of the open subsets are the entire factor, that is, all sets ∏ᵢ Uᵢ with each Uᵢ open in Xᵢ and Uᵢ = Xᵢ for all but finitely many i. The more natural-sounding topology would be, in this case, to take products of infinitely many open subsets as before, and this does yield a somewhat interesting topology, the box topology. However, it is not too difficult to find an example of a bunch of continuous component functions whose product function is not continuous (see the separate entry box topology for an example and more). The problem that makes the twist necessary is ultimately rooted in the fact that the intersection of open sets is only guaranteed to be open for finitely many sets in the definition of topology. Products (with the product topology) are nice with respect to preserving properties of their factors; for example, the product of Hausdorff spaces is Hausdorff; the product of connected spaces is connected, and the product of compact spaces is compact. That last one, called Tychonoff's theorem, is yet another equivalence to the axiom of choice. For more properties and equivalent formulations, see the separate entry product topology. Direct product of binary relations On the Cartesian product of two sets with binary relations R and S, define the relation T by (a, b) T (c, d) if and only if a R c and b S d. If R and S are both reflexive, irreflexive, transitive, symmetric, or antisymmetric, then T will be also. Similarly, totality of T is inherited from R and S. Combining properties it follows that this also applies for being a preorder and being an equivalence relation.
However, if R and S are connected relations, T need not be connected; for example, the direct product of ≤ on ℕ with itself does not relate (1, 2) and (2, 1). Direct product in universal algebra If Σ is a fixed signature, I is an arbitrary (possibly infinite) index set, and (Aᵢ) for i ∈ I is an indexed family of Σ-algebras, the direct product A = ∏ᵢ Aᵢ is a Σ-algebra defined as follows: The universe set of A is the Cartesian product of the universe sets of the Aᵢ. For each n and each n-ary operation symbol f ∈ Σ, its interpretation in A is defined componentwise, formally: for all a₁, …, aₙ in A and each i ∈ I, the ith component of f(a₁, …, aₙ) is defined as f(a₁(i), …, aₙ(i)), with f interpreted in Aᵢ. For each i ∈ I, the ith projection πᵢ : A → Aᵢ is defined by πᵢ(a) = a(i). It is a surjective homomorphism between the Σ-algebras A and Aᵢ. As a special case, if the index set is I = {1, 2}, the direct product of two Σ-algebras A₁ and A₂ is obtained, written as A = A₁ × A₂. If Σ just contains one binary operation f, the above definition of the direct product of groups is obtained, using the notation A₁ × A₂. Similarly, the definition of the direct product of modules is subsumed here. Categorical product The direct product can be abstracted to an arbitrary category. In a category, given a collection of objects Aᵢ indexed by a set I, a product of these objects is an object A together with morphisms πᵢ : A → Aᵢ for all i ∈ I, such that if B is any other object with morphisms fᵢ : B → Aᵢ for all i ∈ I, there exists a unique morphism B → A whose composition with πᵢ equals fᵢ for every i. Such A and (πᵢ) do not always exist. If they do exist, then A is unique up to isomorphism, and is denoted ∏ᵢ Aᵢ. In the special case of the category of groups, a product always exists: the underlying set of ∏ᵢ Aᵢ is the Cartesian product of the underlying sets of the Aᵢ, the group operation is componentwise multiplication, and the (homo)morphism πᵢ is the projection sending each tuple to its ith coordinate. Internal and external direct product Some authors draw a distinction between an internal direct product and an external direct product. For example, if A and B are subgroups of an additive abelian group G, such that A + B = G and A ∩ B = {0}, then A × B ≅ G, and we say that G is the internal direct product of A and B. To avoid ambiguity, we can refer to the set {(a, b) : a ∈ A, b ∈ B} as the external direct product of A and B. See also Notes References Abstract algebra
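To make the componentwise operation concrete, here is a minimal C sketch (types and names are illustrative only) of the direct product C₂ × C₂ described above, with C₂ represented additively as {0, 1} under addition modulo 2 rather than multiplicatively as {1, a}:

#include <stdio.h>

typedef struct { int g; int h; } Pair; /* an element (g, h) of C2 x C2 */

/* the componentwise group operation: (g, h) * (g', h') = (g + g' mod 2, h + h' mod 2) */
Pair op(Pair a, Pair b)
{
    Pair r = { (a.g + b.g) % 2, (a.h + b.h) % 2 };
    return r;
}

int main(void)
{
    Pair x = {0, 1}, y = {1, 0};
    Pair z = op(x, y); /* (1, 1): corresponds to (1, a) · (a, 1) = (a, a) above */
    Pair w = op(x, x); /* (0, 0): corresponds to (1, a) · (1, a) = (1, 1) above */
    printf("(%d,%d) (%d,%d)\n", z.g, z.h, w.g, w.h);
    return 0;
}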
Direct product
Mathematics
1,884
12,487,179
https://en.wikipedia.org/wiki/KLF1
Krueppel-like factor 1 is a protein that in humans is encoded by the KLF1 gene. The gene for KLF1 is on human chromosome 19 and on mouse chromosome 8. Krueppel-like factor 1 is a transcription factor that is necessary for the proper maturation of erythroid (red blood) cells. Structure The molecule has two domains: the transactivation domain and the chromatin-remodeling domain. The carboxyl (C) terminus is composed of three C2H2 zinc fingers that bind to DNA, and the amino (N) terminus is proline-rich and acidic. Function Studies in mice first demonstrated the critical function of KLF1 in hematopoietic development. KLF1-deficient (knockout) mouse embryos exhibit a lethal anemic phenotype, fail to promote the transcription of adult β-globin, and die by embryonic day 15. Over-expression of KLF1 results in a reduction of the number of circulating platelets and hastens the onset of β-globin gene expression. KLF1 coordinates the regulation of six cellular pathways that are all essential to terminal erythroid differentiation: Cell Membrane & Cytoskeleton Apoptosis Heme Synthesis & Transport Cell Cycling Iron Procurement Globin Chain Production It has also been linked to three main processes that are all essential to transcription of the β-globin gene: Chromatin remodeling Modulation of the gamma-to-beta globin switch Transcriptional activation KLF1 binds specifically to the "CACCC" motif of the β-globin gene promoter. When natural mutations occur in the promoter, β+ thalassemia can arise in humans. Thalassemia's prevalence (2 million worldwide carry the trait) makes KLF1 clinically significant. Clinical significance Next-generation sequencing efforts have revealed a surprisingly high prevalence of mutations in human KLF1. The chance of a KLF1-null child being conceived is approximately 1:24,000 in Southern China. With pre-natal blood transfusions and bone marrow transplant, it is possible to be born without KLF1. Most mutations in KLF1 lead to a recessive loss-of-function phenotype; however, semi-dominant mutations have been identified in humans and mice as the cause of a rare inherited anemia, CDA type IV. Additional family studies and clinical research unveiled the molecular genetics of the HPFH KLF1-related condition and established KLF1 as a novel quantitative trait locus for HbF (HBFQTL6). The permissive role of KLF1 in the expression of several red blood cell (RBC) antigens is evidenced by a series of known KLF1 mutations named after their modifier-gene effect on the Lutheran blood group, In(Lu), i.e. "Inhibitor of Lutheran". No living homozygous human examples are known, consistent with the embryonic lethality of homozygous KLF1-knockout mice. The In(Lu) mutants are thus significantly haploinsufficient for KLF1 function: RBCs are formed, but there is an apparent dominant-negative effect on expression of the Lutheran antigen (basal cell adhesion molecule), after which the condition was named, as well as a significant but somewhat variable degree of inhibition of expression of the Colton (aquaporin 1), Ok (CD147, i.e. EMMPRIN), Indian (CD44), Duffy (Duffy antigen/chemokine receptor, or Fy), Scianna (ERMAP), MN (glycophorin A), Diego (band 3), P1, i, and AnWj (CD44) antigens on the RBC membrane, some of which might overlap with the KLF1 mutations causing the fraction of hereditary persistence of fetal hemoglobin with CDA type IV. References External links Transcription factors
KLF1
Chemistry,Biology
809
58,402,003
https://en.wikipedia.org/wiki/Scramspace
Scramspace was a hypersonic engine research project established by the Centre for Hypersonics at the University of Queensland, Australia. It was a 1.8-meter-long, free-flying, hypersonic scramjet. A scramjet is fundamentally an air-breathing engine that travels at hypersonic velocities. Built in Brisbane at an estimated cost of $14 million, it took approximately 3 years to complete. Scramspace was supposed to fire at a hypersonic velocity of Mach 8, or 8600 km/hour (5343 mph), but the flight test turned out to be a failure, and the rocket engine and the payload plummeted into the sea off the coast of Norway. Background Scramspace was designed and built in Brisbane, Australia. It took 3 years to build and was estimated to cost around $14 million. It was designed to fly at around Mach 8. It was the first and the largest research project funded by the Australian Space Research Program. A number of ground-based research tests and Mach 8 flight experiments were involved in establishing the research project. A number of engineers and PhD scholars were involved in the making of this project. Ground tests up to Mach 14 were performed to assess the scientific and technical parameters of the project. This was to be followed by flight tests up to Mach 8. The project involved five countries in partnership: Australia, Japan, Germany, Italy, and the United States. It was led by the University of Queensland's Centre for Hypersonics. Aftermath In August 2013, the scramjet was airlifted to Norway for a final flight test at Mach 8. The vehicle was designed to reach an altitude of about 340 km (211 miles) with the help of a two-stage rocket engine. According to the experiment plan, on leaving the atmosphere the scramjet had to separate from the rocket engine and re-orient itself for reentry. The flight-sensor data had to be collected in a three-second window before the scramjet disintegrated on reentry. However, because of an unknown issue in the first-stage rocket motor, the scramjet payload could not be delivered to the correct altitude and speed in the flight test conducted on September 18, 2013. The uncrewed vehicle, with the payload and the rocket, plummeted into the sea off the coast of Norway. Results The final stage of the project did not yield any hypersonic flight data. However, the ground testing, modelling, and analysis were able to provide reference results for future projects. The project provided valuable insight and results pertaining to hypersonic physics, hypersonic combustion, and the performance of materials and components. It set an example for future hypersonic aircraft research. References Aerospace engineering University of Queensland
Scramspace
Engineering
560
14,814,399
https://en.wikipedia.org/wiki/Oxysterol-binding%20protein
The oxysterol-binding protein (OSBP)-related proteins (ORPs) are a family of lipid transfer proteins (LTPs). Concretely, they constitute a family of sterol- and phosphoinositide-binding and transfer proteins in eukaryotes that are conserved from yeast to humans. They are lipid-binding proteins implicated in many cellular processes related to oxysterol, including signaling, vesicular trafficking, lipid metabolism, and nonvesicular sterol transport. In yeast cells, some ORPs might function as sterol or lipid transporters, though yeast strains lacking ORPs do not have significant defects in sterol transport between the endoplasmic reticulum and the plasma membrane. Although sterol transfer is proposed to occur at regions where organelle membranes are closely apposed, disruption of endoplasmic reticulum-plasma membrane contact sites does not have major effects on sterol transfer, though phospholipid homeostasis is perturbed. Various ORPs localize at membrane contact sites (MCS), where the endoplasmic reticulum (ER) is apposed to other organelle limiting membranes. Yeast ORPs also participate in vesicular trafficking, in which they affect Sec14-dependent Golgi vesicle biogenesis and, later in post-Golgi exocytosis, exocyst-complex-dependent vesicle tethering to the plasma membrane. In mammalian cells, some ORPs function as sterol sensors that regulate the assembly of protein complexes in response to changes in cholesterol levels. By that means, ORPs most likely affect organelle membrane lipid compositions, with impacts on signaling and vesicle transport, but also on cellular lipid metabolism. Oxysterol is a cholesterol metabolite that can be produced through enzymatic or radical processes. Oxysterols, the 27-carbon products of cholesterol oxidation by both enzymic and non-enzymic mechanisms, constitute a large family of lipids involved in a plethora of physiological processes. Studies identifying the specific cellular targets of oxysterols indicate that several oxysterols may be regulators of cellular lipid metabolism via control of gene transcription. In addition, they have been shown to be involved in other processes such as immune regulatory functions and brain homeostasis. Structure All OSBP-related proteins (ORPs) contain a core lipid-binding domain (ORD), which has a characteristic amino acid sequence, EQVSHHPP. The most studied ORPs are the human and yeast ones, and the only OSBP-ORP whose structure is completely known is Kes1p, also called Osh4p, from yeast. Six different protein domain and structural motif types are found in OSBP-ORPs. FFAT motif This is two phenylalanines in an acidic tract. It targets many proteins involved in lipid metabolism to the endoplasmic reticulum. It is contained in most mammalian ORPs and in about 40% of yeast's ORPs. Ankyrin motif It is thought to take part in protein-protein interactions, but this is not known for certain. In some proteins, it also contributes to the localization of the protein to a membrane contact site (a zone of close contact between the endoplasmic reticulum and a second organelle). Transmembrane domain It is only present in some human proteins. It is a hydrophobic region which anchors the protein to the cell membrane. PH (pleckstrin homology) domain It binds phosphoinositides, usually only with low affinity, and other ligands. It also recognizes organelles enriched in the PIPs. GOLD (Golgi dynamics) domain Like the ankyrin motif, it probably mediates interactions between proteins.
It is only found in one yeast protein and is not found in any human ORP. ORD (OSBP-related domain) It contains the EQVSHHPP sequence. It has a hydrophobic pocket that binds a sterol and also contains multiple membrane-binding surfaces, which give the protein the ability to cause liposome aggregation. Main functions As part of the lipid transfer protein (LTP) family, ORPs have different and varied functions. These functions include signaling, vesicular trafficking, lipid metabolism, and nonvesicular sterol transport. ORPs have been studied in the cells of many organisms, such as human cells and yeast. In yeast, where organelle membranes are closely apposed, it has been proposed that ORPs work as sterol transporters, though only a few ORPs actually bind sterols, and collectively yeast ORPs are dispensable for sterol transfer in vivo. They are also part of Golgi-to-plasma membrane vesicular trafficking, but their role is not clear yet. In mammalian cells, ORPs act as sterol sensors. These sensors regulate the assembly of protein complexes when cholesterol levels fluctuate. They use the following mechanisms: 1. They can extract and deliver lipids from one membrane to another, probably at membrane contact sites. 2. ORPs help establish the membrane when transient changes in the distribution of lipids occur. They add or remove lipids within different regions of the membrane. The exclusion of certain lipids from particular regions drives processes such as membrane binding or signaling. 3. They work as lipid sensors, altering interactions with other proteins upon binding or releasing lipid ligands. This occurs mainly at organelle contact sites. 4. The access of other lipid-binding proteins to the membrane is regulated by ORPs in two ways. One way is by presenting a lipid to a second lipid-binding protein; another is by preventing the lipid-binding protein from accessing a lipid in the membrane. These two mechanisms are not mutually exclusive, so ORPs might use both. OSBP-ORP human proteins In humans there are 12 ORP genes, and splicing generates 16 different protein products. OSBP-ORP yeast proteins In yeast (Saccharomyces cerevisiae) there are 7 ORP genes, called OSH1-7, though they have some additional names as well. Role in disease Some oxysterols have been found to contribute to inflammation, oxidative damage, and cell death in the appearance and especially the development of some of the most important chronic diseases, such as atherosclerosis, neurodegenerative diseases, inflammatory bowel diseases, age-related macular degeneration, and other pathological conditions related to cholesterol absorption. Besides, a recent study suggests a method of screening for and diagnosing Niemann-Pick C disease by plasma oxysterol screening, which is found to be a less invasive, more sensitive and specific, and more economical strategy than current practice. References Protein domains Peripheral membrane proteins
Oxysterol-binding protein
Biology
1,455
1,127,884
https://en.wikipedia.org/wiki/Hamming%20weight
The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length. For the most typical case, a string of bits, this is the number of 1's in the string, or the digit sum of the binary representation of a given number and the ℓ₁ norm of a bit vector. In this binary case, it is also called the population count, popcount, sideways sum, or bit summation. History and usage The Hamming weight is named after the American mathematician Richard Hamming, although he did not originate the notion. The Hamming weight of binary numbers was already used in 1899 by James W. L. Glaisher to give a formula for the number of odd binomial coefficients in a single row of Pascal's triangle. Irving S. Reed introduced a concept, equivalent to Hamming weight in the binary case, in 1954. Hamming weight is used in several disciplines including information theory, coding theory, and cryptography. Examples of applications of the Hamming weight include: In modular exponentiation by squaring, the number of modular multiplications required for an exponent e is log2 e + weight(e). This is the reason that the public key value e used in RSA is typically chosen to be a number of low Hamming weight. The Hamming weight determines path lengths between nodes in Chord distributed hash tables. IrisCode lookups in biometric databases are typically implemented by calculating the Hamming distance to each stored record. In computer chess programs using a bitboard representation, the Hamming weight of a bitboard gives the number of pieces of a given type remaining in the game, or the number of squares of the board controlled by one player's pieces, and is therefore an important contributing term to the value of a position. Hamming weight can be used to efficiently compute find first set using the identity ffs(x) = pop(x ^ (x - 1)). This is useful on platforms such as SPARC that have hardware Hamming weight instructions but no hardware find first set instruction. The Hamming weight operation can be interpreted as a conversion from the unary numeral system to binary numbers. In implementation of some succinct data structures like bit vectors and wavelet trees. Efficient implementation The population count of a bitstring is often needed in cryptography and other applications. The Hamming distance of two words A and B can be calculated as the Hamming weight of A xor B. The problem of how to implement it efficiently has been widely studied. A single operation for the calculation, or parallel operations on bit vectors are available on some processors. For processors lacking those features, the best solutions known are based on adding counts in a tree pattern. For example, to count the number of 1 bits in the 16-bit binary number a = 0110 1100 1011 1010, adjacent bits are summed in pairs, then those 2-bit counts are summed in pairs, and so on, until a single count remains. In the code below, the operations are as in the C programming language, so X >> Y means to shift X right by Y bits, X & Y means the bitwise AND of X and Y, and + is ordinary addition. The best algorithms known for this problem are based on the concept illustrated above and are given here:

//types and constants used in the functions below
//uint64_t is an unsigned 64-bit integer variable type (defined in C99 version of C language)
const uint64_t m1  = 0x5555555555555555; //binary: 0101...
const uint64_t m2  = 0x3333333333333333; //binary: 00110011..
const uint64_t m4  = 0x0f0f0f0f0f0f0f0f; //binary: 4 zeros, 4 ones ...
const uint64_t m8  = 0x00ff00ff00ff00ff; //binary: 8 zeros, 8 ones ...
const uint64_t m16 = 0x0000ffff0000ffff; //binary: 16 zeros, 16 ones ...
const uint64_t m32 = 0x00000000ffffffff; //binary: 32 zeros, 32 ones
const uint64_t h01 = 0x0101010101010101; //the sum of 256 to the power of 0,1,2,3...

//This is a naive implementation, shown for comparison,
//and to help in understanding the better functions.
//This algorithm uses 24 arithmetic operations (shift, add, and).
int popcount64a(uint64_t x)
{
    x = (x & m1 ) + ((x >>  1) & m1 ); //put count of each 2 bits into those 2 bits
    x = (x & m2 ) + ((x >>  2) & m2 ); //put count of each 4 bits into those 4 bits
    x = (x & m4 ) + ((x >>  4) & m4 ); //put count of each 8 bits into those 8 bits
    x = (x & m8 ) + ((x >>  8) & m8 ); //put count of each 16 bits into those 16 bits
    x = (x & m16) + ((x >> 16) & m16); //put count of each 32 bits into those 32 bits
    x = (x & m32) + ((x >> 32) & m32); //put count of each 64 bits into those 64 bits
    return x;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with slow multiplication.
//This algorithm uses 17 arithmetic operations.
int popcount64b(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits
    x += x >>  8;  //put count of each 16 bits into their lowest 8 bits
    x += x >> 16;  //put count of each 32 bits into their lowest 8 bits
    x += x >> 32;  //put count of each 64 bits into their lowest 8 bits
    return x & 0x7f;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with fast multiplication.
//This algorithm uses 12 arithmetic operations, one of which is a multiply.
int popcount64c(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits
    return (x * h01) >> 56;  //returns left 8 bits of x + (x<<8) + (x<<16) + (x<<24) + ...
}

The above implementations have the best worst-case behavior of any known algorithm. However, when a value is expected to have few nonzero bits, it may instead be more efficient to use algorithms that count these bits one at a time. As Wegner described in 1960, the bitwise AND of x with x − 1 differs from x only in zeroing out the least significant nonzero bit: subtracting 1 changes the rightmost string of 0s to 1s, and changes the rightmost 1 to a 0. If x originally had n bits that were 1, then after only n iterations of this operation, x will be reduced to zero. The following implementation is based on this principle.

//This is better when most bits in x are 0
//This algorithm works the same for all data sizes.
//This algorithm uses 3 arithmetic operations and 1 comparison/branch per "1" bit in x.
int popcount64d(uint64_t x)
{
    int count;
    for (count=0; x; count++)
        x &= x - 1;
    return count;
}

If greater memory usage is allowed, we can calculate the Hamming weight faster than the above methods. With unlimited memory, we could simply create a large lookup table of the Hamming weight of every 64 bit integer. If we can store a lookup table of the Hamming weight of every 16 bit integer, we can do the following to compute the Hamming weight of every 32 bit integer.
static uint8_t wordbits[65536] = { /* bitcounts of integers 0 through 65535, inclusive */ };

//This algorithm uses 3 arithmetic operations and 2 memory reads.
int popcount32e(uint32_t x)
{
    return wordbits[x & 0xFFFF] + wordbits[x >> 16];
}

//Optionally, the wordbits[] table could be filled using this function
void popcount32e_init(void)
{
    uint32_t i;
    uint16_t x;
    int count;
    for (i=0; i <= 0xFFFF; i++)
    {
        x = i;
        for (count=0; x; count++) // borrowed from popcount64d() above
            x &= x - 1;
        wordbits[i] = count;
    }
}

Muła et al. have shown that a vectorized version of popcount64b can run faster than dedicated instructions (e.g., popcnt on x64 processors). Minimum weight In error-correcting coding, the minimum Hamming weight, commonly referred to as the minimum weight wmin of a code, is the weight of the lowest-weight non-zero code word. The weight w of a code word is the number of 1s in the word. For example, the word 11001010 has a weight of 4. In a linear block code the minimum weight is also the minimum Hamming distance (dmin) and defines the error correction capability of the code. If wmin = n, then dmin = n and the code will correct up to ⌊(dmin − 1)/2⌋ errors. Language support Some C compilers provide intrinsic functions that provide bit counting facilities. For example, GCC (since version 3.4 in April 2004) includes a builtin function __builtin_popcount that will use a processor instruction if available or an efficient library implementation otherwise. LLVM-GCC has included this function since version 1.5 in June 2005. In the C++ Standard Library, the bit-array data structure bitset has a count() method that counts the number of bits that are set. In C++20, a new header <bit> was added, containing functions std::popcount and std::has_single_bit, taking arguments of unsigned integer types. In Java, the growable bit-array data structure BitSet has a cardinality() method that counts the number of bits that are set. In addition, there are Integer.bitCount(int) and Long.bitCount(long) functions to count bits in primitive 32-bit and 64-bit integers, respectively. Also, the arbitrary-precision integer class BigInteger has a bitCount() method that counts bits. In Python, the int type has a bit_count() method to count the number of bits set. This functionality was introduced in Python 3.10, released in October 2021. In Common Lisp, the function logcount, given a non-negative integer, returns the number of 1 bits. (For negative integers it returns the number of 0 bits in 2's complement notation.) In either case the integer can be a BIGNUM. Starting in GHC 7.4, the Haskell base package has a popCount function available on all types that are instances of the Bits class (available from the Data.Bits module). The MySQL version of the SQL language provides BIT_COUNT() as a standard function. Fortran 2008 has the standard, intrinsic, elemental function popcnt returning the number of nonzero bits within an integer (or integer array). Some programmable scientific pocket calculators feature special commands to calculate the number of set bits, e.g. #B on the HP-16C and WP 43S, #BITS or BITSUM on HP-16C emulators, and nBITS on the WP 34S. FreePascal implements popcnt since version 3.0.
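As a small usage sketch of the compiler support mentioned above (GCC and Clang provide __builtin_popcount and __builtin_popcountll as non-standard builtins; whether a hardware instruction is emitted depends on the target and flags such as -mpopcnt):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0x6CBA; /* 0110 1100 1011 1010, the example number used above */
    uint64_t b = 0xFFFFFFFFFFFFFFFFULL;
    printf("%d\n", __builtin_popcount(a));    /* prints 9 */
    printf("%d\n", __builtin_popcountll(b));  /* prints 64 */
    return 0;
}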
Control Data Corporation's (CDC) 6000 and Cyber 70/170 series machines included a population count instruction; in COMPASS, this instruction was coded as CXi. The 64-bit SPARC version 9 architecture defines a POPC instruction, but most implementations do not implement it, requiring it to be emulated by the operating system. Donald Knuth's model computer MMIX, which is going to replace MIX in his book The Art of Computer Programming, has had an SADD instruction since 1999. SADD a,b,c counts all bits that are 1 in b and 0 in c and writes the result to a. Compaq's Alpha 21264A, released in 1999, was the first Alpha series CPU design that had the count extension (CIX). Analog Devices' Blackfin processors feature the ONES instruction to perform a 32-bit population count. AMD's Barcelona architecture introduced the advanced bit manipulation (ABM) ISA, introducing the POPCNT instruction as part of the SSE4a extensions in 2007. Intel Core processors introduced a POPCNT instruction with the SSE4.2 instruction set extension, first available in a Nehalem-based Core i7 processor, released in November 2008. The ARM architecture introduced the VCNT instruction as part of the Advanced SIMD (NEON) extensions. The RISC-V architecture introduced the CPOP instruction as part of the Bit Manipulation (B) extension. See also Two's complement Fan out References Further reading (Item 169: Population count assembly code for the PDP/6-10.) External links Aggregate Magic Algorithms. Optimized population count and other algorithms explained with sample code. Bit Twiddling Hacks Several algorithms with code for counting bits set. Necessary and Sufficient - by Damien Wintour - Has code in C# for various Hamming Weight implementations. Best algorithm to count the number of set bits in a 32-bit integer? - Stackoverflow Coding theory Articles with example C code
Hamming weight
Mathematics
3,202
14,761,583
https://en.wikipedia.org/wiki/PRDM2
PR domain zinc finger protein 2 is a protein that in humans is encoded by the PRDM2 gene. Function This tumor suppressor gene is a member of a nuclear histone/protein methyltransferase superfamily. It encodes a zinc finger protein that can bind to retinoblastoma protein, estrogen receptor, and the TPA-responsive element (MTE) of the heme-oxygenase-1 gene. Although the functions of this protein have not been fully characterized, it may (1) play a role in transcriptional regulation during neuronal differentiation and pathogenesis of retinoblastoma, (2) act as a transcriptional activator of the heme-oxygenase-1 gene, and (3) be a specific effector of estrogen action. Three transcript variants encoding different isoforms have been found for this gene. Interactions PRDM2 has been shown to interact with Estrogen receptor alpha and Retinoblastoma protein. References Further reading External links Transcription factors
PRDM2
Chemistry,Biology
208
18,060,754
https://en.wikipedia.org/wiki/Qualified%20Security%20Assessor
Qualified Security Assessor (QSA) is a designation conferred by the PCI Security Standards Council on individuals who meet specific information security education requirements, have taken the appropriate training from the PCI Security Standards Council, are employees of a Qualified Security Assessor (QSA) company, a PCI-approved security and auditing firm, and will be performing PCI compliance assessments as they relate to the protection of credit card data. The term QSA can refer either to an individual qualified to perform payment card industry compliance auditing and consulting or to the firm itself. QSA companies are sometimes differentiated from QSA individuals by the initialism 'QSAC'. The primary goal of an individual with the PCI QSA certification is to perform an assessment of a firm that handles credit card data against the high-level control objectives of the PCI Data Security Standard (PCI DSS). Consultants holding the QSA certification must re-certify annually to ensure they are conversant with any changes to the PCI-DSS requirements and guidelines. References External links PCI Security Standards Council Information privacy Credit card terminology
Qualified Security Assessor
Technology,Engineering
227
6,287,168
https://en.wikipedia.org/wiki/Goodmans%20Industries
Goodmans was a British consumer electronics company; it was dissolved on 22 February 2022, according to Companies House. Founded in London in 1923, the company started as a manufacturer of loudspeakers for public address systems. Production and engineering originally took place in Wembley, London, before moving to the town of Havant, Hampshire. In the 1960s, Goodmans extended its products to amplifiers with the introduction of the Maxamp30, the first British-made solid-state amplifier. Throughout the 1970s and 1980s, Goodmans continued to develop loudspeakers, amplifiers, tuners, and receivers. From the late 1980s onwards Goodmans undertook a period of diversification into wider consumer electronics, including in-car entertainment and television. Goodmans was the shirt sponsor of the English football team Portsmouth F.C. from 1989 to 1995. Product range Goodmans Industries Ltd marketed a wide range of consumer electronics, predominantly focused on audio, including record players, stand-alone speakers, and radios. It launched its first digital radio, the GPS280, in 2003 and its first HD-ready TV on 27 July 2007. References External links Goodmans at B&M Stores Goodmans Support Center (get help & manuals) Electronics companies of the United Kingdom British brands Manufacturing companies of the United Kingdom Radio manufacturers
Goodmans Industries
Engineering
258
40,201,843
https://en.wikipedia.org/wiki/HD%203322
HD 3322 is a binary star system in the northern constellation of Andromeda. With an apparent visual magnitude of 6.51, it lies below the nominal brightness limit for visibility with the normal naked eye, but it is still possible to see the star with excellent vision under ideal seeing conditions. Its annual parallax shift provides a distance estimate of roughly 700 light years. This is a single-lined spectroscopic binary star system with an orbital period of around 400 days and an eccentricity of 0.57. The visible component has a stellar classification matching a chemically peculiar B-type giant mercury-manganese star. Catalano and Leone (1991) found it to be an α2 CVn variable with a period of 4.6904 days, and thus it received the variable star designation PY And. It has an estimated 3.7 times the mass of the Sun and about 4.8 times the Sun's radius. It is radiating around 246 times the Sun's luminosity from its photosphere at an effective temperature of 12,882 K. References B-type giants Alpha2 Canum Venaticorum variables Spectroscopic binaries Andromeda (constellation) Durchmusterung objects 003322 002865 0149 Andromedae, PY
HD 3322
Astronomy
270
68,511,991
https://en.wikipedia.org/wiki/Erbium%28III%29%20nitrate
Erbium(III) nitrate is an inorganic compound, a salt of erbium and nitric acid with the chemical formula Er(NO3)3. The compound forms pink crystals, readily soluble in water, and also forms crystalline hydrates. Synthesis It can be prepared by dissolving metallic erbium in nitric acid, or by dissolving erbium oxide or hydroxide in nitric acid: Er2O3 + 6HNO3 → 2Er(NO3)3 + 3H2O; Er(OH)3 + 3HNO3 → Er(NO3)3 + 3H2O. It can also be obtained by the reaction of nitrogen dioxide with metallic erbium. Physical properties Erbium(III) nitrate forms pink hygroscopic crystals. It forms crystalline hydrates of the composition Er(NO3)3·5H2O. Both erbium(III) nitrate and its crystalline hydrate decompose on heating. It dissolves in water and ethanol (EtOH). Chemical properties On heating, the hydrated erbium nitrate thermally decomposes to form ErONO3 and then erbium oxide. Applications It is used to obtain metallic erbium and is also used as a chemical reagent. References Erbium compounds Nitrates
Erbium(III) nitrate
Chemistry
209
24,950,974
https://en.wikipedia.org/wiki/Kenneth%20Karlin%20%28chemist%29
Kenneth D. Karlin (born October 30, 1948, in Pasadena, California) is a professor of chemistry at Johns Hopkins University in Baltimore, Maryland. Research in his group focuses on coordination chemistry relevant to biological and environmental processes, involving copper or heme complexes. Of particular interest are the reactivities of such complexes with nitrogen oxides and O2, and the oxidation of substrates by the resultant compounds. He is also the Editor-in-Chief of the book series Progress in Inorganic Chemistry. Awards and honors Maryland Chemist of the Year Award (American Chemical Society Maryland Section), 2011 F. Albert Cotton Award in Synthetic Inorganic Chemistry, 2009 2009 Sierra Nevada Distinguished Chemist Award Appointed to Ira Remsen Chair in Chemistry, Johns Hopkins University, May 1999. Elected Chair, 1998 Metals in Biology Gordon Research Conference "MERIT" Award, 1993–2003, National Institute of General Medical Sciences (NIH) Fellow, American Association for the Advancement of Science (AAAS) – elected October 1992 1991 Buck-Whitney Award (ACS Eastern New York Section Research Award) University "Excellence in Research" Award, SUNY at Albany, 1988 General Electric Visiting Faculty Research Fellow, GE R&D Center, Schenectady, NY, 1986–87 Positions 1977–1983 Assistant Professor: Department of Chemistry, SUNY at Albany, Albany, NY 1983–1987 Associate Professor: Department of Chemistry, SUNY at Albany, Albany, NY 1987–1990 Professor: Department of Chemistry, SUNY at Albany, Albany, NY 1990–present Professor: Department of Chemistry, Johns Hopkins University, Baltimore, MD 2009–present Professor: Department of Bioinspired Science, WCU Program, MOBIC (Metal Oxygen BioInspired Chemistry) Group, Ewha Womans University, Seoul, KOREA Personal Karlin is the son of Stanford mathematician Samuel Karlin. References 1948 births Living people 21st-century American chemists Fellows of the American Association for the Advancement of Science Inorganic chemists Johns Hopkins University faculty Columbia University alumni
Kenneth Karlin (chemist)
Chemistry
402
437,063
https://en.wikipedia.org/wiki/Cinchonism
Cinchonism is a pathological condition caused by an overdose of quinine or its natural source, cinchona bark. Quinine and its derivatives are used medically to treat malaria and lupus erythematosus. In much smaller amounts, quinine is an ingredient of tonic drinks, acting as a bittering agent. Cinchonism can occur from therapeutic doses of quinine, either from one or several large doses. Quinidine (a Class 1A anti-arrhythmic) can also cause cinchonism symptoms to develop with as little as a single dose. Signs and symptoms Signs and symptoms of mild cinchonism (which may occur from standard therapeutic doses of quinine) include flushed and sweaty skin, ringing of the ears (tinnitus), blurred vision, impaired hearing, confusion, reversible high-frequency hearing loss, headache, abdominal pain, rashes, drug-induced lichenoid reaction (lichenoid photosensitivity), vertigo, dizziness, nausea, vomiting and diarrhea. Large doses of quinine may lead to severe (but reversible) symptoms of cinchonism: skin rashes, deafness, somnolence, diminished visual acuity or blindness, anaphylactic shock, and disturbances in heart rhythm or conduction, and death from cardiotoxicity (damage to the heart). Quinine may also trigger a rare form of hypersensitivity reaction in malaria patients, termed blackwater fever, that results in massive hemolysis, hemoglobinemia, hemoglobinuria, and kidney failure. Most symptoms of cinchonism (except in severe cases) are reversible and disappear once quinine is withdrawn. Attempted suicide by intake of a large dose of quinine has caused irreversible tunnel vision and very severe visual impairment. Patients treated with quinine may also suffer from low blood sugar, especially if it is administered intravenously, and hypotension (low blood pressure). Quinine, like chloroquine, inactivates enzymes in the lysosomes of cells and has an anti-inflammatory effect, hence its use in the treatment of rheumatoid arthritis. However, inactivation of these enzymes can also cause abnormal accumulation of glycogen and phospholipids in lysosomes, causing toxic myopathy. It is possible this action is the root cause of cinchonism. References External links Quinine Poisoning by drugs, medicaments and biological substances
Cinchonism
Environmental_science
533
70,936,988
https://en.wikipedia.org/wiki/Stefan%20Karol%20Estreicher
Stefan Karol Estreicher is a theoretical physicist, currently serving as Paul Whitfield Horn Distinguished Professor Emeritus in the Physics Department of Texas Tech University in Lubbock, Texas. Education He received his PhD from the University of Zurich in 1982 and joined the faculty of Texas Tech University in 1986. Academic Work He was elected a Fellow of the American Physical Society in 1997 and a Fellow of the Institute of Physics (UK) in 2006. He won the Friedrich Wilhelm Bessel research award from the Alexander von Humboldt Society in 2001. He served for 6 years as the Chair of the International Steering Committee of the ICDS conference series and, also for 6 years, as the elected Spokesperson of the P.W. Horn Distinguished Professors at Texas Tech University. He has published over 200 scientific papers dealing with the electrical, optical, and magnetic properties of defects in semiconductors. His studies of vibrational lifetimes revealed the concept of phonon trapping, which provides a natural explanation for why and how defects reduce heat flow. He was the first to calculate from first principles the Kapitza resistance and its temperature dependence at a semiconductor interface. He also published several articles on the history of wine and viticulture. Family He is the son of Zygmunt Estreicher (professor of musicology), grandson of Tadeusz Estreicher (professor of chemistry and historian), great-grandson of Karol Estreicher senior (author of Bibliografia Polska), nephew of Karol Estreicher junior, and grand-nephew of Stanisław Estreicher. References American physicists Theoretical physicists Texas Tech University faculty University of Zurich alumni 1952 births Living people People from Neuchâtel
Stefan Karol Estreicher
Physics
352
71,407,626
https://en.wikipedia.org/wiki/HD%20194612
HD 194612 (HR 7812) is a solitary orange-hued star located in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.9, making it visible to the naked eye under ideal conditions. Parallax measurements place it at a distance of 760 light years, and it has a low heliocentric radial velocity. This is a red giant with a stellar classification of K5 III, and Gaia DR3 stellar evolution models place it on the red giant branch. It has double the mass of the Sun and, owing to its evolved status, an enlarged radius. It shines from its photosphere with a luminosity well above the Sun's at a correspondingly cool effective temperature. Like many giants, HD 194612 has a comparatively modest projected rotational velocity. References K-type giants Octans 194612 101843 7812 PD-81 00906 Octantis, 49
HD 194612
Astronomy
188
3,242,208
https://en.wikipedia.org/wiki/Game%20testing
Game testing, also called quality assurance (QA) testing within the video game industry, is a software testing process for quality control of video games. The primary function of game testing is the discovery and documentation of software defects. Interactive entertainment software testing is a highly technical field requiring computing expertise, analytic competence, critical evaluation skills, and endurance. In recent years the field of game testing has come under fire for being extremely strenuous and unrewarding, both financially and emotionally. History In the early days of computer and video games, the developer was in charge of all the testing. No more than one or two testers were required due to the limited scope of the games. In some cases, the programmers could handle all the testing. As games became more complex, a larger pool of QA resources, called "Quality Assessment" or "Quality Assurance", became necessary. Most publishers employ a large QA staff for testing various games from different developers. Despite the large QA infrastructure most publishers have, many developers retain a small group of testers to provide on-the-spot QA. Now most game developers rely on their highly technical and game-savvy testers to find glitches and 'bugs' in either the programming code or the graphic layers. Game testers usually have a background playing a variety of different games on a multitude of platforms. They must be able to note and reference any problems they find in detailed reports, meet deadlines with assignments, and have the skill level to complete the game titles on their most difficult settings. Most of the time the position of game tester is a highly stressful and competitive one with little pay, yet it is highly sought after because it serves as a doorway into the industry. Game testers are observant individuals who can spot minor defects in the game build. A common misconception is that all game testers enjoy playing alpha or beta versions of the game and reporting occasionally found bugs. In contrast, game testing is highly focused on finding bugs using established and often tedious methodologies before the alpha version. Overview Quality assurance is a critical component in game development, though the video game industry does not have a standard methodology. Instead developers and publishers have their own methods. Small developers do not generally have QA staff; however, large companies may employ QA teams full-time. High-profile commercial games are professionally and efficiently tested by the publisher's QA department. Testing starts as soon as the first code is written and increases as the game progresses towards completion. The main QA team will monitor the game from its first submission to QA until as late as post-production. Early in the game development process the testing team is small and focuses on daily feedback for new code. As the game approaches alpha stage, more team members are employed and test plans are written. Sometimes features that are not bugs are reported as bugs, and sometimes the programming team fails to fix issues the first time around. A good bug-reporting system may help the programmers work efficiently. As the project enters beta stage, the testing team will have clear assignments for each day. Tester feedback may determine final decisions on the exclusion or inclusion of final features. Introducing testers with fresh perspectives may help identify new bugs. At this point the lead tester communicates with the producer and department heads daily.
If the developer has an external publisher, then coordination with the publisher's QA team starts. For console games, a build is sent to the console manufacturer's QA team. Beta testing may involve volunteers, for example, if the game is multiplayer. Testers receive scheduled, uniquely identifiable game builds from the developers. The game is play-tested and testers note any uncovered errors. These may range from bugs to art glitches to logic errors and level bugs. Testing requires creative gameplay to discover often subtle bugs. Some bugs are easy to document, but many require detailed description so a developer can replicate or find the bug. Testers implement concurrency control to avoid logging bugs multiple times. Many video game companies separate technical requirement testing from functionality testing altogether, since a different testing skillset is required. If a video game development enters crunch time before a deadline, the game-test team is required to test late-added features and content without delay. During this period staff from other departments may contribute to the testing, especially in multiplayer games. One example of sustained crunch, especially among the QA team, was at Treyarch during the development of Call of Duty: Black Ops 4. Most companies rank bugs according to an estimate of their severity: A bugs are critical bugs that prevent the game from being shipped; for example, they may crash the game. B bugs are essential problems that require attention; however, the game may still be playable. Multiple B bugs can be collectively as severe as an A bug. C bugs are small and obscure problems, often in the form of recommendations rather than bugs. Game tester A game tester is a member of a development team who performs game testing. Roles The organization of staff differs between organizations; a typical company may employ the following roles associated with testing disciplines: Game producers are responsible for setting testing deadlines in coordination with marketing and quality assurance. They also manage many items outside of game testing, relating to the overall production of a title. Their approval is typically required for final submission or "gold" status. Lead tester, test lead, or QA lead is the person responsible for the game working correctly and for managing bug lists. A lead tester manages the QA staff. The lead tester works closely with designers and programmers, especially towards the end of the project. The lead tester is responsible for tracking bug reports and ensuring that they are fixed. They are also responsible for ensuring that QA teams produce formal and complete reports. This includes discarding duplicate and erroneous bug reports, as well as requesting clarifications. As the game nears alpha and beta stages, the lead tester brings more testers into the team, coordinates with external testing teams, and works with management and producers. Some companies may prevent the game going gold until the lead tester approves it. Lead testers are also typically responsible for compiling representative samples of game footage for submission to regulatory bodies such as the ESRB and PEGI. Testers are responsible for checking that the game works, is easy to use, has actions that make sense, and contains fun gameplay. Testers need to write accurate and specific bug reports, if possible providing descriptions of how the bug can be reproduced. Testers may be assigned to a single game during its entire production, or brought onto other projects as demanded by the department's schedule and specific needs.
SDET (Software Development Engineer in Test) or technical testers are responsible for building automated test cases and frameworks as well as managing complex test problems such as overall game performance and security. These individuals usually have strong software development skills but with a focus on writing software which exposes defects in other applications. Specific roles and duties will vary between studios. Many games are developed without any technical testers. Employment Game QA is less technical than general software QA. Game testers most often require experience, though occasionally only a high school diploma and no technical expertise suffice. Game testing is normally a full-time job for experienced testers; however, many employees are hired as temporary staff, such as beta testers. In some cases, testers employed by a publisher may be sent to work at the developer's site. The most aggressive recruiting season is late summer/early autumn, as this is the start of the crunch period for games to be finished and shipped in time for the holiday season. Some games studios are starting to take a more technical approach to game QA that is more in line with traditional software testing. Technical test positions are still fairly rare throughout the industry, but these jobs are often full-time positions with long-term career paths; they require a 4-year computer science degree and significant experience with test automation. Some testers use the job as a stepping stone in the game industry. QA résumés, which display non-technical skill sets, tend to lead towards management rather than to marketing or production. Applicants for programming, art, or design positions need to demonstrate technical skills in these areas. Compensation Game testing personnel are usually paid hourly (around US$10–12 an hour). Testing management is usually more lucrative, and requires experience and often a college education. An annual survey found that testers earn an average of $39k annually. Testers with less than three years' experience earn an average of US$25k, while testers with over three years' experience earn US$43k. Testing leads, with over six years' experience, earn an average of US$71k a year. Process A typical bug report progression through the testing process is seen below: Identification. Incorrect program behavior is analyzed and identified as a bug. Reporting. The bug is reported to the developers using a defect tracking system. The circumstances of the bug and steps to reproduce are included in the report. Developers may request additional documentation, such as a real-time video of the bug's manifestation. Analysis. The developer responsible for the bug, such as an artist, programmer, or game designer, checks the malfunction. This is outside the scope of game tester duties, although inconsistencies in the report may require more information or evidence from the tester. Verification. After the developer fixes the issue, the tester verifies that the bug no longer occurs. Not all bugs are addressed by the developer; for example, some bugs may be claimed as features (expressed as "NAB" or "not a bug"), and may also be "waived" (given permission to be ignored) by producers, game designers, or even lead testers, according to company policy. Methodology There is no standard method for game testing, and most methodologies are developed by individual video game developers and publishers.
Methodology There is no standard method for game testing, and most methodologies are developed by individual video game developers and publishers. Methodologies are continuously refined and may differ for different types of games (for example, the methodology for testing an MMORPG will be different from testing a casual game). Many methods, such as unit testing, are borrowed directly from general software testing techniques. Outlined below are the most important methodologies, specific to video games. Functionality testing is most commonly associated with the phrase "game testing", as it entails playing the game in some form. Functionality testing does not require extensive technical knowledge. Functionality testers look for general problems within the game itself or its user interface, such as stability issues, game mechanic issues, and game asset integrity. Compliance testing is the reason for the existence of game testing labs. First-party licensors for console platforms have strict technical requirements for titles licensed for their platforms. For example, Sony publishes a Technical Requirements Checklist (TRC), Microsoft publishes Xbox Requirements (XR), and Nintendo publishes a set of "guidelines" (Lotcheck). Some of these requirements are highly technical and fall outside the scope of game testing. Other parts, most notably the formatting of standard error messages, handling of memory card data, and handling of legally trademarked and copyrighted material, are the responsibility of the game testers. Even a single violation in a submission for license approval may cause the game to be rejected, possibly incurring additional costs in further testing and resubmission. In addition, the delay may cause the title to miss an important launch window, potentially costing the publisher even larger sums of money. The requirements are proprietary documents released to developers and publishers under confidentiality agreements. They are not available for the general public to review, although familiarity with these standards is considered a valuable skill to have as a tester. Compliance may also refer to regulatory bodies such as the ESRB and PEGI, if the game targets a particular content rating. Testers must report objectionable content that may be inappropriate for the desired rating. Similar to licensing, games that do not receive the desired rating must be re-edited, retested, and resubmitted at additional cost. Compatibility testing is normally required for PC titles and is carried out near the end of development, as much of the compatibility depends on the final build of the game. Often two rounds of compatibility tests are done - early in beta to allow time for issue resolution, and late in beta or during the release candidate stage. The compatibility testing team tests major functionality of the game on various configurations of hardware. Usually a list of commercially important hardware is supplied by the publisher. Compatibility testing ensures that the game runs on different configurations of hardware and software. The hardware encompasses brands of different manufacturers and assorted input peripherals such as gamepads and joysticks. The testers also evaluate performance, and the results are used to determine the game's advertised minimum system requirements. Compatibility or performance issues may be either fixed by the developer or, in the case of legacy hardware and software, support may be dropped.
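Such a configuration sweep is essentially a Cartesian product over hardware and software axes. The sketch below is illustrative only; the axis values are invented stand-ins for the publisher-supplied list of commercially important hardware.

```python
from itertools import product

# Hypothetical configuration axes; a real list would come from the publisher.
gpus = ["VendorA-Mid", "VendorA-High", "VendorB-Mid"]
oses = ["Win10", "Win11"]
ram_gb = [8, 16]
inputs = ["keyboard+mouse", "gamepad"]

matrix = list(product(gpus, oses, ram_gb, inputs))
print(f"{len(matrix)} configurations to cover")   # 3 * 2 * 2 * 2 = 24
for cfg in matrix[:3]:
    print("test major functionality on:", cfg)
```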
Localization testers act as in-game text editors. Although general text issues are a part of functionality testing, QA departments may employ dedicated localization testers. In particular, early Japanese game translations were rife with errors, and in recent years localization testers are employed to make technical corrections and review the translation work of game scripts - catalogued collections of all the in-game text. Testers native to the region where a game is marketed may be employed to ensure the accuracy and quality of a game's localization. Soak testing, in the context of video games, involves leaving the game running for prolonged periods of time in various modes of operation, such as idling, paused, or at the title screen. This testing requires no user interaction beyond initial setup, and is usually managed by lead testers. Automated tools may be used for simulating repetitive actions, such as mouse clicks. Soaking can detect memory leaks or rounding errors that manifest only over time. Soak tests are one of the compliance requirements.
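A soak run is commonly paired with periodic resource sampling so that slow leaks become visible. Below is a minimal sketch using the third-party psutil library; the process id, durations, and the idea of judging a leak from steadily growing samples are illustrative assumptions, not a compliance procedure.

```python
import time
import psutil  # third-party; pip install psutil

def soak_memory_samples(pid, hours=8.0, interval_s=60):
    """Sample the resident memory of a running game process at a fixed
    interval during a long idle/paused soak; steadily growing numbers
    hint at a memory leak. The pid and durations are placeholders."""
    proc = psutil.Process(pid)
    samples = []
    deadline = time.time() + hours * 3600
    while time.time() < deadline:
        samples.append(proc.memory_info().rss)  # resident set size, bytes
        time.sleep(interval_s)
    return samples

# e.g. samples = soak_memory_samples(pid=12345, hours=0.1, interval_s=5)
```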
Beta testing is done during the beta stage of development. Often this refers to the first publicly available version of a game. Public betas are effective because thousands of fans may find bugs that the developer's testers did not. Regression testing is performed once a bug has been fixed by the programmers. QA checks to see whether the bug is still there (regression) and then runs similar tests to see whether the fix broke something else. That second stage is often called "halo testing"; it involves testing all around a bug, looking for other bugs. Load testing tests the limits of a system, such as the number of players on an MMO server, the number of sprites active on the screen, or the number of threads running in a particular program. Load testing may require a large group of testers or software that emulates heavy activity. Load testing also measures the capability of an application to function correctly under load. Multiplayer testing may involve a separate multiplayer QA team if the game has significant multiplayer portions. This testing is more common with PC games. The testers ensure that all connectivity methods (modem, LAN, Internet) are working. This allows single-player and multiplayer testing to occur in parallel. Player-experience modeling refers to attempts to mathematically model player experience and predict a player's preference for or liking of a video game. Console hardware For consoles, the majority of testing is not performed on a normal system or consumer unit. Special test equipment is provided to developers and publishers. The most significant tools are the test or debug kits, and the dev kits. The main difference from consumer units is the ability to load games from a burned disc, USB stick, or hard drive. The console can also be set to any publishing region. This allows game developers to produce copies for testing. This functionality is not present in consumer units, to combat software piracy and grey-market imports. Test kits have the same hardware specifications and overall appearance as a consumer unit, though often with additional ports and connectors for other testing equipment. Test kits contain additional options, such as running automated compliance checks, especially with regard to save data. The system software also allows the user to capture memory dumps for aid in debugging. Dev kits are not normally used by game testers, but are used by programmers for lower-level testing. In addition to the features of a test kit, dev kits usually have higher hardware specifications, most notably increased system memory. This allows developers to estimate early game performance without worrying about optimizations. Dev kits are usually larger and look different from a test kit or consumer unit. See also Software testing Test plan Game development Software release life cycle Notes References Research Lahti, M., Game testing in Finnish game companies, Master's thesis, Aalto University, School of Science, 2014. External links Article: The Basics of Test Automation for Apps, Games And the Mobile Web Article: Architecture and Infrastructure Aspects of Mobile Game Testing Video game development Software testing Video game industry labor disputes
Game testing
Engineering
3,409
11,421,309
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20R30/Z108
In molecular biology, small nucleolar RNA R30/Z108 (snoR30) is a C/D box small nucleolar RNA that acts as a methylation guide for 18S ribosomal RNA in plants. References External links Small nuclear RNA
Small nucleolar RNA R30/Z108
Chemistry
55
2,629,440
https://en.wikipedia.org/wiki/Natural%20material
A natural material is any product or physical matter that comes from plants, animals, or the ground and is not man-made. Minerals and the metals that can be extracted from them (without further modification) are also considered to belong to this category. Natural materials are used as building materials and clothing. Types include: Biotic materials Wood (rattan, bamboo, bark, etc.) Plant fiber (coir, ramie, sisal, cotton, flax, hemp, jute, kapok, kenaf, moss, linen, abacá, etc.) Animal fiber (wool, silk, alpaca, camel, angora, cashmere, mohair, etc.) Inorganic material Stone (flint, granite, limestone, obsidian, sandstone, sand, gems, glass, etc.) Native metal (copper, iron, gold, silver, etc.) Composites (clay, plasticine, etc.) Other natural materials: Soil See also Alternative natural materials Dimension stone Earth shelter Earth structure Green building and wood Greystone (residential buildings made from limestone) Hempcrete Log house Material science Metamaterials Natural building Natural environment Natural product Natural resources Nature Rammed earth Straw-bale construction References Further reading
Natural material
Physics
253
59,427,801
https://en.wikipedia.org/wiki/NGC%204939
NGC 4939 is a large spiral galaxy located in the constellation Virgo. It is located at a distance of about 120 million light years from Earth, which, given its apparent dimensions, means that NGC 4939 is about 190,000 light years across. It was discovered by William Herschel on March 25, 1786. Characteristics NGC 4939 has been characterised as a Seyfert galaxy, a galaxy category which features bright point-like nuclei. NGC 4939 is a type II Seyfert galaxy. Its X-ray spectrum is more consistent with a Compton-thick cold reflection source, which means that the source is hidden behind dense material, mainly gas and dust, and the X-rays observed have been reflected; however, a Compton-thin transmission model could not be ruled out. The equivalent width of the FeKα line is large, also indicating that it is a Compton-thick source. Further observations by the Swift Observatory confirmed its Compton-thick nature. The source of activity in the active galactic nucleus is a supermassive black hole (SMBH) lying at the centre of the galaxy. The SMBH at the centre of NGC 4939 is accreting material at a rate of 0.077 solar masses per year. The black hole has been detected in hard X-rays, which are not absorbed by the Compton-thick column, by INTEGRAL. The galaxy has a large elliptical bulge and possibly a weak bar. It is a grand design spiral galaxy, with two tightly wrapped arms emanating from the bulge. The arms are thin, smooth and well defined and can be traced for nearly one and a half revolutions before fading. Two symmetric arm sections or arcs are observed in the central part of the galaxy. The galaxy is seen with an inclination of 56 degrees. The rotational speed of the galaxy is about 270 km/s. Supernovae Five supernovae have been observed in NGC 4939: SN 1968X (type unknown, mag. 16) was discovered by Paul Wild on 27 November 1968. SN 1973J (type unknown, mag. 16) was discovered by Paul Wild on 21 May 1973. SN 2008aw (Type II, mag. 15.9) was discovered by the Lick Observatory Supernova Search (LOSS) on 2 March 2008. SN 2014B (Type IIP, mag. 17.0) was discovered by the Lick Observatory Supernova Search (LOSS) on 2 January 2014. SN 2020nif (Type II, mag. 16.1492) was discovered by the Zwicky Transient Facility on 24 June 2020. Nearby galaxies NGC 4939 belongs to a small galaxy group known as the NGC 4933 group, named after the multiple galaxy NGC 4933. The group lies between the Local Supercluster and the Hydra-Centaurus Supercluster. References External links Unbarred spiral galaxies Seyfert galaxies Virgo (constellation) 4939 045170 Astronomical objects discovered in 1786 Discoveries by William Herschel -02-33-104 13016-1004
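The size estimate quoted at the top of this article follows from the small-angle relation, linear size ≈ distance × angular size. The apparent diameter used below (about 5.5 arcminutes) is an assumption for illustration, since the article quotes only the distance and the resulting linear size:

```python
import math

d_ly = 120e6                               # distance in light years (from the article)
theta_rad = (5.5 / 60) * math.pi / 180     # assumed 5.5 arcmin apparent diameter, in radians
print(f"{d_ly * theta_rad:,.0f} light years")   # ~192,000 ly, consistent with ~190,000
```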
NGC 4939
Astronomy
616
2,050,667
https://en.wikipedia.org/wiki/Quantum%20tomography
Quantum tomography or quantum state tomography is the process by which a quantum state is reconstructed using measurements on an ensemble of identical quantum states. The source of these states may be any device or system which prepares quantum states either consistently into quantum pure states or otherwise into general mixed states. To be able to uniquely identify the state, the measurements must be tomographically complete. That is, the measured operators must form an operator basis on the Hilbert space of the system, providing all the information about the state. Such a set of observations is sometimes called a quorum. The term tomography was first used in the quantum physics literature in a 1993 paper introducing experimental optical homodyne tomography. In quantum process tomography, on the other hand, known quantum states are used to probe a quantum process to find out how the process can be described. Similarly, quantum measurement tomography works to find out what measurement is being performed. Randomized benchmarking, by contrast, scalably obtains a figure of merit of the overlap between the error-prone physical quantum process and its ideal counterpart. The general principle behind quantum state tomography is that by repeatedly performing many different measurements on quantum systems described by identical density matrices, frequency counts can be used to infer probabilities, and these probabilities are combined with Born's rule to determine the density matrix which best fits the observations. This can be easily understood by making a classical analogy. Consider a harmonic oscillator (e.g. a pendulum). The position and momentum of the oscillator at any given point can be measured, and therefore the motion can be completely described by the phase space. This is shown in figure 1. By performing this measurement for a large number of identical oscillators we get a probability distribution in the phase space (figure 2). This distribution can be normalized (the oscillator at a given time has to be somewhere) and the distribution must be non-negative. So we have retrieved a function which gives a description of the chance of finding the particle at a given point with a given momentum. For quantum mechanical particles the same can be done. The only difference is that Heisenberg's uncertainty principle must not be violated, meaning that we cannot measure the particle's momentum and position at the same time. The particle's momentum and its position are called quadratures (see Optical phase space for more information) in quantum-related states. Measuring one of the quadratures of a large number of identical quantum states gives a probability density corresponding to that particular quadrature. This is called the marginal distribution (see figure 3). In the following text we will see that this probability density is needed to characterize the particle's quantum state, which is the whole point of quantum tomography. What quantum state tomography is used for Quantum tomography is applied to a source of systems, to determine the quantum state of the output of that source. Unlike a measurement on a single system, which determines the system's current state after the measurement (in general, the act of making a measurement alters the quantum state), quantum tomography works to determine the state(s) prior to the measurements.
Quantum tomography can be used for characterizing optical signals, including measuring the signal gain and loss of optical devices, as well as in quantum computing and quantum information theory to reliably determine the actual states of the qubits. One can imagine a situation in which a person Bob prepares many identical objects (particles or fields) in the same quantum states and then gives them to Alice to measure. Not confident in Bob's description of the state, Alice may wish to do quantum tomography to classify the state herself. Methods of quantum state tomography Linear inversion Using Born's rule, one can derive the simplest form of quantum tomography. Generally, whether the system is in a pure state is not known in advance, and the state may be mixed. In this case, many different types of measurements will have to be performed, many times each. To fully reconstruct the density matrix for a mixed state in a finite-dimensional Hilbert space, the following technique may be used. Born's rule states $p_i = \operatorname{tr}(E_i \rho)$, where $E_i$ is a particular measurement outcome projector and $\rho$ is the density matrix of the system. Given a histogram of observations for each measurement, one has an approximation $\hat{p}_i$ to $p_i$ for each $E_i$. Given linear operators $A$ and $B$, define the inner product $\langle\langle A | B \rangle\rangle = \operatorname{tr}(A^\dagger B)$, where $|B\rangle\rangle$ is the representation of the operator $B$ as a column vector and $\langle\langle A|$ a row vector, such that $\langle\langle A | B \rangle\rangle$ is the inner product in $\mathbb{C}^{d^2}$ of the two. Define the matrix $A$ as $A = \begin{pmatrix} \langle\langle E_1| \\ \langle\langle E_2| \\ \vdots \end{pmatrix}$. Here the $E_i$ are some fixed list of individual measurements (with binary outcomes), and $A$ does all the measurements at once. Then applying this to $|\rho\rangle\rangle$ yields the probabilities: $\vec{p} = A|\rho\rangle\rangle$. Linear inversion corresponds to inverting this system using the observed relative frequencies to derive $|\rho\rangle\rangle$ (which is isomorphic to $\rho$). This system is not going to be square in general, as for each measurement being made there will generally be multiple measurement outcome projectors $E_i$. For example, in a 2-D Hilbert space with 3 measurements $\sigma_x, \sigma_y, \sigma_z$, each measurement has 2 outcomes, each of which has a projector $E_i$, for 6 projectors, whereas the real dimension of the space of density matrices is $(2 \cdot 2^2)/2 = 4$, leaving $A$ to be 6 × 4. To solve the system, multiply on the left by $A^\dagger$: $A^\dagger \vec{p} = A^\dagger A |\rho\rangle\rangle$. Now solving for $|\rho\rangle\rangle$ yields the pseudoinverse: $|\rho\rangle\rangle = (A^\dagger A)^{-1} A^\dagger \vec{p}$. This works in general only if the measurement list $E_i$ is tomographically complete. Otherwise, the matrix $A^\dagger A$ will not be invertible.
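A numerical sketch of this linear-inversion recipe for a single qubit follows. The "observed" probabilities are simulated exactly from an assumed true state rather than from finite counts, so the pseudoinverse recovers the state exactly; with real histogram frequencies the result would only be approximate (and possibly unphysical, as discussed below).

```python
import numpy as np

# Pauli matrices; their eigenprojectors are the six measurement operators E_i.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

projectors = []
for P in (X, Y, Z):
    vals, vecs = np.linalg.eigh(P)
    for c in range(2):
        v = vecs[:, c:c + 1]
        projectors.append(v @ v.conj().T)   # 6 rank-1 projectors in total

# Assumed "true" state used to simulate the measured probabilities.
rho_true = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
p = np.array([np.trace(E @ rho_true).real for E in projectors])

# Row i of A is vec(E_i)^dagger, so that A @ vec(rho) = p.
A = np.array([E.conj().reshape(-1) for E in projectors])
rho_vec = np.linalg.pinv(A) @ p           # least-squares pseudoinverse solution
rho_est = rho_vec.reshape(2, 2)
print(np.round(rho_est, 6))               # recovers rho_true
```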
Continuous variables and quantum homodyne tomography In infinite dimensional Hilbert spaces, e.g. in measurements of continuous variables such as position, the methodology is somewhat more complex. One notable example is in the tomography of light, known as optical homodyne tomography. Using balanced homodyne measurements, one can derive the Wigner function and a density matrix for the state of the light. One approach involves measurements along different rotated directions in phase space. For each direction $\theta$, one can find a probability distribution $\operatorname{pr}(x, \theta)$, the probability density of measurements in the direction $\theta$ of phase space yielding the value $x$. Using an inverse Radon transformation (the filtered back projection) on $\operatorname{pr}(x, \theta)$ leads to the Wigner function, $W(x, p)$, which can be converted by an inverse Fourier transform into the density matrix for the state in any basis. A similar technique is often used in medical tomography. Example: single-qubit state tomography The density matrix of a single qubit can be expressed in terms of its Bloch vector $\vec{r}$ and the Pauli vector $\vec{\sigma}$: $\rho = \tfrac{1}{2}(I + \vec{r} \cdot \vec{\sigma})$. The single-qubit state tomography can be performed by means of single-qubit Pauli measurements: First, create a list of three quantum circuits, with the first one measuring the qubit in the computational basis (Z-basis), the second one performing a Hadamard gate before measurement (which makes the measurement in the X-basis), and the third one performing the appropriate phase shift gate (that is, $S^\dagger$) followed by a Hadamard gate before measurement (which makes the measurement in the Y-basis); Then, run these circuits (typically thousands of times); the counts in the measurement results of the first circuit produce $\langle Z \rangle$, the second circuit $\langle X \rangle$, and the third circuit $\langle Y \rangle$; Finally, if $|\vec{r}| \leq 1$ for $\vec{r} = (\langle X \rangle, \langle Y \rangle, \langle Z \rangle)$, then the measured Bloch vector is $\vec{r}$ itself and the measured density matrix is $\rho = \tfrac{1}{2}(I + \vec{r} \cdot \vec{\sigma})$; if $|\vec{r}| > 1$, it will be necessary to renormalize the measured Bloch vector as $\vec{r}/|\vec{r}|$ before using it to calculate the measured density matrix. This algorithm is the foundation for qubit tomography and is used in some quantum programming routines, like that of Qiskit.
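The recipe above reduces to a few lines of code. A minimal sketch, with made-up shot counts standing in for real circuit results:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_counts(nx, ny, nz):
    """Each argument is a (plus, minus) pair of outcome counts for the
    X-, Y- and Z-basis circuits; returns the measured density matrix."""
    r = np.array([(p - m) / (p + m) for (p, m) in (nx, ny, nz)])
    if np.linalg.norm(r) > 1:          # statistical noise pushed r outside
        r = r / np.linalg.norm(r)      # the Bloch sphere: renormalize
    return 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)

# Hypothetical counts from 1000 shots per circuit:
print(np.round(rho_from_counts((871, 129), (497, 503), (639, 361)), 3))
```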
Example: homodyne tomography. Electromagnetic field amplitudes (quadratures) can be measured with high efficiency using photodetectors together with temporal mode selectivity. Balanced homodyne tomography is a reliable technique for reconstructing quantum states in the optical domain. This technique combines the advantages of the high efficiencies of photodiodes in measuring the intensity or photon number of light, together with measuring the quantum features of light by a clever set-up called the homodyne tomography detector. Quantum homodyne tomography is understood by the following example. A laser is directed onto a 50-50% beamsplitter, splitting the laser beam into two beams. One is used as a local oscillator (LO) and the other is used to generate photons with a particular quantum state. The generation of quantum states can be realized, e.g. by directing the laser beam through a frequency doubling crystal and then onto a parametric down-conversion crystal. This crystal generates two photons in a certain quantum state. One of the photons is used as a trigger signal to trigger (start) the readout event of the homodyne tomography detector. The other photon is directed into the homodyne tomography detector, in order to reconstruct its quantum state. Since the trigger and signal photons are entangled (this is explained by the spontaneous parametric down-conversion article), it is important to realize that the optical mode of the signal state is created nonlocally only when the trigger photon impinges on the photodetector (of the trigger event readout module) and is actually measured. More simply put, only when the trigger photon is measured can the signal photon be measured by the homodyne detector. Now consider the homodyne tomography detector as depicted in figure 4. The signal photon (this is the quantum state we want to reconstruct) interferes with the local oscillator, when they are directed onto a 50-50% beamsplitter. Since the two beams originate from the same so-called master laser, they have the same fixed phase relation. The local oscillator must be intense, compared to the signal, so it provides a precise phase reference. The local oscillator is so intense that we can treat it classically ($\hat{a} \to \alpha$) and neglect the quantum fluctuations. The signal field is spatially and temporally controlled by the local oscillator, which has a controlled shape. Where the local oscillator is zero, the signal is rejected. Therefore, we have temporal-spatial mode selectivity of the signal. The beamsplitter redirects the two beams to two photodetectors. The photodetectors generate an electric current proportional to the photon number. The two detector currents are subtracted, and the resulting current is proportional to the electric field operator in the signal mode, depending on the relative optical phase of the signal and local oscillator. Since the electric field amplitude of the local oscillator is much higher than that of the signal, the intensity or fluctuations in the signal field can be seen. The homodyne tomography system functions as an amplifier. The system can be seen as an interferometer with such a high intensity reference beam (the local oscillator) that unbalancing the interference by a single photon in the signal is measurable. This amplification is well above the photodetectors' noise floor. The measurement is reproduced a large number of times. Then the phase difference between the signal and local oscillator is changed in order to ‘scan’ a different angle in the phase space. This can be seen from figure 4. The measurement is repeated again a large number of times, and a marginal distribution is retrieved from the current difference. The marginal distribution can be transformed into the density matrix and/or the Wigner function. Since the density matrix and the Wigner function give information about the quantum state of the photon, we have reconstructed the quantum state of the photon. The advantage of this balanced detection method is that this arrangement is insensitive to fluctuations in the intensity of the laser. The quantum computations for retrieving the quadrature component from the current difference are performed as follows. The photon number operator for the beams striking the photodetectors after the beamsplitter is given by $\hat{n}_i = \hat{a}_i^\dagger \hat{a}_i$, where $i$ is 1 and 2, for respectively beam one and two. The mode operators of the field emerging from the beamsplitter are given by $\hat{a}_{1,2} = \tfrac{1}{\sqrt{2}}(\hat{a} \pm \alpha)$, where $\hat{a}$ denotes the annihilation operator of the signal and $\alpha$ the complex amplitude of the local oscillator. The photon number difference is eventually proportional to the quadrature and given by $\hat{n}_1 - \hat{n}_2 = \alpha^* \hat{a} + \alpha \hat{a}^\dagger$. Rewriting this with the relation $\alpha = |\alpha| e^{i\theta}$ results in the following relation: $\hat{n}_1 - \hat{n}_2 = \sqrt{2}\,|\alpha|\,\hat{q}_\theta$, with $\hat{q}_\theta = \tfrac{1}{\sqrt{2}}\left(\hat{a} e^{-i\theta} + \hat{a}^\dagger e^{i\theta}\right)$, where we see a clear relation between the photon number difference and the quadrature component $\hat{q}_\theta$. By keeping track of the sum current, one can recover information about the local oscillator's intensity, since this is usually an unknown quantity, but an important quantity for calculating the quadrature component $\hat{q}_\theta$.
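For intuition, the marginal distribution $\operatorname{pr}(x, \theta)$ can be simulated directly for a coherent signal state, for which each quadrature is Gaussian with vacuum-limited variance of 1/2 in the convention above. The signal amplitude and LO phases below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_s = 2.0 * np.exp(1j * 0.8)       # assumed coherent signal amplitude

def sample_quadrature(theta, shots=10_000):
    """Gaussian quadrature samples of a coherent state: mean
    sqrt(2)|alpha_s|cos(phi - theta), variance 1/2 (vacuum noise)."""
    mean = np.sqrt(2) * np.abs(alpha_s) * np.cos(np.angle(alpha_s) - theta)
    return rng.normal(mean, np.sqrt(0.5), shots)

for theta in (0.0, np.pi / 4, np.pi / 2):   # 'scanning' the LO phase
    q = sample_quadrature(theta)            # histogram of q is pr(x, theta)
    print(f"theta={theta:.2f}  mean={q.mean():+.3f}  var={q.var():.3f}")
```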
Problems with linear inversion One of the primary problems with using linear inversion to solve for the density matrix is that in general the computed solution will not be a valid density matrix. For example, it could give negative probabilities or probabilities greater than 1 to certain measurement outcomes. This is particularly an issue when fewer measurements are made. Another issue is that in infinite dimensional Hilbert spaces, an infinite number of measurement outcomes would be required. Making assumptions about the structure and using a finite measurement basis leads to artifacts in the phase space density. Maximum likelihood estimation Maximum likelihood estimation (also known as MLE or MaxLik) is a popular technique for dealing with the problems of linear inversion. By restricting the domain of density matrices to the proper space, and searching for the density matrix which maximizes the likelihood of giving the experimental results, it guarantees the state to be theoretically valid while giving a close fit to the data. The likelihood of a state is the probability that would be assigned to the observed results had the system been in that state. Suppose the measurements $E_i$ have been observed with frequencies $f_i$. Then the likelihood associated with a state $\rho$ is $\mathcal{L}(\rho) = \prod_i p_i(\rho)^{f_i}$, where $p_i(\rho) = \operatorname{tr}(E_i \rho)$ is the probability of outcome $E_i$ for the state $\rho$. Finding the maximum of this function is non-trivial and generally involves iterative methods. The methods are an active topic of research.
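One common iterative scheme (sometimes called the $R\rho R$ iteration) repeatedly applies the operator $R(\rho) = \sum_i (f_i / p_i(\rho)) E_i$, which leaves the maximum-likelihood state fixed. The sketch below is illustrative: it assumes the six Pauli eigenprojectors rescaled so the frequencies form a single normalized set, with exact frequencies simulated from an assumed true state.

```python
import numpy as np

def mle_rho(projectors, freqs, iters=500):
    """Iterative maximum-likelihood reconstruction via rho <- R rho R / tr,
    with R = sum_i (f_i / p_i) E_i; keeps rho positive and unit-trace."""
    d = projectors[0].shape[0]
    rho = np.eye(d, dtype=complex) / d          # maximally mixed start
    for _ in range(iters):
        probs = np.array([np.trace(E @ rho).real for E in projectors])
        R = sum(f / p * E for f, p, E in zip(freqs, probs, projectors))
        rho = R @ rho @ R
        rho /= np.trace(rho).real
    return rho

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
rho_true = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)  # assumed state
Es, fs = [], []
for P in (X, Y, Z):
    vals, vecs = np.linalg.eigh(P)
    for c in range(2):
        v = vecs[:, c:c + 1]
        E = v @ v.conj().T
        Es.append(E)
        fs.append(np.trace(E @ rho_true).real / 3)  # 3 settings: fs sums to 1
print(np.round(mle_rho(Es, fs), 3))                 # converges to rho_true
```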
Problems with maximum likelihood estimation Maximum likelihood estimation suffers from some less obvious problems than linear inversion. One problem is that it makes predictions about probabilities that cannot be justified by the data. This is seen most easily by looking at the problem of zero eigenvalues. The computed solution using MLE often contains eigenvalues which are 0, i.e. it is rank deficient. In these cases, the solution then lies on the boundary of the n-dimensional Bloch sphere. This can be seen as related to linear inversion giving states which lie outside the valid space (the Bloch sphere). MLE in these cases picks a nearby point that is valid, and the nearest points are generally on the boundary. This is not physically a problem; the real state might have zero eigenvalues. However, since no value may be less than 0, an estimate of an eigenvalue being 0 implies that the estimator is certain the value is 0; otherwise it would have estimated some eigenvalue greater than 0, with a small degree of uncertainty, as the best estimate. This is where the problem arises, in that it is not logical to conclude with absolute certainty after a finite number of measurements that any eigenvalue (that is, the probability of a particular outcome) is 0. For example, if a coin is flipped 5 times and each time heads was observed, it does not mean there is 0 probability of getting tails, despite that being the most likely description of the coin. Bayesian methods Bayesian mean estimation (BME) is a relatively new approach which addresses the problems of maximum likelihood estimation. It focuses on finding optimal solutions which are also honest in that they include error bars in the estimate. The general idea is to start with a likelihood function and a function describing the experimenter's prior knowledge (which might be a constant function), then integrate over all density matrices using the product of the likelihood function and prior knowledge function as a weight. Given a reasonable prior knowledge function, BME will yield a state strictly within the n-dimensional Bloch sphere. In the case of a coin flipped N times to get N heads described above, with a constant prior knowledge function, BME would assign a probability of $1/(N+2)$ for tails. BME provides a high degree of accuracy in that it minimizes the operational divergences of the estimate from the actual state. Methods for incomplete data The number of measurements needed for a full quantum state tomography for a multi-particle system scales exponentially with the number of particles, which makes such a procedure impossible even for modest system sizes. Hence, several methods have been developed to realize quantum tomography with fewer measurements. The concepts of matrix completion and compressed sensing have been applied to reconstruct density matrices from an incomplete set of measurements (that is, a set of measurements which is not a quorum). In general, this is impossible, but under assumptions (for example, if the density matrix is a pure state, or a combination of just a few pure states) then the density matrix has fewer degrees of freedom, and it may be possible to reconstruct the state from the incomplete measurements. Permutationally Invariant Quantum Tomography is a procedure that has been developed mostly for states that are close to being permutationally symmetric, which is typical in present-day experiments. For two-state particles, the number of measurements needed scales only quadratically with the number of particles. Besides the modest measurement effort, the processing of the measured data can also be done efficiently: it is possible to carry out the fitting of a physical density matrix on the measured data even for large systems. Permutationally Invariant Quantum Tomography has been combined with compressed sensing in a six-qubit photonic experiment. Quantum measurement tomography One can imagine a situation in which an apparatus performs some measurement on quantum systems, and one wishes to determine what particular measurement is being performed. The strategy is to send in systems of various known states, and use these states to estimate the outcomes of the unknown measurement. Also known as "quantum estimation", tomography techniques are increasingly important, including those for quantum measurement tomography and the very similar quantum state tomography. Since a measurement can always be characterized by a set of POVMs, the goal is to reconstruct the characterizing POVM elements $E_i$. The simplest approach is linear inversion. As in quantum state tomography, use $p_i = \operatorname{tr}(E_i \rho)$ for known input states $\rho$. Exploiting linearity as above, this can be inverted to solve for the $E_i$. Not surprisingly, this suffers from the same pitfalls as in quantum state tomography: namely, non-physical results, in particular negative probabilities. Here the reconstructed $E_i$ will not be valid POVM elements, as they will not be positive. Bayesian methods as well as maximum likelihood estimation of the density matrix can be used to restrict the operators to valid physical results. Quantum process tomography Quantum process tomography (QPT) deals with identifying an unknown quantum dynamical process. The first approach, introduced in 1996 and sometimes known as standard quantum process tomography (SQPT), involves preparing an ensemble of quantum states and sending them through the process, then using quantum state tomography to identify the resultant states. Other techniques include ancilla-assisted process tomography (AAPT) and entanglement-assisted process tomography (EAPT), which require an extra copy of the system. Each of the techniques listed above is known as an indirect method for characterization of quantum dynamics, since it requires the use of quantum state tomography to reconstruct the process. In contrast, there are direct methods such as direct characterization of quantum dynamics (DCQD) which provide a full characterization of quantum systems without any state tomography. The number of experimental configurations (state preparations and measurements) required for full quantum process tomography grows exponentially with the number of constituent particles of a system. Consequently, in general, QPT is an impossible task for large-scale quantum systems.
However, under a weak decoherence assumption, a quantum dynamical map can have a sparse representation. The method of compressed quantum process tomography (CQPT) uses the compressed sensing technique and applies the sparsity assumption to reconstruct a quantum dynamical map from an incomplete set of measurements or test state preparations. Quantum dynamical maps A quantum process, also known as a quantum dynamical map, $\mathcal{E}$, can be described by a completely positive map $\mathcal{E}(\rho) = \sum_i A_i \rho A_i^\dagger$, where $A_i \in B(\mathcal{H})$, the bounded operators on the Hilbert space $\mathcal{H}$, with operation elements $A_i$ satisfying $\sum_i A_i^\dagger A_i = I$ so that $\operatorname{tr}(\mathcal{E}(\rho)) = 1$. Let $\{\tilde{E}_m\}$ be an orthogonal basis for $B(\mathcal{H})$. Write the operators $A_i$ in this basis, $A_i = \sum_m a_{im} \tilde{E}_m$. This leads to $\mathcal{E}(\rho) = \sum_{mn} \chi_{mn} \tilde{E}_m \rho \tilde{E}_n^\dagger$, where $\chi_{mn} = \sum_i a_{im} a_{in}^*$. The goal is then to solve for $\chi$, which is a positive superoperator and completely characterizes $\mathcal{E}$ with respect to the $\{\tilde{E}_m\}$ basis. Standard quantum process tomography SQPT approaches this using $d^2$ linearly independent inputs $\rho_j$, where $d$ is the dimension of the Hilbert space $\mathcal{H}$. For each of these input states $\rho_j$, sending it through the process gives an output state $\mathcal{E}(\rho_j)$ which can be written as a linear combination of the $\rho_k$, i.e. $\mathcal{E}(\rho_j) = \sum_k \lambda_{jk} \rho_k$. By sending each $\rho_j$ through many times, quantum state tomography can be used to determine the coefficients $\lambda_{jk}$ experimentally. Write $\tilde{E}_m \rho_j \tilde{E}_n^\dagger = \sum_k \beta_{jk}^{mn} \rho_k$, where $\beta$ is a matrix of coefficients. Then $\sum_k \big( \sum_{mn} \chi_{mn} \beta_{jk}^{mn} \big) \rho_k = \sum_k \lambda_{jk} \rho_k$. Since the $\rho_k$ form a linearly independent basis, $\sum_{mn} \chi_{mn} \beta_{jk}^{mn} = \lambda_{jk}$. Inverting $\beta$ gives $\chi$: $\chi_{mn} = \sum_{jk} \big(\beta^{-1}\big)_{jk}^{mn} \lambda_{jk}$. See also Quantum discord Quantum process References Quantum mechanics Tomography
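As a worked illustration of the standard quantum process tomography algebra described above, the following sketch reconstructs the $\chi$ matrix of a single-qubit bit-flip channel, using the matrix-unit input basis and the Pauli operator basis. The channel and basis choices are illustrative assumptions; in an experiment the $\lambda$ coefficients would come from state tomography of the outputs.

```python
import numpy as np

# Operator basis E~_m: the Paulis {I, X, Y, Z}.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

# Input basis rho_j = |a><b| (reachable as linear combinations of states).
pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
basis = []
for a, b in pairs:
    M = np.zeros((2, 2), dtype=complex)
    M[a, b] = 1.0
    basis.append(M)

def coeffs(M):
    """Expansion coefficients of M in the |a><b| basis (its entries)."""
    return np.array([M[a, b] for (a, b) in pairs])

# The (hypothetical) unknown process: a bit-flip channel with p = 0.2.
p = 0.2
def channel(rho):
    return (1 - p) * rho + p * (X @ rho @ X)

# lambda_{jk}: expansion of E(rho_j) in the input basis.
lam = np.array([coeffs(channel(r)) for r in basis])       # shape (4, 4)

# beta_{jk}^{mn}: expansion of E~_m rho_j E~_n^dagger.
beta = np.zeros((16, 16), dtype=complex)                  # rows jk, cols mn
for j, r in enumerate(basis):
    for m, Em in enumerate(paulis):
        for n, En in enumerate(paulis):
            c = coeffs(Em @ r @ En.conj().T)
            for k in range(4):
                beta[4 * j + k, 4 * m + n] = c[k]

# Solve sum_{mn} beta_{jk}^{mn} chi_{mn} = lambda_{jk} for chi.
chi = (np.linalg.pinv(beta) @ lam.reshape(16)).reshape(4, 4)
print(np.round(chi.real, 3))   # expect diag(1 - p, p, 0, 0) for the bit flip
```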
Quantum tomography
Physics
4,340
60,586,231
https://en.wikipedia.org/wiki/Zobellia%20galactanivorans
Zobellia galactanivorans is a gram-negative marine bacterium isolated from the surface of red algae off the coast of France. Z. galactanivorans forms yellow colonies with a bacillus or diplobacillus morphology. Furthermore, it is mesophilic and can grow on and degrade carrageenans and agars - both found in the cell wall of red algae. Z. galactanivorans contains the genes porA and porB, each encoding a β-porphyranase. The β-porphyranases PorA and PorB are catalytic enzymes that hydrolyze the β-D-galactopyranose (1→4) α-L-galactopyranose-6-sulfate linkage in porphyran. There is a 35% sequence similarity between β-porphyranase B and β-porphyranase A. Horizontal gene transfer Orthologs between Z. galactanivorans and Bacteroides plebeius-1698, a strain of Bacteroides plebeius, share a sequence similarity of 48%-69%. Homologous genes between other Bacteroides species have only a 30% sequence similarity. Moreover, porphyranase genes in both Z. galactanivorans and B. plebeius are located in a similar order along their chromosomes; that is, they are syntenic. References External links Type strain of Zobellia galactanivorans at BacDive - the Bacterial Diversity Metadatabase Flavobacteria Gut flora bacteria Bacteria described in 2005
Zobellia galactanivorans
Biology
334
9,055,009
https://en.wikipedia.org/wiki/Carbamino
Carbamino refers to an adduct generated by the addition of carbon dioxide to the free amino group of an amino acid or a protein, such as hemoglobin forming carbaminohemoglobin. Determining the quantity of carbamino in products It is possible to determine how much carbamino is formed through the techniques of electron ionization and mass spectrometry. In determining the amount of product by mass spectrometry, a careful set of instructions is followed which allows the carbamino adducts to be transferred to a vacuum for mass spectrometry. During the separation of the carbamino adducts in the ion sampling process, care must be taken that the pH does not change. Hence, mass spectrometry and electron ionization are a way to measure how much carbamino adduct there is relative to the concentration of peptide in a solution. Formation of sugar-carbamino The sugar-carbamino is formed through a C-glycosidic linkage with the amino acid side chain via various linkers. The synthesis involves introducing annulation to appropriate amino acid residues to rigidify glycopeptides, followed by Diels-Alder cycloadditions to fuse cyclic α- and β-amino acids to the sugar moiety. This also involves the preparation of fused bicyclic C-glycosyl α-amino acid 4, which is confirmed through 2D NMR experiments, particularly NOESY. The approach to conformationally constrained (annulated) C-glycosyl α- and β-amino acids is based upon the Diels-Alder reaction of pyranose dienes with α- and β-nitro acrylic esters. Carbamino compounds in blood The concentration of carbamate (HbCO2) was estimated in oxygenated and deoxygenated red blood cells of adult and fetal humans. The estimation was carried out at a constant pressure of carbon dioxide (PCO2 = 40 mm Hg) and varied pH levels of the serum. The bicarbonate concentration in the red cells was calculated using the Donnan ratio for chloride and bicarbonate ions. Based on this figure, the carbamate concentration was determined by subtracting the bicarbonate concentration and dissolved CO2 from the total CO2 concentration. Deoxygenated fetal red cells contain more HbCO2 than deoxygenated adult red cells at a given pH value in the red cell. Upon oxygenation, HbCO2 decreased in both types of erythrocytes to values lower than in deoxygenated cells, at a constant pH. The fraction of 'oxylabile carbamate' (-ΔHbCO2/ΔHbO2) at a red cell pH of 7.2 and a PCO2 of 40 mm Hg is 0.117 in fetal and 0.081 in adult erythrocytes. The apparent carbamate equilibrium constants (K'c and K'z) were calculated from the fraction of moles of carbamate formed per Hb monomer (moles CO2/mole Hbi). These constants can be used to estimate the carbamate concentration in normal adult and fetal blood. In adult red cells, the first apparent dissociation constant of carbonic acid is significantly higher in oxygenated (-log10K'1 = pK'1 = 6.10) than in deoxygenated (pK'1 = 6.12) red cells, whereas in fetal red cells the difference is smaller and statistically not significant. Using the present results, the fractional contribution of carbamino compounds of hemoglobin to the amount of carbon dioxide exchanged during the respiratory cycle was computed for a given set of physiological conditions in arterial and mixed venous blood. The computed value was found to be 10.5% in adult and 19% in fetal blood. See also Amine gas treating Ionic liquids in carbon capture References Functional groups Chemical reactions
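The carbamate estimation described above is a simple subtraction once the bicarbonate term has been scaled by the Donnan ratio. A minimal sketch of that arithmetic follows; all numerical values in the example call are illustrative placeholders, not measurements from the study.

```python
def carbamate_mmol_per_l(total_co2, plasma_bicarb, donnan_ratio, dissolved_co2):
    """Carbamate (HbCO2) as described above: total red-cell CO2 minus
    dissolved CO2 and minus bicarbonate, the latter obtained by scaling
    plasma bicarbonate with the Donnan ratio for Cl-/HCO3-.
    All concentrations in mmol/L."""
    rbc_bicarb = donnan_ratio * plasma_bicarb
    return total_co2 - dissolved_co2 - rbc_bicarb

# Hypothetical inputs, for illustration only:
print(carbamate_mmol_per_l(total_co2=16.0, plasma_bicarb=24.0,
                           donnan_ratio=0.6, dissolved_co2=1.2))  # 0.4 mmol/L
```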
Carbamino
Chemistry
852
8,912,313
https://en.wikipedia.org/wiki/Flower%20mantis
Flower mantises are praying mantises that use a special form of camouflage referred to as aggressive mimicry, which they use not only to attract prey but also to avoid predators. These insects have specific colorations and behaviors that mimic flowers in their surrounding habitats. This strategy has been observed in other mantises, including the stick mantis and dead-leaf mantis. The observed behavior of these mantises includes positioning themselves on a plant, either within the inflorescence or on the foliage, and waiting until a prey insect comes within range. Many species of flower mantises are popular as pets. The flower mantises are a diurnal group with a single ancestry (a clade), but the majority of the known species belong to the family Hymenopodidae. Example species: Orchid mantis The orchid mantis, Hymenopus coronatus of southeast Asia, mimics orchid flowers. There is no evidence to suggest that they mimic a specific orchid, but their bodies are often white with pink markings and green eyes. These insects display different body morphologies depending on their life stage; juveniles are able to bend their abdomens upwards, allowing them to easily resemble a flower. However, the adult's wings are too large, inhibiting their ability to bend as the juveniles do. This dichotomy suggests that there must be other processes involved to attract insect prey species. Since Hymenopus coronatus does not mimic one orchid in particular, its coloration often does not match the coloration of a single orchid species. Antipredator behaviour One mechanism displayed by the orchid mantis to attract prey is the ability to absorb UV light the same way that flowers do. This makes the mantis appear flower-like to UV-sensitive insects, which are often pollinators. To an insect, the mantis and the surrounding flowers appear blue; this contrasts against the foliage in the background, which appears red. In his 1940 book Adaptive Coloration in Animals, Hugh Cott quotes an account by Nelson Annandale, saying that the mantis hunts on the flowers of the "Straits Rhododendron", Melastoma polyanthum. The nymph has what Cott calls "special alluring coloration" (aggressive mimicry), where the animal itself is the "decoy". The insect is pink and white, with flattened limbs with "that semiopalescent, semicrystalline appearance that is caused in flower petals by a purely structural arrangement of liquid globules or empty cells". The mantis climbs up the twigs of the plant, stands imitating a flower, and waits for its prey patiently. It then sways from side to side, and soon small flies land on and around it, attracted by the small black spot on the end of its abdomen, which resembles a fly. When a larger dipteran fly, as big as a house fly, landed nearby, the mantis at once seized and ate it. More recently (2015), the orchid mantis's coloration has been shown to mimic tropical flowers effectively, attracting pollinators and catching them. Juvenile mantises secrete a mixture of the chemicals 3HOA and 10HDA, attracting their top prey species, the oriental honey bee. This method of deception is aggressive chemical mimicry, imitating the chemical composition of the bee's pheromones. The chemicals are stored in the mandibles and released when H. coronatus is hunting. Adult mantises do not produce these chemicals. Taxonomic range The flower mantises include species from several genera, many of which are popularly kept as pets.
Seven of the genera are in the Hymenopodidae. See also List of mantis genera and species References Further reading Wickler, Wolfgang (1968). Mimicry in plants and animals. McGraw-Hill, New York. Mantodea Mimicry Insect common names
Flower mantis
Biology
804
70,327,630
https://en.wikipedia.org/wiki/Great%20Mill%20Disaster
The Great Mill Disaster, also known as the Washburn A Mill explosion, occurred on May 2, 1878, in Minneapolis, Minnesota, United States. The disaster resulted in 18 deaths. The explosion occurred on a Thursday evening when an accumulation of flour dust inside the Washburn A Mill, the largest mill in the world at the time, led to a dust explosion that killed the fourteen workers inside the mill. The resulting fire destroyed several nearby mills and killed a further four millworkers. The destruction seriously impacted the city's productive capacity for flour, which was a major industry in the city. Following the blast, Cadwallader C. Washburn, the mill's owner, had a new mill, designed by William de la Barre, constructed on the site of the old one. This building was also later destroyed, and today the building's ruins are a National Historic Landmark and operated as part of the Mill City Museum. Background In 1874, businessman Cadwallader C. Washburn of La Crosse, Wisconsin, opened the Washburn A Mill in Minneapolis. At the time of its opening, it was the largest industrial building in the city and the largest flour mill in the world. With about 200 employees in 1878, it was also one of the city's largest employers. The mill was located adjacent to several other flour mills along the Mississippi River near the Saint Anthony Falls, where it derived its power from a canal that flowed through the building's lower level. At this time, Minneapolis was a hub of flour production in the United States, having recently surpassed other cities such as St. Louis and Buffalo, New York, in terms of flour productive capacity, and the city was popularly referred to as the Flour City. Explosion At about 6 p.m. on May 2, 1878, the mill's large day shift staff had completed their work for the day and the fourteen-man night shift staff had arrived. At around 7 p.m., three large explosions occurred within several seconds of each other inside the mill, killing the fourteen employees inside. The explosions launched debris several hundred feet into the air, with some large granite debris found eight city blocks from the mill. The sound of the explosion was heard as far away as Saint Paul, while some people in Minneapolis who had felt the blast thought that it had been an earthquake. The explosion spawned a massive fire that spread to two adjacent mills, the Diamond and Humboldt mills, causing both of them to also explode and killing another four millworkers, including mill owner Jack Reisman. The intensity of the heat from the blaze hindered firefighting activities, as firefighters could not get close to the buildings, and as a result they continued to fight the fire through the night. The following day, the Minneapolis Tribune reported on the disaster, saying, "Minneapolis has met with a calamity, the suddenness and horror of which it is difficult for the mind to comprehend". In total, six mills were destroyed. Aftermath As part of an investigation into the cause of the disaster, mill manager John A. Christian stated that it had been a dust explosion caused by flour dust in the building. Two professors from the University of Minnesota, S. F. Peckham and Louis W. Peck, later confirmed that abundant flour dust had been the cause of the explosion after reviewing controlled experiments regarding flour dust combustion. They concluded that two dry millstones had rubbed against each other and caused a spark that ignited the dust, causing the explosion.
Following the event, there were concerns about the effect it would have on the city's milling industry, as the disaster had destroyed roughly one-third to one-half of the city's flour productive capacity. Shortly after the explosion, Washburn, who had traveled to Minneapolis upon hearing of the incident, announced his intention to rebuild the mill, with technological improvements that would make it safer and increase its productive capacity. Washburn hired Austrian engineer William de la Barre to design the new building, which he based on a mill in Budapest. De la Barre also installed dust collectors and improved ventilation systems. This new building was completed in 1880 on the site of the former building. The reopening coincided with an economic boom for the city, and flour production steadily increased until it peaked during World War I, after which there was a steady industry decline. The new mill (later known as the Gold Medal Flour mill) was affected by a fire in 1928, but following repairs it continued to operate until 1965. The building was later abandoned and finally destroyed in a fire in 1991. In 2003, the building's ruins were converted into the Mill City Museum, a history museum that focuses on the milling history of the city. Today, the ruins are listed on the National Register of Historic Places as a National Historic Landmark. The MNopedia entry for the disaster states, "It was the worst disaster of its type in the city's history, prompting major safety upgrades in future mill developments". According to General Mills (the eventual successor company of the mill), the disaster prompted Washburn to take an interest in the welfare of the children of the millworkers who had been affected, leading to the creation of the Washburn Memorial Orphan Asylum. Its successor organization, the Washburn Center for Children, continues to operate as a child and family services organization in the Twin Cities area. Memorials On the site of the destroyed mill, a stone memorial marker that lists the names of the 14 workers who died at the previous factory was erected as part of a stone portal. The memorial also includes a brief history of the disaster. Today, it is located near the Stone Arch Bridge. In the city's Lakewood Cemetery, a memorial dedicated to the 18 people who died in the disaster was erected in 1885. The memorial includes a plaque that lists the names of the deceased, while the base of the memorial depicts a sheaf of wheat, a broken gear, and a millstone. See also Tradeston Flour Mills explosion – A similar dust explosion at a flour mill in Glasgow in 1872 List of industrial disasters List of industrial disasters by death toll Notes References Further reading 1878 disasters in the United States 1878 in Minnesota 1878 industrial disasters Disasters in Minnesota Dust explosions Events in Minneapolis Explosions in 1878 Fires in Minnesota Food processing disasters History of Minneapolis Industrial fires and explosions in the United States Occupational safety and health
Great Mill Disaster
Chemistry
1,285
30,578,565
https://en.wikipedia.org/wiki/Zoological%20specimen
A zoological specimen is an animal or part of an animal preserved for scientific use. Various uses are: to verify the identity of a species, to allow study, and to increase public knowledge of zoology. Zoological specimens are extremely diverse. Examples are bird and mammal study skins, mounted specimens, skeletal material, casts, pinned insects, dried material, animals preserved in liquid preservatives, and microscope slides. Natural history museums are repositories of zoological specimens. Study skins Bird and mammal specimens are conserved as dry study skins, a form of taxidermy. The skin is removed from the animal's carcass, treated with absorbents, and filled with cotton or polyester batting (in the past, plant fibres or sawdust were used). Bird specimens have a long, thin, wooden dowel wrapped in batting at their center. The dowel is often intentionally longer than the bird's body and exits at the animal's vent. This exposed dowel provides a place to handle the bird without disturbing the feathers. Mammal study skins do not normally utilize wooden dowels; instead, preparators use wire to support the legs and tail of mammals. Labels are attached to a leg of the specimen with thread or string. Heat and chemicals are sometimes used to aid the drying of study skins. Skeletal Preparations (Osteology) Osteological collections consist of cleaned, complete and partial skeletons and crania of vertebrates, mainly birds and mammals. They are used in studies of comparative anatomy and to identify bones from archaeological sites. Human bones are used in medical and forensic studies. Molluscs In museum collections it is common for the dry material to greatly exceed the amount of material that is preserved in alcohol. The shells minus their soft parts are kept in card trays within drawers or in glass tubes, often as lots (a lot is a collection of a single species taken from a single locality on a single occasion). Shell collections sometimes suffer from Byne's disease, which also affects birds' eggs. The study of dry mollusc shells is called conchology, as distinct from malacology (wet specimens). Insects and similar invertebrates Most hard-bodied insect specimens and some other hard-bodied invertebrates, such as certain Arachnida, are preserved as pinned specimens. Either while still fresh, or after rehydrating them if necessary because they had dried out, specimens are transfixed by special stainless steel entomological pins. As the insect dries, the internal tissues solidify and, possibly aided to some extent by the integument, they grip the pin and secure the specimen in place on the pin. Very small, delicate specimens may instead be secured by fine steel points driven into slips of card, or glued to card points or similar attachments that in turn are pinned in the same way as entire mounted insects. The pins offer a means of handling the specimens without damage, and they also bear labels for descriptive and reference data. Once dried, the specimens may be kept in conveniently sized open trays. The bottoms of the trays are lined with a material suited to receiving and holding entomological pins securely and conveniently. Cork and foam plastics are convenient examples. However, open trays are very vulnerable to attack by museum beetles and similar pests, so such open trays are stored in turn inside glass-topped, insect-proof drawers, commonly protected by suitable pesticides, repellents or barriers.
Alternatively, some museums store the pinned specimens directly in larger trays or drawers that are glass-topped and stored in cabinets. In contrast to such dried specimens, soft-bodied specimens are most commonly kept in "wet collections", meaning that they are stored in alcohol or similar preservative or fixative liquids, according to the intended function. Small specimens, whether hard or soft bodied, and whether entire, dissected, or sectioned, may be stored as microscope slide preparations. Wet specimens "Wet" specimen collections are stored in different solutions. A very old method is to store the specimen in 70% ethanol with various additives after fixing with formalin or, these days, sometimes with a salt solution. Some methods are very useful because the color can be preserved. Such salt solutions include Jores, Kaiserling and Romhányi. Modern specimens are stored in borosilicate glass due to its chemical and thermal resistance and good optical clarity. Data The minimum data associated with zoological specimens are the place and date of collection, attached to the specimen by a label. Additional information is the name of the collector and the habitat. Tissue from specimens may be saved for genetic studies (molecular data, DNA). Depending on the animal group, other data may be included; for instance, in bird collections, the bird’s breeding condition, weight, colours of its eyes, bill and legs, and the nature of the stomach contents. Composite specimens A single specimen may be a composite of preparations sharing a unique number. An example would be a vertebrate with an alcohol-preserved skin and viscera, a cleared and stained head, the post-cranial dried skeleton, histological glass slides of various organs, and frozen tissue samples. This specimen could also be a voucher for a publication, or for photographs and audiotape. Voucher specimens A voucher is a representative specimen of the animal used in a study, such as a specimen collected as part of an ecological survey or a specimen which was the source of DNA for a molecular study. Voucher specimens confirm the identity of the species referred to in the study. They are a backup against misidentification and against changing species concepts, which can mislead results. Type specimens are a special type of voucher specimen used in taxonomy. Historic specimens Museum zoological specimens may have historic significance. For example, the specimens collected by Johann Baptist von Spix and Carl Friedrich Philipp von Martius during their Brazil Expedition (1817–1820) are housed in the Munich Zoology Museum. Models Museums make extensive use of models. When these are accurate they are considered to be specimens in their own right. Examples are the glass invertebrates of Leopold and Rudolf Blaschka. See also Biological specimen Bird collections Cryopreservation Insect collecting Laboratory specimen Seed bank Type specimen Further reading Hall, E. R. 1962. Collecting and preparing study specimens of vertebrates. University of Kansas Museum of Natural History Miscellaneous Publications no. 30. 46 pp. Hangay, G., and M. Dingley. 1985. Biological museum methods. Volume I. Vertebrates. Academic Press, Sydney, Australia Howie, F. M. P. 1985. Conserving Natural History Collections: Some Present Problems and Strategies for the Future. Proceedings of the 1985 Workshop on Care and Maintenance of Natural History Collections:1-6. Kageyama, M., R. Monk, R. Bradley, G. Edson, and R. Baker. 2006. The changing significance and definition of the biological voucher.
In S. Williams and C. Hawks (eds.) Museum Studies: Perspectives and Innovations. Society for the Preservation of Natural History Collections, Washington, D.C., 259-266. McAlpine, Donald F. 1985. Curators and Natural History Collections: Have We Become Islands in Science?. Proceedings of the 1985 Workshop on Care and Maintenance of Natural History Collections:7-14. Suarez, Andrew V. and Neil D. Tsutsui. 2004. The Value of Museum Collections for Research and Society. BioScience 54(1):66-74. References External links Naturkundemuseum Stuttgart Zoological Collection Database SZN Impressive. Images of wet specimens,labels,catalogues etc. Biological Survey of Canada The role of voucher specimens in validating faunistic and ecological research Museum handbook UBC Bird skin preparation Texas Tech University Halter, A.S. Standards for management of the recent mammal and bird collections Texas Tech University Natural history collections of the University of Edinburgh Wet Specimen collection of the National Museum of Australia See also List of natural history dealers Zoological nomenclature Zoology
Zoological specimen
Biology
1,634
55,637,660
https://en.wikipedia.org/wiki/HD%20111456
HD 111456 is a yellow-white hued star in the northern circumpolar constellation of Ursa Major. It is dimly visible to the naked eye, having an apparent visual magnitude of 5.85. Based upon an annual parallax shift of as seen from Earth, it is located about from the Sun. The star is moving closer to the Sun with a radial velocity of . HD 111456 is a nucleus cluster member of the Ursa Major Moving Group, a set of stars that are moving through space with a similar heading and velocity. Six other stars in the nucleus of the group are prominent members of the Big Dipper asterism. The stellar classification for this star is F7 V, indicating that it is an ordinary F-type main-sequence star. It is young, around 300−400 million years of age, and is spinning with a relatively high projected rotational velocity of 41.5 km/s. This is one of the most active F-type stars known, and it is a strong emitter of X-rays and an extreme UV source. It is an astrometric binary with a period of four years and a mass ratio of 0.5. Hence, the companion may be a young white dwarf star. References F-type main-sequence stars Astrometric binaries Ursa Major moving group Ursa Major Durchmusterung objects 111456 062512 4867
HD 111456
Astronomy
290
12,355,383
https://en.wikipedia.org/wiki/Pritchardia%20woodfordiana
Pritchardia woodfordiana is a species of flowering plant in the family Arecaceae. It is found only in the Solomon Islands. It may be a form of Pritchardia pacifica. References Undescribed plant species woodfordiana Data deficient plants Endemic flora of the Solomon Islands (archipelago) Taxonomy articles created by Polbot
Pritchardia woodfordiana
Biology
73
31,228,061
https://en.wikipedia.org/wiki/VR%20mode
VR mode or Video Recording mode is a feature on stand-alone consumer and computer DVD recorders that allows video recording and editing on a DVD rewritable disc. In VR mode, users can create and rename titles for the scenes. Also, if a scene is deleted, the space it occupied can be reused later without reformatting the disc. If the user wishes to record on the same disc again at a later time, the disc can be ejected without being finalized; the recorder finalizes it only when manually instructed to do so. For the sake of comparison, any DVD recorded in VR's competitor V mode (or Video mode) is automatically finalized before it is ejected by the recorder. Disc finalization is still required if a disc formatted for VR mode is to be played in another DVD player. Currently, users can only record in VR mode on DVD-RW, DVD-RAM and DVD+RW discs (the format was updated in 2000 to accommodate DVD-R (General)), and on some recorders also on hard-disk drives; DVD players marked "RW compatible" and "DVD Multi" can play DVD-VR recorded discs. Blu-ray Disc and HD DVD also support VR mode-like features. DVD-VR & DVD+VR There are two quite different application formats commonly known as VR mode: 1) DVD-VR, established by the DVD Forum and found on DVD-RW and DVD-RAM discs; and 2) DVD+VR, the creation and responsibility of Philips Electronics, seen on their DVD+RW recorders. DVD-VR The DVD-VR recording mode offers advanced editing (including non-linear editing (NLE)) but is not compatible with DVD-Video. Recorders do not edit the video data stream directly. Editing is achieved by creating a 'playlist' which references segments of the recorded video data stream, addressing the stream either by its chapters or directly by time reference. Recorders generally employ one method or the other, but seldom both. DVD-VR can also be used with DVD+RW media, but recorders seldom do so. DVD+VR The DVD+VR recording mode (aka +VR functionality) is compatible with DVD-Video (normal DVD-Video players), but offers only basic editing such as partial overwriting, title dividing, chapter marker placement, replacement of the menu screens, etc. This can be accomplished easily on DVD+R media. DVD+VR can theoretically be used with DVD-RW media, but partial overwriting and replacement of menus cannot be so easily accomplished due to limitations of the media. In order to achieve this, it would be necessary for the recorder to read and store the entire contents of the disc, erase the disc and then rewrite it. For this reason alone, DVD+VR is seldom used with DVD-RW (or DVD-R) media. See also DVD Blu-ray Disc HD DVD DVD+VR DVD-VR References DVD Optical discs Optical disc authoring
VR mode
Technology
632
65,226,978
https://en.wikipedia.org/wiki/Slavic%20creation%20myth
The Slavic creation myth is a cosmogonic myth in Slavic mythology that explains how the world was created, who created it, and what principles guide it. This myth, in its Christianized form, survived into the nineteenth and twentieth centuries in various parts of the Slavic world, in chronicles or folklore. In Slavic mythology there are three versions of this myth: the first version is the so-called earth-diver myth, which intertwines two main motifs: the dualistic motif – the cooperation of God and the Devil (that is, the "good god" and the "bad god") is required to create the world – and the oceanic motif – the pre-existing waters, from which the seed of the Earth comes; the second version speaks about the origin of the universe and the world from the Cosmic Egg and the World Tree; the third about creation from the dismemberment of a primordial being. Creation of the world Creation by diving The myth that has been preserved from Poland comes from the Sieradz Land and was written down in 1898: In the Russian and Ukrainian variants, the Devil retains some of the created sand under his tongue, and when the Earth begins to grow, the sand bursts out of his mouth. This myth was written down in 1859 by the Russian Slavist Alexander Afanasyev, one of the first researchers to study Russian folklore: The dualistic creation myth of the "evil god" diving is attested 24 times in Balto-Slavic areas and 12 times in Finno-Ugric areas. The Bulgarian myth does not mention the Devil's catastrophe, but it develops the theme of creation by the formula "by God's and my power"; the Devil, who twice reversed the order of the formula, could not reach the bottom, and only on the third attempt, when he pronounced the formula correctly, did he reach it. The Moldavian variant also ends with the expansion of the Earth, and the Transylvanian Romani extended the dualistic motif with the punishment of the Devil by the bull and with the Tree of Life, from which people were formed. Only in a myth from Slovenia does God go to the bottom of the waters on his own. In another version of the myth, the Devil tries to push God into the sea to become the only creator – first he pushes him east, then west, south and north, but the land always expands. Annoyed by this, the Devil awakens God and tells him that it is time to bless the Earth, since it has grown so big. God answers him: "Once you carried me four ways over the water to throw me into it, you drew a cross with me, and this is how I blessed the earth myself." Then God goes up to the Heavens, and the Devil, who attacked him, is thrown down into the abyss by lightning. At first glance, the consecration of the Earth seems to be a Christian motif, but this motif is used in myths to set directions and exists in other mythologies: according to the Maidu, the Earth Maker descended into the cosmic center of the world and there he met Coyote (a trickster figure), who after the creation of the world went to sleep. The Earth Maker stretched the Earth from the south, through the west, to the north, and when Coyote woke up, he stretched the Earth to the east. When the Earth Maker was left alone, he went around the Earth, staggering a full circle, fixing (in one version of the myth) the Earth to the cardinal directions with stone hooks. For some Indian tribes, therefore, determining the directions of the world is a religious activity, and for this reason the Mexican Huichol interpret the Christian sign of the cross as an imitation of the Indian myth.
For the Slavs, therefore, the "consecration of the Earth" is the structuring of the universe and the designation of the directions of the Earth, and the extension of the point state "to infinity". Yet another myth says that the Earth grows all the time and God, who is left alone, does not know how to stop it. So God sends a bee to eavesdrop on the Devil. The Devil, laughing at God, says to himself: a stupid God does not know that you have to take some stick, draw the sign of the cross and say "Enough of this Earth!" When the Devil noticed the bee flying away from his shoulder, he tried to catch it, but it escaped him, so he cursed its master: "May he who sent you here eat your dung," and God, who heard this, ordered the bee to produce honey from then on. A myth from Dobrzyń Land says that the Devil tells the duck to steal some earth from God, and when she was returning with the earth in her beak, she was captured by a hawk, which began to choke her, and from the earth that fell out of her beak the mountains were created. For the creation of the world or of a being, the cooperation of God and the Devil, who are endowed with equal power, is always required. Researchers also identify the Slavic gods hidden behind the Christian terms God and Devil. The Slavic word for God, Bog or Boh, was used by Christian missionaries as an equivalent of the Latin Deus and the Greek Theos because it corresponded meaningfully to the notion of a supernatural being, but in the Slavic religion Bog always appears in compound names, e.g. Daž-bog, Stri-bog, Cherno-bog, or in personal names, e.g. Boži-dar, Bohu-mil, Bogu-slav, etc., so most probably God was not a proper name for the figure mentioned in the myths of creation. When interpreting the figure of God, the text of Procopius on the religion of the Slavs may be helpful. Analysis of the folk image of the Christian God indicates that God sits in heaven, sends rains in anger, shoots lightning at evil spirits, rules predatory animals and fate. These features indicate a god-thunderer, and therefore it was most likely Perun who was replaced by God. Perun is one of the oldest Indo-European gods and is descended from the Proto-Indo-European storm god *perkʷunos. His name probably means literally the "Striking One" (compare Proto-Slavic *pьrati – "to beat, to hit"). The core *perkʷ means oak (cf. Latin quercus – "oak") – a sacred tree dedicated to Perun. In Ruthenian chronicles, he is presented as gray-haired, which would distinguish him from the Celtic Taranis, Germanic Thor or Hindu Indra as war gods, and make him resemble the Roman Jupiter and Greek Zeus as rulers. However, according to some researchers, such as Henryk Łowmiański, the description of God rather points to Svarog. The Devil is interpreted as Veles, the god of the underworld. In the Primary Chronicle, the Ruthenians, when making an alliance with the Greeks, swear on Perun and Veles, which may suggest that Veles' power was comparable to that of Perun. Just as in Polish (and some other Slavic languages) Perun was devalued to piorun, "lightning", so in Czech Veles was devalued to veles, "devil, demon". In South Slavic folklore, St. Elijah, the Christianized Perun, is often opposed to St. Nicholas, the Christianized Veles. The creation myth also fits Chernobog (lit. "Black God") and Belobog (lit. "White God"), who were said to be worshiped by the Polabian Slavs. This myth may come from some ancient substrate, perhaps pre-Indo-European, assimilated by the Slavs and subjected to further transformations.
This myth could also have been perpetuated under the influence of the Persian antithetical couple Ahura Mazda and Ahriman, who left their mark on various syncretic religions. Bogomil influence has also been suggested: the followers of this religion claimed that the main drama of creation was the conflict between two brothers, the older Satanael (the suffix -el adds the divine element to the Devil) and the younger Jesus (Savaof – the Word = Logos-Christ) – Satanael created the world and man, and God sent him the Word in the form of Jesus to save them. In the 16th-century Legend of the Tiberian Sea, God, when he hovered over the water, saw Satanael in the form of a water bird and ordered him to dive into the sea. According to the critics of this theory, it has serious shortcomings: the full text of this myth does not appear in any Bogomil texts, and the myth does not exist in areas dominated by Bogomilism, nor in Western Europe, where the Cathars influenced the local folklore. The myth did exist, however, in the territories of Poland, Ukraine and Belarus, which the Bogomil faith never reached. Creation from the Cosmic Egg The myth of the creation of the world from the Cosmic Egg or World Egg can be found in a Carpathian carol, also written down by Afanasyev: This carol contains three elements: the first is the two pigeons sitting on an oak tree, the second is the fetching of sand and stones by the birds, the third is the creation of the world. Two pigeons, birds, hens or bees sitting in the crown of a tree is a popular motif among the Slavs – it represents the World Tree. In folklore, the World Tree stands at the "navel of the world"; it is supposed to be massive, tall and broad-leaved. In a similar preserved myth, God throws his staff into the water, which then changes into a tree. On its branch, God and the Devil sit down to take the world out of the water. The relationship of this text with the World Egg is indicated by the "fine sand" from which the "black earth" is made and the "blue stones" from which the heavens and the celestial objects are made. This corresponds to the widespread myths of the Cosmic Egg, which is broken apart in the creative act, from whose lower shell the Earth is formed and from whose upper shell the Heavens are formed. Vladimir Toporov also points to the existence of this myth in Russian fairy tales. In these fairy tales, the hero, looking for a princess, travels through three kingdoms, and after defeating three vipers, the kingdoms are reduced to three eggs. Fairy-tale eggs are generally submerged in water, and their extraction and breaking creates a "kingdom" – the fairy-tale word for a world. The triple of egg-kingdoms is also not accidental – it corresponds to the tripartite division of the world in Indo-European mythologies into Heaven (Vyraj), Earth and the Underworld (Nav). In Dobrzyń Land, it was believed outright that the world was created from an egg lying on a giant tree, and a story was preserved of a princess from an egg, whom a prince was to marry: a witch turned her by trickery into a duck, which was killed, and from its blood an apple tree grew. From Slovenia a myth has survived in which God sends a rooster to Earth, and it lays an egg from which seven rivers pour out: There were also riddles in Poland that pointed to the egg: "There is a world. And in this world there is a yellow flower" or "There is a white world. And in this world a yellow flower".
Creation from dismemberment Another kind of creation myth has survived: the creation of the world from a dismembered first human or other being. The Polish scholar Stanisław Schayer recalled the following story from the Dove Book, a collection of oral stories of the clergy: a great book fell from heaven, in which the history of being was written; the kings ask tsar David to read it, but the book is too big, so David, inspired by the Holy Spirit, answers three questions instead; the first concerns the creation of the world: In four variants, the last three lines are replaced by the text: This myth is most likely not a Christian influence; rather, Slavic phraseology has been Christianized, probably under the influence of the apocryphal Book of Enoch, in which the same was done with the Iranian myth, which in turn could have been the source of the Russian myth. A similar motif is present in other Indo-European myths: in Hindu mythology society was formed from the body of Purusha – the first man: Brahmins from the mouth, warriors from the shoulders, peasants from the hips, shudras from the feet; in Scandinavia this being was Ymir and in Iran Gayōmart. Functioning of the world The world is sustained by animals or fish. In the myth described by Afanasyev, the world is sustained by whales: at first there were seven of them, but three departed and four were left; then one died, three were left, and therefore the world is crooked. A similar myth, where the fall of one of the "pillars of the world" causes a catastrophe, occurs, for example, in China. Such a decomposition of the original seven, 3 + 1 + 3, can testify to a multiplicity of worlds – three existed before ours and three will come after ours. A similar motif exists among the Hopi Indians and in the doctrine of the five worlds of the Bambara. To keep it from breaking apart, the world is encircled by the Zmiy or Zmiya (Viper). This can signify a constant threat from one of the creators. A similar theme exists in Nordic mythology (Jörmungandr). The dome of the world was made of stone, sometimes of flint, which explained the formation of lightning, or of blue gemstone, which is a symbol of the stated time. The dome, especially among the Western Slavs, rested on a "pillar" (a kind of Axis Mundi – Cosmic Tree) running from the Pole Star, around which the whole vault rotates. The points of contact between the pillar and the vault and between the pillar and the ground had specific characteristics: these places were called żabka (frog) or sierdzeń (gudgeon pin), names connected with the constellations of the Great and Little Wagon. Among the Slavs, the souls of the dead travelled to the Underworld via a bridge – at night it was the Milky Way and during the day a rainbow. In collected folklore materials, the Milky Way is called the Way of the Soul or the Way of the [blue] Army, and it was said to be spilled with stardust. The other Axis Mundi connecting the worlds was the Tree of the Family, connected to the dziady – the deceased's name is also his identity, and it lasts as long as someone mentions his name; once his name is forgotten, he joins the nameless group of souls. The souls already in the afterlife return to Earth in the rays of the sun. For the Slavs, Cosmic Trees could function as Cosmic Mountains. Mountains were often treated as magical places; temples were built on them or rituals performed there.
Mountains such as Ślęża, Kyiv Hill or Bald Mountain were especially popular; the Montenegrins called the Durmitor mountain "the Blue Column", and the Slovaks considered Kriváň a sacred mountain. In Kievan Rus' it was believed that "the high mountain of Triglav appeared first from the water". References Bibliography Further reading Slavic mythology Creation myths
Slavic creation myth
Astronomy
3,204
2,302,081
https://en.wikipedia.org/wiki/T-tubule
T-tubules (transverse tubules) are extensions of the cell membrane that penetrate into the center of skeletal and cardiac muscle cells. With membranes that contain large concentrations of ion channels, transporters, and pumps, T-tubules permit rapid transmission of the action potential into the cell, and also play an important role in regulating cellular calcium concentration. Through these mechanisms, T-tubules allow heart muscle cells to contract more forcefully by synchronising calcium release from the sarcoplasmic reticulum throughout the cell. T-tubule structure and function are affected beat-by-beat by cardiomyocyte contraction, as well as by diseases, potentially contributing to heart failure and arrhythmias. Although these structures were first seen in 1897, research into T-tubule biology is ongoing. Structure T-tubules are tubules formed from the same phospholipid bilayer as the surface membrane or sarcolemma of skeletal or cardiac muscle cells. They connect directly with the sarcolemma at one end before travelling deep within the cell, forming a network of tubules with sections running both perpendicular (transverse) and parallel (axial) to the sarcolemma. Due to this complex orientation, some refer to T-tubules as the transverse-axial tubular system. The inside or lumen of the T-tubule is open at the cell surface, meaning that the T-tubule is filled with fluid containing the same constituents as the solution that surrounds the cell (the extracellular fluid). Rather than being just a passive connecting tube, the membrane that forms T-tubules is highly active, being studded with proteins including L-type calcium channels, sodium-calcium exchangers, calcium ATPases and beta adrenoceptors. T-tubules are found in both atrial and ventricular cardiac muscle cells (cardiomyocytes), in which they develop in the first few weeks of life. They are found in ventricular muscle cells in most species, and in atrial muscle cells from large mammals. In cardiac muscle cells, across different species, T-tubules are between 20 and 450 nanometers in diameter and are usually located in regions called Z-discs where the actin myofilaments anchor within the cell. T-tubules within the heart are closely associated with the intracellular calcium store known as the sarcoplasmic reticulum in specific regions referred to as terminal cisternae. The association of the T-tubule with a terminal cisterna is known as a diad. In skeletal muscle cells, T-tubules are three to four times narrower than those in cardiac muscle cells, being between 20 and 40 nm in diameter. They are typically located at either side of the myosin strip, at the junction of overlap (A-I junction) between the A and I bands. T-tubules in skeletal muscle are associated with two terminal cisternae, an arrangement known as a triad. Regulators The shape of the T-tubule system is produced and maintained by a variety of proteins. The protein amphiphysin-2 is encoded by the gene BIN1 and is responsible for forming the structure of the T-tubule and ensuring that the appropriate proteins (in particular L-type calcium channels) are located within the T-tubule membrane. Junctophilin-2 is encoded by the gene JPH2 and helps to form a junction between the T-tubule membrane and the sarcoplasmic reticulum, vital for excitation-contraction coupling. The titin-capping protein telethonin, encoded by the TCAP gene, helps with T-tubule development and is potentially responsible for the increasing number of T-tubules seen as muscles grow.
Function Excitation-contraction coupling T-tubules are an important link in the chain from electrical excitation of a cell to its subsequent contraction (excitation-contraction coupling). When contraction of a muscle is needed, stimulation from a nerve or an adjacent muscle cell causes a characteristic flow of charged particles across the cell membrane known as an action potential. At rest, there are fewer positively charged particles on the inner side of the membrane compared to the outer side, and the membrane is described as being polarised. During an action potential, positively charged particles (predominantly sodium and calcium ions) flow across the membrane from the outside to the inside. This reverses the normal imbalance of charged particles and is referred to as depolarization. One region of membrane depolarizes adjacent regions, and the resulting wave of depolarization then spreads along the cell membrane. The polarization of the membrane is restored as potassium ions flow back across the membrane from the inside to the outside of the cell. In cardiac muscle cells, as the action potential passes down the T-tubules it activates L-type calcium channels in the T-tubular membrane. Activation of the L-type calcium channel allows calcium to pass into the cell. T-tubules contain a higher concentration of L-type calcium channels than the rest of the sarcolemma and therefore the majority of the calcium that enters the cell occurs via T-tubules. This calcium binds to and activates a receptor, known as a ryanodine receptor, located on the cell's own internal calcium store, the sarcoplasmic reticulum. Activation of the ryanodine receptor causes calcium to be released from the sarcoplasmic reticulum, causing the muscle cell to contract. In skeletal muscle cells, however, the L-type calcium channel is directly attached to the ryanodine receptor on the sarcoplasmic reticulum allowing activation of the ryanodine receptor directly without the need for an influx of calcium. The importance of T-tubules is not solely due to their concentration of L-type calcium channels, but lies also within their ability to synchronise calcium release within the cell. The rapid spread of the action potential along the T-tubule network activates all of the L-type calcium channels near-simultaneously. As T-tubules bring the sarcolemma very close to the sarcoplasmic reticulum at all regions throughout the cell, calcium can then be released from the sarcoplasmic reticulum across the whole cell at the same time. This synchronisation of calcium release allows muscle cells to contract more forcefully. In cells lacking T-tubules such as smooth muscle cells, diseased cardiomyocytes, or muscle cells in which T-tubules have been artificially removed, the calcium that enters at the sarcolemma has to diffuse gradually throughout the cell, activating the ryanodine receptors much more slowly as a wave of calcium leading to less forceful contraction. As the T-tubules are the primary location for excitation-contraction coupling, the ion channels and proteins involved in this process are concentrated here - there are 3 times as many L-type calcium channels located within the T-tubule membrane compared to the rest of the sarcolemma. Furthermore, beta adrenoceptors are also highly concentrated in the T-tubular membrane, and their stimulation increases calcium release from the sarcoplasmic reticulum. 
Calcium control As the space within the lumen of the T-tubule is continuous with the space that surrounds the cell (the extracellular space), ion concentrations between the two are very similar. However, due to the importance of the ions within the T-tubules (particularly calcium in cardiac muscle), it is very important that these concentrations remain relatively constant. As the T-tubules are very thin, they essentially trap the ions. This is important as, regardless of the ion concentrations elsewhere in the cell, T-tubules still have enough calcium ions to permit muscle contraction. Therefore, even if the concentration of calcium outside the cell falls (hypocalcaemia), the concentration of calcium within the T-tubule remains relatively constant, allowing cardiac contraction to continue. As well as T-tubules being a site for calcium entry into the cell, they are also a site for calcium removal. This is important as it means that calcium levels within the cell can be tightly controlled in a small area (i.e. between the T-tubule and sarcoplasmic reticulum, known as local control). Proteins such as the sodium-calcium exchanger and the sarcolemmal ATPase are located mainly in the T-tubule membrane. The sodium-calcium exchanger passively removes one calcium ion from the cell in exchange for three sodium ions. As a passive process it can therefore allow calcium to flow into or out of the cell depending on the combination of the relative concentrations of these ions and the voltage across the cell membrane (the electrochemical gradient). The calcium ATPase removes calcium from the cell actively, using energy derived from adenosine triphosphate (ATP). Detubulation In order to study T-tubule function, T-tubules can be artificially uncoupled from the surface membrane using a technique known as detubulation. Chemicals such as glycerol or formamide (for skeletal and cardiac muscle respectively) can be added to the extracellular solution that surrounds the cells. These agents increase the osmolarity of the extracellular solution, causing the cells to shrink. When these agents are withdrawn, the cells rapidly expand and return to their normal size. This shrinkage and re-expansion of the cell causes T-tubules to detach from the surface membrane. Alternatively, the osmolarity of the extracellular solution can be decreased, using for example hypotonic saline, causing a transient cell swelling. Returning the extracellular solution to a normal osmolarity allows the cells to return to their previous size, again leading to detubulation. History The idea of a cellular structure that later became known as a T-tubule was first proposed in 1881. The very brief time lag between stimulating a striated muscle cell and its subsequent contraction was too short to have been caused by a signalling chemical travelling the distance between the sarcolemma and the sarcoplasmic reticulum. It was therefore suggested that pouches of membrane reaching into the cell might explain the very rapid onset of contraction that had been observed. It took until 1897 before the first T-tubules were seen, using light microscopy to study cardiac muscle injected with India ink.  Imaging technology advanced, and with the advent of transmission electron microscopy the structure of T-tubules became more apparent leading to the description of the longitudinal component of the T-tubule network in 1971. 
In the 1990s and 2000s confocal microscopy enabled three-dimensional reconstruction of the T-tubule network and quantification of T-tubule size and distribution, and the important relationships between T-tubules and calcium release began to be unravelled with the discovery of calcium sparks. While early work focussed on ventricular cardiac muscle and skeletal muscle, in 2009 an extensive T-tubule network in atrial cardiac muscle cells was observed. Ongoing research focusses on the regulation of T-tubule structure and how T-tubules are affected by and contribute to cardiovascular diseases. Clinical significance The structure of T-tubules can be altered by disease, which in the heart may contribute to weakness of the heart muscle or abnormal heart rhythms. The alterations seen in disease range from a complete loss of T-tubules to more subtle changes in their orientation or branching patterns. T-tubules may be lost or disrupted following a myocardial infarction, and are also disrupted in the ventricles of patients with heart failure, contributing to reduced force of contraction and potentially decreasing the chances of recovery. Heart failure can also cause the near-complete loss of T-tubules from atrial cardiomyocytes, reducing atrial contractility and potentially contributing to atrial fibrillation. Structural changes in T-tubules can lead to the L-type calcium channels moving away from the ryanodine receptors. This can increase the time taken for calcium levels within the cell to rise leading to weaker contractions and arrhythmias. However, disordered T-tubule structure may not be permanent, as some suggest that T-tubule remodelling might be reversed through the use of interval training. See also Muscle contraction References Cell anatomy Membrane biology Muscular system
T-tubule
Chemistry
2,565
448,933
https://en.wikipedia.org/wiki/Primitive%20element%20theorem
In field theory, the primitive element theorem states that every finite separable field extension is simple, i.e. generated by a single element. This theorem implies in particular that all algebraic number fields over the rational numbers, and all extensions in which both fields are finite, are simple. Terminology Let $E/F$ be a field extension. An element $\alpha \in E$ is a primitive element for $E/F$ if $E = F(\alpha)$, i.e. if every element of $E$ can be written as a rational function in $\alpha$ with coefficients in $F$. If there exists such a primitive element, then $E/F$ is referred to as a simple extension. If the field extension $E/F$ has primitive element $\alpha$ and is of finite degree $n = [E:F]$, then every element $\gamma \in E$ can be written in the form $\gamma = a_0 + a_1\alpha + \cdots + a_{n-1}\alpha^{n-1}$ for unique coefficients $a_i \in F$. That is, the set $\{1, \alpha, \ldots, \alpha^{n-1}\}$ is a basis for $E$ as a vector space over $F$. The degree $n$ is equal to the degree of the irreducible polynomial of $\alpha$ over $F$, the unique monic $f(X) \in F[X]$ of minimal degree with $\alpha$ as a root (a linear dependency of $1, \alpha, \ldots, \alpha^{n}$). If $L$ is a splitting field of $f(X)$ containing its $n$ distinct roots $\alpha_1, \ldots, \alpha_n$, then there are $n$ field embeddings $\sigma_i : F(\alpha) \hookrightarrow L$ defined by $\sigma_i(\alpha) = \alpha_i$ and $\sigma_i(a) = a$ for $a \in F$, and these extend to automorphisms of $L$ in the Galois group, $\sigma_i \in \mathrm{Gal}(L/F)$. Indeed, for an extension field $E/F$ with $[E:F] = n$, an element $\alpha \in E$ is a primitive element if and only if $\alpha$ has $n$ distinct conjugates $\sigma_1(\alpha), \ldots, \sigma_n(\alpha)$ in some splitting field $L \supseteq E$. Example If one adjoins to the rational numbers $F = \mathbb{Q}$ the two irrational numbers $\sqrt{2}$ and $\sqrt{3}$ to get the extension field $E = \mathbb{Q}(\sqrt{2}, \sqrt{3})$ of degree 4, one can show this extension is simple, meaning $E = \mathbb{Q}(\alpha)$ for a single $\alpha \in E$. Taking $\alpha = \sqrt{2} + \sqrt{3}$, the powers $1, \alpha, \alpha^2, \alpha^3$ can be expanded as linear combinations of $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$ with integer coefficients. One can solve this system of linear equations for $\sqrt{2}$ and $\sqrt{3}$ over $\mathbb{Q}(\alpha)$, to obtain $\sqrt{2} = \tfrac{1}{2}(\alpha^3 - 9\alpha)$ and $\sqrt{3} = \tfrac{1}{2}(11\alpha - \alpha^3)$. This shows that $\alpha$ is indeed a primitive element: $\mathbb{Q}(\sqrt{2}, \sqrt{3}) = \mathbb{Q}(\alpha)$. One may also use the following more general argument. The field $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ clearly has four field automorphisms $\sigma$, defined by $\sigma(\sqrt{2}) = \pm\sqrt{2}$ and $\sigma(\sqrt{3}) = \pm\sqrt{3}$ for each choice of signs. The minimal polynomial $f(X)$ of $\alpha = \sqrt{2} + \sqrt{3}$ must have $f(\sigma(\alpha)) = \sigma(f(\alpha)) = 0$, so $f$ must have at least four distinct roots $\sigma(\alpha) = \pm\sqrt{2} \pm \sqrt{3}$. Thus $f$ has degree at least four, and $[\mathbb{Q}(\alpha):\mathbb{Q}] \geq 4$, but this is the degree of the entire field, $[\mathbb{Q}(\sqrt{2}, \sqrt{3}):\mathbb{Q}] = 4$, so $\mathbb{Q}(\alpha) = \mathbb{Q}(\sqrt{2}, \sqrt{3})$. Theorem statement The primitive element theorem states: Every separable field extension of finite degree is simple. This theorem applies to algebraic number fields, i.e. finite extensions of the rational numbers $\mathbb{Q}$, since $\mathbb{Q}$ has characteristic 0 and therefore every finite extension over $\mathbb{Q}$ is separable. Using the fundamental theorem of Galois theory, the former theorem immediately follows from Steinitz's theorem. Characteristic p For a non-separable extension $E/F$ of characteristic $p$, there is nevertheless a primitive element provided the degree $[E:F]$ is $p$: indeed, there can be no non-trivial intermediate subfields since their degrees would be factors of the prime $p$. When $[E:F] = p^2$, there may not be a primitive element (in which case there are infinitely many intermediate fields by Steinitz's theorem). The simplest example is $E = \mathbb{F}_p(T, U)$, the field of rational functions in two indeterminates $T$ and $U$ over the finite field with $p$ elements, and $F = \mathbb{F}_p(T^p, U^p)$. In fact, for any $\alpha \in E$, the Frobenius endomorphism shows that the element $\alpha^p$ lies in $F$, so $\alpha$ is a root of $f(X) = X^p - \alpha^p \in F[X]$, and $\alpha$ cannot be a primitive element (of degree $p^2$ over $F$), but instead $F(\alpha)$ is a non-trivial intermediate field. Proof Suppose first that $F$ is infinite. By induction, it suffices to prove that any finite extension $E = F(\beta, \gamma)$ is simple. For $c \in F$, suppose $\alpha = \beta + c\gamma$ fails to be a primitive element, $F(\alpha) \subsetneq E$. Then $\gamma \notin F(\alpha)$, since otherwise $\beta = \alpha - c\gamma \in F(\alpha)$ and $E = F(\alpha)$. Consider the minimal polynomials of $\beta, \gamma$ over $F(\alpha)$, respectively $f(X), g(X)$, and take a splitting field $L$ containing all roots of $f(X)$ and of $g(X)$. Since $\gamma \notin F(\alpha)$, $g(X)$ has another root $\gamma' \neq \gamma$, and there is a field automorphism $\sigma : L \to L$ which fixes $F(\alpha)$ and takes $\sigma(\gamma) = \gamma'$. We then have $\sigma(\alpha) = \alpha$, and: $\beta + c\gamma = \sigma(\beta) + c\gamma'$, and therefore $c = \frac{\sigma(\beta) - \beta}{\gamma - \gamma'}$.
Since there are only finitely many possibilities for $\sigma(\beta)$ and $\sigma(\gamma)$ (the roots of $f(X)$ and $g(X)$ in $L$), only finitely many values $c \in F$ fail to give a primitive element $\alpha = \beta + c\gamma$. All other values give $F(\alpha) = E$. For the case where $F$ is finite, we simply take $\alpha$ to be a primitive root of the finite extension field $E$. History In his First Memoir of 1831, published in 1846, Évariste Galois sketched a proof of the classical primitive element theorem in the case of a splitting field of a polynomial over the rational numbers. The gaps in his sketch could easily be filled (as remarked by the referee Poisson) by exploiting a theorem of Lagrange from 1771, which Galois certainly knew. It is likely that Lagrange had already been aware of the primitive element theorem for splitting fields. Galois then used this theorem heavily in his development of the Galois group. Since then it has been used in the development of Galois theory and the fundamental theorem of Galois theory. The primitive element theorem was proved in its modern form by Ernst Steinitz, in an influential article on field theory in 1910, which also contains Steinitz's theorem; Steinitz called the "classical" result Theorem of the primitive elements and his modern version Theorem of the intermediate fields. Emil Artin reformulated Galois theory in the 1930s without relying on primitive elements. References External links J. Milne's course notes on fields and Galois theory The primitive element theorem at mathreference.com The primitive element theorem at planetmath.org The primitive element theorem on Ken Brown's website (pdf file) Field (mathematics) Theorems in abstract algebra
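The worked example above is easy to check with a computer algebra system. The following is a minimal sketch using Python's sympy library (assuming a standard sympy installation, where minimal_polynomial and primitive_element are provided by sympy.polys.numberfields): it recovers the minimal polynomial $x^4 - 10x^2 + 1$ of $\alpha = \sqrt{2} + \sqrt{3}$ and confirms that $\alpha$ generates $\mathbb{Q}(\sqrt{2}, \sqrt{3})$.

```python
from sympy import Symbol, sqrt
from sympy.polys.numberfields import minimal_polynomial, primitive_element

x = Symbol('x')

# Minimal polynomial of alpha = sqrt(2) + sqrt(3) over Q.
# Its degree (4) equals [Q(sqrt2, sqrt3) : Q], so alpha is primitive.
f = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(f)  # x**4 - 10*x**2 + 1

# Ask sympy for a primitive element of Q(sqrt2, sqrt3) directly.
# It returns the minimal polynomial together with the coefficients
# expressing the chosen element as a combination of the generators.
g, coeffs = primitive_element([sqrt(2), sqrt(3)], x)
print(g)       # x**4 - 10*x**2 + 1
print(coeffs)  # [1, 1]  ->  alpha = 1*sqrt(2) + 1*sqrt(3)
```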
Primitive element theorem
Mathematics
1,080
67,835,347
https://en.wikipedia.org/wiki/JUWELS
JUWELS (Jülich Wizard for European Leadership Science) is a supercomputer developed by Atos and hosted by the Jülich Supercomputing Centre (JSC) of the Forschungszentrum Jülich. Supercomputer It is capable of a theoretical peak of 70.980 petaflops (for the JUWELS Booster Module) and serves as the replacement for the now out-of-operation JUQUEEN supercomputer. The JUWELS Booster Module was ranked as the seventh fastest supercomputer in the world at its debut on the November 2020 TOP500 list. The JUWELS Booster Module is part of a modular system architecture, and a second, Xeon-based JUWELS Cluster Module ranked separately as the 44th fastest supercomputer in the world on the November 2020 TOP500 list. The JUWELS Booster Module uses AMD Epyc processors with Nvidia A100 GPUs for acceleration. The University of Edinburgh contracted a deal to utilise JUWELS to pursue research in the fields of particle physics, astronomy, cosmology and nuclear physics. In 2021, JUWELS Booster, along with eight other supercomputing systems, participated in the MLPerf HPC training benchmark, a benchmark developed by a consortium of artificial intelligence developers from academia, research labs, and industry that aims to evaluate, without bias, the training and inference performance of hardware, software, and services used for AI. JUWELS also ranked among the top 15 on the worldwide Green500 list of energy-efficient supercomputers. The Simulation and Data Laboratory (SimLab) for Climate Science at Forschungszentrum Jülich uses JUWELS to detect gravity waves in the atmosphere, running programs that continuously download and process operational radiance measurements from NASA's data servers. See also Computer science Computing Supercomputing in Europe References External links Forschungszentrum Jülich website Supercomputing in Europe Jülich Research Centre
JUWELS
Technology
417
13,191,396
https://en.wikipedia.org/wiki/Canadian%20Association%20of%20Rocketry
The Canadian Association of Rocketry – L'Association Canadienne De Fuséologie (CAR-ACF) is a Canadian federal not-for-profit, self-supporting association and governing body representing amateur/model rocketeers across Canada. The history of amateur/model rocketry in Canada goes back to 1965, with its approval by the Canadian federal government with the assistance of the Canadian Aeronautics and Space Institute (CASI), the Royal Canadian Flying Clubs (RCFCA), the new Canadian Association of Rocketry (CAR), and then with the help of the Youth Aeronautic and Aerospace of Canada (YAAC). CAR-ACF was incorporated in 2009 from the then-existing Canadian Association of Rocketry (CAR). Among its many duties, CAR-ACF: promotes the development of amateur aerospace as a recognized sport and worthwhile amateur activity; is the official national body for amateur aerospace in Canada; is a chartering organization for model rocket clubs across the country; offers its chartered clubs contest sanction and assistance in getting and keeping flying sites; is the voice of its membership, providing liaison and certification programs with Transport Canada, Natural Resources Canada (Explosives Regulatory Division), and other government agencies; works with local governments, zoning boards and parks departments to promote the interests of local chartered clubs; is the principal stakeholder representing non-military, non-commercial aerospace on the Transport Canada Canadian Aviation Regulatory Advisory Council (CARAC), which is responsible for maintaining and developing the Canadian Aviation Regulations (CARs); and is a rocketry association whose rules and regulations are formally acceptable to the Minister of Transport. External links Canadian Association of Rocketry – L'Association Canadienne De Fuséologie Clubs and societies in Canada Model rocketry
Canadian Association of Rocketry
Astronomy
349
5,293,306
https://en.wikipedia.org/wiki/SNP%20array
In molecular biology, an SNP array is a type of DNA microarray which is used to detect polymorphisms within a population. A single nucleotide polymorphism (SNP), a variation at a single site in DNA, is the most frequent type of variation in the genome. Around 335 million SNPs have been identified in the human genome, 15 million of which are present at frequencies of 1% or higher across different populations worldwide. Principles The basic principles of the SNP array are the same as those of the DNA microarray: the convergence of DNA hybridization, fluorescence microscopy, and solid-surface DNA capture. The three mandatory components of SNP arrays are: An array containing immobilized allele-specific oligonucleotide (ASO) probes. Fragmented nucleic acid sequences of the target, labelled with fluorescent dyes. A detection system that records and interprets the hybridization signal. The ASO probes are often chosen based on sequencing of a representative panel of individuals: positions found to vary in the panel at a specified frequency are used as the basis for probes. SNP chips are generally described by the number of SNP positions they assay. Two probes must be used for each SNP position to detect both alleles; if only one probe were used, experimental failure would be indistinguishable from homozygosity of the non-probed allele. Applications An SNP array is a useful tool for studying slight variations between whole genomes. The most important clinical applications of SNP arrays are for determining disease susceptibility and for measuring the efficacy of drug therapies designed specifically for individuals. In research, SNP arrays are most frequently used for genome-wide association studies. Each individual carries many SNPs. SNP-based genetic linkage analysis can be used to map disease loci and determine disease susceptibility genes in individuals. The combination of SNP maps and high-density SNP arrays allows SNPs to be used as markers for genetic diseases that have complex traits. For example, genome-wide association studies have identified SNPs associated with diseases such as rheumatoid arthritis and prostate cancer. An SNP array can also be used to generate a virtual karyotype, using software to determine the copy number of each SNP on the array and then align the SNPs in chromosomal order. SNPs can also be used to study genetic abnormalities in cancer. For example, SNP arrays can be used to study loss of heterozygosity (LOH). LOH occurs when one allele of a gene is mutated in a deleterious way and the normally-functioning allele is lost. LOH occurs commonly in oncogenesis. For example, tumor suppressor genes help keep cancer from developing. If a person has one mutated and dysfunctional copy of a tumor suppressor gene and the second, functional copy of the gene gets damaged, they may become more likely to develop cancer. Other chip-based methods such as comparative genomic hybridization can detect genomic gains or deletions leading to LOH. SNP arrays, however, have the additional advantage of being able to detect copy-neutral LOH (also called uniparental disomy or gene conversion). Copy-neutral LOH is a form of allelic imbalance in which one allele or whole chromosome from a parent is missing, leading to duplication of the other parental allele. Copy-neutral LOH may be pathological. For example, say that the mother's allele is wild-type and fully functional, and the father's allele is mutated.
If the mother's allele is missing and the child has two copies of the father's mutant allele, disease can occur. High-density SNP arrays help scientists identify patterns of allelic imbalance. These studies have potential prognostic and diagnostic uses. Because LOH is so common in many human cancers, SNP arrays have great potential in cancer diagnostics. For example, recent SNP array studies have shown that solid tumors such as gastric cancer and liver cancer show LOH, as do non-solid malignancies such as the hematologic malignancies ALL, MDS, CML and others. These studies may provide insights into how these diseases develop, as well as information about how to create therapies for them. Breeding in a number of animal and plant species has been revolutionized by the emergence of SNP arrays. The method is based on the prediction of genetic merit by incorporating relationships among individuals based on SNP array data, a process known as genomic selection. Crop-specific arrays find use in agriculture. References Further reading Molecular biology Gene expression Bioinformatics Microarrays
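To make the LOH discussion above concrete, here is a minimal, hypothetical sketch of how copy-neutral LOH can be flagged from SNP array output (an illustrative toy, not any vendor's algorithm; the B-allele frequency and log R ratio inputs, window size and thresholds are all assumptions). The idea: a run of SNPs with almost no heterozygous calls but a normal total signal intensity is consistent with copy-neutral LOH.

```python
import numpy as np

def flag_copy_neutral_loh(baf, lrr, window=50, het_lo=0.25, het_hi=0.75,
                          min_het_frac=0.05, lrr_tol=0.2):
    """Flag windows of consecutive SNPs consistent with copy-neutral LOH.

    baf: per-SNP B-allele frequency (0..1), in chromosomal order.
    lrr: per-SNP log R ratio (~0 means normal total copy number).
    A window is flagged when almost no SNPs look heterozygous while the
    median log R ratio stays near zero (i.e. copy number looks normal).
    """
    baf, lrr = np.asarray(baf), np.asarray(lrr)
    flagged = []
    for start in range(0, len(baf) - window + 1, window):
        b = baf[start:start + window]
        r = lrr[start:start + window]
        het_frac = np.mean((b > het_lo) & (b < het_hi))
        if het_frac < min_het_frac and abs(np.median(r)) < lrr_tol:
            flagged.append((start, start + window))
    return flagged

# Toy data: 200 normal SNPs followed by 200 SNPs in a copy-neutral LOH run.
rng = np.random.default_rng(0)
normal = rng.choice([0.0, 0.5, 1.0], size=200, p=[0.4, 0.2, 0.4])
loh = rng.choice([0.0, 1.0], size=200)            # heterozygotes absent
baf = np.concatenate([normal, loh]) + rng.normal(0, 0.02, 400)
lrr = rng.normal(0, 0.05, 400)                    # copy number normal throughout
print(flag_copy_neutral_loh(baf, lrr))            # flags only windows past index 200
```

Real pipelines additionally segment the genome statistically and use population allele frequencies, but the heterozygote-depletion-with-normal-intensity signature shown here is the core of the copy-neutral LOH call.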
SNP array
Chemistry,Materials_science,Engineering,Biology
1,000
1,246,673
https://en.wikipedia.org/wiki/Council%20circle
A council circle is a distinctive feature at the center of some tribal communities in North America. The historical function of the council circles is debated. Some suggest that the talking circles are ceremonial, and others support a hypothesis that they were places for political discussion that suggest aboriginal democracy. In current use, the council circle is often synonymous with the talking circle, and is a means of group communication that promotes input from all the members. The practice has been adopted by people of many cultures. A talking stick, or other significant or impromptu object, is passed around the circle, and only the circle member holding the stick is allowed to speak, though he or she may allow others to interject. Talking sticks in the context of the council circle may have been used pre-historically by indigenous peoples to create egalitarian forums. Photographs show that some talking sticks were very tall, suggesting that circle participants would have stood when speaking. See also Center for Council Learning circle Study circle References External links Talking Circle – A Place for Peace, Harmony and Reflection by Daniel N. Paul Circle of Strength – Restoring Relationship Through Empathy – California prison program Human communication
Council circle
Biology
226
3,865,832
https://en.wikipedia.org/wiki/Evolution%40Home
evolution@home was a volunteer computing project for evolutionary biology, launched in 2001. The aim of evolution@home was to improve understanding of evolutionary processes, which it pursued by simulating individual-based models. The Simulator005 module of evolution@home was designed to better predict the behaviour of Muller's ratchet. The project was operated semi-automatically: participants had to manually download tasks from the webpage and submit results by email. yoyo@home used a BOINC wrapper to completely automate this project by automatically distributing tasks and collecting their results; the BOINC version was therefore a complete volunteer computing project. yoyo@home has declared its involvement in this project finished. See also Artificial life Digital organism Evolutionary computation Folding@home List of volunteer computing projects References Science in society Free science software Volunteer computing projects Digital organisms Bioinformatics
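As an illustration of the kind of individual-based model the project ran, here is a minimal sketch of Muller's ratchet in an asexual population (a generic textbook-style simulation, not the actual Simulator005 code; the population size, mutation rate and selection coefficient are illustrative assumptions). Each generation, parents are sampled with fitness-weighted probability and offspring acquire new deleterious mutations; the ratchet "clicks" whenever the least-loaded mutation class is irreversibly lost.

```python
import math
import random

def poisson(lam):
    """Sample a Poisson random number (Knuth's algorithm, avoids numpy)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def mullers_ratchet(pop_size=200, mut_rate=0.5, sel_coeff=0.02, generations=500):
    """Individual-based Muller's ratchet: each individual is represented
    only by its count of deleterious mutations; fitness is (1 - s)**k."""
    pop = [0] * pop_size                     # start with a mutation-free population
    min_load = []
    for _ in range(generations):
        weights = [(1 - sel_coeff) ** k for k in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)  # selection + drift
        pop = [k + poisson(mut_rate) for k in parents]              # new mutations
        min_load.append(min(pop))            # loss of the best class = a ratchet click
    return min_load

history = mullers_ratchet()
clicks = sum(1 for a, b in zip(history, history[1:]) if b > a)
print(f"least-loaded class after {len(history)} generations: {history[-1]} mutations")
print(f"ratchet clicks observed: {clicks}")
```

Because back-mutation is absent and the population is finite, the minimum load can only ratchet upward, which is exactly the degeneration process the project's simulations were designed to quantify.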
Evolution@Home
Engineering,Biology
180
23,544,519
https://en.wikipedia.org/wiki/Halichondrin%20B
Halichondrin B is a polyether macrolide originally isolated from the marine sponge Halichondria okadai by Hirata and Uemura in 1986. In the same report, these authors also described the exquisite anticancer activity of halichondrin B against murine cancer cells, both in culture and in in vivo studies. Halichondrin B was highly prioritized for development as a novel anticancer therapeutic by the United States National Cancer Institute and, in 1991, was the original test case for identification of mechanism of action (in this case, a tubulin-targeted mitotic inhibitor) by NCI's then-brand-new "60-cell line screen". The complete chemical synthesis of halichondrin B was achieved by Yoshito Kishi and colleagues at Harvard University in 1992, an achievement that ultimately enabled the discovery and development of the structurally simplified and pharmaceutically optimized analog eribulin (E7389, ER-086526, NSC-707389). Eribulin was approved by the U.S. Food and Drug Administration on November 15, 2010, to treat patients with metastatic breast cancer who have received at least two prior chemotherapy regimens for late-stage disease, including both anthracycline- and taxane-based chemotherapies. Eribulin is marketed by Eisai Co. under the trade name Halaven. Biosynthesis While a producer organism for halichondrin B has never been isolated in pure culture, the structural features of halichondrin B, such as the 'odd-even' rule of methylation and the abundance of oxygen heterocycles, suggest it is a product of dinoflagellate polyether metabolism. In support of this conjecture, the known dinoflagellate toxin okadaic acid was isolated from the same species of sponge. However, halichondrin B is not found in H. panicea or H. japonica, which are found in tide pools in Japan similar to those inhabited by Halichondria okadai. See also Altohyrtin A References Macrolides Polyether toxins
Halichondrin B
Chemistry
450
3,263,222
https://en.wikipedia.org/wiki/Multi-Object%20Spectrometer
A multi-object spectrometer is a type of optical spectrometer capable of simultaneously acquiring the spectra of multiple separate objects in its field of view. It is used in astronomical spectroscopy and is related to long-slit spectroscopy. This technique became available in the 1980s. Description The term multi-object spectrograph is commonly used for spectrographs using a bundle of fibers to image part of the field. The entrance of the fibers is at the focal plane of the imaging instrument. The bundle is then reshaped: the individual fibers are aligned at the entrance slit of a spectrometer, which disperses the light onto a detector. This technique is closely related to integral field spectrography (IFS), more specifically to fiber-IFS. It is a form of snapshot hyperspectral imaging, itself a part of imaging spectroscopy. Apertures Typically, the apertures of multi-object spectrographs can be modified to fit the needs of the given observation. For example, the MOSFIRE (Multi-Object Spectrometer for Infra-Red Exploration) instrument on the W. M. Keck Observatory contains the Configurable Slit Unit (CSU), allowing arbitrary positioning of up to forty-six 18 cm slits by moving opposable bars. Some fiber-fed spectrographs, such as the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST), can move the fibers to the desired positions. The LAMOST moves its 4000 fibers separately within designated areas to meet the requirements of a measurement, and can correct positioning errors in real time. The James Webb Space Telescope uses a fixed Micro-Shutter Assembly (MSA), an array of nearly 250,000 microshutters, each roughly 0.1 mm by 0.2 mm, that can independently be opened or closed to change the locations of the open slits on the device. Uses in telescopes Ground-based instruments Instruments with multi-object spectrometry capabilities are available at most 8-10 meter-class ground-based observatories. For example, the Large Binocular Telescope, W. M. Keck Observatory, Gran Telescopio Canarias, Gemini Observatory, New Technology Telescope, William Herschel Telescope, UK Schmidt Telescope and LAMOST include such systems. Four instruments on the Very Large Telescope, including the KMOS (K-band multi-object spectrograph) and the VIMOS (Visible Multi Object Spectrograph), have multi-object spectroscopic capabilities. Space-based instruments The Hubble Space Telescope operated the NICMOS (Near Infrared Camera and Multi-Object Spectrometer) from 1997 to 1999 and from 2002 to 2008. The James Webb Space Telescope's NIRSpec (Near-Infrared Spectrograph) instrument is a multi-object spectrometer. References Observational astronomy Astronomical spectroscopy
Multi-Object Spectrometer
Physics,Chemistry,Astronomy
580
2,152,181
https://en.wikipedia.org/wiki/List%20of%20chemical%20elements
118 chemical elements have been identified and named officially by IUPAC. A chemical element, often simply called an element, is a type of atom which has a specific number of protons in its atomic nucleus (i.e., a specific atomic number, or Z). The definitive visualisation of all 118 elements is the periodic table of the elements, whose history, along with the principles of the periodic law, was one of the founding developments of modern chemistry. It is a tabular arrangement of the elements by their chemical properties that usually uses abbreviated chemical symbols in place of full element names, but the linear list format presented here is also useful. Like the periodic table, the list below organizes the elements by the number of protons in their atoms; it can also be organized by other properties, such as atomic weight, density, and electronegativity. For more detailed information about the origins of element names, see List of chemical element name etymologies. List See also List of people whose names are used in chemical element names List of places used in the names of chemical elements List of chemical element name etymologies Roles of chemical elements Extended periodic table Theories about undiscovered elements References External links Atoms made thinkable, an interactive visualisation of the elements allowing physical and chemical properties of the elements to be compared
List of chemical elements
Chemistry
266
4,374,802
https://en.wikipedia.org/wiki/Zygosaccharomyces%20bailii
Zygosaccharomyces bailii is a species in the genus Zygosaccharomyces. It was initially described as Saccharomyces bailii by Lindner in 1895, but in 1983 it was reclassified as Zygosaccharomyces bailii in the work by Barnett et al. Spoilage resulting from the growth of Zygosaccharomyces yeasts is widespread and has caused significant economic losses to the food industry. Within this genus, Z. bailii is one of the most troublesome species due to its exceptional tolerance of various stressful conditions. A wide range of acidic and/or high-sugar products such as fruit concentrates, wine, soft drinks, syrups, ketchup, mayonnaise, pickles, salad dressing, etc., are normally considered to be shelf-stable, i.e. they readily inactivate a broad range of food-borne microorganisms. However, these products are still susceptible to spoilage by Z. bailii. Morphology and modes of reproduction Zygosaccharomyces bailii vegetative cells are usually ellipsoid and non-motile, and reproduce asexually by multilateral budding, i.e. the buds can arise from various sites on the cells. During the budding process, a parent cell produces a bud on its outer surface. As the bud elongates, the parent cell's nucleus divides and one nucleus migrates into the bud. Cell wall material fills the gap between the bud and the parent cell; eventually the bud separates to form a daughter cell of unequal size. Z. bailii cell size varies within a range of (3.5 - 6.5) x (4.5 - 11.5) μm, and the cells exist singly or in pairs, rarely in short chains. It has been observed that the doubling time of this yeast is approximately 3 hours at 23 °C in yeast nitrogen base broth containing 20% (w/v) fructose (pH 4.0). In more stressful conditions, this generation time is significantly extended. Besides the asexual reproduction mode, under certain conditions (e.g. nutritional stress) Z. bailii produces sexual spores (ascospores) in a sac called an ascus (plural: asci). Normally, each ascus contains one to four ascospores, which are generally smooth, thin-walled, and spherical or ellipsoidal. The ascospores are rarely observed, as it is difficult and may take a long time to induce their formation; moreover, many yeast strains lose the ability to produce ascospores on repeated sub-culturing in the laboratory. On various nutrient agars, Z. bailii colonies are smooth, round, convex and white to cream coloured, with a diameter of 2 – 3 mm at 3 – 7 days. As the morphological properties of Zygosaccharomyces are indistinguishable from those of other yeast genera such as Saccharomyces, Candida and Pichia, it is impossible to differentiate Zygosaccharomyces from other yeasts, or individual species within the genus, based on macroscopic and microscopic morphology alone. Therefore, yeast identification to species level depends more on physiological and genetic characteristics than on morphological criteria. Culture conditions In general, any glucose-containing medium is suitable for the culture and counting of yeasts, e.g. Sabouraud medium, malt extract agar (MEA), tryptone glucose yeast extract agar (TGY), or yeast glucose chloramphenicol agar (YGC). For the detection of acid-resistant yeasts like Z. bailii, acidified media are recommended, such as MEA or TGY with 0.5% (v/v) acetic acid added. Plating on agar media is often used for counting yeasts, with the surface-spreading technique preferable to the pour-plate method because it gives better recovery of cells with lower dilution errors.
The common incubation conditions are an aerobic atmosphere at 25 °C for a period of 5 days. Nevertheless, a higher incubation temperature (30 °C) and a shorter incubation time (3 days) can be applied for Z. bailii, as the yeast grows faster at this elevated temperature. Physiological properties Among the Zygosaccharomyces spoilage species, Z. bailii possesses the most pronounced and diversified resistance characteristics, enabling it to survive and proliferate in very stressful conditions. Z. bailii appears to prefer ecological environments characterized by high osmotic pressure. The most frequently described natural habitats are dried or fermented fruits, tree exudates (in vineyards and orchards), and various stages of sugar refining and syrup production. Z. bailii is seldom encountered as a major spoilage agent in unprocessed foods; usually the yeast only attains importance in processed products, where competition with bacteria and moulds is reduced by intrinsic factors such as pH, water activity (aw) and preservatives. Resistance characteristics An outstanding feature of Z. bailii is its exceptional resistance to the weak acid preservatives commonly used in foods and beverages, such as acetic, lactic, propionic, benzoic and sorbic acids and sulfur dioxide. In addition, the yeast is reported to tolerate high ethanol concentrations (≥ 15% (v/v)). The ranges of pH and aw for growth are wide: 2.0–7.0 and 0.80–0.99, respectively. Besides being preservative-resistant, other features that contribute to the spoilage capacity of Z. bailii are: (i) its ability to vigorously ferment hexose sugars (e.g. glucose and fructose), (ii) its ability to cause spoilage from an extremely low inoculum (e.g. one viable cell per package of any size), and (iii) its moderate osmotolerance (in comparison to Zygosaccharomyces rouxii). Therefore, foods at particular risk of spoilage by this yeast usually have low pH (2.5 to 5.0), low aw and sufficient amounts of fermentable sugars. The extreme acid resistance of Z. bailii has been reported by many authors. On several occasions, growth of the yeast has been observed in fruit-based alcohols (pH 2.8–3.0, 40–45% (w/v) sucrose) preserved with 0.08% (w/v) benzoic acid, and in beverages (pH 3.2) containing either 0.06% (w/v) sorbic acid, 0.07% (w/v) benzoic acid, or 2% (w/v) acetic acid. Notably, individual cells in any Z. bailii population differ considerably in their resistance to sorbic acid, with a small fraction able to grow at preservative levels double those tolerated by the average population. In some types of food, the yeast is even able to grow in the presence of benzoic and sorbic acids at concentrations higher than those legally permitted and at pH values below the pKa of the acids. For example, according to European Union (EU) legislation, sorbic acid is limited to 0.03% (w/v) in soft drinks (pH 2.5–3.2); however, Z. bailii can grow in soft drinks containing 0.05% (w/v) of this acid (pKa 4.8). In particular, there is strong evidence that the resistance of Z. bailii is stimulated by the presence of multiple preservatives. Hence, the yeast can survive and defeat synergistic preservative combinations that normally provide microbiological stability to processed foods. It has been observed that cellular acetic acid uptake is inhibited when sorbic or benzoic acid is incorporated into the culture medium.
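The role of pH and pKa in these observations follows from standard weak-acid chemistry: only the undissociated form of a weak acid crosses the plasma membrane freely, and its fraction is given by the Henderson–Hasselbalch relationship. A minimal sketch of that calculation (textbook chemistry, not code from the cited studies):

```python
def undissociated_fraction(ph: float, pka: float) -> float:
    """Fraction of a weak acid in its undissociated, membrane-permeant form."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

# Sorbic acid (pKa ~4.8, as quoted above) at soft-drink pH vs near-neutral pH
for ph in (3.0, 6.0):
    print(f"pH {ph}: {undissociated_fraction(ph, 4.8):.2f}")
# ~0.98 at pH 3.0 but only ~0.06 at pH 6.0, which is why these preservatives
# are potent in low-pH products and why Z. bailii's tolerance matters there.
```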
Ethanol, likewise, at levels up to 10% (v/v), did not adversely influence the sorbic and benzoic acid resistance of the yeast at pH 4.0–5.0. Moreover, Sousa et al. (1996) showed that in Z. bailii, ethanol plays a protective role against the negative effect of acetic acid by inhibiting the transport and intracellular accumulation of this acid. Like other microorganisms, Z. bailii can adapt to sub-inhibitory levels of a preservative, which enables the yeast to survive and grow at much higher concentrations of the preservative than before adaptation. In addition, Z. bailii resistance to acetic, benzoic and propionic acid appears to be strongly correlated, as cells adapted to benzoic acid also showed enhanced tolerance to the other preservatives. Some studies have revealed negligible effects of different sugars on the preservative resistance of Z. bailii; e.g. comparable sorbic and benzoic acid resistance was observed regardless of whether the cells were grown in culture medium containing glucose or fructose as the fermentable substrate. However, the preservative resistance of the yeast is influenced by the glucose level, with maximum resistance obtained at sugar concentrations of 10–20% (w/v). As Z. bailii is moderately osmotolerant, the salt and sugar levels in foods are usually insufficient to control its growth. The highest tolerance to salt has been observed at low pH values, e.g. the maximum NaCl concentration allowing growth was 12.5% (w/v) at pH 3.0, whereas it was only 5.0% (w/v) at pH 5.0. Moreover, the presence of either salt or sugar has a positive effect on the ability of Z. bailii to initiate growth at extreme pH levels, e.g. the yeast showed no growth at pH 2.0 in the absence of NaCl and sucrose, but grew at this pH in 2.5% (w/v) NaCl or 50% (w/v) sucrose. Most facultatively fermentative yeast species cannot grow in the complete absence of oxygen, which suggests that limiting oxygen availability might be useful in controlling food spoilage caused by fermentative yeasts. However, Z. bailii has been observed to grow rapidly and ferment sugar vigorously in a complex medium under strictly anaerobic conditions, indicating that the nutritional requirements for anaerobic growth are met by the complex-medium components. Therefore, restricting oxygen entry into nutrient-rich foods and beverages is not a promising strategy for preventing spoilage by this yeast. Besides, Leyva et al. (1999) have reported that Z. bailii cells can retain their spoilage capability, producing significant amounts of gas, even under non-growing conditions (i.e. in the presence of sugars but absence of a nitrogen source). Preservative resistance mechanisms Different strategies have been suggested to account for Z. bailii resistance to weak acid preservatives, including: (i) degradation of the acids, (ii) prevention of entry or removal of acids from the cells, and (iii) alteration of the inhibitor target or amelioration of the damage caused. Notably, the intrinsic resistance mechanisms of Z. bailii are extremely adaptable and robust: their functionality and effectiveness are unaffected, or only marginally suppressed, by environmental conditions such as low pH, low aw and limited nutrients. It has long been known that Z. bailii can maintain an acid gradient across the cell membrane, which indicates the induction of a system whereby the cells reduce intracellular acid accumulation. According to Warth (1977), Z.
bailii uses an inducible, active transport pump to expel acid anions from the cells, counteracting the toxic effects of the acids. As the pump requires energy to function optimally, high sugar levels enhance Z. bailii preservative resistance. Nevertheless, this view has been disputed on the basis of the observation that the acid concentration was exactly as predicted from the intracellular and extracellular pH values and the pKa of the acid. Besides, it is unlikely that active acid extrusion alone would be sufficient to achieve an unequal acid distribution across the cell membrane. Instead, Z. bailii might have developed much more efficient ways of altering its cell membrane to limit the diffusional entry of acids into the cells. This, in turn, would dramatically reduce any need for active extrusion of protons and acid anions, thus saving a great deal of energy. Indeed, Warth (1989) reported that the uptake rate of propionic acid by diffusion in Z. bailii is much lower than in acid-sensitive yeasts (e.g. Saccharomyces cerevisiae). Hence, it is conceivable that Z. bailii relies more on limiting the influx of acids to enhance its acid resistance. Another mechanism by which Z. bailii deals with acid challenge is the use of a plasma membrane H+-adenosine triphosphatase (H+-ATPase) to expel protons from the cell, thereby preventing intracellular acidification. In addition, Cole and Keenan (1987) have suggested that Z. bailii resistance includes an ability to tolerate chronic drops in intracellular pH. The fact that the yeast is able to metabolize preservatives may also contribute to its acid tolerance. Regarding the resistance of Z. bailii to SO2, it has been proposed that the cells reduce the concentration of SO2 by producing extracellular sulphite-binding compounds such as acetaldehyde. Metabolism Fructophilic behaviour is well known in Z. bailii. Unlike most other yeasts, Z. bailii metabolizes fructose more rapidly than glucose and grows much faster in foods containing ≥ 1% (w/w) fructose. In addition, alcoholic fermentation under aerobic conditions (the Crabtree effect) in Z. bailii is influenced by the carbon source, i.e. ethanol is produced at a higher rate and with a higher yield on fructose than on glucose. This is because in Z. bailii, fructose is transported by a specific high-capacity system, while glucose is transported by a lower-capacity system, which is partially inactivated by fructose and also accepts fructose as a substrate. The slow fermentation of sucrose is directly related to fructose metabolism. According to Pitt and Hocking (1997), Z. bailii cannot grow in foods with sucrose as the sole carbon source. As time is required to hydrolyze sucrose into glucose and fructose (under low-pH conditions), there is a long delay between manufacture and spoilage of products contaminated with this yeast when sucrose is used as the primary carbohydrate ingredient: spoilage is usually preceded by a lag of 2–4 weeks, and apparent deterioration of product quality only becomes evident 2–3 months after manufacture. Therefore, the use of sucrose as a sweetener (instead of glucose or fructose) is highly recommended in synthetic products such as soft drinks. Fermentation of sugars (e.g. glucose, fructose and sucrose) is a key metabolic reaction of most yeasts (including Z. bailii) when cultured under facultatively anaerobic conditions. As sugars are common components of foods and beverages, fermentation is a typical feature of the spoilage process.
In essence, these sugars are converted to ethanol and CO2, causing the products to lose sweetness and acquire a distinctive alcoholic aroma along with gassiness. Besides, many secondary products are formed in small amounts, such as organic acids, esters and aldehydes. Z. bailii is noted for its strong production of secondary metabolites, e.g. acetic acid, ethyl acetate and acetaldehyde. In sufficiently high concentrations, these substances can have a dominant effect on the sensorial quality of the products. The higher resistance of Z. bailii to weak acids compared with S. cerevisiae can partly be explained by its ability to metabolize preservatives. It has been demonstrated that Z. bailii is able to consume acetic acid in the presence of fermentable sugars, whereas the acetate uptake and utilization systems of S. cerevisiae are all glucose-repressed. In addition, Z. bailii can oxidatively degrade sorbate and benzoate (and use these compounds as a sole carbon source), while S. cerevisiae lacks this capability. Spoilage activities According to Thomas and Davenport (1985), early reports of spoilage in mayonnaise and salad dressing due to Z. bailii date back to the beginning of the 20th century. More detailed investigations in the 1940s and 1950s confirmed that Z. bailii was the main spoilage organism in cucumber pickles, sundry pickled vegetable mixes, acidified sauces, etc. Around the same time, fermentation spoilage incidents occasionally appeared in fruit syrups and beverages preserved with moderate benzoic acid levels (0.04–0.05% (w/w)); again, Z. bailii was identified as the source of the spoilage. Nowadays, despite great improvements in formulation control, food processing equipment and sanitation technologies (e.g. automated clean-in-place), the yeast remains highly problematic in sauces, acidified foods, pickled or brined vegetables, fruit concentrates and various non-carbonated fruit drinks. Z. bailii is also well recognized as one of the main spoilage organisms in wines, owing to its high resistance to combinations of ethanol and organic acids at low pH. Furthermore, spoilage by this yeast has been expanding into new food categories such as prepared mustards and fruit-flavoured carbonated soft drinks containing citrus, apple and grape juice concentrates. The ability of Z. bailii to spoil a wide range of foods reflects its high resistance to many stress factors; accordingly, it has been included in lists of the most dangerous spoilage yeasts by several authors. Spoilage by Z. bailii often occurs in acidic shelf-stable foods, which rely upon the combined effects of acidity (e.g. vinegar), salt and sugar to suppress microbial growth. The spoiled foods usually display sensorial changes that can be easily recognized by consumers, resulting in significant economic losses through consumer complaints and product recalls. Observable signs of spoilage include product leakage from containers, colour change, emission of unpleasant yeasty odours, emulsion separation (in mayonnaises and dressings), turbidity, flocculation or sediment formation (in wines and beverages), and visible colonies or brown film development on product surfaces. The specific off-flavour attributed to Z. bailii is related to H2S. In addition, the taste of spoiled foods can be modified by the production of acetic acid and fruity esters. Growth of Z. bailii has also been reported to result in significant gas and ethanol formation, causing a typical alcoholic taste.
The excessive gas production is a direct consequence of the high fermentative capacity of this yeast, and in more solid foods gas bubbles can appear within the product. Under extreme circumstances, the gas pressure produced inside glass jars or bottles can reach such a level that explosions take place, creating an additional hazard of injury from broken glass. It should be mentioned that, in general, detectable spoilage by yeasts requires the presence of a high number of cells, approximately 5–6 log CFU/ml. Apart from spoiling foods as a direct consequence of growth, Z. bailii can modify the product texture and composition such that it may be more readily colonized by other spoilage microorganisms. For example, by utilizing acetic acid, the yeast can raise the pH of pickles sufficiently to allow the growth of less acid-tolerant bacteria. As with other yeasts, the concentration of fermentable sugar in a product affects the rate of spoilage by Z. bailii, e.g. the yeast grows faster in the presence of 10% (w/w) than of 1% (w/w) glucose. Notably, Z. bailii can grow and cause spoilage from extremely low inocula, as few as one viable cell in ≥ 10 liters of beverage. This means that detection of only low numbers of yeast cells in a product does not guarantee its stability, and no sanitation or microbiological quality control program can cope with this degree of risk. Hence, the only alternatives are reformulation of the food to increase its stability and/or application of high-lethality thermal-processing parameters. Apart from causing unwanted spoilage, this yeast is also present in the fermentation of traditional Italian balsamic vinegar (Zygosaccharomyces rouxii together with Zygosaccharomyces bailii, Z. pseudorouxii, Z. mellis, Z. bisporus, Z. lentus, Hanseniaspora valbyensis, Hanseniaspora osmophila, Candida lactis-condensi, Candida stellata, Saccharomycodes ludwigii, Saccharomyces cerevisiae). See also Yeast in winemaking Zygosaccharomyces References External links Review: Spoilage yeasts in the wine industry Osmophiles Saccharomycetaceae Yeasts Fungal plant pathogens and diseases Fungi described in 1895 Fungus species
Zygosaccharomyces bailii
Biology
4,560
36,913,274
https://en.wikipedia.org/wiki/HD%20102350
HD 102350 is a single star in the constellation Centaurus. It has a yellow hue and is visible to the naked eye with an apparent visual magnitude of 4.11. The distance to this star is approximately 390 light years based on parallax, but it is drifting closer with a radial velocity of −3 km/s. It has an absolute magnitude of −1.51. This is an aging bright giant star with a stellar classification of G0II. It is a candidate Cepheid variable, but Hipparcos photometry found its brightness to be constant. The star has expanded to 22 times the radius of the Sun and is radiating 283 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,051 K. It has a magnitude 13.0 visual companion at an angular separation of along a position angle of 313° relative to the brighter component, as of 2000. HD 102350 is listed in the Washington Double Star Catalog as having a 13th magnitude companion about away, but it is a distant background object unrelated to HD 102350. References G-type bright giants Suspected variables Centaurus CD-60 03741 102350 057439 4522
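The quoted magnitudes can be cross-checked with the distance-modulus relation M = m − 5·log10(d / 10 pc); a quick sketch (rounded inputs and no correction for interstellar extinction, which together explain the small offset from the catalogued value):

```python
import math

m = 4.11              # apparent visual magnitude
d_pc = 390 / 3.2616   # ~390 light years converted to parsecs

M = m - 5 * math.log10(d_pc / 10)
print(round(M, 2))    # ~ -1.28; the catalogued -1.51 uses the exact parallax
```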
HD 102350
Astronomy
252
63,612,550
https://en.wikipedia.org/wiki/Systems%20Improved%20Numerical%20Differential%20Analyzer
The Systems Improved Numerical Differential Analyzer (acronym SINDA) is a commercially available software system developed by C&R Technologies that solves resistor-capacitor (R-C) network representations of physical problems governed by diffusion equations. The software was originally designed as a general thermal analyzer for the spacecraft and launch vehicle thermal community and is currently an integral part of the Thermal Desktop plugin for AutoCAD. References Physics software
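The underlying method can be illustrated with a minimal explicit solver for a small thermal R-C network; this is a generic lumped-parameter sketch of the approach, not SINDA's actual input language or API, and all node values are made up:

```python
# Explicit time-stepping of a 3-node thermal R-C chain:
#   C_i * dT_i/dt = sum_j (T_j - T_i) / R_ij + Q_i
# Node 2 is held at a fixed temperature, playing the role of a boundary node.
R = [10.0, 10.0]            # resistances between nodes 0-1 and 1-2 [K/W]
C = [50.0, 50.0]            # heat capacities of the two diffusion nodes [J/K]
Q = [5.0, 0.0]              # heat loads on the diffusion nodes [W]
T = [300.0, 300.0, 300.0]   # initial temperatures [K]; T[2] is the boundary

dt = 1.0                    # time step [s]; explicit scheme needs dt << R*C
for _ in range(5000):
    f01 = (T[0] - T[1]) / R[0]   # heat flow from node 0 to node 1 [W]
    f12 = (T[1] - T[2]) / R[1]   # heat flow from node 1 to node 2 [W]
    T[0] += dt * (Q[0] - f01) / C[0]
    T[1] += dt * (Q[1] + f01 - f12) / C[1]

print(round(T[0], 1), round(T[1], 1))  # -> 400.0 350.0 (steady state)
```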
Systems Improved Numerical Differential Analyzer
Physics
89
42,881,025
https://en.wikipedia.org/wiki/METAGENassist
METAGENassist is a freely available web server for comparative metagenomic analysis. Comparative metagenomic studies involve the large-scale comparison of genomic or taxonomic census data from bacterial samples across different environments. Historically this has required a sound knowledge of statistics, computer programming, genetics and microbiology; as a result, only a small number of researchers have routinely been able to perform comparative metagenomic studies. To circumvent these limitations, METAGENassist was developed to allow metagenomic analyses to be performed by non-specialists, easily and intuitively over the web. METAGENassist is particularly notable for its rich graphical output and its extensive database of bacterial phenotypic information. Features METAGENassist is designed to support a wide range of statistical comparisons across metagenomic samples. It accepts a variety of bacterial census or taxonomic profile data derived from 16S rRNA data, classical DNA sequencing, NextGen shotgun sequencing or even classical microbial culturing techniques. These taxonomic profile data can be in different formats, including standard comma-separated value (CSV) formats or program-specific formats generated by tools such as mothur and QIIME. Once the data are uploaded to the website, METAGENassist offers users a large selection of data pre-processing and data quality checking tools, such as: 1) taxonomic name normalization; 2) taxonomic-to-phenotypic mapping; 3) data integrity/quality checks; and 4) data normalization. METAGENassist also supports an extensive collection of classical univariate and multivariate analyses, such as fold-change analysis, t-tests, one-way ANOVA, partial least-squares discriminant analysis (PLS-DA) and principal component analysis (PCA). Each of these analyses generates colorful, informative graphs and tables in PNG or PDF formats. All of the processed data and images are also available for download. These data analysis and visualization tools can be used to visualize key features that distinguish or characterize microbial populations in different environments or under different conditions. METAGENassist distinguishes itself from most other metagenomics data analysis tools through its extensive use of automated taxonomic-to-phenotypic mapping and its ability to support sophisticated data analyses with the resulting phenotypic data. METAGENassist's phenotype database covers more than 11,000 microbial species annotated with 20 different phenotypic categories, including oxygen requirements, energy source(s), metabolism, and GC content. This gives users substantially more features with which to compare and analyze different samples. The phenotype database is regularly updated with information retrieved from several resources including BacMap, GOLD, and other NCBI taxonomy resources. See also BASys References Biological databases
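As an illustration of the kind of multivariate analysis such a server automates, the following sketch runs a PCA on a tiny, made-up genus-abundance table with scikit-learn; the sample and taxon names are hypothetical, and this is not METAGENassist's internal code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: samples, columns: relative abundances of taxa (hypothetical data)
taxa = ["Bacteroides", "Prevotella", "Escherichia", "Clostridium"]
X = np.array([
    [0.40, 0.10, 0.30, 0.20],   # gut sample 1
    [0.38, 0.12, 0.28, 0.22],   # gut sample 2
    [0.05, 0.55, 0.10, 0.30],   # soil sample 1
    [0.07, 0.50, 0.13, 0.30],   # soil sample 2
])

# Autoscaling, one of the normalization options such tools typically offer
Xs = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print(scores)                          # sample coordinates on PC1/PC2
print(pca.explained_variance_ratio_)   # variance captured by each component
```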
METAGENassist
Biology
571
15,025,332
https://en.wikipedia.org/wiki/Onychodontiformes
Onychodontiformes (also known as Onychodontida and Struniiformes) is an order of prehistoric sarcopterygian fish that lived during the Devonian period. The onychodontiforms are generally regarded as early-diverging members of the coelacanth lineage. Phylogeny A cladogram adapted from Mondéjar-Fernández (2020) recovered Onychodontiformes as a paraphyletic group. References External links Onychodontiformes at Palaeos Onychodontida phylogeny at Mikko's Phylogeny Archive Prehistoric lobe-finned fish Prehistoric fish orders Devonian bony fish Early Devonian first appearances Famennian extinctions Paraphyletic groups
Onychodontiformes
Biology
170
58,774,125
https://en.wikipedia.org/wiki/Streptomyces%20sp.%20myrophorea
Streptomyces sp. myrophorea, isolate McG1 is a species of Streptomyces that originates from an ethnopharmacological folk cure in the townland of Toneel North in Boho, County Fermanagh. This area was previously occupied by the Druids (~1,500 years ago) and, before that, by Neolithic people (~3,700 years ago) who engraved the nearby Reyfad stones. Streptomyces sp. myrophorea is inhibitory to many species of ESKAPE pathogens, can grow at high pH (10.5) and can tolerate relatively high levels of radioactivity. Physiology and morphology Streptomyces sp. myrophorea isolate McG1 has light green to white spores and hyphae when cultivated on SFM agar. The colonies of Streptomyces sp. myrophorea have a distinctly dusty appearance and produce an aroma similar to germaline on maturation. This bacterium produces many spores, approximately 0.5–1.0 micrometres in width, which form in straight chains. Ecology Streptomyces sp. myrophorea isolate McG1 was discovered in an alkaline, species-rich environment. The bacterium grows at a maximum pH of 10.5 and is therefore alkaliphilic; it tolerates higher levels of alkalinity but does not thrive there. Streptomyces sp. myrophorea can also withstand relatively high levels of radiation (up to 4 kGy), which may be related to the underlying limestone and shale substrata, which emit radon gas. Antibiotic production So far, only antibiotic synthesis gene clusters have been identified in Streptomyces sp. myrophorea; the antibiotics actually produced in situ have yet to be identified. Streptomyces sp. McG1 is broadly inhibitory to both Gram-positive and Gram-negative bacteria, including carbapenem-resistant Acinetobacter baumannii (a critical pathogen on the World Health Organization priority pathogens list), vancomycin-resistant Enterococcus faecium, methicillin-resistant Staphylococcus aureus (listed as high priority) and Klebsiella pneumoniae. Streptomyces sp. myrophorea has limited effects against some strains of Enterococcus faecium and Pseudomonas aeruginosa. Alkaline tolerance Streptomyces sp. myrophorea may be able to flourish in an alkaline environment because it contains many genes similar to those associated with alkaline tolerance in other bacterial species. In-vitro antibiotic resistance The presence of antibiotic resistance genes is often linked to the production of antibiotics. Streptomyces sp. myrophorea has been recorded as resistant to 28 antibiotics and sensitive to eight antibiotics in one series of tests, with sensitivities recorded as sensitive (S), resistant (R) or intermediate (I). The antibiotics were tested at the breakpoint concentrations recommended by the European Committee on Antimicrobial Susceptibility Testing (EUCAST) for antimicrobial susceptibility testing. Whole genome sequencing The genome sequence of Streptomyces sp. myrophorea, isolate McG1 was deposited at the NCBI (TaxID 2099643), BioSample accession number SAMN08518548, BioProject accession number PRJNA433829, Submission ID SUB3653175, locus tag prefix C4625: https://www.ncbi.nlm.nih.gov/bioproject/433829. The sequence reads are available in the NCBI sequence read archive (SRA): https://www.ncbi.nlm.nih.gov/sra?LinkName=biosample_sra&from_uid=8518548. Copies of this bacterium are deposited in the National Collection of Type Cultures (NCTC), UK and the Deutsche Sammlung von Mikroorganismen und Zellkulturen (DSMZ) GmbH, Germany.
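The deposited records can be located programmatically through NCBI's E-utilities, for example via Biopython; a sketch assuming the accession numbers above are still current (NCBI requires a contact e-mail address):

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # required by NCBI usage policy

# Look up the BioProject record for the deposited genome
handle = Entrez.esearch(db="bioproject", term="PRJNA433829")
uids = Entrez.read(handle)["IdList"]
handle.close()

# Retrieve the project summary (title, organism, submission metadata)
summary = Entrez.read(Entrez.esummary(db="bioproject", id=uids[0]))
print(summary)
```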
References External links Extremophiles Alkaliphiles Actinomycetota Streptomyces Limestone Fermanagh and Omagh district Undescribed species
Streptomyces sp. myrophorea
Biology,Environmental_science
939
68,132,396
https://en.wikipedia.org/wiki/Blumeviridae
Blumeviridae is a family of RNA viruses, which infect prokaryotes. Taxonomy Blumeviridae contains 31 genera: Alehndavirus Bonghivirus Cehntrovirus Dahmuivirus Dehgumevirus Dehkhevirus Espurtavirus Gifriavirus Hehrovirus Ivolevirus Kahnayevirus Kahraivirus Kemiovirus Kerishovirus Konmavirus Lirnavirus Lonzbavirus Marskhivirus Nehohpavirus Nehpavirus Obhoarovirus Pacehavirus Pahdacivirus Rhohmbavirus Semodevirus Shihmovirus Shihwivirus Tibirnivirus Tinebovirus Wahdswovirus Yenihzavirus References Virus families Riboviria
Blumeviridae
Biology
161
56,256,205
https://en.wikipedia.org/wiki/Single-electron%20transistor
A single-electron transistor (SET) is a sensitive electronic device based on the Coulomb blockade effect. In this device, electrons flow through tunnel junctions between the source/drain and a quantum dot (conductive island). Moreover, the electrical potential of the island can be tuned by a third electrode, known as the gate, which is capacitively coupled to the island. The conductive island is sandwiched between two tunnel junctions, each modeled by a capacitor ($C_{D}$ and $C_{S}$) and a resistor ($R_{D}$ and $R_{S}$) in parallel. History A new subfield of condensed matter physics began in 1977 when David Thouless pointed out that, when made small enough, the size of a conductor affects its electronic properties. This was followed in the 1980s by mesoscopic physics research based on the submicron size of the systems investigated. Thus began research related to the single-electron transistor. The first single-electron transistor based on the phenomenon of Coulomb blockade was reported in 1986 by the Soviet scientists K. K. Likharev and D. V. Averin. A couple of years later, T. Fulton and G. Dolan at Bell Labs in the US fabricated such a device and demonstrated how it works. In 1992 Marc A. Kastner demonstrated the importance of the energy levels of the quantum dot. In the late 1990s and early 2000s, the Russian physicists S. P. Gubin, V. V. Kolesov, E. S. Soldatov, A. S. Trifonov, V. V. Khanin, G. B. Khomutov, and S. A. Yakovenko were the first to demonstrate a molecule-based SET operational at room temperature. Relevance The increasing relevance of the Internet of things and of healthcare applications places growing emphasis on the power consumption of electronic devices. For this reason, ultra-low power consumption is one of the main research topics in current electronics. The enormous number of tiny computers used in the day-to-day world (e.g. mobile phones and home electronics) requires a significant power consumption level from the implemented devices. In this scenario, the SET has emerged as a suitable candidate to achieve this low power range with a high level of device integration. Applicable areas include: super-sensitive electrometers, single-electron spectroscopy, DC current standards, temperature standards, detection of infrared radiation, voltage state logics, charge state logics, and programmable single-electron transistor logic. Device Principle The SET has, like the FET, three electrodes: source, drain, and gate. The main technological difference between the transistor types is in the channel concept. While in the FET the channel changes from insulated to conductive with applied gate voltage, the SET is always insulated. The source and drain are coupled through two tunnel junctions, separated by a metallic or semiconductor-based quantum nanodot (QD), also known as the "island". The electrical potential of the QD can be tuned with the capacitively coupled gate electrode to alter the resistance; by applying a positive gate voltage the QD will change from the blocking to the non-blocking state and electrons will start tunnelling to the QD. This phenomenon is known as the Coulomb blockade. The current from source to drain follows Ohm's law when a bias voltage $V_{SD}$ is applied, and equals $I = V_{SD}/R$, where the main contribution to the resistance $R$ comes from the tunnelling effects when electrons move from source to QD and from QD to drain. The gate voltage regulates the resistance of the QD, which regulates the current. This is exactly the same behaviour as in regular FETs.
However, when moving away from the macroscopic scale, quantum effects begin to affect the current. In the blocking state all lower energy levels of the QD are occupied and no unoccupied level is within tunnelling range of electrons originating from the source (green 1.). When an electron arrives at the QD (2.) in the non-blocking state, it fills the lowest available vacant energy level, which raises the energy barrier of the QD, taking it out of tunnelling distance once again. The electron continues to tunnel through the second tunnel junction (3.), after which it scatters inelastically and reaches the drain electrode Fermi level (4.). The energy levels of the QD are evenly spaced with a separation of $\Delta E$, which gives rise to a self-capacitance $C$ of the island, defined as $C = e^{2}/\Delta E$. To achieve the Coulomb blockade, three criteria need to be met: (1) the bias voltage must be lower than the elementary charge divided by the self-capacitance of the island, $V_{\mathrm{bias}} < e/C$; (2) the thermal energy in the source contact plus the thermal energy in the island, i.e. $k_{B}T$, must be below the charging energy, $k_{B}T \ll E_{C} = e^{2}/(2C)$, otherwise the electron will be able to pass the QD via thermal excitation; and (3) the tunnelling resistance $R_{t}$ should be greater than $h/e^{2}$, which is derived from Heisenberg's uncertainty principle: $\Delta E \, \Delta t = \left(e^{2}/C\right)\left(R_{t}C\right) > h$, where $\tau = R_{t}C$ corresponds to the tunnelling time (the tunnel resistances are shown as $R_{D}$ and $R_{S}$ in the schematic figure of the internal electrical components of the SET). The time $\tau_{t}$ of electron tunnelling through the barrier is assumed to be negligibly small in comparison with the other time scales; this assumption is valid for the tunnel barriers used in single-electron devices of practical interest, where $\tau_{t} \sim 10^{-15}\,\mathrm{s} \ll \tau$. If the resistance of all the tunnel barriers of the system is much higher than the quantum resistance $h/e^{2} \approx 25.8\,\mathrm{k\Omega}$, it is enough to confine the electrons to the island, and it is safe to ignore coherent quantum processes consisting of several simultaneous tunnelling events, i.e. co-tunnelling. Theory The background charge of the dielectric surrounding the QD is denoted $q_{0}$. $n_{1}$ and $n_{2}$ denote the numbers of electrons that have tunnelled onto the island through the first junction and off the island through the second junction, respectively, so the number of excess electrons on the island is $n = n_{1} - n_{2}$. The corresponding charges at the tunnel junctions can be written as $q_{1} = C_{1}V_{1}$ and $q_{2} = C_{2}V_{2}$, and the net island charge is $q = q_{2} - q_{1} = -ne + q_{0}$, where $C_{1}$ and $C_{2}$ are the parasitic leakage capacities of the tunnel junctions. Given the bias voltage $V_{b} = V_{1} + V_{2}$, one can solve for the voltages at the tunnel junctions: $$V_{1} = \frac{C_{2}V_{b} + ne - q_{0}}{C_{\Sigma}}, \qquad V_{2} = \frac{C_{1}V_{b} - ne + q_{0}}{C_{\Sigma}}, \qquad C_{\Sigma} = C_{1} + C_{2}.$$ The electrostatic energy of a double-connected tunnel junction (like the one in the schematic picture) is $$E = \frac{q_{1}^{2}}{2C_{1}} + \frac{q_{2}^{2}}{2C_{2}} = \frac{C_{1}C_{2}V_{b}^{2} + q^{2}}{2C_{\Sigma}},$$ and the work performed by the voltage source as electrons tunnel through the first and second junctions is $W_{1} = -n_{1}eV_{b}C_{2}/C_{\Sigma}$ and $W_{2} = -n_{2}eV_{b}C_{1}/C_{\Sigma}$, respectively (negative when electrons enter against the bias). Given the standard definition of free energy in the form $F = E - W$, with $W = W_{1} + W_{2}$, we find the free energy of a SET as $$F(n_{1}, n_{2}) = \frac{C_{1}C_{2}V_{b}^{2} + q^{2}}{2C_{\Sigma}} + \frac{eV_{b}}{C_{\Sigma}}\left(n_{1}C_{2} + n_{2}C_{1}\right).$$ For further consideration, it is necessary to know the change in free energy at zero temperature for tunnelling across the two junctions: $$\Delta F_{1}^{\pm} = F(n_{1} \pm 1, n_{2}) - F(n_{1}, n_{2}) = \frac{e}{C_{\Sigma}}\left[\frac{e}{2} \pm \left(ne - q_{0} + V_{b}C_{2}\right)\right],$$ $$\Delta F_{2}^{\pm} = F(n_{1}, n_{2} \pm 1) - F(n_{1}, n_{2}) = \frac{e}{C_{\Sigma}}\left[\frac{e}{2} \mp \left(ne - q_{0} - V_{b}C_{1}\right)\right].$$ The probability of a tunnel transition is high when the change in free energy is negative. The leading term in the expressions above keeps $\Delta F$ positive as long as the applied voltage does not exceed a threshold value, which depends on the smallest capacity in the system. In general, for an uncharged QD ($n = 0$, $q_{0} = 0$) and symmetric junctions ($C_{1} = C_{2} = C$), the blockade condition is $|V_{b}| < e/C_{\Sigma}$ (that is, the threshold voltage is reduced by half compared with a single junction). When the applied voltage is zero, the Fermi levels of the metal electrodes lie inside the energy gap. When the voltage is raised to the threshold value, tunnelling from left to right occurs, and when the reversed voltage is raised above the threshold level, tunnelling from right to left occurs.
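The threshold behaviour can be verified numerically from the four free-energy changes; the sketch below uses the expressions as reconstructed above, with symmetric 1 aF junctions, zero background charge and an uncharged island (illustrative parameter values only):

```python
e = 1.602176634e-19  # elementary charge [C]

def blocked(Vb, n=0, q0=0.0, C1=1e-18, C2=1e-18):
    """True if all four single-electron tunnelling events raise the free energy."""
    Cs = C1 + C2
    dF = [e / Cs * (e / 2 + s * (n * e - q0 + Vb * C2)) for s in (+1, -1)]
    dF += [e / Cs * (e / 2 - s * (n * e - q0 - Vb * C1)) for s in (+1, -1)]
    return all(f > 0 for f in dF)

# Blockade should persist up to |Vb| = e/C_Sigma (~80 mV for C_Sigma = 2 aF)
Cs = 2e-18
for frac in (0.5, 0.99, 1.01):
    Vb = frac * e / Cs
    print(f"Vb = {Vb * 1e3:.1f} mV  blocked: {blocked(Vb)}")
```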
The existence of the Coulomb blockade is clearly visible in the current–voltage characteristic of a SET (a graph showing how the drain current depends on the gate voltage). At low gate voltages (in absolute value), the drain current is zero, and when the voltage increases above the threshold, the junctions behave like an ohmic resistance (if both junctions have the same permeability) and the current increases linearly. The background charge in the dielectric can not only reduce, but completely block, the Coulomb blockade. In the case where the permeabilities of the two tunnel barriers are very different, a stepwise I-V characteristic of the SET arises. An electron tunnels onto the island through the first junction and is retained on it, owing to the high tunnel resistance of the second junction. After a certain period of time, the electron tunnels through the second junction; however, this process causes a second electron to tunnel onto the island through the first junction. Therefore, most of the time the island carries more than one excess charge. For the case with the inverse ratio of permeabilities, the island will be unpopulated and its charge will decrease stepwise. Only now can we understand the principle of operation of a SET. Its equivalent circuit can be represented as two tunnel junctions connected in series via the QD; perpendicular to the tunnel junctions, another control electrode (the gate) is connected. The gate electrode is connected to the island through a gate capacitor $C_{g}$. The gate electrode can change the background charge in the dielectric, since the gate additionally polarizes the island, so that the island charge becomes $$q = -ne + q_{0} + C_{g}\left(V_{g} - V_{2}\right).$$ Substituting this value into the formulas found above, we find new values for the voltages at the junctions: $$V_{1} = \frac{\left(C_{2} + C_{g}\right)V_{b} - C_{g}V_{g} + ne - q_{0}}{C_{\Sigma}}, \qquad V_{2} = \frac{C_{1}V_{b} + C_{g}V_{g} - ne + q_{0}}{C_{\Sigma}},$$ where now $C_{\Sigma} = C_{1} + C_{2} + C_{g}$. The electrostatic energy must additionally include the energy stored on the gate capacitor, and the work performed by the gate voltage source must be taken into account in the free energy. The resulting free-energy changes keep the same form as before, with the gate entering through the combination $ne - q_{0} - C_{g}V_{g}$: $$\Delta F_{1}^{\pm} = \frac{e}{C_{\Sigma}}\left[\frac{e}{2} \pm \left(ne - q_{0} - C_{g}V_{g} + \left(C_{2} + C_{g}\right)V_{b}\right)\right],$$ $$\Delta F_{2}^{\pm} = \frac{e}{C_{\Sigma}}\left[\frac{e}{2} \mp \left(ne - q_{0} - C_{g}V_{g} - C_{1}V_{b}\right)\right].$$ At zero temperature, only transitions with a negative change in free energy, $\Delta F_{1} < 0$ or $\Delta F_{2} < 0$, are allowed. These conditions can be used to find the areas of stability in the $(V_{b}, V_{g})$ plane. With increasing gate voltage, while the supply voltage is kept below the Coulomb blockade voltage (i.e. $V_{b} < e/C_{\Sigma}$), the drain output current oscillates with a period $\Delta V_{g} = e/C_{g}$; these periods correspond to the gaps between neighbouring areas of stability. Whereas the oscillations of the tunnelling current in a single junction occur in time, the oscillations in two series-connected junctions are periodic in the gate control voltage. The thermal broadening of the oscillations increases considerably with temperature. Temperature dependence Various materials have been tested successfully for creating single-electron transistors. However, temperature is a major factor limiting implementation in available electronic devices. Most metallic-based SETs only work at extremely low temperatures. As mentioned in criterion 2 of the list above, the electrostatic charging energy must be greater than $k_{B}T$ to prevent thermal fluctuations from affecting the Coulomb blockade. This in turn implies that the maximum allowed island capacitance is inversely proportional to the temperature and needs to be below 1 aF to make the device operational at room temperature. The island capacitance is a function of the QD size, and a QD diameter smaller than 10 nm is preferable when aiming for operation at room temperature. This in turn puts huge constraints on the manufacturability of integrated circuits because of reproducibility issues.
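These size constraints can be made concrete with a quick order-of-magnitude estimate, idealizing the island as an isolated conducting sphere of self-capacitance C = 4πε₀εᵣr (real dot capacitances also include junction and gate contributions):

```python
import math

e = 1.602176634e-19   # elementary charge [C]
kB = 1.380649e-23     # Boltzmann constant [J/K]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def charging_energy(diameter_nm, eps_r=1.0):
    """E_C = e^2 / (2C) for an isolated conducting sphere."""
    r = diameter_nm * 1e-9 / 2
    C = 4 * math.pi * eps0 * eps_r * r
    return e**2 / (2 * C)

for d in (100, 10, 5):  # island diameter in nm
    ratio = charging_energy(d) / (kB * 300)
    print(f"d = {d} nm: E_C / kT(300 K) = {ratio:.1f}")
# Only the few-nm dots give E_C >> kT, hence the push for sub-10 nm islands.
```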
CMOS compatibility The electrical current level of the SET can be amplified enough to work with available CMOS technology by generating a hybrid SET–FET device. The EU project IONS4SET (#688072), funded in 2016, addresses the manufacturability of SET–FET circuits operating at room temperature. The main goal of this project is to design a SET-manufacturability process flow for large-scale operation, seeking to extend the use of hybrid SET–CMOS architectures. To assure room-temperature operation, single dots of diameters below 5 nm have to be fabricated and located between source and drain with tunnel distances of a few nanometers. Up to now there is no reliable process flow to manufacture a hybrid SET–FET circuit operating at room temperature. In this context, this EU project explores a more feasible way to manufacture the SET–FET circuit by using pillar dimensions of approximately 10 nm. See also Coulomb blockade MOSFET Transistor model References Nanoelectronics Transistor types
Single-electron transistor
Materials_science
2,371
4,261,263
https://en.wikipedia.org/wiki/XPDL
The XML Process Definition Language (XPDL) is a format standardized by the Workflow Management Coalition (WfMC) to interchange business process definitions between different workflow products, i.e. between different modeling tools and management suites. XPDL defines an XML schema for specifying the declarative part of a workflow / business process. XPDL is designed to exchange the process definition, both the graphics and the semantics of a workflow business process. XPDL is currently the best file format for exchange of BPMN diagrams; it has been designed specifically to store all aspects of a BPMN diagram. XPDL contains elements to hold graphical information, such as the X and Y positions of the nodes, as well as executable aspects which would be used to run a process. This distinguishes XPDL from BPEL, which focuses exclusively on the executable aspects of the process; BPEL does not contain elements to represent the graphical aspects of a process diagram. XPDL can thus be regarded as an XML serialization of BPMN (a small illustrative fragment appears after the reference list below). History The Workflow Management Coalition, founded in August 1993, began by defining the Workflow Reference Model (ultimately published in 1995), which outlined the five key interfaces that a workflow management system must have. Interface 1 was for defining the business process, which includes two aspects: a process definition expression language and a programmatic interface to transfer the process definition to and from the workflow management system. The first revision of a process definition expression language was called the Workflow Process Definition Language (WPDL), published in 1998. This process meta-model contained all the key concepts required to support workflow automation, expressed using URL encoding. Interoperability demonstrations were held to confirm the usefulness of this language as a way to communicate process models. By 1998, the first standards based on XML began to appear. The Workflow Management Coalition Working Group 1 produced an updated process definition expression language called the XML Process Definition Language, now known as XPDL 1.0. This second revision was an XML-based interchange language that contained many of the same concepts as WPDL, with some improvements. XPDL 1.0 was ratified by the WfMC in 2002 and was subsequently implemented by more than two dozen workflow/BPM products to exchange process definitions. A large number of research projects and academic studies examined workflow capabilities around XPDL, which was essentially the only standard language at the time for the interchange of process designs. The WfMC continued to update and improve the process definition interchange language. In 2004 the WfMC endorsed BPMN, a graphical formalism to standardize the way process definitions are visualized. XPDL was extended specifically with the goal of representing in XML all the concepts present in a BPMN diagram. This third revision of the process definition expression language is known as XPDL 2.0 and was ratified by the WfMC in October 2005. In April 2008, the WfMC ratified XPDL 2.1 as the fourth revision of this specification. XPDL 2.1 includes extensions to handle new BPMN 1.1 constructs, as well as clarification of conformance criteria for implementations. In spring 2012, the WfMC completed XPDL 2.2 as the fifth revision of this specification. XPDL 2.2 builds on version 2.1 by introducing support for the process modeling extensions added to BPMN 2.0. References Wil M.P.
van der Aalst, "Business Process Management Demystified: A Tutorial on Models, Systems and Standards for Workflow Management", Springer Lecture Notes in Computer Science, Vol 3098/2004. Wil M.P. van der Aalst, "Patterns and XPDL: A Critical Evaluation of the XML Process Definition Language", Eindhoven University of Technology, PDF. Jiang Ping, Q. Mair, J. Newman, "Using UML to design distributed collaborative workflows: from UML to XPDL", Twelfth IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 2003), Proceedings, 2003. W.M.P. van der Aalst, "Don't go with the flow: Web services composition standards exposed", IEEE Intelligent Systems, Jan/Feb 2003. Jürgen Jung, "Mapping Business Process Models to Workflow Schemata: An Example Using Memo-ORGML and XPDL", Universität Koblenz-Landau, April 2004, PDF. Volker Gruhn, Ralf Laue, "Using Timed Model Checking for Verifying Workflows", in José Cordeiro and Joaquim Filipe (eds.), Proceedings of the 2nd Workshop on Computer Supported Activity Coordination, Miami, USA, 23.05.2005–24.05.2005, 75–88, INSTICC Press. Nicolas Guelfi, Amel Mammar, "A formal framework to generate XPDL specifications from UML activity diagrams", Proceedings of the 2006 ACM Symposium on Applied Computing, 2006. Peter Hrastnik, "Execution of business processes based on web services", International Journal of Electronic Business, Volume 2, Number 5, 2004. Petr Matousek, "An ASM Specification of the XPDL Language Semantics", Symposium on the Effectiveness of Logic in Computer Science, March 2002, PS. F. Puente, A. Rivero, J.D. Sandoval, P. Hernández, and C.J. Molina, "Improved Workflow Management System based on XPDL", in M. Boumedine, S. Ranka (eds.), Proceedings of the IASTED Conference on Knowledge Sharing and Collaborative Engineering, St. Thomas, US Virgin Islands, November 29–December 1, 2006. Petr Matousek, "Verification method proposal for business processes and workflows specified using the XPDL standard language", PhD thesis, January 2003. Thomas Hornung, Agnes Koschmider, Jan Mendling, "Integration of Heterogeneous BPM Schemas: The Case of XPDL and BPEL", Technical Report JM-2005-03, Vienna University of Economics and Business Administration, 2006, PDF. Wei Ge, Baoyan Song, Derong Shen, Ge Yu, "e_SWDL: An XML Based Workflow Definition Language for Complicated Applications in Web Environments", Web Technologies and Applications: 5th Asia-Pacific Web Conference, APWeb 2003, Xian, China, April 23–25, 2003, Proceedings. Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009), "Business Process Management (BPM) Standards: A Survey", Business Process Management Journal, Emerald Group Publishing Limited, Volume 15, Issue 5, PDF. See also Business Process Management BPMN Workflow Management Coalition External links XPDL & Workflow Patterns PDF Critical comments on XPDL 1.0 Enterprise Workflow National Project supported by the Office of the Deputy Prime Minister endorses WfMC standards for use in all workflow projects in UK. Open Source Java XPDL Editor XML-based standards Workflow technology Specification languages Modeling languages
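Returning to the format itself, the sketch below builds and reads a minimal XPDL-like fragment with Python's standard library, to make the dual graphics-plus-semantics nature concrete; the element and attribute names are simplified for illustration (no namespaces) and are not guaranteed to match the official XPDL 2.x schema:

```python
import xml.etree.ElementTree as ET

# A minimal XPDL-like fragment: one activity carrying both semantics
# (Id, Name) and graphics (node coordinates) - the part BPEL cannot carry.
xpdl = """
<Package Id="demo">
  <WorkflowProcesses>
    <WorkflowProcess Id="order">
      <Activities>
        <Activity Id="a1" Name="Approve order">
          <NodeGraphicsInfos>
            <NodeGraphicsInfo>
              <Coordinates XCoordinate="120" YCoordinate="80"/>
            </NodeGraphicsInfo>
          </NodeGraphicsInfos>
        </Activity>
      </Activities>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>
"""

root = ET.fromstring(xpdl)
for act in root.iter("Activity"):
    coords = act.find(".//Coordinates")
    print(act.get("Id"), act.get("Name"),
          coords.get("XCoordinate"), coords.get("YCoordinate"))
```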
XPDL
Technology,Engineering
1,474
8,463,705
https://en.wikipedia.org/wiki/Wiswesser%20line%20notation
Wiswesser line notation (WLN), invented by William J. Wiswesser in 1949, was the first line notation capable of precisely describing complex molecules. It was the basis of ICI Ltd's CROSSBOW database system, developed in the late 1960s. WLN allowed for indexing of the Chemical Structure Index (CSI) at the Institute for Scientific Information (ISI). It was also the tool used to develop the CAOCI (Commercially Available Organic Chemical Intermediates) database, the datafile from which the ACD file of Accelrys (successor to MDL) was developed. WLN is still used extensively by BARK Information Services. Descriptions of how to encode molecules as WLN have been published in several books. Examples 1H : methane 2H : ethane 3H : propane 1Y : isobutane 1X : neopentane Q1 : methanol 1R : toluene 1V1 : acetone 2O2 : diethyl ether 1VR : acetophenone ZR CVQ : 3-aminobenzoic acid QVYZ1R : phenylalanine QX2&2&2 : 3-ethylpentan-3-ol QVY3&1VQ : 2-propylbutanedioic acid L66J BMR& DSWQ IN1&1 : 6-dimethylamino-4-phenylamino-naphthalene-2-sulfonic acid QVR-/G 5 : pentachlorobenzoic acid References External links http://www.emolecules.com/doc/cheminformatics-101.htm Everything Old is New Again: Wiswesser Line Notation (WLN) Chemical nomenclature Cheminformatics Encodings
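The unbranched-alkane examples above (1H, 2H, 3H) follow a transparent pattern, a chain-length digit followed by H, which even a toy decoder can handle; the sketch below (illustrative only, real WLN parsing is far more involved) emits the equivalent SMILES strings:

```python
def alkane_wln_to_smiles(wln: str) -> str:
    """Decode trivially simple WLN unbranched-alkane codes like '1H', '2H'."""
    if not (wln.endswith("H") and wln[:-1].isdigit()):
        raise ValueError("only unbranched-alkane codes 'nH' are handled here")
    n = int(wln[:-1])       # number of carbons in the chain
    return "C" * n          # linear alkane in SMILES

for code in ("1H", "2H", "3H"):
    print(code, "->", alkane_wln_to_smiles(code))
# 1H -> C (methane), 2H -> CC (ethane), 3H -> CCC (propane)
```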
Wiswesser line notation
Chemistry
379