https://en.wikipedia.org/wiki/Petascale%20computing
Petascale computing refers to computing systems capable of performing at least 10^15 floating-point operations per second (1 petaFLOPS). Petascale computing allowed faster processing of traditional supercomputer applications. The first system to reach this milestone was the IBM Roadrunner in 2008. Petascale supercomputers were succeeded by exascale computers. Definition Floating point operations per second (FLOPS) are one measure of computer performance. FLOPS can be recorded in different measures of precision; however, the standard measure (used by the TOP500 supercomputer list) uses 64-bit (double-precision floating-point format) operations per second on the High Performance LINPACK (HPLinpack) benchmark. The metric typically refers to single computing systems, although it can also be used to measure distributed computing systems for comparison. There are alternative precision measures using the LINPACK benchmarks which are not part of the standard metric/definition. It has been recognised that HPLinpack may not be a good general measure of supercomputer utility in real-world applications; however, it is the common standard for performance measurement. History The petaFLOPS barrier was first broken on 16 September 2007 by the distributed computing Folding@home project. The first single petascale system, the Roadrunner, entered operation in 2008. The Roadrunner, built by IBM, had a sustained performance of 1.026 petaFLOPS. The Jaguar became the second computer to break the petaFLOPS milestone, later in 2008, and reached a performance of 1.759 petaFLOPS after a 2009 update. By 2018, Summit had become the world's most powerful supercomputer, at 200 petaFLOPS, before Fugaku reached 415 petaFLOPS in June 2020. See also Exascale computing Computer performance by orders of magnitude :Category:Petascale computers Zettascale computing
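As an order-of-magnitude illustration, using only the figures quoted above (a quick arithmetic sketch, not a benchmark comparison), the growth from Roadrunner to Fugaku can be computed directly:

```python
PETA = 10 ** 15  # 1 petaFLOPS = 10^15 floating-point operations per second

roadrunner = 1.026 * PETA  # sustained HPLinpack performance, 2008
fugaku = 415 * PETA        # HPLinpack performance, June 2020

# Fugaku delivered roughly 400 times Roadrunner's sustained performance
# in the twelve years separating the two machines.
speedup = fugaku / roadrunner
print(round(speedup))  # 404
```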
https://en.wikipedia.org/wiki/Indexed%20file
An indexed file is a computer file with an index that allows easy random access to any record given its file key. The key must uniquely identify a record. If more than one index is present, the others are called alternate indexes. The indexes are created with the file and maintained by the system. IBM supports indexed files with the Indexed Sequential Access Method (ISAM) on OS/360 and successors. IBM virtual storage operating systems added VSAM, which supports indexed files as Key Sequenced Data Sets (KSDS), with more options. Support for indexed files is built into COBOL and PL/I. Other languages with more limited I/O facilities, such as C, support indexed files through add-on packages in a runtime library such as C-ISAM. Some of Digital's operating systems, such as OpenVMS, support indexed file I/O using the Record Management Services. In recent systems, relational databases are often used in place of indexed files. Language support The COBOL language supports indexed files with the clause ORGANIZATION IS INDEXED in the FILE-CONTROL paragraph. IBM PL/I uses the file attribute ENVIRONMENT(INDEXED) or ENVIRONMENT(VSAM) to declare an indexed file. See also B-trees Hash table Data set (IBM mainframe) Legacy system dbm also X/Open ndbm and GNU gdbm Berkeley DB Inline citations COBOL Legacy systems Database index techniques PL/I programming language family
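The keyed-access model described above survives in the dbm-family libraries listed under "See also". As a minimal sketch (illustrating the same idea, not IBM ISAM/VSAM themselves), Python's standard dbm module gives random access to a record through a unique key:

```python
import dbm

# Create a keyed file; each record is addressed by a unique key,
# playing the role of an indexed file's primary key.
with dbm.open("staff", "c") as db:
    db["00042"] = "Smith, J.|Accounting"
    db["00107"] = "Jones, A.|Engineering"

# Random access by key: no sequential scan of earlier records is needed.
with dbm.open("staff", "r") as db:
    record = db["00107"].decode()
print(record)  # Jones, A.|Engineering
```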
https://en.wikipedia.org/wiki/Kardar%E2%80%93Parisi%E2%80%93Zhang%20equation
In mathematics, the Kardar–Parisi–Zhang (KPZ) equation is a non-linear stochastic partial differential equation, introduced by Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang in 1986. It describes the temporal change of a height field h(x,t) with spatial coordinate x and time coordinate t: ∂h/∂t = ν∇²h + (λ/2)(∇h)² + η(x,t). Here, η(x,t) is white Gaussian noise with average ⟨η(x,t)⟩ = 0 and second moment ⟨η(x,t)η(x′,t′)⟩ = 2Dδ^d(x−x′)δ(t−t′); ν, λ, and D are parameters of the model, and d is the dimension. In one spatial dimension, the KPZ equation corresponds to a stochastic version of Burgers' equation with field v(x,t) via the substitution v = −λ∂h/∂x. Via the renormalization group, the KPZ equation is conjectured to be the field theory of many surface growth models, such as the Eden model, ballistic deposition, and the weakly asymmetric single step solid on solid process (SOS) model. A rigorous proof has been given by Bertini and Giacomin in the case of the SOS model. KPZ universality class Many interacting particle systems, such as the totally asymmetric simple exclusion process, lie in the KPZ universality class. This class is characterized by the following critical exponents in one spatial dimension (1 + 1 dimension): the roughness exponent α = 1/2, growth exponent β = 1/3, and dynamic exponent z = 3/2. In order to check if a growth model is within the KPZ class, one can calculate the width of the surface, W(L,t) = ⟨(1/L)∫(h(x,t) − h̄(t))² dx⟩^(1/2), where h̄(t) is the mean surface height at time t and L is the size of the system. For models within the KPZ class, the main properties of the surface can be characterized by the Family–Vicsek scaling relation of the roughness, W(L,t) ≈ L^α f(t/L^z), with a scaling function f(u) satisfying f(u) ∼ u^β for u ≪ 1 and f(u) → const for u ≫ 1. In 2014, Hairer and Quastel showed that, more generally, the following KPZ-like equations lie within the KPZ universality class: ∂h/∂t = ν∇²h + P(∇h) + η(x,t), where P is any even-degree polynomial. A family of processes that are conjectured to be universal limits in the (1+1) KPZ universality class and govern the long time fluctuations are the Airy processes. Solving the KPZ equation Due to the nonlinearity in the equation and the presence of sp
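Of the growth models named above, ballistic deposition is particularly simple to simulate. The sketch below (an illustrative toy, not a quantitative estimate of the exponents) grows a 1+1-dimensional interface and computes the surface width W, which for a KPZ-class model grows with time before saturating at a value set by the system size:

```python
import random

def ballistic_deposition(L, steps, seed=0):
    """Grow a 1D interface on L sites (periodic boundary) by ballistic deposition."""
    random.seed(seed)
    h = [0] * L
    for _ in range(steps):
        i = random.randrange(L)
        # A falling particle sticks at the highest of: its own column raised
        # by one, or the height of either neighbouring column.
        h[i] = max(h[i] + 1, h[(i - 1) % L], h[(i + 1) % L])
    return h

def width(h):
    """Interface width W = sqrt(<(h - h_mean)^2>)."""
    mean = sum(h) / len(h)
    return (sum((x - mean) ** 2 for x in h) / len(h)) ** 0.5

L = 200
early = width(ballistic_deposition(L, 5 * L))    # ~5 deposited layers
late = width(ballistic_deposition(L, 200 * L))   # ~200 deposited layers
print(early, late)  # the width grows with time before saturating
```

Measuring W(L,t) over many runs and system sizes, and collapsing the curves via the Family–Vicsek form, is the standard way to check membership in the KPZ class.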
https://en.wikipedia.org/wiki/Mikl%C3%B3s%20B%C3%B3na
Miklós Bóna (born October 6, 1967, in Székesfehérvár) is an American mathematician of Hungarian origin. Bóna completed his undergraduate studies in Budapest and Paris, then obtained his Ph.D. at MIT in 1997 as a student of Richard P. Stanley. Since 1999, he has taught at the University of Florida, where in 2010 he was inducted to the Academy of Distinguished Teaching Scholars. Bóna's main fields of research include the combinatorics of permutations, as well as enumerative and analytic combinatorics. Since 2010, he has been one of the editors-in-chief of the Electronic Journal of Combinatorics. Books External links Professional home page
https://en.wikipedia.org/wiki/Kakutani%27s%20theorem%20%28geometry%29
Kakutani's theorem is a result in geometry named after Shizuo Kakutani. It states that every convex body in 3-dimensional space has a circumscribed cube, i.e. a cube all of whose faces touch the body. The result was further generalized by Yamabe and Yujobô to higher dimensions, and by Floyd to other circumscribed parallelepipeds.
https://en.wikipedia.org/wiki/Lamella%20%28surface%20anatomy%29
In surface anatomy, a lamella is a thin plate-like structure, often one amongst many lamellae very close to one another, with open space between them. Aside from respiratory organs, they appear in other biological roles including filter feeding and the traction surfaces of geckos. In fish, gill lamellae are used to increase the surface area in contact with the environment to maximize gas exchange (both to attain oxygen and to expel carbon dioxide) between the water and the blood. In fish gills there are two types of lamellae, primary and secondary. The primary gill lamellae (also called gill filaments) extend from the gill arch, and the secondary gill lamellae extend from the primary gill lamellae. Gas exchange primarily occurs at the secondary gill lamellae, where the tissue is only one cell layer thick. Furthermore, countercurrent gas exchange at the secondary gill lamellae further maximizes oxygen uptake and carbon dioxide release. See also Pecten (biology) – the similar structure in birds
https://en.wikipedia.org/wiki/Zariski%20tangent%20space
In algebraic geometry, the Zariski tangent space is a construction that defines a tangent space at a point P on an algebraic variety V (and more generally). It does not use differential calculus, being based directly on abstract algebra, and in the most concrete cases just the theory of a system of linear equations. Motivation For example, suppose we are given a plane curve C defined by a polynomial equation F(X,Y) = 0, and take P to be the origin (0,0). Erasing terms of order higher than 1 produces a 'linearised' equation L(X,Y) = 0, in which all terms X^aY^b have been discarded if a + b > 1. We have two cases: L may be 0, or it may be the equation of a line. In the first case the (Zariski) tangent space to C at (0,0) is the whole plane, considered as a two-dimensional affine space. In the second case, the tangent space is that line, considered as affine space. (The question of the origin comes up when we take P as a general point on C; it is better to say 'affine space' and then note that P is a natural origin, rather than insist directly that it is a vector space.) It is easy to see that over the real field we can obtain L in terms of the first partial derivatives of F. When both are 0 at P, we have a singular point (double point, cusp or something more complicated). The general definition is that singular points of C are the cases when the tangent space has dimension 2. Definition The cotangent space of a local ring R, with maximal ideal 𝔪, is defined to be 𝔪/𝔪², where 𝔪² is given by the product of ideals. It is a vector space over the residue field k := R/𝔪. Its dual (as a k-vector space) is called the tangent space of R. This definition is a generalization of the above example to higher dimensions: suppose given an affine algebraic variety V and a point v of V. Morally, modding out by 𝔪² corresponds to dropping the non-linear terms from the equations defining V inside some affine space, therefore giving a system of linear equations that define the tangent sp
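The linearisation step in the motivation above can be made concrete. In the sketch below, a polynomial F(X,Y) is represented as a dictionary from exponent pairs (a, b) to coefficients (an illustrative encoding, not standard notation), and the Zariski tangent space at the origin is read off from the terms with a + b ≤ 1:

```python
def linear_part(terms):
    """terms: dict mapping exponent pairs (a, b) of X^a Y^b to coefficients.
    Return the terms of total degree a + b <= 1: the linearisation L at (0,0)."""
    return {m: c for m, c in terms.items() if sum(m) <= 1 and c != 0}

# Smooth point: F = Y - X^2 has linear part {Y: 1}, i.e. the tangent line Y = 0.
parabola = {(0, 1): 1, (2, 0): -1}
print(linear_part(parabola))     # {(0, 1): 1}

# Singular point: F = Y^2 - X^2 - X^3 has linear part 0 (both first partials
# vanish at the origin), so the Zariski tangent space is the whole plane.
nodal_cubic = {(0, 2): 1, (2, 0): -1, (3, 0): -1}
print(linear_part(nodal_cubic))  # {}
```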
https://en.wikipedia.org/wiki/Elsimar%20M.%20Coutinho
Elsimar Metzker Coutinho (18 May 1930 – 17 August 2020) was a Brazilian scientist of Luso-Austrian descent, professor, gynecologist, and television personality, known as the "Prince of Itapoã" in the books of Jorge Amado, a reference to the Coutinho family's land in Itapoã, where Amado himself lived. Biography Coutinho was born in Pojuca, the son of Alaíde Metzker and of a landowner, politician and pharmacognosy professor of the Pharmacology School of Paraná (Escola de Farmácia do Paraná), who was appointed mayor of the municipality of Pojuca in the 1930s (during the military coup that placed Getúlio Vargas, a civilian, in the presidency). Coutinho was also the brother of Senator Alaor Coutinho and the artist Riolan Coutinho. Coutinho triangulated between South America, North America and Europe on a regular basis. Coutinho was a descendant of a branch of the Portuguese Coutinho family which settled lands in South America when they married into the family of Sebastião José de Carvalho e Melo, Marquis of Pombal. Coutinho completed his early education in Pojuca, followed by high school at the Colégio Estadual da Bahia (Central). The family arrived in the city of Salvador when his father decided to develop a vast expanse of land in a neighbourhood of the city known as Itapoã, made famous by the books of Jorge Amado and the music of Dorival Caymmi. To this day the family still owns large sections of the beachfront. The former Coutinho lands are the triangle between what is known today as Dorival Caymmi square to the Pedra do Sal, up to and including the Abaetê Lagoon. Later Coutinho studied pharmacy at the Faculdade de Farmácia e Bioquímica, graduating in 1951, followed by a course in medicine, completed in 1956, both at the Federal University of Bahia (UFBA). Coutinho then attended a graduate course at the Sorbonne in Paris. There he met his wife Micheline Charlotte, a descendant of the aristocratic Barlatier de Mas and Peghoux families. They had three childr
https://en.wikipedia.org/wiki/Kosher%20Check
Kosher Check is a hechsher of the Orthodox Rabbinical Council of British Columbia. Its symbol is used on labels of food which are certified kosher by the Council. Kosher Check is headquartered in Vancouver, British Columbia. Kosher Check has an international network of regional coordinators and rabbinic representatives, all of them strictly Orthodox in their personal practice. Regional coordinators are based in Asia, Europe, and North America. The Orthodox Rabbinical Council of British Columbia is recommended as a kosher certifying agency by the Chicago Rabbinical Council (CRC). In November 2013, BC Kosher was renamed Kosher Check with the tagline "Kosher Checked. Globally Accepted," and the new symbol was introduced. The hechsher is only available to manufacturers of food that have enhanced food-safety protocols. Today Kosher Check certifies thousands of products produced by manufacturers all around the world. See also Food safety Hechsher Kashrut Orthodox Judaism
https://en.wikipedia.org/wiki/Side%20population
A side population (SP) in flow cytometry is a sub-population of cells that is distinct from the main population on the basis of the markers employed. By definition, cells in a side population have distinguishing biological characteristics (for example, they may exhibit stem cell-like characteristics), but the exact nature of this distinction depends on the markers used in identifying the side population. Examples Side populations were first identified in hematopoietic stem cells by Dr. Margaret Goodell. SPs have been identified in hepatocellular carcinomas and may be the cells that efflux chemotherapy drugs, accounting for the resistance of cancer to chemotherapy. Recent studies on testicular stem cells indicate that more than 40% of the SP (defined in this case as cells that show higher efflux of DNA-binding dye Hoechst 33342) was undifferentiated spermatogonia, while other differentiated fractions were represented by only 0.2%. SP cells can rapidly efflux lipophilic fluorescent dyes to produce a characteristic profile based on fluorescence-activated flow cytometric analysis. Previous studies have demonstrated SP cells in bone marrow obtained from patients with acute myeloid leukemia, suggesting that these cells might be candidate leukemic stem cells, and recent studies have found a SP of tumor progenitor cells in human solid tumors. These new data indicate that the ability of malignant SP cells to expel anticancer drugs may directly improve their survival and sustain their clonogenicity during exposure to cytostatic drugs, allowing disease recurrence when therapy is withdrawn. Identification of a tumor progenitor population with intrinsic mechanisms for cytostatic drug resistance might also provide clues for improved therapeutic intervention. The molecules involved in effluxing Hoechst 33342 are members of the ATP-binding cassette family, such as MDR1 (P-glycoprotein) and ABCG2.
https://en.wikipedia.org/wiki/Kit%20Parker
Kevin Kit Parker is a lieutenant colonel in the United States Army Reserve and the Tarr Family Professor of Bioengineering and Applied Physics at Harvard University. His research includes cardiac cell biology and tissue engineering, traumatic brain injury, and biological applications of micro- and nanotechnologies. Additional work in his laboratory has included fashion design, marine biology, and the application of counterinsurgency methods to countering transnational organized crime. Early life and education Parker attended Boston University's College of Engineering and graduated in 1989. He earned a Master of Science degree in 1993 and a doctoral degree in applied physics in 1998 from Vanderbilt University. Military career Parker is a paratrooper who has served in the United States Army since 1992. After the September 11 attacks, he served two tours of duty in Afghanistan. In addition to his combat tours, Parker conducted two missions into Afghanistan as part of the Gray Team in 2011. Civilian career Initially, at Harvard, the focus of his research was heart muscle cells. He turned to traumatic brain injury in 2005 after realizing that an Army friend of his, who had been injured in an IED blast in Iraq in 2005, was suffering from an undiagnosed medical condition rather than a psychological problem. Parker's other research includes designing camouflage using skin cells of cuttlefish and the use of a cotton candy machine to make dressings for wounds. Parker served on the Defense Science Research Council for nearly a decade and on the Defense Science Board Task Force on Autonomy, and has consulted for other US government agencies as well as the medical device and pharmaceutical industries. In 2011, Parker headed Harvard's committee for reintroducing ROTC at the university. In July 2016, it was announced that the Disease Biophysics Group at Harvard, led by Kit Parker, had created a tissue-engineered soft-robotic ray that swims using wave-like fin motions, and turns acc
https://en.wikipedia.org/wiki/The%20Imitation%20Game
The Imitation Game is a 2014 period biographical thriller film directed by Morten Tyldum and written by Graham Moore, based on the 1983 biography Alan Turing: The Enigma by Andrew Hodges. The film's title quotes the name of the game cryptanalyst Alan Turing proposed for answering the question "Can machines think?", in his 1950 seminal paper "Computing Machinery and Intelligence". The film stars Benedict Cumberbatch as Turing, who decrypted German intelligence messages for the British government during World War II. Keira Knightley, Matthew Goode, Rory Kinnear, Charles Dance, and Mark Strong appear in supporting roles. The Imitation Game was released theatrically in the United States on November 28, 2014. The film grossed over $233 million worldwide on a $14 million production budget, making it the highest-grossing independent film of 2014. It received eight nominations at the 87th Academy Awards, winning for Best Adapted Screenplay; five nominations at the 72nd Golden Globe Awards; and three nominations at the 21st Screen Actors Guild Awards. It also received nine BAFTA nominations, and won the People's Choice Award at the 39th Toronto International Film Festival. Plot In 1951, two policemen, Nock and Staehl, investigate the mathematician Alan Turing after an apparent break-in at his home. During his interrogation by Nock, Turing tells of his time working at Bletchley Park during the Second World War. In 1928, the young Turing is unhappy and bullied at boarding school. He develops a friendship with Christopher Morcom, who sparks his interest in cryptography. Turing develops romantic feelings for him, but Christopher soon dies from tuberculosis. When Britain declares war on Germany in 1939, Turing travels to Bletchley Park. Under the direction of Commander Alastair Denniston, he joins the cryptography team of Hugh Alexander, John Cairncross, Peter Hilton, Keith Furman, and Charles Richards. The team is trying to analyze the Enigma machine, which the Nazis use to
https://en.wikipedia.org/wiki/Stromquist%E2%80%93Woodall%20theorem
The Stromquist–Woodall theorem is a theorem in fair division and measure theory. Informally, it says that, for any cake, for any n people with different tastes, and for any fraction w, there exists a subset of the cake that all people value at exactly a fraction w of the total cake value, and that this subset can be cut using a number of cuts bounded in terms of n alone. The theorem is about a circular 1-dimensional cake (a "pie"). Formally, the cake can be described as the interval [0,1] in which the two endpoints are identified; call it C. There are n continuous measures over the cake, v_1, …, v_n; each measure represents the valuations of a different person over subsets of the cake. The theorem says that, for every weight w ∈ [0,1], there is a subset W ⊆ C which all people value at exactly w (that is, v_i(W) = w for every i), where W is a union of boundedly many intervals; a union of k intervals can be cut using 2k cuts. If the cake is not circular (that is, the endpoints are not identified), then W may require one additional interval, in case one interval is adjacent to 0 and another is adjacent to 1. Proof sketch Let S be the set of all weights for which the theorem is true. Then: 1 ∈ S. Proof: take W = C (recall that the value measures are normalized such that all partners value the entire cake as 1). If w ∈ S, then also 1 − w ∈ S. Proof: take the complement C \ W; if W is a union of intervals in a circle, then C \ W is also a union of intervals. S is a closed set. This is easy to prove, since the space of unions of intervals is a compact set under a suitable topology. If w ∈ S, then also w/2 ∈ S. This is the most interesting part of the proof; see below. From 1–4, it follows that S = [0,1]. In other words, the theorem is valid for every possible weight. Proof sketch for part 4 Assume that W is a union of intervals and that all partners value it as exactly w. Define a suitable map from the cake into a Euclidean space and push each partner's measure forward along it, so that for every partner the pushed-forward measure of the image of W equals w. Hence, by the Stone–Tukey theorem, there is a hyperplane that cuts the image into two half-spaces on which each pushed-forward measure is halved. Defining W_1 and W_2 as the parts of W on either side gives two pieces, each of which every partner values at exactly w/2.
https://en.wikipedia.org/wiki/Midphalangeal%20hair
Midphalangeal hair, or the presence/absence of hair on the middle phalanx of the ring finger, is one of the most widely studied markers in classical genetics of human populations. Although this polymorphism has been observed at other fingers as well, for this kind of research the fourth finger of the hand has conventionally been selected. Description In humans, hair is commonly present on all the basal segments of the digits and invariably absent from all the terminal ones. On the middle segments, there is wide fluctuation with apparent familial and racial tendencies. Hair is present on the middle segment of the fingers more frequently than on the middle segment of the toes. Hair is most often found on the middle segment of the fourth finger. History Willier (1974) quoted Danforth as stating that 'the hair follicle is a kind of biological microcosm in which almost any problem relating to growth, differentiation, decline and rejuvenescence of tissue can be studied to advantage....' While riding on a streetcar in Wilkes-Barre one summer, Danforth observed, in his own words, that 'a man in front of me draped his arm over the back of the seat and I noticed that while his arm was very hairy the middle segments of his fingers were free of hair and so, I observed, were my own; but I knew this was not generally true.' So far as he was aware, no one before had recognized this variation as possibly hereditary. Inheritance The genetic determination of presence or absence of hair on the dorsal aspect of the middle phalanx was first suggested by Danforth (1921). From a study of 80 families with a total of 178 children, he suggested that 'a phylogenetically progressive loss of hair is brought about through the action of one or more recessive genes, or of one primary recessive gene with several modifying factors that regulate the distribution of hair when it is present.' Stated conversely, 'despite the fact that in evolutionary progress hair is disappearing from
https://en.wikipedia.org/wiki/Wiener%20deconvolution
In mathematics, Wiener deconvolution is an application of the Wiener filter to the noise problems inherent in deconvolution. It works in the frequency domain, attempting to minimize the impact of deconvolved noise at frequencies which have a poor signal-to-noise ratio. The Wiener deconvolution method has widespread use in image deconvolution applications, as the frequency spectrum of most visual images is fairly well behaved and may be estimated easily. Wiener deconvolution is named after Norbert Wiener. Definition Given a system y(t) = (h ∗ x)(t) + v(t), where ∗ denotes convolution and: x(t) is some original signal (unknown) at time t; h(t) is the known impulse response of a linear time-invariant system; v(t) is some unknown additive noise, independent of x(t); y(t) is our observed signal. Our goal is to find some g(t) so that we can estimate x(t) as x̂(t) = (g ∗ y)(t), where x̂(t) is an estimate of x(t) that minimizes the mean square error E|x(t) − x̂(t)|², with E denoting the expectation. The Wiener deconvolution filter provides such a g(t). The filter is most easily described in the frequency domain: G(f) = H*(f)S(f) / (|H(f)|²S(f) + N(f)), where: G(f) and H(f) are the Fourier transforms of g(t) and h(t); S(f) is the mean power spectral density of the original signal x(t); N(f) is the mean power spectral density of the noise v(t); X(f), Y(f), and X̂(f) are the Fourier transforms of x(t), y(t), and x̂(t), respectively; and the superscript * denotes complex conjugation. The filtering operation may either be carried out in the time-domain, as above, or in the frequency domain: X̂(f) = G(f)Y(f), followed by an inverse Fourier transform on X̂(f) to obtain x̂(t). Note that in the case of images, the arguments t and f above become two-dimensional; however, the result is the same. Interpretation The operation of the Wiener filter becomes apparent when the filter equation above is rewritten: G(f) = (1/H(f)) [ |H(f)|²SNR(f) / (|H(f)|²SNR(f) + 1) ]. Here, 1/H(f) is the inverse of the original system, SNR(f) = S(f)/N(f) is the signal-to-noise ratio, and |H(f)|²SNR(f) is the ratio of the pure filtered signal to noise spectral density. When there is zero noise (i.e. infinite signal-to-noise), the term inside the square brackets equals 1, which means that the
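A minimal numerical sketch of the definition above (the signal, blur kernel, and noise level are all arbitrary illustrative choices): a known kernel h blurs a signal, white noise is added, and the Wiener filter G = H*S/(|H|²S + N) is compared against naive inverse filtering 1/H, which amplifies noise at frequencies where |H(f)| is small:

```python
import numpy as np

rng = np.random.default_rng(0)

# Original signal: a few steps; blur kernel h: a 9-point moving average.
n = 256
x = np.zeros(n); x[60:120] = 1.0; x[150:170] = -0.5
h = np.zeros(n); h[:9] = 1 / 9.0
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))  # circular convolution
y += 0.01 * rng.standard_normal(n)                        # additive noise v

# Wiener deconvolution filter G = conj(H) S / (|H|^2 S + N), using the
# (here idealised as known) power spectra S of the signal and N of the noise.
H = np.fft.fft(h)
S = np.abs(np.fft.fft(x)) ** 2          # idealised: true signal spectrum
N = np.full(n, (0.01 ** 2) * n)         # flat white-noise power spectrum
G = np.conj(H) * S / (np.abs(H) ** 2 * S + N)
x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))

# Naive inverse filtering for comparison: divide by H with no regularisation.
naive = np.real(np.fft.ifft(np.fft.fft(y) / H))
print(np.mean((x_hat - x) ** 2) < np.mean((naive - x) ** 2))  # True
```

In practice S(f) is not known and must be estimated, e.g. from the statistics of similar signals; that estimate is what makes the method practical for images.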
https://en.wikipedia.org/wiki/Tempo%20giusto
Tempo giusto is a musical term that literally means 'in correct time'. General In the 17th and 18th centuries (Baroque and early Classical), tempo giusto referred to the idea that each meter has its own 'ideal' tempo; this was also referred to as tempo ordinario (ordinary time). The larger the beat value of the meter, the slower the tempo. Therefore, meters with beat values of a minim/half note should be performed at a slow tempo; those with quaver/eighth note beats are fast; while those with crotchet/quarter note beats are performed at a moderate or middling tempo. This convention started in Italy in the 1600s (seicento) and continued in Germany in the 1700s, as theorized by Friedrich Wilhelm Marpurg (1755) and Johann Kirnberger (1776; see below). Conventions existed for what the "correct" tempo for a particular style was, notably detailed for French dances by Michel L'Affilard (1691–1717). The composer and music theorist Johann Kirnberger (1776) formalized and refined this idea by instructing the performer to consider the following details in combination when determining the best performance tempo of a piece: the tempo giusto of the meter; the tempo term (Allegro, Adagio, etc., if there is one, at the start of the piece); the particular rhythms in the piece (taking account of the longest and shortest notes); the 'character' of the piece; and the piece's genre (whether it is a minuet, sarabande, gigue, etc.). In this way, an experienced musician could rely on his or her (informed) intuition to find the 'right' tempo. Occasionally, a composer will mark a piece tempo giusto to request that the performer use his or her experience in this way: that is, to intuit the correct tempo from the structure and nature of the piece itself. From the mid-18th century, the notion of each meter having an 'ideal' tempo fell out of fashion, as composers started preferring to indicate tempo with tempo terms and (later, in the nineteenth century) with
https://en.wikipedia.org/wiki/HumGen
The Centre of Genomics and Policy (previously the HumGen team) is affiliated with McGill University and the Genome Innovation Centre Canada. The Centre was launched to respond to the urgent need for informed public policy and analyses on socio-ethical issues related to human genetics research at the international, national, and provincial levels. Its website provides policy makers and the public with access to policy statements concerning genetic research. History Located within the McGill Genome Centre, the Centre of Genomics and Policy (CGP) works at the crossroads of law, medicine, and public policy. Applying a multidisciplinary perspective and collaborating with national and international partners, the CGP analyzes the socio-ethical and legal norms influencing the promotion, prevention and protection of human health. Currently, the CGP's research covers several areas of genomics and policy, including: stem cell research and therapies, personalized medicine, prevention and treatment of cancer, data sharing in research, pediatrics, genetic counselling, digital health and AI, intellectual property and open science, epigenetics, intersex and diversity in health, gene editing, genetic discrimination, and biobanking (population genetics). These domains are approached using three guiding foundations: internationalization, policy development and knowledge transfer. First, the CGP promotes internationalization by undertaking comparative analyses of policies and guidelines around the world. Secondly, the CGP actively participates in the creation of international consortia with a view to promoting multidisciplinary policymaking. Finally, via its numerous workshops and lecture series, the CGP encourages knowledge transfer. The CGP team was formerly affiliated with the Centre de recherche en droit public (CRDP) of the Université de Montréal under the name Genetics and Society Project. HumGen.org Faced with rapid advances in human genetics
https://en.wikipedia.org/wiki/Antiphonary%20of%20St.%20Benigne
The Antiphonary tonary missal of St. Benigne (also called Antiphonarium Codex Montpellier or Tonary of Saint-Bénigne of Dijon) was written in the last years of the 10th century, when the Abbot William of Volpiano at St. Benignus of Dijon reformed the liturgy of several monasteries in Burgundy. The chant manuscript records mainly Western plainchant of the Roman-Frankish proper Mass and part of the chant sung during the matins ("Gregorian chant"), but unlike the common form of the Gradual and of the Antiphonary, William organized his manuscript according to chant genre (antiphons with psalmody, alleluia verses, graduals, offertories, and proses for the missal part), and these sections were subdivided into eight parts according to the octoechos. This disposition followed the order of a tonary, but William of Volpiano wrote not only the incipits of the classified chant: he wrote the complete chant text with the music in central French neumes, which were still written in campo aperto, and added a second, alphabetic notation of his own invention for the melodic structure of the codified chant. Historical background This particular type of fully notated tonary only appeared in Burgundy and Normandy. It can be regarded as a characteristic document of a certain school founded by William of Volpiano, who had been reforming abbot at St. Benignus of Dijon since 989. In 1001 he followed a request by Duke Richard II and became the first abbot of the Abbey of Fécamp, which was a reforming centre of monasticism in Normandy. Only this manuscript was written during the time of William, by the same hand as several other manuscripts of the Library of the Medical Faculty of Montpellier (today the "Bibliothèque interuniversitaire de Médecine"), all of which belong to St. Bénigne. It is not known whether it was really written by William of Volpiano in person. The few things known about him can be read in a hagiographic source, the Vita domni Willelmi in fourteen chapters, written by his disciple, the
https://en.wikipedia.org/wiki/Mung%20bean
The mung bean (Vigna radiata), alternatively known as the green gram, maash, mūng, monggo, đậu xanh (Vietnamese; literally "green bean"), kacang hijau (Indonesian; literally "green bean") or munggo (Philippines), is a plant species in the legume family. The mung bean is mainly cultivated in East, Southeast and South Asia. It is used as an ingredient in both savoury and sweet dishes. Description The green gram is an annual vine with yellow flowers and fuzzy brown pods. The English word mung originated from the Hindi word mūng, which is derived from Sanskrit. Morphology The mung bean is sometimes confused with the black gram (Vigna mungo) because of their similar morphology, though they are two different species. There are three subgroups of Vigna radiata: one cultivated (Vigna radiata subsp. radiata) and two wild (Vigna radiata subsp. sublobata and Vigna radiata subsp. glabra). The plant has a height of about 15–125 cm and a well-developed root system. The lateral roots are many and slender, with root nodules. Stems are much branched, sometimes twining at the tips. Young stems are purple or green, and mature stems are grayish yellow or brown. They can be divided into erect cespitose, semi-trailing and trailing types. Wild types tend to be prostrate while cultivated types are more erect. Leaves are ovoid or broad-ovoid; cotyledons die after emergence, and ternate leaves are produced after two single leaves. The leaves are 6–12 cm long and 5–10 cm wide. Racemes with yellow flowers are borne in the axils and tips of the leaves, with 10–25 self-pollinated flowers per pedicel. The fruits are elongated cylindrical or flat cylindrical pods, usually 30–50 per plant. The pods are 5–10 cm long and 0.4–0.6 cm wide and contain 12–14 septum-separated seeds, which can be either cylindrica
https://en.wikipedia.org/wiki/Strasbourg%20Institute%20of%20Material%20Physics%20and%20Chemistry
The Strasbourg Institute of Material Physics and Chemistry (IPCMS—) is a joint research unit between the French National Center for Scientific Research (CNRS) and the University of Strasbourg. It was founded in 1987 and is located in the district of Cronenbourg in Strasbourg, France. History The IPCMS was born from a reflection initiated in the early eighties on the need to refocus and coordinate research in the physics and chemistry of condensed matter and materials. In the context of the then emerging Materials Center in Strasbourg, a first reorganization project for condensed matter physics was formalized in 1983. Then, in the same years, the strategic importance of materials for innovation is recognized, justifying the extension of the initial project to chemists, to constitute the backbone of the future institute by bringing together physicists and chemists on the objective of designing and studying new materials (metals, ceramics, ...) for their electronic properties (magnetic, optical, dielectric, etc.). CNRS-ULP-EHICS joint unit, the IPCMS is officially created in 1987 with François Gautier as Director and Jean-Claude Bernier as deputy director. Originally located on five different sites of the University of Strasbourg, it was in 1994 that members of the IPCMS were grouped together in the current building on the Campus of Cronenbourg. The IPCMS is then organized into five research groups around three types of materials - polymers and organic materials, metallic materials, ceramics and inorganic materials - and two topics of study: nonlinear optics and optoelectronics on one hand, surfaces and interfaces on the other hand. Research The multi-disciplinary nature of the IPCMS is expressed by leading activities in spin electronics, magnetism, ultra-fast optics, electron microscopy and local probes, biomaterials as well as in the synthesis and characterization of functional organic, inorganic or hybrid materials. All scales are considered from the isolated mo
https://en.wikipedia.org/wiki/Enteritis
Enteritis is inflammation of the small intestine. It is most commonly caused by food or drink contaminated with pathogenic microbes, such as Serratia, but may have other causes such as NSAIDs, radiation therapy as well as autoimmune conditions like Crohn's disease and celiac disease. Symptoms include abdominal pain, cramping, diarrhea, dehydration, and fever. Related diseases of the gastrointestinal system include inflammation of the stomach and large intestine. Duodenitis, jejunitis and ileitis are subtypes of enteritis which are localised to a specific part of the small intestine. Inflammation of both the stomach and small intestine is referred to as gastroenteritis. Signs and symptoms Signs and symptoms of enteritis are highly variable and vary based on the specific cause and other factors such as individual variance and stage of disease. Symptoms may include abdominal pain, cramping, diarrhea, dehydration, fever, nausea, vomiting and weight loss. Causes Autoimmune Crohn's disease – also known as regional enteritis, it can occur along any surface of the gastrointestinal tract. In 40% of cases, it is limited to the small intestine. Coeliac disease – caused by an autoimmune reaction to gluten by genetically predisposed individuals. Eosinophilic enteropathy – a condition where eosinophils build up in the gastrointestinal tract and blood vessels, leading to polyp formation, necrosis, inflammation and ulcers. It is most commonly seen in patients with a history of atopy, however is overall relatively uncommon. Infectious enteritis In Germany, 90% of cases of infectious enteritis are caused by four pathogens, Norovirus, Rotavirus, Campylobacter and Salmonella. Other common causes of infectious enteritis include bacteria such as Shigella and E. coli, as well as viruses such as adenovirus, astrovirus and calicivirus. Other less common pathogens include Bacillus cereus, Clostridium perfringens, Clostridium difficile and Staphylococcus aureus. Campylobacter jejun
https://en.wikipedia.org/wiki/Assay
An assay is an investigative (analytic) procedure in laboratory medicine, mining, pharmacology, environmental biology and molecular biology for qualitatively assessing or quantitatively measuring the presence, amount, or functional activity of a target entity. The measured entity is often called the analyte, the measurand, or the target of the assay. The analyte can be a drug, biochemical substance, chemical element or compound, or cell in an organism or organic sample. An assay usually aims to measure an analyte's intensive property and express it in the relevant measurement unit (e.g. molarity, density, functional activity in enzyme international units, degree of effect in comparison to a standard, etc.). If the assay involves exogenous reactants (the reagents), then their quantities are kept fixed (or in excess) so that the quantity and quality of the target are the only limiting factors. The difference in the assay outcome is used to deduce the unknown quality or quantity of the target in question. Some assays (e.g., biochemical assays) may be similar to chemical analysis and titration. However, assays typically involve biological material or phenomena that are intrinsically more complex in composition or behavior, or both. Thus, reading of an assay may be noisy and involve greater difficulties in interpretation than an accurate chemical titration. On the other hand, older generation qualitative assays, especially bioassays, may be much more gross and less quantitative (e.g., counting death or dysfunction of an organism or cells in a population, or some descriptive change in some body part of a group of animals). Assays have become a routine part of modern medical, environmental, pharmaceutical, and forensic technology. Other businesses may also employ them at the industrial, curbside, or field levels. Assays in high commercial demand have been well investigated in research and development sectors of professional industries. They have also undergone generatio
https://en.wikipedia.org/wiki/Freeze%20%28software%20engineering%29
In software engineering, a freeze is a point in time in the development process after which the rules for making changes to the source code or related resources become more strict, or the period during which those rules are applied. A freeze helps move the project forward towards a release or the end of an iteration by reducing the scale or frequency of changes, and may be used to help meet a roadmap. The exact rules depend on the type of freeze and the particular development process in use; for example, they may include only allowing changes which fix bugs, or allowing changes only after thorough review by other members of the development team. They may also specify what happens if a change contrary to the rules is required, such as restarting the freeze period. Common types of freezes are: A (complete) specification freeze, in which the parties involved decide not to add any new requirement, specification, or feature to the feature list of a software project, so as to begin coding work. A (complete) feature freeze, in which all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience. The addition of new features may have a disruptive effect on other parts of the program, due both to the introduction of new, untested source code or resources and to interactions with other features; thus, a feature freeze helps improve the program's stability. For example: "user interface feature freeze" means no more features will be permitted to the user interface portion of the code; bugs can still be fixed. A (complete) code freeze, in which no changes whatsoever are permitted to a portion or the entirety of the program's source code. Particularly in large software systems, any change to the source code may have unintended consequences, potentially introducing new bugs; thus, a code freeze helps ensure that a portion of the program that is known to work correctly will continue to do so. Code freezes are often emplo
https://en.wikipedia.org/wiki/Extended%20ASCII
Extended ASCII is a repertoire of character encodings that include (most of) the original 96 ASCII character set, plus up to 128 additional characters. There is no formal definition of "extended ASCII", and even use of the term is sometimes criticized, because it can be mistakenly interpreted to mean that the American National Standards Institute (ANSI) had updated its standard to include more characters, or that the term identifies a single unambiguous encoding, neither of which is the case. The ISO standard ISO 8859 was the first international standard to formalise a (limited) expansion of the ASCII character set: of the many language variants it encoded, ISO 8859-1 ("ISO Latin 1"), which supports most Western European languages, is best known in the West. There are many other extended ASCII encodings (more than 220 DOS and Windows codepages). EBCDIC ("the other" major character code) likewise developed many extended variants (more than 186 EBCDIC codepages) over the decades. All modern operating systems use Unicode, which supports thousands of characters. However, extended ASCII remains important in the history of computing, and supporting multiple extended ASCII character sets required software to be written in ways that made it much easier to support the UTF-8 encoding method later on. History ASCII was designed in the 1960s for teleprinters and telegraphy, and some computing. Early teleprinters were electromechanical, having no microprocessor and just enough electromechanical memory to function. They fully processed one character at a time, returning to an idle state immediately afterward; this meant that any control sequences had to be only one character long, and thus a large number of codes needed to be reserved for such controls. They were typewriter-derived impact printers, and could only print a fixed set of glyphs, which were cast into a metal type element or elements; this also encouraged a minimum set of glyphs. Seven-bit ASCII improved over prior
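The ambiguity described above is easy to demonstrate: the same high-bit byte decodes to different characters under different extended ASCII code pages, while the 7-bit range stays compatible with UTF-8. A small sketch (the code page names are Python's standard codec identifiers):

```python
# One byte above 0x7F means different things in different "extended ASCII"
# code pages -- there is no single unambiguous encoding.
raw = bytes([0x41, 0xE9])  # 'A' plus one high-bit byte

latin1 = raw.decode("latin-1")  # ISO 8859-1: 0xE9 -> 'é'
dos437 = raw.decode("cp437")    # original IBM PC code page: 0xE9 -> 'Θ'

print(latin1)  # Aé
print(dos437)  # AΘ

# The 7-bit ASCII range is byte-identical in UTF-8, which is one reason
# software already handling multiple extended ASCII sets adapted to it.
assert "A".encode("utf-8") == b"A"
```

Running the same bytes through a third code page (say `cp1252`) would give yet another reading, which is exactly why "extended ASCII" names a family rather than a single encoding.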
https://en.wikipedia.org/wiki/Van%20Genuchten%E2%80%93Gupta%20model
The Van Genuchten–Gupta model is an inverted S-curve applicable to crop yield and soil salinity relations. It is named after Martinus Theodore van Genuchten and Satyandra K. Gupta's work from the 1990s. Equation The mathematical expression is: Y = Ym / (1 + (C/C50)^P) where Y is the yield, Ym is the maximum yield of the model, C is the salt concentration of the soil, C50 is the C value at 50% yield, and P is an exponent to be found by optimization, maximizing the model's goodness of fit to the data. In the figure: Ym = 3.1, C50 = 12.4, P = 3.75. Alternative one As an alternative, the logistic S-function can be used. The mathematical expression is: where: with Y being the yield, Yn the minimum Y, Ym the maximum Y, X the salt concentration of the soil, while A, B and C are constants to be determined by optimization, maximizing the model's goodness of fit to the data. If the minimum Yn = 0, then the expression can be simplified to: In the figure: Ym = 3.43, Yn = 0.47, A = 0.112, B = -3.16, C = 1.42. Alternative two The third-degree or cubic regression also offers a useful alternative. The equation reads: with Y the yield, X the salt concentration of the soil, while A, B, C and D are constants to be determined by the regression. In the figure: A = 0.0017, B = 0.0604, C = 0.3874, D = 2.3788. These values were calculated with Microsoft Excel. The curvature is more pronounced than in the other models. See also Maas–Hoffman model
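The first expression can be sketched numerically. The functional form Y = Ym / (1 + (C/C50)^P) is the standard inverted S-curve associated with this model; the default parameter values below are the ones quoted for the figure, and the function name is chosen here for illustration:

```python
def vg_gupta_yield(c, ym=3.1, c50=12.4, p=3.75):
    """Inverted S-curve yield response: Y = Ym / (1 + (C/C50)**P).

    Defaults are the figure's fitted parameters; c is the soil salt
    concentration in the same units as c50."""
    return ym / (1.0 + (c / c50) ** p)

print(vg_gupta_yield(0.0))   # 3.1  -- maximum yield at zero salinity
print(vg_gupta_yield(12.4))  # 1.55 -- exactly half of Ym at C50, by construction
```

Note that Y(C50) = Ym/2 holds for any choice of P, which is what makes C50 a convenient fitted parameter: it is directly interpretable as the salinity at 50% yield.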
https://en.wikipedia.org/wiki/Marshall%20Hall%20%28mathematician%29
Marshall Hall Jr. (17 September 1910 – 4 July 1990) was an American mathematician who made significant contributions to group theory and combinatorics. Education and career Hall studied mathematics at Yale University, graduating in 1932. He studied for a year at Cambridge University under a Henry Fellowship working with G. H. Hardy. He returned to Yale to take his Ph.D. in 1936 under the supervision of Øystein Ore. He worked in Naval Intelligence during World War II, including six months in 1944 at Bletchley Park, the center of British wartime code breaking. In 1946 he took a position at Ohio State University. In 1959 he moved to the California Institute of Technology where, in 1973, he was named the first IBM Professor at Caltech, the first named chair in mathematics. After retiring from Caltech in 1981, he accepted a post at Emory University in 1985. Hall died in 1990 in London on his way to a conference to mark his 80th birthday. Contributions He wrote a number of papers of fundamental importance in group theory, including his solution of Burnside's problem for groups of exponent 6, showing that a finitely generated group in which the order of every element divides 6 must be finite. His work in combinatorics includes an important paper of 1943 on projective planes, which for many years was one of the most cited mathematics research papers. In this paper he constructed a family of non-Desarguesian planes which are known today as Hall planes. He also worked on block designs and coding theory. His classic book on group theory was well received when it came out and is still useful today. His book Combinatorial Theory came out in a second edition in 1986, published by John Wiley & Sons. He proposed Hall's conjecture on the differences between perfect squares and perfect cubes, which remains an open problem as of 2015. Publications 1943: "Projective Planes", Transactions of the American Mathematical Society 54(2): 229–77 1959: The Theory of Groups, Macmil
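Hall's conjecture concerns how small a nonzero difference |x^3 - y^2| can be relative to the size of x. A brute-force search (the helper name and search strategy here are illustrative, not from the source) recovers the classical smallest case |2^3 - 3^2| = 1:

```python
import math

def min_cube_square_gap(xmax):
    """Smallest nonzero |x^3 - y^2| for 2 <= x <= xmax, taking y as one of
    the two integers closest to x**1.5; exact coincidences x^3 == y^2
    (which occur when x is a perfect square) are skipped."""
    best = None
    for x in range(2, xmax + 1):
        y = math.isqrt(x ** 3)
        for cand in (y, y + 1):
            d = abs(x ** 3 - cand ** 2)
            if d != 0 and (best is None or d < best[0]):
                best = (d, x, cand)
    return best

print(min_cube_square_gap(1000))  # smallest gap found, with its (x, y) pair
```

The conjecture asserts, roughly, that such gaps cannot stay small: |x^3 - y^2| should grow at least like a power of x, and no counterexample is known.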
https://en.wikipedia.org/wiki/Debt%20service%20ratio
In economics and government finance, a country's debt service ratio is the ratio of its debt service payments (principal + interest) to its export earnings. A country's international finances are healthier when this ratio is low. For most countries the ratio is between 0 and 20%. In contrast to the debt service coverage ratio, which is calculated as income divided by debt, this ratio is inverted: it is calculated as debt service divided by the country's income from international trade, i.e., exports.
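The definition above reduces to a one-line calculation; the figures in the example are hypothetical, and the function name is chosen here for illustration:

```python
def debt_service_ratio(principal_payments, interest_payments, export_earnings):
    """Debt service ratio = (principal + interest) / export earnings,
    expressed as a percentage of export earnings."""
    return 100.0 * (principal_payments + interest_payments) / export_earnings

# Hypothetical country: $8 bn principal due, $4 bn interest, $100 bn exports.
print(debt_service_ratio(8e9, 4e9, 100e9))  # 12.0 -> inside the typical 0-20% band
```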
https://en.wikipedia.org/wiki/Pinch%20analysis
Pinch analysis is a methodology for minimising energy consumption of chemical processes by calculating thermodynamically feasible energy targets (or minimum energy consumption) and achieving them by optimising heat recovery systems, energy supply methods and process operating conditions. It is also known as process integration, heat integration, energy integration or pinch technology. The process data is represented as a set of energy flows, or streams, as a function of heat load (product of specific enthalpy and mass flow rate; SI unit W) against temperature (SI unit K). These data are combined for all the streams in the plant to give composite curves, one for all hot streams (releasing heat) and one for all cold streams (requiring heat). The point of closest approach between the hot and cold composite curves is the pinch point (or just pinch) with a hot stream pinch temperature and a cold stream pinch temperature. This is where the design is most constrained. Hence, by finding this point and starting the design there, the energy targets can be achieved using heat exchangers to recover heat between hot and cold streams in two separate systems, one for temperatures above pinch temperatures and one for temperatures below pinch temperatures. In practice, during the pinch analysis of an existing design, often cross-pinch exchanges of heat are found between a hot stream with its temperature above the pinch and a cold stream below the pinch. Removal of those exchangers by alternative matching makes the process reach its energy target. History In 1971, Ed Hohmann stated in his PhD that 'one can compute the least amount of hot and cold utilities required for a process without knowing the heat exchanger network that could accomplish it. One also can estimate the heat exchange area required'. In late 1977, Ph.D. student Bodo Linnhoff under the supervision of Dr John Flower at the University of Leeds showed the existence in many processes of a heat integration bottle
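The energy targets described above are classically computed with the problem table (temperature-interval) algorithm: shift hot and cold temperatures toward each other by half the minimum approach temperature, do a heat balance in each shifted-temperature interval, and cascade the surpluses downward; the most negative point of the cascade fixes the minimum hot utility. The sketch below assumes that method; the streams, CP values and ΔTmin are hypothetical illustration data, and `energy_targets` is a name chosen here, not a standard API:

```python
def energy_targets(hot, cold, dt_min):
    """Problem-table sketch. hot/cold: lists of (supply_T, target_T, CP in kW/K).
    Returns (minimum hot utility, minimum cold utility) in kW."""
    # Shift hot streams down and cold streams up by dt_min/2.
    shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp, +1) for ts, tt, cp in hot]
    shifted += [(ts + dt_min / 2, tt + dt_min / 2, cp, -1) for ts, tt, cp in cold]
    bounds = sorted({t for ts, tt, _, _ in shifted for t in (ts, tt)}, reverse=True)
    cascade, minimum = 0.0, 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        # Net CP of all streams present in the interval (hot +, cold -).
        net_cp = sum(sign * cp for ts, tt, cp, sign in shifted
                     if min(ts, tt) <= lo and max(ts, tt) >= hi)
        cascade += net_cp * (hi - lo)       # heat surplus (+) or deficit (-)
        minimum = min(minimum, cascade)
    q_hot = -minimum        # lift the cascade so no interval goes negative
    q_cold = cascade + q_hot
    return q_hot, q_cold

# One hot stream 150->60 C (CP = 2 kW/K), one cold stream 20->135 C (CP = 2), dTmin = 10 K.
print(energy_targets([(150, 60, 2.0)], [(20, 135, 2.0)], 10.0))  # (50.0, 0.0)
```

Here the cold stream needs 230 kW while the hot stream can supply only 180 kW, and the cascade correctly locates the 50 kW that must come from hot utility.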
https://en.wikipedia.org/wiki/Desargues%27s%20theorem
In projective geometry, Desargues's theorem, named after Girard Desargues, states: Two triangles are in perspective axially if and only if they are in perspective centrally. Denote the three vertices of one triangle by a, b and c, and those of the other by A, B and C. Axial perspectivity means that lines ab and AB meet in a point, lines ac and AC meet in a second point, and lines bc and BC meet in a third point, and that these three points all lie on a common line called the axis of perspectivity. Central perspectivity means that the three lines Aa, Bb and Cc are concurrent, at a point called the center of perspectivity. This intersection theorem is true in the usual Euclidean plane but special care needs to be taken in exceptional cases, as when a pair of sides are parallel, so that their "point of intersection" recedes to infinity. Commonly, to remove these exceptions, mathematicians "complete" the Euclidean plane by adding points at infinity, following Jean-Victor Poncelet. This results in a projective plane. Desargues's theorem is true for the real projective plane and for any projective space defined arithmetically from a field or division ring; that includes any projective space of dimension greater than two or in which Pappus's theorem holds. However, there are many "non-Desarguesian planes", in which Desargues's theorem is false. History Desargues never published this theorem, but it appeared in an appendix entitled Universal Method of M. Desargues for Using Perspective (Manière universelle de M. Desargues pour practiquer la perspective) to a practical book on the use of perspective published in 1648 by his friend and pupil Abraham Bosse (1602–1676). Coordinatization The importance of Desargues's theorem in abstract projective geometry is due especially to the fact that a projective space satisfies that theorem if and only if it is isomorphic to a projective space defined over a field or division ring. Projective versus affine spaces In an affine space such as the Euclidean
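The theorem can be checked numerically in homogeneous coordinates, where points and lines are both 3-vectors: the line through two points is their cross product, the intersection of two lines is their cross product, and three points are collinear exactly when their 3×3 determinant vanishes. The two triangles below are hypothetical, chosen to be in perspective from a center O:

```python
# Homogeneous-coordinate sketch of Desargues's theorem (integer arithmetic).
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(p, q, r):
    # Triple product p . (q x r): zero iff p, q, r are collinear as points.
    return sum(p[i] * cross(q, r)[i] for i in range(3))

O = (0, 0, 1)                               # center of perspectivity
a, b, c = (1, 0, 1), (0, 1, 1), (1, 1, 1)   # first triangle
A, B, C = (2, 0, 1), (0, 3, 1), (4, 4, 1)   # second triangle, radial from O

# Central perspectivity holds by construction: O, a, A collinear, etc.
assert det3(O, a, A) == 0 and det3(O, b, B) == 0 and det3(O, c, C) == 0

# Axial perspectivity: the intersections of corresponding sides are collinear.
p = cross(cross(a, b), cross(A, B))
q = cross(cross(a, c), cross(A, C))
r = cross(cross(b, c), cross(B, C))
print(det3(p, q, r))  # 0 -> p, q, r lie on the axis of perspectivity
```

Because everything here is integer arithmetic, the final determinant is exactly zero rather than merely small, which makes this a clean sanity check of the central-implies-axial direction.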
https://en.wikipedia.org/wiki/Corn%20syrup
Corn syrup is a food syrup which is made from the starch of corn (called maize in many countries) and contains varying amounts of sugars: glucose, maltose and higher oligosaccharides, depending on the grade. Corn syrup is used in foods to soften texture, add volume, prevent crystallization of sugar, and enhance flavor. Corn syrup is not the same as high-fructose corn syrup (HFCS), which is manufactured from corn syrup by converting a large proportion of its glucose into fructose using the enzyme D-xylose isomerase, thus producing a sweeter substance. The more general term glucose syrup is often used synonymously with corn syrup, since glucose syrup in the United States is most commonly made from corn starch. Technically, glucose syrup is any liquid starch hydrolysate of mono-, di-, and higher-saccharides and can be made from any source of starch: wheat, tapioca and potatoes are the most common other sources. Commercial preparation Historically, corn syrup was produced by combining corn starch with dilute hydrochloric acid, and then heating the mixture under pressure. The process was invented by the German chemist Gottlieb Kirchhoff in 1811. Currently, corn syrup is obtained through a multi-step bioprocess. First, the enzyme α-amylase is added to a mixture of corn starch and water. α-amylase is secreted by various species of the bacterium genus Bacillus and the enzyme is isolated from the liquid in which the bacteria were grown. The enzyme breaks down the starch into oligosaccharides, which are then broken into glucose molecules by adding the enzyme glucoamylase, known also as "γ-amylase". Glucoamylase is secreted by various species of the fungus Aspergillus; the enzyme is isolated from the liquid in which the fungus is grown. The glucose can then be transformed into fructose by passing the glucose through a column that is loaded with the enzyme D-xylose isomerase, an enzyme that is isolated from the growth medium of any of several bacteria. Corn syrup is produce
https://en.wikipedia.org/wiki/Watkins%20snark
In the mathematical field of graph theory, the Watkins snark is a snark with 50 vertices and 75 edges. It was discovered by John J. Watkins in 1989. As a snark, the Watkins graph is a connected, bridgeless cubic graph with chromatic index equal to 4. The Watkins snark is also non-planar and non-hamiltonian. It has book thickness 3 and queue number 2. Another well-known snark on 50 vertices is the Szekeres snark, the fifth known snark, discovered by George Szekeres in 1973. Edges [[1,2], [1,4], [1,15], [2,3], [2,8], [3,6], [3,37], [4,6], [4,7], [5,10], [5,11], [5,22], [6,9], [7,8], [7,12], [8,9], [9,14], [10,13], [10,17], [11,16], [11,18], [12,14], [12,33], [13,15], [13,16], [14,20], [15,21], [16,19], [17,18], [17,19], [18,30], [19,21], [20,24], [20,26], [21,50], [22,23], [22,27], [23,24], [23,25], [24,29], [25,26], [25,28], [26,31], [27,28], [27,48], [28,29], [29,31], [30,32], [30,36], [31,36], [32,34], [32,35], [33,34], [33,40], [34,41], [35,38], [35,40], [36,38], [37,39], [37,42], [38,41], [39,44], [39,46], [40,46], [41,46], [42,43], [42,45], [43,44], [43,49], [44,47], [45,47], [45,48], [47,50], [48,49], [49,50]]
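The stated counts can be verified directly from the edge list above: a snark must be cubic (3-regular), and here there are 50 vertices and 75 edges. A quick degree check:

```python
from collections import Counter

# Degree check on the Watkins snark edge list quoted in the article text.
edges = [(1,2),(1,4),(1,15),(2,3),(2,8),(3,6),(3,37),(4,6),(4,7),(5,10),
         (5,11),(5,22),(6,9),(7,8),(7,12),(8,9),(9,14),(10,13),(10,17),(11,16),
         (11,18),(12,14),(12,33),(13,15),(13,16),(14,20),(15,21),(16,19),(17,18),(17,19),
         (18,30),(19,21),(20,24),(20,26),(21,50),(22,23),(22,27),(23,24),(23,25),(24,29),
         (25,26),(25,28),(26,31),(27,28),(27,48),(28,29),(29,31),(30,32),(30,36),(31,36),
         (32,34),(32,35),(33,34),(33,40),(34,41),(35,38),(35,40),(36,38),(37,39),(37,42),
         (38,41),(39,44),(39,46),(40,46),(41,46),(42,43),(42,45),(43,44),(43,49),(44,47),
         (45,47),(45,48),(47,50),(48,49),(49,50)]

degree = Counter(v for e in edges for v in e)
print(len(degree), len(edges))   # 50 75
print(set(degree.values()))      # {3} -> every vertex has degree 3 (cubic)
```

Checking the harder snark properties (bridgelessness, chromatic index 4) would take a proper graph library, but 3-regularity on 50 vertices and 75 edges is immediate from the list.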
https://en.wikipedia.org/wiki/Endeavour%20Software%20Project%20Management
Endeavour Software Project Management is an open-source solution to manage large-scale enterprise software projects in an iterative and incremental development process. History Endeavour Software Project Management was founded in September 2008 with the intention to develop a solution for replacing expensive and complex project management systems that is easy to use, intuitive, and realistic by eliminating features considered unnecessary. In September 2009 the project was registered in SourceForge, and in April 2010 the project was included in SourceForge's blog with an average of 210 weekly downloads. Features The major features include support for the following software artifacts: Projects Use cases Iterations Project plans Change requests Defect tracking Test cases Test plans Task Actors Document management Project glossary Project Wiki Developer management Reports (assignments, defects, cumulative flow) SVN browser integration with Svenson Continuous Integration with Hudson Email notifications Fully internationalizable System requirements Endeavour Software Project Management can be deployed in any Java EE-compliant application server and any relational database running under a variety of different operating systems. Its cross-browser capability allows it to run in most popular web browsers. Usage Software project management Iterative and incremental development Use-case-driven Issue tracking Test-case management software Integrated wiki See also Project management software List of project management software Notes
https://en.wikipedia.org/wiki/Type%20II%20Cepheid
Type II Cepheids are variable stars which pulsate with periods typically between 1 and 50 days. They are population II stars: old, typically metal-poor, low-mass objects. Like all Cepheid variables, Type IIs exhibit a relationship between the star's luminosity and pulsation period, making them useful as standard candles for establishing distances where little other data is available. Longer-period Type II Cepheids, which are more luminous, have been detected beyond the Local Group in the galaxies NGC 5128 and NGC 4258. Classification Historically Type II Cepheids were called W Virginis variables, but are now divided into three subclasses based on the length of their period. Stars with periods between 1 and 4 days are of the BL Herculis subclass, and those with periods of 10–20 days belong to the W Virginis subclass. Stars with periods greater than 20 days, and usually alternating deep and shallow minima, belong to the RV Tauri subclass. RV Tauri variables are usually classified by a formal period from deep minimum to deep minimum, hence 40 days or more. The divisions between the types are not always clearcut or agreed. For example, the dividing line between BL Her and W Vir types is quoted at anything between 4 and 10 days, with no obvious division between the two. RV Tau variables may not have obvious alternating minima, while some W Vir stars do. Nevertheless, each type is thought to represent a distinct evolutionary stage, with BL Her stars being helium-core-burning objects moving from the horizontal branch towards the asymptotic giant branch (AGB), W Vir stars undergoing hydrogen or helium shell burning on a blue loop, and RV Tau stars being post-AGB objects at or near the end of nuclear fusion. RV Tau stars in particular show irregularities in their light curves, with slow variations in the brightness of both maxima and minima, variations in the period, intervals with little variation, and sometimes a temporary breakdown into chaotic behaviour. R Scuti has one
https://en.wikipedia.org/wiki/Indicative%20conditional
In natural languages, an indicative conditional is a conditional sentence such as "If Leona is at home, she isn't in Paris", whose grammatical form restricts it to discussing what could be true. Indicatives are typically defined in opposition to counterfactual conditionals, which have extra grammatical marking which allows them to discuss eventualities which are no longer possible. Indicatives are a major topic of research in philosophy of language, philosophical logic, and linguistics. Open questions include which logical operation indicatives denote, how such denotations could be composed from their grammatical form, and the implications of those denotations for areas including metaphysics, psychology of reasoning, and philosophy of mathematics. Formal analyses Early analyses identified indicative conditionals with the logical operation known as the material conditional. According to the material conditional analysis, an indicative "If A then B" is true unless A is true and B is not. Although this analysis covers many observed cases, it misses some crucial properties of actual conditional speech and reasoning. One problem for the material conditional analysis is that it allows indicatives to be true even when their antecedent and consequent are unrelated. For instance, the indicative "If Paris is in France then trout are fish" is intuitively strange since the location of Paris has nothing to do with the classification of trout. However, since its antecedent and the consequent are both true, the material conditional analysis treats it as a true statement. Similarly, the material conditional analysis treats conditionals with false antecedents as vacuously true. For instance, since Paris is not in Australia, the conditional "If Paris is in Australia, then trout are fish" would be treated as true on a material conditional analysis. These arguments have been taken to show that no truth-functional operator will suffice as a semantics for indicative conditionals. In
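The material conditional analysis and its two problem cases can be laid out as a truth table; the sketch below just encodes the standard definition (false only when the antecedent is true and the consequent false):

```python
from itertools import product

def material(a, b):
    """Material conditional: 'if A then B' is false only when A and not B."""
    return (not a) or b

for a, b in product([True, False], repeat=2):
    print(a, b, "->", material(a, b))

# The analysis counts unrelated-but-true conditionals as true
# ("If Paris is in France then trout are fish"):
assert material(True, True)
# ...and counts false-antecedent conditionals as vacuously true
# ("If Paris is in Australia then trout are fish"):
assert material(False, True) and material(False, False)
```

Both assertions hold by definition, which is exactly the point of the objection: truth-functionality alone cannot register the missing connection between antecedent and consequent.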
https://en.wikipedia.org/wiki/Environmental%20biotechnology
Environmental biotechnology is biotechnology that is applied to and used to study the natural environment. Environmental biotechnology could also imply that one tries to harness biological processes for commercial uses and exploitation. The International Society for Environmental Biotechnology defines environmental biotechnology as "the development, use and regulation of biological systems for remediation of contaminated environments (land, air, water), and for environment-friendly processes (green manufacturing technologies and sustainable development)". Environmental biotechnology can simply be described as "the optimal use of nature, in the form of plants, animals, bacteria, fungi and algae, to produce renewable energy, food and nutrients in a synergistic integrated cycle of profit making processes where the waste of each process becomes the feedstock for another process". Significance for agriculture, food security, climate change mitigation and adaptation and the MDGs The IAASTD has called for the advancement of small-scale agro-ecological farming systems and technology in order to achieve food security, climate change mitigation, climate change adaptation and the realisation of the Millennium Development Goals. Environmental biotechnology has been shown to play a significant role in agroecology in the form of zero waste agriculture and most significantly through the operation of over 15 million biogas digesters worldwide. Significance towards industrial biotechnology Consider the effluent of a starch plant that has mixed with a local water body such as a lake or pond. Huge deposits of starch are found there, which, with a few exceptions, are not easily degraded by microorganisms. Microorganisms from the polluted site are screened for genomic changes that allow them to degrade or utilize the starch better than other microbes of the same genus. The modified genes are then identified. The resultant genes are cloned into industrially significant microorg
https://en.wikipedia.org/wiki/Ahlswede%E2%80%93Daykin%20inequality
The Ahlswede–Daykin inequality, also known as the four functions theorem (or inequality), is a correlation-type inequality for four functions on a finite distributive lattice. It is a fundamental tool in statistical mechanics and probabilistic combinatorics (especially random graphs and the probabilistic method). The inequality states that if f1, f2, f3, f4 are nonnegative functions on a finite distributive lattice such that f1(x)f2(y) ≤ f3(x ∨ y)f4(x ∧ y) for all x, y in the lattice, then f1(X)f2(Y) ≤ f3(X ∨ Y)f4(X ∧ Y) for all subsets X, Y of the lattice, where f(S) = Σ_{s ∈ S} f(s), X ∨ Y = {x ∨ y : x ∈ X, y ∈ Y} and X ∧ Y = {x ∧ y : x ∈ X, y ∈ Y}. The Ahlswede–Daykin inequality can be used to provide a short proof of both the Holley inequality and the FKG inequality. It also implies the XYZ inequality. For a proof, see the original article. Generalizations The "four functions theorem" was independently generalized to 2k functions.
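The inequality can be brute-force checked on a tiny example. Below, the lattice is the subsets of {0, 1} (join = union, meet = intersection), and all four functions are taken equal to a product measure μ, for which the hypothesis f1(x)f2(y) ≤ f3(x ∨ y)f4(x ∧ y) holds with equality coordinate by coordinate; the helper names and the probabilities are illustrative choices, not from the source:

```python
from itertools import chain, combinations

GROUND = (0, 1)
ELEMS = [frozenset(s) for r in range(3) for s in combinations(GROUND, r)]
p = {0: 0.3, 1: 0.7}  # arbitrary marginals for the product measure

def mu(s):
    out = 1.0
    for i in GROUND:
        out *= p[i] if i in s else 1 - p[i]
    return out

def total(xs):
    # f(S) = sum of f over the family S, as in the statement of the theorem.
    return sum(mu(x) for x in xs)

def check(X, Y):
    join = {x | y for x in X for y in Y}   # X v Y, elementwise unions
    meet = {x & y for x in X for y in Y}   # X ^ Y, elementwise intersections
    return total(X) * total(Y) <= total(join) * total(meet) + 1e-12

families = list(chain.from_iterable(combinations(ELEMS, r) for r in range(1, 5)))
assert all(check(set(X), set(Y)) for X in families for Y in families)
print("conclusion verified on all nonempty families of subsets of {0, 1}")
```

With all four functions equal, the conclusion is exactly the Harris/FKG-style correlation inequality μ(X)μ(Y) ≤ μ(X ∨ Y)μ(X ∧ Y), which illustrates why the four functions theorem implies the FKG inequality.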
https://en.wikipedia.org/wiki/Convergence%20of%20measures
In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by convergence of measures, consider a sequence of measures μn on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits; for any error tolerance ε > 0 we require there be N sufficiently large for n ≥ N to ensure the 'difference' between μn and μ is smaller than ε. Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength. Three of the most common notions of convergence are described below. Informal descriptions This section attempts to provide a rough intuitive description of three notions of convergence, using terminology developed in calculus courses; this section is necessarily imprecise as well as inexact, and the reader should refer to the formal clarifications in subsequent sections. In particular, the descriptions here do not address the possibility that the measure of some sets could be infinite, or that the underlying space could exhibit pathological behavior, and additional technical assumptions are needed for some of the statements. The statements in this section are however all correct if μn is a sequence of probability measures on a Polish space. The various notions of convergence formalize the assertion that the 'average value' of each 'sufficiently nice' function f should converge: ∫ f dμn → ∫ f dμ. To formalize this requires a careful specification of the set of functions under consideration and how uniform the convergence should be. The notion of weak convergence requires this convergence to take place for every continuous bounded function f. This notion
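A minimal concrete illustration of weak convergence: the point masses δ_{1/n} converge weakly to δ_0, since for every bounded continuous f the 'average value' ∫ f dμn = f(1/n) tends to f(0). The helper below is an illustrative sketch, not a library function:

```python
import math

def integral_delta(f, point):
    """Integrating f against a point mass delta_{point} just evaluates f."""
    return f(point)

f = lambda x: math.cos(x) / (1 + x * x)   # a bounded continuous test function

vals = [integral_delta(f, 1 / n) for n in (1, 10, 100, 1000)]
print(vals, "->", integral_delta(f, 0.0))  # the integrals approach f(0) = 1

# Stronger notions fail here: mu_n({0}) = 0 for every n while mu({0}) = 1,
# so the sequence does not converge to delta_0 in total variation.
```

This is the standard example showing that weak convergence is strictly weaker than setwise or total-variation convergence: the limit behaves correctly against continuous test functions even though the measures disagree maximally on the single set {0}.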
https://en.wikipedia.org/wiki/Violacein
Violacein is a naturally-occurring bis-indole pigment with antibiotic (anti-bacterial, anti-viral, anti-fungal and anti-tumor) properties. Violacein is produced by several species of bacteria, including Chromobacterium violaceum, and gives these organisms their striking purple hues. Violacein has attracted increasing commercial interest, especially for industrial applications in cosmetics, medicines and fabrics. Biosynthesis Violacein is formed by enzymatic condensation of two tryptophan molecules, requiring the action of five proteins. The genes required for its production, vioABCDE, and the regulatory mechanisms employed have been studied within a small number of violacein-producing strains. Production of violacein is controlled by quorum sensing using acyl-homoserine lactones (AHLs). Only a few genera of bacteria have been reported to produce violacein. These include Chromobacterium, Duganella, Pseudoalteromonas, Janthinobacterium, Iodobacter, Rugamonas, and Massilia. Antibiotic activity Violacein is known to have diverse biological activities, including as a cytotoxic anticancer agent and antibacterial action against Staphylococcus aureus and other gram-positive pathogens. Determining the biological roles of this pigmented molecule has been of particular interest to researchers, and understanding violacein's function and mechanism of action is relevant to potential applications. Commercial production of violacein and related compounds has proven difficult, so improving fermentative yields of violacein is being pursued through genetic engineering and synthetic biology.
https://en.wikipedia.org/wiki/Criegee%20intermediate
A Criegee intermediate (also called a Criegee zwitterion or Criegee biradical) is a carbonyl oxide with two charge centers. These chemicals may react with sulfur dioxide and nitrogen oxides in the earth's atmosphere, and are implicated in the formation of aerosols, which are an important factor in controlling global climate. Criegee intermediates are also an important source of OH (hydroxyl radicals). OH radicals are the most important oxidant in the troposphere, and are important in controlling air quality and pollution. The formation of this sort of structure was first postulated in the 1950s by Rudolf Criegee, for whom it is named. It was not until 2012 that direct detection of such chemicals was reported. Infrared spectroscopy suggests the electronic structure has a substantially zwitterionic character rather than the biradical character that had previously been proposed. Formation Criegee intermediates are formed by the gas-phase reactions of alkenes and ozone in the earth's atmosphere. Ozone adds across the carbon–carbon double bond of the alkene to form a molozonide, which then decomposes to produce a carbonyl (RR'CO) and a carbonyl oxide. The latter is known as the Criegee intermediate. The alkene ozonolysis reaction is extremely exothermic, releasing about of excess energy. Therefore, the Criegee intermediates are formed with a large amount of internal energy. Removal When Criegee intermediates are formed, some portion of them will undergo prompt unimolecular decay, producing OH radicals and other products. However, they may instead become stabilized by interactions with other molecules or react with other chemicals to give different products. Criegee intermediates may be collisionally stabilized via collisions with other molecules in the atmosphere. These stabilized Criegee intermediates may then undergo thermal unimolecular decay to OH radicals and other products, or may undergo bimolecular reactions with other atmospheric species. In the o
https://en.wikipedia.org/wiki/Dark%20silicon
In the electronics industry, dark silicon is the amount of circuitry of an integrated circuit that cannot be powered-on at the nominal operating voltage for a given thermal design power (TDP) constraint. Dennard scaling would posit that as transistors get smaller, they become more efficient in proportion to the increase in number for a given area, but this scaling has broken down in recent years, meaning that increases in the efficiency of smaller transistors are not proportionate with the increase in their number. This discontinuation of scaling has led to sharp increases in power density that hamper powering-on all transistors simultaneously while keeping temperatures in a safe operating range. As of 2011, researchers from different groups have projected that, at 8 nm technology nodes, the amount of dark silicon may reach up to 50–80% depending upon the processor architecture, cooling technology, and application workloads. Dark silicon may be unavoidable even in server workloads with abundance of inherent client request-level parallelism. Challenges and opportunities The emergence of dark silicon introduces several challenges for the architecture, electronic design automation (EDA), and hardware-software co-design communities. These include the question of how best to utilize the plethora of transistors (with potentially many dark ones) when designing and managing energy-efficient on-chip many-core processors under peak power and thermal constraints. Architects have initiated several efforts to leverage dark silicon in designing application-specific and accelerator-rich architectures. Recently, researchers have explored how dark silicon exposes new challenges and opportunities for the EDA community. In particular, they have demonstrated thermal, reliability (soft error and aging), and process variation concerns for dark silicon many-core processors.
https://en.wikipedia.org/wiki/Jacek%20Malinowski
Jacek Malinowski (born May 20, 1959), is a Polish professor and mathematical logician, the editor-in-chief of Studia Logica, Head of the Department of Logic and Cognitive Science at the Polish Academy of Science, Warsaw, Poland, and Head of the Section of Logical Semiotics at Nicolaus Copernicus University in Torun, Poland. From 2000 to 2001 Malinowski was a fellow of the Alexander von Humboldt Foundation at the Institute of Philosophy, Humboldt University in Berlin. From 2002 to 2003 he was at the Institute of Computer Science, University of Leipzig with a Marie Curie Fellowship. From 2004 to 2005 he was a fellow of the Netherlands Institute of Advanced Studies - NIAS.
https://en.wikipedia.org/wiki/Hans%20Riesel
Hans Ivar Riesel (May 28, 1929 in Stockholm – December 21, 2014) was a Swedish mathematician who discovered the 18th known Mersenne prime in 1957, using the computer BESK: this prime is 2^3217 − 1 and consists of 969 digits. He held the record for the largest known prime from 1957 to 1961, when Alexander Hurwitz discovered a larger one. Riesel also discovered the Riesel numbers as well as developing the Lucas–Lehmer–Riesel test. After having worked at the Swedish Board for Computing Machinery, he was awarded his Ph.D. from Stockholm University in 1969 for his thesis Contributions to numerical number theory, and in the same year joined the Royal Institute of Technology as a senior lecturer and associate professor. Selected publications See also Riesel number Riesel Sieve
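The Lucas–Lehmer test, the basis of the Lucas–Lehmer–Riesel variant mentioned above, is simple enough to sketch. Verifying Riesel's prime 2^3217 − 1 this way takes only a moment on modern hardware:

```python
def lucas_lehmer(p):
    """Lucas–Lehmer test: for an odd prime p, 2**p - 1 is prime
    iff the residue s is 0 after p - 2 squaring steps."""
    m = (1 << p) - 1          # the Mersenne number 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Riesel's 1957 discovery: 2**3217 - 1 is prime, with 969 decimal digits
assert lucas_lehmer(3217)
assert len(str((1 << 3217) - 1)) == 969
```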
https://en.wikipedia.org/wiki/Health%20psychology
Health psychology is the study of psychological and behavioral processes in health, illness, and healthcare. The discipline is concerned with understanding how psychological, behavioral, and cultural factors contribute to physical health and illness. Psychological factors can affect health directly. For example, chronically occurring environmental stressors affecting the hypothalamic–pituitary–adrenal axis, cumulatively, can harm health. Behavioral factors can also affect a person's health. For example, certain behaviors can, over time, harm (smoking or consuming excessive amounts of alcohol) or enhance (engaging in exercise) health. Health psychologists take a biopsychosocial approach. In other words, health psychologists understand health to be the product not only of biological processes (e.g., a virus, tumor, etc.) but also of psychological (e.g., thoughts and beliefs), behavioral (e.g., habits), and social processes (e.g., socioeconomic status and ethnicity). By understanding psychological factors that influence health, and constructively applying that knowledge, health psychologists can improve health by working directly with individual patients or indirectly in large-scale public health programs. In addition, health psychologists can help train other healthcare professionals (e.g., physicians and nurses) to apply the knowledge the discipline has generated, when treating patients. Health psychologists work in a variety of settings: alongside other medical professionals in hospitals and clinics, in public health departments working on large-scale behavior change and health promotion programs, and in universities and medical schools where they teach and conduct research. Although its early beginnings can be traced to the field of clinical psychology, four different divisions within health psychology and one related field, occupational health psychology (OHP), have developed over time. The four divisions include clinical health psychology, public health psycho
https://en.wikipedia.org/wiki/Moving%20Image%20Source
Moving Image Source is a website of the Museum of the Moving Image (New York City) devoted to the history of film, television, and digital media. Made possible with support from the Hazen Polsky Foundation, it features original articles by leading critics, authors, and scholars; a calendar that highlights major retrospectives, festivals, and gallery exhibitions at venues around the world; and a regularly updated guide to online research resources. Film critic Dennis Lim currently serves as editor-in-chief. The launch of Moving Image Source was marked by a special program at The Times Center in Manhattan at 6:30 p.m. on June 5, featuring a conversation between directors Werner Herzog (Encounters at the End of the World, opening June 11) and Jonathan Demme (The Silence of the Lambs). Moving Image Source is updated every Thursday with additions to the Articles and Calendar sections. Articles The articles relate to recent and ongoing retrospectives and gallery exhibitions as well as to significant new DVDs and books on film, media, and moving-image culture. Pieces are accompanied by photographs, video clips, and sidebars offering suggestions for further viewing, reading, or listening. June contributors included critics and authors Melissa Anderson, Michael Atkinson, Joshua Clover, Tom Charity, Thomas Doherty, Chris Fujiwara, Ed Halter, B. Kite, Michael Koresky, Rob Nelson, Nick Pinkerton, Tony Rayns, Jonathan Rosenbaum, Dan Sallitt, and Ed Sikov. Topics included the career of Werner Herzog, Wallis-Hazen Productions, a reappraisal of the ’60s films of William Klein, the late Taiwanese filmmaker Edward Yang, video artist Eddo Stern, the late films of Howard Hawks, Japanese actor Tatsuya Nakadai, and the recent restoration of Max Ophüls’s Lola Montès. Calendar The Moving Image Source Calendar is a selective guide to major screenings, series, festivals, and gallery exhibitions. Calendar entries include program summaries, exhibition descriptions, titles of films or fe
https://en.wikipedia.org/wiki/Levator%20ani
The levator ani is a broad, thin muscle group, situated on either side of the pelvis. It is formed from three muscle components: the pubococcygeus, the iliococcygeus, and the puborectalis. It is attached to the inner surface of each side of the lesser pelvis, and these unite to form the greater part of the pelvic floor. The coccygeus muscle completes the pelvic floor, which is also called the pelvic diaphragm. It supports the viscera in the pelvic cavity, and surrounds the various structures that pass through it. The levator ani is the main pelvic floor muscle and contracts rhythmically during female orgasm, and painfully during vaginismus. Structure The levator ani is made up of 3 parts: Iliococcygeus muscle Pubococcygeus muscle Puborectalis muscle The iliococcygeus arises from the inner side of the ischium (the lower and back part of the hip bone) and from the posterior part of the tendinous arch of the obturator fascia, and is attached to the coccyx and anococcygeal body; it is usually thin, and may be absent, or be largely replaced by fibrous tissue. An accessory slip at its posterior part is sometimes named the iliosacralis. The pubococcygeus muscle has medial fibres forming the pubovaginalis in the female, and the puboprostaticus in the male. Origin and insertion The levator ani arises, in front, from the posterior surface of the superior pubic ramus lateral to the symphysis; behind, from the inner surface of the spine of the ischium; and between these two points, from the obturator fascia. Posteriorly, this fascial origin corresponds, more or less closely, with the tendinous arch of the pelvic fascia, but in front, the muscle arises from the fascia at a varying distance above the arch, in some cases reaching nearly as high as the canal for the obturator vessels and nerve. The fibers pass downward and backward to the middle line of the floor of the pelvis; the most posterior are inserted into the side of the last two segments of the coccyx; those p
https://en.wikipedia.org/wiki/Orthogonal%20transformation
In linear algebra, an orthogonal transformation is a linear transformation T : V → V on a real inner product space V that preserves the inner product. That is, for each pair u, v of elements of V, we have ⟨Tu, Tv⟩ = ⟨u, v⟩. Since the lengths of vectors and the angles between them are defined through the inner product, orthogonal transformations preserve lengths of vectors and angles between them. In particular, orthogonal transformations map orthonormal bases to orthonormal bases. Orthogonal transformations are injective: if Tv = 0 then 0 = ⟨Tv, Tv⟩ = ⟨v, v⟩, hence v = 0, so the kernel of T is trivial. Orthogonal transformations in two- or three-dimensional Euclidean space are rigid rotations, reflections, or combinations of a rotation and a reflection (also known as improper rotations). Reflections are transformations that reverse the direction front to back, orthogonal to the mirror plane, like (real-world) mirrors do. The matrices corresponding to proper rotations (without reflection) have a determinant of +1. Transformations with reflection are represented by matrices with a determinant of −1. This allows the concept of rotation and reflection to be generalized to higher dimensions. In finite-dimensional spaces, the matrix representation (with respect to an orthonormal basis) of an orthogonal transformation is an orthogonal matrix. Its rows are mutually orthogonal vectors with unit norm, so that the rows constitute an orthonormal basis of V. The columns of the matrix form another orthonormal basis of V. If an orthogonal transformation is invertible (which is always the case when V is finite-dimensional) then its inverse is another orthogonal transformation. Its matrix representation is the transpose of the matrix representation of the original transformation. Examples Consider the inner-product space with the standard Euclidean inner product and standard basis. Then, the matrix transformation is orthogonal. To see this, consider Then, The previous example can be extended to construct all orthogonal tran
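These properties are easy to verify numerically. A small sketch using NumPy, checking that a 2×2 rotation matrix preserves inner products and that its inverse is its transpose (the rotation angle 0.7 and the test vectors are arbitrary):

```python
import numpy as np

theta = 0.7                        # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

# The inner product is preserved: <Ru, Rv> = <u, v>
assert np.isclose((R @ u) @ (R @ v), u @ v)

# Rows and columns are orthonormal, so the inverse is the transpose
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)       # proper rotation: det = +1

F = np.array([[1.0, 0.0],
              [0.0, -1.0]])                    # reflection across the x-axis
assert np.isclose(np.linalg.det(F), -1.0)      # reflection: det = -1
assert np.isclose((F @ u) @ (F @ v), u @ v)    # still preserves inner products
```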
https://en.wikipedia.org/wiki/Catastrophe%20modeling
Catastrophe modeling (also known as cat modeling) is the process of using computer-assisted calculations to estimate the losses that could be sustained due to a catastrophic event such as a hurricane or earthquake. Cat modeling is especially applicable to analyzing risks in the insurance industry and is at the confluence of actuarial science, engineering, meteorology, and seismology. Catastrophes/ Perils Natural catastrophes (sometimes referred to as "nat cat") that are modeled include: Hurricane (main peril is wind damage; some models can also include storm surge and rainfall) Earthquake (main peril is ground shaking; some models can also include tsunami, fire following earthquakes, liquefaction, landslide, and sprinkler leakage damage) severe thunderstorm or severe convective storms (main sub-perils are tornado, straight-line winds and hail) Flood Extratropical cyclone (commonly referred to as European windstorm) Wildfire Winter storm Human catastrophes include: Terrorism events Warfare Casualty/liability events Forced displacement crises Cyber data breaches Lines of business modeled Cat modeling involves many lines of business, including: Personal property Commercial property Workers' compensation Automobile physical damage Limited liabilities Product liability Business Interruption Inputs, Outputs, and Use Cases The input into a typical cat modeling software package is information on the exposures being analyzed that are vulnerable to catastrophe risk. The exposure data can be categorized into three basic groups: Information on the site locations, referred to as geocoding data (street address, postal code, county/CRESTA zone, etc.) Information on the physical characteristics of the exposures (construction, occupation/occupancy, year built, number of stories, number of employees, etc.) Information on the financial terms of the insurance coverage (coverage value, limit, deductible, etc.) The output of a cat model is an estimate of
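The loss estimates a cat model produces are typically summarized from an event loss table into quantities such as the average annual loss (AAL) and exceedance probabilities. An illustrative sketch with invented rates and losses (the numbers, and the Poisson-arrival assumption for yearly exceedance, are purely for demonstration):

```python
import math

# Hypothetical event loss table: (annual rate of occurrence, loss if the
# event occurs).  All numbers are invented for illustration.
event_loss_table = [
    (0.02, 50_000_000),    # rare, severe event (e.g. a major hurricane)
    (0.10, 5_000_000),
    (0.25, 1_000_000),
]

# Average annual loss (AAL): rate-weighted sum of event losses
aal = sum(rate * loss for rate, loss in event_loss_table)
print(f"AAL = {aal:,.0f}")   # AAL = 1,750,000

# Occurrence exceedance probability: chance of at least one event with
# loss >= threshold in a year, assuming independent Poisson arrivals
def oep(threshold):
    total_rate = sum(r for r, l in event_loss_table if l >= threshold)
    return 1 - math.exp(-total_rate)

print(f"P(loss >= 5M in a year) = {oep(5_000_000):.3f}")
```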
https://en.wikipedia.org/wiki/Iatromathematicians
Iatromathematicians (from Greek ἰατρική "medicine" and μαθηματικά "mathematics") were a school of physicians in 17th-century Italy who tried to apply the laws of mathematics and mechanics in order to understand the functioning of the human body. They were also keen students of anatomy. These iatromathematicians made an effort to prove that applying a purely mechanical conception to the study of the human body is not futile. The mechanical conceptions they drew on were Leonardo da Vinci's studies of the human body and Aristotle's writings on the motion of animals in relation to geometric analysis. Iatromathematicians considered the body's functioning to be measurable by quantifiable numbers, weights, and measures. Iatromathematics The field of iatromathematics is allied to science; however, it lacks the applicability of the proper scientific method and is therefore considered a form of pseudoscience. It applies the study of astrology to medicine. Iatromathematicians viewed the human body through astrological reasoning as well as mechanics. They associated various stars, or zodiac signs, with the functioning of the human body. Each of the twelve astrological signs was assigned to a part of the body from head to toe. Moreover, the planets and other bodies in the cosmos were correlated with certain parts of the body. By examining a natal chart, iatromathematicians attempted to predict biological setbacks in an individual. Iatromathematicians examined the active and energetic temperament of the human body. Moreover, they explored the causes of various health problems and attempted to find ways to treat certain detrimental diseases. In iatromathematics, there is a particular assumption that various energetic fields act upon the star body. The star body of an individual is often referred to by astrologers as an energetic matrix and is believed to be spawned by heavenly bodies such as the sun, moon, planets, and several other astrological signs. Iatr
https://en.wikipedia.org/wiki/Extension%20%28music%29
In music, an extension is a set of musical notes that lie outside the standard range or tessitura. Staff A note that lies outside the lines of a musical staff is an extension of the staff. The note will lie on a ledger line. Middle C, for example, is an extension note on both treble and bass clefs, though it is not outside the grand staff. Soprano C and Deep C lie two ledger lines above treble and below bass respectively (as well as the grand staff). Instruments An instrumental extension is a range of playable notes outside the normal range of the instrument. A baritone horn, if played by a skillful player, can be played an octave above the normal range. Since this is not standard, these notes would be an extension. (See also: Crook (music)). With the bowed string instruments, lower pitches than the standard range are sometimes used through scordatura, in which the lowest string is tuned down a note or two. The double bass sometimes uses a C extension extending the range of the E string downwards to C. Some Bösendorfer pianos have extra keys, extending the range several notes lower than a standard 88-key piano. Voice In vocal performance, a singer's extension is all notes that are a part of the singer's vocal range that lie outside the singer's tessitura. This usually includes notes that a singer can hit, but does not use on a regular basis. For example, a coloratura soprano, as defined by range, will regularly sing in the whistle register. A standard mezzo-soprano has a range to the high F or G above middle C, but a mezzo-soprano with good head voice extension can rival the coloratura soprano in range. However, since her normal tessitura is mezzo-soprano (or under Soprano C), her abilities in the whistle register would be considered her extension. Although not commonly thought of, extension can be applied to the lower register as well. A baritone may actually be able to reach depths of low D or E below low C, but is more comfortable in the higher bariton
https://en.wikipedia.org/wiki/NPH%20insulin
Neutral Protamine Hagedorn (NPH) insulin, also known as isophane insulin, is an intermediate-acting insulin given to help control blood sugar levels in people with diabetes. It is used by injection under the skin once to twice a day. Onset of effects is typically in 90 minutes and they last for 24 hours. Versions are available that come premixed with a short-acting insulin, such as regular insulin. The common side effect is low blood sugar. Other side effects may include pain or skin changes at the sites of injection, low blood potassium, and allergic reactions. Use during pregnancy is relatively safe for the fetus. NPH insulin is made by mixing regular insulin and protamine in exact proportions with zinc and phenol such that a neutral pH is maintained and crystals form. There are human and pig insulin based versions. Protamine insulin was first created in 1936 and NPH insulin in 1946. It is on the World Health Organization's List of Essential Medicines. NPH is an abbreviation for "neutral protamine Hagedorn". In 2020, insulin isophane was the 221st most commonly prescribed medication in the United States, with more than 2 million prescriptions. In 2020, the combination of human insulin with insulin isophane was the 246th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses NPH insulin is cloudy and has an onset of 1–3 hours. Its peak is 6–8 hours and its duration is up to 24 hours. It has an intermediate duration of action, meaning longer than that of regular and rapid-acting insulin, and shorter than long-acting insulins (ultralente, glargine or detemir). A recent Cochrane systematic review compared the effects of NPH insulin to other insulin analogues (insulin detemir, insulin glargine, insulin degludec) in both children and adults with Type 1 diabetes. Insulin detemir appeared to provide a lower risk of severe hyperglycemia compared to NPH insulin, however this finding was inconsistent across included stu
https://en.wikipedia.org/wiki/Vanadyl%20ribonucleoside
Vanadyl ribonucleoside is a potent transition-state analog of ribonucleic acid and inhibitor of many species of ribonuclease, formed from a vanadium coordination complex and one ribonucleoside. Vanadium's [Ar] 3d³ 4s² electron configuration allows it to make five sigma bonds and two pi bonds with adjacent atoms. History RNA is notoriously unstable and vulnerable to ribonucleases, which has thus been an obstacle to the production and analysis of the cellular transcriptome. First referenced by Berger et al., the substance was used to prevent the digestion of RNA during isolation from white blood cells, and was rapidly adopted for such purposes as the acquisition of RNA from green beans. Production Vanadyl ribonucleoside is produced by combining vanadyl sulphate with various ribonucleosides (such as guanosine) in a 1:10 molar ratio. Use Vanadyl ribonucleoside, along with other RNase inhibitors, has been a staple of molecular biochemistry since its introduction, by allowing for the stability of RNA in its storage and use.
https://en.wikipedia.org/wiki/Sanov%27s%20theorem
In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, Sanov's theorem identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables. Let A be a set of probability distributions over an alphabet X, and let q be an arbitrary distribution over X (where q may or may not be in A). Suppose we draw n i.i.d. samples from q, represented by the vector x^n = (x_1, x_2, ..., x_n). Then, we have the following bound on the probability that the empirical measure of the samples falls within the set A: Prob(empirical measure of x^n ∈ A) ≤ (n + 1)^|X| · 2^(−n·D_KL(p* ‖ q)), where the probability is taken under q^n, the joint (i.i.d. product) probability distribution on X^n, and p* = argmin_{p ∈ A} D_KL(p ‖ q) is the information projection of q onto A. In words, the probability of drawing an atypical distribution is bounded by a function of the KL divergence from the true distribution to the atypical one; in the case that we consider a set of possible atypical distributions, there is a dominant atypical distribution, given by the information projection. Furthermore, if A is the closure of its interior,
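The bound is easy to check numerically for a binary alphabet. A sketch for q = Bernoulli(0.5) and the atypical set A = {Bernoulli(p) : p ≥ 0.7}, whose information projection is p* = 0.7 (the parameter choices here are illustrative):

```python
import math

def kl(p, q):
    """KL divergence D(Bernoulli(p) || Bernoulli(q)), in bits."""
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

n, q = 500, 0.5
# A = {Bernoulli(p) : p >= 0.7}; the information projection of q = 0.5
# onto A is its closest member, p* = 0.7.
p_star = 0.7

# Exact probability that the empirical frequency of heads lands in A
exact = sum(math.comb(n, k) * q**k * (1 - q)**(n - k)
            for k in range(math.ceil(n * p_star), n + 1))

# Sanov upper bound: (n+1)^|X| * 2^(-n * D(p* || q)), with |X| = 2 here
bound = (n + 1) ** 2 * 2 ** (-n * kl(p_star, q))

assert 0 < exact <= bound
print(f"exact = {exact:.3e}, bound = {bound:.3e}")
```

Both quantities decay exponentially in n at the rate D(p* ‖ q); the polynomial factor (n + 1)^|X| is what separates them.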
https://en.wikipedia.org/wiki/H%C3%A9non%20map
In mathematics, the Hénon map, sometimes called the Hénon–Pomeau attractor/map, is a discrete-time dynamical system. It is one of the most studied examples of dynamical systems that exhibit chaotic behavior. The Hénon map takes a point (x_n, y_n) in the plane and maps it to the new point x_{n+1} = 1 − a·x_n² + y_n, y_{n+1} = b·x_n. The map depends on two parameters, a and b, which for the classical Hénon map have values of a = 1.4 and b = 0.3. For the classical values the Hénon map is chaotic. For other values of a and b the map may be chaotic, intermittent, or converge to a periodic orbit. An overview of the type of behavior of the map at different parameter values may be obtained from its orbit diagram. The map was introduced by Michel Hénon as a simplified model of the Poincaré section of the Lorenz model. For the classical map, an initial point of the plane will either approach a set of points known as the Hénon strange attractor, or diverge to infinity. The Hénon attractor is a fractal, smooth in one direction and a Cantor set in another. Numerical estimates yield a correlation dimension of 1.21 ± 0.01 or 1.25 ± 0.02 (depending on the dimension of the embedding space) and a box-counting dimension of 1.261 ± 0.003 for the attractor of the classical map. Attractor The Hénon map maps two points into themselves: these are the invariant points. For the classical values of a and b of the Hénon map, one of these points is on the attractor, at approximately x = 0.631354, y = 0.189406. This point is unstable. Points close to this fixed point and along the slope 1.924 will approach the fixed point and points along the slope −0.156 will move away from the fixed point. These slopes arise from the linearizations of the stable manifold and unstable manifold of the fixed point. The unstable manifold of the fixed point in the attractor is contained in the strange attractor of the Hénon map. The Hénon map does not have a strange attractor for all values of the parameters a and b. For example, by keeping b fixed at 0.3 the bifurcation diagram shows that for a = 1.25 the Hénon map
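The map's behavior is easy to explore numerically. A short sketch iterating the classical map (a = 1.4, b = 0.3), checking that the orbit stays in the bounded attractor region and that the fixed point x = (−7 + √609)/28, y = 0.3x is indeed invariant:

```python
import math

def henon(x, y, a=1.4, b=0.3):
    """One iteration of the classical Hénon map."""
    return 1 - a * x * x + y, b * x

# Iterate from the origin; after a transient, the orbit settles onto the
# strange attractor, which lies inside a bounded region of the plane.
x, y = 0.0, 0.0
points = []
for i in range(10_000):
    x, y = henon(x, y)
    if i >= 100:                      # discard transient iterations
        points.append((x, y))
assert all(abs(px) < 1.5 and abs(py) < 0.45 for px, py in points)

# The unstable fixed point on the attractor solves 1.4x^2 + 0.7x - 1 = 0
fx = (-7 + math.sqrt(609)) / 28       # ≈ 0.631354
fy = 0.3 * fx                         # ≈ 0.189406
nx, ny = henon(fx, fy)
assert abs(nx - fx) < 1e-12 and abs(ny - fy) < 1e-12
```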
https://en.wikipedia.org/wiki/Molniya%20orbit
A Molniya orbit (, "Lightning") is a type of satellite orbit designed to provide communications and remote sensing coverage over high latitudes. It is a highly elliptical orbit with an inclination of 63.4 degrees, an argument of perigee of 270 degrees, and an orbital period of approximately half a sidereal day. The name comes from the Molniya satellites, a series of Soviet/Russian civilian and military communications satellites which have used this type of orbit since the mid-1960s. The Molniya orbit has a long dwell time over the hemisphere of interest, while moving very quickly over the other. In practice, this places it over either Russia or Canada for the majority of its orbit, providing a high angle of view to communications and monitoring satellites covering these high-latitude areas. Geostationary orbits, which are necessarily inclined over the equator, can only view these regions from a low angle, hampering performance. In practice, a satellite in a Molniya orbit serves the same purpose for high latitudes as a geostationary satellite does for equatorial regions, except that multiple satellites are required for continuous coverage. Satellites placed in Molniya orbits have been used for television broadcasting, telecommunications, military communications, relaying, weather monitoring, early warning systems and some classified purposes. History The Molniya orbit was discovered by Soviet scientists in the 1960s as a high-latitude communications alternative to geostationary orbits, which require large launch energies to achieve a high perigee and to change inclination to orbit over the equator (especially when launched from Russian latitudes). As a result, OKB-1 sought a less energy-demanding orbit. Studies found that this could be achieved using a highly elliptical orbit with an apogee over Russian territory. The orbit's name refers to the "lightning" speed with which the satellite passes through the perigee. The first use of the Molniya orbit was by the co
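The orbit's geometry follows from Kepler's third law. A sketch computing the semi-major axis for a period of half a sidereal day (constants rounded; the 600 km perigee altitude below is an assumed typical figure, not one stated in this article):

```python
import math

MU_EARTH = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86_164.1         # seconds

# Kepler's third law: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu * (T/(2*pi))**2)**(1/3)
T = SIDEREAL_DAY / 2            # Molniya period: half a sidereal day
a = (MU_EARTH * (T / (2 * math.pi)) ** 2) ** (1 / 3)
print(f"semi-major axis ≈ {a / 1000:,.0f} km")   # ≈ 26,560 km

# Assuming a perigee altitude near 600 km, the apogee follows from
# 2a = r_perigee + r_apogee:
R_EARTH = 6_371_000.0
r_apogee = 2 * a - (R_EARTH + 600_000.0)
print(f"apogee altitude ≈ {(r_apogee - R_EARTH) / 1000:,.0f} km")
```

The resulting apogee altitude, roughly 40,000 km over the hemisphere of interest, is what gives the orbit its long high-latitude dwell time.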
https://en.wikipedia.org/wiki/Strategy%20pattern
In computer programming, the strategy pattern (also known as the policy pattern) is a behavioral software design pattern that enables selecting an algorithm at runtime. Instead of implementing a single algorithm directly, code receives run-time instructions as to which in a family of algorithms to use. Strategy lets the algorithm vary independently from clients that use it. Strategy is one of the patterns included in the influential book Design Patterns by Gamma et al. that popularized the concept of using design patterns to describe how to design flexible and reusable object-oriented software. Deferring the decision about which algorithm to use until runtime allows the calling code to be more flexible and reusable. For instance, a class that performs validation on incoming data may use the strategy pattern to select a validation algorithm depending on the type of data, the source of the data, user choice, or other discriminating factors. These factors are not known until run-time and may require radically different validation to be performed. The validation algorithms (strategies), encapsulated separately from the validating object, may be used by other validating objects in different areas of the system (or even different systems) without code duplication. Typically, the strategy pattern stores a reference to some code in a data structure and retrieves it. This can be achieved by mechanisms such as the native function pointer, the first-class function, classes or class instances in object-oriented programming languages, or accessing the language implementation's internal storage of code via reflection. Structure UML class and sequence diagram In the above UML class diagram, the Context class doesn't implement an algorithm directly. Instead, Context refers to the Strategy interface for performing an algorithm (strategy.algorithm()), which makes Context independent of how an algorithm is implemented. The Strategy1 and Strategy2 classes implement the Strategy
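The validation example above can be sketched in a few lines. The function and class names here are illustrative, not from any particular library: two interchangeable validation strategies share a signature, and the context object delegates to whichever one it was configured with at runtime:

```python
from typing import Callable

# Strategies: interchangeable validation algorithms with the same signature
def validate_email(value: str) -> bool:
    return "@" in value and "." in value.split("@")[-1]

def validate_nonempty(value: str) -> bool:
    return bool(value.strip())

class Validator:
    """Context: delegates validation to whichever strategy it holds."""
    def __init__(self, strategy: Callable[[str], bool]):
        self.strategy = strategy

    def validate(self, value: str) -> bool:
        return self.strategy(value)

# The algorithm is selected at runtime, independently of the calling code
email_validator = Validator(validate_email)
print(email_validator.validate("user@example.com"))   # True
print(Validator(validate_nonempty).validate("   "))   # False
```

In Python, first-class functions play the role of the Strategy interface; in languages without them, the strategies would be classes implementing a common interface, as in the UML diagram described below.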
https://en.wikipedia.org/wiki/N-version%20programming
N-version programming (NVP), also known as multiversion programming or multiple-version dissimilar software, is a method or process in software engineering where multiple functionally equivalent programs are independently generated from the same initial specifications. The concept of N-version programming was introduced in 1977 by Liming Chen and Algirdas Avizienis with the central conjecture that the "independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program". The aim of NVP is to improve the reliability of software operation by building in fault tolerance or redundancy. NVP approach The general steps of N-version programming are: An initial specification of the intended functionality of the software is developed. The specification should unambiguously define: functions, data formats (which include comparison vectors, c-vectors, and comparison status indicators, cs-indicators), cross-check points (cc-points), comparison algorithm, and responses to the comparison algorithm. From the specifications, two or more versions of the program are independently developed, each by a group that does not interact with the others. The implementations of these functionally equivalent programs use different algorithms and programming languages. At various points of the program, special mechanisms are built into the software which allow the program to be governed by the N-version execution environment (NVX). These special mechanisms include: comparison vectors (c-vectors, a data structure representing the program's state), comparison status indicators (cs-indicators), and synchronization mechanisms. The resulting programs are called N-version software (NVS). Some N-version execution environment (NVX) is developed which runs the N-version software and makes final decisions of the N-version programs as a whole given the output of each individual N-version program. The implementation of t
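The steps above can be sketched as a toy NVX that majority-votes the outputs of three hypothetical, independently written integer-square-root routines (a stand-in specification; a real NVX compares richer c-vectors at cross-check points):

```python
import math
from collections import Counter

# Three independently written "versions" of one specification: isqrt(n)
def version_1(n):
    return int(n ** 0.5)        # floating-point shortcut; loses precision
                                # for very large n
def version_2(n):
    lo, hi = 0, n               # binary search on the integer root
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def version_3(n):
    return math.isqrt(n)        # library routine

def nvx(n, versions=(version_1, version_2, version_3)):
    """Toy N-version execution environment: run every version and
    take the majority result; fail if no majority exists."""
    outputs = [v(n) for v in versions]
    result, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("versions disagree: no majority result")
    return result

# version_1's floating-point fault on a huge input is outvoted by the
# other two versions, so the NVX still returns the correct answer.
print(nvx(10**46) == 10**23)    # True
```

Note that the central conjecture quoted above is exactly what this masking relies on: the fault in one version must not be replicated in the others.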
https://en.wikipedia.org/wiki/Alexandra%20Bellow
Alexandra Bellow (née Bagdasar; previously Ionescu Tulcea; born 30 August 1935) is a Romanian-American mathematician, who has made contributions to the fields of ergodic theory, probability and analysis. Biography Bellow was born in Bucharest, Romania, on August 30, 1935, as Alexandra Bagdasar. Her parents were both physicians. Her mother, Florica Bagdasar (née Ciumetti), was a child psychiatrist. Her father, Dumitru Bagdasar, was a neurosurgeon. She received her M.S. in mathematics from the University of Bucharest in 1957, where she met and married her first husband, mathematician Cassius Ionescu-Tulcea. She accompanied her husband to the United States in 1957 and received her Ph.D. from Yale University in 1959 under the direction of Shizuo Kakutani with the thesis Ergodic Theory of Random Series. After receiving her degree, she worked as a research associate at Yale from 1959 until 1961, and as an assistant professor at the University of Pennsylvania from 1962 to 1964. From 1964 until 1967 she was an associate professor at the University of Illinois at Urbana–Champaign. In 1967 she moved to Northwestern University as a Professor of Mathematics. She was at Northwestern until her retirement in 1996, when she became Professor Emeritus. During her marriage to Cassius Ionescu-Tulcea (1956–1969), she and her husband co-wrote many papers and a research monograph on lifting theory. Alexandra's second husband was the writer Saul Bellow, who was awarded the Nobel Prize in Literature in 1976, during their marriage (1975–1985). Alexandra features in Bellow's writings: she is portrayed lovingly in his memoir To Jerusalem and Back (1976) and in his novel The Dean's December (1982), and more critically, satirically, in his last novel, Ravelstein (2000), which was written many years after their divorce. The decade of the nineties was for Alexandra a period of personal and professional fulfillment, brought about by her marriage in 1989 to the mathematician Alberto P. Calderón. Mathematical work Som
https://en.wikipedia.org/wiki/Jira%20%28software%29
Jira is a proprietary issue tracking product developed by Atlassian that allows bug tracking and agile project management. Naming The product name comes from the second and third syllables of the Japanese word pronounced as Gojira, which is Japanese for Godzilla. The name originated from a nickname Atlassian developers used to refer to Bugzilla, which was previously used internally for bug-tracking. Description Jira is commercial software, although Atlassian has offered free licences to open-source projects and source code access to licensed customers. Its popularity has driven thousands of organizations across the globe to adopt it, and Atlassian built its business around the product. According to Atlassian, Jira is used for issue tracking and project management. Some of the organizations that have used Jira at some point in time for bug-tracking and project management include Fedora Commons, Hibernate, and the Apache Software Foundation, which uses both Jira and Bugzilla. Jira includes tools allowing migration from competitor Bugzilla. Jira is offered in four packages: Jira Work Management is intended as generic project management. Jira Software includes the base software, including agile project management features (previously a separate product: Jira Agile). Jira Service Management is intended for use by IT operations or business service desks. Jira Align is intended for strategic product and portfolio management. Jira is written in Java and uses the Pico inversion of control container, Apache OFBiz entity engine, and WebWork 1 technology stack. For remote procedure calls (RPCs), Jira has REST, SOAP, and XML-RPC interfaces. Jira integrates with source control programs such as Clearcase, Concurrent Versions System (CVS), Git, Mercurial, Perforce, Subversion, and Team Foundation Server. It ships with various translatio
https://en.wikipedia.org/wiki/IndieAuth
IndieAuth is an open standard decentralized authentication protocol that uses OAuth 2.0 and enables services to verify the identity of a user represented by a URL, as well as to obtain an access token that can be used to access resources under the control of the user. IndieAuth was developed in the IndieWeb community and published as a W3C Note by the Social Web Working Group, which lacked the time needed to progress it formally to a W3C Recommendation despite the existence of several interoperable implementations. Implementations WordPress IndieAuth Plugin Known Micro.blog Grav (CMS) IndieAuth Plugin Drupal IndieWeb Plugin Cellar Door See also OpenID WebID
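A distinctive step in IndieAuth is discovering a user's authorization endpoint from their profile URL. The sketch below illustrates the classic discovery mechanism, a rel="authorization_endpoint" link in the profile page's HTML, using a hard-coded HTML string as a stand-in; a real client would fetch the profile URL and also inspect the HTTP Link header. The example URL is invented.

```python
# Hedged sketch of IndieAuth endpoint discovery from profile-page HTML.
from html.parser import HTMLParser

class EndpointFinder(HTMLParser):
    """Collects the first <link rel="authorization_endpoint"> href seen."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        rel_values = (a.get("rel") or "").split()
        if tag == "link" and "authorization_endpoint" in rel_values:
            # keep the first match, as discovery is order-sensitive
            self.endpoint = self.endpoint or a.get("href")

profile_html = """
<html><head>
  <link rel="authorization_endpoint" href="https://auth.example.org/authorize">
</head><body>my profile</body></html>
"""

finder = EndpointFinder()
finder.feed(profile_html)
print(finder.endpoint)  # https://auth.example.org/authorize
```

The client would then redirect the user to this endpoint with standard OAuth 2.0 query parameters to begin the authorization flow.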
https://en.wikipedia.org/wiki/Ghost%20sign
A ghost sign is an old hand-painted advertising sign that has been preserved on a building for an extended period of time. The sign may be kept for its nostalgic appeal, or simply through the owner's indifference. History and preservation Ghost signs are found across the world, with the United States, the United Kingdom, France and Canada having many surviving examples. Ghost signs are also called fading ads or brickads. In many cases these are advertisements painted on brick that remained over time. Old painted advertisements are occasionally discovered upon demolition of later-built adjoining structures. Throughout rural areas, old barn advertisements continue to promote defunct brands and quaint roadside attractions. Many ghost signs from the 1890s to 1960s are still visible. Such signs were most commonly used in the decades before the Great Depression. The painters of the signs were called "wall dogs". As signage advertising formats changed, less durable signs appeared in the later 20th century, and ghost signs from that era are less common. Ghost signs were originally painted with oil-based house paints. The paint that has survived the test of time most likely contains lead, which keeps it strongly adhered to the masonry surface. Since the colors often fade over time, ghost signs were often preserved by repainting the entire sign; when ownership changed, a new sign would be painted over the old one. In 2013, conservators undertook an effort to preserve ghost signs in Philadelphia. In the city of Detroit, well-preserved ghost signs have been uncovered when an adjoining building is demolished as part of the city's blight-fighting efforts. Gallery See also Palimpsest Privilege sign Signwriter
https://en.wikipedia.org/wiki/Mushroom%20gravy
Mushroom gravy is a simple sauce that can be composed from stock (beef is typical, but chicken may be used), roux (a mixture of equal parts butter and flour to thicken), and mushroom base. It can also be enhanced with mace, to add a delicate nutmeg flavor. See also List of gravies List of mushroom dishes Mushroom sauce
https://en.wikipedia.org/wiki/Electrochemical%20energy%20conversion
Electrochemical energy conversion is a field of energy technology concerned with electrochemical methods of energy conversion, including fuel cells and photoelectrochemical cells. The field also includes electrical storage devices such as batteries and supercapacitors. It is increasingly important in the context of automotive propulsion systems, where more powerful, longer-running batteries allow longer run times for electric vehicles; such systems can combine storage with the fuel cells and photoelectrochemical cells mentioned above. See also Bioelectrochemical reactor Chemotronics Electrochemical cell Electrochemical engineering Electrochemical reduction of carbon dioxide Electrofuels Electrohydrogenesis Electromethanogenesis Enzymatic biofuel cell Photoelectrochemical cell Photoelectrochemical reduction of CO2 Notes External links International Journal of Energy Research MSAL NIST scientific journal article Georgia tech Electrochemistry Electrochemical engineering Energy engineering Energy conversion Biochemical engineering
https://en.wikipedia.org/wiki/Nord-10
Nord-10 was a medium-sized general-purpose 16-bit minicomputer designed for multilingual time-sharing applications and for real-time multi-program systems, produced by Norsk Data. It was introduced in 1973. The follow-up model, the Nord-10/S, introduced in 1975, added CPU cache, paging, and other miscellaneous improvements. The CPU had a microprocessor, which was defined in the manual as a portmanteau of microcode processor, not to be confused with the then nascent microprocessor. The CPU additionally contained instructions, operator communication, bootstrap loaders, and hardware test programs implemented in a 1K read-only memory. The microprocessor also allowed customer-specified instructions to be built in. Nord-10 had a memory management system with hardware paging extending the memory size from 64 to 256K 16-bit words, and two independent protection systems, one acting on each page and one on the mode of instructions. The interrupt system had 16 program levels in hardware, each with its own set of general-purpose registers. Note: Much of the following information is taken from a document written by Norsk Data introducing the Nord-10. Some information, particularly about the memory system, may be inaccurate for the later Nord-10/S. Central processor The central processing unit (CPU) consisted of a total of 24 printed circuit boards. The last eight positions in the rack were used for input/output (I/O) devices operated by program control, such as the console teleprinter (teletype), paper punched tape and punched card reader and punch, line printer, display, operator's panel, and a real-time clock. The Nord-10 had 160 processor registers, of which 128 were available to programs, eight on each of the 16 program levels. Six of those registers were general registers, one was the program counter, and the other contained status information. Floating point arithmetic operations were standard. The instructions could operate on five different forma
https://en.wikipedia.org/wiki/Bulma
Bulma is a fictional character featured in the Dragon Ball franchise, first appearing in the manga series created by Akira Toriyama. She debuted in the first chapter "Bulma and Son Goku", published in Weekly Shōnen Jump magazine on June 19, 1984, issue 51, meeting Goku and recruiting him as her bodyguard to travel and find the wish-granting Dragon Balls. Bulma is the daughter of Dr. Brief, the founder of the Capsule Corporation, a company that creates special small capsules that shrink and hold objects of various sizes for easy storage. Being the daughter of a brilliant scientist, Bulma is also a scientific genius, as well as an inventor and engineer. Along with creating the Dragon Radar, a device that detects the energy signal emitted by a Dragon Ball, Bulma's role as an inventor becomes important at several points in the series, including the time machine that brings her future son Trunks to the past. Characterization Bulma's appearance in the anime is slightly different from that in the manga. In the manga version her hair color is violet, while in the anime it is turquoise. As a teenager she wears her hair shoulder-length. She changes her hairstyle and cut very often, and rarely wears the same dress for long. Many of the dresses that she wears bear her name or the Capsule Corporation logo. Bulma is presented as an extraordinarily intelligent individual, able to create technology capable of feats beyond contemporary science. She usually displays strong analytical skills and the ability to recognize design and engineering styles. Although not a martial artist, Bulma occasionally defends herself with the use of firearms and other technological equipment. Creation and design Bulma is loosely based on the character Tang Sanzang from the Chinese classic novel Journey to the West. Bulma and Goku were the first pair of characters which were introduced in the manga and Toriyama stated that he subsequently introduced other characters in pairs because "that way, I'
https://en.wikipedia.org/wiki/Lead%20cycle
The lead cycle is the biogeochemical cycle of lead through the atmosphere, lithosphere, biosphere, and hydrosphere, which has been influenced by anthropogenic activities. Natural lead sources Lead (Pb) is a heavy trace element and is formed by the radioactive decay of uranium and thorium. In crustal rocks, it is present as the lead sulfide mineral galena. Natural sources of lead in the lead cycle include wind-borne dust, volcanic outgassing, and forest fires. Natural weathering of rocks by physical and chemical agents can mobilize lead in soils. Mobilized lead can react to form oxides or carbonates, and it can co-precipitate with other minerals by being occluded through surface adsorption and complexation. Anthropogenic lead cycle Anthropogenic activities have accelerated lead mobilization to the environment. The majority of anthropogenic lead comes from non-ferrous metal manufacturing plants, mining and smelting of ores, stationary and mobile fossil fuel combustion platforms, and lead batteries. These activities produce very fine micron-sized Pb particles that can be transported as aerosols. Anthropogenic lead fluxes decreased from the 1980s to the 2000s as a result of global regulation and outlawing of leaded gasoline. However, global production of lead has risen steadily in the 21st century. Lead accumulation in the ocean Wet deposition removes lead from the atmosphere to the surface ocean. Precipitation leads to solubilization of aerosols and washout of particulates. Pb concentrations in the oceans depend on wet deposition and on the concentration of Pb present in the atmosphere. The main sink for lead is burial in marine sediments. Lead in drinking water Lead is highly regulated in drinking water because it affects the developing brain and the nervous system. Children are more prone to lead exposure because they absorb more of the Pb they ingest through the gastrointestinal tract. The U.S. Environmental Protection Agency established the Lead and Copper Rule (LCR) in 1991 wh
https://en.wikipedia.org/wiki/Association%20of%20the%20British%20Pharmaceutical%20Industry
The Association of the British Pharmaceutical Industry (ABPI) is the trade association for over 120 companies in the UK producing prescription medicines for humans, founded in 1891. It is the British equivalent of America's PhRMA; however, the member companies research, develop, manufacture and supply medicines prescribed for the National Health Service. History The organisation was founded in London in 1891 and originally known as the "Drug Club". It was re-named the Association of the British Pharmaceutical Industry in 1948. A rival institution to represent wholesalers, the "Northern Wholesale Druggists' Association", was formed in 1902 and lasted until 1966. Management and offices A board of management of members oversee the ABPI. The Board is made up of individuals who are elected by members to represent the industry and up to five people who are co-opted by the Board. Elections commence every January for elected seats to ensure that the Board is fully representative and has access to the broadest range of skills and expertise possible. The ABPI's head office is in London with three regional offices in Cardiff, Belfast and Edinburgh. Membership Membership of the Association of the British Pharmaceutical Industry is not open to individuals, only companies. Members fall into three categories: Full members who hold marketing authorisation for manufacture or supply of a prescription medicine for human use and who undertake business in the UK Research affiliate members who carry out business in the UK and are involved in research and/or development of medicines for human use but do not have a UK sales operation, for example contract research organisations or contract manufacturing organizations General affiliate members who operate in the UK, have a business interest in the industry and will typically provide products or services to the industry, although not producing prescription medicines, for example law firms Functions The ABPI represents the views of t
https://en.wikipedia.org/wiki/Muniscins
The muniscin protein family was initially defined in 2009 as a family of proteins having two homologous domains that are involved in clathrin-mediated endocytosis (CME); the family has since been reviewed. In addition to FCHO1, FCHO2 and Syp1, SGIP1 is also included in the family because it contains the μ (mu) homology domain and is involved in CME, even though it does not contain the F-BAR domain. Muniscins are known as alternate cargo adaptors. That is, they participate in selecting which cargo molecules are internalized via CME. Additionally, the structure of the dimer, with its concave face oriented toward the plasma membrane, is thought to help curve the membrane as the clathrin-coated pit forms. The muniscins are early-arriving proteins involved in CME. FCHO proteins are required for CME, but do not appear to be required to initiate CME. The μ homology domain of muniscins has been reported to have evolved from part of an ancient cargo adaptor protein complex named TSET. See also AP2 adaptor complex Vesicular transport adaptor protein
https://en.wikipedia.org/wiki/Chihara%E2%80%93Ismail%20polynomials
In mathematics, the Chihara–Ismail polynomials are a family of orthogonal polynomials introduced by Chihara and Ismail, generalizing the van Doorn polynomials and the Karlin–McGregor polynomials. They have a rather unusual orthogonality measure, which is discrete except for a single limit point at 0 with jump 0, and is non-symmetric, but whose support has an infinite number of both positive and negative points.
https://en.wikipedia.org/wiki/Bayesian%20optimization
Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions. History The term is generally attributed to and is coined in his work from a series of publications on global optimization in the 1970s and 1980s. Strategy Bayesian optimization is typically used on problems of the form , where is a set of points, , which rely upon less than 20 dimensions (), and whose membership can easily be evaluated. Bayesian optimization is particularly advantageous for problems where is difficult to evaluate due to its computational cost. The objective function, , is continuous and takes the form of some unknown structure, referred to as a "black box". Upon its evaluation, only is observed and its derivatives are not evaluated. Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point. There are several methods used to define the prior/posterior distribution over the objective function. The most common two methods use Gaussian processes in a method called kriging. Another less expensive method uses the Parzen-Tree Estimator to construct two distributions for 'high' and 'low' points, and then finds the location that maximizes the expected improvement. Standard Bayesian optimization relies upon each being easy to evaluate, and problems that deviate from this assumption are known as exotic Bayesian optimization problems. Optimization problems can become exotic if it is known tha
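The loop described above (fit a Gaussian-process surrogate, maximize an acquisition function, evaluate the objective, repeat) can be sketched without any libraries. This is a toy illustration, not a production implementation: the 1-D objective, the RBF kernel length-scale, and the candidate grid are all invented for the example, and expected improvement is used as the acquisition function.

```python
# Toy Bayesian optimization: zero-mean GP surrogate + expected improvement.
import math

def k(a, b, ell=0.3):
    """RBF (squared-exponential) kernel."""
    return math.exp(-(a - b) ** 2 / (2 * ell ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, xq, noise=1e-4):
    """Posterior mean and std of a zero-mean GP at query point xq."""
    K = [[k(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    alpha = solve(K, y)                       # K^-1 y
    kq = [k(xq, a) for a in X]
    mu = sum(w * v for w, v in zip(kq, alpha))
    beta = solve(K, kq)                       # K^-1 kq
    var = max(1e-12, k(xq, xq) - sum(a * b for a, b in zip(kq, beta)))
    return mu, math.sqrt(var)

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))          # standard normal cdf
    return (mu - best) * Phi + sigma * phi

f = lambda x: -(x - 0.7) ** 2        # "expensive" black box (maximize)
X = [0.0, 0.5, 1.0]
y = [f(x) for x in X]
for _ in range(10):
    cands = [i / 100 for i in range(101)]
    xn = max(cands, key=lambda x: expected_improvement(*gp_posterior(X, y, x), max(y)))
    X.append(xn)
    y.append(f(xn))

best_x = X[y.index(max(y))]
print(round(best_x, 2))              # converges near the optimum x = 0.7
```

Real implementations maximize the acquisition function continuously, learn the kernel hyperparameters, and use numerically stable Cholesky factorizations rather than naive elimination.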
https://en.wikipedia.org/wiki/Contraction%20hierarchies
In computer science, the method of contraction hierarchies is a speed-up technique for finding the shortest path in a graph. The most intuitive applications are car-navigation systems: a user wants to drive from s to t using the quickest possible route. The metric optimized here is the travel time. Intersections are represented by vertices, the road sections connecting them by edges. The edge weights represent the time it takes to drive along this segment of the road. A path from s to t is a sequence of edges (road sections); the shortest path is the one with the minimal sum of edge weights among all possible paths. The shortest path in a graph can be computed using Dijkstra's algorithm but, given that road networks consist of tens of millions of vertices, this is impractical. Contraction hierarchies is a speed-up method optimized to exploit properties of graphs representing road networks. The speed-up is achieved by creating shortcuts in a preprocessing phase which are then used during a shortest-path query to skip over "unimportant" vertices. This is based on the observation that road networks are highly hierarchical. Some intersections, for example highway junctions, are "more important" and higher up in the hierarchy than, for example, a junction leading into a dead end. Shortcuts can be used to save the precomputed distance between two important junctions so that the algorithm does not have to consider the full path between these junctions at query time. Contraction hierarchies do not know which roads humans consider "important" (e.g. highways), but they are provided with the graph as input and are able to assign importance to vertices using heuristics. Contraction hierarchies are not only applied to speed up algorithms in car-navigation systems but also in web-based route planners, traffic simulation, and logistics optimization. Implementations of the algorithm are publicly available as open source software. Algorithm The contraction hierarchies (CH) algor
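The query side of the idea can be sketched concisely: run two Dijkstra searches, from s and from t, each relaxing only edges that lead to more "important" vertices, and take the best meeting vertex. The toy graph, the hand-chosen vertex order, and the absence of shortcuts below are all illustrative assumptions; real preprocessing contracts vertices one by one and inserts shortcut edges wherever a shortest path would otherwise be lost.

```python
# Toy contraction-hierarchies query: bidirectional "upward" Dijkstra.
import heapq

def upward_dijkstra(adj, order, source):
    """Dijkstra restricted to edges that go up in the hierarchy."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if order[v] > order[u] and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def ch_query(adj, order, s, t):
    """Shortest s-t distance as the best meeting point of both searches."""
    ds = upward_dijkstra(adj, order, s)
    dt = upward_dijkstra(adj, order, t)
    return min((ds[v] + dt[v] for v in ds.keys() & dt.keys()),
               default=float("inf"))

# Undirected toy road network: (u, v, travel time).
edges = [(0, 2, 3), (1, 2, 1), (2, 3, 5), (2, 4, 2), (3, 4, 1)]
adj = {}
for u, v, w in edges:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))
order = {v: v for v in range(5)}   # hand-chosen importance: vertex 4 highest

print(ch_query(adj, order, 0, 3))  # 6, routed via the "important" junction 4
```

Here the route 0-2-4-3 (cost 6) beats the direct road 0-2-3 (cost 8), and both searches meet at the highest-ranked vertex 4, which is exactly the "up then down" path shape that contraction hierarchies guarantee.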
https://en.wikipedia.org/wiki/Class%20formation
In mathematics, a class formation is a topological group acting on a module satisfying certain conditions. Class formations were introduced by Emil Artin and John Tate to organize the various Galois groups and modules that appear in class field theory. Definitions A formation is a topological group G together with a topological G-module A on which G acts continuously. A layer E/F of a formation is a pair of open subgroups E, F of G such that F is a finite index subgroup of E. It is called a normal layer if F is a normal subgroup of E, and a cyclic layer if in addition the quotient group is cyclic. If E is a subgroup of G, then AE is defined to be the elements of A fixed by E. We write Hn(E/F) for the Tate cohomology group Hn(E/F, AF) whenever E/F is a normal layer. (Some authors think of E and F as fixed fields rather than subgroup of G, so write F/E instead of E/F.) In applications, G is often the absolute Galois group of a field, and in particular is profinite, and the open subgroups therefore correspond to the finite extensions of the field contained in some fixed separable closure. A class formation is a formation such that for every normal layer E/F H1(E/F) is trivial, and H2(E/F) is cyclic of order |E/F|. In practice, these cyclic groups come provided with canonical generators uE/F ∈ H2(E/F), called fundamental classes, that are compatible with each other in the sense that the restriction (of cohomology classes) of a fundamental class is another fundamental class. Often the fundamental classes are considered to be part of the structure of a class formation. A formation that satisfies just the condition H1(E/F)=1 is sometimes called a field formation. For example, if G is any finite group acting on a field L and A=L×, then this is a field formation by Hilbert's theorem 90. Examples The most important examples of class formations (arranged roughly in order of difficulty) are as follows: Archimedean local class field theory: The module A is the group of
https://en.wikipedia.org/wiki/Suricata%20%28software%29
Suricata is an open-source intrusion detection system (IDS) and intrusion prevention system (IPS). It was developed by the Open Information Security Foundation (OISF). A beta version was released in December 2009, with the first standard release following in July 2010. Free intrusion detection systems OSSEC HIDS Prelude Hybrid IDS Sagan Snort Zeek NIDS See also Aanval
https://en.wikipedia.org/wiki/Single-use%20bioreactor
A single-use bioreactor or disposable bioreactor is a bioreactor with a disposable bag instead of a culture vessel. Typically, this refers to a bioreactor in which the lining in contact with the cell culture is plastic, and this lining is encased within a more permanent structure (typically a rocker, or a cuboid or cylindrical steel support). Commercial single-use bioreactors have been available since the end of the 1990s and are now made by several well-known producers (see below). Single-use bioreactors Single-use bioreactors are widely used in the field of mammalian cell culture and are now rapidly replacing conventional bioreactors. Instead of a culture vessel made from stainless steel or glass, a single-use bioreactor is equipped with a disposable bag. The disposable bag is usually made of a three-layer plastic foil. One layer is made from polyethylene terephthalate or LDPE to provide mechanical stability. A second layer, made using PVA or PVC, acts as a gas barrier. Finally, a contact layer is made from PVA or PP. For medical applications the single-use materials that contact the product must be certified by the European Medicines Agency or similar authorities responsible for other regions. Types of single-use bioreactors In general there are two different approaches for constructing single-use bioreactors, differing in the means used to agitate the culture medium. Some single-use bioreactors use stirrers like conventional bioreactors, but with stirrers that are integrated into the plastic bag. The closed bag and the stirrer are pre-sterilized. In use, the bag is mounted in the bioreactor and the stirrer is connected to a driver mechanically or magnetically. Other single-use bioreactors are agitated by a rocking motion. This type of bioreactor does not need any mechanical agitators inside the single-use bag. Both the stirred and the rocking-motion single-use bioreactors are used at scales of up to 1,000 litres. Several variatio
https://en.wikipedia.org/wiki/Ephaptic%20coupling
Ephaptic coupling is a form of communication within the nervous system and is distinct from direct communication systems like electrical synapses and chemical synapses. It may refer to the coupling of adjacent (touching) nerve fibers caused by the exchange of ions between the cells, or it may refer to coupling of nerve fibers as a result of local electric fields. In either case ephaptic coupling can influence the synchronization and timing of action potential firing in neurons. Myelination is thought to inhibit ephaptic interactions. History and etymology The idea that the electrical activity generated by nervous tissue may influence the activity of surrounding nervous tissue is one that dates back to the late 19th century. Early experiments, like those by Emil du Bois-Reymond, demonstrated that the firing of a primary nerve may induce the firing of an adjacent secondary nerve (termed "secondary excitation"). This effect was not quantitatively explored, however, until experiments by Katz and Schmitt in 1940, when the two explored the electric interaction of two adjacent limb nerves of the crab Carcinus maenas. Their work demonstrated that the progression of the action potential in the active axon caused excitability changes in the inactive axon. These changes were attributed to the local currents that form the action potential. For example, the currents that caused the depolarization (excitation) of the active nerve caused a corresponding hyperpolarization (depression) of the adjacent resting fiber. Similarly, the currents that caused repolarization of the active nerve caused slight depolarization in the resting fiber. Katz and Schmitt also observed that stimulation of both nerves could cause interference effects. Simultaneous action potential firing caused interference and resulted in decreased conduction velocity, while slightly offset stimulation resulted in synchronization of the two impulses. In 1941 Angélique Arvanitaki explored the same topic and proposed
https://en.wikipedia.org/wiki/DashO%20%28software%29
DashO is a code obfuscator, compactor, optimizer, watermarker and encryptor for Java, Kotlin and Android applications. It aims to achieve little or no performance loss even as the code complexity increases. DashO can also statically analyze the code to find unused types, methods, and fields, and delete them, thereby making the application smaller. DashO can delete used methods that are not needed in published applications, such as debugging and logging calls. See also Dotfuscator - a code obfuscator for .NET. ProGuard (software) - a code obfuscator for Java.
https://en.wikipedia.org/wiki/Jacob%27s%20staff
The term Jacob's staff is used to refer to several things, also known as cross-staff, a ballastella, a fore-staff, a ballestilla, or a balestilha. In its most basic form, a Jacob's staff is a stick or pole with length markings; most staffs are much more complicated than that, and usually contain a number of measurement and stabilization features. The two most frequent uses are: in astronomy and navigation for a simple device to measure angles, later replaced by the more precise sextants; in surveying (and scientific fields that use surveying techniques, such as geology and ecology) for a vertical rod that penetrates or sits on the ground and supports a compass or other instrument. The simplest use of a Jacob's staff is to make qualitative judgements of the height and angle of an object relative to the user of the staff. In astronomy and navigation In navigation the instrument is also called a cross-staff and was used to determine angles, for instance the angle between the horizon and Polaris or the sun to determine a vessel's latitude, or the angle between the top and bottom of an object to determine the distance to said object if its height is known, or the height of the object if its distance is known, or the horizontal angle between two visible locations to determine one's point on a map. The Jacob's staff, when used for astronomical observations, was also referred to as a radius astronomicus. With the demise of the cross-staff, in the modern era the name "Jacob's staff" is applied primarily to the device used to provide support for surveyor's instruments. Etymology The origin of the name of the instrument is not certain. Some refer to the Biblical patriarch Jacob, specifically in the Book of Genesis (). It may also take its name after its resemblance to Orion, referred to by the name of Jacob on some medieval star charts. Another possible source is the Pilgrim's staff, the symbol of St James (Jacobus in Latin). The name cross staff simply comes from it
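The angle measurements described above translate into distances and heights by elementary trigonometry. The sketch below works one illustrative case: an observer measures the angle subtended between the top and bottom of an object of known height (with the base at eye level, ignoring instrument error), and recovers the distance; the numbers are invented for the example.

```python
# Cross-staff geometry: distance from a known height and a measured angle.
import math

def distance_from_height(height, angle_deg):
    """Distance to an object of known height subtending angle_deg,
    assuming the object's base is at eye level."""
    return height / math.tan(math.radians(angle_deg))

def height_from_distance(distance, angle_deg):
    """The inverse problem: height from a known distance and angle."""
    return distance * math.tan(math.radians(angle_deg))

# A 30 m tower subtending 10 degrees stands about 170 m away.
d = distance_from_height(30.0, 10.0)
print(round(d, 1))
```

The same relation, read the other way, gives the height of an object at a known distance, which is how the staff was used for simple survey work.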
https://en.wikipedia.org/wiki/Hemopoietic%20growth%20factor
Hemopoietic growth factors regulate the differentiation and proliferation of particular progenitor cells. Made available through recombinant DNA technology, they hold tremendous potential for medical uses when a person's natural ability to form blood cells is diminished or defective. Recombinant erythropoietin (EPO) is very effective in treating the diminished red blood cell production that accompanies end-stage kidney disease. Erythropoietin is a sialoglycoprotein hormone produced by peritubular cells of kidney. Granulocyte-macrophage colony-stimulating factor and granulocyte CSF are given to stimulate white blood cell formation in cancer patients who are receiving chemotherapy, which tends to kill their red bone marrow cells as well as the cancer cells. Thrombopoietin shows great promise for preventing platelet depletion during chemotherapy. CSFs and thrombopoietin also improve the outcome of patients who receive bone marrow transplants. Types Erythropoietin is a glycoprotein hormone secreted by the interstitial fibroblast cells of the kidneys in response to low oxygen levels. It prompts the production of erythrocytes. Thrombopoietin, another glycoprotein hormone, is produced by the liver and kidneys. It triggers the development of megakaryocytes into platelets. Cytokines are glycoproteins secreted by a wide variety of cells, including red bone marrow, leukocytes, macrophages, fibroblasts, and endothelial cells. They act locally as autocrine or paracrine factors, stimulating the proliferation of progenitor cells and helping to stimulate both nonspecific and specific resistance to disease. There are two major subtypes of cytokines known as colony-stimulating factors and interleukins. Colony-stimulating factors are glycoproteins that act locally, as autocrine or paracrine factors. Some trigger the differentiation of myeloblasts into granular leukocytes, namely, neutrophils, eosinophils, and basophils. These are referred to as granulocyte CSFs. A differen
https://en.wikipedia.org/wiki/Sharp%20MZ%20character%20set
Sharp MZ character set is a character set made by Sharp Corporation for the Sharp MZ series of computers. Character set The following tables show the Sharp MZ-700 character sets. Each character is shown with a potential Unicode equivalent. Space and control characters are represented by the abbreviations for their names.
https://en.wikipedia.org/wiki/Real-time%20kinematic%20positioning
Real-time kinematic positioning (RTK) is the application of surveying to correct for common errors in current satellite navigation (GNSS) systems. It uses measurements of the phase of the signal's carrier wave in addition to the information content of the signal and relies on a single reference station or interpolated virtual station to provide real-time corrections, providing up to centimetre-level accuracy (see DGPS). With reference to GPS in particular, the system is commonly referred to as carrier-phase enhancement, or CPGPS. It has applications in land surveying, hydrographic surveying, and in unmanned aerial vehicle navigation. Background The distance between a satellite navigation receiver and a satellite can be calculated from the time it takes for a signal to travel from the satellite to the receiver. To calculate the delay, the receiver must align a pseudorandom binary sequence contained in the signal to an internally generated pseudorandom binary sequence. Since the satellite signal takes time to reach the receiver, the satellite's sequence is delayed in relation to the receiver's sequence. By increasingly delaying the receiver's sequence, the two sequences are eventually aligned. The accuracy of the resulting range measurement is essentially a function of the ability of the receiver's electronics to accurately process signals from the satellite, and additional error sources such as non-mitigated ionospheric and tropospheric delays, multipath, satellite clock and ephemeris errors. Carrier-phase tracking RTK follows the same general concept, but uses the satellite signal's carrier wave as its signal, ignoring the information contained within. RTK uses a fixed base station and a rover to reduce the rover's position error. The base station transmits correction data to the rover. As described in the previous section, the range to a satellite is essentially calculated by multiplying the carrier wavelength times the number of whole cycles between the sate
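The range relation in the last sentence can be sketched in a few lines of code (illustrative only; the constants are standard GPS values, and the function name is invented for this example):

```python
# Illustrative sketch of carrier-phase ranging: the range to a satellite is
# the carrier wavelength times the number of whole cycles plus the measured
# fractional phase. The whole-cycle count N is the "integer ambiguity" that
# RTK must resolve using corrections from the base station.

C = 299_792_458.0          # speed of light, m/s
L1_FREQ = 1_575.42e6       # GPS L1 carrier frequency, Hz

def carrier_phase_range(whole_cycles, fractional_phase, freq=L1_FREQ):
    """Range in metres from a resolved cycle count and a measured phase."""
    wavelength = C / freq  # about 0.19 m for GPS L1
    return wavelength * (whole_cycles + fractional_phase)

# A 1% error in the fractional phase corresponds to roughly 2 mm of range
# error, which is why carrier tracking is far more precise than tracking
# the pseudorandom code, whose chips are hundreds of metres long.
r = carrier_phase_range(105_000_000, 0.25)
```

The short (~19 cm) wavelength is exactly what makes the integer ambiguity hard: many different values of the cycle count are geometrically plausible until the base-station corrections narrow them down.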
https://en.wikipedia.org/wiki/Real%20RAM
In computing, especially computational geometry, a real RAM (random-access machine) is a mathematical model of a computer that can compute with exact real numbers instead of the binary fixed point or floating point numbers used by most actual computers. The real RAM was formulated by Michael Ian Shamos in his 1978 Ph.D. dissertation. Model The "RAM" part of the real RAM model name stands for "random-access machine". This is a model of computing that resembles a simplified version of a standard computer architecture. It consists of a stored program, a computer memory unit consisting of an array of cells, and a central processing unit with a bounded number of registers. Each memory cell or register can store a real number. Under the control of the program, the real RAM can transfer real numbers between memory and registers, and perform arithmetic operations on the values stored in the registers. The allowed operations typically include addition, subtraction, multiplication, and division, as well as comparisons, but not modulus or rounding to integers. The reason for avoiding integer rounding and modulus operations is that allowing these operations could give the real RAM unreasonable amounts of computational power, enabling it to solve PSPACE-complete problems in polynomial time. When analyzing algorithms for the real RAM, each allowed operation is typically assumed to take constant time. Implementation Software libraries such as LEDA have been developed which allow programmers to write computer programs that work as if they were running on a real RAM. These libraries represent real values using data structures which allow them to perform arithmetic and comparisons with the same results as a real RAM would produce. For example, in LEDA, real numbers are represented using the leda_real datatype, which supports k-th roots for any natural number k, rational operators, and comparison operators. The time analysis of the underlying real RAM algorithm using these real da
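The exact-comparison behaviour that such libraries provide can be sketched with Python's standard-library Fraction type, which covers the rational subset of real-RAM operations (addition, subtraction, multiplication, division, comparison) though not the k-th roots that leda_real also handles; this example is illustrative and not part of the article:

```python
# A sketch of real-RAM-style exact arithmetic for the rational operations,
# using the stdlib Fraction type. Unlike floating point, comparisons here
# give exactly the answer the mathematical real numbers would.
from fractions import Fraction

a = Fraction(1, 10) + Fraction(2, 10)
print(a == Fraction(3, 10))   # exact arithmetic: True
print(0.1 + 0.2 == 0.3)       # binary floating point: False

# A typical computational-geometry predicate, evaluated exactly:
def orientation(p, q, r):
    """Sign of the cross product (q-p) x (r-p): +1 left turn, -1 right turn, 0 collinear."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

pts = [(Fraction(0), Fraction(0)), (Fraction(1), Fraction(1)), (Fraction(2), Fraction(2))]
print(orientation(*pts))      # 0: the three points are exactly collinear
```

With floats, near-degenerate inputs to predicates like this can be misclassified by rounding error, which is precisely the failure mode the real-RAM abstraction (and libraries like LEDA) avoids.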
https://en.wikipedia.org/wiki/Tight%20binding
In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations. Introduction The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited. Though the mathematical formulation of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix element
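As a concrete illustration of these matrix elements, here is the standard textbook one-band, nearest-neighbour form of the tight-binding Hamiltonian (not taken from the article above; symbols follow common convention):

```latex
% Standard one-band tight-binding Hamiltonian in second quantisation
% (a textbook sketch; the symbols are conventional, not from the excerpt):
H = \varepsilon \sum_{i} c_i^{\dagger} c_i
    \;-\; t \sum_{\langle i,j \rangle} c_i^{\dagger} c_j ,
% where \varepsilon is the on-site energy (close to the atomic level, as the
% text explains) and t is the interatomic (hopping) matrix element between
% nearest-neighbour sites \langle i,j \rangle. For a one-dimensional chain
% with lattice constant a, Bloch's theorem gives the dispersion
E(k) = \varepsilon - 2t \cos(ka),
% a band of width 4t centred on the atomic level: the band is narrow
% precisely when the overlap with neighbouring atoms -- and hence t -- is
% small, which is the tight-binding regime described above.
```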
https://en.wikipedia.org/wiki/Cohesion%20%28chemistry%29
In chemistry and physics, cohesion (), also called cohesive attraction or cohesive force, is the action or property of like molecules sticking together, being mutually attractive. It is an intrinsic property of a substance that is caused by the shape and structure of its molecules, which makes the distribution of surrounding electrons irregular when molecules get close to one another, creating electrical attraction that can maintain a microscopic structure such as a water drop. Cohesion allows for surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed. Water, for example, is strongly cohesive as each molecule may make four hydrogen bonds to other water molecules in a tetrahedral configuration. This results in a relatively strong Coulomb force between molecules. In simple terms, the polarity (a state in which a molecule is oppositely charged on its poles) of water molecules allows them to be attracted to each other. The polarity is due to the electronegativity of the atom of oxygen: oxygen is more electronegative than the atoms of hydrogen, so the electrons they share through the covalent bonds are more often close to oxygen rather than hydrogen. These are called polar covalent bonds, covalent bonds between atoms that thus become oppositely charged. In the case of a water molecule, the hydrogen atoms carry positive charges while the oxygen atom has a negative charge. This charge polarization within the molecule allows it to align with adjacent molecules through strong intermolecular hydrogen bonding, rendering the bulk liquid cohesive. Van der Waals gases such as methane, however, have weak cohesion due only to van der Waals forces that operate by induced polarity in non-polar molecules. Cohesion, along with adhesion (attraction between unlike molecules), helps explain phenomena such as meniscus, surface tension and capillary action. Mercury in a glass flask is a good example of the effects of the ratio betwe
https://en.wikipedia.org/wiki/Parameter%20%28computer%20programming%29
In computer programming, a parameter or a formal argument is a special kind of variable used in a subroutine to refer to one of the pieces of data provided as input to the subroutine. These pieces of data are the values of the arguments (often called actual arguments or actual parameters) with which the subroutine is going to be called/invoked. An ordered list of parameters is usually included in the definition of a subroutine, so that, each time the subroutine is called, its arguments for that call are evaluated, and the resulting values can be assigned to the corresponding parameters. Unlike argument in usual mathematical usage, the argument in computer science is the actual input expression passed/supplied to a function, procedure, or routine in the invocation/call statement, whereas the parameter is the variable inside the implementation of the subroutine. For example, if one defines the add subroutine as def add(x, y): return x + y, then x, y are parameters, while if this is called as add(2, 3), then 2, 3 are the arguments. Variables (and expressions thereof) from the calling context can be arguments: if the subroutine is called as a = 2; b = 3; add(a, b) then the variables a, b are the arguments, not the values 2, 3. See the Parameters and arguments section for more information. The semantics for how parameters can be declared and how the (value of) arguments are passed to the parameters of subroutines are defined by the evaluation strategy of the language, and the details of how this is represented in any particular computer system depend on the calling convention of that system. In the most common case, call by value, a parameter acts within the subroutine as a new local variable initialized to the value of the argument (a local (isolated) copy of the argument if the argument is a variable), but in other cases, e.g. call by reference, the argument variable supplied by the caller can be affected by actions within the called subroutine. Example The followi
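The distinction can be shown in a short Python sketch extending the article's add example (the rebind helper is invented for illustration):

```python
# Parameters vs. arguments, following the article's add example.
def add(x, y):         # x and y are parameters (formal arguments)
    return x + y

result = add(2, 3)     # 2 and 3 are arguments (actual parameters)

a, b = 2, 3
result2 = add(a, b)    # here the variables a and b are the arguments

# Call-by-value-style behaviour: rebinding a parameter inside the
# subroutine does not affect the caller's variable.
def rebind(n):
    n = 99             # only the local parameter n is changed

m = 5
rebind(m)
print(m)               # prints 5: the caller's variable is untouched
```

(Python's actual strategy is call by object sharing: rebinding a parameter never affects the caller, but mutating a mutable argument object, such as appending to a list, is visible to the caller, which corresponds to the "other cases" mentioned above.)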
https://en.wikipedia.org/wiki/Bogolyubov%20Prize%20for%20young%20scientists
The Bogoliubov Prize for young scientists is an award offered to young researchers in theoretical physics by the Joint Institute for Nuclear Research (JINR), an international intergovernmental organization located in Dubna, Russia. The award is issued in memory of the physicist and mathematician Nikolay Bogoliubov. The prize is awarded to young (up to 33-year-old) researchers for "outstanding contributions in fields of theoretical physics related to Bogoliubov's scientific interests". The awardee is one who has demonstrated "early scientific maturity" and whose results are recognized worldwide and peer-reviewed. The laureates generally emulate Bogoliubov's own skill in using sophisticated mathematics to attempt to solve concrete physical problems (mostly in the fields of nonlinear dynamics, statistical physics, quantum field theory and elementary particle physics). Jury The jury is presided over by the theoretical physicist Dmitry Shirkov, who co-authored many works with Nikolay Bogoliubov. Laureates 1999 Oleg Shvedov (Moscow State University, Russia): for a series of works on asymptotical methods in statistical physics and quantum field theory. 2001 Evgenii Ivashkevich (JINR, Russia): for a series of works on analytical methods in non-equilibrium statistical mechanics. 2005 Aurélien Barrau (the Laboratory of sub-atomic physics and cosmology and Joseph Fourier University, Grenoble, France): for a series of works on astrophysics and cosmology. See also List of physics awards
https://en.wikipedia.org/wiki/Deeplearning4j
Eclipse Deeplearning4j is a programming library written in Java for the Java virtual machine (JVM). It is a framework with wide support for deep learning algorithms. Deeplearning4j includes implementations of the restricted Boltzmann machine, deep belief net, deep autoencoder, stacked denoising autoencoder and recursive neural tensor network, word2vec, doc2vec, and GloVe. These algorithms all include distributed parallel versions that integrate with Apache Hadoop and Spark. Deeplearning4j is open-source software released under Apache License 2.0, developed mainly by a machine learning group headquartered in San Francisco. It is supported commercially by the startup Skymind, which bundles DL4J, TensorFlow, Keras and other deep learning libraries in an enterprise distribution called the Skymind Intelligence Layer. Deeplearning4j was contributed to the Eclipse Foundation in October 2017. Introduction Deeplearning4j relies on the widely used programming language Java, though it is compatible with Clojure and includes a Scala application programming interface (API). It is powered by its own open-source numerical computing library, ND4J, and works with both central processing units (CPUs) and graphics processing units (GPUs). Deeplearning4j has been used in several commercial and academic applications. The code is hosted on GitHub. A support forum is maintained on Gitter. The framework is composable, meaning shallow neural nets such as restricted Boltzmann machines, convolutional nets, autoencoders, and recurrent nets can be added to one another to create deep nets of varying types. It also has extensive visualization tools, and a computation graph. Distributed Training with Deeplearning4j occurs in a cluster. Neural nets are trained in parallel via iterative reduce, which works on Hadoop-YARN and on Spark. Deeplearning4j also integrates with CUDA kernels to conduct pure GPU operations, and works with distributed GPUs. Scientific computing for the JVM Deeplearning4j
https://en.wikipedia.org/wiki/Open%20quantum%20system
In physics, an open quantum system is a quantum-mechanical system that interacts with an external quantum system, which is known as the environment or a bath. In general, these interactions significantly change the dynamics of the system and result in quantum dissipation, such that the information contained in the system is lost to its environment. Because no quantum system is completely isolated from its surroundings, it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems. Techniques developed in the context of open quantum systems have proven powerful in fields such as quantum optics, quantum measurement theory, quantum statistical mechanics, quantum information science, quantum thermodynamics, quantum cosmology, quantum biology, and semi-classical approximations. Quantum system and environment A complete description of a quantum system requires the inclusion of the environment. Completely describing the resulting combined system then requires the inclusion of its environment, which results in a new system that can only be completely described if its environment is included and so on. The eventual outcome of this process of embedding is the state of the whole universe described by a wavefunction . The fact that every quantum system has some degree of openness also means that no quantum system can ever be in a pure state. A pure state is unitary equivalent to a zero-temperature ground state, forbidden by the third law of thermodynamics. Even if the combined system is in a pure state and can be described by a wavefunction , a subsystem in general cannot be described by a wavefunction. This observation motivated the formalism of density matrices, or density operators, introduced by John von Neumann in 1927 and independently, but less systematically by Lev Landau in 1927 and Felix Bloch in 1946. In general, the state of a subsystem is described by the density operator and
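A minimal worked example of this observation (illustrative, not from the article): for an entangled pure state of system plus environment, tracing out the environment leaves the subsystem in a mixed state that no wavefunction can describe.

```python
# For the Bell state |psi> = (|00> + |11>)/sqrt(2), trace out the second
# qubit (the "environment"); the reduced density matrix of the first qubit
# is maximally mixed, even though the combined state is pure.
from math import sqrt

# Amplitudes psi[(a, b)] for the basis states |ab>.
psi = {(0, 0): 1 / sqrt(2), (1, 1): 1 / sqrt(2)}

def amp(a, b):
    return psi.get((a, b), 0.0)

# Partial trace over the second qubit:
# rho[a][a'] = sum_b psi(a, b) * conj(psi(a', b))  (amplitudes are real here)
rho = [[sum(amp(a, b) * amp(ap, b) for b in (0, 1)) for ap in (0, 1)]
       for a in (0, 1)]

purity = sum(rho[i][j] * rho[j][i] for i in (0, 1) for j in (0, 1))
print(rho)     # approximately [[0.5, 0], [0, 0.5]] -- maximally mixed
print(purity)  # Tr(rho^2) = 0.5 < 1, so the subsystem has no wavefunction
```

A purity Tr(ρ²) strictly below 1 is exactly the signature that the subsystem's state is mixed and must be described by a density operator rather than a wavefunction.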
https://en.wikipedia.org/wiki/Optimal%20solutions%20for%20the%20Rubik%27s%20Cube
Optimal solutions for the Rubik's Cube are solutions that are the shortest in some sense. There are two common ways to measure the length of a solution. The first is to count the number of quarter turns. The second is to count the number of outer-layer twists, called "face turns". A move to turn an outer layer two quarter (90°) turns in the same direction would be counted as two moves in the quarter turn metric (QTM), but as one turn in the face metric (FTM, or HTM "Half Turn Metric", or OBTM "Outer Block Turn Metric"). The maximal number of face turns needed to solve any instance of the Rubik's Cube is 20, and the maximal number of quarter turns is 26. These numbers are also the diameters of the corresponding Cayley graphs of the Rubik's Cube group. In STM (slice turn metric), the minimal number of turns is unknown. There are many algorithms to solve scrambled Rubik's Cubes. An algorithm that solves a cube in the minimum number of moves is known as God's algorithm. Move notation To denote a sequence of moves on the 3×3×3 Rubik's Cube, this article uses "Singmaster notation", which was developed by David Singmaster. The following are standard moves, which do not move centre cubies of any face to another location: The letters L, R, F, B, U, and D indicate a clockwise quarter turn of the left, right, front, back, up, and down face respectively. Half turns are indicated by appending a 2. A counterclockwise quarter turn is indicated by appending a prime symbol ( ′ ). These notations are human-oriented, taking clockwise as the positive direction, unlike the mathematical convention, in which counterclockwise is positive. The following are non-standard moves, which move centre cubies of faces to other locations: The letters M, S and E are used to denote the turning of a middle layer. M represents turning the layer between the R and L faces 1 quarter turn top to bottom. S represents turning the layer between the F and B faces 1 quarter turn clockwise, as seen
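The two metrics can be illustrated with a short, hypothetical helper that counts a move sequence in both (assuming standard Singmaster notation for outer-layer moves, with an optional 2 or ' suffix):

```python
# Count a move sequence in the two metrics described above: every
# outer-layer twist is one face turn (FTM), while a half turn (suffix "2")
# counts as two quarter turns (QTM).
def count_metrics(sequence):
    """Return (quarter_turn_count, face_turn_count) for a move string."""
    qtm = ftm = 0
    for move in sequence.split():
        ftm += 1                                # one face turn per move
        qtm += 2 if move.endswith("2") else 1   # half turn = 2 quarter turns
    return qtm, ftm

# Four quarter turns plus one half turn: R U R' U' F2
print(count_metrics("R U R' U' F2"))  # (6, 5)
```

The same sequence is thus one move longer in QTM than in FTM, which is why a cube position can be optimal in one metric but not in the other.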
https://en.wikipedia.org/wiki/PSMD10
26S proteasome non-ATPase regulatory subunit 10 or gankyrin is an enzyme that in humans is encoded by the PSMD10 gene. First isolated in 1998 by Tanaka et al., gankyrin is an oncoprotein that is a component of the 19S regulatory cap of the proteasome. Structurally, it contains a 33-amino acid ankyrin repeat that forms a series of alpha helices. It plays a key role in regulating the cell cycle via protein-protein interactions with the cyclin-dependent kinase CDK4. It also binds closely to the E3 ubiquitin ligase MDM2, which is a regulator of the degradation of p53 and retinoblastoma protein, both transcription factors involved in tumor suppression and found mutated in many cancers. Gankyrin also has an anti-apoptotic effect and is overexpressed in certain types of tumor cells such as hepatocellular carcinoma. Function The 26S proteasome is a multicatalytic proteinase complex with a highly ordered structure composed of 2 complexes, a 20S core and a 19S regulator. The 20S core is composed of 4 rings of 28 non-identical subunits; 2 rings are composed of 7 alpha subunits and 2 rings are composed of 7 beta subunits. The 19S regulator is composed of a base, which contains 6 ATPase subunits and 2 non-ATPase subunits, and a lid, which contains up to 10 non-ATPase subunits. Proteasomes are distributed throughout eukaryotic cells at a high concentration and cleave peptides in an ATP/ubiquitin-dependent process in a non-lysosomal pathway. An essential function of a modified proteasome, the immunoproteasome, is the processing of class I MHC peptides. This gene encodes a non-ATPase subunit of the 19S regulator. Two transcripts encoding different isoforms have been described. Pseudogenes have been identified on chromosomes 3 and 20. Clinical significance The proteasome and its subunits are of clinical significance for at least two reasons: (1) a compromised complex assembly or a dysfunctional proteasome can be associated with the underlying pathophysiology of specific disea
https://en.wikipedia.org/wiki/Schmutzdecke
Schmutzdecke (German, "dirt cover" or dirty skin, sometimes wrongly spelled schmutzedecke) is a hypogeal biological layer formed on the surface of a slow sand filter. The schmutzdecke is the layer that provides the effective purification in potable water treatment, the underlying sand providing the support medium for this biological treatment layer. The composition of any particular schmutzdecke varies, but will typically consist of a gelatinous biofilm matrix of bacteria, fungi, protozoa, rotifera and a range of aquatic insect larvae. As a schmutzdecke ages, more algae tend to develop, and larger aquatic organisms may be present including some bryozoa, snails and annelid worms.
https://en.wikipedia.org/wiki/Mitochondrial%20permeability%20transition%20pore
The mitochondrial permeability transition pore (mPTP or MPTP; also referred to as PTP, mTP or MTP) is a protein that is formed in the inner membrane of the mitochondria under certain pathological conditions such as traumatic brain injury and stroke. Its opening allows an increase in the permeability of the mitochondrial membranes to molecules of less than 1500 Daltons in molecular weight. Induction of the permeability transition pore, mitochondrial membrane permeability transition (mPT or MPT), can lead to mitochondrial swelling and cell death through apoptosis or necrosis depending on the particular biological setting. Roles in pathology The MPTP was originally discovered by Haworth and Hunter in 1979 and has been found to be involved in neurodegeneration, hepatotoxicity from Reye-related agents, cardiac necrosis and nervous and muscular dystrophies among other deleterious events inducing cell damage and death. MPT is one of the major causes of cell death in a variety of conditions. For example, it is key in neuronal cell death in excitotoxicity, in which overactivation of glutamate receptors causes excessive calcium entry into the cell. MPT also appears to play a key role in damage caused by ischemia, as occurs in a heart attack and stroke. However, research has shown that the MPT pore remains closed during ischemia, but opens once the tissues are reperfused with blood after the ischemic period, playing a role in reperfusion injury. MPT is also thought to underlie the cell death induced by Reye's syndrome, since chemicals that can cause the syndrome, like salicylate and valproate, cause MPT. MPT may also play a role in mitochondrial autophagy. Cells exposed to toxic amounts of Ca2+ ionophores also undergo MPT and death by necrosis. Structure While the MPT modulation has been widely studied, little is known about its structure. Initial experiments by Szabó and Zoratti proposed the MPT may comprise Voltage Dependent Anion Channel (VDAC) molecules. Nevertheless,
https://en.wikipedia.org/wiki/Biological%20soil%20crust
Biological soil crusts are communities of living organisms on the soil surface in arid and semi-arid ecosystems. They are found throughout the world with varying species composition and cover depending on topography, soil characteristics, climate, plant community, microhabitats, and disturbance regimes. Biological soil crusts perform important ecological roles including carbon fixation, nitrogen fixation and soil stabilization; they alter soil albedo and water relations and affect germination and nutrient levels in vascular plants. They can be damaged by fire, recreational activity, grazing and other disturbances and can require long time periods to recover composition and function. Biological soil crusts are also known as biocrusts or as cryptogamic, microbiotic, microphytic, or cryptobiotic soils. Natural history Biology and composition Biological soil crusts are most often composed of fungi, lichens, cyanobacteria, bryophytes, and algae in varying proportions. These organisms live in intimate association in the uppermost few millimeters of the soil surface, and are the biological basis for the formation of soil crusts. Cyanobacteria Cyanobacteria are the main photosynthetic component of biological soil crusts, in addition to other photosynthetic taxa such as mosses, lichens, and green algae. The most common cyanobacteria found in soil crusts belong to large filamentous species such as those in the genus Microcoleus. These species form bundled filaments that are surrounded by a gelatinous sheath of polysaccharides. These filaments bind soil particles throughout the uppermost soil layers, forming a 3-D net-like structure that holds the soil together in a crust. Other common cyanobacteria species are those in the genus Nostoc, which can also form sheaths and sheets of filaments that stabilize the soil. Some Nostoc species are also able to fix atmospheric nitrogen gas into bio-available forms such as ammonia. Bryophytes Bryophytes in soil crusts include moss
https://en.wikipedia.org/wiki/Resolution%20proof%20compression%20by%20splitting
In mathematical logic, proof compression by splitting is an algorithm that operates as a post-process on resolution proofs. It was proposed by Scott Cotton in his paper "Two Techniques for Minimizing Resolution Proofs". The Splitting algorithm is based on the following observation: Given a proof of unsatisfiability π and a variable v, it is easy to re-arrange (split) the proof into a proof of v and a proof of ¬v, and the recombination of these two proofs (by an additional resolution step) may result in a proof smaller than the original. Note that applying Splitting to a proof using a variable v does not invalidate a later application of the algorithm using a different variable w. Actually, the method proposed by Cotton generates a sequence of proofs π1, π2, …, where each proof πi+1 is the result of applying Splitting to πi. During the construction of the sequence, if a proof πj happens to be too large, πj+1 is set to be the smallest proof in {π1, …, πj}. For achieving a better compression/time ratio, a heuristic for variable selection is desirable. For this purpose, Cotton defines the "additivity" of a resolution step (with antecedents p and n and resolvent r). Then, for each variable v, a score is calculated by summing the additivity of all the resolution steps in the proof with pivot v together with the number of these resolution steps. Denoting each score calculated this way by add(v), each variable is selected with a probability proportional to its score. To split a proof of unsatisfiability π into a proof of v and a proof of ¬v, Cotton proposes the following: Let ℓ denote a literal, and denote by the resolvent of clauses p and q the clause obtained when ℓ ∈ p and ¬ℓ ∈ q. Then, define the corresponding map on nodes in the resolution dag of π. Also, let ⊥ be the empty clause in π. Then, the proof of v and the proof of ¬v are obtained by applying this map starting from ⊥, for ℓ = v and ℓ = ¬v respectively. Notes Proof theory
https://en.wikipedia.org/wiki/Chartered%20Mathematician
Chartered Mathematician (CMath) is a professional qualification in Mathematics awarded to professional practising mathematicians by the Institute of Mathematics and its Applications (IMA) in the United Kingdom. Chartered Mathematician is the IMA's highest professional qualification; achieving it is done through a rigorous peer-reviewed process. It provides formal recognition of a member’s qualifications in Mathematics, professional practice of Mathematics at an advanced level, technical standing, and commitment to remain at the forefront of Mathematics theory and practice throughout their professional career. The required standard for Chartered Mathematician registration is typically an accredited UK MMath degree, at least five years of peer-reviewed professional practice of advanced Mathematics, attainment of a senior level of technical standing, and an ongoing commitment to Continuing Professional Development. A Chartered Mathematician is entitled to use the post-nominal letters CMath, in accordance with the Royal Charter granted to the IMA by the Privy Council. The profession of Chartered Mathematician is a 'regulated profession' under the European professional qualification directives. See also Institute of Mathematics and its Applications
https://en.wikipedia.org/wiki/Coset%20construction
In mathematics, the coset construction (or GKO construction) is a method of constructing unitary highest weight representations of the Virasoro algebra, introduced by Peter Goddard, Adrian Kent and David Olive (1986). The construction produces the complete discrete series of highest weight representations of the Virasoro algebra and demonstrates their unitarity, thus establishing the classification of unitary highest weight representations.
https://en.wikipedia.org/wiki/Digital%20signature
A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature on a message gives a recipient confidence that the message came from a sender known to the recipient. Digital signatures are a standard element of most cryptographic protocol suites, and are commonly used for software distribution, financial transactions, contract management software, and in other cases where it is important to detect forgery or tampering. Digital signatures are often used to implement electronic signatures, which include any electronic data that carries the intent of a signature, but not all electronic signatures use digital signatures. Electronic signatures have legal significance in some countries, including Canada, South Africa, the United States, Algeria, Turkey, India, Brazil, Indonesia, Mexico, Saudi Arabia, Uruguay, Switzerland, Chile and the countries of the European Union. Digital signatures employ asymmetric cryptography. In many instances, they provide a layer of validation and security to messages sent through a non-secure channel: Properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based, and must be implemented properly to be effective. They can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming their private key remains secret. Further, some non-repudiation schemes offer a timestamp for the digital signature, so that even if the private key is exposed, the signature is valid. Digitally signed messages may be anything representable as a bitstring: examples include electronic mail, contracts, or
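The asymmetric mechanism can be sketched with a toy, deliberately insecure example (textbook RSA with a tiny key and no padding; real systems use vetted schemes such as RSA-PSS or Ed25519 from an audited library):

```python
# Toy "textbook RSA" signature, for illustration only: the holder of the
# private exponent d signs a hash of the message; anyone holding the
# public key (n, e) can verify it. The key below is far too small and the
# scheme lacks padding -- never use this construction in practice.
import hashlib

# Hypothetical toy key: n = p*q with p = 61, q = 53, so lcm(p-1, q-1) = 780.
n, e, d = 3233, 17, 413   # d is the private exponent: (e * d) % 780 == 1

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)                  # signature = h^d mod n

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h     # recover h using only the public key

sig = sign(b"hello")
print(verify(b"hello", sig))             # True: signature matches message

forged = (sig + 1) % n                   # any altered signature fails
print(verify(b"hello", forged))          # False: tampering is detected
```

The example shows the core asymmetry: verification needs only the public pair (n, e), while producing a valid signature requires the private exponent d, which is what non-repudiation rests on.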
https://en.wikipedia.org/wiki/FireMonkey
FireMonkey (abbreviated FMX) is a cross-platform GUI framework developed by Embarcadero Technologies for use in Delphi, C++Builder or Python, using Object Pascal, C++ or Python to build cross-platform applications for Windows, macOS, iOS, and Android. A third-party library, FMX Linux, enables the building of FireMonkey applications on Linux. History FireMonkey is based on VGScene, which was designed by Eugene Kryukov of KSDev from Ulan-Ude, Russia as a next-generation vector-based GUI. In 2011, VGScene was sold to the American company Embarcadero Technologies. Kryukov continued to be involved in the development of FireMonkey. Along with the traditional Windows-only Visual Component Library (VCL), the cross-platform FireMonkey framework has been included as part of Delphi, C++Builder and RAD Studio since version XE2. FireMonkey started out as a vector-based UI framework, but evolved to be a bitmap- or raster-based UI framework to give greater control of the look to match target platform appearances. In 2021, FireMonkey for Python was released by Embarcadero, which was designed by Lucas Moura Belo. FireMonkey for Python is a natively compiled Python module powered by the Python4Delphi library. It gives Python developers access to the FireMonkey GUI framework and is freely redistributable. It fully supports Windows, macOS, Linux, and Android GUI development. Overview FireMonkey is a cross-platform UI framework, and allows developers to create user interfaces that run on Windows, macOS, iOS and Android. It is written to use the GPU where possible, and applications take advantage of the hardware acceleration features available in Direct2D on Windows Vista, Windows 7, Windows 8 and Windows 10, OpenGL on macOS, OpenGL ES on iOS and Android, and on Windows platforms where Direct2D is not available (Windows XP for example) it falls back to GDI+. Applications and interfaces developed with FireMonkey are separated into two categories, HD and 3D. An HD application is a traditional
https://en.wikipedia.org/wiki/Roger%20Evans%20Howe
Roger Evans Howe (born May 23, 1945) is the William R. Kenan, Jr. Professor Emeritus of Mathematics at Yale University, and Curtis D. Robert Endowed Chair in Mathematics Education at Texas A&M University. He is known for his contributions to representation theory, in particular for the notion of a reductive dual pair and the Howe correspondence, and for his contributions to mathematics education.

Biography

He attended Ithaca High School, then Harvard University as an undergraduate, becoming a Putnam Fellow in 1964. He obtained his Ph.D. from the University of California, Berkeley in 1969. His thesis, titled On representations of nilpotent groups, was written under the supervision of Calvin Moore. Between 1969 and 1974, Howe taught at the State University of New York in Stony Brook before joining the Yale faculty in 1974. His doctoral students include Ju-Lee Kim, Jian-Shu Li, Zeev Rudnick, Eng-Chye Tan, and Chen-Bo Zhu. He moved to Texas A&M University in 2015.

He has been a fellow of the American Academy of Arts and Sciences since 1993, and a member of the National Academy of Sciences since 1994. Howe received a Lester R. Ford Award in 1984. In 2006 he was awarded the American Mathematical Society Distinguished Public Service Award in recognition of his "multifaceted contributions to mathematics and to mathematics education." In 2012 he became a fellow of the American Mathematical Society. In 2015 he received the inaugural Award for Excellence in Mathematics Education. A conference in his honor was held at the National University of Singapore in 2006, and another at Yale University in 2015.

Selected works

Roger Howe, "Tamely ramified supercuspidal representations of ", Pacific Journal of Mathematics 73 (1977), no. 2, 437–460.
Roger Howe and Calvin C. Moore, "Asymptotic properties of unitary representations", Journal of Functional Analysis 32 (1979), no. 1, 72–96.
Roger Howe, "θ-series and invariant theory", in Automorphic forms, representations and L-functions (Proc.
https://en.wikipedia.org/wiki/Digital%20television%20adapter
A digital television adapter (DTA), commonly known as a converter box or decoder box, is a television tuner that receives a digital television (DTV) transmission and converts the digital signal into an analog signal that can be received and displayed on an analog television set. Some also have an HDMI output, since some TVs with HDMI do not have a digital tuner. The input digital signal may be an over-the-air terrestrial television signal received by a television antenna, or a signal from a digital cable system. The term normally does not refer to satellite TV, which has always required a set-top box, either to operate the big satellite dish or to serve as the integrated receiver/decoder (IRD) in the case of direct-broadcast satellites (DBS).

In North America and South Korea, these ATSC tuner boxes convert from ATSC to NTSC; in most of Europe and other places such as Australia and most Asian countries, they convert from Digital Video Broadcasting (DVB) to PAL; and in Japan, the Philippines and almost all countries in Latin America, they convert from ISDB-T to either NTSC or PAL. Because the DTV transition did nothing to reduce the number of broadcast television system standards (and in fact further complicated them), and because of varying frequency allocations and bandplans, there are many other combinations specific to other countries.

United States

On June 12, 2009, all full-power analog television transmissions ended in the United States. Viewers who watch broadcast television on older analog TV sets must use a digital converter box. Since many of the low-power TV stations continued to broadcast in analog for a while, consumers who watch low-power stations needed an adapter with an analog passthrough feature that allows the viewer to watch both digital and analog signals. Viewers who receive their television signals through cable or satellite were not affected by this change and did not need a digital television adapter (however, see the cable TV exception below). A
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Equatorial%20Guinea
This is a list of flags used in Equatorial Guinea in Africa. For more information about the national flag, see flag of Equatorial Guinea.

National flag

Ethnic group flags

Municipality flags

Historical flags

See also

Flag of Equatorial Guinea
Coat of arms of Equatorial Guinea
https://en.wikipedia.org/wiki/Fast%20Infoset
Fast Infoset (or FI) is an international standard that specifies a binary encoding format for the XML Information Set (XML Infoset) as an alternative to the XML document format. It aims to provide more efficient serialization than the text-based XML format. FI is effectively a lossless compression for XML, analogous to gzip, except that while the original formatting is lost, no information is lost in the conversion from XML to FI and back to XML. While the purpose of general-purpose compression is to reduce physical data size, FI aims to optimize both document size and processing performance.

The Fast Infoset specification is defined by both the ITU-T and the ISO/IEC standards bodies. FI is officially defined in ITU-T Rec. X.891 and ISO/IEC 24824-1, both entitled Fast Infoset. The standard was published by ITU-T on May 14, 2005, and by ISO on May 4, 2007. The Fast Infoset standard document can be downloaded from the ITU website. Though the document does not assert intellectual property (IP) restrictions on implementation or use, page ii warns that it has received notices and the subject may not be completely free of IP assertions.

A common misconception is that FI requires ASN.1 tool support. Although the formal specification uses ASN.1 notation, the standard also includes an Encoding Control Notation (ECN) description, and ASN.1 tools are not required by implementations. An alternative to FI is FleXPath.

Structure

The underlying file format is ASN.1-based, with tag/length/value blocks. Text values of attributes and elements are stored with length prefixes rather than end delimiters, and data segments do not require escaping of special characters. The equivalent of end tags ("terminators") is needed only at the end of a list of child elements. Binary data is transmitted in native form and need not be converted to a transmission format such as base64. Fast Infoset is a higher-level format built on ASN.1 forms and notation. Element and attribute names are stored within the octet stream, unl
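The tag/length/value layout described above can be illustrated with a toy encoder. This is a minimal sketch of the general length-prefix idea, not the actual Fast Infoset octet layout defined in X.891; the tag numbers and the fixed 4-byte length field are illustrative assumptions.

```python
import struct

def encode_tlv(tag: int, value: bytes) -> bytes:
    """Toy tag/length/value record: 1-byte tag, 4-byte big-endian length,
    then the raw value. Illustrates the length-prefix principle only."""
    return struct.pack(">BI", tag, len(value)) + value

def decode_tlv(buf: bytes, offset: int = 0):
    """Read one record starting at offset; return (tag, value, next_offset)."""
    tag, length = struct.unpack_from(">BI", buf, offset)
    start = offset + 5  # 1 tag byte + 4 length bytes
    return tag, buf[start:start + length], start + length

# Text value: no escaping of characters like <, & or quotes is needed,
# because the length prefix tells the parser exactly where the value ends.
text = encode_tlv(0x01, b'a < b & "c"')

# Binary value: carried in native form, avoiding base64's ~33% expansion.
blob = encode_tlv(0x02, bytes(range(8)))

stream = text + blob
tag1, val1, next_off = decode_tlv(stream)
tag2, val2, _ = decode_tlv(stream, next_off)
```

Because each value's extent is known up front, a decoder can also skip records it does not care about without scanning their contents, which is part of what makes length-prefixed formats faster to process than delimiter-based text.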