https://en.wikipedia.org/wiki/Rydberg%20molecule
A Rydberg molecule is an electronically excited chemical species. Electronically excited molecular states are generally quite different in character from electronically excited atomic states. However, particularly for highly electronically excited molecular systems, the interaction of the ionic core with an excited electron can take on the general aspects of the interaction between the proton and the electron in the hydrogen atom. The spectroscopic assignment of these states follows the Rydberg formula, named after the Swedish physicist Johannes Rydberg, and they are called Rydberg states of molecules. Rydberg series are associated with partially removing an electron from the ionic core. Each Rydberg series of energies converges on an ionization energy threshold associated with a particular ionic core configuration. These quantized Rydberg energy levels can be associated with the quasiclassical Bohr atomic picture. The closer a state lies to the ionization threshold energy, the higher its principal quantum number and the smaller the energy difference between near-threshold Rydberg states. As the electron is promoted to higher energy levels in a Rydberg series, the spatial excursion of the electron from the ionic core increases and the system more closely resembles the Bohr quasiclassical picture. The Rydberg states of molecules with low principal quantum numbers can interact with the other excited electronic states of the molecule, which can cause shifts in energy. The assignment of molecular Rydberg states often involves following a Rydberg series from intermediate to high principal quantum numbers. The energy of Rydberg states can be refined by including a correction called the quantum defect in the Rydberg formula. The quantum defect correction can be associated with the presence of a distributed ionic core. The experimental study of molecular Rydberg states has been conducted with traditional methods for generations. However, the development of laser-based techniques such as Reson
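The series structure and quantum-defect correction described above are conventionally summarized by the Rydberg formula; a standard form (with symbols as commonly defined, not taken from this article) is

```latex
E_{n\ell} = E_{\infty} - \frac{R}{(n - \delta_{\ell})^{2}}
```

where $E_{\infty}$ is the ionization threshold on which the series converges, $R$ is the (mass-corrected) Rydberg constant, $n$ the principal quantum number, and $\delta_{\ell}$ the quantum defect, which absorbs the effect of the distributed ionic core. As $n \to \infty$, $E_{n\ell} \to E_{\infty}$ and the spacing between adjacent levels shrinks roughly as $2R/n^{3}$, matching the description of near-threshold states above.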
https://en.wikipedia.org/wiki/ACTR1B
Beta-centractin is a protein that in humans is encoded by the ACTR1B gene. This gene encodes a 42.3 kD subunit of dynactin, a macromolecular complex consisting of 10 subunits ranging in size from 22 to 150 kD. Dynactin binds to both microtubules and cytoplasmic dynein. It is involved in a diverse array of cellular functions, including ER-to-Golgi transport, the centripetal movement of lysosomes and endosomes, spindle formation, chromosome movement, nuclear positioning, and axonogenesis. This subunit, like ACTR1A, is an actin-related protein. These two proteins are of equal length and share 90% amino acid identity. They are present in a constant ratio of approximately 1:15 in the dynactin complex.
https://en.wikipedia.org/wiki/FAO%20GM%20Foods%20Platform
The FAO GM Foods Platform is a web platform where participating countries can share information on their assessments of the safety of genetically modified (recombinant-DNA) foods and feeds based on the Codex Alimentarius. It also allows for sharing of assessments of low-level GMO contamination (LLP, low-level presence). The platform was set up by the Food and Agriculture Organization of the United Nations, and was launched at the FAO headquarters in Rome on 1 July 2013. The information uploaded to the platform is freely available to be read.
https://en.wikipedia.org/wiki/Basque%20Center%20for%20Applied%20Mathematics
The Basque Center for Applied Mathematics (BCAM) is a research center for applied mathematics, created with the support of the Basque Government and the University of the Basque Country. The BCAM headquarters are at Alda. Mazarredo, 14 in Bilbao, the capital of the province of Biscay in the Basque Country of northern Spain. Background In January 2007, the Department of Education, Universities and Research of the Basque Government set up Ikerbasque, the Basque Foundation for Science, which was charged with three objectives: the attraction and recovery of front-rank, consolidated researchers; the creation of new research centers with standards of excellence; and social outreach for science. The creation and current activity of BCAM – the Basque Center for Applied Mathematics – fall within the framework of the second of these objectives. In early 2008, Ikerbasque commissioned Enrique Zuazua to carry out a prospective study on the viability of setting up a center for mathematical research in the Basque Country. In March 2008, the Ikerbasque Board of Trustees decided to go ahead with the creation of such a center as part of the BERC program (Basque Excellence Research Centres), later to become known as BCAM – Basque Center for Applied Mathematics. At the same time, the first international call for submissions for the posts of director, managers and scientists was made. The center is located in the province of Biscay, given the extensive industrial fabric the region has traditionally had as well as its current development of R+D+i activities. BCAM was officially created as a non-profit association on September 1, 2008, backed by the following institutions: Ikerbasque, the University of the Basque Country (UPV/EHU), Innobasque (the Basque Foundation for Innovation) and the Biscay Government. Scientific Directors of BCAM to date: Jose Antonio Lozano, in charge since January 10, 2019; Luis Vega González, from May 8, 2013 until January 9, 2019; Tomás Cha
https://en.wikipedia.org/wiki/Recursive%20grammar
In computer science, a grammar is informally called a recursive grammar if it contains production rules that are recursive, meaning that expanding a non-terminal according to these rules can eventually lead to a string that includes the same non-terminal again. Otherwise it is called a non-recursive grammar. For example, a grammar for a context-free language is left recursive if there exists a non-terminal symbol A that can be put through the production rules to produce a string with A as the leftmost symbol. All types of grammars in the Chomsky hierarchy can be recursive, and it is recursion that allows the production of infinite sets of words. Properties A non-recursive grammar can produce only a finite language, and every finite language can be produced by a non-recursive grammar. For example, a straight-line grammar produces just a single word. A recursive context-free grammar that contains no useless rules necessarily produces an infinite language. This property forms the basis for an algorithm that can test efficiently whether a context-free grammar produces a finite or infinite language.
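The recursion test described above can be made concrete. The following sketch (my own illustrative helper, not from any standard library) represents a context-free grammar as a dict from non-terminals to lists of right-hand sides, computes which non-terminals each non-terminal can reach through the production rules, and reports the grammar recursive if some non-terminal can reach itself:

```python
def is_recursive(rules):
    """Return True if some non-terminal can derive a string containing itself.

    `rules` maps each non-terminal to a list of right-hand sides; a right-hand
    side is a sequence of symbols, where symbols that are keys of `rules` are
    non-terminals and everything else is a terminal.
    """
    # reach[A] = non-terminals appearing directly in some right-hand side of A
    reach = {a: set() for a in rules}
    for a, rhss in rules.items():
        for rhs in rhss:
            reach[a].update(s for s in rhs if s in rules)
    # Take the transitive closure: A reaches whatever its reachables reach.
    changed = True
    while changed:
        changed = False
        for a in rules:
            new = set()
            for b in reach[a]:
                new |= reach[b]
            if not new <= reach[a]:
                reach[a] |= new
                changed = True
    # Recursive iff some non-terminal can reach itself.
    return any(a in reach[a] for a in rules)

# Recursive grammar for the infinite language {a^n b^n}: S -> a S b | a b
recursive = {"S": [["a", "S", "b"], ["a", "b"]]}
# Non-recursive straight-line grammar producing the single word "ab"
straight_line = {"S": [["A", "B"]], "A": [["a"]], "B": [["b"]]}
```

The straight-line grammar illustrates the finiteness property in the text: with no non-terminal able to reach itself, only finitely many derivations exist.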
https://en.wikipedia.org/wiki/Saccharina%20latissima
Saccharina latissima is a brown alga (class Phaeophyceae) of the family Laminariaceae. It is known by the common names sugar kelp, sea belt, and Devil's apron, and is one of the species known in Japanese cuisine as kombu. It is found in the north Atlantic Ocean, Arctic Ocean and north Pacific Ocean. It is common along the coast of Northern Europe as far south as Galicia, Spain, the coast of North America north of Massachusetts and central California, and the coast of Asia south to Korea and Japan. Description Saccharina latissima is a yellowish brown colour with a long, narrow, undivided blade. The central band is dimpled while the margins are smoother with a wavy edge; this causes greater water movement around the blade, aiding gas exchange. The frond is anchored by stout rhizoids about 5 mm in diameter and is attached to rock in the intertidal and sublittoral zones by a claw-like holdfast and a short, pliable, cylindrical stipe. Ecology Saccharina latissima is an ecologically important system. It is a primary producer, delivering plant material to the coastal food web. The three-dimensional forests it forms also serve as a habitat for animals, resulting in high biodiversity. Fish, shellfish and other animals find food and hiding places within these forests. It is a known host to the pathogenic fungus Phycomelaina laminariae. Human use Sugar kelp is used as a food in many places where it grows and is one of many species often called kombu. Sugar kelp can be used as a vegetable in salads but is most frequently used in soups and stocks, where it provides savory flavors; it is especially highly valued in vegetarian cooking. Kombu is a key component of miso soup. The savory flavor of sugar kelp comes from free amino acids such as glutamate. Monosodium glutamate was first isolated from Saccharina. Sugar kelp gets its name from the sugar alcohol mannitol it contains, which is extracted from it for use as a sugar substitute, especially for chewing gu
https://en.wikipedia.org/wiki/Reinnervation
Reinnervation is the restoration, either by spontaneous cellular regeneration or by surgical grafting, of nerve supply to a body part from which it has been lost or damaged. See also Denervation Neuroregeneration Targeted reinnervation
https://en.wikipedia.org/wiki/AssemblyScript
AssemblyScript is a TypeScript-based programming language that is optimized for, and statically compiled to, WebAssembly (compiled by the reference AssemblyScript compiler). Resembling ECMAScript and JavaScript, but with static types, the language is developed by the AssemblyScript Project with contributions from the AssemblyScript community. Overview In 2017, support for WebAssembly, a standard definition for a low-level bytecode and an associated virtual machine, became widespread among major web browsers, providing web developers with a lower-level and potentially higher-performance compilation target for client-side programs and applications to execute within web browsers, in addition to the interpreted (and in practice dynamically compiled) JavaScript web scripting language. WebAssembly allows programs and code to be statically compiled ahead of time in order to run at potentially native-level or "bare-metal" performance within web browsers, without the overhead of interpretation or the initial latency of dynamic compilation. With the adoption of WebAssembly in major web browsers, Alon Zakai, creator of Emscripten, an LLVM/Clang-based C and C++ compiler that targeted a subset of JavaScript called asm.js, added support for WebAssembly as a compilation target in Emscripten, allowing C and C++ programs and code to be compiled directly to WebAssembly. While Emscripten and similar compilers allow web developers to write new code, or port existing code, in a high-level language such as C, C++, Go, or Rust to WebAssembly to achieve potentially higher, native-level execution performance in web browsers, this forces web developers accustomed to developing client-side web scripts and applications in ECMAScript/JavaScript (the de facto client-side programming language in web browsers) to use a different language for targeting WebAssembly. AssemblyScript, as a variant of TypeScript that is syntactically similar to Jav
https://en.wikipedia.org/wiki/St%C3%A9phanie%20P.%20Lacour
Stéphanie P. Lacour (born 1975) is a French neurotechnologist and full professor holding the Foundation Bertarelli Chair in Neuroprosthetic Technology at the Swiss Federal Institute of Technology in Lausanne (EPFL). Lacour is a pioneer in the field of stretchable electronics and directs a laboratory at EPFL which specializes in the development of Soft BioElectronic Interfaces to enable seamless integration of neuroprosthetic devices into human tissues. Lacour is also a co-founding member and director of the Center for Neuroprosthetics at the EPFL Satellite Campus in Geneva, Switzerland. Early life and education Lacour was born around 1975 and grew up in France. She completed her M.Sc. in Integrated Electronic Devices at the Institut National des Sciences Appliquées (INSA) in Lyon, France, in 1998. After completing her master's degree, Lacour remained at INSA and conducted her graduate studies in Electrical Engineering from 1998 to 2001. In 2001, Lacour moved to the United States to work under the mentorship of Dr. Sigurd Wagner as postdoctoral researcher at Princeton University. During her time at Princeton, Lacour made significant advancements in developing stretchable electronics that could be implemented in biological systems more easily than typical electronic hardware. Lacour finished her postdoctoral work in 2005 and began further postdoctoral work at the University of Cambridge under the mentorship of Dr. James Fawcett where she began to explore the therapeutic potential of her technologies in repairing nerves after injury. In 2007, Lacour received the University Research Fellowship from the Royal Society and became a Research Project Manager and head of the Stretchable Bioelectronics group at the Nanoscience Centre in Cambridge, UK. Stretchable integrated circuits In 2003, Lacour published a paper looking at implementing thin gold stripes onto elastomeric substances to make electronic skins more flexible and stretchable. She found that inducing micromet
https://en.wikipedia.org/wiki/Gel%20doc
A gel doc, also known as a gel documentation system, gel image system or gel imager, refers to equipment widely used in molecular biology laboratories for the imaging and documentation of nucleic acids and proteins suspended within polyacrylamide or agarose gels. Genetic information is stored in DNA. Polyacrylamide or agarose gel electrophoresis procedures are carried out to examine nucleic acids or proteins in order to analyze the genetic data. For protein analysis, two-dimensional gel electrophoresis (2-DGE) is employed, which is one of the methods most frequently used in comparative proteomic investigations; it can distinguish thousands of proteins in a single run. Proteins are separated using 2-DGE first based on their isoelectric points (pIs) in one dimension and then based on their molecular mass in the other. After that, a thorough qualitative and quantitative analysis of the proteomes is performed using gel documentation with software image assessment methods on the 2-DGE gels stained for protein visibility. Gels are typically stained with ethidium bromide or other nucleic acid stains such as GelGreen. Generally, a gel doc includes an ultraviolet (UV) light transilluminator, a hood or a darkroom to shield external light sources and protect the user from UV exposure, a computer, software and a high-performance CCD camera for image capturing. In commercial gel documentation systems, image quality generally increases with the size of the optical image sensor. With advancements in CMOS camera sensors such as Sony's Pregius and Exmor series, low-light-capable cameras made of these sensors are also being incorporated in gel documentation systems. The dynamic range of the imaging device is a significant barrier to detecting the complete concentration range of cellular proteins in 2-DGE gels. Dense protein regions are extremely luminous and require just brief exposures in fluorescence imagers with full-field illumination and CCD cameras. Longer exposures are need
https://en.wikipedia.org/wiki/CDK7%20pathway
CDK7 is a cyclin-dependent kinase that is not easily classified. CDK7 is both a CDK-activating kinase (CAK) and a component of the general transcription factor TFIIH. Introduction An intricate network of cyclin-dependent kinases (CDKs) is organized in a pathway to ensure that each cell accurately replicates its DNA and segregates it equally between the two daughter cells. One CDK – the CDK7 complex – cannot be so easily classified. CDK7 is both a CDK-activating kinase (CAK), which phosphorylates cell-cycle CDKs within the activation segment (T-loop), and a component of the general transcription factor TFIIH, which phosphorylates the C-terminal domain (CTD) of the largest subunit of Pol II. A proposed mode of CDK7 inhibition is the phosphorylation of cyclin H by CDK7 itself or by another kinase. CDK7 has been observed as a prerequisite to S phase entry and mitosis. CDK7 is activated by the binding of cyclin H, and its substrate specificity is altered by the binding of MAT1. The free form of the resulting complex, CDK7-cycH-MAT1, operates as a CDK-activating kinase (CAK). In vivo, CDK7 forms a stable complex with cyclin H and MAT1 only when its T-loop is phosphorylated on either the Ser164 or Thr170 residue. The T-loop To be active, most CDKs require not only a cyclin partner but also phosphorylation at one particular site, which corresponds to Thr161 in human CDK1, and which is located within the so-called T-loop (or activation loop) of kinase subdomain VIII. CDK1, CDK2 and CDK4 all require T-loop phosphorylation for maximum activity. The free form of CDK7-cycH-MAT1 phosphorylates the T-loops of CDK1, CDK2, CDK4 and CDK6. For all CDK substrates of CDK7, phosphorylation by CDK7 occurs following the binding of the substrate kinase to its associated cyclin. This two-step process has been observed in CDK2, where the association of CDK2 with cyclin A results in a conformational change that primes the catalytic site for binding of its ATP substrate and phosphorylation by CDK7 o
https://en.wikipedia.org/wiki/Apache%20CXF
Apache CXF is an open source software project developing a Web services framework. It originated in 2006 as the combination of Celtix, developed by IONA Technologies, and XFire, developed by a team hosted at Codehaus. These two projects were combined at the Apache Software Foundation. The name "CXF" was derived by combining "Celtix" and "XFire". Description CXF is often used with Apache ServiceMix, Apache Camel and Apache ActiveMQ in service-oriented architecture (SOA) infrastructure projects. Apache CXF supports the Java programming interfaces JAX-WS, JAX-RS, JBI, JCA, JMX, JMS over SOAP, Spring, and the XML data binding frameworks JAXB, Aegis, Apache XMLBeans and SDO. CXF includes the following:
- Web services standards support: SOAP, WS-Addressing, WS-Policy, WS-ReliableMessaging, WS-SecureConversation, WS-Security, WS-SecurityPolicy
- JAX-WS API for Web service development, with Java-first support and WSDL-first tooling
- JAX-RS (JSR 339 2.0) API for RESTful Web service development
- JavaScript programming model for service and client development
- Maven tooling
- CORBA support
- HTTP, JMS and WebSocket transport layers
- Embeddable deployment: ServiceMix or other JBI containers, Geronimo or other Java EE containers, Tomcat or other servlet containers, OSGi
- Reference OSGi Remote Services implementation
IONA Technologies distributes a commercial Enterprise version of Apache CXF under the name FUSE Services Framework. See also the Axis Web services framework; Apache Wink, a project in incubation with JAX-RS support; and the list of web service frameworks.
https://en.wikipedia.org/wiki/Tree%20wrap
A tree wrap or tree wrapping is a wrap of garden tree saplings, roses, and other delicate plants to protect them from frost damage (e.g. frost cracks or complete death). In the past it was made of straw (straw wrap). Now there are commercial tree wrap materials, such as crepe paper or burlap tapes. Tree wrapping is also used to protect saplings from sunscald and drying of the bark. A disadvantage of tape wrapping is dampness under the wrapping during rainy seasons.
https://en.wikipedia.org/wiki/Percolation%20critical%20exponents
In the context of the physical and mathematical theory of percolation, a percolation transition is characterized by a set of universal critical exponents, which describe the fractal properties of the percolating medium at large scales and sufficiently close to the transition. The exponents are universal in the sense that they only depend on the type of percolation model and on the space dimension. They are expected to not depend on microscopic details such as the lattice structure, or whether site or bond percolation is considered. This article deals with the critical exponents of random percolation. Percolating systems have a parameter p which controls the occupancy of sites or bonds in the system. At a critical value p_c, the mean cluster size goes to infinity and the percolation transition takes place. As one approaches p_c, various quantities either diverge or go to a constant value by a power law in |p - p_c|, and the exponent of that power law is the critical exponent. While the exponent of that power law is generally the same on both sides of the threshold, the coefficient or "amplitude" is generally different, leading to a universal amplitude ratio. Description Thermodynamic or configurational systems near a critical point or a continuous phase transition become fractal, and the behavior of many quantities in such circumstances is described by universal critical exponents. Percolation theory is a particularly simple and fundamental model in statistical mechanics which has a critical point, and a great deal of work has been done in finding its critical exponents, both theoretically (limited to two dimensions) and numerically. Critical exponents exist for a variety of observables, but most of them are linked to each other by exponent (or scaling) relations. Only a few of them are independent, and the choice of the fundamental exponents depends on the focus of the study at hand. One choice is the set motivated by the cluster size distribution, another choice is motivated
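As a concrete illustration of the power laws described above (standard definitions, not taken from this article), three commonly used exponents govern the percolation strength P_inf (the probability that a site belongs to the infinite cluster), the mean cluster size S, and the correlation length xi near the threshold:

```latex
P_{\infty} \sim (p - p_c)^{\beta}, \qquad
S \sim |p - p_c|^{-\gamma}, \qquad
\xi \sim |p - p_c|^{-\nu}
```

In two dimensions these exponents are known exactly: $\beta = 5/36$, $\gamma = 43/18$, $\nu = 4/3$, consistent with the remark that theoretical results are largely limited to two dimensions.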
https://en.wikipedia.org/wiki/Slepian%E2%80%93Wolf%20coding
In information theory and communication, Slepian–Wolf coding, also known as the Slepian–Wolf bound, is a result in distributed source coding discovered by David Slepian and Jack Wolf in 1973. It characterizes the theoretically achievable rates for the lossless coding of two correlated sources. Problem setup Distributed coding is the coding of two (in this case) or more dependent sources with separate encoders and a joint decoder. Given two statistically dependent, independent and identically distributed, finite-alphabet random sequences X^n and Y^n, the Slepian–Wolf theorem gives a theoretical bound for the lossless coding rate for distributed coding of the two sources. Theorem The bounds for the lossless coding rates are as follows. If the two sources are both encoded and decoded independently, the lowest rates achievable for lossless compression are R_X >= H(X) and R_Y >= H(Y), where H(X) and H(Y) are the entropies of X and Y. However, with joint decoding, if a vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that a much better compression rate can be achieved. As long as the total rate R_X + R_Y is larger than the joint entropy H(X,Y) and neither source is encoded at a rate below its conditional entropy given the other source, distributed coding can achieve an arbitrarily small error probability for long sequences. A special case of distributed coding is compression with decoder side information, where a source Y is available at the decoder side but not accessible at the encoder side. This can be treated as the condition that Y has already been encoded at its own entropy, while X is to be encoded at the conditional rate H(X|Y). In other words, two isolated sources can compress data as efficiently as if they were communicating with each other. The whole system then operates in an asymmetric way (the compression rates for the two sources are asymmetric). This bound has been extended to the case of more than two correlated sources by Thomas M. Cover in 1975, and similar results were obtained in 1976 by
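The gap between independent and joint decoding can be checked numerically. The sketch below uses a made-up joint distribution of two correlated binary sources (an illustrative example, not from the article) and computes the rate saving H(X) + H(Y) - H(X,Y) that Slepian–Wolf joint decoding makes available over independent decoding:

```python
import math

def H(p):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# Hypothetical joint distribution of two correlated binary sources X and Y:
# they agree with probability 0.9.
joint = {("0", "0"): 0.45, ("0", "1"): 0.05,
         ("1", "0"): 0.05, ("1", "1"): 0.45}

# Marginal distributions of X and Y.
px, py = {}, {}
for (x, y), q in joint.items():
    px[x] = px.get(x, 0.0) + q
    py[y] = py.get(y, 0.0) + q

Hx, Hy, Hxy = H(px), H(py), H(joint)

# Independent decoding needs Rx + Ry >= H(X) + H(Y);
# Slepian–Wolf joint decoding only needs Rx + Ry >= H(X,Y).
savings = Hx + Hy - Hxy  # positive whenever X and Y are statistically dependent
```

For this distribution H(X) = H(Y) = 1 bit, while the joint entropy is below 2 bits, so joint decoding saves roughly half a bit per symbol pair; the saving equals the mutual information I(X;Y) and vanishes when the sources are independent.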
https://en.wikipedia.org/wiki/Videcom%20International
Videcom International Limited is a United Kingdom travel technology company based in Henley-on-Thames. It designs, develops and provides modern computer reservations systems to airlines and the travel industry, specializing in the hosting and distribution of airline sales. The system is connected to the Global Distribution Systems of Sabre, Amadeus, Galileo, Worldspan and Abacus, which travel agents use to make airline bookings, and is also connected to other airline systems for interline bookings. The IATA airline designator for Videcom is U1. History Founded in 1972, the company originally manufactured computer terminals for use throughout the aviation and travel sectors, including airline reservation centers, airport operations and travel agency systems. Over 450,000 computer terminals were manufactured between 1972 and 2002, and refurbishments are still supported today, with many units still in use globally at airlines and airports. The company diversified into airline software development in 1987. In 1976, Videcom International, along with British Airways, British Caledonian and CCL, launched Travicom, the world's first multi-access reservations system. It was wholly based on Videcom technology. They formed a network to thousands of travel agents in the UK, providing distribution for 49 subscribing international airlines, including British Airways, British Caledonian, TWA, Pan American World Airways, Qantas, Singapore Airlines, Air France, Lufthansa, SAS, Air Canada, KLM, Alitalia, Cathay Pacific, JAL and some African airlines. It allowed agents and airlines to communicate via a common distribution language and network, handling 97% of UK airline business trade bookings by 1987. The system went on to be replicated by Videcom in other areas of the world, including the Middle East (DMARS), New Zealand, Kuwait (KMARS), Ireland, the Caribbean, the United States and Hong Kong. The Travicom multi-access system was eventually replaced by Galileo in the UK and in 1988, Trave
https://en.wikipedia.org/wiki/Mir-549%20microRNA%20precursor%20family
In molecular biology, mir-549 microRNA is a short RNA molecule. MicroRNAs regulate the expression levels of other genes by several mechanisms. See also MicroRNA
https://en.wikipedia.org/wiki/Neolocal%20residence
Neolocal residence is a type of post-marital residence in which a newly married couple resides separately from both the husband's natal household and the wife's natal household. Neolocal residence forms the basis of most developed nations, especially in the West, and is also found among some nomadic communities. Upon marriage, each partner is expected to move out of their parents' household and establish a new residence, thus forming the core of an independent nuclear family. Neolocal residence involves the creation of a new household when a child marries, or even when they reach adulthood and become socially and economically active. Neolocal residence and nuclear family domestic structures are found in societies where geographical mobility is important. In Western societies, they are consistent with the frequent moves that are necessary due to choices and changes within a supply- and demand-regulated labor market. They are also prevalent in hunting and gathering economies, where nomadic movements are intrinsic to the subsistence strategy. In Western countries, employment in large corporations or the military often calls for frequent relocations, making it nearly impossible for extended families to remain together, thus creating new generations of families. Description In neolocal residence, newly formed couples form their own separate household units, creating what is considered a nuclear family. This contrasts with other forms of post-marital residence, such as patrilocal residence and matrilocal residence, in which the couple resides with or near the husband's family (patrilocal residence) or the wife's family (matrilocal residence). Neolocality first appeared in Northwestern Europe. It was from there brought to British colonies in the Americas. As American colonists expanded westward, this form of residence remained. Although some believe neolocal residence came as a result of industrialization, there is evidence of neolocality in England from before indu
https://en.wikipedia.org/wiki/3rd%20meridian%20east
The meridian 3° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, the Atlantic Ocean, Europe, Africa, the Southern Ocean, and Antarctica to the South Pole. The 3rd meridian east forms a great circle with the 177th meridian west. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 3rd meridian east passes through: the Arctic Ocean; the Atlantic Ocean; the North Sea; France, passing just west of Lille and through Narbonne; the Mediterranean Sea; the island of Majorca; the Mediterranean Sea again, passing just east of the island of Cabrera; Algeria, passing just west of Algiers, and several further countries to the south; the Atlantic Ocean, passing just west of Bouvet Island; the Southern Ocean; and Antarctica (Queen Maud Land).
https://en.wikipedia.org/wiki/Kuznetsov%20trace%20formula
In analytic number theory, the Kuznetsov trace formula is an extension of the Petersson trace formula. The Kuznetsov or relative trace formula connects Kloosterman sums at a deep level with the spectral theory of automorphic forms. Originally it was stated as follows. Let g be a sufficiently "well behaved" function. Then identities of the following type are called Kuznetsov trace formulas: an integral-transform part equated with a spectral part. The integral-transform part is some integral transform of g, and the spectral part is a sum of Fourier coefficients, taken over spaces of holomorphic and non-holomorphic modular forms, twisted with some integral transform of g. The Kuznetsov trace formula was found by Kuznetsov while studying the growth of weight-zero automorphic functions. Using estimates on Kloosterman sums he was able to derive estimates for the Fourier coefficients of modular forms in cases where Pierre Deligne's proof of the Weil conjectures was not applicable. It was later translated by Jacquet to a representation-theoretic framework. Let G be a reductive group over a number field F and H a subgroup of G. While the usual trace formula studies the harmonic analysis on G, the relative trace formula is a tool for studying the harmonic analysis on the symmetric space H\G. For an overview and numerous applications see Cogdell, J. W. and I. Piatetski-Shapiro, The Arithmetic and Spectral Analysis of Poincaré Series, volume 13 of Perspectives in Mathematics, Academic Press, Boston, MA (1990).
https://en.wikipedia.org/wiki/Xavier%20Guichard
Xavier Guichard (1870–1947) was a French Director of Police, archaeologist and writer. His 1936 book Eleusis Alesia: Enquête sur les origines de la civilisation européenne is an early example of speculative thinking concerning Earth mysteries, based on his observations of apparent alignments between Alesia-like place names on a map of France. His theories are analogous to those of his near-contemporary in the United Kingdom, Alfred Watkins, concerning Ley lines. Xavier Guichard appears as a character in the novels of Georges Simenon, where he is the superior of the fictional detective Jules Maigret. See also 366 geometry
https://en.wikipedia.org/wiki/Imitation%20SWI
ISWI (Imitation SWItch) is one of the five major DNA chromatin remodeling complex types, or subfamilies, found in most eukaryotic organisms. ISWI remodeling complexes place nucleosomes along segments of DNA at regular intervals. The placement of nucleosomes by ISWI protein complexes typically results in silencing of the DNA, because the nucleosome placement prevents transcription. ISWI, like the closely related SWI/SNF subfamily, is an ATP-dependent chromatin remodeler. However, the chromatin remodeling activities of ISWI and SWI/SNF are distinct and mediate the binding of non-overlapping sets of DNA transcription factors. ISWI was the first ATPase subunit of this chromatin remodeling family to be isolated, in the fruit fly Drosophila. The protein shows a high level of similarity to the SWI/SNF chromatin remodeling family within the ATPase domain; outside the ATPase domain the similarity is lost, ISWI possessing a SANT domain instead of the bromodomain. The ISWI protein can interact with several proteins, giving three different chromatin-remodeling complexes in Drosophila melanogaster: NURF (nucleosome remodeling factor), CHRAC (chromatin remodeling and assembly complex) and ACF (ATP-utilising chromatin remodeling and assembly factor). In vitro, the ISWI protein alone can assemble nucleosomes on linear DNA and can move nucleosomes on linear DNA from the center to the extremities. Inside the CHRAC complex, ISWI catalyzes the inverse reaction, moving nucleosomes from the extremities to the center. Generation of loops in dsDNA A single-molecule study using atomic force microscopy (AFM) and tethered particle motion (TPM) observed that ISWI can bind naked DNA in the absence of ATP, wrapping the DNA around the protein. In the presence of ATP, the protein generates DNA loops while simultaneously generating negative supercoils in the template. The first figure in that paper shows three AFM images
https://en.wikipedia.org/wiki/Dirichlet%20average
Dirichlet averages are averages of functions under the Dirichlet distribution. An important class are Dirichlet averages with a certain argument structure, namely F(b; z) = ∫ f(u · z) dμ_b(u), where u · z = u_1 z_1 + ⋯ + u_N z_N and μ_b is the Dirichlet measure with dimension N. They were introduced by the mathematician Bille C. Carlson in the 1970s, who noticed that this simple notion of averaging generalizes and unifies many special functions, among them generalized hypergeometric functions and various orthogonal polynomials. They also play an important role in the solution of elliptic integrals (see Carlson symmetric form) and are connected to statistical applications in various ways, for example in Bayesian analysis. Notable Dirichlet averages Some Dirichlet averages are so fundamental that they are named. A few are listed below. R-function The (Carlson) R-function is the Dirichlet average of the power function f(x) = x^t. Exact solutions: for suitable parameters it is possible to write an exact solution in the form of an iterative sum. S-function The (Carlson) S-function is the Dirichlet average of the exponential function f(x) = e^x,
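Since a Dirichlet average is an expectation over the Dirichlet distribution, it can be approximated numerically by sampling. The sketch below is a rough illustration (the function name and tolerance are invented for the example, and this is not Carlson's notation): it estimates F(b; z) = E[f(u · z)] with u drawn from a Dirichlet distribution, and checks the estimate against the exactly known linear case f(t) = t, for which F(b; z) is the b-weighted mean of z.

```python
import numpy as np

def dirichlet_average(f, b, z, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the Dirichlet average of f:
    F(b; z) = E[f(u . z)], where u ~ Dirichlet(b) and z has dimension N."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(b, size=n_samples)            # samples on the simplex
    return np.mean(f(u @ np.asarray(z, dtype=float)))

# Sanity check: for f(t) = t the average is linear in z, so
# F(b; z) = sum_i (b_i / sum(b)) * z_i exactly.
b = [2.0, 3.0]
z = [1.0, 4.0]
est = dirichlet_average(lambda t: t, b, z)
exact = (2.0 * 1.0 + 3.0 * 4.0) / 5.0               # = 2.8
```

With 100,000 samples the Monte Carlo error for this example is on the order of 10⁻³, so the estimate lands very close to the exact value.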
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Nuclear%20and%20Particle%20Science
The Annual Review of Nuclear and Particle Science is a peer-reviewed academic journal that publishes review articles about nuclear and particle science. As of 2023, Journal Citation Reports lists the journal's 2022 impact factor as 12.4, ranking it first of 19 journal titles in the category "Physics, Nuclear" and second of 29 journal titles in the category "Physics, Particles and Fields". Beginning in 2020, the Annual Review of Nuclear and Particle Science is published open access under the Subscribe to Open (S2O) publishing model. The journal was first created by the National Research Council's Committee on Nuclear Science, which partnered with Annual Reviews to produce the first volume in 1952. The initial title of the journal was Annual Review of Nuclear Science. Annual Reviews published all volumes independently beginning with Volume 3. In 1978, the journal's name was changed to Annual Review of Nuclear and Particle Science. In its history, it has had eight editors, four of whom had tenures of 10 or more years: Emilio Segrè, John David Jackson, Chris Quigg, and Barry R. Holstein. History In the early 1950s, the National Research Council's Committee on Nuclear Science announced its support for an annual volume of review articles that covered recent developments in the field of nuclear science. One of the key proponents of creating the journal was Alberto F. Thompson, who had previously helped establish Nuclear Science Abstracts in 1948. The Committee on Nuclear Science consulted the nonprofit publishing company Annual Reviews for advice, and Annual Reviews agreed to publish the initial and subsequent volumes. Members of the Committee acted as the editorial board for the first volume, which was published in December 1952. Published under the title Annual Review of Nuclear Science, it covered nuclear science developments in 1950. Beginning with Volume 2, James G. Beckerley was editor, with Martin D. Kamen, Donald F. Mastick, and Leonard I. Schiff as associate e
https://en.wikipedia.org/wiki/Cryptographic%20protocol
A cryptographic protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program. Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects: Key agreement or establishment Entity authentication Symmetric encryption and message authentication material construction Secured application-level data transport Non-repudiation methods Secret sharing methods Secure multi-party computation For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections. It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support. There are other types of cryptographic protocols as well, and even the term itself has various readings; cryptographic application protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs what is known as the Diffie–Hellman key exchange, which, although it is only a part of TLS, may be seen as a complete cryptographic protocol in itself for other applications. Advanced cryptographic protocols A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated co
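As a rough illustration of how a key-agreement building block like Diffie–Hellman works in isolation, here is a toy sketch in Python. The tiny 32-bit prime is purely for demonstration and offers no security; real protocols use 2048-bit-plus groups or elliptic curves, and wrap the exchange in authentication to prevent man-in-the-middle attacks.

```python
import secrets

# Demonstration parameters only -- far too small to be secure.
p = 4_294_967_291   # a prime; real deployments use 2048-bit-plus primes
g = 5

a = secrets.randbelow(p - 3) + 2   # Alice's private exponent
b = secrets.randbelow(p - 3) + 2   # Bob's private exponent

A = pow(g, a, p)   # Alice sends A to Bob over the open channel
B = pow(g, b, p)   # Bob sends B to Alice

# Each side combines its own secret with the other's public value;
# (g^b)^a = (g^a)^b mod p, so both arrive at the same shared secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

In a full protocol such as TLS, the shared secret would then feed a key-derivation step that produces the symmetric keys for the data-transport phase.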
https://en.wikipedia.org/wiki/Journal%20of%20Wildlife%20Management
The Journal of Wildlife Management is a peer-reviewed scientific journal devoted to the ecology of non-domesticated animal species. It is published by John Wiley & Sons on behalf of The Wildlife Society. History July 1937 – first issue of the journal. See also Wildlife Monographs Wildlife Society Bulletin
https://en.wikipedia.org/wiki/List%20of%20psilocybin%20mushroom%20species
Psilocybin mushrooms are mushrooms which contain the hallucinogenic substances psilocybin, psilocin, baeocystin and norbaeocystin. The mushrooms are collected and grown as an entheogen and recreational drug, despite being illegal in many countries. Many psilocybin mushrooms are in the genus Psilocybe, but species across several other genera contain the drugs. General Conocybe Galerina Gymnopilus Inocybe Panaeolus Pholiotina Pluteus Psilocybe Conocybe Conocybe siligineoides R. Heim Conocybe velutipes (Velen.) Hauskn. & Svrcek Galerina Galerina steglichii Besl Gymnopilus Gymnopilus aeruginosus (Peck) Singer (photo) Gymnopilus braendlei (Peck) Hesler Gymnopilus cyanopalmicola Guzm.-Dáv Gymnopilus dilepis (Berk. & Broome) Singer Gymnopilus dunensis H. Bashir, Jabeen & Khalid Gymnopilus intermedius (Singer) Singer Gymnopilus lateritius (Pat.) Murrill Gymnopilus luteofolius (Peck) Singer (photo) Gymnopilus luteoviridis Thiers (photo) Gymnopilus luteus (Peck) Hesler (photo) Gymnopilus palmicola Murrill Gymnopilus purpuratus (Cooke & Massee) Singer (photo) Gymnopilus subpurpuratus Guzmán-Davalos & Guzmán Gymnopilus subspectabilis Hesler Gymnopilus validipes (Peck) Hesler Gymnopilus viridans Murrill Inocybe Inocybe aeruginascens Babos Inocybe caerulata Matheny, Bougher & G.M. Gates Inocybe coelestium Kuyper Inocybe corydalina Inocybe corydalina var. corydalina Quél. Inocybe corydalina var. erinaceomorpha (Stangl & J. Veselsky) Kuyper Inocybe haemacta (Berk. & Cooke) Sacc. Inocybe tricolor Kühner Most species in this genus are poisonous. Panaeolus Panaeolus affinis (E. Horak) Ew. Gerhardt Panaeolus africanus Ola'h Panaeolus axfordii Y. Hu, S.C. Karunarathna, P.E. Mortimer & J.C. Xu Panaeolus bisporus (Malencon and Bertault) Singer and Weeks Panaeolus cambodginiensis (OlaĽh et Heim) Singer & Weeks. (Merlin & Allen, 1993) Panaeolus chlorocystis (Singer & R.A. Weeks) Ew. Gerhardt Panaeolus cinctulus (Bolton) Britzelm. Panaeolus cyanescens (Berk. & Broome) Sacc. Pan
https://en.wikipedia.org/wiki/P700
P700, or photosystem I primary donor, is the reaction-center chlorophyll a molecular dimer associated with photosystem I in plants, algae, and cyanobacteria. Etymology Its name is derived from the word "pigment" (P) and the presence of a major bleaching band centered around 695–700 nm in the flash-induced absorbance difference spectra of P700/P700+•. Components The structure of P700 consists of a heterodimer with two distinct chlorophyll molecules, most notably chlorophyll a and chlorophyll a′, giving it the additional name of "special pair". Nevertheless, the special pair of P700 behaves as if it were just one unit. This species is vital due to its ability to absorb light energy with a wavelength approximately between 430 nm and 700 nm, and to transfer high-energy electrons to a series of acceptors that are situated near it. Action and functions Photosystem I operates with the functions of producing NADPH, the reduced form of NADP, at the end of the photosynthetic reaction through electron transfer, and of providing energy to a proton pump and eventually ATP, for instance in cyclic electron transport. Excitation When photosystem I absorbs light, an electron is excited to a higher energy level in the P700 chlorophyll. The resulting P700 with an excited electron is designated as P700*, which is a strong reducing agent due to its very negative redox potential of −1.2 V. Electron transport chain Following the excitation of P700, one of its electrons is passed on to an electron acceptor, A0, triggering charge separation and producing an anionic A0− and a cationic P700+. Subsequently, electron transfer continues from A0 to a phylloquinone molecule known as A1, and then to three iron-sulfur clusters. Type I photosystems use iron-sulfur cluster proteins as terminal electron acceptors. Thus, the electron is transferred from FX to another iron-sulfur cluster, FA, and then passed on to the last iron-sulfur cluster serving as an electron acceptor, FB. Eventually, the electron is t
https://en.wikipedia.org/wiki/Wirth%27s%20law
Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster. The adage is named after Niklaus Wirth, a computer scientist who discussed it in his 1995 article "A Plea for Lean Software". History Wirth attributed the saying to Martin Reiser, who in the preface to his book on the Oberon System wrote: "The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness." Other observers had noted this for some time before; indeed, the trend was becoming obvious as early as 1987. He cites two factors contributing to the acceptance of ever-growing software: "rapidly growing hardware performance" and "customers' ignorance of features that are essential versus nice-to-have". Enhanced user convenience and functionality supposedly justify the increased size of software, but Wirth argues that people are increasingly misinterpreting complexity as sophistication, that "these details are cute but not essential, and they have a hidden cost". As a result, he calls for the creation of "leaner" software and pioneered the development of Oberon, a software system developed between 1986 and 1989 based on nothing but hardware. Its primary goal was to show that software can be developed with a fraction of the memory capacity and processor power usually required, without sacrificing flexibility, functionality, or user convenience. Other names The law was restated in 2009 and attributed to Google co-founder Larry Page. It has been referred to as Page's law. The first use of that name is attributed to fellow Google co-founder Sergey Brin at the 2009 Google I/O Conference. Other common forms use the names of the leading hardware and software companies of the 1990s, Intel and Microsoft, or their CEOs, Andy Grove and Bill Gates, for example "What Intel giveth, Microsoft taketh away" and Andy and
https://en.wikipedia.org/wiki/Template%20method%20pattern
In object-oriented programming, the template method is one of the behavioral design patterns identified by Gamma et al. in the book Design Patterns. The template method is a method in a superclass, usually an abstract superclass, and defines the skeleton of an operation in terms of a number of high-level steps. These steps are themselves implemented by additional helper methods in the same class as the template method. The helper methods may be either abstract methods, in which case subclasses are required to provide concrete implementations, or hook methods, which have empty bodies in the superclass. Subclasses can (but are not required to) customize the operation by overriding the hook methods. The intent of the template method is to define the overall structure of the operation, while allowing subclasses to refine, or redefine, certain steps. Overview This pattern has two main parts: The "template method" is implemented as a method in a base class (usually an abstract class). This method contains code for the parts of the overall algorithm that are invariant. The template ensures that the overarching algorithm is always followed. In the template method, portions of the algorithm that may vary are implemented by sending self messages that request the execution of additional helper methods. In the base class, these helper methods are given a default implementation, or none at all (that is, they may be abstract methods). Subclasses of the base class "fill in" the empty or "variant" parts of the "template" with specific algorithms that vary from one subclass to another. It is important that subclasses do not override the template method itself. At run-time, the algorithm represented by the template method is executed by sending the template message to an instance of one of the concrete subclasses. Through inheritance, the template method in the base class starts to execute. When the template method sends a message to self requesting one of the helper
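The structure described above can be sketched in Python (the exporter classes are invented for illustration): the template method export() fixes the order of the steps, while subclasses supply the abstract steps and may optionally override the hook.

```python
from abc import ABC, abstractmethod

class DataExporter(ABC):
    """Base class holding the invariant algorithm skeleton."""

    def export(self):                       # the template method -- never overridden
        records = self.load()
        records = self.transform(records)   # hook with a default implementation
        return self.serialize(records)

    @abstractmethod
    def load(self):
        """Abstract step: subclasses must supply the data source."""

    def transform(self, records):
        """Hook method: default is a no-op; subclasses may override."""
        return records

    @abstractmethod
    def serialize(self, records):
        """Abstract step: subclasses choose the output format."""

class CsvExporter(DataExporter):
    def load(self):
        return [("a", 1), ("b", 2)]

    def serialize(self, records):
        return "\n".join(f"{k},{v}" for k, v in records)

result = CsvExporter().export()   # the base-class template drives the subclass steps
```

Calling export() on the concrete subclass runs the base-class skeleton, which dispatches to the subclass implementations of load and serialize and to the inherited default transform.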
https://en.wikipedia.org/wiki/Hibbertia%20scandens
Hibbertia scandens, sometimes known by the common names snake vine, climbing guinea flower and golden guinea vine, is a species of flowering plant in the family Dilleniaceae and is endemic to eastern Australia. It is a climber or scrambler with lance-shaped or egg-shaped leaves with the narrower end towards the base, and yellow flowers with more than thirty stamens arranged around three to seven glabrous carpels. Description Hibbertia scandens is a climber or scrambler with stems long. The leaves are lance-shaped or egg-shaped with the narrower end towards the base, long and wide, sessile and often stem-clasping, with the lower surface silky-hairy. The flowers are arranged in leaf axils, each flower on a peduncle long. The sepals are long and the petals are yellow, long, with more than thirty stamens surrounding the three to seven glabrous carpels. Flowering occurs in most months and the fruit is an orange aril. Plants near the coast tend to be densely hairy with spatula-shaped leaves and have flowers with six or seven carpels, whilst those further inland are usually more or less glabrous with tapering leaves and flowers with three or four carpels. The flowers have been reported as having an unpleasant odour variously described as similar to mothballs or animal urine, or sweet but with "a pronounced faecal element". Taxonomy Snake vine was first formally described in 1799 by the German botanist Carl Willdenow, who gave it the name Dillenia scandens in Species Plantarum. In 1805, the Swedish botanist Jonas Dryander transferred the species into the genus Hibbertia as H. scandens in the Annals of Botany. The specific epithet (scandens) is derived from Latin, and means "climbing". Three varieties of H. scandens have been described and the names are accepted by the Australian Plant Census but not by the National Herbarium of New South Wales: Hibbertia scandens var. glabra (Maiden) C.T.White; Hibbertia scandens var. oxyphylla Domin; Hibbertia scandens (Willd.)
https://en.wikipedia.org/wiki/Two-point%20tensor
Two-point tensors, or double vectors, are tensor-like quantities which transform as Euclidean vectors with respect to each of their indices. They are used in continuum mechanics to transform between reference ("material") and present ("configuration") coordinates. Examples include the deformation gradient and the first Piola–Kirchhoff stress tensor. As with many applications of tensors, Einstein summation notation is frequently used. To clarify this notation, capital indices are often used to indicate reference coordinates and lowercase for present coordinates. Thus, a two-point tensor will have one capital and one lower-case index; for example, A_jM. Continuum mechanics A conventional tensor can be viewed as a transformation of vectors in one coordinate system to other vectors in the same coordinate system. In contrast, a two-point tensor transforms vectors from one coordinate system to another. That is, a conventional tensor T = T_qm (e_q ⊗ e_m) actively transforms a vector u to a vector v such that v = T·u, where v and u are measured in the same space and their coordinate representation is with respect to the same basis (denoted by the "e"). In contrast, a two-point tensor G will be written as G = G_pM (e_p ⊗ E_M) and will transform a vector U in the E system to a vector v in the e system as v = G·U. The transformation law for a two-point tensor Suppose we have two coordinate systems, one primed and one unprimed, and that a vector's components transform between them as v'_p = Q_pq v_q. For tensors, suppose we then have T'_pq = Q_pr T_rs Q_qs. A tensor in the unprimed system is T = T_pq (e_p ⊗ e_q). In the primed system, let the same tensor be given by T' = T'_pq (e'_p ⊗ e'_q). We can say T'_pq = Q_pr T_rs Q_qs. Then T' = Q T Qᵀ is the routine tensor transformation. But a two-point tensor between these systems is just F = F_pq (e'_p ⊗ e_q), which transforms as F' = Q F. Simple example The most mundane example of a two-point tensor is the transformation tensor, the Q in the above discussion. Note that v' = Q·v. Now, writing out in full, v = v_q e_q, and also v' = v'_p e'_p. This then requires Q to be of the form Q = Q_pq (e'_p ⊗ e_q). By definition of the tensor product, (e'_p ⊗ e_q)·e_r = δ_qr e'_p. So we can write Q·v = Q_pq (e'_p ⊗ e_q)·(v_r e_r) = Q_pq v_q e'_p = v'_p e'_p = v'. Incorporating (),
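The one-leg-versus-two-leg distinction can be checked numerically. In this hypothetical numpy sketch, rotating the present-configuration basis by Q changes a two-point tensor by a single factor of Q (only the spatial leg), whereas an ordinary one-point tensor picks up Q on both legs:

```python
import numpy as np

rng = np.random.default_rng(1)

def rotation(theta):
    """Rotation about the z-axis: an orthogonal change of basis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

Q = rotation(0.7)                  # change of basis in the present ("e") frame
F = rng.standard_normal((3, 3))    # two-point tensor components, e.g. a deformation gradient
U = rng.standard_normal(3)         # reference-frame vector (E basis, untouched by Q)

v = F @ U                          # v lives in the e frame
v_prime = Q @ v                    # re-express v in the rotated e' frame
F_prime = Q @ F                    # two-point transformation law: only ONE Q

assert np.allclose(F_prime @ U, v_prime)

# An ordinary (one-point) tensor on the e frame needs both legs rotated,
# T' = Q T Q^T, so that T'(Qu) = Q(Tu).
T = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
assert np.allclose((Q @ T @ Q.T) @ (Q @ u), Q @ (T @ u))
```

Because the reference vector U is not affected by a rotation of the present frame, the two-point components only need the single factor of Q for the consistency F′U = Qv to hold.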
https://en.wikipedia.org/wiki/Lebesgue%20point
In mathematics, given a locally Lebesgue integrable function f on R^k, a point x in the domain of f is a Lebesgue point if lim_{r→0+} (1 / λ(B(x,r))) ∫_{B(x,r)} |f(y) − f(x)| dy = 0. Here, B(x,r) is a ball centered at x with radius r > 0, and λ(B(x,r)) is its Lebesgue measure. The Lebesgue points of f are thus points where f does not oscillate too much, in an average sense. The Lebesgue differentiation theorem states that, given any f ∈ L¹(R^k), almost every x is a Lebesgue point of f.
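The "does not oscillate too much" condition can be illustrated numerically in one dimension. In this sketch (the function names are invented for the example), the average of |f(y) − f(x)| over shrinking intervals tends to 0 at a point of continuity of a step function, but stays near 1/2 at the jump, which is therefore not a Lebesgue point:

```python
import numpy as np

def avg_oscillation(f, x, r, n=200_001):
    """Mean of |f(y) - f(x)| over the interval B(x, r), a 1-D ball,
    approximated on a uniform grid."""
    y = np.linspace(x - r, x + r, n)
    return np.abs(f(y) - f(x)).mean()

step = lambda y: np.where(y >= 0.0, 1.0, 0.0)   # unit jump at y = 0

# At a point of continuity the average oscillation vanishes as r -> 0,
# so x = 1 is a Lebesgue point of the step function ...
assert avg_oscillation(step, 1.0, 0.01) < 1e-9

# ... but at the jump the average stays near 1/2 for every r,
# so x = 0 is not a Lebesgue point.
assert abs(avg_oscillation(step, 0.0, 0.01) - 0.5) < 1e-3
```

This matches the Lebesgue differentiation theorem: the jump point is a single point, hence a set of measure zero, so "almost every" point is still a Lebesgue point.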
https://en.wikipedia.org/wiki/Function%20%28computer%20programming%29
In computer programming, a function or subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Functions may be defined within programs, or separately in libraries that can be used by many programs. In different programming languages, a function may be called a routine, subprogram, subroutine, or procedure; in object-oriented programming (OOP), it may be called a method. Technically, these terms all have different definitions, and the nomenclature varies from language to language. The generic umbrella term callable unit is sometimes used. A function is often coded so that it can be started several times and from several places during one execution of the program, including from other functions, and then branch back (return) to the next instruction after the call, once the function's task is done. The idea of a subroutine was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC, and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines". Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack. Functions are a powerful programming tool, and the syntax of many programming languages includes support for writing and using subroutines. Judicious use of functions (for example, through the structured programming approach) will often substantially reduce the cost of developing and maintaining a large program, while increasing its quality and reliability. Functions, often collected into libraries, are an important mechanism for sharing and trading software. The
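As a minimal illustration of the idea (the clamp function is an invented example), a function packages one task as a unit and can be started from several call sites during one execution, returning control to each caller with a result:

```python
def clamp(value, low, high):
    """Reusable unit of work: callable from many places in one program."""
    if value < low:
        return low     # "branch back" to the caller with a result
    if value > high:
        return high
    return value

# The same function is started from several places, including indirectly
# through another expression that itself calls it:
readings = [clamp(v, 0, 100) for v in (-5, 42, 130)]
total = clamp(sum(readings), 0, 200)
```

Each call transfers control into the function body and, on return, execution resumes immediately after the call site, which is the mechanism the closed-subroutine concept formalized.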
https://en.wikipedia.org/wiki/Genomovar
Genomovar is a term commonly used within the genera Burkholderia and Agrobacterium to denote strains which are phylogenetically differentiable but phenotypically indistinguishable. A genomovar cannot be identified by standard biochemical tests; once a biochemical test that distinguishes it is found, the genomovar is classified as a separate species.
https://en.wikipedia.org/wiki/Phalaenopsis%20%C3%97%20lotubela
Phalaenopsis × lotubela is a species of epiphytic orchid native to the island of Sumatra in Indonesia. It is a hybrid of Phalaenopsis cornu-cervi and Phalaenopsis javanica. Discovery This species was first recognized as distinct in 2018 by Gus Benk, who had found it growing among other Phalaenopsis species, such as Phalaenopsis cornu-cervi, Phalaenopsis javanica and Phalaenopsis fimbriata. It had previously been known to local people. Etymology The specific epithet lotubela is derived from the name by which local people call this species. It refers to the hill where it was found. Description Phalaenopsis × lotubela is an epiphytic plant with 5–7 leaves, 12–20 cm long and 6–9 cm wide. The pendent, 14–18 cm long inflorescence bears 5–10 sequentially opening, 3–3.5 cm wide flowers with a yellow ground colour and reddish-brown transverse barring. The sepals have acute tips. The labellum, which bears white trichomes, is 1 cm long and 0.5 cm wide.
https://en.wikipedia.org/wiki/Sauce%20P%C3%A9rigueux
Sauce Périgueux and its derivative Sauce Périgourdine, named after the city of Périgueux, capital of the Périgord region of France, are savoury sauces. Their principal ingredients are madeira and truffles. Background Périgord in western France is noted for its truffles. A sauce Périgord, made of vegetables, ham or bacon, and mushrooms, sometimes with truffle peelings, is named after the region. The more elaborate sauce Périgueux is mentioned frequently in the recipes of Marie-Antoine Carême from the late 18th and early 19th centuries. Carême calls for the sauce to be served with birds including chicken, thrush, and pheasant, and fish including lemon sole, whiting, and salmon. The author James Bentley calls sauce Périgueux, "an essential ingredient of many dishes and part of the repertoire of every French chef". He comments that it is often mistakenly called Dordogne sauce, "which obscures its origins in the capital of Périgord". Ingredients and use In L'art de la cuisine française, Carême's collected recipes, the ingredients of the sauce are not listed, but Auguste Escoffier, in his 1934 book Ma Cuisine, specifies a demi-glace finished with madeira, and chopped truffles. He recommends the sauce as an accompaniment to eggs; crépinettes of mutton, lamb or chicken; poussin; boudin blanc; saddle of hare; young turkey; and pheasant. Alternatives to madeira are mentioned in Gustav Carlin's 1889 Le cuisinier moderne, which suggests champagne, and in James Bentley's 1986 Life and Food in the Dordogne, which replaces madeira with white wine and cognac. Sauce Périgueux is the classic accompaniment to a Tournedos Rossini. Some modern recipes use tinned truffles: among the writers calling for these are Elizabeth David in French Provincial Cooking (1960) and Simone Beck, Louisette Bertholle and Julia Child in Mastering the Art of French Cooking (1961). Beck and her colleagues recommend the sauce as an accompaniment to fillet of beef, fresh foie gras, ham, veal, egg dis
https://en.wikipedia.org/wiki/Slidex
Slidex was a hand-held, paper-based encryption system used at a low, front-line (platoon, troop and section) level in the British Army during the Second World War and later the Cold War period. It was replaced by the BATCO tactical code, which, in turn, has been largely made obsolete by the Bowman secure voice radios. Design Slidex used a series of vocabulary cards arranged in a grid of 12 rows and 17 columns. Each of the 204 resulting cells held a word or phrase, as well as a letter or number; the latter allowed the system to spell out words and transmit numbers. The cards were stored in a folding case that had a pair of cursors to facilitate locating cells. Messages were encrypted and decrypted using code strips that could be placed in holders along the top and left side of the vocabulary card. Blank vocabulary cards were provided to allow units to create a word set for a specific mission. See also Encryption algorithm Military intelligence Further reading "The Slidex RT Code", Cryptologia 8(2), April 1984
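A highly simplified model of the coordinate lookup can be sketched in Python. The vocabulary words and strip letterings below are entirely invented, and the real Slidex procedure involved details (cursors, number/letter cells, strip changes) not modelled here; the sketch only shows how a 12 × 17 card plus row/column strips turns a word into a two-letter coordinate pair:

```python
import string

# Hypothetical vocabulary card: 12 rows x 17 columns = 204 cells, each
# holding a word or phrase (the real cards were mission-specific).
ROWS, COLS = 12, 17
vocab = [f"word{r * COLS + c}" for r in range(ROWS) for c in range(COLS)]

# Code strips along the top and side assign a letter to each row/column;
# re-seating the strips changes the coordinates, not the card itself.
row_key = list(string.ascii_uppercase[:ROWS])      # A..L for the 12 rows
col_key = list(string.ascii_uppercase[:COLS])      # A..Q for the 17 columns

def encode(word):
    """Look up a word on the card and emit its row/column coordinate pair."""
    i = vocab.index(word)
    return row_key[i // COLS] + col_key[i % COLS]

def decode(pair):
    """Reverse lookup: coordinates back to the cell's word."""
    r = row_key.index(pair[0])
    c = col_key.index(pair[1])
    return vocab[r * COLS + c]
```

Decoding is just the inverse table lookup, so decode(encode(w)) returns w for any word on the card.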
https://en.wikipedia.org/wiki/Significant%20wave%20height
In physical oceanography, the significant wave height (SWH, HTSGW or Hs) is defined traditionally as the mean wave height (trough to crest) of the highest third of the waves (H1/3). It is usually defined as four times the standard deviation of the surface elevation – or equivalently as four times the square root of the zeroth-order moment (area) of the wave spectrum. The symbol Hm0 is usually used for that latter definition. The significant wave height (Hs) may thus refer to Hm0 or H1/3; the difference in magnitude between the two definitions is only a few percent. SWH is used to characterize sea state, including winds and swell. Origin and definition The original definition resulted from work by the oceanographer Walter Munk during World War II. The significant wave height was intended to mathematically express the height estimated by a "trained observer". It is commonly used as a measure of the height of ocean waves. Time domain definition Significant wave height H1/3, or Hs or Hsig, as determined in the time domain, directly from the time series of the surface elevation, is defined as the average height of that one-third of the N measured waves having the greatest heights: H1/3 = (1/(N/3)) Σ_{m=1}^{N/3} Hm, where Hm represents the individual wave heights, sorted into descending order of height as m increases from 1 to N. Only the highest one-third is used, since this corresponds best with visual observations of experienced mariners, whose vision apparently focuses on the higher waves. Frequency domain definition Significant wave height Hm0, defined in the frequency domain, is used both for measured and forecasted wave variance spectra. Most easily, it is defined in terms of the variance m0 or standard deviation ση of the surface elevation: Hm0 = 4 ση = 4 √m0, where m0, the zeroth moment of the variance spectrum, is obtained by integration of the variance spectrum. In case of a measurement, the standard deviation ση is the easiest and most accurate statistic to be used. Another wave-height statistic in common u
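The two definitions can be sketched directly in code. This is an illustrative sketch with invented function names and toy data; a real time-domain analysis would first extract individual wave heights from the elevation record, e.g. by zero-crossing analysis:

```python
import numpy as np

def h_one_third(heights):
    """Time-domain H1/3: mean of the highest one-third of wave heights."""
    h = np.sort(np.asarray(heights, dtype=float))[::-1]   # descending order
    n_third = max(1, len(h) // 3)
    return h[:n_third].mean()

def h_m0(surface_elevation):
    """Spectral-style Hm0 = 4 * standard deviation of the surface elevation."""
    eta = np.asarray(surface_elevation, dtype=float)
    return 4.0 * np.std(eta - eta.mean())

# Toy wave-height record (metres): the 3 highest of 9 waves set H1/3.
heights = [0.5, 1.2, 0.8, 2.1, 1.0, 1.7, 0.9, 1.4, 0.6]
hs = h_one_third(heights)            # mean of 2.1, 1.7 and 1.4

# For a unit-amplitude sine the elevation std is 1/sqrt(2),
# so Hm0 is about 4/sqrt(2) = 2.83.
eta = np.sin(np.linspace(0.0, 20 * np.pi, 10_000))
hm0 = h_m0(eta)
```

For real sea states the two statistics differ by only a few percent, as the article notes; the sine example just checks the 4σ definition against a signal whose variance is known in closed form.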
https://en.wikipedia.org/wiki/Asteroseismology
Asteroseismology is the study of oscillations in stars. Stars have many resonant modes and frequencies, and the path of sound waves passing through a star depends on the speed of sound, which in turn depends on local temperature and chemical composition. Because the resulting oscillation modes are sensitive to different parts of the star, they inform astronomers about the internal structure of the star, which is otherwise not directly possible from overall properties like brightness and surface temperature. Asteroseismology is closely related to helioseismology, the study of stellar pulsation specifically in the Sun. Though both are based on the same underlying physics, more and qualitatively different information is available for the Sun because its surface can be resolved. Theoretical background By linearly perturbing the equations defining the mechanical equilibrium of a star (i.e. mass conservation and hydrostatic equilibrium) and assuming that the perturbations are adiabatic, one can derive a system of four differential equations whose solutions give the frequency and structure of a star's modes of oscillation. The stellar structure is usually assumed to be spherically symmetric, so the horizontal (i.e. non-radial) component of the oscillations is described by spherical harmonics, indexed by an angular degree and azimuthal order . In non-rotating stars, modes with the same angular degree must all have the same frequency because there is no preferred axis. The angular degree indicates the number of nodal lines on the stellar surface, so for large values of , the opposing sectors roughly cancel out, making it difficult to detect light variations. As a consequence, modes can only be detected up to an angular degree of about 3 in intensity and about 4 if observed in radial velocity. By additionally assuming that the perturbation to the gravitational potential is negligible (the Cowling approximation) and that the star's structure varies more slowly with r
https://en.wikipedia.org/wiki/Kosaburo%20Hashiguchi
Kosaburo Hashiguchi is a Japanese mathematician and computer scientist at the Toyohashi University of Technology and Okayama University, known for his research in formal language theory. In 1988, he found the first algorithm to determine the star height of a regular language, a problem that had been open since 1963, when Lawrence Eggan solved the related star height problem, showing that there is no finite bound on the star height. Hashiguchi's algorithm for star height is extremely complex, and impractical on all but the smallest examples. A simpler method, showing also that the problem is PSPACE-complete, was provided in 2005 by Kirsten. Earlier, in 1979, Hashiguchi had also solved another open problem on regular languages: deciding whether, for a given language L, there exists a finite number k such that L* = L^0 ∪ L^1 ∪ ⋯ ∪ L^k. Hashiguchi is the uncle of Japanese-born American pianist Grace Nikae. Selected publications
https://en.wikipedia.org/wiki/Indefinite%20and%20fictitious%20numbers
Many languages have words expressing indefinite and fictitious numbers—inexact terms of indefinite size, used for comic effect, for exaggeration, as placeholder names, or when precision is unnecessary or undesirable. One technical term for such words is "non-numerical vague quantifier". Such words designed to indicate large quantities can be called "indefinite hyperbolic numerals". Specific values used as indefinite In Arabic, 1001 is used similarly, as in The Book of One Thousand and One Nights (lit. "a thousand nights and one night"). Many modern English book titles use this convention as well: 1,001 Uses for .... In Chinese, , 108,000 li, means a great distance. In English, some words that have a precise numerical definition are often used indefinitely: couple, 2; dozen, 12; score, 20; myriad, 10,000. Unlike cardinal numbers, these can be pluralized, in which case they require of before the noun (millions of dollars, but five million dollars) and require the indefinite article "a" in the singular (a million letters (indefinite) but one million letters (definite)). "Eleventy" (or "eleventy-seven"), popularized by The Lord of the Rings, is derived from an Old English word for 110 but is more commonly used as an indefinite number. In Hungarian, people often say "26 times" for expressing their impatience or dissatisfaction about a recurring act (for example, "26 times I told you that I know Peter!"). Csilliárd is also often used in the same "indefinitely large number" meaning as "zillion" in English. It is probably a humorous merging of the words csillag ("star", referring to the large number of stars) and milliárd ("billion", cf. long scale). In French, 36 and 36,000 are occasionally used as a synonym for "very many". In Hebrew and other Middle Eastern traditions, the number 40 is used to express a large but unspecific number, as in the Hebrew Bible's "forty days and forty nights", Ali Baba and the Forty Thieves, and the Forty Martyrs of Sebaste. This usage is sometimes
https://en.wikipedia.org/wiki/Brane%20cosmology
Brane cosmology refers to several theories in particle physics and cosmology related to string theory, superstring theory and M-theory. Brane and bulk The central idea is that the visible, three-dimensional universe is restricted to a brane inside a higher-dimensional space, called the "bulk" (also known as "hyperspace"). If the additional dimensions are compact, then the observed universe contains the extra dimensions, and no reference to the bulk is appropriate. In the bulk model, at least some of the extra dimensions are extensive (possibly infinite), and other branes may be moving through this bulk. Interactions with the bulk, and possibly with other branes, can influence our brane and thus introduce effects not seen in more standard cosmological models. Why gravity is weak and the cosmological constant is small Some versions of brane cosmology, based on the large extra dimension idea, can explain the weakness of gravity relative to the other fundamental forces of nature, thus solving the hierarchy problem. In the brane picture, the electromagnetic, weak and strong nuclear forces are localized on the brane, but gravity has no such constraint and propagates on the full spacetime, called the bulk. Much of the gravitational attractive power "leaks" into the bulk. As a consequence, the force of gravity should appear significantly stronger on small (subatomic or at least sub-millimetre) scales, where less gravitational force has "leaked". Various experiments are currently under way to test this. Extensions of the large extra dimension idea with supersymmetry in the bulk appear promising in addressing the so-called cosmological constant problem. Models of brane cosmology One of the earliest documented attempts to apply brane cosmology as part of a conceptual theory is dated to 1983. The authors discussed the possibility that the Universe has dimensions, but ordinary particles are confined in a potential well which is narrow along spatial directions an
https://en.wikipedia.org/wiki/Digital%20Media%20Initiative
The Digital Media Initiative (DMI) was a British broadcast engineering project launched by the BBC in 2008. It aimed to modernise the Corporation's production and archiving methods by using connected digital production and media asset management systems. After a protracted development process lasting five years with a spend of £98 million between 2010 and 2012, the project was finally abandoned in May 2013. Initial impetus and relaunch The technology programme was initiated by the director of BBC Technology Ashley Highfield in 2008. It aimed to streamline broadcast operations by moving to a fully digital, tapeless production workflow at a cost of £81.7 million. Forecast to deliver cost savings to the BBC of around £18 million, DMI was contracted out to the technology services provider Siemens with consulting by Deloitte. Among the production features to be provided by DMI were a media ingest system; a media asset management system, unifying audio, video and stills archival; an online storyboarding system; and metadata storage and sharing. A core part of the system was formed by using Cinegy, a production suite originally developed prior to the DMI project by the BBC and selected by Siemens in 2008. The DMI Programme Director was television producer and entrepreneur Raymond P. Le Gué. Costs of the project rose after a number of technical problems and delays, and in 2009 the BBC terminated its contract with Siemens. BBC losses were estimated to be £38.2m, partially offset by a £27.5m settlement paid by Siemens, leaving a loss of £10.7m to the BBC. The BBC was criticised by the UK National Audit Office (NAO) in 2011 for its handling of the project. In evidence given to the NAO, the Director of the BBC's Future Media and Technology division, Erik Huggers, stated that Siemens had been selected to run the project without a tendering process because the BBC already had a 10-year support contract with the company. He also remarked that transfer of risk on the project to
https://en.wikipedia.org/wiki/Plant%20lifecycle%20management
Plant lifecycle management (PLM) is the process of managing an industrial facility's data and information throughout its lifetime. Plant lifecycle management differs from product lifecycle management by its primary focus on the integration of logical, physical and technical plant data in a combined plant model. A PLM model can be used through a plant's whole lifecycle, covering: Design, Construction, Erection, Commissioning, Handover, Operation, Maintenance/Refurbishment/Life Extension, Decommissioning, Land rehabilitation. Parts of the model Logical model The logical plant model may cover: Process & instrumentation diagrams (P&ID) Pipe & instrumentation diagrams (P&ID) P&I schematic Process flow Mass flow diagram (similar to the process flow but used in the mineral industry) Electrical key diagram Cabling diagram Electrical Hydraulic Pneumatic Heating, venting & air-conditioning (HVAC) Water and wastewater Physical model Physical parts of a plant are usually represented by 3D CAD models. The CAD system used would typically focus on top-down design, routing, and DMU and would differ on many points from the systems used in the mechanical industry, or for Architectural engineering. Sometimes the CAD system would be supplemented by software to generate 3D views or walk-through features. Technical model The technical data is typically managed by an ERP system or some other database. There could also be links to systems for handling unstructured data, like EDM systems. Integration Integration with Enterprise (EPCM, ePCM), Integration with Enterprise (Owner/Operator), Integration with Regulator. Applicability New Build Return to Service (RTS) See also Lifecycle management ISO 10303 - Industrial automation systems and integration—Product data representation and exchange ISO 15926 - Process Plants including Oil and Gas facilities life-cycle data Notes Further reading about Virtual Mill about Augmented reality Product lifecycle management Computer-aided design Man
https://en.wikipedia.org/wiki/Mean%20speed%20theorem
The mean speed theorem, also known as the Merton rule of uniform acceleration, was discovered in the 14th century by the Oxford Calculators of Merton College, and was proved by Nicole Oresme. It states that a uniformly accelerated body (starting from rest, i.e. zero initial velocity) travels the same distance as a body with uniform speed whose speed is half the final velocity of the accelerated body. Details Oresme provided a geometrical verification for the generalized Merton rule, which we would express today as d = (v0 + vf)t/2 (i.e., distance traveled is equal to one half of the sum of the initial and final velocities, multiplied by the elapsed time t), by finding the area of a trapezoid. Clay tablets used in Babylonian astronomy (350–50 BC) present trapezoid procedures for computing Jupiter's position and motion and anticipate the theorem by 14 centuries. The medieval scientists demonstrated this theorem—the foundation of "the law of falling bodies"—long before Galileo, who is generally credited with it. Oresme's proof is also the first known example of the modelization of a physical problem as a mathematical function with a graphical representation, as well as of an early form of integration, thus laying the foundation of calculus. The mathematical physicist and historian of science Clifford Truesdell wrote: The theorem is a special case of the more general kinematics equations for uniform acceleration. See also Science in the Middle Ages Scholasticism Notes Further reading Sylla, Edith (1982) "The Oxford Calculators", in Kretzmann, Kenny & Pinborg (edd.), The Cambridge History of Later Medieval Philosophy. Longeway, John (2003) "William Heytesbury", in The Stanford Encyclopedia of Philosophy. Natural philosophy Merton College, Oxford History of the University of Oxford 14th century in science Classical mechanics
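The rule can be checked numerically. The following Python sketch (illustrative; the acceleration and duration values are assumptions) compares the distance covered under uniform acceleration from rest with the distance covered at a constant speed of half the final velocity:

```python
# Numerical sketch (illustrative values): a body uniformly accelerated from
# rest covers the same distance as one moving at half its final velocity.

def accelerated_distance(a, t, steps=100_000):
    """Distance under constant acceleration a, from rest, for time t."""
    dt = t / steps
    d = 0.0
    for i in range(steps):
        v = a * (i + 0.5) * dt       # velocity at the midpoint of each step
        d += v * dt
    return d

def mean_speed_distance(a, t):
    """Distance at the constant 'mean speed' (final velocity) / 2."""
    v_final = a * t
    return (v_final / 2) * t

a, t = 9.81, 3.0                     # e.g. free fall for three seconds
print(accelerated_distance(a, t), mean_speed_distance(a, t))
```

Both computations give the closed form a·t²/2, which is exactly the area of Oresme's triangle (a degenerate trapezoid with zero initial velocity).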
https://en.wikipedia.org/wiki/Enhancer%20trap
An enhancer trap is a method in molecular biology. The enhancer trap construct contains a transposable element and a reporter gene. The first is necessary for (random) insertion in the genome; the latter is necessary for identification of the spatial regulation by the enhancer. On top of this, the construct usually includes a genetic marker, e.g., the white gene producing red-colored eyes in Drosophila, or ampicillin resistance in E. coli. The most common and basic enhancer traps are: P[lacZ] from the bacterium E. coli and P[GAL4] from yeast. There exists a large number of fly stocks containing GAL4 insertions and an equally large number of fly stocks containing a UAS DNA sequence followed by a gene of interest, which permits the expression of a large number of genes with different GAL4 "drivers". Rather than generating transgenic flies with the enhancer linked directly to the gene of interest (which takes about a year when starting without the appropriate DNA construct), one transgenic fly is simply mated (crossed) with another transgenic fly. See also Gene trapping P element
https://en.wikipedia.org/wiki/Abel%20polynomials
The Abel polynomials are a sequence of polynomials named after Niels Henrik Abel, defined by the following equation: p_n(x) = x(x − an)^(n−1), where a is a nonzero constant. This polynomial sequence is of binomial type: conversely, every polynomial sequence of binomial type may be obtained from the Abel sequence using umbral calculus. Examples For a = 1, the polynomials are p_0(x) = 1; p_1(x) = x; p_2(x) = x(x − 2); p_3(x) = x(x − 3)²; p_4(x) = x(x − 4)³. For a = 2, the polynomials are p_0(x) = 1; p_1(x) = x; p_2(x) = x(x − 4); p_3(x) = x(x − 6)²; p_4(x) = x(x − 8)³.
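A short numerical sketch (illustrative; the constant a and the test values are assumptions) evaluates the Abel polynomials p_n(x) = x(x − an)^(n−1) and checks the binomial-type identity p_n(x + y) = Σ_k C(n, k) p_k(x) p_{n−k}(y):

```python
# Sketch: evaluate Abel polynomials and verify the binomial-type identity
# p_n(x + y) = sum_{k=0}^{n} C(n, k) * p_k(x) * p_{n-k}(y) numerically.
from math import comb

def abel(n, x, a):
    """Evaluate the n-th Abel polynomial at x (with p_0 = 1 by convention)."""
    if n == 0:
        return 1.0
    return x * (x - a * n) ** (n - 1)

a, x, y = 2.0, 1.5, 2.5   # assumed test values
for n in range(6):
    lhs = abel(n, x + y, a)
    rhs = sum(comb(n, k) * abel(k, x, a) * abel(n - k, y, a)
              for k in range(n + 1))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print("binomial-type identity verified for n = 0..5")
```

The identity checked here is Abel's binomial theorem, the reason the sequence is of binomial type.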
https://en.wikipedia.org/wiki/Additive%20smoothing
In statistics, additive smoothing, also called Laplace smoothing or Lidstone smoothing, is a technique used to smooth categorical data. Given a set of observation counts x = (x_1, …, x_d) from a d-dimensional multinomial distribution with N trials, a "smoothed" version of the counts gives the estimator: θ̂_i = (x_i + α)/(N + αd), for i = 1, …, d, where the smoothed count x̂_i = N·θ̂_i and the "pseudocount" α > 0 is a smoothing parameter. α = 0 corresponds to no smoothing. (This parameter is explained in § Pseudocount below.) Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) x_i/N, and the uniform probability 1/d. Invoking Laplace's rule of succession, some authors have argued that α should be 1 (in which case the term add-one smoothing is also used), though in practice a smaller value is typically chosen. From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter α as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution. History Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow. His rationale was that even given a large sample of days with the rising sun, we still can not be completely sure that the sun will still rise tomorrow (known as the sunrise problem). Pseudocount A pseudocount is an amount (not generally an integer, despite its name) added to the number of observed cases in order to change the expected probability in a model of those data, when not known to be zero. It is so named because, roughly speaking, a pseudo-count of value α weighs into the posterior distribution similarly to each category having an additional count of α. If the frequency of each item is x_i out of N samples, the empirical probability of event i is x_i/N but the posterior probability when ad
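The smoothed estimator can be sketched in a few lines of Python (an illustrative implementation; the function name and the example counts are assumptions):

```python
# Illustrative implementation of additive smoothing:
# theta_i = (x_i + alpha) / (N + alpha * d), with N = sum of counts.

def additive_smoothing(counts, alpha=1.0):
    """Smoothed probability estimates for a list of category counts."""
    n = sum(counts)
    d = len(counts)
    return [(x + alpha) / (n + alpha * d) for x in counts]

counts = [3, 0, 1]                       # one category never observed
probs = additive_smoothing(counts)
print(probs)                             # no zero probabilities remain
# each estimate sits between the empirical frequency x_i/N and uniform 1/d
assert 1 / 3 < probs[0] < 3 / 4
```

With α = 1 the never-observed category still receives a nonzero estimate, which is the shrinkage toward the uniform distribution described above.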
https://en.wikipedia.org/wiki/Bioproduct
Bioproducts or bio-based products are materials, chemicals and energy derived from renewable biological material. Bioresources Biological resources include agriculture, forestry, and biologically derived waste, and there are many other renewable bioresource examples. Example One of the examples of renewable bioresources is lignocellulose. Lignocellulosic tissues are biologically derived natural resources containing some of the main constituents of the natural world. Holocellulose is the carbohydrate fraction of lignocellulose that includes cellulose, a common building block made of sugar (glucose) that is the most abundant biopolymer, as well as hemicellulose. Recent advances in the catalytic conversion of platform chemicals from this biomass fraction have attracted industry and academia alike. Lignin is the second most abundant biopolymer. Cellulose and lignin are two of the primary natural polymers used by plants to store energy as well as to give strength, as is the case in woody plant tissues. Other energy storage chemicals in plants include oils, waxes, fats, etc., and because these other plant compounds have distinct properties, they offer potential for a host of different bioproducts. Categorization Conventional bioproducts and emerging bioproducts are two broad categories used to categorize bioproducts. Examples of conventional bio-based products include building materials, pulp and paper, and forest products. Examples of emerging bioproducts or biobased products include biofuels, bioenergy, starch-based and cellulose-based ethanol, bio-based adhesives, biochemicals, bioplastics, etc. Emerging bioproducts are active subjects of research and development, and these efforts have developed significantly since the turn of the 20/21st century, in part driven by the price of traditional petroleum-based products, by the environmental impact of petroleum use, and by an interest in many countries to become independent from foreign sources of oil. Bioproducts der
https://en.wikipedia.org/wiki/ELOT%20927
ELOT 927 is a 7-bit character set standardized by ELOT, the Hellenic Organization for Standardization (HOS). It is also known as ISO-IR-88, CSISO88GREEK7 or 7-bit DEC Greek. The standard was withdrawn in November 1986. Support for it was implemented in various dot matrix printers (for example by Fujitsu) and line printers (for example by Printronix and Siemens) as well as in computer terminals (for example by DEC). Support for it can still be found in various applications, languages and protocols today, for example in Perl and Kermit. Character set See also ISO/IEC 646 ELOT 928
https://en.wikipedia.org/wiki/Magnolia%20%C3%97%20soulangeana
Magnolia × soulangeana (Magnolia denudata × Magnolia liliiflora), the saucer magnolia or sometimes the tulip tree, is a hybrid flowering plant in the genus Magnolia and family Magnoliaceae. It is a deciduous tree with large, early-blooming flowers in various shades of white, pink, and purple. It is one of the most commonly used magnolias in horticulture, being widely planted in the British Isles, especially in the south of England; and in the United States, especially the east and west coasts. Description Growing as a multistemmed large shrub or small tree, Magnolia × soulangeana has alternate, simple, shiny, dark green oval-shaped leaves on stout stems. Its flowers emerge dramatically on a bare tree in early spring, with the deciduous leaves expanding shortly thereafter, lasting through summer until autumn. Magnolia × soulangeana flowers are large, commonly 10–20 cm (4–8 in) across, and colored various shades of white, pink, and maroon. An American variety, 'Grace McDade' from Alabama, is reported to bear the largest flowers, with a 35 cm (14 in) diameter, white tinged with pinkish-purple. Another variety, Magnolia × soulangeana 'Jurmag1', is supposed to have the darkest and tightest flowers. The exact timing and length of flowering varies between named varieties, as does the shape of the flower. Some are globular, others a cup-and-saucer shape. Hybrid origin Magnolia × soulangeana was initially bred by French plantsman Étienne Soulange-Bodin (1774–1846), a retired cavalry officer in Napoleon's army, at his château de Fromont near Paris. He crossed Magnolia denudata with M. liliiflora in 1820, and was impressed with the resulting progeny's first precocious flowering in 1826. Many times, Soulange-Bodin is cited as the author of this hybrid name, rarely with a reference to a publication however. If a source is given, it is often an English translation of a French title (see for example Callaway, D.J. (1994), World of Magnolias: 204). Soulange-Bodin certainly d
https://en.wikipedia.org/wiki/Generalized%20inverse
In mathematics, and in particular, algebra, a generalized inverse (or, g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix A. A matrix G is a generalized inverse of a matrix A if AGA = A. A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse. Motivation Consider the linear system Ax = b, where A is an n × m matrix and b lies in R(A), the column space of A. If A is nonsingular (which implies n = m) then x = A⁻¹b will be the solution of the system. Note that, if A is nonsingular, then AA⁻¹A = A. Now suppose A is rectangular (n ≠ m), or square and singular. Then we need a right candidate G of order m × n such that AGb = b for all b ∈ R(A). That is, x = Gb is a solution of the linear system Ax = b. Equivalently, we need a matrix G of order m × n such that AGA = A. Hence we can define the generalized inverse as follows: Given an n × m matrix A, an m × n matrix G is said to be a generalized inverse of A if AGA = A. The matrix A⁻¹ has been termed a regular inverse of A by some authors. Types Important types of generalized inverse include: One-sided inverse (right inverse or left inverse) Right inverse: If the matrix A has dimensions n × m and rank(A) = n, then there exists an m × n matrix A⁻¹_R called the right inverse of A such that AA⁻¹_R = I_n, where I_n is the n × n identity matrix. Left inverse: If the matrix A has dimensions n × m and rank(A) = m, then there exists an m × n matrix A⁻¹_L called the left inverse of A such that A⁻¹_L A = I_m, where I_m is the m × m identity matrix. Bott–Duffin inverse Drazin inverse Moore–Penrose inverse Some generalized inverses are defined and classified based on the Penrose conditions: 1. AGA = A; 2. GAG = G; 3. (AG)* = AG; 4. (GA)* = GA, where * denotes the conjugate transpose.
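The defining condition AGA = A, and the non-uniqueness it allows, can be illustrated with a tiny pure-Python sketch (the example matrices are assumptions chosen for illustration, not from the source):

```python
# Illustrative sketch: for the singular matrix A = [[1, 0], [0, 0]],
# every G = [[1, 0], [c, d]] satisfies the defining condition A G A = A,
# so a generalized inverse of a singular matrix need not be unique.

def matmul(P, Q):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 0.0], [0.0, 0.0]]
for c, d in [(0.0, 0.0), (5.0, -2.0)]:    # two different candidate inverses
    G = [[1.0, 0.0], [c, d]]
    assert matmul(matmul(A, G), A) == A   # A G A = A holds for both
print("A has more than one generalized inverse")
```

Imposing all four Penrose conditions removes this freedom and singles out the unique Moore–Penrose inverse.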
https://en.wikipedia.org/wiki/Ramanujan%E2%80%93Nagell%20equation
In mathematics, in the field of number theory, the Ramanujan–Nagell equation is an equation between a square number and a number that is seven less than a power of two. It is an example of an exponential Diophantine equation, an equation to be solved in integers where one of the variables appears as an exponent. The equation is named after Srinivasa Ramanujan, who conjectured that it has only five integer solutions, and after Trygve Nagell, who proved the conjecture. It implies non-existence of perfect binary codes with the minimum Hamming distance 5 or 6. Equation and solution The equation is 2ⁿ − 7 = x², and solutions in natural numbers n and x exist just when n = 3, 4, 5, 7 and 15. This was conjectured in 1913 by Indian mathematician Srinivasa Ramanujan, proposed independently in 1943 by the Norwegian mathematician Wilhelm Ljunggren, and proved in 1948 by the Norwegian mathematician Trygve Nagell. The values of n correspond to the values of x as: x = 1, 3, 5, 11 and 181. Triangular Mersenne numbers The problem of finding all numbers of the form 2ᵇ − 1 (Mersenne numbers) which are triangular is equivalent: the values of b are just those of n − 3, and the corresponding triangular Mersenne numbers (also known as Ramanujan–Nagell numbers) are (x² − 1)/8 for x = 1, 3, 5, 11 and 181, giving 0, 1, 3, 15, 4095 and no more. Equations of Ramanujan–Nagell type An equation of the form x² + D = A·Bⁿ for fixed D, A, B and variable x, n is said to be of Ramanujan–Nagell type. The result of Siegel implies that the number of solutions in each case is finite. By a suitable substitution, the equation of Ramanujan–Nagell type is reduced to three Mordell curves (indexed by the residue of n modulo 3), each of which has a finite number of integer solutions. The equation with A = 1, B = 2 has at most two solutions, except in the case D = 7 corresponding to the Ramanujan–Nagell equation. There are infinitely many values of D for which there are two solutions, including . Equations of Lebesgue–Nagell type An equation of the form f
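The five solutions of 2ⁿ − 7 = x² can be confirmed by a brute-force search. The sketch below is illustrative; the cutoff of 100 is an arbitrary assumption, since ruling out all larger n is exactly the content of Nagell's theorem:

```python
# Brute-force confirmation of the Ramanujan-Nagell solutions below n = 100
# (the bound is an arbitrary cutoff; Nagell's proof covers all n).
from math import isqrt

solutions = []
for n in range(3, 100):          # 2**n - 7 is negative for n < 3
    m = 2 ** n - 7
    x = isqrt(m)
    if x * x == m:               # m is a perfect square
        solutions.append((n, x))
print(solutions)   # [(3, 1), (4, 3), (5, 5), (7, 11), (15, 181)]
```

The corresponding triangular Mersenne numbers (x² − 1)/8 recover 0, 1, 3, 15 and 4095 for these five x values.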
https://en.wikipedia.org/wiki/Flight-time%20equivalent%20dose
Flight-time equivalent dose (FED) is an informal unit of measurement of ionizing radiation exposure. Expressed in units of flight-time (i.e., flight-seconds, flight-minutes, flight-hours), one unit of flight-time is approximately equivalent to the radiological dose received during the same unit of time spent in an airliner at cruising altitude. FED is intended as a general educational unit to enable a better understanding of radiological dose by converting dose typically presented in sieverts into units of time. FED is only meant as an educational exercise and is not a formally adopted dose measurement. History The flight-time equivalent dose concept is the creation of Ulf Stahmer, a Canadian professional engineer working in the field of radioactive materials transport. It was first presented in the poster session at the 18th International Symposium of the Packaging and Transport of Radioactive Materials (PATRAM) held in Kobe, Hyogo, Japan where the poster received an Aoki Award for distinguished poster presentation. In 2018, an article on FED appeared in the peer-reviewed journal The Physics Teacher. Usage Flight-time equivalent dose is an informal measurement, so any equivalences are necessarily approximate. It has been found useful to provide context between radiological doses received from various every-day activities and medical procedures. Dose calculation FED corresponds to the time spent in an airliner flying at altitude required to receive a corresponding radiological dose. FED is calculated by taking a known dose (typically in millisieverts) and dividing it by the average dose rate (typically in millisieverts per hour) at an altitude of 10,000 m, a typical cruising altitude for a commercial airliner. While radiological dose at cruising altitudes varies with latitude, for FED calculations, the radiological dose rate at an altitude of 10,000 m has been standardized to be 0.004 mSv/h, about 15 times greater than the average dose rate at the Earth's surfac
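The conversion described above is a simple division by the standardized dose rate. The following Python sketch (the function name and the example dose are assumptions) illustrates it:

```python
# Sketch of the FED conversion; 0.004 mSv/h is the standardized
# cruise-altitude dose rate quoted in the text.

CRUISE_DOSE_RATE_MSV_PER_H = 0.004   # mSv per hour at 10,000 m

def flight_time_equivalent_hours(dose_msv):
    """Convert a radiological dose in millisieverts to flight-hours."""
    return dose_msv / CRUISE_DOSE_RATE_MSV_PER_H

# A chest X-ray of about 0.1 mSv (assumed typical value):
print(flight_time_equivalent_hours(0.1))   # roughly 25 flight-hours
```

Any dose expressed in millisieverts maps to flight time the same way, which is the educational point of the unit.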
https://en.wikipedia.org/wiki/Bonnor%20beam
In general relativity, the Bonnor beam is an exact solution which models an infinitely long, straight beam of light. It is an explicit example of a pp-wave spacetime. It is named after William B. Bonnor who first described it. The Bonnor beam is obtained by matching together two regions: a uniform plane wave interior region, which is shaped like the world tube of a solid cylinder, and models the electromagnetic and gravitational fields inside the beam, a vacuum exterior region, which models the gravitational field outside the beam. On the "cylinder" where they meet, the two regions are required to obey matching conditions stating that the metric tensor and extrinsic curvature tensor must agree. The interior part of the solution is defined by This is a null dust solution and can be interpreted as incoherent electromagnetic radiation. The exterior part of the solution is defined by The Bonnor beam can be generalized to several parallel beams travelling in the same direction. Perhaps surprisingly, the beams do not curve toward one another. On the other hand, "anti-parallel" beams (travelling along parallel trajectories, but in opposite directions) do attract each other. This reflects a general phenomenon: two pp-waves with parallel wave vectors superimpose linearly, but pp-waves with nonparallel wave vectors (including antiparallel Bonnor beams) do not superimpose linearly, as we would expect from the nonlinear nature of the Einstein field equation.
https://en.wikipedia.org/wiki/Culture%20Collection%20%28University%20of%20Gothenburg%29
Culture Collection University of Gothenburg (CCUG) is a Swedish microbial culture repository located in Gothenburg (Sweden) established by Enevold Falsen in 1968 and affiliated with the University of Gothenburg. The current curator is Prof. Dr. Edward R. B. Moore and it maintains bacterial, filamentous fungal and yeast cultures, but it does not hold extremophiles and does not dispatch the most hazardous organisms classified in biosafety level 3. More than 73,000 strains of more than 4,500 species have so far been examined, whereof more than 21,000 are displayed on the Internet. It represents the largest public collection of bacteria in Europe. The CCUG has been devoted to the identification of bacteria. The search engine is sophisticated and useful for clinical microbiologists who may check their diagnosis of an unusual species or order on-line a reference strain. The CCUG also performs research and development of novel methods for rapid diagnosis of infectious diseases mainly by using molecular, genomic and proteomic approaches. External links CCUG home page News about the CCUG at Sveriges Radio Microbiology organizations University of Gothenburg Culture collections 1968 establishments in Sweden 1968 in science Organizations established in 1968
https://en.wikipedia.org/wiki/Processor%20Control%20Region
Processor Control Region (PCR) is a Windows kernel mode data structure that contains information about the current processor. It can be accessed via the fs segment register on x86 versions, or the gs segment register on x64 versions respectively. Structure In Windows, the PCR is known as KPCR. It contains information about the current processor. Processor Control Block The PCR contains a substructure called Processor Control Block (KPRCB), which contains information such as CPU step and a pointer to the thread object of the current thread. See also Process Environment Block Process control block
https://en.wikipedia.org/wiki/Ringer%20equivalence%20number
The ringer equivalence number (REN) is a telecommunications measure that represents the electrical loading effect of a telephone ringer on a telephone line. In the United States, ringer equivalence was first defined by U.S. Code of Federal Regulations, Title 47, Part 68, based on the load that a standard Bell System model 500 telephone represented, and was later determined in accordance with specification ANSI/TIA-968-B (August 2009). Measurement systems analogous to the REN exist internationally. Definition The ringer equivalence of 1 represents the loading effect of a single traditional telephone ringing circuit, such as that within the Western Electric model 500 telephone. The ringer equivalence of modern telephone equipment may be significantly lower than 1. For example, externally powered electronic ringing telephones may have a value as low as 0.1, while modern line-powered telephones, in which the ringer is powered from the telephone line, typically have a REN of approximately 0.8. In the United States, the FCC Part 68 specification defined REN 1 as equivalent to a 6930 Ω resistor in series with an 8 µF (microfarad) capacitor. The modern ANSI/TIA-968-B specification (August 2009) defines it as an impedance of at (type A ringer), or from to (type B ringer). Maximum ringer equivalence The total ringer load on a subscriber line is the sum of the ringer equivalences of all devices (phone, fax, a separate answerphone, etc.) connected to the line. This represents the overall loading effect of the subscriber equipment on the central office ringing current source. Subscriber telephone lines are usually limited to support a ringer equivalence of 5, per the federal specifications. If the total allowable ringer load is exceeded, the phone circuit may fail to ring or otherwise malfunction. For example, call waiting, caller ID, and ADSL services are often affected by high ringer load. Some analog telephone adapters for Internet telephony require analog telephones
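The total-load rule can be sketched in Python (an illustrative example; the device names and REN values are assumptions):

```python
# Illustrative sketch of the total-load rule: the line's ringer load is the
# sum of each connected device's REN, checked against the usual US limit of 5.

def total_ren(devices):
    """Sum the ringer equivalence numbers of all devices on one line."""
    return sum(devices.values())

line = {"phone": 0.8, "fax": 1.0, "answering machine": 0.4}
load = total_ren(line)
print(round(load, 1), "within limit:", load <= 5.0)
```

Keeping the sum under the limit is what guarantees the central office ringing source can drive every device on the line.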
https://en.wikipedia.org/wiki/Far-western%20blot
The far-western blot, or far-western blotting, is a molecular biological method based on the technique of western blot to detect protein-protein interaction in vitro. Whereas western blot uses an antibody probe to detect a protein of interest, far-western blot uses a non-antibody probe which can bind the protein of interest. Thus, whereas western blotting is used for the detection of certain proteins, far-western blotting is employed to detect protein/protein interactions. Method In conventional western blot, gel electrophoresis is used to separate proteins from a sample; these proteins are then transferred to a membrane in a 'blotting' step. In a western blot, specific proteins are then identified using an antibody probe. Far-western blot employs non-antibody proteins to probe the protein of interest on the blot. In this way, binding partners of the probe (or the blotted) protein may be identified. The probe protein is often produced in E. coli using an expression cloning vector. The probe protein can then be visualized through the usual methods — it may be radiolabelled; it may bear a specific affinity tag like His or FLAG for which antibodies exist; or there may be a protein specific antibody (to the probe protein). Because cell extracts are usually completely denatured by boiling in detergent before gel electrophoresis, this approach is most useful for detecting interactions that do not require the native folded structure of the protein of interest.
https://en.wikipedia.org/wiki/MotoMagx
MotoMagx was a Linux kernel-based mobile operating system developed and launched in 2007 by Motorola to run on their mid-to-high-end mobile phones. The system was based on MontaVista's Mobilinux. Originally intended for 60% of their upcoming devices, it was soon dropped in favor of the Android and Windows Mobile operating systems. MotoMagx was only compatible with Motorola's GSM/UMTS devices (as shown below). This was due to the lack of an implementation compatible with Qualcomm CDMA2000 devices. As a result, Motorola often sold multiple device variants with radically different firmware. For example, the Motorola RAZR2 on T-Mobile shipped with MotoMagx, whereas the RAZR2 on Verizon Wireless shipped with Motorola's P2k firmware. This created significant confusion for customers, as the user experience varied widely between two otherwise identical devices, simply based on which carrier they were on. Devices Phones based on this OS are: Motorola EM30 Motorola ROKR E2 Motorola ROKR E8 Motorola ROKR/RIZR Z6 Motorola U9 Motorola RAZR2 V8 Motorola VE66 Motorola ZINE ZN5 Motorola Tundra V76r Motorola ROKR EM35 Motorola ROKR ZN200
https://en.wikipedia.org/wiki/AIX%20Toolbox%20for%20Linux%20Applications
The AIX Toolbox for Linux Applications is a collection of GNU tools for IBM AIX. These tools are available for installation using Red Hat's RPM format. Licensing Each of these packages includes its own licensing information and while IBM has made the code available to AIX users, the code is provided as is and has not been thoroughly tested. The Toolbox is meant to provide a core set of some of the most common development tools and libraries along with the more popular GNU packages.
https://en.wikipedia.org/wiki/Layered%20queueing%20network
In queueing theory, a discipline within the mathematical theory of probability, a layered queueing network (or rendezvous network) is a queueing network model where the service time for each job at each service node is given by the response time of a queueing network (and those service times in turn may also be determined by further nested networks). Resources can be nested and queues form along the nodes of the nesting structure. The nesting structure thus defines "layers" within the queueing model. Layered queueing has applications in a wide range of distributed systems which involve different master/slave, replicated services and client-server components, allowing each local node to be represented by a specific queue, then orchestrating the evaluation of these queues. For large populations of jobs, a fluid limit has been shown in PEPA to give a good approximation of performance measures. External links Tutorial Introduction to Layered Modeling of Software Performance by Murray Woodside, Carleton University
https://en.wikipedia.org/wiki/Windows%20Server%202003
Windows Server 2003, codenamed "Whistler Server", is the second version of the Windows Server operating system produced by Microsoft. It is part of the Windows NT family of operating systems and was released to manufacturing on March 28, 2003 and generally available on April 24, 2003. Windows Server 2003 is the successor to the Server editions of Windows 2000 and the predecessor to Windows Server 2008. An updated version, Windows Server 2003 R2, was released to manufacturing on December 6, 2005. Windows Server 2003 is based on Windows 2000. Windows Server 2003's kernel has also been used in Windows XP 64-bit Edition and Windows XP Professional x64 Edition, and was the starting point for the development of Windows Vista. Windows Server 2003 is the final version of Windows Server that supports processors without ACPI. Its successor, Windows Server 2008, requires a processor with ACPI in any supported architecture (x86, x64 and Itanium). As of July 2016, 18% of organizations used servers that were running Windows Server 2003. Overview Windows Server 2003 is the follow-up to Windows 2000 Server, incorporating compatibility and other features from Windows XP. Unlike Windows 2000, Windows Server 2003's default installation has none of the server components enabled, to reduce the attack surface of new machines. Windows Server 2003 includes compatibility modes to allow older applications to run with greater stability. It was made more compatible with Windows NT 4.0 domain-based networking. Windows Server 2003 brought in enhanced Active Directory compatibility and better deployment support to ease the transition from Windows NT 4.0 to Windows Server 2003 and Windows XP Professional. Windows Server 2003 is the first server edition of Windows to support the IA64 and x64 architectures. The product went through several name changes during the course of development. When first announced in 2000, it was known by its codename "Whistler Server"; it was named "Windows 2002 Serv
https://en.wikipedia.org/wiki/CTCF
Transcriptional repressor CTCF also known as 11-zinc finger protein or CCCTC-binding factor is a transcription factor that in humans is encoded by the CTCF gene. CTCF is involved in many cellular processes, including transcriptional regulation, insulator activity, V(D)J recombination and regulation of chromatin architecture. Discovery CCCTC-Binding factor or CTCF was initially discovered as a negative regulator of the chicken c-myc gene. This protein was found to be binding to three regularly spaced repeats of the core sequence CCCTC and thus was named CCCTC binding factor. Function The primary role of CTCF is thought to be in regulating the 3D structure of chromatin. CTCF binds together strands of DNA, thus forming chromatin loops, and anchors DNA to cellular structures like the nuclear lamina. It also defines the boundaries between active and heterochromatic DNA. Since the 3D structure of DNA influences the regulation of genes, CTCF's activity influences the expression of genes. CTCF is thought to be a primary part of the activity of insulators, sequences that block the interaction between enhancers and promoters. CTCF binding has also been both shown to promote and repress gene expression. It is unknown whether CTCF affects gene expression solely through its looping activity, or if it has some other, unknown, activity. In a recent study, it has been shown that, in addition to demarcating TADs, CTCF mediates promoter–enhancer loops, often located in promoter-proximal regions, to facilitate the promoter–enhancer interactions within one TAD. This is in line with the concept that a subpopulation of CTCF associates with the RNA polymerase II (Pol II) protein complex to activate transcription. It is likely that CTCF helps to bridge the transcription factor-bound enhancers to transcription start site-proximal regulatory elements and to initiate transcription by interacting with Pol II, thus supporting a role of CTCF in facilitating contacts between transcription r
https://en.wikipedia.org/wiki/Virtual%20legacy%20wires
Virtual legacy wires (VLW) are transactions over the Intel QuickPath Interconnect and Intel Ultra Path Interconnect fabrics that replace a particular set of physical legacy pins on Intel microprocessors. The signals replaced include the INTR, A20M, and SMI legacy signals. See also A20 line INTR System Management Mode
https://en.wikipedia.org/wiki/Horton%20principle
The Horton principle is a design rule for cryptographic systems and can be expressed as "Authenticate what is being meant, not what is being said" or "mean what you sign and sign what you mean", not merely the encrypted version of what was meant. The principle is named after the title character in the Dr. Seuss children's book Horton Hatches the Egg. Utility The Horton principle becomes important when using message authentication codes (MACs) in a cryptographic system. Suppose Alice wants to send a message to Bob, and she uses a MAC to authenticate a message m made by concatenating three data fields, where m := a || b || c. Bob needs to know what rules Alice used to create the message in order to split m back into its components; if he uses the wrong rules, he will extract the wrong values from an authenticated message. Applied to this case, the Horton principle says to authenticate the meaning instead of the raw message: the MAC should cover not only the transmitted bytes but also the information Bob uses to parse the message into its meaning. The meaning may also depend on the decryption key used, which implies that authentication must be applied to the plaintext rather than the ciphertext. Problems The problem is that the MAC only authenticates a string of bytes, while Alice and Bob need to authenticate the way the message was constructed as well. If they do not, it may be possible for an attacker to substitute a message with a valid MAC but a different meaning. Systems can manage this problem by adding metadata such as a protocol number or by formatting messages with an explicit structure, such as XML.
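The field-splitting ambiguity described above can be made concrete in a few lines. In this sketch (the field names and the length-prefix scheme are illustrative, not from the source), the same byte string arises from two different field splits, so a MAC over the naive concatenation cannot distinguish their meanings; prefixing each field with its length removes the ambiguity:

```python
import hmac
import hashlib

KEY = b"shared-secret"  # hypothetical shared key

def mac(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

# Naive concatenation: two different field splits produce identical bytes,
# so they share a MAC even though their meanings differ.
fields_a = [b"user=al", b"ice", b"amount=100"]
fields_b = [b"user=a", b"lice", b"amount=100"]
assert b"".join(fields_a) == b"".join(fields_b)
assert mac(b"".join(fields_a)) == mac(b"".join(fields_b))

# Length-prefixed encoding authenticates the structure as well as the bytes.
def encode(fields):
    out = b""
    for f in fields:
        out += len(f).to_bytes(4, "big") + f
    return out

assert encode(fields_a) != encode(fields_b)
assert mac(encode(fields_a)) != mac(encode(fields_b))
```

Any unambiguous serialization (length prefixes, a protocol version number, or an explicit structure such as XML) serves the same purpose: the MAC then covers the parse, not just the bytes.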
https://en.wikipedia.org/wiki/Pyrrho%27s%20lemma
In statistics, Pyrrho's lemma is the result that if one adds just one extra variable as a regressor from a suitable set to a linear regression model, one can get any desired outcome in terms of the coefficients (signs and sizes), as well as predictions, the R-squared, the t-statistics, prediction- and confidence-intervals. The argument for the coefficients was advanced by Herman Wold and Lars Juréen but named, extended to include the other statistics and explained more fully by Theo Dijkstra. Dijkstra named it after the sceptic philosopher Pyrrho and concludes his article by noting that this lemma provides "some ground for a wide-spread scepticism concerning products of extensive datamining". One can only prove that a model 'works' by testing it on data different from the data that gave it birth. The result has been discussed in the context of econometrics.
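The construction behind the lemma is short enough to demonstrate numerically. In this sketch (data and target values are invented for illustration), one extra regressor built from the residual of the desired fit makes ordinary least squares return any chosen coefficients exactly, with a perfect fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = rng.normal(size=n)  # any response at all

# Pick ANY target coefficients for the original regressors.
target = np.array([3.0, -2.0, 5.0])

# One extra regressor: the residual of the desired fit.
# Then y = X @ target + 1.0 * z holds identically.
z = y - X @ target

# OLS on the augmented design recovers the target exactly, with R^2 = 1.
Xz = np.column_stack([X, z])
beta, *_ = np.linalg.lstsq(Xz, y, rcond=None)
```

In practice the extra variable would not be this transparent, but the demonstration shows why coefficients and fit statistics obtained after an unrestricted search over regressors carry little evidential weight, which is the point of Dijkstra's remark about data mining.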
https://en.wikipedia.org/wiki/Locked%20Shields
Locked Shields is an annual cyber defence exercise organised by NATO's Cooperative Cyber Defence Centre of Excellence in Tallinn since 2010. The format is usually that a red team simulates a hostile attack while blue teams from the participating nations simulate their coordination and defence against this. The performance of teams is assessed using a mix of automated and manual scoring. In 2022, there were 24 teams with an average of 50 experts in each team. The team from Finland was declared as the 2022 winner for the excellence of their situation reporting and solid defence.
https://en.wikipedia.org/wiki/Hylotheism
Hylotheism (from Gk. hyle, 'matter' and theos, 'God') is the belief that matter and God are the same, in other words defining God as matter. The American Lutheran Church–Missouri Synod defines hylotheism as "Theory equating matter with God or merging one into the other", which it lists as a "Synonym for pantheism* and materialism.*". See also Pantheism
https://en.wikipedia.org/wiki/SREC%20%28file%20format%29
Motorola S-record is a file format, created by Motorola in the mid-1970s, that conveys binary information as hex values in ASCII text form. This file format may also be known as SRECORD, SREC, S19, S28, S37. It is commonly used for programming flash memory in microcontrollers, EPROMs, EEPROMs, and other types of programmable logic devices. In a typical application, a compiler or assembler converts a program's source code (such as C or assembly language) to machine code and outputs it into a HEX file. The HEX file is then imported by a programmer to "burn" the machine code into non-volatile memory, or is transferred to the target system for loading and execution. Overview History The S-record format was created in the mid-1970s for the Motorola 6800 processor. Software development tools for that and other embedded processors would make executable code and data in the S-record format. PROM programmers would then read the S-record format and "burn" the data into the PROMs or EPROMs used in the embedded system. Other hex formats There are other ASCII encoding with a similar purpose. BPNF, BHLF, and B10F were early binary formats, but they are neither compact nor flexible. Hexadecimal formats are more compact because they represent 4 bits rather than 1 bit per character. Many, such as S-record, are more flexible because they include address information so they can specify just a portion of a PROM. Intel HEX format was often used with Intel processors. TekHex is another hex format that can include a symbol table for debugging. Format Record structure An SREC format file consists of a series of ASCII text records. The records have the following structure from left to right: Record start - each record begins with an uppercase letter "S" character (ASCII 0x53) which stands for "Start-of-Record". Record type - single numeric digit "0" to "9" character (ASCII 0x30 to 0x39), defining the type of record. See table below. Byte count - two hex digits ("00" to "FF"), ind
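The record structure above can be exercised with a short parser. This sketch handles the common record types and verifies the checksum, which is the ones' complement of the low byte of the sum of the byte count, address, and data bytes (the sample record is a standard S1 data record):

```python
def parse_srec(line: str):
    """Parse one Motorola S-record and verify its checksum."""
    assert line.startswith("S"), "records start with an uppercase 'S'"
    rtype = int(line[1])                    # single-digit record type
    data = bytes.fromhex(line[2:])          # byte count, address, data, checksum
    count, payload, checksum = data[0], data[1:-1], data[-1]
    assert count == len(payload) + 1, "byte count covers address+data+checksum"
    # Checksum: ones' complement of the low byte of the sum of the
    # byte count, address, and data bytes.
    assert (0xFF - (sum(data[:-1]) & 0xFF)) == checksum, "checksum mismatch"
    # Address width in bytes for the common record types.
    addr_len = {0: 2, 1: 2, 2: 3, 3: 4, 5: 2, 6: 3, 7: 4, 8: 3, 9: 2}[rtype]
    address = int.from_bytes(payload[:addr_len], "big")
    return rtype, address, payload[addr_len:]

# An S1 record: 16 data bytes destined for 16-bit address 0x7AF0.
rtype, addr, payload = parse_srec(
    "S1137AF00A0A0D0000000000000000000000000061")
```

Here the byte count 0x13 (19) covers the two address bytes, sixteen data bytes, and the checksum byte 0x61.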
https://en.wikipedia.org/wiki/BeerXML
BeerXML is a free, fully defined XML data description standard designed for the exchange of beer brewing recipes and other brewing data. Tables of recipes as well as other records such as hop schedules and malt bills can be represented using BeerXML for use by brewing software. BeerXML is an open standard and a subset of Extensible Markup Language (XML): a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. BeerXML is supported by a number of web sites, computer programmes and an increasing number of Android, Windows Phone and iOS apps. Plugins and extensions supporting BeerXML have been written for a variety of platforms including Ruby via RubyGems, WordPress, PHP and JavaScript. Many brewing hardware manufacturers incorporate BeerXML into their systems, and third-party plugins and patches are being developed for brewery control hardware and embedded systems, allowing the automation and fine control and timing of processes such as mashing and potentially fermentation. Common applications and examples of usage BeerXML is used in both amateur and professional brewing and facilitates the sharing of brewing data over the internet. Users of different applications such as the open-source software Brewtarget (with more than 52,000 downloads) can share data via XML with users of popular proprietary software such as Beersmith and ORRTIZ: BMS 4 Breweries, or upload their data to share on BeerXML-compatible sharing sites and cloud platforms such as Brewtoad (over 50,000 registered users) or the Beersmith Recipe Cloud (with 43,000 registered users). A user of a recipe design, creation and sharing site such as Brewersfriend.com can import and export BeerXML to and from mobile apps or enter it into a brewing competition database such as The Brew Competition Online Entry & Management (BCOE&M) system. The adoption of BeerXML as a standard is leading to new developments such a
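Because BeerXML is plain XML, reading it needs nothing beyond a standard XML parser. The fragment below is illustrative, not a complete recipe: it follows the uppercase element naming used by BeerXML 1.0, but the particular values and the reduced set of fields are invented for the example:

```python
import xml.etree.ElementTree as ET

# A minimal recipe fragment in the style of BeerXML 1.0 (illustrative only;
# a real recipe record carries many more required fields).
doc = """<RECIPES>
  <RECIPE>
    <NAME>Dry Stout</NAME>
    <VERSION>1</VERSION>
    <TYPE>All Grain</TYPE>
    <BATCH_SIZE>18.93</BATCH_SIZE>
    <HOPS>
      <HOP>
        <NAME>Goldings</NAME>
        <VERSION>1</VERSION>
        <ALPHA>5.0</ALPHA>
        <AMOUNT>0.0638</AMOUNT>
        <USE>Boil</USE>
        <TIME>60.0</TIME>
      </HOP>
    </HOPS>
  </RECIPE>
</RECIPES>"""

root = ET.fromstring(doc)
for recipe in root.findall("RECIPE"):
    name = recipe.findtext("NAME")                      # "Dry Stout"
    hops = [h.findtext("NAME") for h in recipe.iter("HOP")]
```

This is the kind of round-trip that lets a recipe move between a desktop application, a mobile app, and a competition database without manual re-entry.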
https://en.wikipedia.org/wiki/Verdier%20duality
In mathematics, Verdier duality is a cohomological duality in algebraic topology that generalizes Poincaré duality for manifolds. Verdier duality was introduced in 1965 by Jean-Louis Verdier as an analog for locally compact topological spaces of Alexander Grothendieck's theory of Poincaré duality in étale cohomology for schemes in algebraic geometry. It is thus (together with the said étale theory and for example Grothendieck's coherent duality) one instance of Grothendieck's six operations formalism. Verdier duality generalises the classical Poincaré duality of manifolds in two directions: it applies to continuous maps from one space to another (reducing to the classical case for the unique map from a manifold to a one-point space), and it applies to spaces that fail to be manifolds due to the presence of singularities. It is commonly encountered when studying constructible or perverse sheaves. Verdier duality Verdier duality states that (subject to suitable finiteness conditions discussed below) certain derived image functors for sheaves are actually adjoint functors. There are two versions. Global Verdier duality states that for a continuous map f : X → Y of locally compact Hausdorff spaces, the derived functor Rf_! of the direct image with compact (or proper) supports has a right adjoint f^! in the derived category of sheaves; in other words, for (complexes of) sheaves (of abelian groups) F on X and G on Y we have Hom(Rf_!F, G) ≅ Hom(F, f^!G). Local Verdier duality states that RHom(Rf_!F, G) ≅ Rf_*RHom(F, f^!G) in the derived category of sheaves on Y. It is important to note that the distinction between the global and local versions is that the former relates morphisms between complexes of sheaves in the derived categories, whereas the latter relates internal Hom-complexes and so can be evaluated locally. Taking global sections of both sides in the local statement gives global Verdier duality. These results hold subject to the compactly supported direct image functor having finite cohomological dimension. This is the case if there is a bound
https://en.wikipedia.org/wiki/Atomic%20commit
In the field of computer science, an atomic commit is an operation that applies a set of distinct changes as a single operation. If the changes are applied, then the atomic commit is said to have succeeded. If there is a failure before the atomic commit can be completed, then all of the changes completed in the atomic commit are reversed. This ensures that the system is always left in a consistent state. Another key property, isolation, comes from the nature of atomic commits as atomic operations. Isolation ensures that only one atomic commit is processed at a time. The most common uses of atomic commits are in database systems and version control systems. The problem with atomic commits is that they require coordination between multiple systems. Because computer networks are unreliable, no algorithm can guarantee such coordination among all systems, as proven in the Two Generals Problem. As databases become more and more distributed, this coordination increases the difficulty of making truly atomic commits. Usage Atomic commits are essential for multi-step updates to data. This can be shown clearly in a simple example of a money transfer between two checking accounts. The example involves a transaction that checks the balance of account Y while a separate transaction transfers 100 dollars from account X to Y. To start, 100 dollars is first removed from account X. Second, 100 dollars is added to account Y. If the entire operation is not completed as one atomic commit, then several problems could occur. If the system fails in the middle of the operation, after removing the money from X and before adding it into Y, then 100 dollars has simply disappeared. Another issue is that if the balance of Y is checked before the 100 dollars is added, the wrong balance for Y will be reported. With atomic commits neither of these cases can happen: in the first case of the system failure, the atomic commit would be rolled back and the money returned to X. In the second case, the request of the
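The transfer example maps directly onto a database transaction. In this sketch (table layout and account names are invented for illustration), Python's sqlite3 connection is used as a context manager, which commits the transaction on success and rolls it all back if any step raises:

```python
import sqlite3

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts as one atomic commit."""
    try:
        with conn:  # begins a transaction; commit on success, rollback on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            cur = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
    except ValueError:
        pass  # the partial debit above was rolled back automatically

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("X", 500), ("Y", 0)])
conn.commit()

transfer(conn, "X", "Y", 100)   # succeeds: X=400, Y=100
transfer(conn, "X", "Y", 1000)  # fails midway: rolled back, balances unchanged
```

A reader of the balances between the two UPDATE statements would be blocked or see the pre-transaction state, never the half-applied one, which is exactly the isolation property described above.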
https://en.wikipedia.org/wiki/Artificial%20Life%20%28journal%29
Artificial Life is a peer-reviewed scientific journal that covers the study of man-made systems that exhibit the behavioral characteristics of natural living systems. Its articles cover system synthesis in software, hardware, and wetware. Artificial Life was established in 1993 and is the official journal of the International Society of Artificial Life. It is published online and in hard copy by the MIT Press. Abstracting and indexing Artificial Life is abstracted and indexed in Academic Search, Biological Abstracts, BIOSIS Previews, CSA Mechanical & Transportation Engineering Abstracts, Compendex, Current Contents, EMBASE, Excerpta Medica, Inspec, MEDLINE, METADEX, PubMed, Referativny Zhurnal, Science Citation Index Expanded, Scopus, and The Zoological Record.
https://en.wikipedia.org/wiki/Chiral%20analysis
Chiral analysis refers to the quantification of the component enantiomers of racemic drug substances or pharmaceutical compounds. Other synonyms commonly used include enantiomer analysis, enantiomeric analysis, and enantioselective analysis. Chiral analysis includes all analytical procedures focused on the characterization of the properties of chiral drugs. Chiral analysis is usually performed with chiral separation methods where the enantiomers are separated on an analytical scale and simultaneously assayed for each enantiomer. Many compounds of biological and pharmacological interest are chiral. Interest in the pharmacodynamic, pharmacokinetic, and toxicological properties of the enantiomers of racemic chiral drugs has expanded significantly and has become a key issue for both the pharmaceutical industry and regulatory agencies. Typically one of the enantiomers is more active pharmacologically (the eutomer). In several cases, unwanted side effects or even toxic effects may occur with the inactive enantiomer (the distomer). Even if the side effects are not serious, the inactive enantiomer still has to be metabolized, which places an unnecessary burden on the patient's already stressed system. Large differences in activity between enantiomers mean that accurate assessment of the enantiomeric purity of pharmaceuticals, agrochemicals, and other chemical entities such as fragrances and flavors is very important. Moreover, the moment a racemic therapeutic is placed in a biological system, which is a chiral environment, it is no longer 50:50 owing to enantioselective absorption, distribution, metabolism, and elimination (ADME) processes. Hence, chiral analysis tools are needed to track the individual enantiomeric profiles. Chiral technology is an active subject related to asymmetric synthesis and enantioselective analysis, particularly in the area of chiral chromatography. As a consequence of the advances in chiral technology, a number of pharmaceuticals currently marketed as racemic drugs are
https://en.wikipedia.org/wiki/Kuramoto%20model
The Kuramoto model (or Kuramoto–Daido model), first proposed by Yoshiki Kuramoto, is a mathematical model used in describing synchronization. More specifically, it is a model for the behavior of a large set of coupled oscillators. Its formulation was motivated by the behavior of systems of chemical and biological oscillators, and it has found widespread applications in areas such as neuroscience and oscillating flame dynamics. Kuramoto was quite surprised when the behavior of some physical systems, namely coupled arrays of Josephson junctions, followed his model. The model makes several assumptions, including that there is weak coupling, that the oscillators are identical or nearly identical, and that interactions depend sinusoidally on the phase difference between each pair of objects. Definition In the most popular version of the Kuramoto model, each of the oscillators is considered to have its own intrinsic natural frequency ω_i, and each is coupled equally to all other oscillators. Surprisingly, this fully nonlinear model can be solved exactly in the limit of infinite oscillators, N → ∞; alternatively, using self-consistency arguments one may obtain steady-state solutions of the order parameter. The most popular form of the model has the following governing equations: dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i), for i = 1, …, N, where the system is composed of N limit-cycle oscillators, with phases θ_i and coupling constant K. Noise can be added to the system. In that case, the original equation is altered to dθ_i/dt = ω_i + ζ_i + (K/N) Σ_j sin(θ_j − θ_i), where ζ_i is the fluctuation and a function of time. If we consider the noise to be white noise, then ⟨ζ_i(t)⟩ = 0 and ⟨ζ_i(t) ζ_j(t′)⟩ = 2D δ_ij δ(t − t′), with D denoting the strength of the noise. Transformation The transformation that allows this model to be solved exactly (at least in the N → ∞ limit) is as follows: Define the "order" parameters r and ψ as r e^{iψ} = (1/N) Σ_j e^{iθ_j}. Here r represents the phase-coherence of the population of oscillators and ψ indicates the average phase. Multiplying this equation by e^{−iθ_i} and considering only the imaginary part gives r sin(ψ − θ_i) = (1/N) Σ_j sin(θ_j − θ_i), so that dθ_i/dt = ω_i + K r sin(ψ − θ_i). Thus the oscillators' equations are no
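A short simulation makes the synchronization transition visible. This sketch (parameters are illustrative) integrates the model with forward Euler, evaluating the coupling through the order parameter via the identity (K/N) Σ_j sin(θ_j − θ_i) = K r sin(ψ − θ_i) from the transformation above:

```python
import numpy as np

rng = np.random.default_rng(42)
N, K, dt, steps = 100, 4.0, 0.02, 2000      # K chosen well above critical
omega = rng.normal(0.0, 0.5, N)             # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)      # random initial phases

def order_parameter(theta):
    return np.mean(np.exp(1j * theta))      # complex r e^{i psi}

r_initial = abs(order_parameter(theta))     # near 0 for random phases
for _ in range(steps):                      # forward-Euler integration
    z = order_parameter(theta)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

r_final = abs(order_parameter(theta))       # approaches 1 for strong coupling
```

With coupling this far above the synchronization threshold, r grows from its initial O(1/√N) value to near 1; repeating the run with K close to zero instead leaves the phases incoherent.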
https://en.wikipedia.org/wiki/Spacetime%20symmetries
Spacetime symmetries are features of spacetime that can be described as exhibiting some form of symmetry. The role of symmetry in physics is important in simplifying solutions to many problems. Spacetime symmetries are used in the study of exact solutions of Einstein's field equations of general relativity. Spacetime symmetries are distinguished from internal symmetries. Physical motivation Physical problems are often investigated and solved by noticing features which have some form of symmetry. For example, spherical symmetry plays an important role in deriving the Schwarzschild solution and in deducing its physical consequences (such as the nonexistence of gravitational radiation from a spherically pulsating star). In cosmological problems, symmetry plays a role in the cosmological principle, which restricts the type of universes that are consistent with large-scale observations (e.g. the Friedmann–Lemaître–Robertson–Walker (FLRW) metric). Symmetries usually require some form of preserving property, the most important of which in general relativity include the following: preserving geodesics of the spacetime preserving the metric tensor preserving the curvature tensor These and other symmetries will be discussed below in more detail. This preservation property which symmetries usually possess (alluded to above) can be used to motivate a useful definition of these symmetries themselves. Mathematical definition A rigorous definition of symmetries in general relativity has been given by Hall (2004). In this approach, the idea is to use (smooth) vector fields whose local flow diffeomorphisms preserve some property of the spacetime. (Note that one should emphasize in one's thinking that this is a diffeomorphism, a transformation on a differential element. The implication is that the behavior of objects with extent may not be as manifestly symmetric.) This preserving property of the diffeomorphisms is made precise as follows. A
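The metric-preserving case in the list above is the prototypical example and can be stated precisely. A smooth vector field X generates a local flow of isometries exactly when the Lie derivative of the metric along X vanishes, i.e. when X satisfies Killing's equation:

```latex
% X is a Killing vector field: its local flow preserves the metric g.
\[
  (\mathcal{L}_X g)_{ab} \;=\; \nabla_a X_b + \nabla_b X_a \;=\; 0,
\]
% where \mathcal{L}_X is the Lie derivative along X and \nabla is the
% Levi-Civita connection of g.
```

Analogous conditions define the other cases: affine vector fields preserve geodesics together with their affine parameter, and curvature collineations preserve the curvature tensor.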
https://en.wikipedia.org/wiki/Kempe%20Award%20for%20Distinguished%20Ecologists
The Kempe Award for Distinguished Ecologists is a prize awarded biennially from 1994 onwards to recognise outstanding individuals within the science of ecology. The Award is an honorarium of SEK 50,000. The award is given by the Kempe Foundations (Kempefonden), Umeå University and the Swedish University of Agricultural Sciences in cooperation. Kempe Award Laureates 1994 Stuart L. Pimm 1996 F. Stuart Chapin III 1998 John Lawton 2000 Daniel Simberloff 2002 David Read 2004 Mary Power 2006 Peter M. Vitousek 2008 Stephen P. Hubbell 2011 Ilkka Hanski See also List of ecology awards
https://en.wikipedia.org/wiki/Potentially%20unwanted%20program
A potentially unwanted program (PUP) or potentially unwanted application (PUA) is software that a user may perceive as unwanted or unnecessary. It is used as a subjective tagging criterion by security and parental control products. Such software may use an implementation that can compromise privacy or weaken the computer's security. Companies often bundle a wanted program download with a wrapper application and may offer to install an unwanted application, and in some cases without providing a clear opt-out method. Antivirus companies define the software bundled as potentially unwanted programs which can include software that displays intrusive advertising (adware), or tracks the user's Internet usage to sell information to advertisers (spyware), injects its own advertising into web pages that a user looks at, or uses premium SMS services to rack up charges for the user. A growing number of open-source software projects have expressed dismay at third-party websites wrapping their downloads with unwanted bundles, without the project's knowledge or consent. Nearly every third-party free download site bundles their downloads with potentially unwanted software. The practice is widely considered unethical because it violates the security interests of users without their informed consent. Some unwanted software bundles install a root certificate on a user's device, which allows hackers to intercept private data such as banking details, without a browser giving security warnings. The United States Department of Homeland Security has advised removing an insecure root certificate, because they make computers vulnerable to serious cyberattacks. Software developers and security experts recommend that people always download the latest version from the official project website, or a trusted package manager or app store. Origins Historically, the first big companies working with potentially unwanted programs for creating revenue came up in the US in the mid-2000s, such as Zango
https://en.wikipedia.org/wiki/Babenko%E2%80%93Beckner%20inequality
In mathematics, the Babenko–Beckner inequality (after Konstantin I. Babenko and William E. Beckner) is a sharpened form of the Hausdorff–Young inequality having applications to uncertainty principles in the Fourier analysis of Lp spaces. For 1 < p ≤ 2 with 1/p + 1/q = 1, the (q, p)-norm of the n-dimensional Fourier transform is defined to be its operator norm from Lp(Rn) to Lq(Rn). In 1961, Babenko found this norm for even integer values of q. Finally, in 1975, using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm for all q ≥ 2 is (p^{1/p}/q^{1/q})^{n/2}. Thus we have the Babenko–Beckner inequality ‖Ff‖_q ≤ (p^{1/p}/q^{1/q})^{n/2} ‖f‖_p. To write this out explicitly, in the case of one dimension, if the Fourier transform is normalized so that g(y) = ∫ e^{−2πixy} f(x) dx, then we have (∫ |g(y)|^q dy)^{1/q} ≤ (p^{1/p}/q^{1/q})^{1/2} (∫ |f(x)|^p dx)^{1/p}. Main ideas of proof Throughout this sketch of a proof, let 1 < p ≤ 2 and q = p/(p − 1). (Except for q, we will more or less follow the notation of Beckner.) The two-point lemma Let ν be the discrete measure with weight 1/2 at the points ±1. Then the operator C : a + bx ↦ a + ωbx with ω = i√(p − 1) maps Lp(ν) to Lq(ν) with norm 1; that is, ‖a + ωbx‖_{Lq(ν)} ≤ ‖a + bx‖_{Lp(ν)} for any complex a, b. (See Beckner's paper for the proof of his "two-point lemma".) A sequence of Bernoulli trials The measure ν that was introduced above is actually a fair Bernoulli trial with mean 0 and variance 1. Consider the sum of a sequence of n such Bernoulli trials, independent and normalized so that the standard deviation remains 1. We obtain the measure ν_n, the n-fold convolution of the rescaled two-point measure with itself. The next step is to extend the operator C defined on the two-point space above to an operator defined on the (n + 1)-point space of ν_n with respect to the elementary symmetric polynomials. Convergence to standard normal distribution The sequence ν_n converges weakly to the standard normal probability distribution with respect to functions of polynomial growth. In the limit, the extension of the operator C above in terms of the elementary symmetric polynomials with respect to the measure ν_n is expressed as an operator T in terms of the Hermite polynomials with respect to the standard normal distribution. The
https://en.wikipedia.org/wiki/Aethrioscope
An aethrioscope (or æthrioscope) is a meteorological device invented by Sir John Leslie in 1818 for measuring the chilling effect of a clear sky. The name is from the Greek word for clear – αίθριος. It consists of a metallic cup standing upon a tall hollow pedestal, with a differential thermometer placed so that one of its bulbs is in the focus of the paraboloid formed by the cavity of the cup. The interior of the cup is highly polished and is kept covered by a plate of metal, being opened when an observation is made. The second bulb is always screened from the sky and so is not affected by the radiative effect of the clear sky, the action of which is concentrated upon the first bulb. The contraction of the air in the first bulb on its sudden exposure to a clear sky causes the liquid in the stem to rise. The device will respond in a contrary fashion when exposed to heat radiation and so may be used as a pyrometer too.
https://en.wikipedia.org/wiki/Andris%20Ambainis
Andris Ambainis (born 18 January 1975) is a Latvian computer scientist active in the fields of quantum information theory and quantum computing. Education and career Ambainis has held past positions at the Institute for Advanced Study at Princeton, New Jersey and the Institute for Quantum Computing at the University of Waterloo. He is currently a professor in the Faculty of Computing at the University of Latvia. He received a Bachelor's degree (1996), a Master's degree (1997), and a Doctorate (1997) in Computer Science from the University of Latvia, as well as a PhD (2001) from the University of California, Berkeley. Contributions Ambainis has contributed extensively to quantum information processing and foundations of quantum mechanics, mostly through his work on quantum walks and lower bounds for quantum query complexity. Recognition In 1991 he received a perfect score and gold medal at the International Mathematical Olympiad. He won an Alfred P. Sloan Fellowship in 2008. Ambainis was an invited speaker at the 2018 International Congress of Mathematicians, speaking on mathematical aspects of computer science.
https://en.wikipedia.org/wiki/Hengzhi%20chip
The Hengzhi chip (, 联想"恒智"安全芯片) is a microcontroller that can store secured information, designed by the People's Republic of China government and manufactured in China. Its functionalities should be similar to those offered by a Trusted Platform Module but, unlike the TPM, it does not follow Trusted Computing Group specifications. Lenovo is selling PCs installed with Hengzhi security chips. The chip could be a development of the IBM ESS (Embedded security subsystem) chip, which was a public key smart card placed directly on the motherboard's system management bus. As of September 2006, no public specifications about the chip are available. See also Trusted Computing Trusted Platform Module
https://en.wikipedia.org/wiki/Polyhedron%20model
A polyhedron model is a physical construction of a polyhedron, constructed from cardboard, plastic board, wood board or other panel material, or, less commonly, solid material. Since there are 75 uniform polyhedra, including the five regular convex polyhedra, five polyhedral compounds, four Kepler-Poinsot polyhedra, and thirteen Archimedean solids, constructing or collecting polyhedron models has become a common mathematical recreation. Polyhedron models are found in mathematics classrooms much as globes in geography classrooms. Polyhedron models are notable as three-dimensional proof-of-concepts of geometric theories. Some polyhedra also make great centerpieces, tree toppers, Holiday decorations, or symbols. The Merkaba religious symbol, for example, is a stellated octahedron. Constructing large models offer challenges in engineering structural design. Construction Construction begins by choosing a size of the model, either the length of its edges or the height of the model. The size will dictate the material, the adhesive for edges, the construction time and the method of construction. The second decision involves colours. A single-colour cardboard model is easiest to construct — and some models can be made by folding a pattern, called a net, from a single sheet of cardboard. Choosing colours requires geometric understanding of the polyhedron. One way is to colour each face differently. A second way is to colour all square faces the same, all pentagonal faces the same, and so forth. A third way is to colour opposite faces the same. Many polyhedra are also coloured such that no same-coloured faces touch each other along an edge or at a vertex. For example, a 20-face icosahedron can use twenty colours, one colour, ten colours, or five colours, respectively. An alternative way for polyhedral compound models is to use a different colour for each polyhedron component. Net templates are then made. One way is to copy templates from a polyhedron-making bo
https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee%20algorithm
In numerical linear algebra, the Cuthill–McKee algorithm (CM), named after Elizabeth Cuthill and James McKee, is an algorithm to permute a sparse matrix that has a symmetric sparsity pattern into a band matrix form with a small bandwidth. The reverse Cuthill–McKee algorithm (RCM) due to Alan George and Joseph Liu is the same algorithm but with the resulting index numbers reversed. In practice this generally results in less fill-in than the CM ordering when Gaussian elimination is applied. The Cuthill–McKee algorithm is a variant of the standard breadth-first search algorithm used in graph algorithms. It starts with a peripheral node and then generates levels R_i for i = 1, 2, … until all nodes are exhausted. The set R_{i+1} is created from the set R_i by listing all vertices adjacent to the nodes in R_i. These nodes are ordered according to predecessors and degree. Algorithm Given a symmetric n × n matrix we visualize the matrix as the adjacency matrix of a graph. The Cuthill–McKee algorithm is then a relabeling of the vertices of the graph to reduce the bandwidth of the adjacency matrix. The algorithm produces an ordered n-tuple R of vertices which is the new order of the vertices. First we choose a peripheral vertex x (the vertex with the lowest degree) and set R := ({x}). Then for i = 1, 2, … we iterate the following steps while |R| < n: Construct the adjacency set A_i of R_i (with R_i the i-th component of R) and exclude the vertices we already have in R. Sort A_i ascending by minimum predecessor (the already-visited neighbor with the earliest position in R), and as a tiebreak ascending by vertex degree. Append A_i to the Result set R. In other words, number the vertices according to a particular level structure (computed by breadth-first search) where the vertices in each level are visited in order of their predecessor's numbering from lowest to highest. Where the predecessors are the same, vertices are distinguished by degree (again ordered from lowest to highest). See also Graph bandwidth Sparse matrix
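The BFS-with-degree-ordering described above fits in a few lines. This sketch takes the graph as an adjacency dict (the example path graph is invented for illustration); because BFS dequeues predecessors in order, sorting each vertex's unvisited neighbors by degree realizes the predecessor-then-degree ordering:

```python
from collections import deque

def cuthill_mckee(adj):
    """CM ordering of a graph given as {vertex: set_of_neighbors}."""
    degree = {v: len(adj[v]) for v in adj}
    visited, order = set(), []
    while len(order) < len(adj):        # restart handles disconnected graphs
        # Start from an (approximately) peripheral vertex: lowest degree.
        start = min((v for v in adj if v not in visited), key=degree.get)
        visited.add(start)
        order.append(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            # Unvisited neighbors, lowest degree first.
            for w in sorted((u for u in adj[v] if u not in visited),
                            key=degree.get):
                visited.add(w)
                order.append(w)
                queue.append(w)
    return order

def reverse_cuthill_mckee(adj):
    return list(reversed(cuthill_mckee(adj)))

# Example: a path graph 0-2-3-1-4 with scrambled labels.
path = {0: {2}, 1: {3, 4}, 2: {0, 3}, 3: {1, 2}, 4: {1}}
order = reverse_cuthill_mckee(path)
pos = {v: i for i, v in enumerate(order)}
# Bandwidth of the reordered adjacency matrix: 1 for a path.
bandwidth = max(abs(pos[u] - pos[v]) for u in path for v in path[u])
```

For production use, SciPy ships an implementation as scipy.sparse.csgraph.reverse_cuthill_mckee, which operates directly on a sparse matrix.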
https://en.wikipedia.org/wiki/Multilinear%20algebra
Multilinear algebra is the study of functions with multiple vector-valued arguments, which are linear maps with respect to each argument. Concepts such as matrices, vectors, systems of linear equations, higher-dimensional spaces, determinants, inner and outer products, and dual spaces emerge naturally in the mathematics of multilinear functions. Multilinear algebra is a foundational mathematical tool in engineering, machine learning, physics, and mathematics. Origin While many theoretical concepts and applications are concerned with single vectors, mathematicians such as Hermann Grassmann considered the structures involving pairs, triplets, and general multi-vectors that generalize vectors. With multiple combinational possibilities, the space of multi-vectors expands to 2^n dimensions, where n is the dimension of the relevant vector space. The determinant can be formulated abstractly using the structures of multilinear algebra. Multilinear algebra appears in the study of the mechanical response of materials to stress and strain, involving various moduli of elasticity. The term "tensor" describes elements within the multilinear space due to its added structure. Despite Grassmann's early work in 1844 with his Ausdehnungslehre, which was also republished in 1862, the subject was initially not widely understood, as even ordinary linear algebra posed many challenges at the time. The concepts of multilinear algebra find applications in certain studies of multivariate calculus and manifolds, particularly in relation to the Jacobian matrix. Infinitesimal differentials encountered in single-variable calculus are transformed into differential forms in multivariate calculus, and their manipulation is carried out using exterior algebra. Following Grassmann, developments in multilinear algebra were made by Victor Schlegel in 1872 with the publication of the first part of his System der Raumlehre and by Elwin Bruno Christoffel. Notably, significant advancements came through the
https://en.wikipedia.org/wiki/Dextran
Dextran is a complex branched glucan (polysaccharide derived from the condensation of glucose), originally derived from wine. IUPAC defines dextrans as "Branched poly-α-d-glucosides of microbial origin having glycosidic bonds predominantly C-1 → C-6". Dextran chains are of varying lengths (from 3 to 2000 kilodaltons). The polymer main chain consists of α-1,6 glycosidic linkages between glucose monomers, with branches from α-1,3 linkages. This characteristic branching distinguishes a dextran from a dextrin, which is a straight chain glucose polymer tethered by α-1,4 or α-1,6 linkages. Occurrence Dextran was discovered by Louis Pasteur as a microbial product in wine, but mass production was only possible after the development by Allene Jeanes of a process using bacteria. Dental plaque is rich in dextrans. Dextran is a complicating contaminant in the refining of sugar because it elevates the viscosity of sucrose solutions and fouls plumbing. Dextran is now produced from sucrose by certain lactic acid bacteria of the family Lactobacillaceae. Species include Leuconostoc mesenteroides and Streptococcus mutans. The structure of the dextran produced depends not only on the family and species of the bacterium but on the strain. They are separated by fractional precipitation from protein-free extracts using ethanol. Some bacteria coproduce fructans, which can complicate isolation of the dextrans. Uses Dextran 70 is on the WHO Model List of Essential Medicines, the most important medications needed in a health system. Medicinally it is used as an antithrombotic (antiplatelet), to reduce blood viscosity, and as a volume expander in hypovolaemia. Microsurgery These agents are used commonly by microsurgeons to decrease vascular thrombosis. The antithrombotic effect of dextran is mediated through its binding of erythrocytes, platelets, and vascular endothelium, increasing their electronegativity and thus reducing erythrocyte aggregation and platelet adhesiveness. Dextrans also
https://en.wikipedia.org/wiki/Data%20masking
Data masking or data obfuscation is the process of modifying sensitive data in such a way that it is of no or little value to unauthorized intruders while still being usable by software or authorized personnel. Data masking can also be referred to as anonymization or tokenization, depending on the context. The main reason to mask data is to protect information that is classified as personally identifiable information or mission critical data. However, the data must remain usable for the purposes of undertaking valid test cycles. It must also look real and appear consistent. It is more common to have masking applied to data that is represented outside of a corporate production system, that is, where data is needed for the purpose of application development, building program extensions, and conducting various test cycles. It is common practice in enterprise computing to take data from the production systems to fill the data component required for these non-production environments. However, this practice is not always restricted to non-production environments. In some organizations, data that appears on terminal screens to call center operators may have masking dynamically applied based on user security permissions (e.g. preventing call center operators from viewing credit card numbers in billing systems). The primary concern from a corporate governance perspective is that personnel conducting work in these non-production environments are not always security cleared to operate with the information contained in the production data. This practice represents a security hole where data can be copied by unauthorized personnel, and security measures associated with standard production level controls can be easily bypassed. This represents an access point for a data security breach. The overall practice of data masking at an organizational level should be tightly coupled with the test management practice and underlying methodology and should incorporate process
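As an illustration of dynamic masking at display time, the following sketch (a hypothetical helper, not tied to any particular masking product) blanks all but the last four digits of a card number while preserving separators, so the masked value still looks real and stays format-consistent:

```python
def mask_pan(pan: str, visible: int = 4, mask_char: str = "*") -> str:
    """Mask a card number, keeping only the last `visible` digits.

    Non-digit characters (spaces, dashes) pass through unchanged so the
    masked output keeps the original formatting.
    """
    total_digits = sum(c.isdigit() for c in pan)
    first_shown = total_digits - visible  # index of the first unmasked digit
    out, seen = [], 0
    for c in pan:
        if c.isdigit():
            out.append(c if seen >= first_shown else mask_char)
            seen += 1
        else:
            out.append(c)  # keep separators for a consistent-looking value
    return "".join(out)

print(mask_pan("4111 1111 1111 1234"))  # → **** **** **** 1234
```

In a call-center scenario like the one described above, such a function would be applied at render time, keyed off the operator's security permissions, rather than altering the stored production record.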
https://en.wikipedia.org/wiki/Station%20HYPO
Station HYPO, also known as Fleet Radio Unit Pacific (FRUPAC), was the United States Navy signals monitoring and cryptographic intelligence unit in Hawaii during World War II. It was one of two major Allied signals intelligence units, called Fleet Radio Units, in the Pacific theaters, along with FRUMEL in Melbourne, Australia. The station took its initial name from the phonetic code at the time for "H", for the Heʻeia, Hawaii radio tower. The precise importance and role of HYPO in penetrating the Japanese naval codes has been the subject of considerable controversy, reflecting internal tensions amongst US Navy cryptographic stations. HYPO was under the control of the OP-20-G Naval Intelligence section in Washington. Before the attack on Pearl Harbor of December 7, 1941, and for some time afterwards, HYPO was in the basement of the Old Administration Building at Pearl Harbor. Later on, a new building was constructed for the station, though it had been reorganized and renamed by then. Background Cryptanalytic problems facing the United States in the Pacific prior to World War II were largely those related to Japan. An early decision by OP-20-G in Washington divided responsibilities for them among CAST at Cavite and then Corregidor, in the Philippines, HYPO in Hawaii, and OP-20-G itself in Washington. Other Navy crypto stations, including Guam and Bainbridge Island on Puget Sound, were tasked and staffed for signals interception and traffic analysis. The US Army's SIS broke into the highest level Japanese diplomatic cypher (called PURPLE by the US) well before the attack on Pearl Harbor. PURPLE produced little of military value, as the Japanese Foreign Ministry was thought by the ultra-nationalists to be unreliable. Furthermore, decrypts from PURPLE, eventually called MAGIC, were poorly distributed and used in Washington. SIS was able to build several PURPLE machine equivalents. One was sent to CAST, but as HYPO's assigned responsibility did not include PURPLE traffic, no
https://en.wikipedia.org/wiki/Out-of-band%20data
In computer networking, out-of-band data is the data transferred through a stream that is independent from the main in-band data stream. An out-of-band data mechanism provides a conceptually independent channel, which allows any data sent via that mechanism to be kept separate from in-band data. The out-of-band data mechanism should be provided as an inherent characteristic of the data channel and transmission protocol, rather than requiring a separate channel and endpoints to be established. The term "out-of-band data" probably derives from out-of-band signaling, as used in the telecommunications industry. Example case Consider a networking application that tunnels data from a remote data source to a remote destination. The data being tunneled may consist of any bit patterns. The sending end of the tunnel may at times have conditions that it needs to notify the receiving end about. However, it cannot simply insert a message to the receiving end because that end will not be able to distinguish the message from data sent by the data source. By using an out-of-band mechanism, the sending end can send the message to the receiving end out of band. The receiving end will be notified in some fashion of the arrival of out-of-band data, and it can read the out-of-band data and know that this is a message intended for it from the sending end, independent of the data from the data source. Implementations It is possible to implement out-of-band data transmission using a physically separate channel, but most commonly out-of-band data is a feature provided by a transmission protocol using the same channel as normal data. A typical protocol might divide the data to be transmitted into blocks, with each block having a header word that identifies the type of data being sent, and a count of the data bytes or words to be sent in the block. The header will identify the data as being in-band or out-of-band, along with other identification and routing information. At the rece
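A minimal sketch of such block framing follows. The header layout here (one type byte marking a block as in-band or out-of-band, plus a two-byte big-endian payload count) is an illustrative assumption; real protocols define their own header words and routing fields.

```python
import struct

# Assumed header: 1-byte block type, 2-byte big-endian payload length.
TYPE_INBAND, TYPE_OOB = 0, 1
HEADER = "!BH"  # network byte order, no padding; 3 bytes total

def encode_block(payload: bytes, oob: bool = False) -> bytes:
    """Frame a payload as one block with an in-band/out-of-band header."""
    btype = TYPE_OOB if oob else TYPE_INBAND
    return struct.pack(HEADER, btype, len(payload)) + payload

def decode_blocks(stream: bytes):
    """Yield (is_oob, payload) pairs from a concatenation of blocks."""
    i, hsize = 0, struct.calcsize(HEADER)
    while i < len(stream):
        btype, length = struct.unpack_from(HEADER, stream, i)
        i += hsize
        yield btype == TYPE_OOB, stream[i:i + length]
        i += length

# The receiver can separate tunnel-control messages from tunneled data
# even though both arrive on the same channel.
wire = encode_block(b"normal data") + encode_block(b"ALERT", oob=True)
for is_oob, payload in decode_blocks(wire):
    print("OOB" if is_oob else "in-band", payload)
```

Because the type byte travels in the header rather than in the payload, arbitrary bit patterns from the data source can never be mistaken for a control message, which is exactly the problem the example case describes.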
https://en.wikipedia.org/wiki/Marinobacter%20algicola
Marinobacter algicola is a Gram-negative, aerobic and moderately halophilic bacterium from the genus Marinobacter which has been isolated from the dinoflagellate Gymnodinium catenatum in Scotland.
https://en.wikipedia.org/wiki/Bankoff%20circle
In geometry, the Bankoff circle or Bankoff triplet circle is a certain Archimedean circle that can be constructed from an arbelos; an Archimedean circle is any circle with area equal to each of Archimedes' twin circles. The Bankoff circle was first constructed by Leon Bankoff in 1974. Construction The Bankoff circle is formed from three semicircles that create an arbelos. A circle C1 is then formed tangent to each of the three semicircles, as an instance of the problem of Apollonius. Another circle C2 is then created, through three points: the two points of tangency of C1 with the smaller two semicircles, and the point where the two smaller semicircles are tangent to each other. C2 is the Bankoff circle. Radius of the circle If r = AB/AC, then the radius of the Bankoff circle is ρ = r(1 − r)/2, taking AC = 1.
https://en.wikipedia.org/wiki/Contra%20%28video%20game%29
Contra is a run and gun video game developed and published by Konami, originally developed as a coin-operated arcade video game in 1986 and released on February 20, 1987. A home version was released for the Nintendo Entertainment System in 1988, along with ports for various home computer formats, including the MSX2. The arcade and computer versions were localized as Gryzor in Europe, and the NES version as Probotector in PAL regions. The arcade game was a commercial success worldwide, becoming one of the top four highest-grossing dedicated arcade games of 1987 in the United States. The NES version was also a critical and commercial success, with Electronic Gaming Monthly awarding it Best Action Game of 1988. Several Contra sequels were produced following the original game. Gameplay Contra employs a variety of playing perspectives, which include a standard side view, a pseudo-3D view (in which the player proceeds by shooting and moving towards the background, in addition to left or right) and a fixed screen format (in which the player has their gun aimed upwards by default). Up to two people can play simultaneously, with one player as Bill (the blond-haired commando wearing a white tank top and blue bandana), and the other player as Lance (the shirtless dark-haired commando with a red bandana). The controls consist of an eight-way joystick and two action buttons for shooting (left) and jumping (right). When one of the protagonists jumps, he curls into a somersault instead of doing a conventional jump like in other games. The joystick controls not only the player's movement while running and jumping, but also his aiming. During side view stages, the player can shoot leftward, rightward or upward while standing, as well as horizontally and diagonally while running. The player can also shoot in any of eight directions, including downwards, while jumping. Pressing the joystick downwards while standing will cause the character to lie down on his stomach, allowi
https://en.wikipedia.org/wiki/4-Cyano-4%27-pentylbiphenyl
4-Cyano-4'-pentylbiphenyl is a commonly used nematic liquid crystal with the chemical formula C18H19N. It frequently goes by the common name 5CB. 5CB was first synthesized by George William Gray, Ken Harrison, and J.A. Nash at the University of Hull in 1972, making it the first member of the cyanobiphenyls. The liquid crystal was discovered after Gray's group received a grant from the UK Ministry of Defence to find a material that had liquid crystal phases near room temperature, with the specific intention of using it in liquid crystal displays. The molecule is about 20 Å long. The liquid crystal 5CB undergoes a phase transition from a crystalline state to a nematic state at 22.5 °C, and it goes from a nematic to an isotropic state at 35.0 °C. Production 5CB is produced by modifying biphenyl in a linear manner. First Br2 is added to the biphenyl to introduce a bromine atom to the end of the moiety. Next aluminium chloride and C4H9COCl are added to the sample, followed by the addition of potassium hydroxide and NH2NH2. By this point the molecule will have a bromine atom on one end of the rigid core and C5H11 on the other end. Finally, introduction of copper(I) cyanide and DMF results in the removal of the bromine and its replacement with CN, yielding 5CB.
https://en.wikipedia.org/wiki/List%20of%20tumblers%20%28small%20Solar%20System%20bodies%29
This is a list of tumblers, minor planets, comets and natural satellites whose angular momentum vector is far from the principal axis of inertia, so that they do not rotate in a fairly constant manner with a constant period. Instead of rotating around a constant axis or around a wobbling axis, they appear to tumble (see Poinsot's ellipsoid for an explanation). For true tumbling, the three moments of inertia must be different. If two are equal, then the axis of rotation will simply precess in a circle. As of 2018, there are 3 natural satellites and 198 confirmed or likely tumblers out of a total of nearly 800,000 discovered small Solar System bodies. The data is sourced from the "Lightcurve Data Base" (LCDB). The tumbling of a body can be caused by the torque from asymmetrically emitted radiation known as the YORP effect. Note that the rotation periods given below are approximate. The rotation period is not constant for a tumbler. Natural satellites This is a list of tumbling natural satellites (moons) that orbit planets and dwarf planets in the Solar System. Minor planets and comets Notes
https://en.wikipedia.org/wiki/Software%20Engineering%20for%20Adaptive%20and%20Self-Managing%20Systems
The Workshop on Software Engineering for Adaptive and Self-Managing Systems (SEAMS) is an academic conference for exchanging research results and experiences in the areas of autonomic computing, self-managing, self-healing, self-optimizing, self-configuring, and self-adaptive systems theory. It was established in 2006 at the International Conference on Software Engineering (ICSE). It integrated workshops held mainly at ICSE and the Foundations of Software Engineering (FSE) conference since 2002, including the FSE 2002 and 2004 Workshops on Self-Healing (Self-Managed) Systems (WOSS), ICSE 2005 Workshop on Design and Evolution of Autonomic Application Software, and the ICSE 2002, 2003, 2004 and 2005 Workshops on Architecting Dependable Systems.
https://en.wikipedia.org/wiki/Temporary%20equilibrium%20method
The temporary equilibrium method was devised by Alfred Marshall for analyzing economic systems that comprise interdependent variables of different speed. Sometimes it is referred to as the moving equilibrium method. For example, assume an industry with a certain capacity that produces a certain commodity. Given this capacity, the supply offered by the industry will depend on the prevailing price. The corresponding supply schedule gives short-run supply. The demand depends on the market price. The price in the market declines if supply exceeds demand, and it increases if supply is less than demand. The price mechanism leads to market clearing in the short run. However, if this short-run equilibrium price is sufficiently high, production will be very profitable, and capacity will increase. This shifts the short-run supply schedule to the right, and a new short-run equilibrium price will be obtained. The resulting sequence of short-run equilibria is termed temporary equilibria. The overall system involves two state variables: price and capacity. Using the temporary equilibrium method, it can be reduced to a system involving only one state variable. This is possible because each short-run equilibrium price will be a function of the prevailing capacity, and the change of capacity will be determined by the prevailing price. Hence the change of capacity will be determined by the prevailing capacity. The method works if the price adjusts fast and capacity adjustment is comparatively slow. The mathematical background is provided by the Moving equilibrium theorem. In physics, the method is known as scale separation.
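The reduction to a single state variable can be illustrated numerically. The functional forms below (linear demand, short-run supply proportional to capacity, capacity drifting with profitability via a reference price) are illustrative assumptions, not Marshall's own specification:

```python
def simulate(a=10.0, b=1.0, K0=1.0, p_ref=2.0, speed=0.05, steps=200):
    """Two-speed dynamics: fast price clearing, slow capacity adjustment.

    Demand D(p) = a - b*p, short-run supply S(p, K) = K*p. Market
    clearing gives the temporary equilibrium price p*(K) = a / (b + K),
    so price is eliminated as an independent state variable and only
    capacity K evolves.
    """
    K, path = K0, []
    for _ in range(steps):
        p = a / (b + K)            # fast variable: short-run equilibrium
        K += speed * (p - p_ref)   # slow variable: capacity grows when
        path.append((p, K))        # price (profitability) is high
    return path

path = simulate()
# Price tends toward p_ref = 2 and capacity toward a/p_ref - b = 4.
print(path[-1])
```

Note that each iteration treats the price as fully adjusted before capacity moves at all; this is exactly the "fast price, slow capacity" separation under which the method is valid.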
https://en.wikipedia.org/wiki/Quasielastic%20scattering
In physics, quasielastic scattering designates a limiting case of inelastic scattering, characterized by energy transfers being small compared to the incident energy of the scattered particles. The term was originally coined in nuclear physics. It was applied to thermal neutron scattering by Léon Van Hove and Pierre-Gilles de Gennes (quasielastic neutron scattering, QENS). Finally, it is sometimes used for dynamic light scattering (also known by the more expressive term photon correlation spectroscopy).
https://en.wikipedia.org/wiki/Bubble%20nest
Bubble nests, also called foam nests, are created by some fish and frog species as floating masses of bubbles blown with an oral secretion, saliva bubbles, and occasionally aquatic plants. Fish that build and guard bubble nests are known as aphrophils. Aphrophils include gouramis (including Betta species) and the synbranchid eel Monopterus albus in Asia, Microctenopoma (Anabantidae), Polycentropsis (Nandidae), and Hepsetus odoe (the only member of Hepsetidae) in Africa, and callichthyines and the electric eel in South America. Most, if not all, fish that construct floating bubble nests live in tropical, oxygen-depleted standing waters. Osphronemidae, containing the bettas and gouramis, are the most commonly recognized family of bubble nest makers, though some members of that family mouthbrood instead. The nests are constructed as a place for fertilized eggs to be deposited while incubating and guarded by one or both parents (usually solely the male) until the fry hatch. Bubble nests can also be found in the habitats of domesticated male Betta fish. Nests found in these types of habitats indicate a healthy and happy fish. Construction and Nest Characteristics Bubble nests are built even when no female or fry are present (though often a female swimming past will trigger the frantic construction of the nest). Males will build bubble nests of various sizes and thicknesses, depending on the male's territory and personality. Some males build constantly, some occasionally, some when introduced to a female and some do not even begin until after spawning. Some nests will be large, some small, some thick. Nest size does not directly correlate with number of eggs. Bigger males build larger bubble nests, which can hold more eggs and larval fish but require a larger male to maintain. Larger males are also more successful in protecting their eggs and juvenile fish from predators. Most nests are found in shallow bodies and margina