https://en.wikipedia.org/wiki/Navicular%20fossa%20of%20male%20urethra
The navicular fossa is a short dilated portion of (the spongy (or cavernous or penile) portion of) the male urethra within the glans penis, just proximal to the external urethral meatus. The roof of the fossa is especially dilated, forming a lacuna; a medical instrument being inserted into the male urethra should initially be directed towards the floor of the fossa so as not to get snagged at the fossa. It is one of three dilations of the male urethra (the other two occurring at the prostate and the bulb of the penis). The wall of the navicular fossa is the only part of the urethra that is lined with stratified squamous epithelium (instead of the transitional epithelium that is typical for the urinary tract). During development, the glans of the penis is initially solid but canalizes to give rise to the navicular fossa.
https://en.wikipedia.org/wiki/Rank%20abundance%20curve
A rank abundance curve or Whittaker plot is a chart used by ecologists to display relative species abundance, a component of biodiversity. It can also be used to visualize species richness and species evenness. It overcomes the shortcomings of biodiversity indices that cannot display the relative role different variables played in their calculation. The curve is a 2D chart with relative abundance on the Y-axis and abundance rank on the X-axis. X-axis: the abundance rank. The most abundant species is given rank 1, the second most abundant rank 2, and so on. Y-axis: the relative abundance. Usually measured on a log scale, this is a measure of a species' abundance (e.g., the number of individuals) relative to the abundance of the other species. Interpreting a rank abundance curve The rank abundance curve visually depicts both species richness and species evenness. Species richness can be read as the number of different species on the chart, i.e., how many species were ranked. Species evenness is reflected in the slope of the line that fits the graph (assuming a linear, i.e. logarithmic series, relationship). A steep gradient indicates low evenness, as the high-ranking species have much higher abundances than the low-ranking species. A shallow gradient indicates high evenness, as the abundances of the different species are similar. Quantitative comparison of rank abundance curves Quantitative comparison of rank abundance curves of different communities can be done using the RADanalysis package in R. This package uses the max rank normalization method, in which a rank abundance distribution is made by normalizing the rank abundance curves of the communities to the same number of ranks and then normalizing the relative abundances to sum to one.
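The construction described above can be sketched in a few lines of code. This is an illustrative Python sketch, not the RADanalysis R package; the species counts are hypothetical.

```python
# Build a rank abundance curve: sort species by abundance (rank 1 = most
# abundant) and express each abundance relative to the total, so that the
# relative abundances sum to one.

def rank_abundance(counts):
    """Return (rank, relative abundance) pairs, rank 1 = most abundant."""
    total = sum(counts)
    rel = sorted((c / total for c in counts), reverse=True)
    return list(enumerate(rel, start=1))

community = [50, 25, 12, 8, 5]   # hypothetical individuals per species
curve = rank_abundance(community)
# Species richness = number of ranks on the chart; evenness shows in how
# quickly relative abundance falls off with increasing rank.
print(curve)
```

Plotting rank on the X-axis against relative abundance on a log-scaled Y-axis gives the Whittaker plot described above.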
https://en.wikipedia.org/wiki/Bainbridge%20reflex
The Bainbridge reflex or Bainbridge effect, also called the atrial reflex, is an increase in heart rate due to an increase in central venous pressure. Increased blood volume is detected by stretch receptors (cardiac receptors) located in both atria at the venoatrial junctions. History Francis Arthur Bainbridge described this as a reflex in 1918 when he was experimenting on dogs. Bainbridge found that infusing blood or saline into the animal increased heart rate. This phenomenon occurred even if arterial blood pressure did not increase. He further observed that heart rate increased when venous pressure rose high enough to distend the right atrium, but denervation of the vagi to the heart eliminated these effects. Subsequent work demonstrated a stretch-induced increase in heart rate in isolated hearts or even the fully separated sinoatrial node (SAN). Thus, the positive chronotropic (from Χρόνος, Greek for 'time', and τρέπειν, Greek for 'to bend/turn') response of the heart to stretch must, at least in part, be accomplished by mechanisms located within the SAN. This led to the suggestion to refer to the response discovered by Bainbridge as an 'effect' rather than a 'reflex'. Mechanism of Action Increased blood volume results in increased venous return to the heart, which leads to increased firing of B-fibers. B-fibers send signals to the brain (the afferent pathway of the neural portion of the Bainbridge reflex), which then modulates both sympathetic and parasympathetic pathways to the SA node of the heart (the efferent pathway of the neural portion of the Bainbridge reflex), causing an increase in heart rate. Effects on cardiac contractility and stroke volume are insignificant. The Bainbridge reflex can be blocked by atropine and can be abolished by cutting the vagus nerve. The local response of sino-atrial node pacemaker cells to stretch involves stretch-activated ion channels, as was demonstrated by stretching single isolated pacemaker cells while recording th
https://en.wikipedia.org/wiki/Analyte-specific%20reagent
Analyte-specific reagents (ASRs) are a class of biological molecules which can be used to identify and measure the amount of an individual chemical substance in biological specimens. Regulatory definition The U.S. Food and Drug Administration (FDA) defines analyte specific reagents (ASRs) in 21 CFR 864.4020 as "antibodies, both polyclonal and monoclonal, specific receptor proteins, ligands, nucleic acid sequences, and similar reagents which, through specific binding or chemical reaction with substances in a specimen, are intended for use in a diagnostic application for identification and quantification of an individual chemical substance or ligand in biological specimens." In simple terms, an analyte-specific reagent is the active ingredient of an in-house test. External links Guidance for Industry and FDA Staff - Commercially Distributed Analyte Specific Reagents (ASRs): Frequently Asked Questions Code of Federal Regulations - Specimen Preparation Reagents (21CFR864.4020) Chemical tests Biomolecules
https://en.wikipedia.org/wiki/Twintron
In molecular biology, a twintron is an intron-within-intron excised by sequential splicing reactions. A twintron is presumably formed by the insertion of a mobile intron into an existing intron. Discovery Twintrons were discovered by Donald W. Copertino and Richard B. Hallick as a group II intron within another group II intron in the Euglena chloroplast genome. They found that splicing of both the internal and external introns occurs via lariat intermediates. Additionally, twintron splicing was found to proceed by a sequential pathway, the internal intron being removed prior to the excision of the external intron. Since the original discovery, there have been other reports of group III twintrons and group II/III twintrons in the chloroplast of Euglena gracilis. In 1993, a new type of complex twintron, composed of four individual group III introns, was characterized. The external intron was interrupted by an internal intron containing two additional introns. In 1995, scientists discovered the first non-Euglena twintron, in the cryptomonad alga Pyrenomonas salina. In 2004, several twintrons were discovered in Drosophila. Distribution The majority of these twintrons have been characterized within the Euglena chloroplast genome, but these elements have also been found in cryptomonad algae (Pyrenomonas salina), and group I intron based twintrons (a group I intron inserted within a group I intron) have been described in Didymium iridis. Since the discovery of the psbF twintron, several categories of twintrons have been characterized. A twintron can be simple (external intron interrupted by 1 internal intron) or complex (external intron interrupted by multiple internal introns). Most probably, the internal and external introns comprising the twintron element are from the same category: group I internal to group I, group II internal to group II, and group III internal to group III. Mixed twintrons (consisting of introns belonging to different categories) were characterized from the Eugl
https://en.wikipedia.org/wiki/Engineer%20to%20order
Engineer to order is a production approach characterized by: Engineering activities need to be added to product lead time. Upon receipt of a customer order, the order engineering requirements and specifications are not known in detail. There is a substantial amount of design and engineering analysis required. To speed up delivery time, concurrent engineering, integrated product teams, and lean product development methodologies are adopted. The critical path methodology is also essential. To speed up delivery time, many companies use a customization approach (in SAP terminology this is called Variant Configuration) in which most of the BOM components and routing operation elements can be created automatically based on the design inputs received during the quote/sales order stage. This approach speeds up the BOM and routing creation process, thereby helping ETO companies respond quickly to customer requirements. Engineer to order environments must employ a flexible, adaptive, demand-driven approach to the manufacturing process. It is usually the right solution when details on a customer order are not provided and engineering development must be added to product lead time. ETO is a technique that is leveraged to boost sales and improve margins for companies whose customers need solutions tailored to fit their own unique environment. It begins with selling product concepts that don't have fixed designs and are expected to result in a new, unique end product. This could be any product, from enterprise software applications to special aircraft to a pair of jeans. But the typical ETO environment usually deals with the design and build of unique custom-engineered complex machinery and industrial equipment - one in which there is heavy involvement of the following engineering disciplines: mechanical, electrical, mechatronics, software, manufacturing and systems engineering. The ETO company works with its customers to develop ne
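The rule-driven customization approach described above can be sketched as a small configurator. This is an illustrative sketch only, not SAP's Variant Configuration: the component names, parameters, and rules are all hypothetical.

```python
# Sketch of deriving BOM components automatically from design inputs
# captured at quote/sales-order stage. Rules and part numbers are
# invented for illustration.

def generate_bom(design_inputs):
    """Map design inputs to a list of (component, quantity) BOM lines."""
    bom = []
    if design_inputs["motor_kw"] > 50:
        bom.append(("FRAME-HD", 1))    # heavy-duty frame for large motors
    else:
        bom.append(("FRAME-STD", 1))   # standard frame otherwise
    # quantity-bearing component driven directly by a design parameter
    bom.append(("CABLE-M", design_inputs["cable_length_m"]))
    return bom

order = {"motor_kw": 75, "cable_length_m": 12}
print(generate_bom(order))
```

In a real ETO system such rules would live in a configuration engine rather than code, but the principle is the same: design inputs at order entry drive automatic BOM and routing creation.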
https://en.wikipedia.org/wiki/Nordic%20Mathematical%20Contest
The Nordic Mathematical Contest (NMC) is a mathematics competition for secondary school students from the five Nordic countries: Denmark, Finland, Iceland, Norway and Sweden. It takes place every year in March or April and serves the double purpose of being a regional secondary school level mathematics competition for the Nordic region and a step in the process of selecting the teams of the participating countries for the International Mathematical Olympiad (IMO) and the regional Baltic Way competition. Participation At most twenty participants from each country are appointed by the organisers of the national secondary school level mathematics competitions. They must either be eligible for the IMO or attend a secondary school. (The foreword of the reference states the eligibility requirements differently from the past and present regulations.) Problems The exam consists of four problems to be answered in four hours. Only writing and drawing tools are permitted. For each problem the contestant can get from zero to seven points. The problems are of the IMO type, harder than those of the national secondary school level competitions in mathematics of the Nordic countries but not as hard as those of the IMO. They are chosen by the organising committee of the host country of the year from proposals submitted by the national organising committees. The official web site of the NMC provides a complete collection in English, with solutions, of the problems from all the years, compiled by Matti Lehtinen. Selected versions of the problems in other Nordic languages are also available at the site. Organisation The NMC is run in a decentralised manner involving no travel by the contestants or any other personnel. The contestants write the exam in their own schools on the same day. The papers are then sent to a committee in the contestants' country, which marks them preliminarily. They are then forwarded with the preliminary marking to a committee in the host country of the year, who coo
https://en.wikipedia.org/wiki/Involutory%20matrix
In mathematics, an involutory matrix is a square matrix that is its own inverse. That is, multiplication by the matrix A is an involution if and only if A² = I, where I is the n × n identity matrix. Involutory matrices are all square roots of the identity matrix. This is simply a consequence of the fact that any invertible matrix multiplied by its inverse is the identity. Examples The 2 × 2 real matrix [[a, b], [c, −a]] is involutory provided that a² + bc = 1. The Pauli matrices in M(2, C) are involutory. One of the three classes of elementary matrix is involutory, namely the row-interchange elementary matrix. A special case of another class of elementary matrix, that which represents multiplication of a row or column by −1, is also involutory; it is in fact a trivial example of a signature matrix, all of which are involutory. Some simple examples of involutory matrices are: the 3 × 3 identity matrix I (which is trivially involutory); the 3 × 3 identity matrix with a pair of interchanged rows, R; and a signature matrix S. Any block-diagonal matrix constructed from involutory matrices will also be involutory, since squaring a block-diagonal matrix squares each block independently. Symmetry An involutory matrix which is also symmetric is an orthogonal matrix, and thus represents an isometry (a linear transformation which preserves Euclidean distance). Conversely, every orthogonal involutory matrix is symmetric. As a special case of this, every reflection and 180° rotation matrix is involutory. Properties An involution is non-defective, and each eigenvalue equals ±1, so an involution diagonalizes to a signature matrix. A normal involution is Hermitian (complex) or symmetric (real) and also unitary (complex) or orthogonal (real). The determinant of an involutory matrix over any field is ±1. If A is an n × n matrix, then A is involutory if and only if P = (I + A)/2 is idempotent. This relation gives a bijection between involutory matrices and idempotent matrices. Similarly, A
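The examples and the idempotence relation above are easy to check numerically. The following sketch (using NumPy) verifies that the identity matrix, a row-interchange matrix, and a signature matrix are involutory, and that P = (I + A)/2 is idempotent for each of them.

```python
import numpy as np

# Numerical check of the properties described above (a sketch, not a proof):
# each A below satisfies A @ A = I, and P = (I + A)/2 satisfies P @ P = P.

I = np.eye(3)
R = I[[1, 0, 2]]                  # identity with rows 0 and 1 interchanged
S = np.diag([1.0, -1.0, -1.0])    # a signature matrix (diagonal entries ±1)

for A in (I, R, S):
    assert np.allclose(A @ A, I)  # A is its own inverse
    P = (I + A) / 2
    assert np.allclose(P @ P, P)  # P is idempotent

print("all checks passed")
```

The idempotence of P follows by expanding P² = (I + 2A + A²)/4 = (2I + 2A)/4 = P whenever A² = I, which is exactly the bijection the text describes.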
https://en.wikipedia.org/wiki/Software%20evolution
Software evolution is the continual development of a piece of software after its initial release to address changing stakeholder and/or market requirements. Software evolution is important because organizations invest large amounts of money in their software and are completely dependent on it. Software evolution helps software adapt to changing business requirements, fix defects, and integrate with other changing systems in a software system environment. General introduction Fred Brooks, in his key book The Mythical Man-Month, states that over 90% of the costs of a typical system arise in the maintenance phase, and that any successful piece of software will inevitably be maintained. In fact, Agile methods stem from maintenance-like activities in and around web-based technologies, where the bulk of the capability comes from frameworks and standards. Software maintenance addresses bug fixes and minor enhancements, while software evolution focuses on adaptation and migration. Software technologies will continue to develop. These changes will require new laws and theories to be created and justified. Some models would also require additional aspects in developing future programs. Innovations and improvements also give rise to unexpected forms of software development. Maintenance issues will also likely change, to adapt to the evolution of future software. Software processes are themselves evolving; after going through learning and refinement, they continually improve in efficiency and effectiveness. Basic concepts The need for software evolution comes from the fact that no one is able to predict a priori how user requirements will evolve. In other words, the existing systems are never complete and continue to evolve. As they evolve, the complexity of the systems will grow unless there is a better solution available to solve these issues. The main objectives of software evolution are ensuring functional relevance, reliability and flexibility
https://en.wikipedia.org/wiki/Essentially%20unique
In mathematics, the term essentially unique is used to describe a weaker form of uniqueness, where an object satisfying a property is "unique" only in the sense that all objects satisfying the property are equivalent to each other. The notion of essential uniqueness presupposes some form of "sameness", which is often formalized using an equivalence relation. A related notion is a universal property, where an object is not only essentially unique, but unique up to a unique isomorphism (meaning that it has trivial automorphism group). In general there can be more than one isomorphism between examples of an essentially unique object. Examples Set theory At the most basic level, there is an essentially unique set of any given cardinality, whether one labels the elements {1, 2, 3} or {a, b, c}. In this case, the non-uniqueness of the isomorphism (e.g., matching 1 to a or 1 to b) is reflected in the symmetric group. On the other hand, there is an essentially unique ordered set of any given finite cardinality: if one writes {1 < 2 < 3} and {a < b < c}, then the only order-preserving isomorphism is the one which maps 1 to a, 2 to b, and 3 to c. Number theory The fundamental theorem of arithmetic establishes that the factorization of any positive integer into prime numbers is essentially unique, i.e., unique up to the ordering of the prime factors. Group theory In the context of classification of groups, there is an essentially unique group containing exactly 2 elements. Similarly, there is also an essentially unique group containing exactly 3 elements: the cyclic group of order three. In fact, regardless of how one chooses to write the three elements and denote the group operation, all such groups can be shown to be isomorphic to each other, and hence are "the same". On the other hand, there does not exist an essentially unique group with exactly 4 elements, as there are in this case two non-isomorphic groups in total: the cyclic group of order 4 and the Klein four-group. Measure theory There is an essentially
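The order-4 example above can be made concrete: the cyclic group of order 4 and the Klein four-group are distinguished by their multisets of element orders, an isomorphism invariant. A small sketch (modelling the cyclic group as addition mod 4 and the Klein four-group as bitwise XOR on {0, 1, 2, 3}):

```python
# Compute the sorted list of element orders of a finite group given its
# elements, binary operation, and identity. Isomorphic groups must have
# identical order profiles, so differing profiles prove non-isomorphism.

def element_orders(elements, op, identity):
    orders = []
    for g in elements:
        x, n = g, 1
        while x != identity:
            x, n = op(x, g), n + 1
        orders.append(n)
    return sorted(orders)

z4 = element_orders(range(4), lambda a, b: (a + b) % 4, 0)  # cyclic group
klein = element_orders(range(4), lambda a, b: a ^ b, 0)     # Klein four-group
print(z4, klein)   # [1, 2, 4, 4] vs [1, 2, 2, 2]
assert z4 != klein  # different profiles => not isomorphic
```

By contrast, running the same check on any two groups of order 2 or 3 yields identical profiles, consistent with their essential uniqueness.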
https://en.wikipedia.org/wiki/IS-DOS
iS-DOS is a disk operating system (DOS) for Soviet/Russian ZX Spectrum clones. iS-DOS was developed in 1990 or 1991 by Iskra Soft in Leningrad, Soviet Union, now Saint Petersburg, Russia. It handles floppy disks (double-sided, double-density), hard disk drives, and CD-ROMs. The maximum iS-DOS partition size on a hard disk is 16 MiB. Unlike TR-DOS, iS-DOS is resident in random-access memory (RAM), and thus reduces the amount of memory available for user programs. Versions iS-DOS Chic is a version developed for the Nemo KAY. It provides more memory for user programs. TASiS, based on iS-DOS Chic, is a modern version developed by NedoPC for the ATM Turbo 2+ in 2006; it supports the enhanced text mode and larger memory of that model. Distributors Slot Ltd. (Moscow) distributed iS-DOS in Moscow and the regions in the 1990s, and issued printed books. Nemo (Saint Petersburg) distributed iS-DOS in the ex-USSR until 2004, and issued the Open Letters electronic periodical. iS-DOS Support Team (Saratov Oblast) distributes iS-DOS in the ex-USSR and issues the iS-Files electronic periodical. NedoPC distributes TASiS as freeware. Books Kartavtsev I.Yu., Samylovsky S.V., Krishtopa S.V. "iS-DOS. User's Guide" (in Russian). IskraSoft, Slot, St. Petersburg, Moscow, 1993, 128 pp. Krishtopa S.V. "The IS-DOS Operating System for the ZX Spectrum. Programmer's Guide" (in Russian). IskraSoft, St. Petersburg; Slot, Moscow, 1994, 84 pp. See also TR-DOS CP/M DISCiPLE MB02 ESX-DOS DNA OS Kay 1024 ATM Turbo 2+ Scorpion ZS-256
https://en.wikipedia.org/wiki/Open%20Grid%20Forum
The Open Grid Forum (OGF) is a community of users, developers, and vendors for standardization of grid computing. It was formed in 2006 in a merger of the Global Grid Forum and the Enterprise Grid Alliance. The OGF models its process on the Internet Engineering Task Force (IETF), and produces documents with many acronyms such as OGSA, OGSI, and JSDL. Organization The OGF has two principal functions plus an administrative function: being the standards organization for grid computing, and building communities within the overall grid community (including extending it within both academia and industry). Each of these function areas is then divided into groups of three types: working groups with a generally tightly defined role (usually producing a standard); research groups with a looser role, bringing together people to discuss developments within their field, generate use cases and spawn working groups; and community groups (restricted to community functions). Three meetings are organized per year, divided (approximately evenly, after averaging over a number of years) between North America, Europe and East Asia. Many working groups organize face-to-face meetings in the interim. History The concept of a forum to bring together developers, practitioners, and users of distributed computing (known as grid computing at the time) was discussed at a "Birds of a Feather" session in November 1998 at the SC98 supercomputing conference. Based on the response to the idea during this BOF, Ian Foster and Bill Johnston convened the first Grid Forum meeting at NASA Ames Research Center in June 1999, drawing roughly 100 people, mostly from the US. A group of organizers nominated Charlie Catlett (from Argonne National Laboratory and the University of Chicago) to serve as the initial chair, confirmed via a plenary vote held at the second Grid Forum meeting in Chicago in October 1999. With advice and assistance from the Internet Engineering Task Force (IETF), OGF established
https://en.wikipedia.org/wiki/TR-DOS
TR-DOS is a disk operating system for the ZX Spectrum with the Beta Disc and Beta 128 disc interfaces. TR-DOS and the Beta Disc were developed by Technology Research Ltd (UK) in 1984. A clone of this interface is also used in the Russian Pentagon and Scorpion machines. It became a standard, and most disk releases for the ZX Spectrum, especially of modern programs, are made for TR-DOS as opposed to other disk systems. The latest official firmware version is 5.03 (1986). Unofficial versions with various enhancements and bug-fixes have been released since 1990, the latest being 6.10E (2006). TR-DOS handles SS/DS, SD/DD floppy disks. All modern versions support a RAM disk, and some versions support hard disks. Current emulators support TR-DOS disk images in the .TRD and .SCL formats. Commands The following commands are supported by TR-DOS V4: 40, CAT, CLOSE, COPY, ERASE, FORMAT, GO TO, INPUT, LOAD, MOVE, NEW, OPEN, PRINT, RETURN, RUN, SAVE. Utility programs include: FILER, TAPECOPY (replaces the BACKUP, COPY and SCOPY utility programs of TR-DOS V3). See also iS-DOS CP/M DISCiPLE
https://en.wikipedia.org/wiki/Clos%20network
In the field of telecommunications, a Clos network is a kind of multistage circuit-switching network which represents a theoretical idealization of practical, multistage switching systems. It was invented by Edson Erwin in 1938 and first formalized by the American engineer Charles Clos in 1952. By adding stages, a Clos network reduces the number of crosspoints required to compose a large crossbar switch. A Clos network topology (diagrammed below) is parameterized by three integers n, m, and r: n represents the number of sources which feed into each of r ingress stage crossbar switches; each ingress stage crossbar switch has m outlets; and there are m middle stage crossbar switches. Circuit switching arranges a dedicated communications path for a connection between endpoints for the duration of the connection. This sacrifices total bandwidth available if the dedicated connections are poorly utilized, but makes the connection and bandwidth more predictable, and only introduces control overhead when the connections are initiated, rather than with every packet handled, as in modern packet-switched networks. When the Clos network was first devised, the number of crosspoints was a good approximation of the total cost of the switching system. While this was important for electromechanical crossbars, it became less relevant with the advent of VLSI, wherein the interconnects could be implemented either directly in silicon, or within a relatively small cluster of boards. Upon the advent of complex data centers, with huge interconnect structures, each based on optical fiber links, Clos networks regained importance. A subtype of Clos network, the Beneš network, has also found recent application in machine learning. Topology Clos networks have three stages: the ingress stage, the middle stage, and the egress stage. Each stage is made up of a number of crossbar switches (see diagram below), often just called crossbars. The network implements an r-way perfect shuffle bet
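The crosspoint-saving argument above can be made concrete. The sketch below counts crosspoints for a three-stage Clos network with parameters (n, m, r) and compares it with a single (r·n) × (r·n) crossbar; it also encodes Clos's classic condition m ≥ 2n − 1 for strict-sense nonblocking operation (the example values are illustrative).

```python
# Crosspoint count of a three-stage Clos network:
#   ingress: r switches of size n x m
#   middle:  m switches of size r x r
#   egress:  r switches of size m x n

def clos_crosspoints(n, m, r):
    return r * (n * m) + m * (r * r) + r * (m * n)

def strictly_nonblocking(n, m):
    """Clos's condition for strict-sense nonblocking: m >= 2n - 1."""
    return m >= 2 * n - 1

n, r = 8, 8
m = 2 * n - 1                  # minimal m that is strict-sense nonblocking
crossbar = (n * r) ** 2        # equivalent single 64 x 64 crossbar
print(clos_crosspoints(n, m, r), "vs", crossbar)   # 2880 vs 4096
```

Even at this modest size the Clos network needs 2880 crosspoints against 4096 for the crossbar, and the advantage grows rapidly with the number of ports.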
https://en.wikipedia.org/wiki/Wine%20for%20the%20Confused
Wine for the Confused is a documentary hosted by John Cleese. It is a light-hearted introduction to wine for novices. Cleese guides viewers through the basics of wine types and grape varieties, wine making, wine tasting and terminology, and buying and storing wines, through direct narrative and interviews with wine makers and wine sellers. The film's duration is 92 minutes, and it includes visits to wineries in California. The film concludes with a large group conducting a blind wine tasting. One result was that most tasters could not distinguish between red wine and white wine. Another was that most tasters rated an inexpensive wine equal in taste to an expensive prestige wine, and both of these outscored the rest of the mid-priced and high-priced wines in the blind test.
https://en.wikipedia.org/wiki/Charles%20Doolittle%20Walcott%20Medal
The Charles Doolittle Walcott Medal is an award presented by the National Academy of Sciences every five years to promote research and study in the fields of Precambrian and Cambrian life and history. The medal was established and endowed in 1934 by the Walcott Fund, a gift of Mary Vaux Walcott, in honor of paleontologist Charles Doolittle Walcott (1850-1927). The medal was sculpted by Laura Gardin Fraser. Since 2008 the award has been linked to the Stanley Miller Medal, and the two medals are now presented alternately, known collectively as the NAS Award in Early Earth and Life Sciences. Each medal is supplemented by a $10,000 award. Medalists Source: NAS See also List of paleontology awards
https://en.wikipedia.org/wiki/William%20A.%20Stein
William Arthur Stein (born February 21, 1974 in Santa Barbara, California) is a software developer and previously a professor of mathematics at the University of Washington. He is the lead developer of SageMath and founder of CoCalc. Stein does computational and theoretical research into the problem of computing with modular forms and the Birch and Swinnerton-Dyer conjecture. He is considered "a leading expert in the field of computational arithmetic".
https://en.wikipedia.org/wiki/Food%20packaging
Food packaging is a packaging system specifically designed for food and represents one of the most important aspects of the processes involved in the food industry, as it provides protection from chemical, biological and physical alterations. The main goal of food packaging is to provide a practical means of protecting and delivering food goods at a reasonable cost while meeting the needs and expectations of both consumers and industry. Additionally, current trends like sustainability, environmental impact reduction, and shelf-life extension have gradually become among the most important considerations in designing a packaging system. History Packaging of food products has seen a vast transformation in technology usage and application from the Stone Age to the Industrial Revolution: 7000 BC: The adoption of pottery and glass, which saw industrialization around 1500 BC. 1700s: The first manufacturing production of tinplate was introduced in England (1699) and in France (1720). Afterwards, the Dutch navy started to use such packaging to prolong the preservation of food products. 1804: Nicolas Appert, in response to inquiries into extending the shelf life of food for the French Army, employed glass bottles along with thermal food treatment. Glass has since been replaced by metal cans in this application. However, there is still an ongoing debate about who first introduced the use of tinplate as food packaging. 1870: The use of paperboard was launched and corrugated materials were patented. 1880s: First cereal packaged in a folding box, by Quaker Oats. 1890s: The crown cap for glass bottles was patented by William Painter. 1960s: Development of the two-piece drawn and wall-ironed metal can in the US, along with the ring-pull opener and the Tetra Brik Aseptic carton package. 1970s: The barcode system was introduced in the retail and manufacturing industry. PET plastic blow-mold bottle technology, which is widely used in the beverage industry, was introduced. 1990s: The app
https://en.wikipedia.org/wiki/Cray%20XT4
The Cray XT4 (codenamed Hood during development) is an updated version of the Cray XT3 supercomputer. It was released on November 18, 2006. It includes an updated version of the SeaStar interconnect router called SeaStar2, processor sockets for Socket AM2 Opteron processors, and 240-pin unbuffered DDR2 memory. The XT4 also includes support for FPGA coprocessors that plug into riser cards in the Service and IO blades. The interconnect, cabinet, system software and programming environment remain unchanged from the Cray XT3. It was superseded in 2007 by the Cray XT5. External links News release regarding Hood "Cray Introduces XMT and XT4 Supercomputers" on HPCwire Cray XT4 at top500.org
https://en.wikipedia.org/wiki/Cray%20XMT
Cray XMT (Cray eXtreme MultiThreading, codenamed Eldorado) is a scalable multithreaded shared-memory supercomputer architecture by Cray, based on the third generation of the Tera MTA architecture and targeted at large graph problems (e.g. semantic databases, big data, pattern matching). Presented in 2005, it supersedes the earlier, unsuccessful Cray MTA-2. It uses Threadstorm3 CPUs inside Cray XT3 blades. Designed to make use of commodity parts and existing subsystems from other commercial systems, it alleviated the shortcomings of the Cray MTA-2's high cost of fully custom manufacture and support. It brought various substantial improvements over the Cray MTA-2, most notably nearly tripling the peak performance, and vastly increased the maximum CPU count to 8,192 and maximum memory to 128 TB, with a data TLB covering a maximum of 512 TB. Cray XMT uses a scrambled content-addressable memory model on DDR1 ECC modules to implicitly load-balance memory access across the whole shared global address space of the system. Use of 4 additional Extended Memory Semantics bits (full/empty, forwarding and 2 trap bits) per 64-bit memory word enables lightweight, fine-grained synchronization on all memory. There are no hardware interrupts, and hardware threads are allocated by an instruction, not by the OS. Front-end (login, I/O, and other service nodes, utilizing AMD Opteron processors and running SLES Linux) and back-end (compute nodes, utilizing Threadstorm3 processors and running MTK, a simple BSD Unix-based microkernel) communicate through the LUC (Lightweight User Communication) interface, an RPC-style bidirectional client/server interface. Threadstorm3 Threadstorm3 (referred to as "MT processor" and Threadstorm before XMT2) is a 64-bit single-core VLIW barrel processor (compatible with the 940-pin Socket 940 used by AMD Opteron processors) with 128 hardware streams, onto each of which a software thread can be mapped (effectively creating 128 hardware threads per CPU), running at 500 MHz and using the MTA instru
https://en.wikipedia.org/wiki/Stagnation%20enthalpy
In thermodynamics and fluid mechanics, the stagnation enthalpy of a fluid is the static enthalpy of the fluid at a stagnation point. The stagnation enthalpy is also called total enthalpy. At a point where the flow does not stagnate, it corresponds to the static enthalpy of the fluid at that point assuming it was brought to rest from velocity isentropically. That means all the kinetic energy was converted to internal energy without losses and is added to the local static enthalpy. When the potential energy of the fluid is negligible, the mass-specific stagnation enthalpy represents the total energy of a flowing fluid stream per unit mass. Stagnation enthalpy, or total enthalpy, is the sum of the static enthalpy (associated with the temperature and static pressure at that point) plus the enthalpy associated with the dynamic pressure, or velocity. This can be expressed in a formula in various ways. Often it is expressed in specific quantities, where specific means mass-specific, to get an intensive quantity: h0 = h + v²/2 where: h0 = mass-specific total enthalpy, in [J/kg]; h = mass-specific static enthalpy, in [J/kg]; v = fluid velocity at the point of interest, in [m/s]; v²/2 = mass-specific kinetic energy, in [J/kg]. The volume-specific version of this equation (in units of energy per volume, [J/m³]) is obtained by multiplying the equation with the fluid density ρ: ρh0 = ρh + ρv²/2 where: ρh0 = volume-specific total enthalpy, in [J/m³]; ρh = volume-specific static enthalpy, in [J/m³]; v = fluid velocity at the point of interest, in [m/s]; ρ = fluid density at the point of interest, in [kg/m³]; ρv²/2 = volume-specific kinetic energy, in [J/m³]. The non-specific version of this equation, that means with extensive quantities, is: H0 = H + m·v²/2 where: H0 = total enthalpy, in [J]; H = static enthalpy, in [J]; m = fluid mass, in [kg]; v = fluid velocity at the point of interest, in [m/s]; m·v²/2 = kinetic energy, in [J]. The suffix '0' usually denotes the stagnation condition and is used as such here. Enthalpy is the energy associated with the temperature plus the ener
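The mass-specific relation h0 = h + v²/2 is straightforward to evaluate numerically. The sketch below uses illustrative values for air (the cp and temperature are assumptions, not taken from the text).

```python
# Mass-specific stagnation (total) enthalpy: static enthalpy plus the
# kinetic-energy term v^2 / 2, all in J/kg.

def stagnation_enthalpy(h_static, velocity):
    """Return mass-specific total enthalpy in J/kg."""
    return h_static + velocity ** 2 / 2

cp = 1005.0                          # J/(kg*K), approx. for air (assumed)
h = cp * 300.0                       # static enthalpy of air at ~300 K
h0 = stagnation_enthalpy(h, 100.0)   # flow at 100 m/s
print(h0 - h)                        # kinetic contribution: 5000.0 J/kg
```

At 100 m/s the kinetic term adds 5 kJ/kg, a small but non-negligible fraction of the static enthalpy, which is why the distinction between static and total enthalpy matters in compressible flow.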
https://en.wikipedia.org/wiki/Frond%20dimorphism
Frond dimorphism refers to a difference in ferns between the fertile and sterile fronds. Since ferns, unlike flowering plants, bear spores on the leaf blade itself, this may affect the form of the frond itself. In some species of ferns, there is virtually no difference between the fertile and sterile fronds, such as in the genus Dryopteris, other than the mere presence of the sori, or fruit-dots, on the back of the fronds. Some other species, such as Polystichum acrostichoides (Christmas fern), or some ferns of the genus Osmunda, feature dimorphism on a portion of the frond only. Others, such as some species of Blechnum and Woodwardia, have fertile fronds that are markedly taller than the sterile. Still others, such as Osmunda cinnamomea (Cinnamon fern), or plants of the family Onocleaceae, have fertile fronds that are completely different from the sterile. Only members of the Onocleaceae and Blechnaceae exhibit a propensity towards dimorphy, while no member of the Athyriaceae is strongly dimorphic, and only some representatives of the Thelypteridaceae have evolved the condition, suggesting a possible close relationship between Onocleaceae and Blechnaceae. Its importance has been disputed: Copeland, for example, considered it taxonomically important, whereas Tryon and Tryon, and Kramer, all stated that its importance can only be judged in relation to other characteristics.
https://en.wikipedia.org/wiki/Asus%20Media%20Bus
The Asus Media Bus is a proprietary computer bus developed by Asus, which was used on some Socket 7 motherboards in the mid-1990s. It is a combined PCI and ISA slot. It was developed to provide a cost-efficient solution for a complete multimedia system. Using Media Bus cards for building a system reduced slot requirements and compatibility problems. Expansion cards supporting this interface were only manufactured by Asus for a very limited time. This bus is now obsolete.

While similar to PCI-X in appearance, the extension contains 4 additional pins (2 on each side) for a total of 68. The divider between the PCI slot and the Media Bus extension is too wide to support a properly-keyed PCI-X card. Despite the very short lifespan, there were at least two revisions of the Asus Media Bus: revision 1.2 and 2.0. The difference between them is that the latter revision has 72 pins instead of 68, so it does not have to use any PCI slot signals reserved for PCI cards, and the PCI slot shared with the Media Bus slot becomes standards-compliant. The gap between the PCI slot and the Media Bus extension is 0.32 in. for revision 1.2 (pictured) and 0.4 in. for revision 2.0, so expansion cards designed for the two revisions are mutually incompatible.

Expansion cards designed for this interface included primarily combined audio and video cards, but also some combined SCSI and audio cards. The (possibly incomplete) list of Media Bus expansion cards presented here (all cards manufactured by Asus):

Media Bus rev. 1.2 cards
PCI-AS7870 – Fast/Wide SCSI and audio card (Adaptec AS7870 and Vibra16s (with separate Yamaha yfm262-m))
PCI-AV264CT – audio and video card (ATI Mach64 PCI 1 MiB (up to 2 MiB) and Vibra16s (with separate Yamaha yfm262-m))
PCI-AV868 (pictured) – audio and video card (S3 Vision868 1 MiB and Vibra16s (with separate Yamaha yfm262-m))

Media Bus rev. 1.2 motherboards
Asus P/I-P55SP4
Asus P/I-P55TP4XE

Media Bus rev. 2.0 cards
PCI-AS2940UW – Ultra Fast/Wide SCSI and audio card
PCI-AV264C
https://en.wikipedia.org/wiki/Quantum%20finite%20automaton
In quantum computing, quantum finite automata (QFA) or quantum state machines are a quantum analog of probabilistic automata or a Markov decision process. They provide a mathematical abstraction of real-world quantum computers. Several types of automata may be defined, including measure-once and measure-many automata. Quantum finite automata can also be understood as the quantization of subshifts of finite type, or as a quantization of Markov chains. QFAs are, in turn, special cases of geometric finite automata or topological finite automata. The automata work by receiving a finite-length string of letters from a finite alphabet , and assigning to each such string a probability indicating the probability of the automaton being in an accept state; that is, indicating whether the automaton accepted or rejected the string. The languages accepted by QFAs are not the regular languages of deterministic finite automata, nor are they the stochastic languages of probabilistic finite automata. Study of these quantum languages remains an active area of research. Informal description There is a simple, intuitive way of understanding quantum finite automata. One begins with a graph-theoretic interpretation of deterministic finite automata (DFA). A DFA can be represented as a directed graph, with states as nodes in the graph, and arrows representing state transitions. Each arrow is labelled with a possible input symbol, so that, given a specific state and an input symbol, the arrow points at the next state. One way of representing such a graph is by means of a set of adjacency matrices, with one matrix for each input symbol. In this case, the list of possible DFA states is written as a column vector. For a given input symbol, the adjacency matrix indicates how any given state (row in the state vector) will transition to the next state; a state transition is given by matrix multiplication. One needs a distinct adjacency matrix for each possible input symbol, since each
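The adjacency-matrix view of a DFA described above can be sketched concretely. The two-state automaton and alphabet below are invented for illustration: a state is a one-hot column vector, each input symbol has a 0-1 transition matrix, and reading a string is repeated matrix multiplication.

```python
import numpy as np

# Hypothetical 2-state DFA over alphabet {a, b}:
# state 0 --a--> state 1, state 0 --b--> state 0,
# state 1 --a--> state 1, state 1 --b--> state 0.
# Column j of each matrix is the one-hot successor of state j.
M = {
    "a": np.array([[0, 0],
                   [1, 1]]),
    "b": np.array([[1, 1],
                   [0, 0]]),
}

def run(word, start=0, n_states=2):
    state = np.zeros(n_states, dtype=int)
    state[start] = 1              # one-hot initial state
    for symbol in word:
        state = M[symbol] @ state  # one transition per input symbol
    return state

print(run("aab"))  # ends in state 0 -> [1 0]
```

A probabilistic automaton replaces the 0-1 columns with probability distributions (columns summing to one), and a quantum automaton replaces the matrices with unitaries acting on complex amplitudes, which is the quantization step the article goes on to describe.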
https://en.wikipedia.org/wiki/Task%20Force%2072%20%28model%20boat%20builders%29
Task Force 72 is an international association of radio-controlled model boat builders, all building in the common scale (ratio) of 1:72 (1 inch in 1:72 equals 72 inches, or 6 feet, in real life).

History

Task Force 72 originated in Australia in 1994, when a number of individuals building radio-controlled models came together to form a voluntary association. The name derives from the military term for a group of warships operating to a common purpose, a task force, and the scale of the ship models being built (1:72). In the years since, the association has grown from a small number of model builders to several hundred currently active and former members located in Australia, the United States, United Kingdom, Canada, New Zealand and several other countries. At this time the largest grouping of members is located within Australia, although it is hoped to grow membership internationally.

Annual Regatta

The Association holds an annual regatta, normally at Wentworth Falls Lake, located in the Blue Mountains, west of Sydney in New South Wales, Australia. The Task Force 72 Annual Regatta is usually held in November or December of each calendar year. The 2018 Annual Regatta will be held on the weekend of Saturday 25 (10AM – 8PM) and Sunday 26 November (10AM – 4PM) at Naracoorte in South Australia. Two major prizes are awarded during the Regatta. The Bravo Zulu Award, named for the naval flag hoist signifying 'well done', is awarded by a reviewing officer, usually but not always a serving member of the Royal Australian Navy. The Wentworth Shield is awarded to the model judged best by the members of Task Force 72 attending the Regatta. Several minor prizes are usually presented, including: Best Warship; Best Non-warship; The Newbie Award for the best first model by a member; and The Rob Sullivan Engineering Award for the most innovative engineering incorporated into a model. Task Force 72 hopes that the Association's membership in other countries
https://en.wikipedia.org/wiki/Singly%20fed%20electric%20machine
Singly fed electric machine is a broad term which covers ordinary electric motors and electric generators. Such machines have only one external connection to the windings, and thus are said to be singly fed. See also Doubly fed electric machine Rotary converter
https://en.wikipedia.org/wiki/Dimensional%20metrology
Dimensional metrology, also known as industrial metrology, is the application of metrology for quantifying the physical size, form (shape), characteristics, and relational distance from any given feature.

History of metrology

Standardized measurements are essential to technological advancement, and early measurement tools have been found dating back to the dawn of human civilization. Early Mesopotamian and Egyptian metrologists created a set of measurement standards based on body parts known as anthropic units. These ancient systems of measurement utilized fingers, palms, hands, feet, and paces as intervals. Carpenters and surveyors were some of the first dimensional inspectors, and many specialized craftsmen's units, such as the remen, were worked into a system of unit fractions that allowed for calculations utilizing analytic geometry. Later agricultural measures included feet, yards, paces, cubits, fathoms, rods, cords, perches, stadia, miles and degrees of the Earth's circumference, many of which are still in use.

Early Measurement Tools & Standardization

Early Egyptian rulers were incremented in units of fingers, palms, and feet based on standardized inscription grids. These grids outlined the standards of measurement as canons of proportion, and were made commensurate with Mesopotamian standards based on fingers, hands, and feet. In this system, four palms or three hands measured one foot; ten hands equaled one meter. These standards were used to measure and define property, and were regulated by law for several purposes, such as taxation and infrastructure. Such standards, used for buildings and fields, were adopted by the Greeks, Romans, and Persians as legal standards and became the basis of European standards of measure. They were also used to relate length to area with units such as the khet, setat and aroura; area to volume with units such as the artaba; and space to time with units such as the Egyptian minute of march and the itrw, which recorded an hour's travel on a rive
https://en.wikipedia.org/wiki/Subnormal%20operator
In mathematics, especially operator theory, subnormal operators are bounded operators on a Hilbert space defined by weakening the requirements for normal operators. Some examples of subnormal operators are isometries and Toeplitz operators with analytic symbols.

Definition

Let H be a Hilbert space. A bounded operator A on H is said to be subnormal if A has a normal extension. In other words, A is subnormal if there exists a Hilbert space K such that H can be embedded in K and there exists a normal operator N on K of the form

N = [ A  B ]
    [ 0  C ]

for some bounded operators B : H⊥ → H and C : H⊥ → H⊥, where H⊥ denotes the orthogonal complement of H in K.

Normality, quasinormality, and subnormality

Normal operators

Every normal operator is subnormal by definition, but the converse is not true in general. A simple class of examples can be obtained by weakening the properties of unitary operators. A unitary operator is an isometry with dense range. Consider now an isometry A whose range is not necessarily dense. A concrete example of such is the unilateral shift, which is not normal. But A is subnormal, and this can be shown explicitly. Define an operator U on H ⊕ H by

U = [ A  I − AA* ]
    [ 0     A*   ]

Direct calculation shows that U is unitary, therefore a normal extension of A. The operator U is called the unitary dilation of the isometry A.

Quasinormal operators

An operator A is said to be quasinormal if A commutes with A*A. A normal operator is thus quasinormal; the converse is not true. A counterexample is given, as above, by the unilateral shift. Therefore, the family of normal operators is a proper subset of both the quasinormal and the subnormal operators. A natural question is how the quasinormal and subnormal operators are related. We will show that a quasinormal operator is necessarily subnormal but not vice versa. Thus the normal operators are a proper subfamily of the quasinormal operators, which in turn are contained in the subnormal operators. To argue the claim that a quasinormal operator is subnormal, recall the following property of quasinormal operators: Fact: A bounded operator A is quasinormal if
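The "direct calculation" showing that U is unitary can be sketched using only the isometry identity A*A = I (which also makes AA* an orthogonal projection); this is a worked check, not part of the source text:

```latex
U^{*}U =
\begin{pmatrix} A^{*} & 0 \\ I - AA^{*} & A \end{pmatrix}
\begin{pmatrix} A & I - AA^{*} \\ 0 & A^{*} \end{pmatrix}
=
\begin{pmatrix} A^{*}A & A^{*}(I - AA^{*}) \\ (I - AA^{*})A & (I - AA^{*})^{2} + AA^{*} \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix},
```

since A*A = I gives A*(I − AA*) = A* − A* = 0 and (I − AA*)A = A − A = 0, while (AA*)² = A(A*A)A* = AA* gives (I − AA*)² = I − AA*. An analogous block computation gives UU* = I, so U is unitary, and U restricted to H ⊕ {0} acts as A, so it is indeed a normal extension.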
https://en.wikipedia.org/wiki/Decapentaplegic
Decapentaplegic (Dpp) is a key morphogen involved in the development of the fruit fly Drosophila melanogaster and is the first validated secreted morphogen. It is known to be necessary for the correct patterning and development of the early Drosophila embryo and the fifteen imaginal discs, which are tissues that will become limbs and other organs and structures in the adult fly. It has also been suggested that Dpp plays a role in regulating the growth and size of tissues. Flies with mutations in decapentaplegic fail to form these structures correctly, hence the name (decapenta-, fifteen, -plegic, paralysis). Dpp is the Drosophila homolog of the vertebrate bone morphogenetic proteins (BMPs), which are members of the TGF-β superfamily, a class of proteins that are often associated with their own specific signaling pathway. Studies of Dpp in Drosophila have led to greater understanding of the function and importance of their homologs in vertebrates like humans. Function in Drosophila Dpp is a classic morphogen, which means that it is present in a spatial concentration gradient in the tissues where it is found, and its presence as a gradient gives it functional meaning in how it affects development. The most studied tissues in which Dpp is found are the early embryo and the imaginal wing discs, which later form the wings of the fly. During embryonic development, Dpp is uniformly expressed at the dorsal side of the embryo, establishing a sharp concentration gradient. In the imaginal discs, Dpp is strongly expressed in a narrow stripe of cells down the middle of the disc where the tissue marks the border between the anterior and posterior sides. Dpp diffuses from this stripe towards the edges of the tissue, forming a gradient as expected of a morphogen. However, although cells in the Dpp domain in the embryo do not proliferate, cells in the imaginal wing disc proliferate heavily, causing tissue growth. Although gradient formation in the early embryo is well understood,
https://en.wikipedia.org/wiki/OpenNMS
OpenNMS is a free and open-source enterprise grade network monitoring and network management platform. It is developed and supported by a community of users and developers and by the OpenNMS Group, offering commercial services, training and support. The goal is for OpenNMS to be a truly distributed, scalable management application platform for all aspects of the FCAPS network management model while remaining 100% free and open source. Currently the focus is on Fault and Performance Management. All code associated with the project is available under the Affero General Public License. The OpenNMS Project is maintained by The Order of the Green Polo. History The OpenNMS Project was started in July, 1999 by Steve Giles, Brian Weaver and Luke Rindfuss and their company PlatformWorks. It was registered as project 4141 on SourceForge in March 2000. On September 28, 2000, PlatformWorks was acquired by Atipa, a Kansas City-based competitor to VA Linux Systems. In July 2001, Atipa changed its name to Oculan. In September 2002, Oculan decided to stop supporting the OpenNMS project. Tarus Balog, then an Oculan employee, left the company to continue to focus on the project. In September 2004, The OpenNMS Group was started by Balog, Matt Brozowski and David Hustace to provide a commercial services and support business around the project. Shortly after that, The Order of the Green Polo (OGP) was founded to manage the OpenNMS Project itself. While many members of the OGP are also employees of The OpenNMS Group, it remains a separate organization. Platform support and requirements OpenNMS is written in Java, and thus can run on any platform with support for a Java SDK version 11 or higher. Precompiled binaries are available for most Linux distributions. In addition to Java, it requires the PostgreSQL database, although work is being done to make the application database independent by leveraging the Hibernate project. Features OpenNMS describes itself as a "network m
https://en.wikipedia.org/wiki/EMagin
eMagin Corporation is an American electronic components manufacturer based in Hopewell Junction, New York. eMagin specializes in organic light-emitting diode (OLED) technology and manufactures micro-OLED displays used in virtual imaging products and other related products. Since the company's founding in 1996 it has developed and manufactured products for various other markets, including medical, law enforcement, remote presence, industrial, computer interface, gaming and entertainment. Because its microdisplays are incorporated in various military equipment such as night vision goggles and head-mounted display systems, the company has been a contractor to the U.S. military.

Recognition

In 2000, eMagin Corporation was named the winner of the 2000 SID Information Display Magazine Display of the Year Gold Award, for technological advancement in the development of organic light-emitting diode (OLED) microdisplay technology, referred to as OLED-on-silicon.
https://en.wikipedia.org/wiki/Electroencephalography%20functional%20magnetic%20resonance%20imaging
EEG-fMRI (short for EEG-correlated fMRI or electroencephalography-correlated functional magnetic resonance imaging) is a multimodal neuroimaging technique whereby EEG and fMRI data are recorded synchronously for the study of electrical brain activity in correlation with haemodynamic changes in brain during the electrical activity, be it normal function or associated with disorders. Principle Scalp EEG reflects the brain's electrical activity, and in particular post-synaptic potentials (see Inhibitory postsynaptic current and Excitatory postsynaptic potential) in the cerebral cortex, whereas fMRI is capable of detecting haemodynamic changes throughout the brain through the BOLD effect. EEG-fMRI therefore allows measuring both neuronal and haemodynamic activity which comprise two important components of the neurovascular coupling mechanism. Methodology The simultaneous acquisition of EEG and fMRI data of sufficient quality requires solutions to problems linked to potential health risks (due to currents induced by the MR image forming process in the circuits created by the subject and EEG recording system) and EEG and fMRI data quality. There are two degrees of integration of the data acquisition, reflecting technical limitations associated with the interference between the EEG and MR instruments. These are: interleaved acquisitions, in which each acquisition modality is interrupted in turn (periodically) to allow data of adequate quality to be recorded by the other modality; continuous acquisitions, in which both modalities are able to record data of adequate quality continuously. The latter can be achieved using real-time or post-processing EEG artifact reduction software. EEG was first recorded in an MR environment around 1993. Several groups have found independent means to solve the problems of mutual contamination of the EEG and fMRI signals. The first continuous EEG-fMRI experiment was performed in 1999 using a numerical filtering approach. A predominantly sof
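One widely used post-processing idea for the periodic MR artifact mentioned above is template (average artifact) subtraction. The sketch below is an illustrative toy version on synthetic data, not the algorithm of any particular software package; the signal shapes, amplitudes, and epoch layout are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "EEG": 200 epochs of 500 samples of unit-variance brain signal.
n_epochs, period = 200, 500
eeg = rng.standard_normal(n_epochs * period)

# Periodic MR gradient artifact, one repetition per imaging volume,
# an order of magnitude larger than the EEG.
fs = 1000
artifact = 10.0 * np.sin(2 * np.pi * 25 * np.arange(period) / fs)
recorded = eeg + np.tile(artifact, n_epochs)

# Average artifact subtraction: epochs are time-locked to the scanner
# trigger, so averaging them estimates the artifact template, which is
# then subtracted from every epoch.
epochs = recorded.reshape(n_epochs, period)
template = epochs.mean(axis=0)
cleaned = (epochs - template).ravel()

residual = np.abs(cleaned - eeg).max()
print(residual)  # far smaller than the 10.0 artifact amplitude
```

The method works because the artifact repeats almost identically in every epoch while the EEG averages towards zero; real implementations must additionally handle timing jitter, drift of the artifact shape, and the ballistocardiogram artifact.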
https://en.wikipedia.org/wiki/Essential%20matrix
In computer vision, the essential matrix is a 3 × 3 matrix, E, that relates corresponding points in stereo images assuming that the cameras satisfy the pinhole camera model.

Function

More specifically, if y and y′ are homogeneous normalized image coordinates in image 1 and 2, respectively, then

(y′)ᵀ E y = 0

if y and y′ correspond to the same 3D point in the scene (the relation is not an "if and only if" due to the fact that points that lie on the same epipolar line in the first image will get mapped to the same epipolar line in the second image).

The above relation which defines the essential matrix was published in 1981 by H. Christopher Longuet-Higgins, introducing the concept to the computer vision community. Richard Hartley and Andrew Zisserman's book reports that an analogous matrix appeared in photogrammetry long before that. Longuet-Higgins' paper includes an algorithm for estimating E from a set of corresponding normalized image coordinates as well as an algorithm for determining the relative position and orientation of the two cameras given that E is known. Finally, it shows how the 3D coordinates of the image points can be determined with the aid of the essential matrix.

Use

The essential matrix can be seen as a precursor to the fundamental matrix, F. Both matrices can be used for establishing constraints between matching image points, but the essential matrix can only be used in relation to calibrated cameras, since the inner camera parameters (matrices K and K′) must be known in order to achieve the normalization. If, however, the cameras are calibrated, the essential matrix can be useful for determining both the relative position and orientation between the cameras and the 3D position of corresponding image points. The essential matrix is related to the fundamental matrix by

E = (K′)ᵀ F K

Derivation and definition

This derivation follows the paper by Longuet-Higgins. Two normalized cameras project the 3D world onto their respective image planes. Let the 3D coordinates of a point P be and relative to eac
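The epipolar constraint can be checked numerically. The pose, point, and convention used here (camera 2 sees world points as x₂ = R x₁ + t, for which E = [t]ₓ R satisfies the constraint) are illustrative assumptions, not values from the source:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Illustrative relative pose: rotation about z by 0.1 rad, translation t.
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, -0.2, 0.1])

E = skew(t) @ R  # essential matrix for this pose

# A 3D point in camera-1 coordinates and its camera-2 coordinates.
P1 = np.array([0.3, -0.4, 2.0])
P2 = R @ P1 + t

y1 = P1 / P1[2]  # normalized (pinhole) image coordinates in image 1
y2 = P2 / P2[2]  # ... and in image 2

print(y2 @ E @ y1)  # epipolar constraint: ~0 up to rounding
```

The constraint holds identically because (RP₁ + t)ᵀ [t]ₓ (RP₁) splits into vᵀ[t]ₓv = 0 and tᵀ(t × v) = 0; the division by the depth components only rescales it.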
https://en.wikipedia.org/wiki/Neumann%27s%20law
Neumann's law states that the molecular heat in compounds of analogous constitution is always the same. It is named after German mineralogist and physicist Franz Ernst Neumann.
https://en.wikipedia.org/wiki/Chook%20raffle
Chook raffle is an Australian tradition of "raffling off", often in clubs or pubs, a "chook", which is an Australian slang term for a chicken. Most often the chicken is prepared by a butcher, but live chickens are sometimes raffled. The chook raffle is a special case of a meat raffle, but is more often used as a fund-raising activity by an amateur club or organisation. Perhaps because of this association, the expression tends to be used disparagingly about someone who claims to have, or should have, superior organisational skills, that they "couldn't run a chook raffle". The term is also used to describe any random process. An example is selecting the winner of an election by drawing a name from a hat, said to be turning the process into a "chook raffle".
https://en.wikipedia.org/wiki/Women%20in%20physics
This article discusses women who have made an important contribution to the field of physics.

International physics awards

Nobel laureates

Five women have won the Nobel Prize in Physics, awarded annually since 1901 by the Royal Swedish Academy of Sciences. These are:

1903 Marie Curie: "in recognition of the extraordinary services they have rendered by their joint researches on the radiation phenomena discovered by Professor Henri Becquerel"
1963 Maria Goeppert Mayer: "for their discoveries concerning nuclear shell structure"
2018 Donna Strickland: "for their method of generating high-intensity, ultra-short optical pulses"
2020 Andrea Ghez: "for the discovery of a supermassive compact object at the centre of our galaxy"
2023 Anne L'Huillier: "for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter"

Marie Curie was the first woman to receive the prize, in 1903, and shared half of the prize with her husband Pierre Curie for their joint work on radioactivity, discovered by Henri Becquerel, who received the other half of the prize. Marie Curie also received the Nobel Prize in Chemistry in 1911, making her the first person to win two Nobel prizes and, as of 2023, the first to be awarded two Nobel prizes in two different scientific categories. Maria Goeppert Mayer became the second woman to win the prize in 1963, for the theoretical development of the nuclear shell model, a half of the prize shared with J. Hans D. Jensen (the other half given to Eugene Wigner). Donna Strickland shared half of the prize in 2018 with Gérard Mourou, for their work in chirped pulse amplification beginning in the 1980s (the other half given to Arthur Ashkin). Andrea Ghez was the fourth female Nobel laureate in 2020; she shared one half of the prize with Reinhard Genzel for the discovery of the supermassive compact object Sagittarius A* at the center of our galaxy (the other half given to Roger Penrose). In 2023, Anne L'Huillier
https://en.wikipedia.org/wiki/Unit%20tangent%20bundle
In Riemannian geometry, the unit tangent bundle of a Riemannian manifold (M, g), denoted by T1M, UT(M) or simply UTM, is the unit sphere bundle for the tangent bundle T(M). It is a fiber bundle over M whose fiber at each point is the unit sphere in the tangent bundle:

UT(M) = ∐_{x ∈ M} { v ∈ Tx(M) : gx(v, v) = 1 },

where Tx(M) denotes the tangent space to M at x. Thus, elements of UT(M) are pairs (x, v), where x is some point of the manifold and v is some tangent direction (of unit length) to the manifold at x. The unit tangent bundle is equipped with a natural projection

π : UT(M) → M,  π(x, v) = x,

which takes each point of the bundle to its base point. The fiber π−1(x) over each point x ∈ M is an (n−1)-sphere Sn−1, where n is the dimension of M. The unit tangent bundle is therefore a sphere bundle over M with fiber Sn−1.

The definition of unit sphere bundle can easily accommodate Finsler manifolds as well. Specifically, if M is a manifold equipped with a Finsler metric F : TM → R, then the unit sphere bundle is the subbundle of the tangent bundle whose fiber at x is the indicatrix of F:

UTx(M) = { v ∈ Tx(M) : F(v) = 1 }.

If M is an infinite-dimensional manifold (for example, a Banach, Fréchet or Hilbert manifold), then UT(M) can still be thought of as the unit sphere bundle for the tangent bundle T(M), but the fiber π−1(x) over x is then the infinite-dimensional unit sphere in the tangent space.

Structures

The unit tangent bundle carries a variety of differential geometric structures. The metric on M induces a contact structure on UTM. This is given in terms of a tautological one-form, defined at a point u of UTM (a unit tangent vector of M) by

θu(v) = g(u, π∗(v)),

where π∗(v) is the pushforward along π of the vector v ∈ TuUTM. Geometrically, this contact structure can be regarded as the distribution of (2n−2)-planes which, at the unit vector u, is the pullback of the orthogonal complement of u in the tangent space of M. This is a contact structure, for the fiber of UTM is obviously an integral manifold (the vertical bundle is everywhere in the kernel of θ), and the remaining tangent
https://en.wikipedia.org/wiki/Spatial%20multiplexing
Spatial multiplexing or space-division multiplexing (SM, SDM or SMX) is a multiplexing technique in MIMO wireless communication, fibre-optic communication and other communications technologies used to transmit independent channels separated in space.

Fibre-optic communication

In fibre-optic communication, SDM refers to the usage of the transverse dimension of the fibre to separate the channels.

Techniques

Multi-core fibre (MCF)

Multi-core fibres are fibres designed with more than a single core. Amongst the different types of MCF that exist, the "uncoupled MCF" is the most common, in which each core is treated as an independent optical path, resulting in increased channel capacity. However, the main limitation of these systems is the presence of inter-core crosstalk and the ways to deal with it, as well as the coupling/de-coupling mechanism. Although different splicing techniques, coupling methods and schemes have recently been proposed and demonstrated, and despite many of the component technologies still being in the development stage, MCF systems already present the capability for huge transmission capacity. Recently, some component technologies developed for multi-core optical fibre have been demonstrated, such as three-dimensional Y-splitters between different multi-core fibres, a universal interconnection among the same fibre cores, and a device for fast swapping and interchange of wavelength-division multiplexed data among cores of multi-core optical fibre.

Multi-mode fibres (MMF)

Multi-mode fibres are fibres designed to allow multiple modes to propagate through them, where each mode is considered a separate channel, enhancing capacity in contrast to single-mode fibre (SMF), which supports only a single spatial mode (with two polarizations). MMFs are limited by high dispersion and attenuation rates, causing the signal quality to diminish over long distances. In addition, MMFs also suffer from intermodal crosstalk and require digital signal p
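The MIMO side of spatial multiplexing can be sketched in a few lines. The channel matrix, symbols, and the absence of noise are illustrative assumptions: independent streams sent simultaneously from multiple antennas mix in the air, and the receiver separates them by inverting the (known) channel matrix, the idea behind zero-forcing detection.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tx = n_rx = 2                        # 2x2 MIMO link
H = rng.standard_normal((n_rx, n_tx))  # flat-fading channel matrix, known at the receiver

x = np.array([1.0, -1.0])  # two independent symbol streams, one per transmit antenna
y = H @ x                  # received signal: spatial mixing of the streams

x_hat = np.linalg.solve(H, y)  # zero-forcing detection: undo the mixing

print(np.round(x_hat, 6))  # recovers [ 1. -1.]
```

With additive noise the same inversion amplifies noise on ill-conditioned channels, which is why MMSE or maximum-likelihood detectors are usually preferred in practice; the sketch only shows why spatially separated streams are recoverable at all.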
https://en.wikipedia.org/wiki/Tunica%20albuginea%20%28ovaries%29
The tunica albuginea is a layer of condensed fibrous tissue on the surface of the ovary. Structure The tunica albuginea is composed of short connective tissue fibers. It is located immediately inside the surface epithelium (previously known as germinal epithelium) which is continuous with the peritoneum. It is non-vascularised. It is thinner than the tunica albuginea of the testis, and its thickness varies across the ovary. Development The tunica albuginea is formed late in prenatal development. It buds off from mesonephric stroma.
https://en.wikipedia.org/wiki/CUDA
CUDA (or Compute Unified Device Architecture) is a proprietary and closed source parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general purpose processing, an approach called general-purpose computing on GPUs (GPGPU). CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels. CUDA is designed to work with programming languages such as C, C++, and Fortran. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which required advanced skills in graphics programming. CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL; and HIP by compiling such code to CUDA. CUDA was created by Nvidia. When it was first introduced, the name was an acronym for Compute Unified Device Architecture, but Nvidia later dropped the common use of the acronym. Background The graphics processing unit (GPU), as a specialized computer processor, addresses the demands of real-time high-resolution 3D graphics compute-intensive tasks. By 2012, GPUs had evolved into highly parallel multi-core systems allowing efficient manipulation of large blocks of data. This design is more effective than general-purpose central processing unit (CPUs) for algorithms in situations where processing large blocks of data is done in parallel, such as: cryptographic hash functions machine learning molecular dynamics simulations physics engines Ontology The following table offers a non-exact description for the ontology of CUDA framework. Programming abilities The CUDA platform is accessible to software developers through CUDA-accelerated libraries, compiler directives such as OpenACC, and extensions to industry-standard programming languages including C, C++ and Fortran. C/C++
https://en.wikipedia.org/wiki/Heteroreceptor
A heteroreceptor is a receptor regulating the synthesis and/or the release of mediators other than its own ligand. Heteroreceptors respond to neurotransmitters, neuromodulators, or neurohormones released from adjacent neurons or cells; they are opposite to autoreceptors, which are sensitive only to neurotransmitters or hormones released by the cell in whose wall they are embedded. Examples Norepinephrine can influence the release of acetylcholine from parasympathetic neurons by acting on α2 adrenergic (α2A, α2B, and α2C) heteroreceptors. Acetylcholine can influence the release of norepinephrine from sympathetic neurons by acting on muscarinic-2 and muscarinic-4 heteroreceptors. CB1 negatively modulates the release of GABA and glutamate, playing a crucial role in maintaining a homeostasis between excitatory and inhibitory transmission. Glutamate released from an excitatory neuron escapes from the synaptic cleft and preferentially affects mGluR III receptors on the presynaptic terminals of interneurons. Glutamate spillover leads to inhibition of GABA release, modulating GABAergic transmission. See also Autoreceptor
https://en.wikipedia.org/wiki/Line%20of%20Contact
The Line of Contact marked the farthest advance of American, British, French, and Soviet armies into German-controlled territory at the end of World War II in Europe. In general, a "line of contact" refers to the demarcation between two or more given armies, whether they are allied or belligerent. This contact began with the first meeting between Soviet and American forces at Torgau, near the Elbe river, on Elbe Day, April 25, 1945. The line continued to form as American, British, French and Soviet forces took control of, or defeated, Nazi forces, up until the time of the May 8 unconditional surrender of Germany and beyond. This line of contact did not conform to the agreed-upon occupation zones, as stipulated at the Yalta Conference. Rather, it was simply the place where the two armies met each other. The Western Allies had actually gone far beyond the Yalta agreement boundaries, in some cases up to two hundred miles past them, going deep into the states of Mecklenburg, Saxony-Anhalt, and Saxony, as well as Brandenburg. The capital of Mecklenburg, the city of Schwerin, was captured on May 2, 1945. The city of Leipzig, in Saxony, was probably the largest of the cities captured by the Americans inside the areas later to be passed to the Soviets. The land of Thuringia was completely occupied by American forces. The complete line of contact between Western Allied forces and Soviet forces began at Wismar on the Baltic coast and proceeded south, passing along Schwerin; Magdeburg, taken over by the Soviets from the British on July 1, 1945, after the end of the war in Europe, and the Stör Canal, where Soviet and American forces met on May 4, 1945; Dessau and Pratau, contact being made on April 26, 1945; an area east of Leipzig, Leipzig itself being taken over by the Soviets from the Americans on July 3, 1945, after the end of the war in Europe; and on to the Czech town of Pilsen; and towards Linz, where the Soviet and American armies met in Austria. New Zealand units and Yugosl
https://en.wikipedia.org/wiki/Void%20ratio
The void ratio of a mixture or composite is the ratio of the volume of voids to the volume of solids. It is a dimensionless quantity in materials science, and is closely related to porosity as follows: e = VV / VS = n / (1 − n) and n = VV / VT = e / (1 + e), where e is void ratio, n is porosity, VV is the volume of void-space (such as fluids), VS is the volume of solids, and VT is the total or bulk volume. This figure is relevant in composites, in mining (particularly with regard to the properties of tailings), and in soil science. In geotechnical engineering, it is considered one of the state variables of soils and represented by the symbol e. Note that in geotechnical engineering, the symbol φ usually represents the angle of shearing resistance, a shear strength (soil) parameter. Because of this, the equation is usually written using n for porosity: e = VV / VS = n / (1 − n) and n = VV / VT = e / (1 + e), where e is void ratio, n is porosity, VV is the volume of void-space (air and water), VS is the volume of solids, and VT is the total or bulk volume. Engineering applications Volume change tendency control. If void ratio is high (loose soils), voids in a soil skeleton tend to minimize under loading - adjacent particles contract. The opposite situation, i.e. when void ratio is relatively small (dense soils), indicates that the volume of the soil is vulnerable to increase under loading - particles dilate. Fluid conductivity control (ability of water movement through the soil). Loose soils show high conductivity, while dense soils are not so permeable. Particle movement. In a loose soil particles can move quite easily, whereas in a dense one finer particles cannot pass through the voids, which leads to clogging. See also Void (composites) External links Relation between void ratio and porosity
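The void ratio and porosity relations above can be sketched in a few lines of Python; the sample volumes are illustrative.

```python
def void_ratio(v_voids, v_solids):
    """e = V_V / V_S"""
    return v_voids / v_solids

def void_ratio_from_porosity(n):
    """e = n / (1 - n)"""
    return n / (1.0 - n)

def porosity_from_void_ratio(e):
    """n = e / (1 + e)"""
    return e / (1.0 + e)

# A soil sample with 0.4 m^3 of voids and 0.6 m^3 of solids (V_T = 1.0 m^3):
e = void_ratio(0.4, 0.6)          # e = 2/3
n = porosity_from_void_ratio(e)   # n = 0.4 = V_V / V_T, as expected
```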
https://en.wikipedia.org/wiki/Photometric%20redshift
A photometric redshift is an estimate for the recession velocity of an astronomical object such as a galaxy or quasar, made without measuring its spectrum. The technique uses photometry (that is, the brightness of the object viewed through various standard filters, each of which lets through a relatively broad passband of colours, such as red light, green light, or blue light) to determine the redshift, and hence, through Hubble's law, the distance, of the observed object. The technique was developed in the 1960s, but was largely replaced in the 1970s and 1980s by spectroscopic redshifts, using spectroscopy to observe the frequency (or wavelength) of characteristic spectral lines, and measure the shift of these lines from their laboratory positions. The photometric redshift technique has come back into mainstream use since 2000, as a result of large sky surveys conducted in the late 1990s and 2000s which have detected a large number of faint high-redshift objects, and telescope time limitations mean that only a small fraction of these can be observed by spectroscopy. Photometric redshifts were originally determined by calculating the expected observed data from a known emission spectrum at a range of redshifts. The technique relies upon the spectrum of radiation being emitted by the object having strong features that can be detected by the relatively crude filters. As photometric filters are sensitive to a range of wavelengths, and the technique relies on making many assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to δz = 0.5, and are much less reliable than spectroscopic determinations. In the absence of sufficient telescope time to determine a spectroscopic redshift for each object, the technique of photometric redshifts provides a method to determine an at least qualitative characterization of a redshift. For example, if a Sun-like spectrum had a redshift of z = 1, it would be brightest in
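The core of the technique, a strong spectral feature moving through broadband filters as redshift increases, can be sketched as follows. The passbands below are rough illustrative ranges, not any particular survey's filter set.

```python
def observed_wavelength(rest_nm, z):
    """Redshifted wavelength: lambda_obs = lambda_rest * (1 + z)."""
    return rest_nm * (1.0 + z)

# Approximate visible-band passbands in nanometres (illustrative values only)
FILTERS = {"u": (320, 380), "g": (400, 550), "r": (550, 690), "i": (690, 820)}

def band_containing(wavelength_nm):
    """Return the name of the filter whose passband contains the wavelength, if any."""
    for name, (lo, hi) in FILTERS.items():
        if lo <= wavelength_nm <= hi:
            return name
    return None

# A feature at a rest wavelength of 400 nm observed at z = 0.5 lands at 600 nm,
# so it would register in the r band rather than the g band:
w = observed_wavelength(400, 0.5)
```

Comparing which filters do and do not detect such a feature against templates at a range of redshifts is, in essence, how a photometric redshift is assigned.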
https://en.wikipedia.org/wiki/Open%20rate
There are two types of "open rates": one for electronic mail (aka e-mail; see below) and one for physical mail (aka snail mail, via the USPS or another physical mail carrier). Email Open Rate The email open rate is a measure primarily used by marketers as an indication of how many people "view" or "open" the commercial electronic mail they send out. It is most commonly expressed as a percentage and calculated by dividing the number of email messages opened by the total number of email messages sent (excluding those that bounced). Some Email Service Providers (ESPs) also track unique email opens. Similar to an email open, unique email opens eliminate all duplicate opens that occur. Tracking Email open rates are typically tracked using a transparent 1x1 pixel, or small transparent tracking image, that is embedded in outgoing emails. When the client or browser used to display the email requests that image, an "open" is recorded for that email by the image's host server. The email will not be counted as an open until one of the following occurs: the recipient enables the images in the email, or the recipient interacts with the email by clicking on a link. The open rate of any given email can vary based on a number of variables, for example the type of industry the email is being sent to. In addition, the day and time an email is scheduled or sent to recipients can have an effect on email open rate. The length of an email's subject line can also affect whether or not it is opened. Tracking Concerns The open rate is one of the earliest metrics applied in email marketing, but its continued use has become controversial due to conflicting views on its usefulness. The open rate for an email sent to multiple recipients is then most often calculated as the total number of "opened" emails, expressed as a percentage of the total number of emails sent or—more usually—delivered. The number delivered is itself measured as the number of emails sent out minus the number of bou
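The calculation described above (opens over delivered, where delivered is sent minus bounces) can be sketched as:

```python
def open_rate(opens, sent, bounced):
    """Open rate as a percentage of delivered messages (sent minus bounces)."""
    delivered = sent - bounced
    return 100.0 * opens / delivered

# 1,050 emails sent, 50 bounced, 220 recorded opens:
rate = open_rate(opens=220, sent=1050, bounced=50)  # 220 / 1000 -> 22.0%
```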
https://en.wikipedia.org/wiki/George%20N.%20Hatsopoulos
George Nicholas Hatsopoulos (January 7, 1927 – September 20, 2018) was a Greek-American mechanical engineer noted for his work in thermodynamics and for having founded Thermo Electron. Early life Hatsopoulos was born in Athens, Greece, in 1927 and was related to the former rector of the Athens Polytechnic School, Nicolas Kitsikis. He attended Athens Polytechnic before entering MIT, where he received his Bachelor and Master of Science (1950), Mechanical Engineer (1954), and Doctorate of Science (1956) degrees. Hatsopoulos-Keenan reformulation of thermodynamics In 1965, he and Joseph Keenan published their textbook Principles of General Thermodynamics, which restates the second law of thermodynamics in terms of the existence of stable equilibrium states. The Hatsopoulos-Keenan statement of the Second Law entails the Clausius, Kelvin-Planck, and Carathéodory statements of the Second Law, and has provided a basis to extend the traditional definition of entropy to the non-equilibrium domain. In 1976, Hatsopoulos also contributed to a formulation of a unified theory of mechanics and thermodynamics, arguably a precursor of the emerging field of quantum thermodynamics. Academic and industry leader While at MIT, Hatsopoulos was head of the engineering division of the Matrad Corporation of New York. Matrad Corporation and MIT also provided financial support for his doctoral thesis, The Thermo-Electron Engine. Matrad Corporation was owned by the family of Peter M. Nomikos, a Harvard Business School graduate. In 1956, Hatsopoulos founded the Thermo Electron Corporation with funding from Peter Nomikos. Several years later, George asked his brother, John Hatsopoulos, to join the company as financial controller. Under George Hatsopoulos, Thermo Electron became a major provider of analytical instruments and services for a variety of domains. In 1965, George Hatsopoulos was president of the
https://en.wikipedia.org/wiki/Black%27s%20equation
Black's Equation is a mathematical model for the mean time to failure (MTTF) of a semiconductor circuit due to electromigration: a phenomenon of molecular rearrangement (movement) in the solid phase caused by an electromagnetic field. The equation is MTTF = A j^(−n) e^(Q/kT), where A is a constant, j is the current density, n is a model parameter, Q is the activation energy, k is Boltzmann's constant, and T is the absolute temperature in K. The model is abstract, not based on a specific physical model, but flexibly describes the failure rate dependence on the temperature, the electrical stress, and the specific technology and materials. More adequately described as descriptive than prescriptive, the values for A, n, and Q are found by fitting the model to experimental data. The model's value is that it maps experimental data taken at elevated temperature and electrical stress levels in short periods of time to expected component failure rates under actual operating conditions. Experimental data is obtained by running high temperature operating life (HTOL) tests over a combination of electrical stress and any other relevant operating environment variables.
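The mapping from stress-test conditions to operating conditions is usually expressed as an acceleration factor, the ratio of the two MTTFs, in which the constant A cancels. A sketch, where the parameter values n = 2 and Q = 0.7 eV are purely illustrative, not fitted values for any real technology:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann's constant in eV/K

def mttf(a, j, n, q_ev, t_kelvin):
    """Black's equation: MTTF = A * j**(-n) * exp(Q / (k*T))."""
    return a * j ** (-n) * math.exp(q_ev / (K_BOLTZMANN_EV * t_kelvin))

def acceleration_factor(j_test, t_test, j_use, t_use, n, q_ev):
    """MTTF(use) / MTTF(test); the prefactor A cancels in the ratio."""
    return mttf(1.0, j_use, n, q_ev, t_use) / mttf(1.0, j_test, n, q_ev, t_test)

# Stress test at twice the use current density and 400 K vs. use at 350 K:
af = acceleration_factor(j_test=2.0, t_test=400.0, j_use=1.0, t_use=350.0, n=2, q_ev=0.7)
```

Higher temperature and higher current density both shorten the modeled MTTF, so the stress condition fails much sooner than the use condition, which is exactly what makes short accelerated tests informative.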
https://en.wikipedia.org/wiki/Potential%20density
The potential density of a fluid parcel at pressure P is the density that the parcel would acquire if adiabatically brought to a reference pressure P0, often 1 bar (100 kPa). Whereas density changes with changing pressure, the potential density of a fluid parcel is conserved as the pressure experienced by the parcel changes (provided no mixing with other parcels or net heat flux occurs). The concept is used in oceanography and (to a lesser extent) atmospheric science. Potential density is a dynamically important property: for static stability potential density must decrease upward. If it does not, a fluid parcel displaced upward finds itself lighter than its neighbors, and continues to move upward; similarly, a fluid parcel displaced downward would be heavier than its neighbors. This is true even if the density of the fluid decreases upward. In stable conditions (potential density decreasing upward) motion along surfaces of constant potential density (isopycnals) is energetically favored over flow across these surfaces (diapycnal flow), so most of the motion within a 3-D geophysical fluid takes place along these 2-D surfaces. In oceanography, the symbol ρθ is used to denote potential density, with the reference pressure taken to be the pressure at the ocean surface. The corresponding potential density anomaly is denoted by σθ = ρθ − 1000 kg/m3. Because the compressibility of seawater varies with salinity and temperature, the reference pressure must be chosen to be near the actual pressure to keep the definition of potential density dynamically meaningful. Reference pressures are often chosen as a whole multiple of 100 bar; for water near a pressure of 400 bar (40 MPa), say, the reference pressure 400 bar would be used, and the potential density anomaly symbol would be written σ4. Surfaces of constant potential density (relative to and in the vicinity of a given reference pressure) are used in the analyses of ocean data and to construct models of ocean currents. Neutral density surfac
https://en.wikipedia.org/wiki/Idler-wheel
An idler-wheel is a wheel which serves only to transmit rotation from one shaft to another, in applications where it is undesirable to connect them directly. For example, connecting a motor to the platter of a phonograph, or the crankshaft-to-camshaft gear train of an automobile. Because it does no work itself, it is called an "idler". Friction drive An idler-wheel may be used as part of a friction drive mechanism. For example, to connect a metal motor shaft to a metal platter without gear noise, early phonographs used a rubber idler wheel. Likewise, the pinch roller in a magnetic tape transport is a type of idler wheel, which presses against the driven capstan to increase friction. Idler pulley In a belt drive system, idlers are often used to alter the path of the belt, where a direct path would be impractical. Idler pulleys are also often used to press against the back of a pulley in order to increase the wrap angle (and thus contact area) of a belt against the working pulleys, increasing the force-transfer capacity. Belt drive systems commonly incorporate one movable pulley which is spring- or gravity-loaded to act as a belt tensioner, to accommodate stretching of the belt due to temperature or wear. An idler wheel is usually used for this purpose, in order to avoid having to move the power-transfer shafts. Idler gear An idler gear is a gear wheel that is inserted between two or more other gear wheels. The purpose of an idler gear can be two-fold. Firstly, the idler gear will change the direction of rotation of the output shaft. Secondly, an idler gear can assist to reduce the size of the input/output gears whilst maintaining the spacing of the shafts. Gear ratio An idler gear does not affect the gear ratio between the input and output shafts. Note that in a sequence of gears chained together, the ratio depends only on the number of teeth on the first and last gear. The intermediate gears, regardless of their size, do not alter the overall gear ratio
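The claim that intermediate gears do not affect the overall ratio can be checked directly: in a simple train the ratio is the product of driven/driver tooth counts for each meshing pair, and every idler's tooth count appears once in a numerator and once in a denominator. A short sketch with illustrative tooth counts:

```python
def train_ratio(teeth):
    """Overall ratio of a simple gear train given tooth counts from input to output.
    Each meshing pair contributes driven/driver; idler counts cancel out."""
    ratio = 1.0
    for driver, driven in zip(teeth, teeth[1:]):
        ratio *= driven / driver
    return ratio

# 20-tooth input, 30-tooth idler, 40-tooth output:
with_idler = train_ratio([20, 30, 40])   # (30/20) * (40/30) = 2.0
without_idler = train_ratio([20, 40])    # 40/20 = 2.0, same ratio
```

What the idler does change is direction: each meshing pair reverses rotation, so with an odd total number of gears the input and output shafts turn the same way.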
https://en.wikipedia.org/wiki/Spin%20echo
In magnetic resonance, a spin echo or Hahn echo is the refocusing of spin magnetisation by a pulse of resonant electromagnetic radiation. Modern nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) make use of this effect. The NMR signal observed following an initial excitation pulse decays with time due to both spin relaxation and any inhomogeneous effects which cause spins in the sample to precess at different rates. The first of these, relaxation, leads to an irreversible loss of magnetisation. But the inhomogeneous dephasing can be removed by applying a 180° inversion pulse that inverts the magnetisation vectors. Examples of inhomogeneous effects include a magnetic field gradient and a distribution of chemical shifts. If the inversion pulse is applied after a period t of dephasing, the inhomogeneous evolution will rephase to form an echo at time 2t. In simple cases, the intensity of the echo relative to the initial signal is given by e^(−2t/T2), where T2 is the time constant for spin–spin relaxation. The echo time (TE) is the time between the excitation pulse and the peak of the signal. Echo phenomena are important features of coherent spectroscopy which have been used in fields other than magnetic resonance, including laser spectroscopy and neutron scattering. History Echoes were first detected in nuclear magnetic resonance by Erwin Hahn in 1950, and spin echoes are sometimes referred to as Hahn echoes. In nuclear magnetic resonance and magnetic resonance imaging, radiofrequency radiation is most commonly used. In 1972, F. Mezei introduced spin-echo neutron scattering, a technique that can be used to study magnons and phonons in single crystals. The technique is now applied in research facilities using triple axis spectrometers. In 2020, two teams demonstrated that when strongly coupling an ensemble of spins to a resonator, the Hahn pulse sequence does not just lead to a single echo, but rather to a whole train of periodic echoes. In this pr
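The simple-case echo amplitude e^(−2t/T2) can be evaluated directly; the T2 value below is illustrative.

```python
import math

def echo_intensity(t, t2):
    """Relative echo amplitude at time 2t after excitation: exp(-2t / T2)."""
    return math.exp(-2.0 * t / t2)

# With T2 = 100 ms and the inversion pulse applied at t = 50 ms, the echo forms
# at 2t = 100 ms and has decayed to exp(-1), about 37% of the initial signal:
amp = echo_intensity(t=50e-3, t2=100e-3)
```

Only the irreversible spin-spin relaxation appears in this expression; the inhomogeneous dephasing is, by construction of the echo, refocused away.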
https://en.wikipedia.org/wiki/Matrix%20planting
Matrix planting is a form of self-sustaining gardening, with a focus on attractive plantings that are often purely ornamental, but can include food-bearing and medicinal plants. The idea Matrix planting is based on matching plant to space. The idea is that, when done successfully, plants replace spades, rakes, and hoes as the controllers of what goes on in the garden. Wildflowers grow all over the world with no help from humans. They are successful because the plants within each plant community have established a balance with one another: they each obtain a share of resources, living space, and opportunities to reproduce. Matrix planting is based on this natural model. It aims to set up similar self-sustaining communities in gardens, by bringing together plants that meld with one another in a balance: all survive and flourish; weeds are excluded. Matrix planting is based on choosing and managing plants in ways which enable them to form similar matrices in the garden. The aim is to enable the plants to occupy the ground and the space above it so effectively that no space is left for weeds, and to do this in ways that are decorative and sympathetic to the setting of the garden. The aim of matrix planting is to 1) encourage the plants you do want, and 2) discourage the plants you do not want. The key to success lies in the choice of plants. Ill-judged choices result in excessive dominance by one or two species, and the disappearance of those that cannot cope. Well-judged choices lead to the establishment of persistent communities of plants which are diverse, self-renewing, resistant to invasion by weeds, and look attractive. It is not possible to plant and walk away, as matrices take time to develop and depend on positive, rather than neutral, management. The strongest matrices consist of a succession of layers of vegetation through which sunlight filters, until at ground level there is enough only to support plants that can cope with very little light. The best ex
https://en.wikipedia.org/wiki/Interrupts%20in%2065xx%20processors
The 65xx family of microprocessors, consisting of the MOS Technology 6502 and its derivatives, the WDC 65C02, WDC 65C802 and WDC 65C816, and CSG 65CE02, all handle interrupts in a similar fashion. There are three hardware interrupt signals common to all 65xx processors and one software interrupt, the BRK instruction. The WDC 65C816 adds a fourth hardware interrupt, ABORT, useful for implementing virtual memory architectures, and the COP software interrupt instruction (also present in the 65C802), intended for use in a system with a coprocessor of some type (e.g., a floating point processor). Interrupt types The hardware interrupt signals are all active low, and are as follows: RESET, a reset signal, level-triggered; NMI, a non-maskable interrupt, edge-triggered; IRQ, a maskable interrupt, level-triggered; ABORT, a special-purpose, non-maskable interrupt (65C816 only, see below), level-triggered. The detection of a RESET signal causes the processor to enter a system initialization period of six clock cycles, after which it sets the interrupt request disable flag in the status register and loads the program counter with the values stored at the processor initialization vector ($FFFC–$FFFD) before commencing execution. If operating in native mode, the 65C816/65C802 are switched back to emulation mode and stay there until returned to native mode under software control. The detection of an NMI or IRQ signal, as well as the execution of a BRK instruction, will cause the same overall sequence of events, which are, in order: The processor completes the current instruction and updates registers or memory as required before responding to the interrupt. 65C816/65C802 when operating in native mode: The program bank register (PB, the A16–A23 part of the address bus) is pushed onto the hardware stack. The most significant byte (MSB) of the program counter (PC) is pushed onto the stack. The least significant byte (LSB) of the program counter is pushed onto the stack. The status register (SR) is pushed onto the stack. The interrupt di
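The native-mode push sequence described above (program bank, then program counter high byte, then low byte, then status register) can be modeled with a toy stack. This is a sketch of the ordering only, not of real 65C816 timing, addressing, or register widths.

```python
def native_mode_interrupt_push(stack, pb, pc, status):
    """Model the 65C816 native-mode interrupt push order: PB, PCH, PCL, SR."""
    stack.append(pb)                  # program bank register
    stack.append((pc >> 8) & 0xFF)    # program counter, most significant byte
    stack.append(pc & 0xFF)           # program counter, least significant byte
    stack.append(status)              # status register
    return stack

# Interrupt taken while executing at bank $12, address $34AB, with status $30:
frame = native_mode_interrupt_push([], pb=0x12, pc=0x34AB, status=0x30)
# frame is [0x12, 0x34, 0xAB, 0x30], matching the push order in the text
```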
https://en.wikipedia.org/wiki/M-matrix
In mathematics, especially linear algebra, an M-matrix is a Z-matrix with eigenvalues whose real parts are nonnegative. The set of non-singular M-matrices is a subset of the class of P-matrices, and also of the class of inverse-positive matrices (i.e. matrices with inverses belonging to the class of positive matrices). The name M-matrix was seemingly originally chosen by Alexander Ostrowski in reference to Hermann Minkowski, who proved that if a Z-matrix has all of its row sums positive, then the determinant of that matrix is positive. Characterizations An M-matrix is commonly defined as follows: Definition: Let A be an n × n real Z-matrix. That is, A = (aij) where aij ≤ 0 for all i ≠ j. Then matrix A is also an M-matrix if it can be expressed in the form A = sI − B, where B = (bij) with bij ≥ 0 for all i, j, where s is at least as large as the maximum of the moduli of the eigenvalues of B, and I is an identity matrix. For the non-singularity of A, according to the Perron–Frobenius theorem, it must be the case that s > ρ(B), the spectral radius of B. Also, for a non-singular M-matrix, the diagonal elements of A must be positive. Here we will further characterize only the class of non-singular M-matrices. Many statements that are equivalent to this definition of non-singular M-matrices are known, and any one of these statements can serve as a starting definition of a non-singular M-matrix. For example, Plemmons lists 40 such equivalences. These characterizations have been categorized by Plemmons in terms of their relations to the properties of: (1) positivity of principal minors, (2) inverse-positivity and splittings, (3) stability, and (4) semipositivity and diagonal dominance. It makes sense to categorize the properties in this way because the statements within a particular group are related to each other even when A is an arbitrary matrix, and not necessarily a Z-matrix. Here we mention a few characterizations from each category. Equivalences Below, ≥ denotes the element-wise order (not the usual positive semidefinite order on matrices). That
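A small numeric illustration of the inverse-positivity characterization: the 2 × 2 Z-matrix below has an entrywise nonnegative inverse, so it is a non-singular M-matrix. This is a sketch using exact rational arithmetic for one hand-picked example, not a general M-matrix test.

```python
from fractions import Fraction

def is_z_matrix(a):
    """A Z-matrix has nonpositive off-diagonal entries."""
    n = len(a)
    return all(a[i][j] <= 0 for i in range(n) for j in range(n) if i != j)

def inverse_2x2(a):
    """Exact inverse of a 2x2 matrix of Fractions, or None if singular."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if det == 0:
        return None
    d = Fraction(1) / det
    return [[ a[1][1] * d, -a[0][1] * d],
            [-a[1][0] * d,  a[0][0] * d]]

# A = [[2, -1], [-1, 2]] is a Z-matrix; its inverse is (1/3) * [[2, 1], [1, 2]],
# which is entrywise nonnegative, so A is a non-singular M-matrix.
# (Equivalently, A = 3I - B with B = [[1, 1], [1, 1]] >= 0 and s = 3 > rho(B) = 2.)
a = [[Fraction(2), Fraction(-1)], [Fraction(-1), Fraction(2)]]
inv = inverse_2x2(a)
nonneg_inverse = all(x >= 0 for row in inv for x in row)
```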
https://en.wikipedia.org/wiki/Respiratory%20exchange%20ratio
The respiratory exchange ratio (RER) is the ratio between the metabolic production of carbon dioxide (CO2) and the uptake of oxygen (O2). The ratio is determined by comparing exhaled gases to room air. Measuring this ratio can be used for estimating the respiratory quotient (RQ), an indicator of which fuel (e.g. carbohydrate or fat) is being metabolized to supply the body with energy. Using RER to estimate RQ is only accurate during rest and mild to moderate aerobic exercise without the accumulation of lactate. The loss of accuracy during more intense anaerobic exercise is due to several factors, including the bicarbonate buffer system. The body tries to compensate for the accumulation of lactate and minimize the acidification of the blood by expelling more CO2 through the respiratory system. An RER near 0.7 indicates that fat is the predominant fuel source, a value of 1.0 is indicative of carbohydrate being the predominant fuel source, and a value between 0.7 and 1.0 suggests a mix of both fat and carbohydrate. In general a mixed diet corresponds with an RER of approximately 0.8. The RER can also exceed 1.0 during intense exercise. A value above 1.0 cannot be attributed to the substrate metabolism, but rather to the aforementioned factors regarding bicarbonate buffering. Calculation of RER is commonly done in conjunction with exercise tests such as the VO2 max test. This can be used as an indicator that the participants are nearing exhaustion and the limits of their cardio-respiratory system. An RER greater than or equal to 1.0 is often used as a secondary endpoint criterion of a VO2 max test. Oxidation of a carbohydrate molecule, such as glucose: C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, giving RER = 6 CO2 / 6 O2 = 1.0. Oxidation of a fatty acid molecule, namely palmitic acid: C16H32O2 + 23 O2 → 16 CO2 + 16 H2O, giving RER = 16 CO2 / 23 O2 ≈ 0.7. See also
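The ratio and its interpretation can be sketched as follows; the thresholds are the approximate resting/aerobic values given above, and the stoichiometric CO2/O2 counts are those of glucose and palmitic acid oxidation.

```python
def rer(vco2, vo2):
    """Respiratory exchange ratio: CO2 produced over O2 consumed."""
    return vco2 / vo2

def predominant_fuel(r):
    """Rough interpretation of an RER value at rest or in aerobic exercise."""
    if r >= 1.0:
        return "carbohydrate (or lactate buffering if above 1.0)"
    if r <= 0.7:
        return "fat"
    return "mixed fat and carbohydrate"

glucose_rer = rer(6, 6)       # glucose: 6 CO2 per 6 O2 -> 1.0
palmitate_rer = rer(16, 23)   # palmitic acid: 16 CO2 per 23 O2 -> about 0.70
```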
https://en.wikipedia.org/wiki/Kisspeptin
Kisspeptins (including kisspeptin-54 (KP-54), formerly known as metastin) are proteins encoded by the KISS1 gene in humans. Kisspeptins are ligands of the G-protein coupled receptor, GPR54. Kiss1 was originally identified as a human metastasis suppressor gene that has the ability to suppress melanoma and breast cancer metastasis. Kisspeptin-GPR54 signaling has an important role in initiating secretion of gonadotropin-releasing hormone (GnRH) at puberty, the extent of which is an area of ongoing research. Gonadotropin-releasing hormone is released from the hypothalamus to act on the anterior pituitary triggering the release of luteinizing hormone (LH), and follicle stimulating hormone (FSH). These gonadotropic hormones lead to sexual maturation and gametogenesis. Disrupting GPR54 signaling can cause hypogonadotrophic hypogonadism in rodents and humans. The Kiss1 gene is located on chromosome 1. It is transcribed in the brain, adrenal gland, and pancreas. History In 1996, Danny Welch's lab in Hershey, Pennsylvania, isolated a cDNA from a cancer cell that was not able to undergo metastasis after the human chromosome 6 was added to the cell. This gene was named KISS1 because of the location of where it was discovered (Hershey, Pennsylvania, home of Hershey's Kisses). Introduction of this chromosome into the once active cancer cell inhibited it from spreading and the cDNA responsible was taken from that cell. The fact that KISS1 was responsible for this was proved when it was transfected into melanoma cells and yet again, metastasis was suppressed. Later, a breakthrough would occur not involving Kisspeptin, but with its receptor. Three years later in 1999, a G protein coupled receptor was identified in rat, cloned, and termed GPR54. Additionally, two years later, this receptor's ortholog in humans would be isolated. Using the identified receptors, endogenous ligands were isolated from cells (HEK293, B16-BL6, and CHO-K1 cells) that had these receptors inserted in
https://en.wikipedia.org/wiki/AuthIP
AuthIP is a Microsoft proprietary extension of the IKE cryptographic protocol. AuthIP is supported in Windows Vista and later on the client and Windows Server 2008 and later on the server. AuthIP adds a second authentication to the standard IKE authentication which, according to Microsoft, increases security and deployability of IPsec VPNs. AuthIP adds support for user-based authentication by using Kerberos v5 or SSL certificates. AuthIP is not compatible with IKEv2, an IETF standard with similar characteristics; however Windows 7 and Windows Server 2008 R2 also support IKEv2. See also SSTP External links AuthIP in Windows Vista - The Cable Guy column at the Microsoft website The Authenticated Internet Protocol - The Cable Guy column at the Microsoft website
https://en.wikipedia.org/wiki/Quantum%20Markov%20chain
In mathematics, the quantum Markov chain is a reformulation of the ideas of a classical Markov chain, replacing the classical definitions of probability with quantum probability. Introduction Very roughly, the theory of a quantum Markov chain resembles that of a measure-many automaton, with some important substitutions: the initial state is to be replaced by a density matrix, and the projection operators are to be replaced by positive operator valued measures. Formal statement More precisely, a quantum Markov chain is a pair (E, ρ) with ρ a density matrix and E a quantum channel such that E : B ⊗ B → B is a completely positive trace-preserving map, and B a C*-algebra of bounded operators. The pair must obey the quantum Markov condition, a compatibility requirement between E and ρ that must hold for all b ∈ B. See also Quantum walk
https://en.wikipedia.org/wiki/Public%20key%20fingerprint
In public-key cryptography, a public key fingerprint is a short sequence of bytes used to identify a longer public key. Fingerprints are created by applying a cryptographic hash function to a public key. Since fingerprints are shorter than the keys they refer to, they can be used to simplify certain key management tasks. In Microsoft software, "thumbprint" is used instead of "fingerprint." Creating public key fingerprints A public key fingerprint is typically created through the following steps: A public key (and optionally some additional data) is encoded into a sequence of bytes. To ensure that the same fingerprint can be recreated later, the encoding must be deterministic, and any additional data must be exchanged and stored alongside the public key. The additional data is typically information which anyone using the public key should be aware of. Examples of additional data include: which protocol versions the key should be used with (in the case of PGP fingerprints); and the name of the key holder (in the case of X.509 trust anchor fingerprints, where the additional data consists of an X.509 self-signed certificate). The data produced in the previous step is hashed with a cryptographic hash function such as SHA-1 or SHA-2. If desired, the hash function output can be truncated to provide a shorter, more convenient fingerprint. This process produces a short fingerprint which can be used to authenticate a much larger public key. For example, whereas a typical RSA public key will be 2048 bits in length or longer, typical MD5 or SHA-1 fingerprints are only 128 or 160 bits in length. When displayed for human inspection, fingerprints are usually encoded into hexadecimal strings. These strings are then formatted into groups of characters for readability. For example, a 128-bit MD5 fingerprint for SSH would be displayed as follows: 43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8 Using public key fingerprints for key authentication When a public key
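The steps above (encode, hash, optionally truncate, then format for display) can be sketched with the standard library; the input bytes here are only a stand-in, since a real fingerprint hashes a deterministic encoding of an actual public key (such as the DER bytes of an X.509 certificate).

```python
import hashlib

def fingerprint(encoded_key, hash_name="sha256", truncate_to=None):
    """Hash encoded public-key bytes and format the digest as colon-separated hex."""
    digest = hashlib.new(hash_name, encoded_key).digest()
    if truncate_to is not None:
        digest = digest[:truncate_to]   # optional truncation to a shorter fingerprint
    return ":".join(f"{b:02x}" for b in digest)

# An MD5 fingerprint (16 bytes) in the colon-separated style shown in the text:
fp_md5 = fingerprint(b"stand-in public key bytes", "md5")
# A SHA-2 fingerprint truncated to its first 8 bytes:
fp_short = fingerprint(b"stand-in public key bytes", "sha256", truncate_to=8)
```

Because the encoding step must be deterministic, two parties hashing the same key always derive the same string, which is what makes manual comparison of the short fingerprint a valid check of the much larger key.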
https://en.wikipedia.org/wiki/Fixed-asset%20turnover
Fixed-asset turnover is the ratio of sales (on the profit and loss account) to the value of fixed assets (on the balance sheet). It indicates how well the business is using its fixed assets to generate sales. Generally speaking, the higher the ratio, the better, because a high ratio indicates the business has less money tied up in fixed assets for each unit of currency of sales revenue. A declining ratio may indicate that the business is over-invested in plant, equipment, or other fixed assets. In A.A.T. assessments this financial measure is calculated in two different ways. 1. Total Asset Turnover Ratio = Revenue / Total Assets 2. Net Asset Turnover Ratio = Revenue / (Total Assets - Current Liabilities)
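The two A.A.T. calculations above can be sketched directly; the figures are illustrative.

```python
def total_asset_turnover(revenue, total_assets):
    """Total asset turnover ratio = revenue / total assets."""
    return revenue / total_assets

def net_asset_turnover(revenue, total_assets, current_liabilities):
    """Net asset turnover ratio = revenue / (total assets - current liabilities)."""
    return revenue / (total_assets - current_liabilities)

# Revenue of 500,000 against total assets of 250,000 and current liabilities of 50,000:
tat = total_asset_turnover(500_000, 250_000)        # 2.0 of revenue per unit of assets
nat = net_asset_turnover(500_000, 250_000, 50_000)  # 2.5 on a net-assets basis
```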
https://en.wikipedia.org/wiki/Debtor%20collection%20period
In accounting, the term debtor collection period indicates the average time taken to collect trade debts. In other words, a reducing period of time is an indicator of increasing efficiency. It enables the enterprise to compare the real collection period with the granted/theoretical credit period. Debtor collection period = (average debtors / credit sales) × 365 (average debtors = debtors at the beginning of the year + debtors at the end of the year, divided by 2; or debtors + bills receivable). The average collection period (ACP) is the time taken by businesses to convert their accounts receivable (AR) to cash. Credit sales are all sales made on credit (i.e. excluding cash sales). A long debtor collection period is an indication of slow or late payments by debtors. The multiplier may be changed to 12 (for months) or 52 (for weeks) if appropriate. See also Receivables turnover ratio Cash flow Debtor days Working capital
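The calculation, with the multiplier switchable between days, months, and weeks as described above, can be sketched as follows; the figures are illustrative.

```python
def debtor_collection_period(opening_debtors, closing_debtors, credit_sales, periods=365):
    """(Average debtors / credit sales) x multiplier: 365 days, 12 months, or 52 weeks."""
    average_debtors = (opening_debtors + closing_debtors) / 2
    return average_debtors / credit_sales * periods

# Debtors of 40,000 at the start and 60,000 at the end of the year,
# with credit sales of 600,000 for the year:
days = debtor_collection_period(40_000, 60_000, 600_000)        # about 30.4 days
months = debtor_collection_period(40_000, 60_000, 600_000, 12)  # exactly 1 month
```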
https://en.wikipedia.org/wiki/Ischiopubic%20ramus
The ischiopubic ramus is a compound structure consisting of the following two structures: from the pubis, the inferior pubic ramus from the ischium, the inferior ramus of the ischium It forms the inferior border of the obturator foramen and serves as part of the origin for the obturator internus and externus muscles. Also, most adductors originate at the ischiopubic ramus. The fascia of Colles is attached to its margin.
https://en.wikipedia.org/wiki/User%20experience%20design
User experience design (UX design, UXD, UED, or XD) is the process of defining the experience a user would go through when interacting with a company, its services, and its products. Design decisions in UX design are often driven by research, data analysis, and test results rather than aesthetic preferences and opinions. Unlike user interface design, which focuses solely on the design of a computer interface, UX design encompasses all aspects of a user's perceived experience with a product or website, such as its usability, usefulness, desirability, brand perception, and overall performance. UX design is also an element of the customer experience (CX), which encompasses all aspects and stages of a customer's experience and interaction with a company. History The field of user experience design is a conceptual design discipline and has its roots in human factors and ergonomics, a field that, since the late 1940s, has focused on the interaction between human users, machines, and the contextual environments to design systems that address the user's experience. With the proliferation of workplace computers in the early 1990s, user experience started to become a positive insight for designers. Donald Norman, a professor and researcher in design, usability, and cognitive science, coined the term "user experience," and brought it to a wider audience. There is a debate occurring in the experience design community regarding its focus, provoked in part by design scholar and practitioner, Don Norman. Norman claims that when designers describe people only as customers, consumers, and users, designers risk diminishing their ability to do good design. Elements Research User experience design draws from design approaches like human-computer interaction and user-centered design, and includes elements from similar disciplines like interaction design, visual design, information architecture, user research, and others. Another portion of the research is understanding the end-u
https://en.wikipedia.org/wiki/Boletus%20reticulatus
Boletus reticulatus (alternately known as Boletus aestivalis (Paulet) Fr.), commonly referred to as the summer cep, is a basidiomycete fungus of the genus Boletus. It occurs in deciduous forests of Europe, where it forms a symbiotic mycorrhizal relationship with species of oak (Quercus). The fungus produces fruiting bodies in the summer months which are edible and popularly collected. The summer cep was formally described by Jacob Christian Schäffer as Boletus reticulatus in 1774, which took precedence over B. aestivalis as described by Jean-Jacques Paulet in 1793. Taxonomy German naturalist Jacob Christian Schäffer described the summer cep as Boletus reticulatus in 1774, in his series on fungi of Bavaria and the Palatinate, Fungorum qui in Bavaria et Palatinatu circa Ratisbonam nascuntur icones. French mycologist Jean-Jacques Paulet described it as Le grand Mousseux (Tubiporus aestivalis) in 1793, adding that it was delicious with chicken fricassee and could be found in the Bois de Boulogne in summer. The species name is derived from the Latin aestas, "summer". Swedish mycologist Elias Magnus Fries followed Paulet, using Boletus aestivalis in 1838. The two names have been used in literature for many years. Boletus reticulatus is classified in Boletus section Boletus, alongside close relatives such as B. aereus, B. edulis, and B. pinophilus. A genetic study of the four European species found that B. reticulatus was sister to B. aereus. More extensive testing of worldwide taxa revealed that B. reticulatus was most closely related to two lineages that had been classified as B. edulis from southern China and Korea/northern China respectively. The common ancestor of these three species was related to a lineage consisting of B. aereus and the genetically close B. mamorensis. Molecular analysis suggests that the B. aereus/mamorensis and B. reticulatus/Chinese B. "edulis" lineages diverged around 6 to 7 million years ago. The British Mycological So
https://en.wikipedia.org/wiki/Computational%20engineering
Computational Engineering is an emerging discipline that deals with the development and application of computational models for engineering, known as Computational Engineering Models or CEM. Computational engineering uses computers to solve engineering design problems important to a variety of industries. At this time, various approaches are summarized under the term Computational Engineering, including using computational geometry and virtual design for engineering tasks, often coupled with a simulation-driven approach. In Computational Engineering, algorithms solve mathematical and logical models that describe engineering challenges, sometimes coupled with some aspect of AI, specifically Reinforcement Learning. In Computational Engineering the engineer encodes their knowledge using logical structuring. The result is an algorithm, the Computational Engineering Model, that can produce many different variants of engineering designs, based on varied input requirements. The results can then be analyzed through additional mathematical models to create algorithmic feedback loops. Simulations of physical behaviors relevant to the field, often coupled with high-performance computing, are used to solve complex physical problems arising in engineering analysis and design, as well as in natural phenomena (computational science). It is therefore related to Computational Science and Engineering, which has been described as the "third mode of discovery" (next to theory and experimentation). In Computational Engineering, computer simulation provides the capability to create feedback that would be inaccessible to traditional experimentation or where carrying out traditional empirical inquiries is prohibitively expensive. Computational Engineering should neither be confused with pure computer science, nor with computer engineering, although a wide domain in the former is used in Computational Engineering (e.g., certain algorithms, data structures, parallel programming, high perfor
https://en.wikipedia.org/wiki/Butterfly%20keyboard
Butterfly keyboard may refer to keyboards used on specific laptop computer models: IBM ThinkPad 701 MacBook Pro MacBook Air
https://en.wikipedia.org/wiki/Gradient%20enhanced%20NMR%20spectroscopy
Gradient enhanced NMR is a method for obtaining high resolution nuclear magnetic resonance spectra without the need for phase cycling. Gradient methodology is used extensively for two purposes, either rephasing (selection) or dephasing (elimination) of a particular magnetization transfer pathway. It includes the application of magnetic field gradient pulses to select specific coherences. By using actively shielded gradients, a gradient pulse is applied during the evolution period of the selected coherence to dephase the transverse magnetization and another gradient pulse refocuses the desired coherences remaining during the acquisition period. Advantages Significant reduction in measuring time Reduced T1 artifacts Elimination of phase cycling and difference methods Possibility for three and four-quantum editing The ability to detect resonances at the same chemical shift as a strong solvent resonance Drawbacks A need for field-frequency-lock blanking during long runs. Examples Selection of transverse magnetization (Ix, Sx, Iy etc.): (+)gradient 180°(x) (+)gradient Suppression of transverse magnetization (Ix, Sx, Iy etc.): (+)gradient 180°(x) (-)gradient
https://en.wikipedia.org/wiki/Expanded%20bed%20adsorption
Expanded bed adsorption (EBA) is a preparative chromatographic technique which makes processing of viscous and particulate liquids possible. Principle The protein binding principles in EBA are the same as in classical column chromatography and the common ion-exchange, hydrophobic interaction and affinity chromatography ligands can be used. After the adsorption step is complete, the fluidized bed is washed to flush out any remaining particulates. Elution of the adsorbed proteins was commonly performed with the eluent flow in the reverse direction; that is, as a conventional packed bed, in order to recover the adsorbed solutes in a smaller volume of eluent. However, a new generation of EBA columns has been developed, which maintain the bed in the expanded state during this phase, producing high-purity, high yields of e.g. MAbs [monoclonal antibodies] in even smaller volumes of eluent. Process duration at manufacturing scale has also been cut considerably (under 7 hours in some cases). EBA may be considered to combine both the "Removal of Insolubles" and the "Isolation" steps of the 4-step downstream processing heuristic. The major limitations associated with EBA technology are biomass interactions and aggregation onto the adsorbent during processing. Where classical column chromatography uses a solid phase made by a packed bed, EBA uses particles in a fluidized state, ideally expanded by a factor of 2. Expanded bed adsorption is, however, different from fluidised bed chromatography in essentially two ways: one, the EBA resin contains particles of varying size and density which results in a gradient of particle size when expanded; and two, when the bed is in its expanded state, local loops are formed. Particles such as whole cells or cell debris, which would clog a packed bed column, readily pass through a fluidized bed. EBA can therefore be used on crude culture broths or slurries of broken cells, thereby bypassing initial clearing steps such as centrifugation and filt
https://en.wikipedia.org/wiki/Pulsed%20field%20gradient
A pulsed field gradient is a short, timed pulse with spatial-dependent field intensity. Any gradient is identified by four characteristics: axis, strength, shape and duration. Pulsed field gradient (PFG) techniques are key to magnetic resonance imaging, spatially selective spectroscopy and studies of diffusion via diffusion ordered nuclear magnetic resonance spectroscopy (DOSY). PFG techniques are widely used as an alternative to phase cycling in modern NMR spectroscopy. Common field gradients in NMR The effect of a uniform magnetic field gradient in the z-direction on spin I is a rotation around the z-axis by an angle θ(z) = γI Gz z τ, where Gz is the gradient magnitude (along the z-direction) and γI is the gyromagnetic ratio of spin I. It introduces a position-dependent phase factor to the magnetizations: Φ(z, τ) = γI Gz z τ. The time duration τ is on the order of milliseconds. See also Gradient enhanced NMR spectroscopy
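The phase relation can be sketched numerically. The gradient strength, position, and pulse duration below are illustrative values, and the proton gyromagnetic ratio is used for γI:

```python
# Phase accrued by a spin at position z under a z-gradient pulse:
# phi = gamma * Gz * z * tau.
GAMMA_1H = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1 (approx.)

def gradient_phase(gamma: float, g_z: float, z: float, tau: float) -> float:
    """Phase (radians) accumulated during a gradient pulse of duration tau."""
    return gamma * g_z * z * tau

# Illustrative: 0.1 T/m gradient, spin 1 mm off-centre, 1 ms pulse
phi = gradient_phase(GAMMA_1H, 0.1, 1e-3, 1e-3)
print(phi)  # ~26.75 rad
```

The linear dependence on z is what lets a matched pair of gradient pulses dephase and then refocus only the desired coherence.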
https://en.wikipedia.org/wiki/Lexical%20grammar
In computer science, a lexical grammar or lexical structure is a formal grammar defining the syntax of tokens. The program is written using characters that are defined by the lexical structure of the language used. The character set is equivalent to the alphabet used by any written language. The lexical grammar lays down the rules governing how a character sequence is divided up into subsequences of characters, each part of which represents an individual token. This is frequently defined in terms of regular expressions. For instance, the lexical grammar for many programming languages specifies that a string literal starts with a " character and continues until a matching " is found (escaping makes this more complicated), that an identifier is an alphanumeric sequence (letters and digits, usually also allowing underscores, and disallowing initial digits), and that an integer literal is a sequence of digits. So in a character sequence consisting of a string literal, an identifier and a number separated by spaces, the tokens are string, identifier and number (plus whitespace tokens), because the space character terminates the sequence of characters forming the identifier. Further, certain sequences are categorized as keywords – these generally have the same form as identifiers (usually alphabetical words), but are categorized separately; formally they have a different token type. Examples Regular expressions for common lexical rules follow (for example, C). Unescaped string literal (quote, followed by non-quotes, ending in a quote): "[^"]*" Escaped string literal (quote, followed by escaped characters or non-quotes, ending in a quote): "(\\.|[^\\"])*" Integer literal: [0-9]+ Decimal integer literal (no leading zero): [1-9][0-9]*|0 Hexadecimal integer literal: 0[Xx][0-9A-Fa-f]+ Octal integer literal: 0[0-7]+ Identifier: [A-Za-z_$][A-Za-z0-9_$]* See also Lexical analysis
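Rules like these can be combined into a toy tokenizer by joining the regular expressions into a single alternation; the token names and the exact rule set below are illustrative, not any particular language's lexical grammar:

```python
import re

# A minimal sketch of a lexical grammar as an ordered list of
# (token type, regular expression) rules, loosely following the
# C-like rules above.
TOKEN_RULES = [
    ("STRING",     r'"[^"]*"'),
    ("HEX",        r"0[Xx][0-9A-Fa-f]+"),
    ("NUMBER",     r"[0-9]+"),
    ("IDENTIFIER", r"[A-Za-z_$][A-Za-z0-9_$]*"),
    ("WHITESPACE", r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_RULES))

def tokenize(text: str):
    """Split text into (token type, lexeme) pairs, dropping whitespace."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(text)
            if m.lastgroup != "WHITESPACE"]

print(tokenize('"hi" count 42'))
# [('STRING', '"hi"'), ('IDENTIFIER', 'count'), ('NUMBER', '42')]
```

Note how the space between `count` and `42` terminates the identifier, exactly as described above; a keyword table, if needed, would be a post-pass over the IDENTIFIER tokens.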
https://en.wikipedia.org/wiki/Clamp%20connection
A clamp connection is a hook-like structure formed by growing hyphal cells of certain fungi. It is a characteristic feature of basidiomycete fungi. It is created to ensure that each cell, or segment of hypha separated by septa (cross walls), receives a set of differing nuclei, which are obtained through mating of hyphae of differing sexual types. It is used to maintain genetic variation within the hypha much like the mechanisms found in croziers (hooks) during the sexual reproduction of ascomycetes. Formation Clamp connections are formed by the terminal hypha during elongation. Before the clamp connection is formed this terminal segment contains two nuclei. Once the terminal segment is long enough it begins to form the clamp connection. At the same time, each nucleus undergoes mitotic division to produce two daughter nuclei. As the clamp continues to develop it uptakes one of the daughter (green circle) nuclei and separates it from its sister nucleus. While this is occurring the remaining nuclei (orange circles) begin to migrate from one another to opposite ends of the cell. Once all these steps have occurred a septum forms, separating each set of nuclei. Use in classification Clamp connections are structures unique to the phylum Basidiomycota. Many fungi from this phylum produce spores in basidiocarps (fruiting bodies, or mushrooms), above ground. Though clamp connections are exclusive to this phylum, not all species of Basidiomycota possess these structures. As such, the presence or absences of clamp connections has been a tool in categorizing genera and species. Fossil record A fungal mycelium containing abundant clamp connections was found that dated to the Pennsylvanian era (298.9–323.2 Mya). This fossil, classified in the form genus Palaeancistrus, has hyphae that compare with extant saprophytic basidiomycetes. The oldest known clamp connections exist in hyphae present in the fossil fern Botryopteris antiqua, which predate Palaeancistrus by about 25 Ma.
https://en.wikipedia.org/wiki/Rprop
Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. This is a first-order optimization algorithm. This algorithm was created by Martin Riedmiller and Heinrich Braun in 1992. Similarly to the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each "weight". For each weight, if there was a sign change of the partial derivative of the total error function compared to the last iteration, the update value for that weight is multiplied by a factor η−, where η− < 1. If the last iteration produced the same sign, the update value is multiplied by a factor of η+, where η+ > 1. The update values are calculated for each weight in the above manner, and finally each weight is changed by its own update value, in the opposite direction of that weight's partial derivative, so as to minimise the total error function. η+ is empirically set to 1.2 and η− to 0.5. Rprop can result in very large weight increments or decrements if the gradients are large, which is a problem when using mini-batches as opposed to full batches. RMSprop addresses this problem by keeping the moving average of the squared gradients for each weight and dividing the gradient by the square root of the mean square. RPROP is a batch update algorithm. Next to the cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms. Variations Martin Riedmiller developed three algorithms, all named RPROP. Igel and Hüsken assigned names to them and added a new variant: RPROP+ is defined in A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm. RPROP− is defined in Advanced Supervised Learning in Multi-layer Perceptrons – From Backpropagation to Adaptive Learning Algorithms. Backtracking is removed from RPROP+. iRPROP− is defined in Rprop – Descript
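The sign-based update rule described above can be sketched as follows. This is a minimal RPROP−-style loop (no backtracking); the step-size bounds, the quadratic test function, and the starting point are arbitrary choices for illustration:

```python
def sign(x):
    return (x > 0) - (x < 0)

def rprop_minimize(grad, w, steps=100, eta_plus=1.2, eta_minus=0.5,
                   delta0=0.1, delta_min=1e-6, delta_max=50.0):
    """Sketch of the RPROP- rule: each weight keeps its own step size,
    grown by eta_plus while its gradient keeps the same sign and shrunk
    by eta_minus when the sign flips; gradient magnitude is ignored."""
    delta = [delta0] * len(w)
    prev_g = [0.0] * len(w)
    w = list(w)
    for _ in range(steps):
        g = grad(w)
        for i in range(len(w)):
            s = sign(g[i]) * sign(prev_g[i])
            if s > 0:                                   # same sign: speed up
                delta[i] = min(delta[i] * eta_plus, delta_max)
            elif s < 0:                                 # sign flip: slow down
                delta[i] = max(delta[i] * eta_minus, delta_min)
            w[i] -= sign(g[i]) * delta[i]               # move against the sign
        prev_g = g
    return w

# Minimize f(w) = sum((w_i - 3)^2); the gradient is 2*(w_i - 3).
w = rprop_minimize(lambda w: [2 * (x - 3) for x in w], [10.0, -4.0])
print(w)  # both components settle near 3
```

Because only the sign of the gradient enters the update, the trajectory is insensitive to the gradient's magnitude, which is exactly the property RMSprop later adapted for mini-batch training.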
https://en.wikipedia.org/wiki/Video%20router
A video router, also known as a video matrix switch or SDI router, is an electronic switch designed to route video signals from multiple input sources such as cameras, VT/DDR, computers and DVD players, to one or more display devices, such as monitors, projectors, and TVs. Inputs and outputs The number of inputs and outputs varies dramatically. Routers are normally described by number of inputs by number of outputs, e.g. 2x1, 256x256, 576x1152. Some video routers, by the use of additional drop-in cards, allow the system to be expanded for more inputs or outputs, or to support other formats. Signals The signal format that the router transports can be anything from analogue composite video using PAL and NTSC to digital video. Multi-format routers can route more than one digital video signal format, such as Serial Digital Interface (SDI), HD-SDI, or component video. Some routers have the ability to internally convert digital to analog and analog to digital. For HD video, an HDMI matrix switch can be used to switch any HDMI source to any connected HDTV using an HDMI connection. More recent developments have allowed audio embedding and de-embedding within the router; this allows for audio to be routed along with video. Crosspoints Because any of the sources can be routed to any destination, the internals of the router are arranged as a number of crosspoints which can be activated to pass the corresponding source signal to the desired destination. This architecture has guaranteed bandwidth and is non-blocking. Crosspoints can also be switched in the vertical interval to avoid losing picture information; for this, the router would need to be genlocked to either black and burst or tri-level sync. Control Many types of broadcast automation systems can be used to control a video router via IP or serial communications such as RS-422. Video routers can also be controlled by other types of user interfaces, including front panel buttons, IR remote control, or application software runn
https://en.wikipedia.org/wiki/Characterization%20test
In computer programming, a characterization test (also known as Golden Master Testing) is a means to describe (characterize) the actual behavior of an existing piece of software, and therefore protect existing behavior of legacy code against unintended changes via automated testing. This term was coined by Michael Feathers. Overview The goal of characterization tests is to help developers verify that the modifications made to a reference version of a software system did not modify its behavior in unwanted or undesirable ways. They enable, and provide a safety net for, extending and refactoring code that does not have adequate unit tests. In James Bach's and Michael Bolton's classification of test oracles, this kind of testing corresponds to the historical oracle. In contrast to the usual approach of assertions-based software testing, the outcome of the test is not determined by individual values or properties (that are checked with assertions), but by comparing a complex result of the tested software-process as a whole with the result of the same process in a previous version of the software. In a sense, characterization testing inverts traditional testing: Traditional tests check individual properties (whitelists them), where characterization testing checks all properties that are not removed (blacklisted). When creating a characterization test, one must observe what outputs occur for a given set of inputs. Given an observation that the legacy code gives a certain output based on given inputs, then a test can be written that asserts that the output of the legacy code matches the observed result for the given inputs. For example, if one observes that f(3.14) == 42, then this could be created as a characterization test. Then, after modifications to the system, the test can determine if the modifications caused changes in the results when given the same inputs. Unfortunately, as with any testing, it is generally not possible to create a characterization test fo
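A minimal sketch of a characterization test in the golden-master style: the first run records the legacy function's outputs for a fixed set of inputs, and later runs assert that the outputs are unchanged. The legacy function, file name, and input set below are all hypothetical:

```python
import json
import os

def legacy_process(x):
    # Stand-in for untested legacy code whose behavior we want to pin down.
    return round(x * 13.37, 2)

GOLDEN = "golden_master.json"
INPUTS = [0, 1, 3.14, -2.5]

def test_characterization():
    actual = {str(x): legacy_process(x) for x in INPUTS}
    if not os.path.exists(GOLDEN):       # first run: record, don't judge
        with open(GOLDEN, "w") as f:
            json.dump(actual, f)
        return
    with open(GOLDEN) as f:              # later runs: compare against the master
        expected = json.load(f)
    assert actual == expected

test_characterization()  # records the golden master
test_characterization()  # verifies behavior is unchanged
print("characterization test passed")
```

As the article notes, the recorded outputs characterize what the code does, not what it should do; a change in behavior fails the test and forces a human decision about whether the change was intended.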
https://en.wikipedia.org/wiki/Sign%20%28mathematics%29
In mathematics, the sign of a real number is its property of being either positive, negative, or 0. In some contexts, it makes sense to consider a signed zero (such as floating-point representations of real numbers within computers). Depending on local conventions, zero may be considered as being neither positive nor negative (having no sign or a unique third sign), or it may be considered both positive and negative (having both signs). Whenever not specifically mentioned, this article adheres to the first convention (zero having undefined sign). In mathematics and physics, the phrase "change of sign" is associated with the generation of the additive inverse (negation, or multiplication by −1) of any object that allows for this construction, and is not restricted to real numbers. It applies among other objects to vectors, matrices, and complex numbers, which are not prescribed to be only either positive, negative, or zero. The word "sign" is also often used to indicate other binary aspects of mathematical objects that resemble positivity and negativity, such as odd and even (sign of a permutation), sense of orientation or rotation (cw/ccw), one-sided limits, and other concepts described below. Sign of a number Numbers from various number systems, like integers, rationals, complex numbers, quaternions, octonions, ... may have multiple attributes that fix certain properties of a number. A number system that bears the structure of an ordered ring contains a unique number that when added with any number leaves the latter unchanged. This unique number is known as the system's additive identity element. For example, the integers have the structure of an ordered ring. This number is generally denoted as 0. Because of the total order in this ring, there are numbers greater than zero, called the positive numbers. Another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with t
https://en.wikipedia.org/wiki/Higher%20spin%20alternating%20sign%20matrix
In mathematics, a higher spin alternating sign matrix is a generalisation of the alternating sign matrix (ASM), where the columns and rows sum to an integer r (the spin) rather than simply summing to 1 as in the usual alternating sign matrix definition. HSASMs are square matrices whose elements may be integers in the range −r to +r. When traversing any row or column of an ASM or HSASM, the partial sum of its entries must always be non-negative. High spin ASMs have found application in statistical mechanics and physics, where they have been found to represent symmetry groups in ice crystal formation. Some typical examples of HSASMs are shown below: The set of HSASMs is a superset of the ASMs. The extreme points of the convex hull of the set of r-spin HSASMs are themselves integer multiples of the usual ASMs.
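The defining conditions can be checked mechanically. The following sketch assumes exactly the conditions stated above: every row and column sums to the spin r, and every partial sum along a row or column (from either end) is non-negative:

```python
def is_hsasm(matrix, r):
    """Check the higher spin ASM conditions for spin r: each row and
    column sums to r, and every partial sum along a row or column,
    taken from either end, stays non-negative."""
    cols = list(zip(*matrix))
    for line in list(matrix) + cols:
        if sum(line) != r:
            return False
        for seq in (line, line[::-1]):   # partial sums from both ends
            partial = 0
            for entry in seq:
                partial += entry
                if partial < 0:
                    return False
    return True

# An ordinary alternating sign matrix is exactly the r = 1 case:
asm = [[0, 1, 0],
       [1, -1, 1],
       [0, 1, 0]]
print(is_hsasm(asm, 1))  # True
print(is_hsasm(asm, 2))  # False (rows and columns sum to 1, not 2)
```

Checking partial sums from both ends enforces the "traversing any row or column" condition regardless of direction.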
https://en.wikipedia.org/wiki/Botanical%20Garden%20of%20Vilnius%20University
Botanical Garden of Vilnius University () is a botanical garden situated in Vilnius, Lithuania. History The garden was established by professor Jean-Emmanuel Gilibert of Vilnius University in 1781. In 1832 the Vilnius University and Botanical Garden were closed. In 1919, the Botanical Garden of the Polish Stefan Batory University was started in a new location, in Vingis (known as Zakret at that time). In 1975 territory of the garden was expanded. Since then the main part of the garden is in Kairėnai (address: Kairėnų 43, LT-10239 Vilnius) which is situated in Antakalnis elderate of Vilnius. There is also a department of the garden in Vingis Park (address: M. K. Čiurlionio 110, LT-03100 Vilnius). Collection The collection of the botanical garden includes 11,000 taxa of plants, including: 2,500 taxa in Department of Dendrology 3,000 taxa in Department of Plant Systematic and Geography 3,200 taxa in Department of Floriculture 300 taxa in Department of Plant Genetic 750 taxa in Department of Pomology 100 taxa in Laboratory of Plant Physiology and Isolated Tissue Cultures About one third of the Lithuanian vascular plant inhabit the territory of the garden. Research The botanical garden carries out research in the areas of biotechnology, horticulture, molecular genetics, conservation, ethnobotany, systematics and taxonomy.
https://en.wikipedia.org/wiki/Joint%20Worldwide%20Intelligence%20Communications%20System
The Joint Worldwide Intelligence Communication System (JWICS) is the United States Department of Defense's secure intranet system that houses top secret and sensitive compartmented information. JWICS superseded the earlier DSNET2 and DSNET3, the Top Secret and SCI levels of the Defense Data Network based on ARPANET technology. The system deals primarily with intelligence information and was one of the networks accessed by Chelsea Manning, in the leaking of sensitive footage and intelligence during the Afghanistan and Iraq wars to whistleblower organization WikiLeaks in 2010. The video used in WikiLeaks' Collateral Murder and US diplomatic cables were leaked by Manning. In 2023, it was also accessed by Jack Teixeira, who leaked information about the war in Ukraine. Because of the information it houses, JWICS is subject to discussion around cybersecurity and the United States' vulnerability to cyber threats. Opinions surrounding the Joint Worldwide Intelligence Communication system are varied. Some emphasize its importance as a measure to protect intelligence that helps to ensure the safety of US military interests and personnel. Others scrutinize the system for standing in the way of the transparency and accountability of government. JWICS in practice The Joint Worldwide Intelligence Communications System (JWICS) is a secure intranet system utilized by the United States Department of Defense to house "Top Secret/Sensitive Compartmented Information". In day-to-day usage, the JWICS is used primarily by members of the Intelligence Community, such as the DIA within the DoD, and the Federal Bureau of Investigation under the Justice Department. Conversely, SIPRNet and NIPRNet account for the overwhelming bulk of usage within DoD and non-intelligence government agencies and departments. There are three main router networks operated by the Department of Defense. Each is separated by the types of information they deal with. At the most open level, the Non-Classified I
https://en.wikipedia.org/wiki/Mass%20effect%20%28medicine%29
In medicine, a mass effect is the effect of a growing mass that results in secondary pathological effects by pushing on or displacing surrounding tissue. In oncology, the mass typically refers to a tumor. For example, cancer of the thyroid gland may cause symptoms due to compressions of certain structures of the head and neck; pressure on the laryngeal nerves may cause voice changes, narrowing of the windpipe may cause stridor, pressure on the gullet may cause dysphagia and so on. Surgical removal or debulking is sometimes used to palliate symptoms of the mass effect even if the underlying pathology is not curable. In neurology, a mass effect is the effect exerted by any mass, including, for example, hydrocephalus (cerebrospinal fluid buildup) or an evolving intracranial hemorrhage (bleeding within the skull) presenting with a clinically significant hematoma. The hematoma can exert a mass effect on the brain, increasing intracranial pressure and potentially causing midline shift or deadly brain herniation. In the past this effect held additional diagnostic importance since prior to the invention of modern tomographic soft-tissue imaging utilizing MRI or CT it was not possible to directly image many kinds of primary intracranial lesions. Therefore, in those days, the mass effect of these abnormalities on surrounding structures was sometimes used to indirectly infer the existence of the primary abnormalities themselves, for example by using a cerebral angiography to observe the secondary vascular displacement caused by a subdural hematoma pushing on the brain, or by looking for a distortion caused by a tumor on the normal outline of the ventricles as depicted on a pneumoencephalogram. These studies were often invasive and uncomfortable for patients and provided only a partial assessment of the primary condition being evaluated. Nowadays modern diagnostic tools exist which allow physicians to easily locate and visualize all kinds of intracranial lesions without hav
https://en.wikipedia.org/wiki/Generating%20function%20%28physics%29
In physics, and more specifically in Hamiltonian mechanics, a generating function is, loosely, a function whose partial derivatives generate the differential equations that determine a system's dynamics. Common examples are the partition function of statistical mechanics, the Hamiltonian, and the function which acts as a bridge between two sets of canonical variables when performing a canonical transformation. In canonical transformations There are four basic generating functions, summarized by the following table: Example Sometimes a given Hamiltonian can be turned into one that looks like the harmonic oscillator Hamiltonian, which is For example, with the Hamiltonian where p is the generalized momentum and q is the generalized coordinate, a good canonical transformation to choose would be This turns the Hamiltonian into which is in the form of the harmonic oscillator Hamiltonian. The generating function F for this transformation is of the third kind, To find F explicitly, use the equation for its derivative from the table above, and substitute the expression for P from equation (), expressed in terms of p and Q: Integrating this with respect to Q results in an equation for the generating function of the transformation given by equation (): To confirm that this is the correct generating function, verify that it matches (): See also Hamilton–Jacobi equation Poisson bracket
https://en.wikipedia.org/wiki/Institute%20for%20Quantum%20Computing
The Institute for Quantum Computing (IQC) is an affiliate scientific research institute of the University of Waterloo located in Waterloo, Ontario with a multidisciplinary approach to the field of quantum information processing. IQC was founded in 2002 primarily through a donation made by Mike Lazaridis and his wife Ophelia, whose substantial donations have continued over the years. The institute is now located in the Mike & Ophelia Lazaridis Quantum-Nano Centre and the Research Advancement Centre at the University of Waterloo. Its executive director is physics professor Norbert Lütkenhaus, and the institute hosts researchers based in 7 departments across 3 faculties at the University of Waterloo. In addition to theoretical and experimental research on quantum computing, IQC also hosts academic conferences and workshops, short courses for undergraduate and high school students, and scientific outreach events including open houses and tours for the public. History The Institute for Quantum Computing was officially created in 2002, sparked by Research In Motion co-founder Mike Lazaridis and then-president of the University of Waterloo, David Johnston, for research into quantum information. Since inception, Lazaridis has provided more than $100 million in private funding for IQC. The institute is a collaboration between academia, the private sector, and the federal and provincial governments. Raymond Laflamme is the founding executive director. At its establishment, the institute was composed of only a handful of researchers from the Departments of Computer Science and Physics. Ten years later, there are more than 200 researchers across six departments within the Faculties of Science, Mathematics, and Engineering at the University of Waterloo. In 2008, IQC moved into the Research Advancement Centre 1 (RAC I) in the University of Waterloo's Research & Technology Park. In 2010, research operations expanded into the adjacent building, Research Advancement Centre 2 (RAC II). In 2012
https://en.wikipedia.org/wiki/Restoring%20force
In physics, the restoring force is a force that acts to bring a body back to its equilibrium position. The restoring force is a function only of the position of the mass or particle, and it is always directed back toward the equilibrium position of the system. The restoring force is often referred to in simple harmonic motion; the force responsible for restoring a body's original size and shape is called the restoring force. An example is the action of a spring. An idealized spring exerts a force proportional to the amount of deformation of the spring from its equilibrium length, exerted in a direction opposing the deformation. Pulling the spring to a greater length causes it to exert a force that brings the spring back toward its equilibrium length. The amount of force can be determined by multiplying the spring constant, a characteristic of the spring, by the amount of stretch; this relationship is known as Hooke's law. Another example is a pendulum. When a pendulum is not swinging, all the forces acting on it are in equilibrium: the force due to gravity on the mass at the end of the pendulum is balanced by the tension in the string holding it up. When a pendulum is put in motion, the place of equilibrium is at the bottom of the swing, the location where the pendulum rests. When the pendulum is at the top of its swing, the force returning the pendulum to this midpoint is gravity. As a result, gravity may be seen as a restoring force. See also Response amplitude operator
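The spring relationship described above can be sketched as a few lines of code. This is a minimal illustration of Hooke's law, F = −kx; the class name and the numeric values for the spring constant and displacement are assumptions chosen for the example, not taken from the article.

```java
// Hooke's law: the restoring force of an ideal spring is proportional
// to the deformation and directed against it (F = -k * x).
// The numeric values below are illustrative assumptions.
public class Spring {
    static double restoringForce(double k, double displacement) {
        return -k * displacement; // minus sign: the force opposes the deformation
    }

    public static void main(String[] args) {
        double k = 50.0; // spring constant in N/m (assumed)
        double x = 0.1;  // stretch in m (assumed)
        System.out.println(restoringForce(k, x)); // prints -5.0
    }
}
```

Stretching the spring (positive x) yields a negative force pulling it back; compressing it (negative x) yields a positive force pushing it out, which is exactly the "directed back toward equilibrium" behaviour the article describes.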
https://en.wikipedia.org/wiki/Turnstile%20%28symbol%29
In mathematical logic and computer science, the symbol ⊢ has taken the name turnstile because of its resemblance to a typical turnstile if viewed from above. It is also referred to as tee and is often read as "yields", "proves", "satisfies" or "entails". Interpretations The turnstile represents a binary relation. It has several different interpretations in different contexts: In epistemology, Per Martin-Löf (1996) analyzes the symbol thus: "...[T]he combination of Frege's , judgement stroke [ | ], and , content stroke [—], came to be called the assertion sign." Frege's notation for a judgement of some content can then be read I know is true. In the same vein, a conditional assertion can be read as: From , I know that In metalogic, the study of formal languages, the turnstile represents syntactic consequence (or "derivability"). This is to say that it shows that one string can be derived from another in a single step, according to the transformation rules (i.e. the syntax) of some given formal system. As such, the expression means that is derivable from in the system. Consistent with its use for derivability, a "⊢" followed by an expression without anything preceding it denotes a theorem, which is to say that the expression can be derived from the rules using an empty set of axioms. As such, the expression means that is a theorem in the system. In proof theory, the turnstile is used to denote "provability" or "derivability". For example, if is a formal theory and is a particular sentence in the language of the theory then means that is provable from . This usage is demonstrated in the article on propositional calculus. The syntactic consequence of provability should be contrasted with semantic consequence, denoted by the double turnstile symbol . One says that is a semantic consequence of , or , when in all possible valuations in which is true, is also true. For propositional logic, it may be shown that semantic consequence and derivability a
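The symbols elided in the extract above are standardly written as follows. This restates the textbook notation for the two relations being contrasted (it is not text recovered from the original):

```latex
% Syntactic consequence (single turnstile): \varphi is derivable from \Gamma
\Gamma \vdash \varphi
% Semantic consequence (double turnstile): every valuation making all of
% \Gamma true also makes \varphi true
\Gamma \models \varphi
% For propositional logic the two coincide (soundness and completeness):
\Gamma \vdash \varphi \iff \Gamma \models \varphi
```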
https://en.wikipedia.org/wiki/Bellairs%20Research%20Institute
The Bellairs Research Institute, located on the Caribbean island of Barbados, was founded in 1954 as a marine biology field-station for McGill University. The main campus of McGill University is in Montreal, Quebec, Canada. Bellairs' initial funding came from a bequest by the British naval commander Carlyon Bellairs, for whom the institute is named. The institute is used by both undergraduate and graduate students in a range of subjects, including marine science, geography, economics, engineering and international development studies. Bellairs hosts numerous McGill University field-courses and workshops throughout the year, including Applied Tropical Ecology, Geography, and the Barbados Field Study Semester (BFSS). Bellairs also holds annual field courses from other universities from around the world, including the University of Toronto (marine biology) and Western Michigan University (archeology). Location Bellairs is located just north of the historic town of Holetown, in the parish of St. James, on the west coast of Barbados. The facility is situated between the Folkestone Marine Park and Museum to the south and the Coral Reef Club hotel to the north. The shallow coral reef and the calm, clear water found on the west coast of Barbados make Bellairs ideally suited to marine research. The fringing reef adjacent to the research institute is known as Folkestone Reef, although researchers at the institute typically refer to the two flame-shaped formations as North and South Bellairs. The southern reef extends approximately 900 feet (275 meters) from shore and the northern reef extends approximately 350 feet (107 meters) from shore. Both reefs have extensive spur and groove formations along their seaward edge, which range in depth from about 15 to 25 feet (4.5 to 7.5 meters). These reefs were extensively mapped in 2018, using 3D modeling techniques, by the Canadian company Reef Smart Guides, which was founded, and is managed by, former McGill University graduat
https://en.wikipedia.org/wiki/Computerized%20classification%20test
A computerized classification test (CCT) is, as its name suggests, a test administered by computer for the purpose of classifying examinees. The most common CCT is a mastery test, where the test classifies examinees as "Pass" or "Fail," but the term also includes tests that classify examinees into more than two categories. While the term may generally be considered to refer to all computer-administered tests for classification, it is usually used to refer to tests that are interactively administered or of variable length, similar to computerized adaptive testing (CAT). Like CAT, variable-length CCTs can accomplish the goal of the test (accurate classification) with a fraction of the number of items used in a conventional fixed-form test. A CCT requires several components: An item bank calibrated with a psychometric model selected by the test designer A starting point An item selection algorithm A termination criterion and scoring procedure The starting point is not a topic of contention; research on CCT primarily investigates the application of different methods for the other three components. Note: The termination criterion and scoring procedure are separate in CAT, but the same in CCT, because the test is terminated when a classification is made; therefore, there are five components that must be specified to design a CAT. An introduction to CCT is found in Thompson (2007) and a book by Parshall, Spray, Kalohn and Davey (2006). A bibliography of published CCT research is found below. How it works A CCT is very similar to a CAT. Items are administered one at a time to an examinee. After the examinee responds to the item, the computer scores it and determines if the examinee is able to be classified yet. If they are, the test is terminated and the examinee is classified. If not, another item is administered. This process repeats until the examinee is classified or another ending point is satisfied (all it
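The variable-length administer-score-decide loop described above can be sketched as follows. This is a hypothetical illustration (in Java) using a simple likelihood-ratio termination rule in the spirit of the sequential probability ratio test, one common way to decide when a classification can be made; the class name, response probabilities, and decision thresholds are invented for the example, not taken from the sources cited.

```java
import java.util.List;

// Hypothetical sketch of a variable-length Pass/Fail classification loop.
// pMaster / pNonMaster: assumed probabilities that a master (resp. a
// non-master) answers an item correctly; lower / upper: assumed
// decision thresholds on the likelihood ratio.
public class CctSketch {
    static String classify(List<Boolean> responses,
                           double pMaster, double pNonMaster,
                           double lower, double upper) {
        double ratio = 1.0; // likelihood ratio of "master" vs "non-master"
        for (boolean correct : responses) {    // items scored one at a time
            ratio *= correct ? pMaster / pNonMaster
                             : (1 - pMaster) / (1 - pNonMaster);
            if (ratio >= upper) return "Pass"; // classify and terminate early
            if (ratio <= lower) return "Fail";
        }
        return "Undecided"; // item supply exhausted before a classification
    }

    public static void main(String[] args) {
        // Three correct answers in a row already push the ratio past the
        // upper threshold, so the test terminates before item four.
        List<Boolean> responses = List.of(true, true, true, true, true);
        System.out.println(classify(responses, 0.8, 0.4, 0.125, 8.0)); // prints Pass
    }
}
```

The sketch shows why a variable-length CCT can use far fewer items than a fixed-form test: a clearly passing or failing examinee triggers the termination criterion after only a handful of responses.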
https://en.wikipedia.org/wiki/Socioemotional%20selectivity%20theory
Socioemotional selectivity theory (SST; developed by Stanford psychologist Laura L. Carstensen) is a life-span theory of motivation. The theory maintains that as time horizons shrink, as they typically do with age, people become increasingly selective, investing greater resources in emotionally meaningful goals and activities. According to the theory, motivational shifts also influence cognitive processing. Aging is associated with a relative preference for positive over negative information in individuals who have had rewarding relationships. This selective narrowing of social interaction maximizes positive emotional experiences and minimizes emotional risks as individuals become older. According to this theory, older adults systematically hone their social networks so that available social partners satisfy their emotional needs. The theory also focuses on the types of goals that individuals are motivated to achieve. Knowledge-related goals aim at knowledge acquisition, career planning, the development of new social relationships and other endeavors that will pay off in the future. Emotion-related goals are aimed at emotion regulation, the pursuit of emotionally gratifying interactions with social partners and other pursuits whose benefits can be realized in the present. When people perceive their future as open ended, they tend to focus on future-oriented and development- or knowledge-related goals, but when they feel that time is running out and the opportunity to reap rewards from future-oriented goals' realization is dwindling, their focus tends to shift towards present-oriented and emotion- or pleasure-related goals. Research on this theory often compares age groups (e.g., young adulthood vs. old adulthood), but the shift in goal priorities is a gradual process that begins in early adulthood. Importantly, the theory contends that the cause of these goal shifts is not age itself, i.e., not the passage of time itself, but rather an age-associated shift in ti
https://en.wikipedia.org/wiki/Gliophorus%20psittacinus
Gliophorus psittacinus, commonly known as the parrot toadstool or parrot waxcap, is a colourful member of the genus Gliophorus, found across Northern Europe. It was formerly known as Hygrocybe psittacina, but a molecular phylogenetics study found it to belong in the genus Gliophorus. It had already been placed in Gliophorus, but it had been considered a synonym of Hygrocybe. Description The parrot toadstool is a small mushroom, with a convex to umbonate cap up to in diameter, which is green when young and later yellowish or even pinkish tinged. The stipe, measuring in length and 3–5 mm in width, is green to greenish yellow. The broad adnate gills are greenish with yellow edges and spore print white. The green colouring persists at the stem apex even in old specimens. The spores are white, elliptical, smooth and inamyloid. Its odour and taste are mild. There are no known chemical tests. It fruits late summer to autumn (September to November). Distribution and habitat Gliophorus psittacinus is widely distributed in grasslands in western Europe, United Kingdom, Iceland, Greenland, the Americas, South Africa, Japan, being found in late summer and autumn. In Europe it is apparently in decline due to the degradation of habitats. Early Australian records of this form have been found to be the similar green toadstools Gliophorus graminicolor or G. viridis on reexamination. Gliophorus psittacinus is known to occur at one site in the Lane Cove River valley near Sydney. Edibility Gliophorus psittacinus is generally considered edible, but not worthwhile due to its small size and sliminess. Consumption of over 20 specimens in one sitting can cause gastrointestinal disorders.
https://en.wikipedia.org/wiki/Generics%20in%20Java
Generics are a facility of generic programming that were added to the Java programming language in 2004 within version J2SE 5.0. They were designed to extend Java's type system to allow "a type or method to operate on objects of various types while providing compile-time type safety". The aspect of compile-time type safety was not fully achieved, since it was shown in 2016 that it is not guaranteed in all cases. The Java collections framework supports generics to specify the type of objects stored in a collection instance. In 1998, Gilad Bracha, Martin Odersky, David Stoutamire and Philip Wadler created Generic Java, an extension to the Java language to support generic types. Generic Java was incorporated in Java with the addition of wildcards. Hierarchy and classification According to the Java Language Specification: A type variable is an unqualified identifier. Type variables are introduced by generic class declarations, generic interface declarations, generic method declarations, and by generic constructor declarations. A class is generic if it declares one or more type variables. It defines one or more type variables that act as parameters. A generic class declaration defines a set of parameterized types, one for each possible invocation of the type parameter section. All of these parameterized types share the same class at runtime. An interface is generic if it declares one or more type variables. It defines one or more type variables that act as parameters. A generic interface declaration defines a set of types, one for each possible invocation of the type parameter section. All parameterized types share the same interface at runtime. A method is generic if it declares one or more type variables. These type variables are known as the formal type parameters of the method. The form of the formal type parameter list is identical to a type parameter list of a class or interface. A constructor can be declared as generic, independently of whether the class that the cons
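The declarations classified above can be illustrated briefly; the class and method names here are invented for the example. A generic class declares a type variable that acts as a parameter, and a generic method declares its own formal type parameter:

```java
import java.util.List;

// A generic class: the type variable T acts as a parameter of Box.
// Box<String>, Box<Integer>, etc. are parameterized types that all
// share the same Box class at runtime.
class Box<T> {
    private final T value;
    Box(T value) { this.value = value; }
    T get() { return value; }
}

public class GenericsDemo {
    // A generic method: <E> is the method's formal type parameter,
    // declared before the return type.
    static <E> E first(List<E> items) {
        return items.get(0);
    }

    public static void main(String[] args) {
        Box<String> box = new Box<>("hello"); // compile-time type safety:
        String s = box.get();                 // no cast required
        int n = first(List.of(1, 2, 3));      // E inferred as Integer
        System.out.println(s + " " + n);      // prints "hello 1"
    }
}
```

Passing, say, a `Box<String>` where a `Box<Integer>` is expected is rejected at compile time, which is the type safety the quoted design goal refers to.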
https://en.wikipedia.org/wiki/Cereals%20%26%20Grains%20Association
Cereals & Grains Association (formerly AACC International, formerly the American Association of Cereal Chemists) is a non-profit professional organization of members who are specialists in the use of cereal grains in foods. Founded in 1916, they are headquartered in Eagan, Minnesota. Sections Cereals & Grains Association has nine active sections. Four of the nine active sections are located outside of the United States and they are located in western Canada, Australia, Japan, and Europe. Divisions Cereals & Grains Association has eleven divisions. These include biotechnology, carbohydrate, engineering/processing, milling/baking, nutrition, protein, rheology, rice, food safety and quality, pet and animal food, and pulses. Publications Cereals & Grains Association publishes Cereal Chemistry, a bimonthly publication in cereal science, including processing, oils, and laboratory tests on these grains (corn, oat, barley, rye, etc.), Cereal Foods World, the bi-monthly magazine of the association that deals with research papers and professional issues related to those who are involved in cereal science, and books on different issues relating to grains and cereals (storage, milling, processing, food quality, food safety, ingredients, dietary fiber, and nutrition). Continuing Education Throughout its existence, Cereals & Grains Association has offered continuing education or professional development courses to its members and non-members on issues dealing with cereal science and grain processing issues. These courses have included food safety, employee safety, extrusion, processing, and more.
https://en.wikipedia.org/wiki/Barilla
Barilla refers to several species of salt-tolerant (halophyte) plants that, until the 19th century, were the primary source of soda ash and hence of sodium carbonate. The word "barilla" was also used directly to refer to the soda ash obtained from plant sources. The word is an anglicization of the Spanish word barrilla for saltwort plants (a particular category of halophytes). A very early reference indicating the value placed upon soda ash in Catalonia has been given by Glick, who notes that "In 1189 the monastery of Poblet granted to the glassblower Guillem the right to gather glasswort in return for tithe and two hundred pounds of sheet glass paid annually (The site of these glassworks, at Narola, was excavated in 1935.)." By the 18th century, Spain's barilla industry was exporting large quantities of soda ash of exceptional purity; the product was refined from the ashes of barilla plants that were specifically cultivated for this purpose. Presumably the word "barilla" entered English and other languages as a consequence of this export trade. The main Spanish barilla species included (i) Salsola soda (the common English term barilla plant for Salsola soda reflects this usage), (ii) Salsola kali, and (iii) Halogeton sativus (formerly Salsola sativa). Fairly recently, Pérez has concluded that the most prominent species was likely Halogeton sativus; earlier authors have tended to favor Salsola soda. The word "barilla" was also used directly to refer to soda ash from any plant source, including not only the saltworts grown in Spain, but also glassworts, mangroves, and seaweed. These types of plant-derived soda ash are impure alkali substances that contain widely varying amounts of sodium carbonate (Na2CO3), some additional potassium carbonate (also an alkali), and a predominance of non-alkali impurities. The sodium carbonate, which is water-soluble, is "lixiviated" (extracted with water) from the ashes of the burned, dried plants. The resulting solution is boiled d
https://en.wikipedia.org/wiki/D%27Alembert%E2%80%93Euler%20condition
In mathematics and physics, especially the study of mechanics and fluid dynamics, the d'Alembert-Euler condition is a requirement that the streaklines of a flow are irrotational. Let x = x(X,t) be the coordinates of the point x into which X is carried at time t by a (fluid) flow. Let be the second material derivative of x. Then the d'Alembert-Euler condition is: The d'Alembert-Euler condition is named for Jean le Rond d'Alembert and Leonhard Euler who independently first described its use in the mid-18th century. It is not to be confused with the Cauchy–Riemann conditions.
https://en.wikipedia.org/wiki/Generalized%20quantifier
In formal semantics, a generalized quantifier (GQ) is an expression that denotes a set of sets. This is the standard semantics assigned to quantified noun phrases. For example, the generalized quantifier every boy denotes the set of sets of which every boy is a member: This treatment of quantifiers has been essential in achieving a compositional semantics for sentences containing quantifiers. Type theory A version of type theory is often used to make the semantics of different kinds of expressions explicit. The standard construction defines the set of types recursively as follows: e and t are types. If a and b are both types, then so is Nothing is a type, except what can be constructed on the basis of lines 1 and 2 above. Given this definition, we have the simple types e and t, but also a countable infinity of complex types, some of which include: Expressions of type e denote elements of the universe of discourse, the set of entities the discourse is about. This set is usually written as . Examples of type e expressions include John and he. Expressions of type t denote a truth value, usually rendered as the set , where 0 stands for "false" and 1 stands for "true". Examples of expressions that are sometimes said to be of type t are sentences or propositions. Expressions of type denote functions from the set of entities to the set of truth values. This set of functions is rendered as . Such functions are characteristic functions of sets. They map every individual that is an element of the set to "true", and everything else to "false." It is common to say that they denote sets rather than characteristic functions, although, strictly speaking, the latter is more accurate. Examples of expressions of this type are predicates, nouns and some kinds of adjectives. In general, expressions of complex types denote functions from the set of entities of type to the set of entities of type , a construct we can write as follows: . We can now assign types to the w
https://en.wikipedia.org/wiki/Restricted%20sumset
In additive number theory and combinatorics, a restricted sumset has the form where are finite nonempty subsets of a field F and is a polynomial over F. If is a constant non-zero function, for example for any , then is the usual sumset which is denoted by if When S is written as which is denoted by if Note that |S| > 0 if and only if there exist with Cauchy–Davenport theorem The Cauchy–Davenport theorem, named after Augustin Louis Cauchy and Harold Davenport, asserts that for any prime p and nonempty subsets A and B of the prime order cyclic group we have the inequality where , i.e. we're using modular arithmetic. It can be generalised to arbitrary (not necessarily abelian) groups using a Dyson transform. If are subsets of a group , then where is the size of the smallest nontrivial subgroup of (we set it to if there is no such subgroup). We may use this to deduce the Erdős–Ginzburg–Ziv theorem: given any sequence of 2n−1 elements in the cyclic group , there are n elements that sum to zero modulo n. (Here n does not need to be prime.) A direct consequence of the Cauchy-Davenport theorem is: Given any sequence S of p−1 or more nonzero elements, not necessarily distinct, of , every element of can be written as the sum of the elements of some subsequence (possibly empty) of S. Kneser's theorem generalises this to general abelian groups. Erdős–Heilbronn conjecture The Erdős–Heilbronn conjecture posed by Paul Erdős and Hans Heilbronn in 1964 states that if p is a prime and A is a nonempty subset of the field Z/pZ. This was first confirmed by J. A. Dias da Silva and Y. O. Hamidoune in 1994 who showed that where A is a finite nonempty subset of a field F, and p(F) is a prime p if F is of characteristic p, and p(F) = ∞ if F is of characteristic 0. Various extensions of this result were given by Noga Alon, M. B. Nathanson and I. Ruzsa in 1996, Q. H. Hou and Zhi-Wei Sun in 2002, and G. Karolyi in 2004. Combinatorial Nullstellensatz A p
https://en.wikipedia.org/wiki/CoreASM
CoreASM is an open source project (licensed under Academic Free License version 3.0) that focuses on the design of a lean executable ASM (Abstract State Machines) language, in combination with a supporting tool environment for high-level design, experimental validation, and formal verification (where appropriate) of abstract system models. Abstract state machines are known for their versatility in the modeling of algorithms, architectures, languages, protocols, and virtually all kinds of sequential, parallel, and distributed systems. The ASM formalism has been studied extensively by researchers in academia and industry for more than 15 years with the intention to bridge the gap between formal and pragmatic approaches. Model-based systems engineering can benefit from abstract executable specifications as a tool for design exploration and experimental validation through simulation and testing. Building on experiences with two generations of ASM tools, a novel executable ASM language, called CoreASM, is being developed (see the CoreASM homepage). The CoreASM language emphasizes freedom of experimentation, and supports the evolutionary nature of design as a product of creativity. It is particularly suited to exploring the problem space for the purpose of writing an initial specification. The CoreASM language allows the writing of highly abstract and concise specifications by minimizing the need for encoding in mapping the problem space to a formal model, and by allowing explicit declaration of the parts of the specification that are purposely left abstract. The principle of minimality, in combination with the robustness of the underlying mathematical framework, improves the modifiability of specifications, while effectively supporting the highly iterative nature of specification and design.
https://en.wikipedia.org/wiki/CodeGear
CodeGear is a wholly owned division of Embarcadero Technologies. CodeGear develops software development tools such as the Delphi Integrated development environment, the programming language Delphi, and the database server InterBase. Originally a division of Borland Software Corporation, it was launched on 14 November 2006. History On 8 February 2006 Borland announced that it would seek a buyer for its IDE division and database products. During the spin-off negotiations, these divisions ("developer tools group") internally reorganized into a division called CodeGear. Eventually, five parties bid for the group. However, no bidder offered Borland "numbers that appropriately reflected the value we think is in the business," according to a conference call with Borland CEO Tod Nielsen. Borland's 2006 annual report showed that its CodeGear IDE business had sales of US$75.7 million in 2006, which accounted for 25 percent of Borland's total revenue. On 7 May 2008, Borland Software Corporation and Embarcadero Technologies announced that Embarcadero had "signed a definitive asset purchase agreement to purchase CodeGear." On 1 July 2008, Embarcadero Technologies announced the completed acquisition of CodeGear from Borland Software Corporation on 30 June 2008, for approximately $24.5 million. Embarcadero Technologies, Inc. era On 25 August 2008, Embarcadero Technologies announced the release of Delphi 2009 and C++Builder 2009. On 28 September 2008, Embarcadero Technologies announced the release of InterBase SMP 2009. On 1 December 2008, Embarcadero Technologies announced the general availability of CodeGear RAD Studio 2009. Products RAD Studio (including Delphi, Delphi Prism and C++Builder) Delphi for PHP Delphi Delphi Prism JBuilder InterBase C++Builder JGear 3rdRail
https://en.wikipedia.org/wiki/Embeddable%20Linux%20Kernel%20Subset
The Embeddable Linux Kernel Subset (ELKS), formerly known as Linux-8086, is a Linux-like operating system kernel. It is a subset of the Linux kernel, intended for 16-bit computers with limited processor and memory resources, such as machines powered by Intel 8086 and compatible microprocessors not supported by 32-bit Linux. Features and compatibility ELKS is free software and available under the GNU General Public License (GPL). It can work with early 16-bit and many 32-bit x86 (8088, 8086) computers like IBM PC compatible systems, and later x86 models in real mode. Another useful area is single-board microcomputers, intended as educational tools for "homebrew" projects (hardware hacking), as well as embedded controller systems (e.g. automation). Early versions of ELKS also ran on Psion 3a and 3aR SIBO (SIxteen Bit Organiser) PDAs with NEC V30 CPUs, providing another possible field of operation (gadget hardware), if ported to such a platform. This effort was called ELKSibo. Due to lack of interest, SIBO support was removed from version 0.4.0. Native ELKS programs may run emulated with Elksemu, allowing 8086 code to be used under Linux-i386. An effort to provide ELKS with an Eiffel compliant library also exists. History Development of Linux-8086 started in 1995 by Linux kernel developers Alan Cox and Chad Page as a fork of the standard Linux. By early 1996 the project was renamed ELKS (Embeddable Linux Kernel Subset), and in 1997 the first website www.elks.ecs.soton.ac.uk/ (offline, ) was created. ELKS version 0.0.63 followed on August 8 that same year. On June 22, 1999, ELKS release 0.0.77 was available, the first version able to run a graphical user interface (the Nano-X Window System). On July 21, ELKS booted on a Psion PDA with SIBO architecture. ELKS 0.0.82 came out on January 10, 2000. By including the SIBO port, it became the first official version running on other computer hardware than the original 8086 base. On March 3 that year, the project was regist
https://en.wikipedia.org/wiki/Event%20%28computing%29
In programming and software design, an event is an action or occurrence recognized by software, often originating asynchronously from the external environment, that may be handled by the software. Computer events can be generated or triggered by the system, by the user, or in other ways. Typically, events are handled synchronously with the program flow; that is, the software may have one or more dedicated places where events are handled, frequently an event loop. A source of events includes the user, who may interact with the software through the computer's peripherals - for example, by typing on the keyboard. Another source is a hardware device such as a timer. Software can also trigger its own set of events into the event loop, e.g. to communicate the completion of a task. Software that changes its behavior in response to events is said to be event-driven, often with the goal of being interactive. Description Event driven systems are typically used when there is some asynchronous external activity that needs to be handled by a program; for example, a user who presses a button on their mouse. An event driven system typically runs an event loop, that keeps waiting for such activities, e.g. input from devices or internal alarms. When one of these occurs, it collects data about the event and dispatches the event to the event handler software that will deal with it. A program can choose to ignore events, and there may be libraries to dispatch an event to multiple handlers that may be programmed to listen for a particular event. The data associated with an event at a minimum specifies what type of event it is, but may include other information such as when it occurred, who or what caused it to occur, and extra data provided by the event source to the handler about how the event should be processed. Events are typically used in user interfaces, where actions in the outside world (mouse clicks, window-resizing, keyboard presses, messages from other programs, etc.)
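The collect-and-dispatch cycle described above can be sketched in a few lines. This is a generic illustration (in Java); the event type names and the handler-registration API are invented for the example, not taken from any particular framework.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

// Minimal event-loop sketch: events are queued as they occur, then the
// loop dispatches each one to the handler registered for its type.
public class EventLoopSketch {
    private final Queue<String[]> events = new ArrayDeque<>(); // {type, data}
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    void on(String type, Consumer<String> handler) {
        handlers.put(type, handler); // register a listener for one event type
    }

    void post(String type, String data) {
        events.add(new String[] { type, data }); // an event source queues an event
    }

    void run() { // drain the queue, dispatching one event at a time
        while (!events.isEmpty()) {
            String[] event = events.poll();
            Consumer<String> handler = handlers.get(event[0]);
            if (handler != null) handler.accept(event[1]); // a program may ignore events
        }
    }

    public static void main(String[] args) {
        EventLoopSketch loop = new EventLoopSketch();
        loop.on("keypress", data -> System.out.println("key: " + data));
        loop.post("keypress", "a"); // e.g. generated by a keyboard peripheral
        loop.post("unknown", "x");  // no handler registered: ignored
        loop.run();                 // prints "key: a"
    }
}
```

Each queued event carries, at minimum, its type plus the extra data mentioned in the text; the loop uses the type to select the handler and passes the data along to it.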
https://en.wikipedia.org/wiki/Ultra-linear
Ultra-linear electronic circuits are those used to couple a tetrode or pentode vacuum-tube (also called "electron-valve") to a load (e.g. to a loudspeaker). 'Ultra-linear' is a special case of 'distributed loading'; a circuit technique patented by Alan Blumlein in 1937 (Patent No. 496,883), although the name 'distributed loading' is probably due to Mullard. In 1938 he applied for the US patent 2218902. The particular advantages of ultra-linear operation, and the name itself, were published by David Hafler and Herbert Keroes in the early 1950s through articles in the magazine "Audio Engineering" from the USA. The special case of 'ultra linear' operation is sometimes confused with the more general principle of distributed loading. Operation A pentode or tetrode vacuum-tube (valve) configured as a common-cathode amplifier (where the output signal appears on the plate) may be operated as: a pentode or tetrode, in which the screen-grid is connected to a stable DC voltage so there are no signal variations on the screen-grid (i.e. the screen-grid has 0% of the plate's output signal impressed on it), or a triode, in which the screen-grid is connected to the plate (i.e. the screen-grid has 100% of the plate's output signal voltage impressed on it), or a blend of triode and pentode, in which the screen-grid has a percentage (between 0% and 100%) of the plate's output signal impressed on it. This is the basis of the distributed load circuit, and is usually achieved by incorporating a suitable "tap" on the primary winding of the output transformer that the vacuum-tube (valve) is connected to. The impression of any portion of the output signal onto the screen-grid can be seen as a form of feedback, which alters the behaviour of the electron stream passing from cathode to anode. Advantages By judicious choice of the screen-grid percentage-tap, the benefits of both triode and pentode vacuum-tubes can be realised. Over a very narrow range of percentage-tapping, distortion i
https://en.wikipedia.org/wiki/Deterministic%20context-free%20language
In formal language theory, deterministic context-free languages (DCFL) are a proper subset of context-free languages. They are the context-free languages that can be accepted by a deterministic pushdown automaton. DCFLs are always unambiguous, meaning that they admit an unambiguous grammar. There are non-deterministic unambiguous CFLs, so DCFLs form a proper subset of unambiguous CFLs. DCFLs are of great practical interest, as they can be parsed in linear time, and various restricted forms of DCFGs admit simple practical parsers. They are thus widely used throughout computer science. Description The notion of the DCFL is closely related to the deterministic pushdown automaton (DPDA). Making pushdown automata deterministic reduces their language power: a deterministic pushdown automaton is unable to choose between different state-transition alternatives and as a consequence cannot recognize all context-free languages. Unambiguous grammars do not always generate a DCFL. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the unambiguous context-free grammar S → 0S0 | 1S1 | ε. An arbitrary string of this language cannot be parsed without reading all its letters first, which means that a pushdown automaton has to try alternative state transitions to accommodate for the different possible lengths of a semi-parsed string. Properties Deterministic context-free languages can be recognized by a deterministic Turing machine in polynomial time and O(log2 n) space; as a corollary, DCFL is a subset of the complexity class SC. The set of deterministic context-free languages is closed under the following operations: complement inverse homomorphism right quotient with a regular language pre: pre() is the subset of all strings having a proper prefix that also belongs to . min: min() is the subset of all strings that do not have a proper prefix in . max: max() is the subset of all strings that are not the prefix of a longer
https://en.wikipedia.org/wiki/Vaginal%20fornix
The fornices of the vagina (singular: fornix of the vagina or fornix vaginae) are the superior portions of the vagina, extending into the recesses created by the vaginal portion of the cervix. The word is Latin for 'arch'. Structure There are four named fornices (two primary), according to their anatomical position: The posterior fornix is the larger recess, behind the cervix. It is close to the recto-uterine pouch. There are three smaller recesses in front and at the sides: the anterior fornix is close to the vesico-uterine pouch. the two lateral fornices. Sexual During sexual intercourse in the missionary position, the tip of the penis reaches the anterior fornix, while in the rear-entry position it reaches the posterior fornix. The fornices appear to be close to one reported erogenous zone, the cul-de-sac, which is near the posterior fornix. See also G-spot
https://en.wikipedia.org/wiki/Anal%20columns
Anal columns (columns of Morgagni or, less commonly, Morgagni's columns) are a number of vertical folds, produced by an infolding of the mucous membrane and some of the muscular tissue, in the upper half of the lumen of the anal canal. They are named after Giovanni Battista Morgagni, after whom several other anatomical structures are also named.