source | text |
|---|---|
https://en.wikipedia.org/wiki/Thermogenesis | Thermogenesis is the process of heat production in organisms. It occurs in all warm-blooded animals, and also in a few species of thermogenic plants such as the Eastern skunk cabbage, the Voodoo lily (Sauromatum venosum), and the giant water lilies of the genus Victoria. The lodgepole pine dwarf mistletoe, Arceuthobium americanum, disperses its seeds explosively through thermogenesis.
Types
Depending on whether or not they are initiated through locomotion and intentional movement of the muscles, thermogenic processes can be classified as one of the following:
Exercise-associated thermogenesis (EAT)
Non-exercise activity thermogenesis (NEAT), energy expended for everything that is not sleeping, eating or sports-like exercise.
Diet-induced thermogenesis (DIT)
Shivering
One method to raise temperature is through shivering. Shivering produces heat because the conversion of the chemical energy of ATP into kinetic energy is inefficient, so almost all of the energy appears as heat. Shivering is also the process by which the body temperature of hibernating mammals (such as some bats and ground squirrels) is raised as these animals emerge from hibernation.
Non-shivering
Non-shivering thermogenesis occurs in brown adipose tissue (brown fat) that is present in almost all eutherians (swine being the only exception currently known). Brown adipose tissue has a unique uncoupling protein (thermogenin, also known as uncoupling protein 1) that allows the uncoupling of protons (H+) moving down their mitochondrial gradient from the synthesis of ATP, thus allowing the energy to be dissipated as heat. The atomic structure of human uncoupling protein 1 (UCP1) has been solved by cryogenic electron microscopy (cryo-EM). The structure has the typical fold of a member of the SLC25 family. UCP1 is locked in a cytoplasmic-open state by guanosine triphosphate in a pH-dependent manner, preventing proton leak.
In this process, substances such as free fatty acids (derived from triacylglycerols) remove purine (ADP, GDP a |
https://en.wikipedia.org/wiki/Faraday%20effect | The Faraday effect or Faraday rotation, sometimes referred to as the magneto-optic Faraday effect (MOFE), is a physical magneto-optical phenomenon. The Faraday effect causes a polarization rotation which is proportional to the projection of the magnetic field along the direction of the light propagation. Formally, it is a special case of gyroelectromagnetism obtained when the dielectric permittivity tensor is diagonal. This effect occurs in most optically transparent dielectric materials (including liquids) under the influence of magnetic fields.
Discovered by Michael Faraday in 1845, the Faraday effect was the first experimental evidence that light and electromagnetism are related. The theoretical basis of electromagnetic radiation (which includes visible light) was completed by James Clerk Maxwell in the 1860s. Maxwell's equations were rewritten in their current form in the 1870s by Oliver Heaviside.
The Faraday effect is caused by left and right circularly polarized waves propagating at slightly different speeds, a property known as circular birefringence. Since a linear polarization can be decomposed into the superposition of two equal-amplitude circularly polarized components of opposite handedness and different phase, the effect of a relative phase shift, induced by the Faraday effect, is to rotate the orientation of a wave's linear polarization.
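The rotation can be stated explicitly. The relations below are a sketch in conventional notation; the symbols β (rotation angle), 𝒱 (Verdet constant), and n± (refractive indices for the two circular polarizations) are standard but not defined in this excerpt:

```latex
% Empirical Faraday rotation for a material of path length d in a field B,
% where B is the flux-density component along the propagation direction:
\beta = \mathcal{V} B d
% \mathcal{V} is the material's Verdet constant.

% Equivalently, via the circular birefringence n_- - n_+ at wavelength \lambda
% (the sign convention varies between authors):
\beta = \frac{\pi d}{\lambda}\left(n_- - n_+\right)
```

The second form makes the mechanism described above concrete: a relative phase shift between the two circular components of size Δφ rotates the linear polarization by Δφ/2.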
The Faraday effect has applications in measuring instruments. For instance, the Faraday effect has been used to measure optical rotatory power and for remote sensing of magnetic fields (such as fiber optic current sensors). The Faraday effect is used in spintronics research to study the polarization of electron spins in semiconductors. Faraday rotators can be used for amplitude modulation of light, and are the basis of optical isolators and optical circulators; such components are required in optical telecommunications and other laser applications.
History
By 1845, it was known through the work of Fresnel, Malus |
https://en.wikipedia.org/wiki/Quadratic%20integral | In mathematics, a quadratic integral is an integral of the form ∫ dx / (a + bx + cx²).
It can be evaluated by completing the square in the denominator.
Positive-discriminant case
Assume that the discriminant q = b² − 4ac is positive. In that case, define u and A by
u = x + b/(2c) and A = √q / (2c).
The quadratic integral can now be written as
∫ dx / (a + bx + cx²) = (1/c) ∫ du / (u² − A²).
The partial fraction decomposition
1/(u² − A²) = (1/(2A)) (1/(u − A) − 1/(u + A))
allows us to evaluate the integral:
(1/c) ∫ du / (u² − A²) = (1/(2Ac)) ln |(u − A)/(u + A)| + constant.
The final result for the original integral, under the assumption that q > 0, is
∫ dx / (a + bx + cx²) = (1/√q) ln |(2cx + b − √q)/(2cx + b + √q)| + constant.
Negative-discriminant case
In case the discriminant q = b² − 4ac is negative, the second term in the denominator in
(1/c) ∫ du / (u² + B²), where B² = −q/(4c²),
is positive. Then the integral becomes
(1/c) ∫ du / (u² + B²) = (1/(Bc)) arctan(u/B) + constant = (2/√(−q)) arctan((2cx + b)/√(−q)) + constant.
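The standard closed forms for this integral, (1/√q) ln |(2cx + b − √q)/(2cx + b + √q)| when q > 0 and (2/√(−q)) arctan((2cx + b)/√(−q)) when q < 0, can be checked numerically. The sketch below (function names are mine, not from the article) compares each antiderivative against Simpson's-rule quadrature:

```python
import math

def quadratic_integral(a, b, c, x):
    """Antiderivative of 1/(a + b*x + c*x**2), chosen by discriminant sign.

    These are the standard closed forms (up to a constant of integration);
    the denominator must not vanish on the interval of interest.
    """
    q = b * b - 4 * a * c
    if q > 0:
        r = math.sqrt(q)
        return (1 / r) * math.log(abs((2 * c * x + b - r) / (2 * c * x + b + r)))
    if q < 0:
        r = math.sqrt(-q)
        return (2 / r) * math.atan((2 * c * x + b) / r)
    raise ValueError("q = 0: the denominator is a perfect square")

def simpson(f, lo, hi, n=10_000):
    """Composite Simpson's rule, used only as an independent numeric check."""
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))
    return s * h / 3

for a, b, c in [(1, 5, 1), (2, 1, 3)]:  # one q > 0 case, one q < 0 case
    exact = quadratic_integral(a, b, c, 2.0) - quadratic_integral(a, b, c, 1.0)
    approx = simpson(lambda x: 1 / (a + b * x + c * x * x), 1.0, 2.0)
    assert abs(exact - approx) < 1e-8
```

Both branches agree with the quadrature to high precision, which is a quick way to catch sign errors in the logarithmic and arctangent forms.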
References
Weisstein, Eric W. "Quadratic Integral." From MathWorld--A Wolfram Web Resource, wherein the following is referenced:
Integral calculus |
https://en.wikipedia.org/wiki/Projection%20%28linear%20algebra%29 | In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself (an endomorphism) such that P ∘ P = P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once (i.e. P is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
Definitions
A projection on a vector space V is a linear operator P : V → V such that P² = P.
When V has an inner product and is complete, i.e. when V is a Hilbert space, the concept of orthogonality can be used. A projection P on a Hilbert space V is called an orthogonal projection if it satisfies ⟨Px, y⟩ = ⟨x, Py⟩ for all x, y ∈ V. A projection on a Hilbert space that is not orthogonal is called an oblique projection.
Projection matrix
A square matrix P is called a projection matrix if it is equal to its square, i.e. if P² = P.
A square matrix P is called an orthogonal projection matrix if P² = P = Pᵀ for a real matrix, and respectively P² = P = P* for a complex matrix, where Pᵀ denotes the transpose of P and P* denotes the adjoint or Hermitian transpose of P.
A projection matrix that is not an orthogonal projection matrix is called an oblique projection matrix.
The eigenvalues of a projection matrix must be 0 or 1.
Examples
Orthogonal projection
For example, the function which maps the point (x, y, z) in three-dimensional space to the point (x, y, 0) is an orthogonal projection onto the xy-plane. This function is represented by the matrix
P = [[1, 0, 0], [0, 1, 0], [0, 0, 0]].
The action of this matrix on an arbitrary vector is P (x, y, z)ᵀ = (x, y, 0)ᵀ.
To see that P is indeed a projection, i.e., P = P², we compute P² (x, y, z)ᵀ = P (x, y, 0)ᵀ = (x, y, 0)ᵀ = P (x, y, z)ᵀ.
Observing that Pᵀ = P shows that the projection is an orthogonal projection.
Oblique projection
A simple example of a non-orthogonal (oblique) projection is
P = [[0, 0], [α, 1]].
Via matrix multiplication, one sees that P² = [[0, 0], [α, 1]] = P,
showing that P is indeed a projection.
The projection P is orthogonal if and only if α = 0 bec |
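The xy-plane projection and a simple 2 × 2 oblique projection can be verified numerically. The NumPy sketch below is not from the article; the value α = 2 is an arbitrary illustrative choice:

```python
import numpy as np

# Orthogonal projection onto the xy-plane: (x, y, z) -> (x, y, 0).
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

assert np.array_equal(P @ P, P)   # idempotent: P^2 = P
assert np.array_equal(P, P.T)     # symmetric, hence an orthogonal projection
# Eigenvalues of any projection matrix are 0 or 1:
assert np.allclose(np.sort(np.linalg.eigvals(P)), [0.0, 1.0, 1.0])

# An oblique projection with parameter alpha (alpha = 2 chosen arbitrarily):
alpha = 2.0
Q = np.array([[0.0, 0.0],
              [alpha, 1.0]])
assert np.array_equal(Q @ Q, Q)       # still idempotent
assert not np.array_equal(Q, Q.T)     # but not symmetric, so oblique
```

The eigenvalue check also illustrates the statement above that a projection matrix can only have eigenvalues 0 and 1.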
https://en.wikipedia.org/wiki/Hessenberg%20matrix | In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal. They are named after Karl Hessenberg.
A Hessenberg decomposition is a matrix decomposition of a matrix A into a unitary matrix P and a Hessenberg matrix H such that A = P H P*, where P* denotes the conjugate transpose.
Definitions
Upper Hessenberg matrix
A square matrix A is said to be in upper Hessenberg form or to be an upper Hessenberg matrix if a_{i,j} = 0 for all i, j with i > j + 1.
An upper Hessenberg matrix is called unreduced if all subdiagonal entries are nonzero, i.e. if a_{i+1,i} ≠ 0 for all i ∈ {1, ..., n − 1}.
Lower Hessenberg matrix
A square matrix A is said to be in lower Hessenberg form or to be a lower Hessenberg matrix if its transpose is an upper Hessenberg matrix or, equivalently, if a_{i,j} = 0 for all i, j with j > i + 1.
A lower Hessenberg matrix is called unreduced if all superdiagonal entries are nonzero, i.e. if a_{i,i+1} ≠ 0 for all i ∈ {1, ..., n − 1}.
Examples
Consider the following matrices.
The matrix is an upper unreduced Hessenberg matrix, is a lower unreduced Hessenberg matrix and is a lower Hessenberg matrix but is not unreduced.
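The zero patterns that define upper Hessenberg and unreduced upper Hessenberg form (zeros for i > j + 1, and a nonzero subdiagonal, respectively) are easy to express as a small check. The helper names and the example matrix below are illustrative, not from the article:

```python
def is_upper_hessenberg(M):
    """True if M has only zeros below the first subdiagonal (rows i > j + 1)."""
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n) if i > j + 1)

def is_unreduced_upper_hessenberg(M):
    """Upper Hessenberg with every subdiagonal entry nonzero."""
    n = len(M)
    return is_upper_hessenberg(M) and all(M[i + 1][i] != 0 for i in range(n - 1))

H = [[1, 4, 2, 3],
     [3, 4, 1, 7],
     [0, 2, 3, 4],
     [0, 0, 1, 3]]   # illustrative values only

assert is_upper_hessenberg(H)
assert is_unreduced_upper_hessenberg(H)

H[2][1] = 0   # a zero on the subdiagonal: still Hessenberg...
assert is_upper_hessenberg(H)
assert not is_unreduced_upper_hessenberg(H)   # ...but no longer unreduced
```

A lower Hessenberg check is the mirror image: apply the same test to the transpose.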
Computer programming
Many linear algebra algorithms require significantly less computational effort when applied to triangular matrices, and this improvement often carries over to Hessenberg matrices as well. If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular one, reduction to Hessenberg form is often the next best thing. In fact, reduction of any matrix to Hessenberg form can be achieved in a finite number of steps (for example, through Householder transformations, which are unitary similarity transforms). Subsequent reduction of a Hessenberg matrix to a triangular matrix can be achieved through iterative procedures, such as shifted QR factorization. In eigenvalue algorithms, the Hessenberg |
https://en.wikipedia.org/wiki/Quarth | Quarth, known as Block Hole outside Japan, is a hybrid puzzle/shoot 'em up game developed by Konami which was released in 1989 as an arcade game. Besides the arcade version, there were also ports of the game to the MSX2 (with a built-in SCC chip), Famicom, and Game Boy—home releases used the Quarth name worldwide (with the exception of the Game Boy Color release in Europe of Konami GB Collection Vol. 2, where the game was renamed to the generic title Block Game for unknown reasons).
Quarth was released on the Konami Net i-mode service as Block Quarth, with an updated Block Quarth DX in 2001. It was released without the "DX" suffix in 2005 and was made globally available through Konami Net licensing on many i-mode services offered by mobile operators. In Europe, for example, it was available from O2 UK, O2 Ireland, and Telefónica Spain.
In 2005, Konami also included the game in the Nintendo DS title Ganbare Goemon: Tōkaidōchū Ōedo Tengurikaeshi no Maki. An emulated version of the game was released in 2006 for PlayStation 2 in Japan as part of the Oretachi Geasen Zoku Sono series.
Gameplay
Quarth is a combination of Tetris-style gameplay and a fixed shooter in the Space Invaders tradition. The player's focus is on falling blocks, and the action is geometrical. Rather than arranging the blocks together to make a row of disappearing blocks, a spaceship positioned at the bottom of the screen shoots blocks upwards to make the falling block pattern into squares or rectangles. Once the blocks have been arranged properly, the shape is destroyed and the player is awarded points based on the shape's size. The blocks continue to drop from the top of the screen in various incomplete shapes. As each level progresses, the blocks drop at greater speed and frequency. There are also various power-ups that can be collected to increase the ship's speed, among other bonuses.
The game continues until the blocks reach the dotted line at the bottom of the screen, whereupon the player's s |
https://en.wikipedia.org/wiki/Tridiagonal%20matrix | In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:
The determinant of a tridiagonal matrix is given by the continuant of its elements.
An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.
Properties
A tridiagonal matrix is a matrix that is both upper and lower Hessenberg. In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + 2q = n, where n is the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies a_{k,k+1} a_{k+1,k} > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix, by a diagonal change of basis matrix. Hence, its eigenvalues are real. If we replace the strict inequality by a_{k,k+1} a_{k+1,k} ≥ 0, then by continuity, the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.
The set of all n × n tridiagonal matrices forms a (3n − 2)-dimensional vector space.
Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.
Determinant
The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation. Write the diagonal entries of A as a1, ..., an and its super- and subdiagonal entries as b1, ..., bn−1 and c1, ..., cn−1. Write f1 = |a1| = a1 (i.e., f1 is the determinant of the 1 by 1 matrix consisting only of a1), and let fi denote the determinant of the leading principal i × i submatrix.
The sequence (fi) is called the continuant and satisfies the recurrence relation
f_n = a_n f_{n−1} − b_{n−1} c_{n−1} f_{n−2}
with initial values f0 = 1 and f−1 = 0. The cost of computing the det |
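The continuant recurrence f_n = a_n f_{n−1} − b_{n−1} c_{n−1} f_{n−2}, with f_0 = 1 and f_{−1} = 0, gives a linear-time determinant routine. The function below is a sketch (names are mine), validated against the second-difference matrix, whose order-n determinant is known in closed form to be n + 1:

```python
def tridiagonal_determinant(a, b, c):
    """Determinant of a tridiagonal matrix via the continuant recurrence.

    a: main diagonal (length n); b: superdiagonal and c: subdiagonal
    (each of length n - 1). Implements
    f_n = a_n * f_{n-1} - b_{n-1} * c_{n-1} * f_{n-2}, f_0 = 1, f_{-1} = 0.
    """
    f_prev2, f_prev1 = 0, 1            # f_{-1}, f_0
    for k, a_k in enumerate(a):
        f_k = a_k * f_prev1 - (b[k - 1] * c[k - 1] * f_prev2 if k > 0 else 0)
        f_prev2, f_prev1 = f_prev1, f_k
    return f_prev1

# Check against the n x n second-difference matrix (2 on the main diagonal,
# -1 on both off-diagonals), whose determinant is n + 1:
for n in range(1, 8):
    diag = [2] * n
    off = [-1] * (n - 1)
    assert tridiagonal_determinant(diag, off, off) == n + 1
```

Each step costs one multiplication of off-diagonal entries and one update, so the whole determinant takes O(n) arithmetic operations versus O(n³) for general Gaussian elimination.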
https://en.wikipedia.org/wiki/Spectravideo | Spectravideo International Limited (SVI) was an American computer manufacturer and software house. It was originally called SpectraVision, a company founded by Harry Fox in 1981. The company produced video games and other software for the VIC-20 home computer, the Atari 2600 home video game console, and its CompuMate peripheral. Some of their own computers were compatible with the Microsoft MSX or the IBM PC.
Despite their initial success, the company faced financial troubles, and by 1988, operations ceased. Later, a UK-based company bought the Spectravideo brand name from Bondwell in 1988, but this company, known as Logic3, had no connection to the original Spectravideo products and was dissolved in 2016.
History
SpectraVision was founded in 1981 by Harry Fox and Alex Weiss as a distributor of computer games, contracting external developers to write the software. Their main products were gaming cartridges for the Atari 2600, ColecoVision and VIC-20. They also made the world's first ergonomic joystick, the QuickShot. In late 1982, the company was renamed Spectravideo due to a naming conflict with On Command Corporation's hotel TV system called SpectraVision.
In the early 1980s, the company developed 11 games for the Atari 2600, including several titles of some rarity: Chase the Chuckwagon, Mangia and Bumper Bash.
A few of their titles were only available through the Columbia House music club.
The company's first attempt at a computer was an add-on for the Atari 2600 called the Spectravideo CompuMate, with a membrane keyboard and very simple programmability.
Spectravideo's first real computers were the SV-318 and SV-328, released in 1983. Both were powered by a Z80 A at 3.6 MHz, but differed in the amount of RAM (SV-318 had 32KB and SV-328 had 80KB total, of which 16KB was reserved for video) and keyboard style. The main operating system, residing in ROM, was a version of Microsoft Extended BASIC, but if the computer was equipped with a floppy drive, the us |
https://en.wikipedia.org/wiki/TRON%20project | TRON (acronym for The Real-time Operating system Nucleus) is an open architecture real-time operating system kernel design. The project was started by Professor Dr. Ken Sakamura of the University of Tokyo in 1984. The project's goal is to create an ideal computer architecture and network, to provide for all of society's needs.
The Industrial TRON (ITRON) derivative was one of the world's most used operating systems in 2003, being present in billions of electronic devices such as mobile phones, appliances and even cars. Although mainly used by Japanese companies, it garnered interest worldwide. However, a dearth of quality English documentation was said to hinder its broader adoption.
The TRON project was integrated into the T-Engine Forum in 2010. Today, it is supported by popular Secure Sockets Layer (SSL) and Transport Layer Security (TLS) libraries such as wolfSSL.
Architecture
TRON does not specify the source code for the kernel, but instead is a "set of interfaces and design guidelines" for creating the kernel. This allows different companies to create their own versions of TRON, based on the specifications, which can be suited for different microprocessors.
While the specification of TRON is publicly available, implementations can be proprietary at the discretion of the implementer.
Sub-architectures
The TRON framework defines a complete architecture for the different computing units:
ITRON (Industrial TRON): an architecture for real-time operating systems for embedded systems; this is the most popular use of the TRON architecture
JTRON (Java TRON): a sub-project of ITRON to allow it to use the Java platform
BTRON (Business TRON): for personal computers, workstations, PDAs, mainly as the human–machine interface in networks based on the TRON architecture
CTRON (Central and Communications TRON): for mainframe computers, digital switching equipment
MTRON (Macro TRON): for intercommunication between the different TRON components.
STRON (Silicon TRON): hardwa |
https://en.wikipedia.org/wiki/List%20of%20mathematics%20history%20topics | This is a list of mathematics history topics, by Wikipedia page. See also list of mathematicians, timeline of mathematics, history of mathematics, list of publications in mathematics.
1729 (anecdote)
Adequality
Archimedes Palimpsest
Archimedes' use of infinitesimals
Arithmetization of analysis
Brachistochrone curve
Chinese mathematics
Cours d'Analyse
Edinburgh Mathematical Society
Erlangen programme
Fermat's Last Theorem
Greek mathematics
Thomas Little Heath
Hilbert's problems
History of topos theory
Hyperbolic quaternion
Indian mathematics
Islamic mathematics
Italian school of algebraic geometry
Kraków School of Mathematics
Law of Continuity
Lwów School of Mathematics
Nicolas Bourbaki
Non-Euclidean geometry
Scottish Café
Seven bridges of Königsberg
Spectral theory
Synthetic geometry
Tautochrone curve
Unifying theories in mathematics
Waring's problem
Warsaw School of Mathematics
Academic positions
Lowndean Professor of Astronomy and Geometry
Lucasian professor
Rouse Ball Professor of Mathematics
Sadleirian Chair
See also
History |
https://en.wikipedia.org/wiki/Koszul%20complex | The Koszul complex is a concept in mathematics introduced by Jean-Louis Koszul.
Definition
Let A be a commutative ring and s : A^r → A an A-linear map. Its Koszul complex K_s is
0 → Λ^r A^r → Λ^(r−1) A^r → ⋯ → Λ^1 A^r → A → 0,
where the maps send e_{i_1} ∧ ⋯ ∧ e_{i_p} to Σ_j (−1)^{j+1} s(e_{i_j}) e_{i_1} ∧ ⋯ ∧ ê_{i_j} ∧ ⋯ ∧ e_{i_p}, where ê_{i_j} means the term is omitted and ∧ means the wedge product. One may replace A^r with any A-module.
Motivating example
Let M be a manifold, variety, scheme, ..., and A be the ring of functions on it.
The map s : A^r → A corresponds to picking r functions f_1, ..., f_r. When r = 1, the Koszul complex is A → A (multiplication by f),
whose cokernel is the ring of functions on the zero locus f = 0. In general, the Koszul complex is
The cokernel of the last map is again functions on the zero locus f1 = ... = fr = 0. It is the tensor product of the r many Koszul complexes for fi = 0, so its dimensions are given by binomial coefficients.
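For concreteness, the smallest cases can be written out. The shapes below follow one common sign convention (conventions differ between sources), and the ranks 1, 2, 1 in the r = 2 case are the binomial coefficients mentioned above:

```latex
% r = 1: a single function f, acting by multiplication
0 \longrightarrow A \xrightarrow{\;\cdot f\;} A \longrightarrow 0

% r = 2: two functions f_1, f_2; the composite map sends
% a \mapsto f_1(-f_2 a) + f_2(f_1 a) = 0, as a chain complex requires
0 \longrightarrow A \xrightarrow{(-f_2,\;f_1)} A^{2}
  \xrightarrow{(f_1,\;f_2)} A \longrightarrow 0
```

The vanishing of the composite in the r = 2 case is exactly the chain-complex condition d² = 0 stated in the Properties section.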
In pictures: given functions si, how do we define the locus where they all vanish?
In algebraic geometry, the ring of functions of the zero locus is A/(s1, ..., sr). In derived algebraic geometry, the dg ring of functions is the Koszul complex. If the loci si = 0 intersect transversely, these are equivalent.
Thus: Koszul complexes are derived intersections of zero loci.
Properties
Algebra structure
First, the Koszul complex K_s of (A, s) is a chain complex: the composition of any two consecutive maps is zero. Second, the wedge product K_s ⊗ K_s → K_s
makes it into a dg algebra.
As a tensor product
The Koszul complex is a tensor product: if s = (s_1, ..., s_r), then
K_s ≅ K_{s_1} ⊗ ⋯ ⊗ K_{s_r},
where ⊗ denotes the derived tensor product of chain complexes of A-modules.
Vanishing in regular case
When s_1, ..., s_r form a regular sequence, the map K_s → A/(s_1, ..., s_r) is a quasi-isomorphism, i.e.
H_i(K_s) = 0 for all i > 0,
and, as for any s, H_0(K_s) = A/(s_1, ..., s_r).
History
The Koszul complex was first introduced to define a cohomology theory for Lie algebras, by Jean-Louis Koszul (see Lie algebra cohomology). It turned out to be a useful general construction in homological algebra. As a tool, its homology can be used to tell when a set of elements of a (local) ring is an M-regular s |
https://en.wikipedia.org/wiki/IRAF | IRAF (Image Reduction and Analysis Facility) is a collection of software written at the National Optical Astronomy Observatory (NOAO) geared towards the reduction of astronomical images and spectra in pixel array form. This is primarily data taken from imaging array detectors such as CCDs. It is available for all major operating systems for mainframes and desktop computers. IRAF was designed cross-platform, supporting VMS and UNIX-like operating systems. Use on Microsoft Windows was made possible by Cygwin in earlier versions, and can today be done with the Windows Subsystem for Linux. Today, it is primarily used on macOS and Linux.
IRAF commands (known as tasks) are organized into package structures. Additional packages may be added to IRAF, and packages may contain other packages. There are many packages available from NOAO and external developers, often focusing on a particular branch of research or facility.
Functionality available in IRAF includes the calibration of the fluxes and positions of astronomical objects within an image, compensation for sensitivity variations between detector pixels, combination of multiple images or measurement of the redshifts of absorption or emission lines in a spectrum.
While IRAF is still very popular among astronomers, institutional development and maintenance was stopped. IRAF is now maintained as community software.
History
The IRAF project started in fall 1981 at Kitt Peak National Observatory. In 1982, a preliminary design and the first version of the Command Language (CL) were completed, and the IRAF Group was founded. The designer of the IRAF system and chief programmer was Doug Tody. In 1983, the Space Telescope Science Institute selected IRAF as the environment for their SDAS data analysis system and ported the system to VMS. The first internal IRAF release was in 1984. After a limited distribution to a few outside sites, the first public release came in 1987.
In the middle of the 1990s, the "Open IRAF" project was started to address th |
https://en.wikipedia.org/wiki/History%20of%20cryptography | Cryptography, the use of codes and ciphers to protect secrets, began thousands of years ago. Until recent decades, it has been the story of what might be called classical cryptography — that is, of methods of encryption that use pen and paper, or perhaps simple mechanical aids. In the early 20th century, the invention of complex mechanical and electromechanical machines, such as the Enigma rotor machine, provided more sophisticated and efficient means of encryption; and the subsequent introduction of electronics and computing has allowed elaborate schemes of still greater complexity, most of which are entirely unsuited to pen and paper.
The development of cryptography has been paralleled by the development of cryptanalysis — the "breaking" of codes and ciphers. The discovery and application, early on, of frequency analysis to the reading of encrypted communications has, on occasion, altered the course of history. Thus the Zimmermann Telegram triggered the United States' entry into World War I; and the Allies' reading of Nazi Germany's ciphers shortened World War II, in some evaluations by as much as two years.
Until the 1960s, secure cryptography was largely the preserve of governments. Two events have since brought it squarely into the public domain: the creation of a public encryption standard (DES), and the invention of public-key cryptography.
Antiquity
The earliest known use of cryptography is found in non-standard hieroglyphs carved into the wall of a tomb from the Old Kingdom of Egypt circa 1900 BC. These are not thought to be serious attempts at secret communications, however, but rather to have been attempts at mystery, intrigue, or even amusement for literate onlookers.
Some clay tablets from Mesopotamia somewhat later are clearly meant to protect information—one dated near 1500 BC was found to encrypt a craftsman's recipe for pottery glaze, presumably commercially valuable. Furthermore, Hebrew scholars made use of simple monoalphabetic substitution cip |
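A monoalphabetic substitution permutes the alphabet but leaves the multiset of letter frequencies intact, which is exactly the weakness frequency analysis exploits. A minimal Python sketch, where the key below is an arbitrary permutation chosen for illustration:

```python
from collections import Counter
import string

# An arbitrary permutation of the alphabet serves as the key.
KEY = str.maketrans(string.ascii_lowercase, "qwertyuiopasdfghjklzxcvbnm")

def encrypt(plaintext):
    """Monoalphabetic substitution: each letter maps to one fixed substitute."""
    return plaintext.lower().translate(KEY)

plaintext = "attack at dawn attack at dusk"
ciphertext = encrypt(plaintext)

# The frequency profile is only relabeled, never changed; with enough
# ciphertext, counting letters therefore reveals the substitution.
plain_freqs = sorted(Counter(c for c in plaintext if c.isalpha()).values())
cipher_freqs = sorted(Counter(c for c in ciphertext if c.isalpha()).values())
assert plain_freqs == cipher_freqs
```

This invariance is why, as the article notes, the early discovery of frequency analysis could break every cipher of this family regardless of the key chosen.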
https://en.wikipedia.org/wiki/International%20Collegiate%20Programming%20Contest | The International Collegiate Programming Contest, known as the ICPC, is an annual multi-tiered competitive programming competition among the universities of the world. Directed by ICPC Executive Director and Baylor Professor Dr. William B. Poucher, the ICPC operates autonomous regional contests covering six continents culminating in a global World Finals every year. In 2018, ICPC participation included 52,709 students from 3,233 universities in 110 countries.
The ICPC operates under the auspices of the ICPC Foundation and operates under agreements with host universities and non-profits, all in accordance with the ICPC Policies and Procedures. From 1977 until 2017 ICPC was held under the auspices of ACM and was referred to as ACM-ICPC.
Mission
The ICPC, the “International Collegiate Programming Contest”, is an extra-curricular, competitive programming sport for students at universities around the world. ICPC competitions provide gifted students opportunities to interact, demonstrate, and improve their teamwork, programming, and problem-solving process. The ICPC is a global platform for academia, industry, and community to shine the spotlight on and raise the aspirations of the next generation of computing professionals as they pursue excellence. In its own words, ICPC is:
History
The ICPC traces its roots to a competition held at Texas A&M University in 1970, hosted by the Alpha Chapter of the Upsilon Pi Epsilon Computer Science Honor Society (UPE). This initial programming competition was titled the First Annual Texas Collegiate Programming Championship, and each university was represented by a team of up to five members. The computer used was an IBM System/360 Model 65, one of the first machines with a DAT (Dynamic Address Translator, aka "paging") system for accessing memory. The start of the competition was delayed for about 90 minutes because two of the four "memory bank" amplifiers were down. Teams that participated included Texas A&M, Texas Tech, University of Hous |
https://en.wikipedia.org/wiki/Two-color%20system | The two-color system of projection is a name given to a variety of methods of projecting a full-color image using (only) two different single-color projectors. James Clerk Maxwell first suggested he had discovered such a projection system, but it was not reproduced until the 1950s, when Edwin Land accidentally noticed a similar effect while working on his three-color system of projection.
Despite Land's later work on the subject, the physics behind the success of this system of projection (and similar methods of apparent full-color projection involving only one color of light, sometimes in different polarizations) is not clearly understood since it involves not only the projected light but also the human visual system's response to it.
External links
http://www.greatreality.com/Color2Color.htm
Display technology |
https://en.wikipedia.org/wiki/4-bit%20computing | 4-bit computing is the use of computer architectures in which integers and other data units are 4 bits wide. 4-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers or data buses of that size. Memory addresses (and thus address buses) for 4-bit CPUs are generally much larger than 4-bit (since only 16 memory locations would be very restrictive), such as 12-bit or more, while they could in theory be 8-bit.
A group of four bits is also called a nibble and has 2^4 = 16 possible values.
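Two nibbles fit exactly into one byte, which is how packed binary-coded decimal stores one decimal digit per nibble. A short sketch (the helper names are hypothetical):

```python
def pack_nibbles(high, low):
    """Pack two 4-bit values into one byte, one decimal digit per nibble
    in packed BCD (e.g. 42 becomes the byte 0x42)."""
    assert 0 <= high < 16 and 0 <= low < 16
    return (high << 4) | low

def unpack_nibbles(byte):
    """Split a byte back into its high and low nibbles."""
    return (byte >> 4) & 0xF, byte & 0xF

b = pack_nibbles(4, 2)       # the decimal number 42 in packed BCD
assert b == 0x42
assert unpack_nibbles(b) == (4, 2)

# Every (high, low) pair yields a distinct byte: 16 * 16 = 256 values.
assert len({pack_nibbles(h, l) for h in range(16) for l in range(16)}) == 2 ** 8
```

This digit-per-nibble layout is the reason a 4-bit word length was such a natural fit for the calculator chips discussed below.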
While 4-bit computing is mostly obsolete, 4-bit communication (even 1- or 2-bit) is still used in modern computers that are otherwise, e.g., 64-bit and thus also have much larger buses.
History
A 4-bit processor may seem limited, but it is a good match for calculators, where each decimal digit fits into four bits.
Some of the first microprocessors had a 4-bit word length and were developed around 1970. The first commercial microprocessor was the binary-coded decimal (BCD) based Intel 4004, developed for calculator applications in 1971; it had a 4-bit word length, but had 8-bit instructions and 12-bit addresses. It was succeeded by the Intel 4040.
The first commercial single-chip computer was the 4-bit Texas Instruments TMS 1000 (1974). It contained a 4-bit CPU with a Harvard architecture and 8-bit-wide instructions, an on-chip instruction ROM, and an on-chip data RAM with 4-bit words.
The Rockwell PPS-4 was another early 4-bit processor, introduced in 1972, which had a long lifetime in handheld games and similar roles. It was steadily improved, and by 1975 it had been combined with several support chips to make a one-chip computer.
The 4-bit processors were programmed in assembly language or in Forth (e.g. the now-discontinued "MARC4 Family of 4 bit Forth CPU") because of the extreme size constraint on programs and because common programming languages (for microcontrollers, 8-bit and larger), such as the C programming lang |
https://en.wikipedia.org/wiki/Macrophilia | Macrophilia is a fascination with or a sexual fantasy involving giants, most commonly expressed as giantesses (female giants), as well as giant objects. It is typically believed to be a male fantasy, with the male playing the smaller part; however, people with any background can have it. When the smaller part is male, they may be depicted as entering, being dominated by, or being eaten by the larger woman. Generally, depictions range from sexually explicit actions to non-sexual interactions while still providing sexual stimulation for those with the fantasy.
Online communities refer to this subculture as macro fetish or GTS fetish, an abbreviation of "giantess" and sometimes the backronym "giant tiny sex".
Description
Although macrophilia literally translates to simply a "lover of large", in the context of a sexual fantasy, it is used to denote attraction to beings larger than themselves. Males who are attracted to larger females are known as Amazon chasers. Generally, the interest differs between macrophiles, and depends on gender and sexual orientation. Macrophiles often enjoy the feel of being physically smaller, ranging from sadomasochistic fantasies such as being abused, degraded, dominated, or eaten, to friendly fantasies such as being rescued, protected and befriended by the larger, typically heroic female, and they often view the much taller being as powerful and dominating.
Psychologist Mark Griffiths speculates that the roots of macrophilia may lie in sexual arousal in childhood and early adolescence that is accidentally associated with giants.
Speculating on why there are not as many female macrophiles, psychologist Helen Friedman theorized that women who already view men as dominant and powerful have no need to fantasize about it. Still, some women enjoy both aspects of macrophilia. Women who take on the role of the giantess within this fetish often find the practice empowering and enjoy being worshipped.
One ar |
https://en.wikipedia.org/wiki/TRSDOS | TRSDOS (which stands for the Tandy Radio Shack Disk Operating System) is the operating system for the Tandy TRS-80 line of eight-bit Zilog Z80 microcomputers that were sold through Radio Shack from 1977 through 1991. Tandy's manuals recommended that it be pronounced triss-doss. TRSDOS should not be confused with Tandy DOS, a version of MS-DOS licensed from Microsoft for Tandy's x86 line of personal computers (PCs).
With the original TRS-80 Model I of 1977, TRSDOS was primarily a way of extending the MBASIC (BASIC in ROM) with additional I/O (input/output) commands that worked with disk files rather than the cassette tapes that were used by non-disk Model I systems. Later disk-equipped Model III computers used a completely different version of TRSDOS by Radio Shack which culminated in 1981 with TRSDOS Version 1.3. From 1983 disk-equipped TRS-80 Model 4 computers used TRSDOS Version 6, which was a development of Model III LDOS by Logical Systems, Inc. This last was updated in 1987 and released as LS-DOS 6.3.
Completely unrelated was a version of TRSDOS by Radio Shack for its TRS-80 Model II professional computer from 1979, also based on the Z80 and equipped with 8-inch disk drives. The later machines in this line, the Models 12, 16 and 6000, used the Z80 as an alternate CPU to its main Motorola 68000 chip and could run this version of TRSDOS for backwards compatibility with older Z80 applications software.
History
Tandy Corporation's TRS-80 microcomputer did not have a disk drive or disk operating system at release. The first version of TRSDOS, by Randy Cook, was so buggy that others wrote alternatives, including NewDOS and LDOS. After disputes with Cook over ownership of the source code, Tandy hired Logical Systems, LDOS's developer, to continue TRSDOS development. TRSDOS 6, shipped with the TRS-80 Model 4 in 1983, is identical to LDOS 6.00.
Dates
May 8, 1979 – Radio Shack releases TRSDOS 2.3
May 1, 1981 – Radio Shack releases Model III TRSDOS 1.3
April 26, 19 |
https://en.wikipedia.org/wiki/Tropical%20forest | Tropical forests are forested landscapes in tropical regions: i.e. land areas approximately bounded by the tropics of Cancer and Capricorn, but possibly affected by other factors such as prevailing winds.
Some tropical forest types are difficult to categorize. While forests in temperate areas are readily categorized on the basis of tree canopy density, such schemes do not work well in tropical forests. There is no single scheme that defines what a forest is, in tropical regions or elsewhere. Because of these difficulties, information on the extent of tropical forests varies between sources. However, tropical forests are extensive, making up just under half the world's forests. The tropical domain has the largest proportion of the world’s forests (45 percent), followed by the boreal, temperate and subtropical domains.
More than 3.6 million hectares of virgin tropical forest was lost in 2018.
History
Tropical rainforests were the original type of forest covering much of the planet's land surface. Other canopy forests expanded north and south of the equator during the Paleogene period, around 40 million years ago, as a result of the emergence of drier, cooler climates.
The tropical forest was originally identified as a specific type of biome in 1949.
Types of tropical forest
Tropical forests are often thought of as evergreen rainforests and moist forests, but these account for only a portion of them (depending on how they are defined - see maps). The remaining tropical forests are a diversity of many different forest types including:
Eucalyptus open forest, tropical coniferous forests, savanna woodland (e.g. Sahelian forest), and mountain forests (the higher elevations of which are cloud forests). Over even relatively short distances, the boundaries between these biomes may be unclear, with ecotones between the main types.
The nature of tropical forests in any given area is affected by several factors, most importantly:
Geographical: location a |
https://en.wikipedia.org/wiki/UNIVAC%20LARC | The UNIVAC LARC, short for the Livermore Advanced Research Computer, is a mainframe computer designed to a requirement published by Edward Teller in order to run hydrodynamic simulations for nuclear weapon design. It was one of the earliest supercomputers.
LARC supported multiprocessing with two CPUs (called Computers) and an input/output (I/O) Processor (called the Processor). Two LARC machines were built, the first delivered to Livermore in June 1960, and the second to the Navy's David Taylor Model Basin. Both examples had only one Computer, so no multiprocessor LARCs were ever built.
The LARC CPUs were able to perform addition in about 4 microseconds, corresponding to about 250 kIPS speed. This made it the fastest computer in the world until 1962 when the IBM 7030 took the title. The 7030 started as IBM's entry to the LARC contest, but Teller chose the simpler Univac over the more risky IBM design.
Description
The LARC was a decimal mainframe computer with 60 bits per word. It used bi-quinary coded decimal arithmetic with five bits per digit (see below), allowing for 11-digit signed numbers. Instructions were 60 bits long, one per word. The basic configuration had 26 general-purpose registers, which could be expanded to 99. The general-purpose registers had an access time of one microsecond.
LARC weighed about .
The basic configuration had one Computer and LARC could be expanded to a multiprocessor with a second Computer.
The Processor is an independent CPU (with a different instruction set from the Computers) and provides control for 12 to 24 magnetic drum storage units, four to forty UNISERVO II tape drives, two electronic page recorders (a 35mm film camera facing a cathode-ray tube), one or two high-speed printers, and a high-speed punched card reader.
The LARC used core memory banks of 2500 words each, housed four banks per memory cabinet. The basic configuration had eight banks of core (two cabinets), 20,000 words. The memory could be expanded to a m |
https://en.wikipedia.org/wiki/Priority%20inversion | In computer science, priority inversion is a scenario in scheduling in which a high-priority task is indirectly superseded by a lower-priority task, effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks. Inversion occurs when a high-priority task waits on a resource held by a low-priority task, and the low-priority task is then preempted by a medium-priority task.
Formulation
Consider two tasks H and L, of high and low priority respectively, either of which can acquire exclusive use of a shared resource R. If H attempts to acquire R after L has acquired it, then H becomes blocked until L relinquishes the resource. Sharing an exclusive-use resource (R in this case) in a well-designed system typically involves L relinquishing R promptly so that H (a higher-priority task) does not stay blocked for excessive periods of time. Despite good design, however, it is possible that a third task M of medium priority becomes runnable during L's use of R. At this point, M being higher in priority than L, preempts L (since M does not depend on R), causing L to not be able to relinquish R promptly, in turn causing H—the highest-priority process—to be unable to run (that is, H suffers unexpected blockage indirectly caused by lower-priority tasks like M).
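The scenario above (tasks H, L and M contending over resource R) can be played out with a toy fixed-priority scheduler. All names, priorities, arrival times and burst lengths below are illustrative, not taken from any real system:

```python
# Toy fixed-priority scheduler illustrating priority inversion.
# L (low priority) holds resource R for its whole burst, H (high) also
# needs R, and M (medium) needs no resource.

def run(tasks, total_ticks):
    """tasks: name -> dict(prio, arrival, work, needs_R). Returns the
    per-tick execution trace as a string ('-' means the CPU is idle)."""
    state = {n: dict(t, done=False) for n, t in tasks.items()}
    holder = None                      # task currently holding R
    trace = []
    for tick in range(total_ticks):
        # A task is runnable if it has arrived, is unfinished, and is not
        # blocked waiting for R while another task holds it.
        runnable = [n for n, s in state.items()
                    if s["arrival"] <= tick and not s["done"]
                    and not (s["needs_R"] and holder not in (None, n))]
        if not runnable:
            trace.append("-")
            continue
        cur = max(runnable, key=lambda n: state[n]["prio"])
        if state[cur]["needs_R"] and holder is None:
            holder = cur               # acquire R on first use
        trace.append(cur)
        state[cur]["work"] -= 1
        if state[cur]["work"] == 0:
            state[cur]["done"] = True
            if holder == cur:
                holder = None          # release R on completion
    return "".join(trace)

tasks = {
    "L": dict(prio=1, arrival=0, work=4, needs_R=True),
    "H": dict(prio=3, arrival=1, work=2, needs_R=True),
    "M": dict(prio=2, arrival=2, work=3, needs_R=False),
}
trace = run(tasks, 12)
print(trace)   # LLMMMLLHH---
```

The trace shows the inversion directly: M, at medium priority, runs to completion while the highest-priority task H sits blocked on R, so H cannot start until L finally finishes and releases the resource.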
Consequences
In some cases, priority inversion can occur without causing immediate harm—the delayed execution of the high-priority task goes unnoticed, and eventually, the low-priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high-priority task is left starved of the resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars Pathfinder lander in 1997 is a classic example of problems caused by priority inversion in rea |
https://en.wikipedia.org/wiki/Schur%27s%20lemma | In mathematics, Schur's lemma is an elementary but extremely useful statement in representation theory of groups and algebras. In the group case it says that if M and N are two finite-dimensional irreducible representations
of a group G and φ is a linear map from M to N that commutes with the action of the group, then either φ is invertible, or φ = 0. An important special case occurs when M = N, i.e. φ is a self-map; in particular, any element of the center of a group must act as a scalar operator (a scalar multiple of the identity) on M. The lemma is named after Issai Schur who used it to prove the Schur orthogonality relations and develop the basics of the representation theory of finite groups. Schur's lemma admits generalisations to Lie groups and Lie algebras, the most common of which are due to Jacques Dixmier and Daniel Quillen.
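The argument behind the group-case statement above fits in a few lines; the following sketch restates the standard derivation (the notation ρM, ρN for the two actions is ours, not from the source):

```latex
% \varphi : M \to N an intertwiner: \varphi\,\rho_M(g) = \rho_N(g)\,\varphi
% for all g \in G. Kernel and image are invariant subspaces:
\rho_M(g)(\ker\varphi) \subseteq \ker\varphi, \qquad
\rho_N(g)(\operatorname{im}\varphi) \subseteq \operatorname{im}\varphi .
% By irreducibility, \ker\varphi \in \{0, M\} and
% \operatorname{im}\varphi \in \{0, N\}, so \varphi = 0 or \varphi is
% invertible. Over \mathbb{C}, if M = N, any eigenvalue \lambda of \varphi
% yields a non-invertible intertwiner \varphi - \lambda\operatorname{id},
% which by the dichotomy must be zero; hence \varphi = \lambda\operatorname{id}.
```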
Representation theory of groups
Representation theory is the study of homomorphisms from a group, G, into the general linear group GL(V) of a vector space V; i.e., into the group of automorphisms of V. (Let us here restrict ourselves to the case when the underlying field of V is , the field of complex numbers.) Such a homomorphism is called a representation of G on V. A representation on V is a special case of a group action on V, but rather than permit any arbitrary bijections (permutations) of the underlying set of V, we restrict ourselves to invertible linear transformations.
Let ρ be a representation of G on V. It may be the case that V has a subspace, W, such that for every element g of G, the invertible linear map ρ(g) preserves or fixes W, so that (ρ(g))(w) is in W for every w in W, and (ρ(g))(v) is not in W for any v not in W. In other words, every linear map ρ(g): V→V is also an automorphism of W, ρ(g): W→W, when its domain is restricted to W. We say W is stable under G, or stable under the action of G. It is clear that if we consider W on its own as a vector space, then there is an obvious representation of G on W—the r |
https://en.wikipedia.org/wiki/Flashover | A flashover is the near-simultaneous ignition of most of the directly exposed combustible material in an enclosed area. When certain organic materials are heated, they undergo thermal decomposition and release flammable gases. Flashover occurs when the majority of the exposed surfaces in a space are heated to their autoignition temperature and emit flammable gases (see also flash point). Flashover normally occurs at or for ordinary combustibles and an incident heat flux at floor level of .
An example of flashover is the ignition of a piece of furniture in a domestic room. The fire involving the initial piece of furniture can produce a layer of hot smoke, which spreads across the ceiling in the room. The hot buoyant smoke layer grows in depth, as it is bounded by the walls of the room. The radiated heat from this layer heats the surfaces of the directly exposed combustible materials in the room, causing them to give off flammable gases, via pyrolysis. When the temperatures of the evolved gases become high enough, these gases will ignite throughout their extent.
Types
The original Swedish terminology related to the term 'flashover' has been altered in its translation to conform with current European and North American accepted [scientific] definitions as follows:
A lean flashover (sometimes called rollover) is the ignition of the gas layer under the ceiling, leading to total involvement of the compartment. The fuel/air ratio is at the bottom region of the flammability range (i.e. lean).
A rich flashover occurs when the flammable gases are ignited while at the upper region of the flammability range (i.e. rich). This can happen in rooms where the fire subsided because of lack of oxygen. The ignition source can be a smouldering object, or the stirring up of embers by the air track. Such an event is known as backdraft.
A delayed flashover occurs when the colder gray smoke cloud ignites after congregating outside of its room of origin. This results in a volatile si |
https://en.wikipedia.org/wiki/Von%20Karman%20Institute%20for%20Fluid%20Dynamics | The von Karman Institute for Fluid Dynamics (VKI) is a non-profit educational and scientific organization which specializes in three specific fields: aeronautics and aerospace, environment and applied fluid dynamics, turbomachinery and propulsion. Founded in 1956, it is located in Sint-Genesius-Rode, Belgium.
About
The von Karman Institute for Fluid Dynamics is a non-profit international, educational and scientific organization which is working in three specific fields: aeronautics and aerospace, environment and applied fluid dynamics, turbomachinery and propulsion.
The VKI provides education in these fields for students from all over the world. About a hundred students come to the Institute each year to study fluid dynamics, whether for a PhD programme, a research master in fluid dynamics, a final-year project, or a work placement in a specific area.
Each year, Lecture Series and events are organized inside and outside of the organization. These events focus on topics such as aerodynamics, fluid mechanics, and heat transfer, with applications to aeronautics, space, turbomachinery, the environment, and industrial fluid dynamics. The Institute has built an international reputation in these domains, and its Lecture Series attract students, researchers, and practising engineers.
History
In the course of 1955, Professor Theodore von Kármán proposed with his assistants the establishment of an institution devoted to training and research in aerodynamics which would be open to young engineers and scientists of the NATO nations. It was strongly felt that this form of international undertaking would fulfil the important objective of fostering fruitful exchanges and understanding between the participating nations in a well-defined technical field.
The von Karman Institute was established in October 1956 in the buildings which |
https://en.wikipedia.org/wiki/Iwahori%E2%80%93Hecke%20algebra | In mathematics, the Iwahori–Hecke algebra, or Hecke algebra, named for Erich Hecke and Nagayoshi Iwahori, is a deformation of the group algebra of a Coxeter group.
Hecke algebras are quotients of the group rings of Artin braid groups. This connection found a spectacular application in Vaughan Jones' construction of new invariants of knots. Representations of Hecke algebras led to the discovery of quantum groups by Michio Jimbo. Michael Freedman proposed Hecke algebras as a foundation for topological quantum computation.
Hecke algebras of Coxeter groups
Start with the following data:
(W, S) is a Coxeter system with the Coxeter matrix M = (mst),
R is a commutative ring with identity.
{qs | s ∈ S} is a family of units of R such that qs = qt whenever s and t are conjugate in W
A is the ring of Laurent polynomials over Z in the indeterminates qs (with the above restriction that qs = qt whenever s and t are conjugate), that is, A = Z[qs±1 : s ∈ S]
Multiparameter Hecke Algebras
The multiparameter Hecke algebra HR(W,S,q) is a unital, associative R-algebra with generators Ts for all s ∈ S and relations:
Braid Relations: Ts Tt Ts ... = Tt Ts Tt ..., where each side has mst < ∞ factors and s,t belong to S.
Quadratic Relation: For all s in S we have: (Ts - qs)(Ts + 1) = 0.
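The two defining relations can be restated in display form (our typesetting of the relations given above):

```latex
\underbrace{T_s T_t T_s \cdots}_{m_{st}\ \text{factors}}
  = \underbrace{T_t T_s T_t \cdots}_{m_{st}\ \text{factors}}
  \qquad (s, t \in S,\ m_{st} < \infty),
\qquad\qquad
(T_s - q_s)(T_s + 1) = 0 \qquad (s \in S).
```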
Warning: in later books and papers, Lusztig used a modified form of the quadratic relation that reads (Ts − qs1/2)(Ts + qs−1/2) = 0. After extending the scalars to include the half-integer powers qs±1/2, the resulting Hecke algebra is isomorphic to the previously defined one (but the Ts here corresponds to qs−1/2 Ts in our notation). While this does not change the general theory, many formulae look different.
Generic Multiparameter Hecke Algebras
HA(W,S,q) is the generic multiparameter Hecke algebra. This algebra is universal in the sense that every other multiparameter Hecke algebra can be obtained from it via the (unique) ring homomorphism A → R which maps the indeterminate qs ∈ A to the unit qs ∈ R. This homomorphism turns R into an A-algebra and the
https://en.wikipedia.org/wiki/Hydraulic%20engineering | Hydraulic engineering as a sub-discipline of civil engineering is concerned with the flow and conveyance of fluids, principally water and sewage. One feature of these systems is the extensive use of gravity as the motive force to cause the movement of the fluids. This area of civil engineering is intimately related to the design of bridges, dams, channels, canals, and levees, and to both sanitary and environmental engineering.
Hydraulic engineering is the application of the principles of fluid mechanics to problems dealing with the collection, storage, control, transport, regulation, measurement, and use of water. Before beginning a hydraulic engineering project, one must figure out how much water is involved. The hydraulic engineer is concerned with the transport of sediment by the river, the interaction of the water with its alluvial boundary, and the occurrence of scour and deposition. "The hydraulic engineer actually develops conceptual designs for the various features which interact with water such as spillways and outlet works for dams, culverts for highways, canals and related structures for irrigation projects, and cooling-water facilities for thermal power plants."
Fundamental principles
A few examples of the fundamental principles of hydraulic engineering include fluid mechanics, fluid flow, behavior of real fluids, hydrology, pipelines, open channel hydraulics, mechanics of sediment transport, physical modeling, hydraulic machines, and drainage hydraulics.
Fluid mechanics
Fundamentals of Hydraulic Engineering defines hydrostatics as the study of fluids at rest. In a fluid at rest, there exists a force, known as pressure, that acts upon the fluid's surroundings. This pressure, measured in N/m², is not constant throughout the body of fluid. The pressure p at a point in a body of fluid increases with depth according to the hydrostatic equation p = ρgy,
where,
ρ = density of water
g = acceleration due to gravity
y = depth below the free surface
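The pressure–depth relation described here can be evaluated directly. A minimal sketch, with an illustrative depth of 5 m (the specific values are assumptions, not from the text):

```python
# Hydrostatic pressure at depth: p = rho * g * y (illustrative values).
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # acceleration due to gravity, m/s^2
y = 5.0        # depth below the free surface, m

p = rho * g * y            # pressure in N/m^2 (pascals)
print(p)                   # 49050.0
```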
|
https://en.wikipedia.org/wiki/Sodium%20hydride | Sodium hydride is the chemical compound with the empirical formula NaH. This alkali metal hydride is primarily used as a strong yet combustible base in organic synthesis. NaH is a saline (salt-like) hydride, composed of Na+ and H− ions, in contrast to molecular hydrides such as borane, methane, ammonia, and water. It is an ionic material that is insoluble in all solvents (other than molten Na), consistent with the fact that H− ions do not exist in solution. Because of the insolubility of NaH, all reactions involving NaH occur at the surface of the solid.
Basic properties and structure
NaH is produced by the direct reaction of hydrogen and liquid sodium. Pure NaH is colorless, although samples generally appear grey. NaH is around 40% denser than Na (0.968 g/cm3).
NaH, like LiH, KH, RbH, and CsH, adopts the NaCl crystal structure. In this motif, each Na+ ion is surrounded by six H− centers in an octahedral geometry. The ionic radii of H− (146 pm in NaH) and F− (133 pm) are comparable, as judged by the Na−H and Na−F distances.
"Inverse sodium hydride"
A very unusual situation occurs in a compound dubbed "inverse sodium hydride", which contains H+ and Na− ions. Na− is an alkalide, and this compound differs from ordinary sodium hydride in having a much higher energy content due to the net displacement of two electrons from hydrogen to sodium. A derivative of this "inverse sodium hydride" arises in the presence of the base [36]adamanzane. This molecule irreversibly encapsulates the H+ and shields it from interaction with the alkalide Na−. Theoretical work has suggested that even an unprotected protonated tertiary amine complexed with the sodium alkalide might be metastable under certain solvent conditions, though the barrier to reaction would be small and finding a suitable solvent might be difficult.
Applications in organic synthesis
As a strong base
NaH is a base of wide scope and utility in organic chemistry. As a superbase, it is capable of deprotonating a ra |
https://en.wikipedia.org/wiki/Sneakernet | Sneakernet, also called sneaker net, is an informal term for the transfer of electronic information by physically moving media such as magnetic tape, floppy disks, optical discs, USB flash drives or external hard drives between computers, rather than transmitting it over a computer network. The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to walking in sneakers as the transport mechanism. Alternative terms may be floppy net, train net, or pigeon net.
Summary and background
Sneakernets are in use throughout the computer universe. A sneakernet may be used when computer networks are prohibitively expensive for the owner to maintain; in high-security environments where manual inspection (for re-classification of information) is necessary; where information needs to be shared between networks with different levels of security clearance; when data transfer is impractical due to bandwidth limitations; when a particular system is simply incompatible with the local network or unable to be connected; or when two systems are not on the same network at the same time. Because sneakernets take advantage of physical media, the security measures used for the transfer of sensitive information are correspondingly physical.
This form of data transfer is also used for peer-to-peer (or friend-to-friend) file sharing and has grown in popularity in metropolitan areas and college communities. The ease of this system has been facilitated by the availability of USB external hard drives, USB flash drives and portable music players.
The United States Postal Service offers a Media Mail service for compact discs, among other items. This provides a viable mode of transport for long distance sneakernet use. In fact, when mailing media with sufficiently high data density such as high capacity hard drives, the throughput (data transferred per unit of time) as well as the cost per unit of data transferred may compete favorably with networked methods of data transfer.
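The throughput claim is easy to check with back-of-the-envelope arithmetic. The figures below (a 10 TB drive delivered in 24 hours) are illustrative assumptions, not from the text:

```python
# Average sneakernet throughput of mailing one high-capacity drive:
# an assumed 10 TB drive that takes 24 hours to arrive.
capacity_bits = 10e12 * 8              # 10 TB expressed in bits
transit_seconds = 24 * 3600            # one day in transit

throughput_bps = capacity_bits / transit_seconds
print(f"{throughput_bps / 1e6:.0f} Mbit/s")   # ~926 Mbit/s sustained average
```

A single mailed drive thus sustains an average rate comparable to a fast dedicated network link, and shipping several drives at once scales the figure linearly.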
Usage |
https://en.wikipedia.org/wiki/List%20of%20numeral%20system%20topics | This is a list of Wikipedia articles on numeral systems and numeric representations.
See also: computer numbering formats and number names.
Arranged by base
Radix, radix point, mixed radix, base (mathematics)
Unary numeral system (base 1)
Binary numeral system (base 2)
Negative base numeral system (base −2)
Ternary numeral system (base 3)
Balanced ternary numeral system (base 3)
Negative base numeral system (base −3)
Quaternary numeral system (base 4)
Quater-imaginary base (base 2i)
Quinary numeral system (base 5)
Senary numeral system (base 6)
Septenary numeral system (base 7)
Octal numeral system (base 8)
Nonary (novenary) numeral system (base 9)
Decimal (denary) numeral system (base 10)
Negative base numeral system (base −10)
Duodecimal (dozenal) numeral system (base 12)
Hexadecimal numeral system (base 16)
Vigesimal numeral system (base 20)
Sexagesimal numeral system (base 60)
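The negative-base entries above can be made concrete with a short conversion routine (an illustrative sketch, not part of the list itself):

```python
# Convert a (possibly negative) integer to a negative radix such as
# base -2 ("negabinary"), the system listed above.

def to_negative_base(n, base=-2):
    """Return the digit string of integer n in the given negative base."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:                  # Python's floor division can leave r < 0
            r -= base              # shift remainder into 0..|base|-1 ...
            n += 1                 # ... and compensate in the quotient
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_negative_base(6))   # 11010, since 16 - 8 - 2 = 6
```

Every integer, positive or negative, has a unique such representation without a sign symbol, which is the appeal of negative bases.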
Arranged by culture
Other
Numeral system topics |
https://en.wikipedia.org/wiki/Capture%20the%20flag | Capture the flag (CTF) is a traditional outdoor sport where two or more teams each have a flag (or other markers) and the objective is to capture the other team's flag, located at the team's "base" (or hidden or even buried somewhere in the territory), and bring it safely back to their own base. Enemy players can be "tagged" by players when out of their home territory and, depending on the rules, they may be out of the game, become members of the opposite team, be sent back to their own territory, be frozen in place, or be sent to "jail" until freed by a member of their own team.
Overview
Capture the Flag requires a playing field. In both indoor and outdoor versions, the field is divided into two clearly designated halves, known as territories. Players form two teams, one for each territory. Each side has a "flag", which is most often a piece of fabric, but can be any object small enough to be easily carried by a person (night time games might use flashlights, glowsticks or lanterns as the "flags"). Sometimes teams wear dark colors at night to make it more difficult for their opponents to see them.
The objective of the game is for players to venture into the opposing team's territory, grab the flag, and return with it to their territory without being tagged. The flag is defended mainly by tagging opposing players who attempt to take it. Within their territory players are "safe", meaning that they cannot be tagged by opposing players. Once they cross into the opposing team's territory they are vulnerable to tagging.
Rules for Capture the Flag appeared as early as 1860 in the German gymnastics manual Lehr- und Handbuch der deutschen Turnkunst by Wilhelm Lübeck, under the name Fahnenbarlauf. In the 19th century, Capture the Flag was not considered a distinct game but a variation of the European game "Barlaufen" (Barlauf mit Fahnenraub), played in France and Germany.
Descriptions of Capture the Flag in English appeared in the early 20th century, e. g. in "Scouting for Boys" wr |
https://en.wikipedia.org/wiki/Engineering%20tolerance | Engineering tolerance is the permissible limit or limits of variation in:
a physical dimension;
a measured value or physical property of a material, manufactured object, system, or service;
other measured values (such as temperature, humidity, etc.);
in engineering and safety, a physical distance or space (tolerance), as in a truck (lorry), train or boat under a bridge as well as a train in a tunnel (see structure gauge and loading gauge);
in mechanical engineering, the space between a bolt and a nut or a hole, etc.
Dimensions, properties, or conditions may have some variation without significantly affecting functioning of systems, machines, structures, etc. A variation beyond the tolerance (for example, a temperature that is too hot or too cold) is said to be noncompliant, rejected, or exceeding the tolerance.
Considerations when setting tolerances
A primary concern is to determine how wide the tolerances may be without affecting other factors or the outcome of a process. This can be done using scientific principles, engineering knowledge, and professional experience. Experimental investigation is also very useful for studying the effects of tolerances: design of experiments, formal engineering evaluations, etc.
A good set of engineering tolerances in a specification, by itself, does not imply that compliance with those tolerances will be achieved. Actual production of any product (or operation of any system) involves some inherent variation of input and output. Measurement error and statistical uncertainty are also present in all measurements. With a normal distribution, the tails of measured values may extend well beyond plus and minus three standard deviations from the process average. Appreciable portions of one (or both) tails might extend beyond the specified tolerance.
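The fraction of output falling outside tolerance can be computed for a normally distributed characteristic. The numbers below (a centered process with limits at plus and minus three standard deviations) are illustrative assumptions:

```python
# Fraction of a normally distributed characteristic falling outside
# two-sided tolerance limits.
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def out_of_tolerance(mean, std, lower, upper):
    """Probability that an N(mean, std) value falls below lower or above upper."""
    return norm_cdf((lower - mean) / std) + (1.0 - norm_cdf((upper - mean) / std))

# A centered process with tolerance limits at +/- 3 standard deviations:
p = out_of_tolerance(mean=10.0, std=0.1, lower=9.7, upper=10.3)
print(f"{p:.5f}")   # about 0.00270, i.e. roughly 0.27% of parts out of tolerance
```

Shifting the mean off center, or widening the process spread, rapidly inflates this fraction, which is why process capability must be matched to the specified tolerances.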
The process capability of systems, materials, and products needs to be compatible with the specified engineering tolerances. Process controls must be in place and an effective qu |
https://en.wikipedia.org/wiki/Institution%20of%20Engineers%20of%20Ireland | The Institution of Engineers of Ireland () or the IEI, is the second oldest Engineering Society on the islands of Great Britain and Ireland, and was established in 1835. The institution primarily represents members based in Ireland.
Membership of the institution is open to individuals based on academic and professional background and is separated into grades in accordance with criteria, including the Chartered Engineer and European Engineer titles.
The institution received its current legal name in 1969 by an Act of the Oireachtas. In October 2005 the institution adopted the operating name Engineers Ireland in an attempt to reduce any confusion over what the abbreviation IEI means, and as a substitute for its current legal name which is often considered unwieldy; the legal name is, however, unchanged.
History
The history of the institution can be traced to 6 August 1835, when civil engineers met in Dublin; the result was the Civil Engineers Society of Ireland. In 1844 the society adopted the name the Institution of Civil Engineers of Ireland (ICEI). The institution received a royal charter on 15 October 1877, a significant milestone in obtaining international recognition and standing. In the early years of the Irish Free State, Cumann na nInnealtóirí (The Engineers Association) was set up independently, in 1928, by incorporation under the Companies Act, 1908, to "improve and advance the status and remuneration of qualified members of the engineering profession", as it was felt that the ICEI's charter prevented its negotiation of employment conditions and salary.
In 1927 the ICEI elected their first woman member when Iris Cummins was admitted to the organisation.
As time progressed it was realised that the institution and association might better advance engineering in Ireland by amalgamation of both into a single organisation which would represent a broader set of engineering disciplines, so discussions commenced in 1965, and resulted in The Instituti |
https://en.wikipedia.org/wiki/Requirements%20analysis | In systems engineering and software engineering, requirements analysis focuses on the tasks that determine the needs or conditions to be met by a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and on analyzing, documenting, validating and managing software or system requirements.
Requirements analysis is critical to the success or failure of a systems or software project: "It is widely acknowledged within the software industry that software engineering projects are critically vulnerable when these activities are performed poorly" (Guide to the Software Engineering Body of Knowledge, Chapter 2: Software Requirements, IEEE Computer Society Press, 2004). The requirements should be documented, actionable, measurable, testable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design.
Overview
Conceptually, requirements analysis includes three types of activities:
Eliciting requirements: identifying requirements from sources such as the project charter or definition, business process documentation, and stakeholder interviews. This is sometimes also called requirements gathering or requirements discovery.
Recording requirements: Requirements may be documented in various forms, usually including a summary list and may include natural-language documents, use cases, user stories, process specifications and a variety of models including data models.
Analyzing requirements: determining whether the stated requirements are clear, complete, unduplicated, concise, valid, consistent and unambiguous, and resolving any apparent confl |
https://en.wikipedia.org/wiki/Homotopy%20lifting%20property | In mathematics, in particular in homotopy theory within algebraic topology, the homotopy lifting property (also known as an instance of the right lifting property or the covering homotopy axiom) is a technical condition on a continuous function from a topological space E to another one, B. It is designed to support the picture of E "above" B by allowing a homotopy taking place in B to be moved "upstairs" to E.
For example, a covering map has a property of unique local lifting of paths to a given sheet; the uniqueness is because the fibers of a covering map are discrete spaces. The homotopy lifting property will hold in many situations, such as the projection in a vector bundle, fiber bundle or fibration, where there need be no unique way of lifting.
Formal definition
Assume all maps are continuous functions between topological spaces. Given a map π : E → B and a space X, one says that (X, π) has the homotopy lifting property, or that π has the homotopy lifting property with respect to X, if:
for any homotopy f : X × [0, 1] → B, and
for any map g₀ : X → E lifting f₀ = f(·, 0) (i.e., so that f₀ = π ∘ g₀),
there exists a homotopy g : X × [0, 1] → E lifting f (i.e., so that f = π ∘ g) which also satisfies g₀ = g(·, 0).
The following diagram depicts this situation:
The outer square (without the dotted arrow) commutes if and only if the hypotheses of the lifting property are true. A lifting corresponds to a dotted arrow making the diagram commute. This diagram is dual to that of the homotopy extension property; this duality is loosely referred to as Eckmann–Hilton duality.
If a map satisfies the homotopy lifting property with respect to every space X, then it is called a fibration, or one sometimes simply says that it has the homotopy lifting property.
A weaker notion of fibration is Serre fibration, for which homotopy lifting is only required for all CW complexes.
Generalization: homotopy lifting extension property
There is a common generalization of the homotopy lifting property and the homotopy extension property. Given a pair of spaces , for simplicity we denote . |
https://en.wikipedia.org/wiki/Apache%20Lucene | Apache Lucene is a free and open-source search engine software library, originally written in Java by Doug Cutting. It is supported by the Apache Software Foundation and is released under the Apache Software License. Lucene is widely used as a standard foundation for production search applications.
Lucene has been ported to other programming languages including Object Pascal, Perl, C#, C++, Python, Ruby and PHP.
History
Doug Cutting originally wrote Lucene in 1999. Lucene was his fifth search engine. He had previously written two while at Xerox PARC, one at Apple, and a fourth at Excite. It was initially available for download from its home at the SourceForge web site. It joined the Apache Software Foundation's Jakarta family of open-source Java products in September 2001 and became its own top-level Apache project in February 2005. The name Lucene is Doug Cutting's wife's middle name and her maternal grandmother's first name.
Lucene formerly included a number of sub-projects, such as Lucene.NET, Mahout, Tika and Nutch. These are now independent top-level projects.
In March 2010, the Apache Solr search server joined as a Lucene sub-project, merging the developer communities.
Version 4.0 was released on October 12, 2012.
In March 2021, Lucene changed its logo, and Apache Solr became a top level Apache project again, independent from Lucene.
Features and common use
While suitable for any application that requires full text indexing and searching capability, Lucene is recognized for its utility in the implementation of Internet search engines and local, single-site searching.
Lucene includes a feature to perform a fuzzy search based on edit distance.
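Lucene's own fuzzy matching is implemented in Java (with Levenshtein automata in recent versions); as a language-neutral illustration of the underlying idea, here is a Python sketch of edit-distance filtering. The function names are illustrative, not Lucene's API:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of single-character
    insertions, deletions, or substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def fuzzy_match(query: str, term: str, max_edits: int = 2) -> bool:
    """Accept a term if it lies within max_edits of the query, as a
    fuzzy query with maximum edit distance 2 would."""
    return edit_distance(query, term) <= max_edits
```

A production engine avoids computing the full distance against every term by compiling the query into an automaton; the quadratic dynamic program above is the conceptual core.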
Lucene has also been used to implement recommendation systems. For example, Lucene's 'MoreLikeThis' Class can generate recommendations for similar documents. In a comparison of the term vector-based similarity approach of 'MoreLikeThis' with citation-based document similarity measures, such as co-citation a |
https://en.wikipedia.org/wiki/Ciphertext-only%20attack | In cryptography, a ciphertext-only attack (COA) or known ciphertext attack is an attack model for cryptanalysis where the attacker is assumed to have access only to a set of ciphertexts. While the attacker has no channel providing access to the plaintext prior to encryption, in all practical ciphertext-only attacks, the attacker still has some knowledge of the plaintext. For instance, the attacker might know the language in which the plaintext is written or the expected statistical distribution of characters in the plaintext. Standard protocol data and messages are commonly part of the plaintext in many deployed systems, and can usually be guessed or known efficiently as part of a ciphertext-only attack on these systems.
Attack
The attack is completely successful if the corresponding plaintexts can be deduced, or even better, the key. The ability to obtain any information at all about the underlying plaintext beyond what was pre-known to the attacker is still considered a success. For example, if an adversary is sending ciphertext continuously to maintain traffic-flow security, it would be very useful to be able to distinguish real messages from nulls. Even making an informed guess of the existence of real messages would facilitate traffic analysis.
In the history of cryptography, early ciphers, implemented using pen-and-paper, were routinely broken using ciphertexts alone. Cryptographers developed statistical techniques for attacking ciphertext, such as frequency analysis. Mechanical encryption devices such as Enigma made these attacks much more difficult (although, historically, Polish cryptographers were able to mount a successful ciphertext-only cryptanalysis of the Enigma by exploiting an insecure protocol for indicating the message settings). More advanced ciphertext-only attacks on the Enigma were mounted in Bletchley Park during World War II, by intelligently guessing plaintexts corresponding to intercepted ciphertexts.
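As an illustration of frequency analysis against a pen-and-paper cipher, the following Python sketch breaks a Caesar shift by assuming the most frequent ciphertext letter corresponds to plaintext 'e'. This heuristic is an assumption that works only for text with typical English letter frequencies:

```python
from collections import Counter
import string

def caesar_encrypt(text: str, shift: int) -> str:
    """Shift each lowercase letter forward by `shift` places."""
    return ''.join(
        chr((ord(c) - ord('a') + shift) % 26 + ord('a')) if c.islower() else c
        for c in text.lower()
    )

def crack_caesar(ciphertext: str) -> int:
    """Ciphertext-only attack: guess the shift by assuming the most
    frequent ciphertext letter enciphers plaintext 'e'."""
    letters = [c for c in ciphertext if c in string.ascii_lowercase]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord('e')) % 26
```

The attacker never sees the plaintext; knowledge of its statistical distribution alone recovers the key.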
Modern
Every modern cipher attem |
https://en.wikipedia.org/wiki/Deterministic%20system | In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system. A deterministic model will thus always produce the same output from a given starting condition or initial state.
In physics
Physical laws that are described by differential equations represent deterministic systems, even though the state of the system at a given point in time may be difficult to describe explicitly.
In quantum mechanics, the Schrödinger equation, which describes the continuous time evolution of a system's wave function, is deterministic. However, the relationship between a system's wave function and the observable properties of the system appears to be non-deterministic.
In mathematics
The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could theoretically be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured, and chaotic systems are characterized by a strong dependence on the initial conditions. This sensitivity to initial conditions can be measured with Lyapunov exponents.
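The logistic map (not mentioned in the text above, but a standard example) illustrates this in Python: the rule is fully deterministic, yet two initial states differing by 10^-10 soon produce visibly different trajectories:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), a deterministic rule
    that is chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map; identical inputs always give identical outputs."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 50)
b = trajectory(0.3 + 1e-10, 50)  # a tiny perturbation of the initial state
```

Rerunning `trajectory(0.3, 50)` always reproduces the same sequence (determinism), while the perturbed trajectory separates from it at a rate governed by the Lyapunov exponent (sensitivity to initial conditions).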
Markov chains and other random walks are not deterministic systems, because their development depends on random choices.
In computer science
A deterministic model of computation, for example a deterministic Turing machine, is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state.
A deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. There may be non-deterministic algorithms that run on a deterministic machine, for example, an algorithm that relies on random choices. Generally, for such random choices, one |
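A minimal Python illustration of the distinction (the functions are illustrative, not drawn from the text): a deterministic function always returns the same output for the same input, while an algorithm making random choices becomes reproducible on a deterministic machine only when its pseudorandom generator is seeded:

```python
import random

def deterministic_max(xs):
    """Deterministic: the same input always yields the same output,
    via the same sequence of machine states."""
    return max(xs)

def randomized_pick(xs, seed=None):
    """Non-deterministic in effect: the result depends on a random choice.
    Fixing the seed makes the pseudorandom sequence, and the run, repeatable."""
    rng = random.Random(seed)
    return rng.choice(xs)
```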
https://en.wikipedia.org/wiki/White%20feather | The white feather is a widely recognised propaganda symbol. It has, among other things, represented cowardice or conscientious pacifism; as in A. E. W. Mason's 1902 book The Four Feathers. In Britain during the First World War it was often given to males out of uniform by women to shame them publicly into signing up. In the United States armed forces, however, it is used to signify extraordinary bravery and excellence in combat marksmanship.
As a symbol of cowardice
The use of the phrase "white feather" to symbolise cowardice is attested from the late 18th century, according to the Oxford English Dictionary. The OED cites A Classical Dictionary of the Vulgar Tongue (1785), in which lexicographer Francis Grose wrote "White feather, he has a white feather, he is a coward, an allusion to a game cock, where having a white feather, is a proof he is not of the true game breed". This was in the context of cockfighting, a common entertainment in Georgian England.
The Crusades
Shame was exerted upon men in England and France who had not taken the cross at the time of the Third Crusade. "A great many men sent each other wool and distaff, hinting that if anyone failed to join this military undertaking they were only fit for women's work". Wool played an important role in the medieval economy, and a distaff is a tool for spinning the raw material into yarn; the activities of textile production were so firmly associated with girls and women that "distaff" became a metonym for women's work.
World War I
At the start of World War I, Admiral Charles Fitzgerald, who was a strong advocate of conscription, wanted to increase the number of those enlisting in the armed forces. Therefore he organized on 30 August 1914 a group of thirty women in his home town of Folkestone to hand out white feathers to any men that were not in uniform. Fitzgerald believed using women to shame the men into enlisting would be the most effective method of encouraging enlistment. The group that he founded |
https://en.wikipedia.org/wiki/Excitable%20medium | An excitable medium is a nonlinear dynamical system which has the capacity to propagate a wave of some description, and which cannot support the passing of another wave until a certain amount of time has passed (known as the refractory time).
A forest is an example of an excitable medium: if a wildfire burns through the forest, no fire can return to a burnt spot until the vegetation has gone through its refractory period and regrown. In chemistry, oscillating reactions are excitable media, for example the Belousov–Zhabotinsky reaction and the Briggs–Rauscher reaction. Cell excitability is the change in membrane potential that is necessary for cellular responses in various tissues. The resting potential forms the basis of cell excitability and these processes are fundamental for the generation of graded and action potentials. Normal and pathological activities in the heart and brain can be modelled as excitable media. A group of spectators at a sporting event are an excitable medium, as can be observed in a Mexican wave (so-called from its initial appearance in the 1986 World Cup in Mexico).
Modelling excitable media
Excitable media can be modelled using both partial differential equations and cellular automata.
With cellular automata
Cellular automata provide a simple model to aid in the understanding of excitable media. Perhaps the simplest such model is the Greenberg-Hastings cellular automaton.
Each cell of the automaton is made to represent some section of the medium being modelled (for example, a patch of trees in a forest, or a segment of heart tissue). Each cell can be in one of the three following states:
Quiescent or excitable — the cell is unexcited, but can be excited. In the forest fire example, this corresponds to the trees being unburnt.
Excited — the cell is excited. The trees are on fire.
Refractory — the cell has recently been excited and is temporarily not excitable. This corresponds to a patch of land where the |
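The three-state rule above can be written as a small Python automaton, a sketch in the spirit of the Greenberg-Hastings model mentioned earlier; the grid shape and the von Neumann (4-cell) neighbourhood are illustrative choices:

```python
def step(grid):
    """One synchronous update of a three-state excitable-medium automaton.
    0 = quiescent, 1 = excited, 2 = refractory."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            s = grid[r][c]
            if s == 1:
                new[r][c] = 2          # excited -> refractory
            elif s == 2:
                new[r][c] = 0          # refractory -> quiescent again
            else:
                # quiescent -> excited if any von Neumann neighbour is excited
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(0 <= i < rows and 0 <= j < cols and grid[i][j] == 1
                       for i, j in nbrs):
                    new[r][c] = 1
    return new
```

Starting from a single excited cell, a ring of excitation propagates outward, leaving a refractory region behind it through which no second wave can immediately pass.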
https://en.wikipedia.org/wiki/Free%20monoid | In abstract algebra, the free monoid on a set is the monoid whose elements are all the finite sequences (or strings) of zero or more elements from that set, with string concatenation as the monoid operation and with the unique sequence of zero elements, often called the empty string and denoted by ε or λ, as the identity element. The free monoid on a set A is usually denoted A∗. The free semigroup on A is the subsemigroup of A∗ containing all elements except the empty string. It is usually denoted A+.
More generally, an abstract monoid (or semigroup) S is described as free if it is isomorphic to the free monoid (or semigroup) on some set.
As the name implies, free monoids and semigroups are those objects which satisfy the usual universal property defining free objects, in the respective categories of monoids and semigroups. It follows that every monoid (or semigroup) arises as a homomorphic image of a free monoid (or semigroup). The study of semigroups as images of free semigroups is called combinatorial semigroup theory.
Free monoids (and monoids in general) are associative, by definition; that is, they are written without any parenthesis to show grouping or order of operation. The non-associative equivalent is the free magma.
Examples
Natural numbers
The monoid (N0,+) of natural numbers (including zero) under addition is a free monoid on a singleton free generator, in this case the natural number 1.
According to the formal definition, this monoid consists of all sequences like "1", "1+1", "1+1+1", "1+1+1+1", and so on, including the empty sequence.
Mapping each such sequence to its evaluation result
and the empty sequence to zero establishes an isomorphism from the set of such sequences to N0.
This isomorphism is compatible with "+", that is, for any two sequences s and t, if s is mapped (i.e. evaluated) to a number m and t to n, then their concatenation s+t is mapped to the sum m+n.
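The isomorphism can be checked concretely in Python, modelling an element of the free monoid on a one-element set as a tuple of ones (the representation is an illustrative choice):

```python
def concat(s, t):
    """The free-monoid operation: concatenation of sequences."""
    return s + t

def evaluate(s):
    """The isomorphism onto (N0, +): a sequence of n ones maps to n."""
    return len(s)

EMPTY = ()  # the identity element (empty sequence), mapped to zero
```

The homomorphism property is exactly the compatibility stated above: evaluating a concatenation gives the sum of the evaluations.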
Kleene star
In formal language theory, usually a finite set of "symbols |
https://en.wikipedia.org/wiki/Linear%20separability | In Euclidean geometry, linear separability is a property of two sets of points. This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane.
The problem of determining if a pair of sets is linearly separable and finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms exist that are based on this concept.
Mathematical definition
Let X_0 and X_1 be two sets of points in an n-dimensional Euclidean space. Then X_0 and X_1 are linearly separable if there exist n + 1 real numbers w_1, w_2, ..., w_n, k such that every point x ∈ X_0 satisfies w_1 x_1 + ... + w_n x_n > k and every point x ∈ X_1 satisfies w_1 x_1 + ... + w_n x_n < k, where x_i is the i-th component of x.
Equivalently, two sets are linearly separable precisely when their respective convex hulls are disjoint (colloquially, do not overlap).
In two dimensions, this can be pictured as projecting all points onto a line: the sets are linearly separable if there is a threshold value k on that line such that one set projects entirely above k and the other entirely below it.
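One classic algorithm based on this concept is the perceptron, which provably finds a separating hyperplane whenever one exists. The following Python sketch (an illustration, with a fixed epoch budget as a practical stopping rule, so failure to converge is only suggestive of non-separability) searches for weights w and a threshold k:

```python
def perceptron_separable(pos, neg, epochs=1000):
    """Try to find (w, k) with w.x > k on pos and w.x < k on neg.
    Returns (converged, weights); the last weight slot is the bias -k."""
    dim = len(pos[0])
    w = [0.0] * (dim + 1)
    pts = [(p, 1) for p in pos] + [(n, -1) for n in neg]
    for _ in range(epochs):
        errors = 0
        for x, label in pts:
            xa = list(x) + [1.0]                     # augmented point
            score = sum(wi * xi for wi, xi in zip(w, xa))
            if label * score <= 0:                   # misclassified or on boundary
                w = [wi + label * xi for wi, xi in zip(w, xa)]
                errors += 1
        if errors == 0:                              # all points separated
            return True, w
    return False, w
```

On the XOR configuration of four points (which is not linearly separable) the update rule cycles forever, so the epoch budget is exhausted without convergence.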
Examples
Three non-collinear points in two classes ('+' and '-') are always linearly separable in two dimensions. This is illustrated by the three examples in the following figure (the all '+' case is not shown, but is similar to the all '-' case):
However, not all sets of four points, no three collinear, are linearly separable in two dimensions. The following example would need two straight lines and thus is not linearly separable:
Notice that three points which are collinear and of the form "+ ⋅⋅⋅ |
https://en.wikipedia.org/wiki/Constructible%20polygon | In mathematics, a constructible polygon is a regular polygon that can be constructed with compass and straightedge. For example, a regular pentagon is constructible with compass and straightedge while a regular heptagon is not. There are infinitely many constructible polygons, but only 31 with an odd number of sides are known.
Conditions for constructibility
Some regular polygons are easy to construct with compass and straightedge; others are not. The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides, and they knew how to construct a regular polygon with double the number of sides of a given regular polygon. This led to the question being posed: is it possible to construct all regular polygons with compass and straightedge? If not, which n-gons (that is, polygons with n edges) are constructible and which are not?
Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons. Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem:
A regular n-gon can be constructed with compass and straightedge if and only if n is a power of 2 or the product of a power of 2 and any number of distinct Fermat primes.
A Fermat prime is a prime number of the form 2^(2^k) + 1 for a nonnegative integer k.
In order to reduce a geometric problem to a problem of pure number theory, the proof uses the fact that a regular n-gon is constructible if and only if the cosine cos(2π/n) is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots. Equivalently, a regular n-gon is constructible if any root of the nth cyclotomic polynomial is constructible.
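The Gauss–Wantzel condition is easy to test mechanically. The Python sketch below checks whether n is a power of 2 times a product of distinct Fermat primes, using the five Fermat primes currently known; it therefore reflects present knowledge, not a proof that no further Fermat primes exist:

```python
FERMAT_PRIMES = (3, 5, 17, 257, 65537)  # the only Fermat primes known

def constructible(n: int) -> bool:
    """Gauss-Wantzel test: a regular n-gon (n >= 3) is constructible iff
    n = 2^a times a product of distinct (known) Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:          # strip the power-of-two part
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:     # a repeated Fermat prime is not allowed
                return False
    return n == 1              # anything left is a non-Fermat odd factor
```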
Deta |
https://en.wikipedia.org/wiki/Advanced%20Communications%20Riser | The Advanced Communications Riser, or ACR, is a form factor and technical specification for PC motherboard expansion slots. It is meant as a supplement to PCI slots, a replacement for the original Audio/modem riser (AMR) slots, and a competitor and alternative to Intel's communications and networking riser (CNR) slots.
Technology
The ACR specification provides a lower cost method to connect certain expansion cards to a computer, with an emphasis on audio and communications devices. Sound cards and modems are the most common devices to use the specification. ACR and other riser cards lower hardware costs by offloading much of the peripheral's processing work to the host CPU.
ACR uses a 120 pin PCI connector which is reversed and offset, retaining backward compatibility with 46 pin AMR cards while including support for newer technologies. It is also more cost-effective and simple for the manufacturer, since the connectors are identical to the PCI connectors already purchased in quantity. New features supported by ACR include standards for an EEPROM for storing model and vendor information, USB support, and the Integrated Packet Bus for digital subscriber line (DSL), cable modem, and wireless networking support.
History
The ACR specification was created by the Advanced Communications Riser Special Interest Group (ACR SIG) in 2000 with the intent to replace the AMR specification. Because ACR was backward compatible with AMR cards and technically superior, it quickly replaced AMR.
ACR is rendered obsolete by discrete components mounted on the motherboard.
See also
Mobile Daughter Card (MDC), a version of ACR for mobile devices
GeoPort, a similar standard for the Apple Macintosh
References
Motherboard expansion slot |
https://en.wikipedia.org/wiki/List%20of%20GTK%20applications | This is a list of notable applications that use GTK and/or Clutter for their GUI widgets. Such applications blend well with desktop environments that are GTK-based as well, such as GNOME, Cinnamon, LXDE, MATE, Pantheon, Sugar, Xfce or ROX Desktop.
Official GNOME applications
The GNOME Project, i.e. all the people involved with the development of the GNOME desktop environment, is the biggest contributor to GTK, and the GNOME Core Applications as well as the GNOME Games employ the newest GUI widgets from the cutting-edge version of GTK and demonstrate their capabilities.
Shells, user interfaces, application launchers
GNOME Shell – the desktop graphical GUI shell introduced with GNOME version 3.0
Cinnamon fork of the GNOME Shell
GNOME Panel – applications launcher
Maynard, a shell for Weston by Collabora originally for the Raspberry Pi
GNOME Panel and forks
Budgie is a distro-agnostic desktop environment
Education software
Tux Typing – typing tutor for children
DrGeo – geometry software
GCompris – educational entertainment for children (legacy version only)
Utility software
Operating system administration
Disk Usage Analyzer – Disk-usage analyzer
GNOME Disks – utility for the hard disk; partition editor, S.M.A.R.T. monitoring, formerly known as Gnome Disk Utility or palimpsest
GParted – utility for the hard disk; partition editor
GDM – X display manager
GNOME Keyring Manager – Password manager
GNOME Screensaver – Simple screensaver configuration
Alacarte – Menu editor
End-user utilities
Archive Manager – archive manager
Cheese – webcam application
Conduit Synchronizer – Photo/music/notes/files etc. synchronization
Eye of GNOME – official image-viewer for GNOME
Getting Things GNOME! – Personal tasks management software
gnee – A GNOME GUI and a panel applet that can be used to record and replay test cases.
GNOME Boxes – Application to access remote or virtual systems
GNOME Screenshot – take screenshots of desktop and windows
GNOME Calculator |
https://en.wikipedia.org/wiki/Cache%20on%20a%20stick | COASt, an acronym for "cache on a stick", is a packaging standard for modules containing SRAM used as an L2 cache in a computer. COASt modules look like somewhat oversized SIMM modules. These modules were somewhat popular in the Apple and PC platforms during early to mid-1990s, but with newer computers cache is built into either the CPU or the motherboard. COASt modules decoupled the motherboard from its cache, allowing varying configurations to be created. A low-cost system could run with no cache, while a more expensive system could come equipped with 512 KB or more cache. Later COASt modules were equipped with pipelined-burst SRAM.
The standard was originally defined by Motorola to be between 4.33 and 4.36 inches (110 and 111 mm) wide, and between 1.12 and 1.16 inches (28 and 29 mm) high. It could be found in many Apple Macintosh in the early-to-mid-90s, but disappeared as the Mac moved to the PowerPC platform.
Intel also used the COASt standard for their Pentium systems, where it could be found as late as 1998 in Pentium MMX systems utilizing Intel chipsets such as 430VX and 430TX. Later, Intel combined this architecture with the CPU and created the Slot 1 CPU cartridge which contained both the CPU and separate cache chips.
The slot that the COASt module plugged into was named "CELP", or "card edge low profile", referring to the small circuit board and the conductors on its edge. It had 80 contacts on each side of a circuit board (for a total of 160), spaced 0.050" apart, plus an identification notch between contacts 42 and 43.
Operation
COASt modules provided either 256K or 512K of direct-mapped cache, organized as 8192 or 16384 lines of 32 bytes. A 64-bit data bus allowed the cache line to be transferred in a 4-cycle burst.
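The direct-mapped organization described above amounts to simple address arithmetic. The following Python sketch splits an address into tag, line and offset fields for a 256 KB module with 32-byte lines; the field layout is the generic direct-mapped scheme, not a documented register format:

```python
LINE_SIZE = 32  # bytes per cache line, as described above

def cache_index(addr, cache_bytes=256 * 1024):
    """Split an address for a direct-mapped cache into (tag, line, offset).
    For 256 KB of cache there are 8192 lines of 32 bytes each."""
    lines = cache_bytes // LINE_SIZE          # 8192 lines for 256 KB
    offset = addr % LINE_SIZE                 # byte within the line
    line = (addr // LINE_SIZE) % lines        # which line the address maps to
    tag = addr // (LINE_SIZE * lines)         # stored in the tag RAM
    return tag, line, offset
```

Two addresses exactly one cache-size apart map to the same line with different tags, which is why the fast tag RAM is needed to tell them apart.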
The modules contained 256K or 512K of fast pipeline burst SRAM, plus 8 or 11 bits of even faster static RAM per line to store the cache tags. (The module provides pins for 11 lines, but many motherboards and modules provided only 8 |
https://en.wikipedia.org/wiki/256%20%28number%29 | 256 (two hundred [and] fifty-six) is the natural number following 255 and preceding 257.
In mathematics
256 is a composite number, with the factorization 256 = 2^8, which makes it a power of two.
256 is 4 raised to the 4th power, so in tetration notation, 256 is ²4.
256 is the value of the expression n^n, where n = 4.
256 is a perfect square (16^2).
256 is the only 3-digit number that is zenzizenzizenzic. It is 2 to the 8th power, or ((2^2)^2)^2.
256 is the lowest number that is a product of eight prime factors.
256 is the number of parts in all compositions of 7.
In computing
One octet (in most cases one byte) is equal to eight bits and has 2^8 = 256 possible values, counting from 0 to 255. The number 256 often appears in computer applications (especially on 8-bit systems) such as:
The typical number of different values in each color channel of a digital color image (256 values for red, 256 values for green, and 256 values for blue used for 24-bit color) (see color space or Web colors).
The number of colors available in a GIF or a 256-color (8-bit) bitmap.
The number of characters in extended ASCII and Latin-1.
The number of columns available in a Microsoft Excel worksheet until Excel 2007.
The split-screen level in Pac-Man, which results from the use of a single byte to store the internal level counter.
A 256-bit integer can represent up to 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 values.
The number of bits in the SHA-256 cryptographic hash.
The branding number of Nvidia's GeForce 256.
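The figure quoted above for a 256-bit integer can be verified directly, since Python integers are arbitrary-precision:

```python
# Number of distinct values representable by a 256-bit unsigned integer.
values = 2 ** 256
# This is a 78-digit number, matching the figure quoted in the text.
```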
References
Integers |
https://en.wikipedia.org/wiki/Simply%20connected%20space | In topology, a topological space is called simply connected (or 1-connected, or 1-simply connected) if it is path-connected and every path between two points can be continuously transformed (intuitively for embedded spaces, staying within the space) into any other such path while preserving the two endpoints in question. The fundamental group of a topological space is an indicator of the failure for the space to be simply connected: a path-connected topological space is simply connected if and only if its fundamental group is trivial.
Definition and equivalent formulations
A topological space X is called simply connected if it is path-connected and any loop in X defined by f : S^1 → X can be contracted to a point: there exists a continuous map F : D^2 → X such that F restricted to S^1 is f. Here, S^1 and D^2 denote the unit circle and closed unit disk in the Euclidean plane respectively.
An equivalent formulation is this: X is simply connected if and only if it is path-connected, and whenever p : [0, 1] → X and q : [0, 1] → X are two paths (that is, continuous maps) with the same start and endpoint (p(0) = q(0) and p(1) = q(1)), then p can be continuously deformed into q while keeping both endpoints fixed. Explicitly, there exists a homotopy F : [0, 1] × [0, 1] → X such that F(x, 0) = p(x) and F(x, 1) = q(x).
A topological space X is simply connected if and only if X is path-connected and the fundamental group of X at each point is trivial, i.e. consists only of the identity element. Similarly, X is simply connected if and only if for all points x, y ∈ X the set of morphisms Hom(x, y) in the fundamental groupoid of X has only one element.
In complex analysis: an open subset U ⊆ C is simply connected if and only if both U and its complement in the Riemann sphere are connected. The set of complex numbers with imaginary part strictly greater than zero and less than one furnishes a nice example of an unbounded, connected, open subset of the plane whose complement is not connected. It is nevertheless simply connected. It might also be worth pointing out that a relaxation of the requirement that U be connected leads to an interesting exploration of
https://en.wikipedia.org/wiki/Ideal%20number | In number theory an ideal number is an algebraic integer which represents an ideal in the ring of integers of a number field; the idea was developed by Ernst Kummer, and led to Richard Dedekind's definition of ideals for rings. An ideal in the ring of integers of an algebraic number field is principal if it consists of multiples of a single element of the ring, and nonprincipal otherwise. By the principal ideal theorem any nonprincipal ideal becomes principal when extended to an ideal of the Hilbert class field. This means that there is an element of the ring of integers of the Hilbert class field, which is an ideal number, such that the original nonprincipal ideal is equal to the collection of all multiples of this ideal number by elements of this ring of integers that lie in the original field's ring of integers.
Example
For instance, let be a root of , then the ring of integers of the field is , which means all with and integers form the ring of integers. An example of a nonprincipal ideal in this ring is the set of all where and are integers; the cube of this ideal is principal, and in fact the class group is cyclic of order three. The corresponding class field is obtained by adjoining an element satisfying to , giving . An ideal number for the nonprincipal ideal is . Since this satisfies the equation
it is an algebraic integer.
All elements of the ring of integers of the class field which when multiplied by give a result in are of the form , where
and
The coefficients α and β are also algebraic integers, satisfying
and
respectively. Multiplying by the ideal number gives , which is the nonprincipal ideal.
History
Kummer first published the failure of unique factorization in cyclotomic fields in 1844 in an obscure journal; it was reprinted in 1847 in Liouville's journal. In subsequent papers in 1846 and 1847 he published his main theorem, the unique factorization into (actual and ideal) primes.
It is widely believed that Kummer was led to |
https://en.wikipedia.org/wiki/Internal%20and%20external%20angles | In geometry, an angle of a polygon is formed by two adjacent sides. For a simple (non-self-intersecting) polygon, regardless of whether it is convex or non-convex, this angle is called an (or interior angle) if a point within the angle is in the interior of the polygon. A polygon has exactly one internal angle per vertex.
If every internal angle of a simple polygon is less than a straight angle ( radians or 180°), then the polygon is called convex.
In contrast, an (also called a turning angle or exterior angle) is an angle formed by one side of a simple polygon and a line extended from an adjacent side.
Properties
The sum of the internal angle and the external angle on the same vertex is π radians (180°).
The sum of all the internal angles of a simple polygon is π(n−2) radians or 180(n–2) degrees, where n is the number of sides. The formula can be proved by using mathematical induction: starting with a triangle, for which the angle sum is 180°, then replacing one side with two sides connected at another vertex, and so on.
The sum of the external angles of any simple convex or non-convex polygon, if only one of the two external angles is assumed at each vertex, is 2π radians (360°).
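The two sums above can be checked with a few lines of Python, a direct transcription of the formulas:

```python
def interior_angle_sum(n):
    """Sum of the internal angles of a simple n-gon, in degrees: 180(n - 2)."""
    return 180 * (n - 2)

def exterior_angle_sum(n):
    """Sum of the external angles (one per vertex) of any simple polygon:
    always 360 degrees, independent of n."""
    return 360
```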
The measure of the exterior angle at a vertex is unaffected by which side is extended: the two exterior angles that can be formed at a vertex by extending alternately one side or the other are vertical angles and thus are equal.
Extension to crossed polygons
The interior angle concept can be extended in a consistent way to crossed polygons such as star polygons by using the concept of directed angles. In general, the interior angle sum in degrees of any closed polygon, including crossed (self-intersecting) ones, is then given by 180(n–2k)°, where n is the number of vertices, and the strictly positive integer k is the number of total (360°) revolutions one undergoes by walking around the perimeter of the polygon. In other words, the sum of all the exterior angles is 2πk radians |
https://en.wikipedia.org/wiki/Pathogenesis | In pathology, pathogenesis is the process by which a disease or disorder develops. It can include factors which contribute not only to the onset of the disease or disorder, but also to its progression and maintenance. The word comes .
Description
Types of pathogenesis include microbial infection, inflammation, malignancy and tissue breakdown. For example, bacterial pathogenesis is the process by which bacteria cause infectious illness.
Most diseases are caused by multiple processes. For example, certain cancers arise from dysfunction of the immune system (skin tumors and lymphoma after a renal transplant, which requires immunosuppression). As another example, Streptococcus pneumoniae is spread through contact with respiratory secretions, such as saliva, mucus, or cough droplets from an infected person; it colonizes the upper respiratory tract and begins to multiply.
The pathogenic mechanisms of a disease (or condition) are set in motion by the underlying causes, which if controlled would allow the disease to be prevented. Often, a potential cause is identified by epidemiological observations before a pathological link can be drawn between the cause and the disease. The pathological perspective can be directly integrated into an epidemiological approach in the interdisciplinary field of molecular pathological epidemiology. Molecular pathological epidemiology can help to assess pathogenesis and causality by means of linking a potential risk factor to molecular pathologic signatures of a disease. Thus, the molecular pathological epidemiology paradigm can advance the area of causal inference.
See also
Causal inference
Epidemiology
Molecular pathological epidemiology
Molecular pathology
Pathology
Pathophysiology
Salutogenesis
References
Further reading
Pathology |
https://en.wikipedia.org/wiki/Lasso%20%28programming%20language%29 | Lasso is an application server and server management interface used to develop internet applications and is a general-purpose, high-level programming language. Originally a web datasource connection tool for Filemaker and later included in Apple Computer's FileMaker 4.0 and Claris Homepage as CDML, it has since evolved into a complex language used to develop and serve large-scale internet applications and web pages.
Lasso includes a simple template system allowing code to control generation of HTML and other content types. Lasso is object-oriented and every value is an object. It also supports procedural programming through unbound methods. The language uses traits and multiple dispatch extensively.
Lasso has a dynamic type system, where objects can be loaded and augmented at runtime, automatic memory management, a comprehensive standard library, and three compiling methodologies: dynamic (comparable to PHP or Python), just-in-time compilation (comparable to Java or .NET Framework), and pre-compiled (comparable to C). Lasso also supports Query Expressions, allowing elements within arrays and other types of sequences to be iterated, filtered, and manipulated using a natural language syntax similar to SQL.
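Lasso's actual Query Expression syntax is not shown in this excerpt; as a rough analogue, the same iterate-filter-transform idea can be sketched in Python (the data and names below are invented for illustration):

```python
# Iterate, filter, and transform a sequence declaratively, SQL-style.
orders = [("apples", 3), ("pears", 12), ("plums", 7)]

# Roughly "select name where quantity > 5", written as a comprehension:
large = [name for name, qty in orders if qty > 5]
```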
Lasso includes full Unicode character support in the standard string object, allowing it to serve and support multi-byte characters such as Japanese and Swedish, and supports transparent UTF-8 conversion when writing string data to the network or file system.
Lasso is often used as a scripting language, and also used in a wide range of non-scripting contexts. Lasso code can be packaged into standalone executable programs called "LassoApps", in which folder structures are compiled into single files.
The Lasso Server application server runs as a system service and receives requests from the web server through FastCGI. It then hands the request off to the appropriate Lasso Instance, which formulates the response. Multiple individual instances are supported, allowing o |
https://en.wikipedia.org/wiki/Ecosystem%20diversity | Ecosystem diversity deals with the variations in ecosystems within a geographical location and its overall impact on human existence and the environment.
Ecosystem diversity addresses the combined characteristics of biotic properties (biodiversity) and abiotic properties (geodiversity). It is a variation in the ecosystems found in a region or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of trophic levels, and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity.
Impact
Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via the process of photosynthesis amongst plant organisms domiciled in the habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases plant varieties which serves as a good source for medicines and herbs for human use. A lack of diversity in the ecosystem produces an opposite result.
Examples
Some examples of ecosystems that are rich in diversity are:
Deserts
Forests
Large marine ecosystems
Marine ecosystems
Old-growth forests
Rainforests
Tundra
Coral reefs
Marine
Ecosystem diversity as a result of evolutionary pressure
Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras, Rainforests, coral reefs and deciduous forests all are form |
https://en.wikipedia.org/wiki/CARNET | CARNET (Croatian Academic and Research Network, ) is the national research and education network of Croatia. It is funded from the government budget and it operates from offices in Zagreb and five other cities.
CARNET was established in 1991 as a project of the Ministry of Science and Technology of the Republic of Croatia. In March 1995 the Government of the Republic of Croatia passed the Decree on founding of the CARNET institution with the purpose of facilitating progress of individuals, as well as of the society as a whole, through the use of new information technologies.
CARNET's activities can be divided in three basic areas: Internet service provision, encouragement of information society development and education for the new era.
History
The institution
A body responsible for coordinating the establishment of the Croatian educational computer network was established on 3 October 1991. That was the beginning of the work of the Croatian Academic and Research Network - CARNET, the first Internet Service Provider (ISP) in Croatia. In the years that followed, CARNET was the only Internet service provider in Croatia, providing the service free of charge, not only to the academic community, but to all citizens of the Republic of Croatia as well.
In November 1992 the first international communication connection was established, which connected CARNET Internet exchange point in Zagreb to Austria. By that act Croatia became a part of the world computer network – the Internet.
During 1992, the first equipment was procured and the backbone of the CARNET network was built. Institutions in Croatia were connected at the speed of 19 - 200 kbit/s, while the whole network was connected to the Internet through Austria at the speed of 64 kbit/s.
The first institutions to be connected to the Internet were the University Computing Centre - SRCE, the Faculty of Electrical Engineering and Computing in Zagreb, the Ruđer Bošković Institute, the Faculty of Science, th |
https://en.wikipedia.org/wiki/Computer%20science%20and%20engineering | Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. data structures and algorithms) and computer engineering classes (e.g. computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered at both undergraduate and postgraduate levels, with specializations.
Academic courses
Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithm design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and game programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism.
Example universities with CSE majors and departments
APJ Abdul Kalam Technological University
American International University-B |
https://en.wikipedia.org/wiki/Self-destruct | A self-destruct is a mechanism that can cause an object to destroy itself or render itself inoperable after a predefined set of circumstances has occurred.
Self-destruct mechanisms are typically found on devices and systems where malfunction could endanger large numbers of people.
Uses
Land mines
Some types of modern land mines are designed to self-destruct, or chemically render themselves inert after a period of weeks or months to reduce the likelihood of friendly casualties during the conflict or civilian casualties after the conflict's end. The Amended Protocol II to the Convention on Certain Conventional Weapons (CCW), amended in 1996, requires that anti-personnel land mines deactivate and self-destruct, and sets standards for both. Landmines currently used by the United States military are designed to self-destruct after between 4 hours and 15 days depending upon the type. The landmines have a battery and when the battery dies, the land mine self-destructs. The self-destruct system never failed in over 67,000 tested landmines in a variety of conditions. Not all self-destruct mechanisms are absolutely reliable, and most landmines that have been laid throughout history are not equipped to self-destruct. Landmines can also be designed to self-deactivate, for instance by a battery running out of a charge, but deactivation is considered a different mechanism from self-destruction.
Military ships
Another form of a self-destruct system can be seen in the naval procedure of scuttling, which is used to destroy a ship or ships to prevent them from being seized and/or reverse engineered.
Generally the scuttling of a ship uses strategically-placed explosive charges by a demolition crew and/or the deliberate cutting open of the hull rather than an in-built self-destruct system.
Rockets
Launch vehicles self-destruct when they go errant, to prevent the endangerment of nearby ground personnel, spectators, buildings and infrastructure. When a rocket flies outside o |
https://en.wikipedia.org/wiki/Space%20fountain | A space fountain is a proposed form of an extremely tall tower extending into space. As known materials cannot support a static tower with this height, a space fountain has to be an active structure: A stream of pellets is accelerated upwards from a ground station. At the top it is deflected downwards. The necessary force for this deflection supports the station at the top and payloads going up the structure. A spacecraft could launch from the top without having to deal with the atmosphere. This could reduce the cost of placing payloads into orbit. Its largest downside is that the tower will re-enter the atmosphere if the accelerator fails and the stream stops. This risk could be reduced by several redundant streams.
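The supporting force described above is momentum flux: reversing a pellet stream of mass flow rate mdot at speed v yields a force F = 2 * mdot * v. A back-of-the-envelope sketch with illustrative, assumed numbers (none of the figures are from the article):

```python
def stream_support_force(mdot, v):
    """Force from elastically reversing a pellet stream: F = 2 * mdot * v."""
    return 2.0 * mdot * v

g = 9.81               # m/s^2, surface gravity
station_mass = 100e3   # kg, assumed top-station mass
pellet_speed = 4000.0  # m/s, assumed stream speed at the deflector

# Mass flow needed so the deflection force balances the station's weight:
required_mdot = station_mass * g / (2.0 * pellet_speed)  # roughly 123 kg/s
```

The same balance applies to each tube-supporting stream, which is why an accelerator failure is the design's critical risk: the force vanishes the moment the stream stops.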
The lower part of a pellet stream has to be in a vacuum tube to avoid excessive drag in the atmosphere. Similar to the top station, this tube can be supported by its own system of transferring momentum from a space-bound stream to a surface-bound stream. If the tube itself also accelerates the station-supporting stream, it would have to transfer additional momentum to an earth-bound stream in order to keep itself supported. The tube-supporting streams could also be designed to integrate with the station-supporting streams.
Unlike a space elevator, this concept does not need extremely strong materials anywhere, and unlike space elevators and orbital rings, it does not need a long structure.
See also
Launch loop
Mass driver
Megascale engineering
Non-rocket spacelaunch
Orbital ring
Space elevator
Space gun
References
Space elevator
Exploratory engineering
Megastructures
Spaceflight technology
Vertical transport devices
Space access
Hypothetical technology |
https://en.wikipedia.org/wiki/Tymnet | Tymnet was an international data communications network headquartered in Cupertino, California that used virtual call packet-switched technology and X.25, SNA/SDLC, BSC and Async interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated asynchronous connections.
The business consisted of a large public network that supported dial-up users and a private network that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the United States and internationally via X.25/X.75 gateways.
As the Internet grew and became almost universally accessible in the late 1990s, the need for services such as Tymnet migrated to Internet-style connections, but the service still had some value in the Third World and for specific legacy roles. However, the value of these links continued to decrease, and Tymnet shut down in 2004.
Network
Tymnet offered local dial-up modem access in most cities in the United States and to a limited degree in Canada, which preferred its own DATAPAC service.
Users would dial into Tymnet and then interact with a simple command-line interface to establish a connection with a remote system. Once connected, data was passed to and from the user as if connected directly to a modem on the distant system. For various technical reasons, the connection was not entirely "invisible", and sometimes required the user to enter arcane commands to make 8-bit clean connections work properly for file transfer.
Tymnet was extensively used by large companies to provide dial-up services for their employees who were "on the road", as well as a gateway for users to connect to large online services such as Compu |
https://en.wikipedia.org/wiki/Ernst%20Abbe | Ernst Karl Abbe (23 January 1840 – 14 January 1905) was a German physicist, optical scientist, entrepreneur, and social reformer. Together with Otto Schott and Carl Zeiss, he developed numerous optical instruments. He was also a co-owner of Carl Zeiss AG, a German manufacturer of scientific microscopes, astronomical telescopes, planetariums, and other advanced optical systems.
Personal life
Abbe was born 23 January 1840 in Eisenach, Saxe-Weimar-Eisenach, to Georg Adam Abbe and Elisabeth Christina Barchfeldt. He came from a humble home – his father was a foreman in a spinnery. Supported by his father's employer, Abbe was able to attend secondary school and to obtain the general qualification for university entrance with fairly good grades, at the Eisenach Gymnasium, which he graduated from in 1857. By the time he left school, his scientific talent and his strong will had already become obvious. Thus, in spite of the family's strained financial situation, his father decided to support Abbe's studies at the Universities of Jena (1857–1859) and Göttingen (1859–1861). During his time as a student, Abbe gave private lessons to improve his income. His father's employer continued to fund him. Abbe was awarded his PhD in Göttingen on 23 March 1861. While at school, he was influenced by Bernhard Riemann and Wilhelm Eduard Weber, who also happened to be one of the Göttingen Seven. This was followed by two short assignments at the Göttingen observatory and at Physikalischer Verein in Frankfurt (an association of citizens interested in physics and chemistry that was founded by Johann Wolfgang von Goethe in 1824 and still exists today). On 8 August 1863 he qualified as a university lecturer at the University of Jena. In 1870, he accepted a contract as an associate professor of experimental physics, mechanics and mathematics in Jena. In 1871, he married Else Snell, daughter of the mathematician and physicist Karl Snell, one of Abbe's teachers, with whom he had two daughters. H |
https://en.wikipedia.org/wiki/Phone-sync | A phone-sync (also known as a tape-sync, a simul-rec, or a double-ender) was a technique used to conduct televised interviews over long distances in the 1980s before satellite television became commonplace, in order to provide video to what would otherwise be an audio-only interview. It was commonplace in such news programs as The Journal on CBC Television.
A standard tape-sync works as follows: an interviewer, usually in a television studio, is videotaped conducting an interview via a long-distance phone call to the interviewee in another part of the world. This interviewee, often in a studio in front of a background representing the city in which he or she is located, is videotaped as he or she participates in the interview. The two videotapes are then sent to the interviewer's production team to be synchronized through video editing. Cuts between shots of the interviewer and interviewee are made accordingly, and the higher-quality sound of the videotapes is used instead of the telephone audio. For effect, the interviewer may be taped looking into a bluescreen or greenscreen, into which the video of the interviewee would at this point be resized if necessary and inserted using chroma key.
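The synchronization step can be automated by cross-correlating the two recordings to find their relative offset. A minimal sketch with synthetic signals (NumPy is assumed available; historical tape-sync workflows relied on manual editing rather than code like this):

```python
import numpy as np

# Two recordings of the same interview; the remote track starts `delay` samples late.
rng = np.random.default_rng(0)
studio = rng.standard_normal(500)                    # stand-in for the studio track
delay = 7
remote = np.concatenate([np.zeros(delay), studio])   # same audio, delayed

# Cross-correlate and read off the lag that best aligns the two tracks.
xcorr = np.correlate(remote, studio, mode="full")
lag = int(np.argmax(xcorr)) - (len(studio) - 1)      # recovers `delay`
```

Once the lag is known, one track is shifted by that many samples so the high-quality recordings line up with each other before editing.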
The double-ender technique has become much less commonplace with the proliferation of live satellite television feeds and video over Internet (Skype, etc.), but is still used today when such technology is not available.
The double-ender technique can also be done with audio-only mediums, such as radio or podcasting. Syndicated radio show interviews are often done as a double-ender, with the host in their studio, and the guests recording in their own city, in the studio of their local affiliate.
Double-ender audio interviews have become more common with the rise in popularity of podcasting. The result is a podcast that sounds as if the hosts and guests are in the same room, when they are actually in different cities.
References
Further reading
Television technology
Broad |
https://en.wikipedia.org/wiki/Hypersurface | In geometry, a hypersurface is a generalization of the concepts of hyperplane, plane curve, and surface. A hypersurface is a manifold or an algebraic variety of dimension n − 1, which is embedded in an ambient space of dimension n, generally a Euclidean space, an affine space or a projective space.
Hypersurfaces share, with surfaces in a three-dimensional space, the property of being defined by a single implicit equation, at least locally (near every point), and sometimes globally.
A hypersurface in a (Euclidean, affine, or projective) space of dimension two is a plane curve. In a space of dimension three, it is a surface.
For example, the equation
x_1^2 + x_2^2 + ... + x_{n+1}^2 = 1
defines an algebraic hypersurface of dimension n in the Euclidean space of dimension n + 1. This hypersurface is also a smooth manifold, and is called a hypersphere or an n-sphere.
Smooth hypersurface
A hypersurface that is a smooth manifold is called a smooth hypersurface.
In R^n, a smooth hypersurface is orientable. Every connected compact smooth hypersurface is a level set, and separates R^n into two connected components; this is related to the Jordan–Brouwer separation theorem.
Affine algebraic hypersurface
An algebraic hypersurface is an algebraic variety that may be defined by a single implicit equation of the form
p(x_1, ..., x_n) = 0,
where p is a multivariate polynomial. Generally the polynomial is supposed to be irreducible. When this is not the case, the hypersurface is not an algebraic variety, but only an algebraic set. It may depend on the authors or the context whether a reducible polynomial defines a hypersurface. For avoiding ambiguity, the term irreducible hypersurface is often used.
As for algebraic varieties, the coefficients of the defining polynomial may belong to any fixed field k, and the points of the hypersurface are the zeros of p in the affine space K^n, where K is an algebraically closed extension of k.
A hypersurface may have singularities, which are the common zeros, if any, of the defining polynomial and its partial deriva |
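The criterion just stated, singular points as common zeros of the defining polynomial and its partial derivatives, can be checked symbolically. A sketch using SymPy (assumed available), with a nodal cubic chosen for illustration:

```python
import sympy as sp

x, y = sp.symbols("x y")
# Nodal cubic y^2 = x^3 + x^2: an affine plane curve with one singular point.
p = y**2 - x**3 - x**2

# Singular points are the common zeros of p and all its partial derivatives.
singular = sp.solve([p, sp.diff(p, x), sp.diff(p, y)], [x, y], dict=True)
# Expected: only the node at the origin survives all three equations.
```

The gradient vanishes at (0, 0) and at (−2/3, 0), but only the origin also lies on the curve, so it is the unique singular point.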
https://en.wikipedia.org/wiki/Codimension | In mathematics, codimension is a basic geometric idea that applies to subspaces in vector spaces, to submanifolds in manifolds, and suitable subsets of algebraic varieties.
For affine and projective algebraic varieties, the codimension equals the height of the defining ideal. For this reason, the height of an ideal is often called its codimension.
The dual concept is relative dimension.
Definition
Codimension is a relative concept: it is only defined for one object inside another. There is no “codimension of a vector space (in isolation)”, only the codimension of a vector subspace.
If W is a linear subspace of a finite-dimensional vector space V, then the codimension of W in V is the difference between the dimensions:
codim(W) = dim(V) − dim(W).
It is the complement of the dimension of W, in that, with the dimension of W, it adds up to the dimension of the ambient space V:
dim(W) + codim(W) = dim(V).
Similarly, if N is a submanifold or subvariety in M, then the codimension of N in M is
codim(N) = dim(M) − dim(N).
Just as the dimension of a submanifold is the dimension of the tangent bundle (the number of dimensions that you can move on the submanifold), the codimension is the dimension of the normal bundle (the number of dimensions you can move off the submanifold).
More generally, if W is a linear subspace of a (possibly infinite dimensional) vector space V then the codimension of W in V is the dimension (possibly infinite) of the quotient space V/W, which is more abstractly known as the cokernel of the inclusion. For finite-dimensional vector spaces, this agrees with the previous definition,
codim(W) = dim(V/W) = dim(V) − dim(W),
and is dual to the relative dimension as the dimension of the kernel.
Finite-codimensional subspaces of infinite-dimensional spaces are often useful in the study of topological vector spaces.
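For concrete subspaces of R^n given by spanning vectors, the codimension can be computed numerically as the ambient dimension minus the matrix rank. A sketch (NumPy assumed; the example vectors are illustrative):

```python
import numpy as np

def codim(spanning_vectors, ambient_dim):
    """Codimension of the span of the given vectors inside R^ambient_dim."""
    rank = np.linalg.matrix_rank(np.array(spanning_vectors, dtype=float))
    return ambient_dim - rank

# A plane (two independent vectors) in R^4 has dimension 2 and codimension 2.
c = codim([[1, 0, 0, 0], [0, 1, 1, 0]], 4)
```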
Additivity of codimension and dimension counting
The fundamental property of codimension lies in its relation to intersection: if W1 has codimension k1, and W2 has codimension k2, then if U is their intersection with codimension j we have
max (k1, k2) ≤ j ≤ k1 + k2 |
https://en.wikipedia.org/wiki/Inkometer | An inkometer is a specialized measuring instrument used by the printing industry to measure the "tack" (adhesiveness) of an ink with the roller system on an offset press. Tack matters because it must not be so excessive that it prevents effective transfer from the rollers to the plate, then to the blanket, and onto the substrate being printed. Inks can also be tack "graded" in descending sequence to allow for better trapping of one color over another. Inks with too much tack can cause the surface of the paper to pick off and interfere with transfer on subsequent printing units and copies.
The amount of tack can be controlled by changing the amount of solvent or other diluent used in the ink. The inkometer is made up of three rollers: the center roller is a temperature-controlled brass roller, the bottom roller is an oscillating rubber distribution roller, and the top roller is attached to a load cell which measures the tack at a given press speed (i.e. 800 feet per minute for a web press or 15,000 sheets per hour for an offset press).
References
Measuring instruments
Printing terminology
Print production |
https://en.wikipedia.org/wiki/Address%20pool | In the context of the Internet addressing structure, an address pool is a set of Internet Protocol addresses available at any level in the IP address allocation hierarchy. At the top level, the IP address pool is managed by the Internet Assigned Numbers Authority (IANA). The total IPv4 address pool contains 4,294,967,296 (2^32) addresses, while the size of the IPv6 address pool is 2^128 (approximately 3.4 × 10^38) addresses.
In the context of application design, an address pool may be the availability of a set of addresses (IP address, MAC address) available to an application that is shared among its users, or available for allocation to users, such as in host configurations with the Dynamic Host Configuration Protocol (DHCP).
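A toy allocator illustrates the idea of a shared pool from which addresses are leased and returned, in the spirit of DHCP (this is an illustrative sketch, not the DHCP protocol itself; the class and subnet are invented):

```python
import ipaddress

class AddressPool:
    """Lease addresses from a subnet and reclaim them, DHCP-style."""

    def __init__(self, cidr):
        self.free = list(ipaddress.ip_network(cidr).hosts())  # usable host addresses
        self.leased = set()

    def allocate(self):
        if not self.free:
            raise RuntimeError("address pool exhausted")
        addr = self.free.pop(0)
        self.leased.add(addr)
        return addr

    def release(self, addr):
        if addr in self.leased:
            self.leased.remove(addr)
            self.free.append(addr)

pool = AddressPool("192.168.0.0/29")  # a /29 yields 6 usable host addresses
first = pool.allocate()
```

A real DHCP server adds lease timers, conflict detection, and persistence on top of this basic pop-and-return structure.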
See also
Dynamic Host Configuration Protocol
IPv4 address exhaustion
List of assigned /8 IPv4 address blocks
References
Internet architecture |
https://en.wikipedia.org/wiki/Quoted-printable | Quoted-Printable, or QP encoding, is a binary-to-text encoding system using printable ASCII characters (alphanumeric and the equals sign =) to transmit 8-bit data over a 7-bit data path or, generally, over a medium which is not 8-bit clean. Historically, because of the wide range of systems and protocols that could be used to transfer messages, e-mail was often assumed to be non-8-bit-clean – however, modern SMTP servers are in most cases 8-bit clean and support 8BITMIME extension. It can also be used with data that contains non-permitted octets or line lengths exceeding SMTP limits. It is defined as a MIME content transfer encoding for use in e-mail.
QP works by using the equals sign = as an escape character. It also limits line length to 76 characters, as some software has limits on line length.
Introduction
MIME defines mechanisms for sending other kinds of information in e-mail, including text in languages other than English, using character encodings other than ASCII. However, these encodings often use byte values outside the ASCII range so they need to be encoded further before they are suitable for use in a non-8-bit-clean environment. Quoted-Printable encoding is one method used for mapping arbitrary bytes into sequences of ASCII characters. So, Quoted-Printable is not a character encoding scheme itself, but a data coding layer to be used under some byte-oriented character encoding. QP encoding is reversible, meaning the original bytes and hence the non-ASCII characters they represent can be identically recovered.
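Python's standard library quopri module implements this encoding, which makes the byte-to-ASCII mapping easy to see (the example string is illustrative):

```python
import quopri

# Encode non-ASCII bytes as =XX escape sequences; UTF-8 for "é" becomes =C3=A9.
encoded = quopri.encodestring("café".encode("utf-8"))

# Decoding is the exact inverse mapping, recovering the original bytes.
decoded = quopri.decodestring(b"caf=C3=A9").decode("utf-8")  # "café"
```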
Quoted-Printable and Base64 are the two MIME content transfer encodings, if the trivial "7bit" and "8bit" encoding are not counted. If the text to be encoded does not contain many non-ASCII characters, then Quoted-Printable results in a fairly readable and compact encoded result. On the other hand, if the input has many 8-bit characters, then Quoted-Printable becomes both unreadable and extremely inefficient. Base64 is not human-readable, but has a unifor |
https://en.wikipedia.org/wiki/Ordered%20ring | In abstract algebra, an ordered ring is a (usually commutative) ring R with a total order ≤ such that for all a, b, and c in R:
if a ≤ b then a + c ≤ b + c.
if 0 ≤ a and 0 ≤ b then 0 ≤ ab.
Examples
Ordered rings are familiar from arithmetic. Examples include the integers, the rationals and the real numbers. (The rationals and reals in fact form ordered fields.) The complex numbers, in contrast, do not form an ordered ring or field, because there is no inherent order relationship between the elements 1 and i.
Positive elements
In analogy with the real numbers, we call an element c of an ordered ring R positive if 0 < c, and negative if c < 0. 0 is considered to be neither positive nor negative.
The set of positive elements of an ordered ring R is often denoted by R+. An alternative notation, favored in some disciplines, is to use R+ for the set of nonnegative elements, and R++ for the set of positive elements.
Absolute value
If a is an element of an ordered ring R, then the absolute value of a, denoted |a|, is defined thus:
|a| = a if 0 ≤ a, and |a| = −a otherwise,
where −a is the additive inverse of a and 0 is the additive identity element.
Discrete ordered rings
A discrete ordered ring or discretely ordered ring is an ordered ring in which there is no element between 0 and 1. The integers are a discrete ordered ring, but the rational numbers are not.
Basic properties
For all a, b and c in R:
If a ≤ b and 0 ≤ c, then ac ≤ bc. This property is sometimes used to define ordered rings instead of the second property in the definition above.
|ab| = |a||b|.
An ordered ring that is not trivial is infinite.
Exactly one of the following is true: a is positive, −a is positive, or a = 0. This property follows from the fact that ordered rings are abelian, linearly ordered groups with respect to addition.
In an ordered ring, no negative element is a square: Firstly, 0 is a square. Now if a ≠ 0 and a = b^2, then b ≠ 0 and a = (−b)^2; as either b or −b is positive, a must be nonnegative.
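These axioms and derived properties can be spot-checked on a finite sample of integers, the prototypical ordered ring. A sketch (the helper name is invented for illustration):

```python
from itertools import product

def check_ordered_ring_axioms(elems):
    """Spot-check the ordered-ring axioms and |ab| = |a||b| on a finite sample."""
    elems = list(elems)
    for a, b, c in product(elems, repeat=3):
        if a <= b:
            assert a + c <= b + c        # order is translation-invariant
        if 0 <= a and 0 <= b:
            assert 0 <= a * b            # nonnegatives are closed under product
        if a <= b and 0 <= c:
            assert a * c <= b * c        # the derived monotonicity property
        assert abs(a * b) == abs(a) * abs(b)
    return True

ok = check_ordered_ring_axioms(range(-4, 5))
```

Such a check cannot prove the axioms in general, but it makes the definitions concrete and would flag a structure (like the complex numbers under any total order) that violates them.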
See also
|
https://en.wikipedia.org/wiki/Colostrum | Colostrum, or first milk, is the first form of milk produced by the mammary glands of humans and other mammals immediately following delivery of the newborn. It may be called beestings when referring to the first milk of a cow or similar animal. Most species will begin to generate colostrum just prior to giving birth. Colostrum has an especially high amount of bioactive compounds compared to mature milk to give the newborn the best possible start to life. Specifically, colostrum contains antibodies to protect the newborn against disease and infection, and immune and growth factors and other bioactives that help to activate a newborn's immune system, jumpstart gut function, and seed a healthy gut microbiome in the first few days of life. The bioactives found in colostrum are essential for a newborn's health, growth and vitality. Colostrum strengthens a baby's immune system and is filled with white blood cells to protect it from infection.
At birth, the surroundings of the newborn mammal change from the relatively sterile environment in the mother's uterus, with a constant nutrient supply via the placenta, to the microbe-rich environment outside, with irregular oral intake of complex milk nutrients through the gastrointestinal tract. This transition puts high demands on the gastrointestinal tract of the neonate, as the gut plays an important part in both the digestive system and the immune system. Colostrum has evolved to care for highly sensitive mammalian neonates and contributes significantly to initial immunological defense as well as to the growth, development, and maturation of the neonate's gastrointestinal tract by providing key nutrients and bioactive factors. Bovine colostrum powder is rich in protein and low in sugar and fat. Bovine colostrum can also be used for a range of conditions in humans, and can boost a neonate's immunity.
Colostrum also has a mild laxative effect, encouraging the passing of a baby's first stool, which is called meconium. Thi |
https://en.wikipedia.org/wiki/Density%20of%20states | In solid-state physics and condensed matter physics, the density of states (DOS) of a system describes the number of modes per unit frequency range. The density of states is defined as D(E) = N(E)/V, where N(E)δE is the number of states in the system of volume V whose energies lie in the range from E to E + δE. It is mathematically represented as a distribution by a probability density function, and it is generally an average over the space and time domains of the various states occupied by the system. The density of states is directly related to the dispersion relations of the properties of the system. High DOS at a specific energy level means that many states are available for occupation.
Generally, the density of states of matter is continuous. In isolated systems however, such as atoms or molecules in the gas phase, the density distribution is discrete, like a spectral density. Local variations, most often due to distortions of the original system, are often referred to as local densities of states (LDOSs).
Introduction
In quantum mechanical systems, waves, or wave-like particles, can occupy modes or states with wavelengths and propagation directions dictated by the system. For example, in some systems, the interatomic spacing and the atomic charge of a material might allow only electrons of certain wavelengths to exist. In other systems, the crystalline structure of a material might allow waves to propagate in one direction, while suppressing wave propagation in another direction. Often, only specific states are permitted. Thus, it can happen that many states are available for occupation at a specific energy level, while no states are available at other energy levels.
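As a concrete illustration of an energy-dependent density of states, the standard textbook result for a three-dimensional free-electron gas, g(E) proportional to sqrt(E), can be evaluated numerically (this formula is a standard assumption, not stated in the excerpt):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg

def free_electron_dos(energy_j):
    """3D free-electron-gas DOS per unit volume:
    g(E) = (1 / 2 pi^2) * (2 m / hbar^2)^(3/2) * sqrt(E)."""
    return (2.0 * M_E / HBAR**2) ** 1.5 * math.sqrt(energy_j) / (2.0 * math.pi**2)

# sqrt(E) scaling: quadrupling the energy doubles the density of states.
ratio = free_electron_dos(4.0e-19) / free_electron_dos(1.0e-19)  # 2.0
```

The square-root dependence is one example of how the dispersion relation of a system shapes its DOS; band-structure effects in real crystals modify this simple form.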
Looking at the density of states of electrons at the band edge between the valence and conduction bands in a semiconductor, for an electron in the conduction band, an increase of the electron energy makes more states available for occupation. Alternatively, the density of states is discontinuous for an interval of energy |
https://en.wikipedia.org/wiki/Quark%20%28kernel%29 | In computing, Quark is an operating system kernel used in MorphOS. It is a microkernel designed to run fully virtualized computers, called boxes (see sandbox). Currently, only one box is available, the ABox, which lets users run extant AmigaOS software compiled for Motorola 68000 series (MC680x0 or 68k) and PowerPC central processing units (CPUs).
Design goals
The Quark microkernel is not a member of the L4 microkernel family, but borrows concepts from it, including: the clan (group of tasks), ID concept, and recursive address mapping. Quark also has an asynchronous/synchronous message interface similar to Amiga's Exec kernel but adapted to an environment with memory protection.
Other Quark features include:
High super/usermode switch speed
Low interrupt latency
Interrupt threads (IntThreads) and Int P-code abstraction
Symmetric multiprocessing (SMP)
Task/thread and clan/chief models
Resource tracking
Virtual memory (optional)
Distributed computing
No access to kernel structures
Clean design with an elegant application programming interface (API)
Micro/pico kernel mix
For this new kernel, a hardware abstraction layer is used which provides the needed hardware resource information, such as by scanning all Amiga Zorro II bus boards, Peripheral Component Interconnect (PCI) boards, and local hardware resources.
Functions
SYS_AddLinkMessage
SYS_AttemptSemaphore
SYS_AttemptSemaphoreShared
SYS_CopyCPUHalConfig
SYS_CreateMemList
SYS_CreateTask
SYS_DeletePort
SYS_DeleteSemaphore
SYS_DumpMemHeader
SYS_FindFreeMemArea
SYS_FindSkipSize
SYS_GetLinkMessage
SYS_GetMessageAttr
SYS_GetNextCPU
SYS_Init
SYS_InsideClan
SYS_IsClanMember
SYS_MMUAddPage
SYS_MMUGetEntry
SYS_MoveRomModuleToMemoryEnd
SYS_ObtainPort
SYS_ObtainSemaphore
SYS_ObtainSemaphoreShared
SYS_ReleaseSemaphore
SYS_ReplyMessage
SYS_SendMessage
SYS_SetMessageAttr
SYS_SetupPageTable
SYS_ShowExceptionThreads
SYS_ShowForbidThreads
SYS_ShowIntThreads
SYS_ShowQuarkState
SYS_ShowReadyThreads
SYS |
https://en.wikipedia.org/wiki/Transmeta%20Efficeon | The Efficeon (stylized as efficēon) processor is Transmeta's second-generation 256-bit VLIW design, released in 2004, which employs a software engine, Code Morphing Software (CMS), to convert code written for x86 processors to the native instruction set of the chip. Like its predecessor, the Transmeta Crusoe (a 128-bit VLIW architecture), Efficeon stresses computational efficiency, low power consumption, and a low thermal footprint.
Processor
Efficeon most closely mirrors the feature set of Intel Pentium 4 processors, although, like AMD Opteron processors, it supports a fully integrated memory controller, a HyperTransport IO bus, and the NX bit, or no-execute x86 extension to PAE mode. NX bit support is available starting with CMS version 6.0.4.
Efficeon's computational performance is thought to be lower than that of contemporary mobile CPUs such as the Intel Pentium M, although little appears to have been published comparing these processors directly.
Efficeon came in two package types: a 783- and a 592-contact ball grid array (BGA). Its power consumption is moderate (with some consuming as little as 3 watts at 1 GHz and 7 watts at 1.5 GHz), so it can be passively cooled.
Two generations of this chip were produced. The first generation (TM8600) was manufactured using a TSMC 0.13 micrometre process and produced at speeds up to 1.2 GHz. The second generation (TM8800 and TM8820) was manufactured using a Fujitsu 90 nm process and produced at speeds ranging from 1 GHz to 1.7 GHz.
Internally, the Efficeon has two arithmetic logic units, two load/store/add units, two execute units, two floating-point/MMX/SSE/SSE2 units, one branch prediction unit, one alias unit, and one control unit. The VLIW core can execute one 256-bit VLIW instruction, called a molecule, per cycle; each molecule has room for eight 32-bit instructions, called atoms.
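The molecule/atom layout described above can be sketched as plain bit-packing. This is purely illustrative arithmetic (the real Efficeon instruction encoding is not reproduced here); it assumes only that a 256-bit molecule holds eight 32-bit slots:

```python
def pack_molecule(atoms):
    """Pack up to eight 32-bit 'atoms' into one 256-bit 'molecule' integer.

    Illustrative only: this models the 8 x 32-bit = 256-bit arithmetic,
    not the actual Efficeon instruction format."""
    assert len(atoms) <= 8
    molecule = 0
    for i, atom in enumerate(atoms):
        assert 0 <= atom < 2**32, "each atom must fit in 32 bits"
        molecule |= atom << (32 * i)
    return molecule

def unpack_molecule(molecule):
    """Split a 256-bit molecule back into its eight 32-bit atom slots."""
    return [(molecule >> (32 * i)) & 0xFFFFFFFF for i in range(8)]

m = pack_molecule([0xDEADBEEF, 0x12345678])
assert m < 2**256
assert unpack_molecule(m)[:2] == [0xDEADBEEF, 0x12345678]
```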
The Efficeon has a 128 KB L1 instruction cache, a 64 KB L1 data cache and a 1 MB L2 cache. All caches are on die.
Additio |
https://en.wikipedia.org/wiki/Power%20loom | A power loom is a mechanized loom, and was one of the key developments in the industrialization of weaving during the early Industrial Revolution. The first power loom was designed and patented in 1785 by Edmund Cartwright. It was refined over the next 47 years until a design by the Howard and Bullough company made the operation completely automatic. This device was designed in 1834 by James Bullough and William Kenworthy, and was named the Lancashire loom.
By the year 1850, there were a total of around 260,000 power loom operations in England. Two years later came the Northrop loom, which replenished the shuttle when it was empty; this replaced the Lancashire loom.
Shuttle looms
The main components of the loom are the warp beam, heddles, harnesses, shuttle, reed, and takeup roll. In the loom, yarn processing includes shedding, picking, battening and taking-up operations.
Shedding. Shedding is the raising of the warp yarns to form a loop through which the filling yarn, carried by the shuttle, can be inserted. The shed is the vertical space between the raised and unraised warp yarns. On the modern loom, simple and intricate shedding operations are performed automatically by the heddle or heald frame, also known as a harness. This is a rectangular frame to which a series of wires, called heddles or healds, are attached. The yarns are passed through the eye holes of the heddles, which hang vertically from the harnesses. The weave pattern determines which harness controls which warp yarns, and the number of harnesses used depends on the complexity of the weave. Two common methods of controlling the heddles are dobbies and a Jacquard Head.
Picking. As the harnesses raise the heddles or healds, which raise the warp yarns, the shed is created. The filling yarn is inserted through the shed by a small carrier device called a shuttle. The shuttle is normally pointed at each end to allow passage through the shed. In a traditional shuttle loom, the filling yarn is wound |
https://en.wikipedia.org/wiki/Parchive | Parchive (a portmanteau of parity archive, and formally known as Parity Volume Set Specification) is an erasure code system that produces par files for checksum verification of data integrity, with the capability to perform data recovery operations that can repair or regenerate corrupted or missing data.
Parchive was originally written to solve the problem of reliable file sharing on Usenet, but it can be used for protecting any kind of data from data corruption, disc rot, bit rot, and accidental or malicious damage. Despite the name, Parchive uses more advanced techniques (specifically error correction codes) than simplistic parity methods of error detection.
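Parchive's PAR2 format uses Reed–Solomon codes, which tolerate multiple losses; the recovery idea can be sketched with the simplest member of the same erasure-code family, a single XOR parity block (a toy illustration, not Parchive's actual on-disk format):

```python
def make_parity(blocks):
    """XOR all equal-length blocks together into one parity block.

    One XOR parity block can regenerate any ONE missing data block;
    Reed-Solomon, as used by PAR2, generalizes this to multiple losses."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild the single missing block from the survivors plus parity."""
    return make_parity(list(surviving_blocks) + [parity])

data = [b"hello   ", b"usenet! ", b"parchive"]   # three 8-byte data blocks
par = make_parity(data)
rebuilt = recover([data[0], data[2]], par)        # block 1 was "lost"
assert rebuilt == data[1]
```

The key property, shared with PAR2, is that the parity is computed once at upload time and any sufficiently large subset of blocks can reconstruct the rest.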
As of 2014, PAR1 is obsolete, PAR2 is mature for widespread use, and PAR3 is a discontinued experimental version developed by MultiPar author Yutaka Sawada. The original SourceForge Parchive project has been inactive since April 30, 2015. A new PAR3 specification has been worked on since April 28, 2019 by PAR2 specification author Michael Nahas. An alpha version of the PAR3 specification was published on January 29, 2022, while the program itself is still being developed.
History
Parchive was intended to increase the reliability of transferring files via Usenet newsgroups. Usenet was originally designed for informal conversations, and the underlying protocol, NNTP, was not designed to transmit arbitrary binary data. Another limitation, which was acceptable for conversations but not for files, was that messages were normally fairly short in length and limited to 7-bit ASCII text.
Various techniques were devised to send files over Usenet, such as uuencoding and Base64. Later Usenet software allowed 8-bit extended ASCII, which permitted new techniques like yEnc. Large files were broken up to reduce the effect of a corrupted download, but the unreliable nature of Usenet remained.
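A quick sketch of what encodings such as Base64 do: they map arbitrary binary data into ASCII-only text that 7-bit transports can carry, at the cost of roughly 33% expansion (4 output bytes per 3 input bytes):

```python
import base64

payload = bytes(range(256))            # arbitrary binary data, not 7-bit safe
encoded = base64.b64encode(payload)    # ASCII-only, safe for text transports

assert encoded.isascii()
assert base64.b64decode(encoded) == payload
print(len(payload), "->", len(encoded))   # 256 -> 344, ~33% larger
```

yEnc, enabled by 8-bit transports, reduced this overhead to a few percent, which is why it displaced Base64-style encodings on Usenet.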
With the introduction of Parchive, parity files could be created that were then uploaded along with the original data files. If an |
https://en.wikipedia.org/wiki/Elder%20abuse | Elder abuse (also called elder mistreatment, senior abuse, abuse in later life, abuse of older adults, abuse of older women, and abuse of older men) is "a single, or repeated act, or lack of appropriate action, occurring within any relationship where there is an expectation of trust, which causes harm or distress to an older person." This definition has been adopted by the World Health Organization (WHO) from a definition put forward by Hourglass (formerly Action on Elder Abuse) in the UK. Laws protecting the elderly from abuse are similar to and related to laws protecting dependent adults from abuse.
It includes harm by people the older person knows or has a relationship with, such as a spouse, partner, or family member; a friend or neighbor; or people that the older person relies on for services. Many forms of elder abuse are recognized as types of domestic violence or family violence, since they are committed by family members. Paid caregivers have also been known to prey on their elderly patients.
While a variety of circumstances are considered elder abuse, it does not include general criminal activities against older persons, such as home break-ins, robbery or muggings in the street, or "distraction burglary," where a stranger distracts an older person at the doorstep while another person enters the property to steal.
The abuse of elders by caregivers is a worldwide issue. In 2002, WHO brought international attention to the issue of elder abuse. Over the years, government agencies and community professional groups, worldwide, have specified elder abuse as a social problem. In 2006, the International Network for Prevention of Elder Abuse (INPEA) designated June 15 as World Elder Abuse Awareness Day (WEAAD), and an increasing number of events are held across the globe on this day to raise awareness of elder abuse and highlight ways to challenge such abuse.
Types
Although there are common themes of elder abuse across nations, there are also unique manifestat |
https://en.wikipedia.org/wiki/Gene%20family | A gene family is a set of several similar genes, formed by duplication of a single original gene, and generally with similar biochemical functions. One such family are the genes for human hemoglobin subunits; the ten genes are in two clusters on different chromosomes, called the α-globin and β-globin loci. These two gene clusters are thought to have arisen as a result of a precursor gene being duplicated approximately 500 million years ago.
Genes are categorized into families based on shared nucleotide or protein sequences. Phylogenetic techniques can be used as a more rigorous test. The positions of exons within the coding sequence can be used to infer common ancestry. Knowing the sequence of the protein encoded by a gene can allow researchers to apply methods that find similarities among protein sequences that provide more information than similarities or differences among DNA sequences.
If the genes of a gene family encode proteins, the term protein family is often used in an analogous manner to gene family.
The expansion or contraction of gene families along a specific lineage can be due to chance, or can be the result of natural selection. Distinguishing between these two cases is often difficult in practice. Recent work uses a combination of statistical models and algorithmic techniques to detect gene families that are under the effect of natural selection.
The HUGO Gene Nomenclature Committee (HGNC) creates nomenclature schemes using a "stem" (or "root") symbol for members of a gene family (by homology or function), with a hierarchical numbering system to distinguish the individual members. For example, for the peroxiredoxin family, PRDX is the root symbol, and the family members are PRDX1, PRDX2, PRDX3, PRDX4, PRDX5, and PRDX6.
Basic structure
One level of genome organization is the grouping of genes into several gene families. Gene families are groups of related genes that share a common ancestor. Members of gene families may be paralogs or orthologs |
https://en.wikipedia.org/wiki/Kalanchoe | Kalanchoe, also written Kalanchöe or Kalanchoë, is a genus of about 125 species of tropical, succulent plants in the stonecrop family Crassulaceae, mainly native to Madagascar and tropical Africa. A Kalanchoe species was one of the first plants to be sent into space, sent on a resupply to the Soviet Salyut 1 space station in 1979. The majority of kalanchoes require around 6–8 hours of sunlight a day; a few cannot tolerate this, and survive with bright, indirect sunlight to bright shade.
Description
Most are shrubs or perennial herbaceous plants, but a few are annual or biennial. The largest, Kalanchoe beharensis from Madagascar, can reach tall, but most species are less than tall.
Kalanchoes open their flowers by growing new cells on the inner surface of the petals to force them outwards, and on the outside of the petals to close them. Kalanchoe flowers are divided into 4 sections with 8 stamens. The petals are fused into a tube, in a similar way to some related genera such as Cotyledon.
Taxonomy
The genus Kalanchoe was first described by the French botanist Michel Adanson in 1763.
The genus Bryophyllum was described by Salisbury in 1806, and the genus Kitchingia was created by Baker in 1881. Kitchingia is now regarded as a synonym for Kalanchoe, while Bryophyllum has also been treated as a separate genus. Because species of Bryophyllum appear to be nested within Kalanchoe in molecular phylogenetic analyses, Bryophyllum is now considered a section of Kalanchoe, dividing the genus into three sections: Kitchingia, Bryophyllum, and Eukalanchoe. These were formalised as subgenera by Smith and Figueiredo (2018).
Etymology
Adanson cited Georg Joseph Kamel (Camellus) as his source for the name. The name came from the Cantonese name 伽藍菜 (Jyutping: gaa1 laam4 coi3).
Kalanchoe ceratophylla and Kalanchoe laciniata are both called (apparently "Buddhist monastery [samghārāma] herb") in China. In Mandarin Chinese, it does not seem very close in pronunciation (qiélán |
https://en.wikipedia.org/wiki/Chargaff%27s%20rules | Chargaff's rules state that in the DNA of any species and any organism, the amount of guanine should be equal to the amount of cytosine, and the amount of adenine should be equal to the amount of thymine. Further, a 1:1 stoichiometric ratio of purine and pyrimidine bases (i.e., A+G = T+C) should exist. This pattern is found in both strands of the DNA. The rules were discovered by Austrian-born chemist Erwin Chargaff in the late 1940s.
Definitions
First parity rule
The first rule holds that a double-stranded DNA molecule globally has percentage base pair equality: A% = T% and G% = C%. The rigorous validation of the rule constitutes the basis of Watson–Crick pairs in the DNA double helix model.
Second parity rule
The second rule holds that both A% ≈ T% and G% ≈ C% are valid for each of the two DNA strands. This describes only a global feature of the base composition in a single DNA strand.
Research
The second parity rule was discovered in 1968. It states that, in single-stranded DNA, the number of adenine units is approximately equal to that of thymine (%A ≈ %T), and the number of cytosine units is approximately equal to that of guanine (%C ≈ %G).
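Both rules are straightforward to check computationally. The sketch below uses a made-up toy sequence; real single strands satisfy the second rule only approximately, and only at genome scale:

```python
from collections import Counter

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def base_fractions(seq):
    """Fraction of each of the four bases in a sequence."""
    counts = Counter(seq)
    total = sum(counts[b] for b in "ATGC")
    return {b: counts[b] / total for b in "ATGC"}

# First parity rule: over BOTH strands of a duplex the equalities are exact,
# because every A pairs with a T and every G with a C.
strand = "ATGCGGCCTATAGCGCATTA"          # toy sequence, not real data
duplex = strand + "".join(COMPLEMENT[b] for b in strand)
f = base_fractions(duplex)
assert f["A"] == f["T"] and f["G"] == f["C"]

# Second parity rule: within a SINGLE strand, %A ≈ %T and %G ≈ %C hold only
# approximately, and only for long (genome-scale) sequences.
print(base_fractions(strand))
```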
The first empirical generalization of Chargaff's second parity rule, called the Symmetry Principle, was proposed by Vinayakumar V. Prabhu in 1993. This principle states that for any given oligonucleotide, its frequency is approximately equal to the frequency of its complementary reverse oligonucleotide. A theoretical generalization was mathematically derived by Michel E. B. Yamagishi and Roberto H. Herai in 2011.
In 2006, it was shown that this rule applies to four of the five types of double stranded genomes; specifically it applies to the eukaryotic chromosomes, the bacterial chromosomes, the double stranded DNA viral genomes, and the archaeal chromosomes. It does not apply to organellar genomes (mitochondria and plastids) smaller than ~20-30 kbp, nor does it apply to single stranded DNA (vira |
https://en.wikipedia.org/wiki/Address%20munging | Address munging is the practice of disguising an e-mail address to prevent it from being automatically collected by unsolicited bulk e-mail providers.
Address munging is intended to disguise an e-mail address in a way that prevents computer software from seeing the real address, or even any address at all, but still allows a human reader to reconstruct the original and contact the author: an email address such as, "no-one@example.com", becomes "no-one at example dot com", for instance.
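A minimal sketch of munging, and of why the systematic form offers weak protection: the same mechanical substitution that helps a human reader can be reversed just as mechanically by a harvester (function names here are illustrative):

```python
import re

def munge(address):
    """Obfuscate for human readers: "no-one@example.com" ->
    "no-one at example dot com"."""
    return address.replace("@", " at ").replace(".", " dot ")

def demunge(text):
    """A harvester's counter-move: systematic munging is trivially undone,
    which is why it offers little real protection."""
    text = re.sub(r"\s*\bat\b\s*", "@", text)
    return re.sub(r"\s*\bdot\b\s*", ".", text)

m = munge("no-one@example.com")
print(m)                                  # no-one at example dot com
assert demunge(m) == "no-one@example.com"
```

Non-systematic munging ("remove the FRUIT from no-oneBANANA@example.com") resists this kind of automation better, at the cost of more work for legitimate correspondents.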
Any e-mail address posted in public is likely to be automatically collected by computer software used by bulk emailers (a process known as e-mail address scavenging). Addresses posted on webpages, Usenet or chat rooms are particularly vulnerable to this. Private e-mail sent between individuals is highly unlikely to be collected, but e-mail sent to a mailing list that is archived and made available via the web, or passed onto a Usenet news server and made public, may eventually be scanned and collected.
Disadvantages
Disguising addresses makes it more difficult for people to send e-mail to each other. Many see it as an attempt to fix a symptom rather than solving the real problem of e-mail spam, at the expense of causing problems for innocent users. In addition, there are e-mail address harvesters who have found ways to read the munged email addresses.
The use of address munging on Usenet is contrary to the recommendations of RFC 1036 governing the format of Usenet posts, which requires a valid e-mail address be supplied in the From: field of the post. In practice, few people follow this recommendation strictly.
Disguising e-mail addresses in a systematic manner (for example, user[at]domain[dot]com) offers little protection.
Any impediment reduces the user's willingness to take the extra trouble to email the user. In contrast, well-maintained e-mail filtering on the user's end does not drive away potential correspondents. No spam filter is 100% immune to false positives, however |
https://en.wikipedia.org/wiki/Lists%20of%20mathematics%20topics | Lists of mathematics topics cover a variety of topics related to mathematics. Some of these lists link to hundreds of articles; some link only to a few. The template to the right includes links to alphabetical lists of all mathematical articles. This article brings together the same content organized in a manner better suited for browsing.
Lists cover aspects of basic and advanced mathematics, methodology, mathematical statements, integrals, general concepts, mathematical objects, and reference tables.
They also cover equations named after people, societies, mathematicians, journals, and meta-lists.
The purpose of this list is not similar to that of the Mathematics Subject Classification formulated by the American Mathematical Society. Many mathematics journals ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The subject codes so listed are used by the two major reviewing databases, Mathematical Reviews and Zentralblatt MATH. This list has some items that would not fit in such a classification, such as list of exponential topics and list of factorial and binomial topics, which may surprise the reader with the diversity of their coverage.
Basic mathematics
This branch is typically taught in secondary education or in the first year of university.
Outline of arithmetic
Outline of discrete mathematics
List of calculus topics
List of geometry topics
Outline of geometry
List of trigonometry topics
Outline of trigonometry
List of trigonometric identities
List of logarithmic identities
List of integrals of logarithmic functions
List of set identities and relations
List of topics in logic
Areas of advanced mathematics
As a rough guide, this list is divided into pure and applied sections although in reality, these branches are overlapping and intertwined.
Pure mathematics
Algebra
Algebra includes the study of algebraic structures, which are sets and operations defined o |
https://en.wikipedia.org/wiki/Hilbert%27s%20eighth%20problem | Hilbert's eighth problem is one of David Hilbert's list of open mathematical problems posed in 1900. It concerns number theory, and in particular the Riemann hypothesis, although it is also concerned with the Goldbach conjecture. The problem as stated asked for more work on the distribution of primes and generalizations of the Riemann hypothesis to other rings where prime ideals take the place of primes.
Subtopics
Riemann hypothesis and generalizations
Hilbert calls for a solution to the Riemann hypothesis, which has long been regarded as the deepest open problem in mathematics. Given the solution, he calls for more thorough investigation into Riemann's zeta function and the prime number theorem.
Goldbach conjecture
He calls for a solution to the Goldbach conjecture, as well as more general problems, such as finding infinitely many pairs of primes solving a fixed linear diophantine equation.
Twin prime conjecture
Generalized Riemann conjecture
Finally, he calls for mathematicians to generalize the ideas of the Riemann hypothesis to counting prime ideals in a number field.
External links
English translation of Hilbert's original address
References |
https://en.wikipedia.org/wiki/Flowchart | A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task.
The flowchart shows the steps as boxes of various kinds, and their order by connecting the boxes with arrows. This diagrammatic representation illustrates a solution model to a given problem. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.
Overview
Flowcharts are used to design and document simple processes or programs. Like other types of diagrams, they help visualize the process. Two of the many benefits are that flaws and bottlenecks may become apparent. Flowcharts typically use the following main symbols:
A process step, usually called an activity, is denoted by a rectangular box.
A decision is usually denoted by a diamond.
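As a sketch (not from the text above), these two conventions map directly onto Graphviz DOT node shapes, `box` for a process step and `diamond` for a decision. The "validate input" flow below is invented for the demonstration:

```python
# A hypothetical flowchart emitted as Graphviz DOT; node names are invented.
dot = """digraph flow {
  start   [shape=oval,    label="Start"];
  read    [shape=box,     label="Read input"];    // process step: rectangle
  valid   [shape=diamond, label="Input valid?"];  // decision: diamond
  process [shape=box,     label="Process input"];
  stop    [shape=oval,    label="End"];
  start -> read -> valid;
  valid -> process [label="yes"];
  valid -> read    [label="no"];
  process -> stop;
}"""
print(dot)  # render with Graphviz: dot -Tpng flow.gv -o flow.png
```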
A flowchart is described as "cross-functional" when the chart is divided into different vertical or horizontal parts, to describe the control of different organizational units. A symbol appearing in a particular part is within the control of that organizational unit. A cross-functional flowchart allows the author to correctly locate the responsibility for performing an action or making a decision, and to show the responsibility of each organizational unit for different parts of a single process.
Flowcharts represent certain aspects of processes and are usually complemented by other types of diagram. For instance, Kaoru Ishikawa defined the flowchart as one of the seven basic tools of quality control, next to the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, and the scatter diagram. Similarly, in UML, a standard concept-modeling notation used in software development, the activity diagram, which is a type of flowchart, is just one of many different diagram types.
Nassi-Shneiderman diagrams and Drakon-charts are an alternative notation for process flow. |
https://en.wikipedia.org/wiki/ChipTest | ChipTest was a 1985 chess playing computer built by Feng-hsiung Hsu, Thomas Anantharaman and Murray Campbell at Carnegie Mellon University. It is the predecessor of Deep Thought which in turn evolved into Deep Blue.
ChipTest was based on a special VLSI-technology move generator chip developed by Hsu. ChipTest was controlled by a Sun-3/160 workstation and capable of searching approximately 50,000 moves per second. Hsu and Anantharaman entered ChipTest in the 1986 North American Computer Chess Championship, and it was only partially tested when the tournament began. It lost its first two rounds, but finished with an even score.
In August 1987 ChipTest was overhauled and renamed ChipTest-M, M standing for microcode. The new version had eliminated ChipTest's bugs and was ten times faster, searching 500,000 moves per second and running on a Sun-4 workstation. ChipTest-M won the North American Computer Chess Championship in 1987 with a 4–0 sweep.
ChipTest was invited to play in the 1987 American Open, but the team did not enter due to an objection by the HiTech team, also from Carnegie Mellon University. HiTech and ChipTest shared some code, and Hitech was already playing in the tournament. The two teams became rivals.
Designing and implementing ChipTest revealed many possibilities for improvement, so the designers started on a new machine. Deep Thought 0.01 was created in May 1988 and the version 0.02 in November the same year. This new version had two customized VLSI chess processors and it was able to search 720,000 moves per second. With the "0.02" dropped from its name, Deep Thought won the World Computer Chess Championship with a perfect 5–0 score in 1989.
See also
Computer chess
Deep Thought, the second in the line of chess computers developed by Feng-hsiung Hsu
Deep Blue (chess computer), another chess computer developed by Feng-hsiung Hsu, being the first computer to win a chess match against the world champion
References
External links
The making |
https://en.wikipedia.org/wiki/HiTech | HiTech is a chess machine built at Carnegie Mellon University under the direction of World Correspondence Chess Champion Dr. Hans J. Berliner, by Berliner, Carl Ebeling, Murray Campbell, and Gordon Goetsch.
HiTech was the first computer chess system to reach the 2400 (senior master) USCF rating level. It won the Pennsylvania State Chess Championship twice.
HiTech won the 1985 and 1989 editions of the North American Computer Chess Championship. In 1988 HiTech defeated GM Arnold Denker 3½-½ in a match (though Denker was at the time well past his best, with an Elo rating of 2300).
HiTech was one of two competing chess projects at Carnegie Mellon; the one that would succeed in the quest of beating the World Chess Champion was its rival ChipTest (the predecessor of IBM's Deep Thought and Deep Blue).
References
Chess computers
One-of-a-kind computers |
https://en.wikipedia.org/wiki/Pulse%20%28signal%20processing%29 | A pulse in signal processing is a rapid, transient change in the amplitude of a signal from a baseline value to a higher or lower value, followed by a rapid return to the baseline value.
Pulse shapes
Pulse shapes can arise out of a process called pulse-shaping. Optimum pulse shape depends on the application.
Rectangular pulse
These can be found in pulse waves, square waves, boxcar functions, and rectangular functions. In digital signals, the up and down transitions between high and low levels are called the rising edge and the falling edge. In digital systems, the detection of these edges, or an action taken in response to them, is termed edge-triggered, rising or falling, depending on which edge of the rectangular pulse is involved. A digital timing diagram is an example of a well-ordered collection of rectangular pulses.
Nyquist pulse
A Nyquist pulse is one which meets the Nyquist ISI criterion and is important in data transmission. An example of a pulse which meets this condition is the sinc function. The sinc pulse is of some significance in signal-processing theory but cannot be produced by a real generator for reasons of causality.
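A short numeric check of the property that makes the sinc pulse a Nyquist pulse: it equals 1 at t = 0 and 0 at every other integer multiple of the symbol period, so neighboring pulses spaced one period apart do not interfere at the sampling instants:

```python
import numpy as np

T = 1.0                                  # symbol period
t = np.arange(-5, 6) * T                 # sampling instants -5T ... 5T
pulse = np.sinc(t / T)                   # numpy sinc(x) = sin(pi x)/(pi x)

# Nyquist ISI criterion at the sampling instants:
assert pulse[5] == 1.0                           # unity at t = 0
assert np.allclose(np.delete(pulse, 5), 0.0)     # zero at every other kT
```

Between the sampling instants the sinc pulse is nonzero and decays only slowly, which, together with its infinite extent, is why it cannot be generated exactly in practice.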
In 2013, Nyquist pulses were produced in an effort to reduce the size of pulses in optical fibers, which enables them to be packed 10 times more closely together, yielding a corresponding 10-fold increase in bandwidth. The pulses were more than 99 percent perfect and were produced using a simple laser and modulator.
Dirac pulse
A Dirac pulse has the shape of the Dirac delta function. It has the properties of infinite amplitude and its integral is the Heaviside step function. Equivalently, it has zero width and an area under the curve of unity. This is another pulse that cannot be created exactly in real systems, but practical approximations can be achieved. It is used in testing, or theoretically predicting, the impulse response of devices and systems, particularly filters. Such responses yield a great deal of information about the system.
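A sketch of that testing use: feeding a discrete unit impulse (a Kronecker delta, the practical stand-in for a Dirac pulse) through a filter reads out the filter's impulse response directly. The moving-average filter here is an arbitrary example:

```python
import numpy as np

def moving_average(x, n=4):
    """An arbitrary example FIR filter: n-point moving average."""
    kernel = np.ones(n) / n
    return np.convolve(x, kernel)[: len(x)]

# Discrete approximation of a Dirac pulse: one unit sample, then zeros.
impulse = np.zeros(8)
impulse[0] = 1.0

h = moving_average(impulse)   # the output IS the filter's impulse response
print(h)                      # four taps of 0.25, then zeros
```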
Gaussian |
https://en.wikipedia.org/wiki/Trinitite | Trinitite, also known as atomsite or Alamogordo glass, is the glassy residue left on the desert floor after the plutonium-based Trinity nuclear bomb test on July 16, 1945, near Alamogordo, New Mexico. The glass is primarily composed of arkosic sand (quartz grains and feldspar, both microcline and a smaller amount of plagioclase, with small amounts of calcite, hornblende and augite in a matrix of sandy clay) that was melted by the atomic blast. It was first academically described in American Mineralogist in 1948.
It is usually a light green, although red trinitite was also found in one section of the blast site, and rare pieces of black trinitite also formed. It is mildly radioactive but safe to handle.
Pieces of the material may still be found at the Trinity site as of 2018, although most of it was bulldozed and buried by the United States Atomic Energy Commission in 1953.
Formation
In 2005 it was theorized by Los Alamos National Laboratory scientist Robert Hermes and independent investigator William Strickfaden that much of the mineral was formed by sand which was drawn up inside the fireball itself and then rained down in a liquid form. In a 2010 article in Geology Today, Nelson Eby of University of Massachusetts Lowell and Robert Hermes described trinitite:
This was supported by a 2011 study based on nuclear imaging and spectrometric techniques. Green trinitite is theorised by researchers to contain material from the bomb's support structure, while red trinitite contains material originating from copper electrical wiring.
An estimated joules of heat energy went into forming the glass. As the temperature required to melt the sand into the observed glass form was about , this was estimated to have been the minimum temperature the sand was exposed to. Material within the blast fireball was superheated for an estimated 2–3 seconds before resolidification. Relatively volatile elements such as zinc are found in decreasing quantities the closer the trinit |
https://en.wikipedia.org/wiki/BioRuby | BioRuby is a collection of open-source Ruby code, comprising classes for computational molecular biology and bioinformatics. It contains classes for DNA and protein sequence analysis, sequence alignment, biological database parsing, structural biology and other bioinformatics tasks. BioRuby is released under the GNU GPL version 2 or Ruby licence and is one of a number of Bio* projects, designed to reduce code duplication.
In 2011, the BioRuby project introduced the Biogem software plugin system, with two or three new plugins added every month.
BioRuby is managed via the BioRuby website and GitHub repository.
History
BioRuby
The BioRuby project was first started in 2000 by Toshiaki Katayama as a Ruby implementation of similar bioinformatics packages such as BioPerl and BioPython. The initial release of version 0.1 was frequently updated by contributors both informally and at organised “hackathon” events; in June 2005, BioRuby was funded by IPA as an Exploratory Software Project, culminating with the release of version 1.0.0 in February 2006. Between 2009 and 2012, BioRuby was the focus of a number of Google Summer of Code projects to improve the codebase. BioRuby Version 2.0.0 was released in 2019.
Biogem
Biogem provides a set of tools for bioinformaticians who want to code an application or library that uses or extends BioRuby's core library, as well as share the code as a gem on rubygems.org. Any gem published via the Biogem framework is also listed at biogems.info.
The aim of Biogem is to promote a modular approach to the BioRuby package and to simplify the creation of modules by automating the process of setting up directory/file scaffolding and a git repository, and releasing to online package databases.
Popular Biogems
See also
Open Bioinformatics Foundation
BioPerl
BioPython
BioJava
BioJS
References
External links
Free bioinformatics software |
https://en.wikipedia.org/wiki/Code%20review | Code review (sometimes referred to as peer review) is a software quality assurance activity in which one or more people check a program, mainly by viewing and reading parts of its source code, either after implementation or as an interruption of implementation. At least one of the persons must not have authored the code. The persons performing the checking, excluding the author, are called "reviewers".
Although direct discovery of quality problems is often the main goal, code reviews are usually performed to reach a combination of goals:
Better code quality: improve internal code quality and maintainability (such as readability, uniformity, and understandability)
Finding defects: improve quality regarding external aspects, especially correctness, but also find issues such as performance problems, security vulnerabilities, and injected malware
Learning/knowledge transfer: help transfer codebase knowledge, solution approaches, and quality expectations, both to the reviewers and to the author
Increased sense of mutual responsibility: increase a sense of collective code ownership and solidarity
Finding better solutions: generate ideas for new and better solutions and ideas that transcend the specific code at hand
Complying with QA guidelines and ISO/IEC standards: code reviews are mandatory in some contexts, such as air traffic software and safety-critical software
This definition of code review distinguishes it from related software quality assurance techniques, such as static code analysis, self checks, testing, and pair programming. In static code analysis the main checking is performed by an automated program, in self checks only the author checks the code, in testing the execution of the code is an integral part, and pair programming is performed continuously during implementation and not as a separate step.
Review types
There are many variations of code review processes, some of which are detailed below. Additional review types are part of IEEE 1028.
IEEE 1028-2008 lists t |
https://en.wikipedia.org/wiki/Bertrand%20paradox%20%28economics%29 | In economics and commerce, the Bertrand paradox — named after its creator, Joseph Bertrand — describes a situation in which two players (firms) reach a state of Nash equilibrium where both firms charge a price equal to marginal cost ("MC"). The paradox is that in models such as Cournot competition, an increase in the number of firms is associated with a convergence of prices to marginal costs. In these alternative models of oligopoly, a small number of firms earn positive profits by charging prices above cost.
Suppose two firms, A and B, sell a homogeneous commodity, each with the same cost of production and distribution, so that customers choose the product solely on the basis of price. It follows that demand is infinitely price-elastic. Neither A nor B will set a higher price than the other because doing so would yield the entire market to their rival. If they set the same price, the companies will share both the market and profits.
On the other hand, if either firm were to lower its price, even a little, it would gain the whole market and substantially larger profits. Since both A and B know this, they will each try to undercut their competitor until the product is selling at zero economic profit. This is the pure-strategy Nash equilibrium. Recent work has shown that there may be an additional mixed-strategy Nash equilibrium with positive economic profits under the assumption that monopoly profits are infinite. For the case of finite monopoly profits, it has been shown that positive profits under price competition are impossible in mixed equilibria and even in the more general case of correlated equilibria.
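The undercutting dynamic described above can be illustrated with a minimal sketch (my own, not from the source): two firms repeatedly play best responses to each other's price, and both prices fall to marginal cost. Prices are in integer cents so each firm can undercut its rival by the smallest possible tick; the cost figure is illustrative.

```python
# Iterated best-response undercutting in a Bertrand duopoly (illustrative).
MC = 1000   # common marginal cost in cents (assumed value)

def best_response(rival_price: int) -> int:
    """Undercut the rival by one tick if profitable, else price at cost."""
    return rival_price - 1 if rival_price > MC + 1 else MC

p_a = p_b = 2000          # both firms start well above cost
while p_a > MC or p_b > MC:
    p_a = best_response(p_b)
    p_b = best_response(p_a)

print(p_a, p_b)           # both end at marginal cost: 1000 1000
```

Note that pricing at marginal cost is a fixed point of `best_response`, which is exactly the zero-profit Nash equilibrium the paradox describes.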
The Bertrand paradox rarely appears in practice because real products are almost always differentiated in some way other than price (brand name, if nothing else); firms have limitations on their capacity to manufacture and distribute, and two firms rarely have identical costs.
Bertrand's result is paradoxical because if the number of firms goes from one to |
https://en.wikipedia.org/wiki/Translation%20lookaside%20buffer | A translation lookaside buffer (TLB) is a memory cache that stores the recent translations of virtual memory to physical memory. It is used to reduce the time taken to access a user memory location. It can be called an address-translation cache. It is a part of the chip's memory-management unit (MMU). A TLB may reside between the CPU and the CPU cache, between CPU cache and the main memory or between the different levels of the multi-level cache. The majority of desktop, laptop, and server processors include one or more TLBs in the memory-management hardware, and it is nearly always present in any processor that utilizes paged or segmented virtual memory.
The TLB is sometimes implemented as content-addressable memory (CAM). The CAM search key is the virtual address, and the search result is a physical address. If the requested address is present in the TLB, the CAM search yields a match quickly and the retrieved physical address can be used to access memory. This is called a TLB hit. If the requested address is not in the TLB, it is a miss, and the translation proceeds by looking up the page table in a process called a page walk. The page walk is time-consuming when compared to the processor speed, as it involves reading the contents of multiple memory locations and using them to compute the physical address. After the physical address is determined by the page walk, the virtual address to physical address mapping is entered into the TLB. The PowerPC 604, for example, has a two-way set-associative TLB for data loads and stores. Some processors have different instruction and data address TLBs.
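The hit/miss logic described above can be sketched in a few lines. This is a toy model (my own, not from the source): a fully associative TLB modeled as an ordered dictionary with FIFO replacement, backed by a page-table dictionary that stands in for the slow page walk; the page-table contents and slot count are arbitrary.

```python
from collections import OrderedDict

PAGE_SIZE = 4096
TLB_SLOTS = 4

page_table = {0: 7, 1: 3, 2: 9, 5: 12}   # virtual page -> physical frame (toy data)
tlb = OrderedDict()
hits = misses = 0

def translate(vaddr: int) -> int:
    """Return the physical address, filling the TLB on a miss."""
    global hits, misses
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                      # TLB hit: fast path
        hits += 1
        frame = tlb[vpn]
    else:                               # TLB miss: walk the page table
        misses += 1
        frame = page_table[vpn]         # real hardware reads memory here
        if len(tlb) >= TLB_SLOTS:
            tlb.popitem(last=False)     # evict the oldest entry (FIFO)
        tlb[vpn] = frame
    return frame * PAGE_SIZE + offset

translate(0x1234)      # miss (virtual page 1 is not yet cached)
translate(0x1238)      # hit  (same page, different offset)
print(hits, misses)    # -> 1 1
```

Real TLBs use hardware CAM lookup and replacement policies such as LRU or random rather than a dictionary, but the hit-path/miss-path split is the same.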
Overview
A TLB has a fixed number of slots containing page-table entries and segment-table entries; page-table entries map virtual addresses to physical addresses and intermediate-table addresses, while segment-table entries map virtual addresses to segment addresses, intermediate-table addresses and page-table addresses. The virtual memory is the memory space as seen fro |
https://en.wikipedia.org/wiki/Von%20Neumann%E2%80%93Bernays%E2%80%93G%C3%B6del%20set%20theory | In the foundations of mathematics, von Neumann–Bernays–Gödel set theory (NBG) is an axiomatic set theory that is a conservative extension of Zermelo–Fraenkel–choice set theory (ZFC). NBG introduces the notion of class, which is a collection of sets defined by a formula whose quantifiers range only over sets. NBG can define classes that are larger than sets, such as the class of all sets and the class of all ordinals. Morse–Kelley set theory (MK) allows classes to be defined by formulas whose quantifiers range over classes. NBG is finitely axiomatizable, while ZFC and MK are not.
A key theorem of NBG is the class existence theorem, which states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. This class is built by mirroring the step-by-step construction of the formula with classes. Since all set-theoretic formulas are constructed from two kinds of atomic formulas (membership and equality) and finitely many logical symbols, only finitely many axioms are needed to build the classes satisfying them. This is why NBG is finitely axiomatizable. Classes are also used for other constructions, for handling the set-theoretic paradoxes, and for stating the axiom of global choice, which is stronger than ZFC's axiom of choice.
John von Neumann introduced classes into set theory in 1925. The primitive notions of his theory were function and argument. Using these notions, he defined class and set. Paul Bernays reformulated von Neumann's theory by taking class and set as primitive notions. Kurt Gödel simplified Bernays' theory for his relative consistency proof of the axiom of choice and the generalized continuum hypothesis.
Classes in set theory
The uses of classes
Classes have several uses in NBG:
They produce a finite axiomatization of set theory.
They are used to state a "very strong form of the axiom of choice"—namely, the axiom of global choice: There exists a global choice function defined |
https://en.wikipedia.org/wiki/Semigroupoid | In mathematics, a semigroupoid (also called semicategory, naked category or precategory) is a partial algebra that satisfies the axioms for a small category, except possibly for the requirement that there be an identity at each object. Semigroupoids generalise semigroups in the same way that small categories generalise monoids and groupoids generalise groups. Semigroupoids have applications in the structural theory of semigroups.
Formally, a semigroupoid consists of:
a set of things called objects.
for every two objects A and B a set Mor(A,B) of things called morphisms from A to B. If f is in Mor(A,B), we write f : A → B.
for every three objects A, B and C a binary operation Mor(A,B) × Mor(B,C) → Mor(A,C) called composition of morphisms. The composition of f : A → B and g : B → C is written as g ∘ f or gf. (Some authors write it as fg.)
such that the following axiom holds:
(associativity) if f : A → B, g : B → C and h : C → D then h ∘ (g ∘ f) = (h ∘ g) ∘ f.
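The definition above can be made concrete with a small sketch (my own, not from the source): a semigroupoid whose objects are finite sets and whose morphisms are functions between them, composed only when the codomain of one matches the domain of the next. Associativity then follows from ordinary function composition.

```python
# Objects: finite sets (illustrative choices).
A, B, C, D = {0, 1}, {0, 1, 2}, {0}, {0, 1}

# A morphism f : X -> Y is stored as (X, Y, dict mapping each x to f(x)).
f = (A, B, {0: 2, 1: 0})
g = (B, C, {0: 0, 1: 0, 2: 0})
h = (C, D, {0: 1})

def compose(g, f):
    """Return g . f, defined only when cod(f) == dom(g)."""
    (fd, fc, fm), (gd, gc, gm) = f, g
    assert fc == gd, "composition undefined: codomain/domain mismatch"
    return (fd, gc, {x: gm[fm[x]] for x in fd})

# The associativity axiom: h . (g . f) == (h . g) . f
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
```

Because no identity morphisms are required, `compose` alone (with its matching condition) is enough to model a semigroupoid; adding an identity at each object would upgrade it to a small category.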
References
Algebraic structures
Category theory |
https://en.wikipedia.org/wiki/Hazard%20symbol | Hazard symbols or warning symbols are recognisable symbols designed to warn about hazardous or dangerous materials, locations, or objects, including electromagnetic fields, electric currents; harsh, toxic or unstable chemicals (acids, poisons, explosives); and radioactivity. The use of hazard symbols is often regulated by law and directed by standards organizations. Hazard symbols may appear with different colors, backgrounds, borders, and supplemental information in order to specify the type of hazard and the level of threat (for example, toxicity classes). Warning symbols are used in many places in lieu of or in addition to written warnings as they are quickly recognized (faster than reading a written warning) and more commonly understood (the same symbol can be recognized as having the same meaning to speakers of different languages).
List of common symbols
Tape with yellow and black diagonal stripes is commonly used as a generic hazard warning. This can be in the form of barricade tape, or as a self-adhesive tape for marking floor areas and the like. In some regions (for instance the UK) yellow tape is buried a certain distance above underground electrical cables to warn future groundworkers of the hazard.
Generic warning symbol
On roadside warning signs, an exclamation mark is often used to draw attention to a generic warning of danger, hazards, and the unexpected. In Europe and elsewhere in the world (except North America and Australia), this type of sign is used if there are no more-specific signs to denote a particular hazard. When used for traffic signs, it is accompanied by a supplementary sign describing the hazard, usually mounted under the exclamation mark.
This symbol has also been more widely adopted for generic use in many other contexts not associated with road traffic. It often appears on hazardous equipment, in instruction manuals to draw attention to a precaution, on tram and train blind spot warning stickers or on natural disaster (earthquake, t |
https://en.wikipedia.org/wiki/Qt%20Extended | Qt Extended (named Qtopia before September 30, 2008) is an application platform for embedded Linux-based mobile computing devices such as personal digital assistants, video projectors and mobile phones. It was initially developed by The Qt Company, at the time known as Qt Software and a subsidiary of Nokia. When the company cancelled the project, the free-software portion was forked by the community and given the name Qt Extended Improved. The QtMoko Debian-based distribution is the natural successor to these projects as continued by the efforts of the Openmoko community.
Features
Qt Extended features:
Windowing system
Synchronization framework
Integrated development environment
Internationalization and localization support
Games and multimedia
Personal information manager applications
Full screen handwriting
Input methods
Personalization options
Productivity applications
Internet applications
Java integration
Wireless support
Qt Extended is dual licensed under the GNU General Public License (GPL) and proprietary licenses.
Devices and deployment
As of 2006, Qtopia was running on several million devices, including 11 mobile phone models and 30 other handheld devices.
Models included the Sharp Corporation Zaurus line of Linux handhelds, the Sony mylo, the Archos Portable Media Assistant (PMA430) (a multimedia device), the GamePark Holdings GP2X, the Greenphone (an open phone initiative), Pocket PC devices, and the FIC Openmoko phones: the Neo 1973 and FreeRunner. An unofficial hack allows its use on the Archos wifi series of portable media players (PMP) 604, 605 and 705, and also on several Motorola phones such as the E2, Z6 and A1200. ZTE's U980 was the last phone to run it.
Software development
Native applications could be developed and compiled using C++. Managed applications could be developed in Java.
Discontinuation
On March 3, 2009, Qt Software announced the discontinuation of Qt Extended as a standalone product, with some features integrated on the Qt Framework.
|
https://en.wikipedia.org/wiki/Surface%20integral | In mathematics, particularly multivariable calculus, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analogue of the line integral. Given a surface, one may integrate a scalar field (that is, a function of position which returns a scalar as a value) over the surface, or a vector field (that is, a function which returns a vector as value). If a region R is not flat, then it is called a surface as shown in the illustration.
Surface integrals have applications in physics, particularly with the theories of classical electromagnetism.
Surface integrals of scalar fields
Assume that f is a scalar, vector, or tensor field defined on a surface S.
To find an explicit formula for the surface integral of f over S, we need to parameterize S by defining a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be \mathbf{r}(s, t), where (s, t) varies in some region T in the (s, t)-plane. Then, the surface integral is given by
\[
\iint_S f \, dS = \iint_T f(\mathbf{r}(s, t)) \left\| \frac{\partial \mathbf{r}}{\partial s} \times \frac{\partial \mathbf{r}}{\partial t} \right\| \, ds \, dt
\]
where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of \mathbf{r}(s, t), and is known as the surface element (which would, for example, yield a smaller value near the poles of a sphere, where the lines of longitude converge more dramatically, and latitudinal coordinates are more compactly spaced). The surface integral can also be expressed in the equivalent form
\[
\iint_S f \, dS = \iint_T f(\mathbf{r}(s, t)) \sqrt{g} \, ds \, dt
\]
where g is the determinant of the first fundamental form of the surface mapping \mathbf{r}(s, t).
For example, if we want to find the surface area of the graph of some scalar function, say z = f(x, y), we have
\[
A = \iint_S dS = \iint_T \left\| \frac{\partial \mathbf{r}}{\partial x} \times \frac{\partial \mathbf{r}}{\partial y} \right\| \, dx \, dy
\]
where \mathbf{r}(x, y) = (x, y, f(x, y)). So that \partial \mathbf{r}/\partial x = (1, 0, f_x) and \partial \mathbf{r}/\partial y = (0, 1, f_y). So,
\[
A = \iint_T \left\| (1, 0, f_x) \times (0, 1, f_y) \right\| \, dx \, dy = \iint_T \left\| (-f_x, -f_y, 1) \right\| \, dx \, dy = \iint_T \sqrt{1 + f_x^2 + f_y^2} \, dx \, dy
\]
which is the standard formula for the area of a surface described this way. One can recognize the vector in the second-last line above as the normal vector to the surface.
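The standard graph-area formula, A equal to the double integral of sqrt(1 + f_x^2 + f_y^2), can be checked numerically. The sketch below (my own, not from the source) uses a midpoint rule on the unit square for the tilted plane z = x, whose exact area is sqrt(2).

```python
import math

def graph_area(fx, fy, n=200):
    """Midpoint-rule approximation of the surface-area integral over [0, 1]^2.

    fx and fy are the partial derivatives of the height function f(x, y).
    """
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            total += math.sqrt(1 + fx(x, y) ** 2 + fy(x, y) ** 2) * h * h
    return total

# For f(x, y) = x: fx = 1, fy = 0, so the integrand is the constant sqrt(2).
area = graph_area(lambda x, y: 1.0, lambda x, y: 0.0)
print(area)                     # ~1.41421..., i.e. sqrt(2)
```

For a constant integrand the midpoint rule is exact up to floating-point rounding; for curved graphs the approximation improves as n grows.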
Because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space.
This can be seen |
https://en.wikipedia.org/wiki/Seto%20Kaiba | Seto Kaiba is a fictional character in the manga Yu-Gi-Oh! by Kazuki Takahashi. As the majority shareholder and CEO of his own multi-national gaming company, Kaiba Corporation, Kaiba is reputed to be Japan's greatest gamer and aims to become the world's greatest player of the American card game, Duel Monsters (Magic & Wizards in the Japanese manga). In all mediums, his arch-rival is the protagonist of the series, Yugi Mutou, who is also a superb game player. He is the modern day reincarnation of one of the Pharaoh's Six High Priests, "Priest Seto", who appears in the manga's final arc. Kaiba has also appeared in related anime works and feature films.
Seto Kaiba originates from one of the stories Takahashi heard from a friend involving a selfish card collector. Like the card collector, Kaiba is obsessed with gaming, but Takahashi also gave Kaiba a calmer demeanor when developing his relationship with his rival. He was first voiced by Hikaru Midorikawa in Japanese with Kenjirō Tsuda replacing him in the sequel Duel Monsters. Eric Stuart voiced him in all of his English appearances.
Critical reception to Kaiba has been mixed for being compared to simplistic anime rivals based on his multiple attempts to defeat Yugi and become the superior Duel Monsters player. While his development in the film Dark Side of Dimensions was praised for being the major focus in the narrative, critics still felt Kaiba was obsessed with Duel Monsters to a ridiculous extent based on his continued focus on his original goal.
Creation and development
Seto Kaiba originates from Kazuki Takahashi's stories he was told by a friend. According to the story, there was a real life person who played trading cards but was unwilling to play with him because he was not an expert. Displeased with hearing about this person, Takahashi decided to use this cardgame collector as a manga character, resulting in Kaiba's creation.
In the making of the series, Takahashi wanted to create an appealing creature for his Duel |
https://en.wikipedia.org/wiki/Steve%20Furber | Stephen Byram Furber (born 21 March 1953) is a British computer scientist, mathematician and hardware engineer, currently the ICL Professor of Computer Engineering in the Department of Computer Science at the University of Manchester, UK. After completing his education at the University of Cambridge (BA, MMath, PhD), he spent the 1980s at Acorn Computers, where he was a principal designer of the BBC Micro and the ARM 32-bit RISC microprocessor. Over 100 billion copies of the ARM processor have been manufactured, powering much of the world's mobile computing and embedded systems.
In 1990, he moved to Manchester to lead research into asynchronous systems, low-power electronics and neural engineering, where the Spiking Neural Network Architecture (SpiNNaker) project is delivering a computer incorporating a million ARM processors optimised for computational neuroscience.
Education
Furber was educated at Manchester Grammar School and represented the UK in the International Mathematical Olympiad in Hungary in 1970, winning a bronze medal. He went on to study the Mathematical Tripos as an undergraduate student of St John's College, Cambridge, receiving Bachelor of Arts (BA) and Master of Mathematics (MMath, Part III of the Mathematical Tripos) degrees. In 1978, he was appointed a Rolls-Royce research fellow in aerodynamics at Emmanuel College, Cambridge and was awarded a PhD in 1980 for research on the fluid dynamics of the Weis-Fogh principle supervised by John Ffowcs Williams. During his PhD in the late 1970s, Furber worked on a voluntary basis for Hermann Hauser and Chris Curry within the fledgling Acorn Computers (originally the Cambridge Processor Unit), on a number of projects; notably a microprocessor based fruit machine controller, and the Proton - the initial prototype version of what was to become the BBC Micro, in support of Acorn's tender for the BBC Computer Literacy Project.
Career and research
In 1981, following the completion of his PhD and the award |
https://en.wikipedia.org/wiki/Audio%20control%20surface | In the domain of digital audio, a control surface is a human interface device (HID) which allows the user to control a digital audio workstation or other digital audio application. Generally, a control surface will contain one or more controls that can be assigned to parameters in the software, allowing tactile control of the software. As digital audio software is complex and can play any number of functions in the audio chain, control surfaces can be used to control many aspects of music production, including virtual instruments, samplers, signal processors, mixers, DJ software, and music sequencers.
Since control surfaces are designed to perform different functions, they vary widely in size, shape and number and type of controls. A basic control surface for mixing resembles a traditional analogue mixing console, featuring faders, knobs (rotary encoders), and buttons that can be assigned to parameters in the software. Other control surfaces are designed to give a musician control over the sequencer while recording, and thus provide transport controls (remote control of record, playback and song position). Control surfaces are often incorporated into MIDI controllers to give the musician more control over an instrument. Control surfaces with motorized faders can read and write mix automation.
The control surface connects to the host computer via many different interfaces. MIDI was the first major interface created for this purpose, although many devices now use USB, FireWire, or Ethernet.
Examples
Smart AV Tango - A hybrid controller with a 22" touch screen, compatible with major DAWs for Mac & PC.
M-Audio ProjectMix - Control surface that can control many different applications.
Mackie Control - Serves a similar purpose as the ProjectMix.
External links
An introduction to control surfaces
Audio electronics |
https://en.wikipedia.org/wiki/Sirolimus | Sirolimus, also known as rapamycin and sold under the brand name Rapamune among others, is a macrolide compound that is used to coat coronary stents, prevent organ transplant rejection, treat a rare lung disease called lymphangioleiomyomatosis, and treat perivascular epithelioid cell tumor (PEComa). It has immunosuppressant functions in humans and is especially useful in preventing the rejection of kidney transplants. It is a mechanistic target of rapamycin kinase (mTOR) inhibitor that inhibits activation of T cells and B cells by reducing their sensitivity to interleukin-2 (IL-2).
It is produced by the bacterium Streptomyces hygroscopicus and was isolated for the first time in 1972, from samples of Streptomyces hygroscopicus found on Easter Island. The compound was originally named rapamycin after the native name of the island, Rapa Nui. Sirolimus was initially developed as an antifungal agent. However, this use was abandoned when it was discovered to have potent immunosuppressive and antiproliferative properties due to its ability to inhibit mTOR. It was approved by the U.S. Food and Drug Administration (FDA) in September 1999. Hyftor was approved for treatment of facial angiofibroma in the European Union in May 2023.
Medical uses
Sirolimus is indicated for the prevention of organ transplant rejection and for the treatment of lymphangioleiomyomatosis (LAM).
Sirolimus (Fyarro), as protein-bound particles, is indicated for the treatment of adults with locally advanced unresectable or metastatic malignant perivascular epithelioid cell tumor (PEComa).
In the EU, sirolimus, as Rapamune, is indicated for the prophylaxis of organ rejection in adults at low to moderate immunological risk receiving a renal transplant and, as Hyftor, is indicated for the treatment of facial angiofibroma associated with tuberous sclerosis complex.
Prevention of transplant rejection
The chief advantage sirolimus has over calcineurin inhibitors is its low toxicity toward kidneys. Transpl |
https://en.wikipedia.org/wiki/TOPS-10 | TOPS-10 System (Timesharing / Total Operating System-10) is a discontinued operating system from Digital Equipment Corporation (DEC) for the PDP-10 (or DECsystem-10) mainframe computer family. Launched in 1967, TOPS-10 evolved from the earlier "Monitor" software for the PDP-6 and PDP-10 computers; this was renamed to TOPS-10 in 1970.
Overview
TOPS-10 supported shared memory and allowed the development of one of the first true multiplayer computer games. The game, called DECWAR, was a text-oriented Star Trek-type game. Users at terminals typed in commands and fought each other in real time. TOPS-10 was also the home of the original Multi User Dungeon, MUD, the forerunner to today's MMORPGs.
Another groundbreaking application was called FORUM. This application was perhaps the first so-called CB Simulator that allowed users to converse with one another in what is now known as a chat room. This application showed the potential of multi-user communication and led to the development of CompuServe's chat application.
TOPS-10 had a very robust application programming interface (API) that used a mechanism called a UUO or Unimplemented User Operation. UUOs implemented operating system calls in a way that made them look like machine instructions. The Monitor Call API was very much ahead of its time, like most of the operating system, and made system programming on DECsystem-10s simple and powerful.
The TOPS-10 scheduler supported prioritized run queues, appending a process to a queue according to its priority. The system also provided user file and device independence.
Commands
The following commands are supported by TOPS-10.
ASSIGN
ATTACH
BACKSPACE
BACKUP
CCONTINUE
COMPILE
CONTINUE
COPY
CORE
CPUNCH
CREATE
CREDIR
CREF
CSTART
D(eposit)
DAYTIME
DCORE
DDT
DEASSIGN
DEBUG
DELETE
DETACH
DIRECTORY
DISABLE
DISMOUNT
DSK
DUMP
E(xamine)
EDIT
ENABLE
EOF
EXECUTE
FILCOM
FILE
FINISH
FUDGE
GET
GLOB
HALT
HELP
INITIA
JCONTINUE
KJ |
https://en.wikipedia.org/wiki/Binding%20site | In biochemistry and molecular biology, a binding site is a region on a macromolecule such as a protein that binds to another molecule with specificity. The binding partner of the macromolecule is often referred to as a ligand. Ligands may include other proteins (resulting in a protein–protein interaction), enzyme substrates, second messengers, hormones, or allosteric modulators. The binding event is often, but not always, accompanied by a conformational change that alters the protein's function. Binding to protein binding sites is most often reversible (transient and non-covalent), but can also be covalent reversible or irreversible.
Function
Binding of a ligand to a binding site on a protein often triggers a conformational change in the protein and results in altered cellular function. Hence, binding sites on proteins are critical parts of signal transduction pathways. Types of ligands include neurotransmitters, toxins, neuropeptides, and steroid hormones. Binding sites incur functional changes in a number of contexts, including enzyme catalysis, molecular pathway signaling, homeostatic regulation, and physiological function. The electric charge, steric shape and geometry of the site selectively allow highly specific ligands to bind, activating the particular cascade of cellular interactions the protein is responsible for.
Catalysis
Enzymes achieve catalysis by binding more strongly to transition states than to substrates and products. At the catalytic binding site, several different interactions may act upon the substrate, including electrostatic catalysis, acid–base catalysis, covalent catalysis, and metal ion catalysis. These interactions decrease the activation energy of a chemical reaction by providing favorable interactions that stabilize the high-energy species. Enzyme binding also allows for closer proximity of reactants and exclusion of substances irrelevant to the reaction. Side reactions are likewise discouraged by this specific binding.
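The effect of lowering the activation energy can be quantified with the Arrhenius equation, k = A exp(-Ea / (R T)). The numbers below are illustrative (my own, not from the source); the pre-exponential factor A cancels when comparing the catalysed and uncatalysed rate constants.

```python
import math

R = 8.314          # gas constant, J/(mol K)
T = 298.0          # temperature, K
Ea_uncat = 75_000  # uncatalysed barrier, J/mol (assumed value)
Ea_cat = 50_000    # enzyme-lowered barrier, J/mol (assumed value)

# Ratio of rate constants k_cat / k_uncat; the prefactor A cancels.
speedup = math.exp((Ea_uncat - Ea_cat) / (R * T))
print(f"rate enhancement: {speedup:.2e}")   # roughly 2.4e4-fold faster
```

Even a modest 25 kJ/mol reduction in the barrier yields a rate enhancement of four orders of magnitude at room temperature, which is why transition-state stabilisation is so effective.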
Types of enzymes that can perform t |
https://en.wikipedia.org/wiki/Punnett%20square | The Punnett square is a square diagram that is used to predict the genotypes of a particular cross or breeding experiment. It is named after Reginald C. Punnett, who devised the approach in 1905. The diagram is used by biologists to determine the probability of an offspring having a particular genotype. The Punnett square is a tabular summary of possible combinations of maternal alleles with paternal alleles. These tables can be used to examine the genotypical outcome probabilities of the offspring of a single trait (allele), or when crossing multiple traits from the parents. The Punnett square is a visual representation of Mendelian inheritance. For multiple traits, using the "forked-line method" is typically much easier than the Punnett square. Phenotypes may be predicted with at least better-than-chance accuracy using a Punnett square, but the phenotype that may appear in the presence of a given genotype can in some instances be influenced by many other factors, as when polygenic inheritance and/or epigenetics are at work.
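The combinatorial idea behind the Punnett square, pairing every maternal allele with every paternal allele, is easy to sketch in code. The following is a minimal illustration (my own, not from the source) for a monohybrid Aa x Aa cross.

```python
from collections import Counter
from fractions import Fraction

def punnett(parent1: str, parent2: str) -> Counter:
    """Combine each allele of one parent with each allele of the other."""
    n = len(parent1) * len(parent2)
    combos = Counter()
    for a in parent1:
        for b in parent2:
            # Sort so the dominant (uppercase) allele is written first.
            combos["".join(sorted(a + b))] += Fraction(1, n)
    return combos

probs = punnett("Aa", "Aa")
for genotype, p in sorted(probs.items()):
    print(genotype, p)        # AA 1/4, Aa 1/2, aa 1/4 -- the classic 1:2:1 ratio
```

The same function handles any monohybrid cross (for example `punnett("Aa", "aa")` gives a 1:1 ratio); dihybrid crosses would require pairing gametes rather than single alleles.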
Zygosity
Zygosity refers to the grade of similarity between the alleles that determine one specific trait in an organism. In its simplest form, a pair of alleles can be either homozygous or heterozygous. Homozygosity, with homo relating to same while zygous pertains to a zygote, is seen when a combination of either two dominant or two recessive alleles code for the same trait. Recessive alleles are always written as lowercase letters. For example, using 'A' as the representative character for each allele, a homozygous dominant pair's genotype would be depicted as 'AA', while homozygous recessive is shown as 'aa'. Heterozygosity, with hetero associated with different, can only be 'Aa' (the capital letter is always presented first by convention). The phenotype of a homozygous dominant pair is 'A', or dominant, while the opposite is true for homozygous recessive. Heterozygous pairs always have a dominant phenotype. To a lesser degree, hemizygosity and nullizygosit
https://en.wikipedia.org/wiki/Coronary%20arteries | The coronary arteries are the arterial blood vessels of coronary circulation, which transport oxygenated blood to the heart muscle. The heart requires a continuous supply of oxygen to function and survive, much like any other tissue or organ of the body.
The coronary arteries wrap around the entire heart. The two main branches are the left coronary artery and right coronary artery. The arteries can additionally be categorized based on the area of the heart for which they provide circulation. These categories are called epicardial (above the epicardium, or the outermost tissue of the heart) and microvascular (close to the endocardium, or the innermost tissue of the heart).
Reduced function of the coronary arteries can lead to decreased flow of oxygen and nutrients to the heart. Not only does this affect supply to the heart muscle itself, but it also can affect the ability of the heart to pump blood throughout the body. Therefore, any disorder or disease of the coronary arteries can have a serious impact on health, possibly leading to angina, a heart attack, and even death.
Structure
The coronary arteries are mainly composed of the left and right coronary arteries, both of which give off several branches, as shown in the 'coronary artery flow' figure.
Aorta
Left coronary artery
Left anterior descending artery
Left circumflex artery
Posterior descending artery
Ramus or intermediate artery
Right coronary artery
Right marginal artery
Posterior descending artery
The left coronary artery arises from the aorta within the left cusp of the aortic valve and feeds blood to the left side of the heart. It branches into two arteries, the left anterior descending and the left circumflex. The left anterior descending artery perfuses the interventricular septum and anterior wall of the left ventricle. The left circumflex artery perfuses the left ventricular free wall. In approximately 33% of individuals, the left coronary artery gives rise to the posterior descending artery wh |